#FactCheck - Viral Video Falsely Attributes Statements on ‘Operation Sindoor’ to Army Chief
Executive Summary
A video widely circulated on social media claims to show a confrontation between a Zee News journalist and Chief of Army Staff (COAS) General Upendra Dwivedi during the Indian Army’s Annual Press Briefing 2026. The video alleges that General Dwivedi made sensitive remarks regarding ‘Operation Sindoor’, including claims that the operation was still ongoing and that diplomatic intervention by US President Donald Trump had restricted India’s military response. Several social media users shared the clip while questioning the Indian Army’s operational decisions and demanding accountability over the alleged remarks. CyberPeace’s investigation found the viral video to be misleading and digitally manipulated: although the visuals were sourced from the Indian Army’s Annual Press Briefing 2026, the audio was artificially created and overlaid later to misinform viewers. The Army Chief did not make any remarks regarding diplomatic interference or limitations on military action during the briefing.
Claim:
An X (formerly Twitter) user, Abbas Chandio (@AbbasChandio__), shared the video on January 14, asserting that it showed a Zee News journalist questioning the Army Chief about the status and outcomes of ‘Operation Sindoor’ during a recent press conference. In the clip, the journalist is purportedly heard challenging the Army Chief over his earlier statement that the operation was “still ongoing,” while the COAS is allegedly heard responding that diplomatic intervention during the conflict limited the Army’s ability to pursue further military action. Here is the link and archive link to the post, along with a screenshot.
Fact Check:
A reverse image search of keyframes from the viral clip led to an extended version of the footage uploaded on the official YouTube channel of India Today. The original video was identified as coverage from the Indian Army’s Annual Press Conference 2026, held on January 13 in New Delhi and addressed by COAS General Upendra Dwivedi. Upon reviewing the original press briefing footage, CyberPeace found no instance where a Zee News journalist questioned the Army Chief about ‘Operation Sindoor’, and no mention of the statements attributed to General Dwivedi in the viral clip.
In the authentic footage, journalist Anuvesh Rath was seen raising questions related to defence procurement and modernization, not military operations or diplomatic interventions. Here is the link to the original video, along with a screenshot.

To further verify the claim, CyberPeace extracted the audio track from the viral video and analysed it using the AI-based voice detection tool Aurigin. The analysis revealed that the voice heard in the clip was artificially generated, indicating the use of synthetic or manipulated audio. This confirmed that while genuine visuals from the Army’s official press briefing were used, a fabricated audio track had been overlaid to falsely attribute controversial statements to the Army Chief and a Zee News journalist.
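As an illustration of that first step, the sketch below shows one common way to separate an audio track from a downloaded clip using the open-source ffmpeg tool, driven from Python. The filenames are placeholders, and this reflects a generic verification workflow rather than CyberPeace's exact tooling; the resulting WAV file would then be submitted to a voice-detection service such as the one described above.

```python
import subprocess
from pathlib import Path

def extract_audio(video_path: str, out_wav: str = "extracted_audio.wav") -> Path:
    """Strip the audio track from a video into a 16 kHz mono WAV,
    a format most voice-analysis tools accept."""
    cmd = [
        "ffmpeg",
        "-y",                    # overwrite the output file if it exists
        "-i", video_path,        # input video (e.g., the downloaded viral clip)
        "-vn",                   # drop the video stream entirely
        "-acodec", "pcm_s16le",  # uncompressed 16-bit PCM audio
        "-ar", "16000",          # 16 kHz sample rate
        "-ac", "1",              # mono channel
        out_wav,
    ]
    subprocess.run(cmd, check=True)
    return Path(out_wav)

if __name__ == "__main__":
    # "viral_clip.mp4" is a placeholder filename used only for illustration.
    wav = extract_audio("viral_clip.mp4")
    print(f"Audio written to {wav}; submit this file to a voice-detection tool.")
```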

Conclusion
CyberPeace concludes that the viral video claiming to show a discussion between a Zee News journalist and Chief of Army Staff General Upendra Dwivedi on ‘Operation Sindoor’ is misleading and digitally manipulated. Although the visuals were sourced from the Indian Army’s Annual Press Briefing 2026, the audio was artificially created and added later to misinform viewers. The Army Chief did not make any remarks regarding diplomatic interference or limitations on military action during the briefing. The video is a clear case of digital manipulation and misinformation, aimed at creating confusion and casting doubt over the Indian Army’s official position.

A Foray into the Digital Labyrinth
In our digital age, the silhouette of truth is often obfuscated by a fog of technological prowess and cunning deception. With each passing moment, the digital expanse sprawls wider, and within it, synthetic media, known most infamously as 'deepfakes', emerge like phantoms from the machine. These adept forgeries, melding authenticity with fabrication, represent a new frontier in the malleable narrative of understood reality. Grappling with the specter of such virtual deceit, social media behemoths Facebook and YouTube have embarked on a prodigious quest. Their mission? To formulate robust bulwarks guarding the boundary between fact and fiction, all the while fostering seamless communication across channels that billions consider an inextricable part of their daily lives.
In an exploration of this digital fortress besieged by illusion, we unpeel the layers of strategy that Facebook and YouTube have unfurled in their bid to stymie the proliferation of these insidious technical marvels. Though each platform approaches the issue through markedly different prisms, a shared undercurrent of necessity and urgency harmonizes their efforts.
The Detailing of Facebook's Strategy
Facebook's encampment against these modern-day chimaeras teems with algorithmic sentinels and human overseers alike—a union of steel and soul. The company’s layer upon layer of sophisticated artificial intelligence is designed to scrupulously survey, identify, and flag potential deepfake content with a precision that borders on the prophetic. Employing advanced AI systems, Facebook endeavours to preempt the chaos sown by manipulated media by detecting even the slightest signs of digital tampering.
However, in an expression of profound acumen, Facebook also serves as a reminder of AI's fallibility by entwining human discernment into its fabric. Each flagged video wages its battle for existence within the realm of these custodians of reality—individuals entrusted with the hefty responsibility of parsing truth from technologically enabled fiction.
Facebook does not rest on the laurels of established defence mechanisms. The platform is in a perpetual state of flux, with policies and AI models adapting to the serpentine nature of the digital threat landscape. Through this cyclical metamorphosis, Facebook not only sharpens its detection tools but also weaves a more resilient protective web, one capable of absorbing the shockwaves of an evolving battlefield.
YouTube’s Overture of Transparency and the Exposition of AI
Turning to the amphitheatre of YouTube, the stage is set for an overt commitment to candour. Against the stark backdrop of deepfake dilemmas, YouTube demands the unveiling of the strings that guide the puppets, insisting on full disclosure whenever AI's invisible hands sculpt the content that engages its diverse viewership.
YouTube's doctrine is straightforward: creators must lift the curtains and reveal any artificial manipulation's role behind the scenes. With clarity as its vanguard, this requirement is not just procedural but an ethical invocation to showcase veracity—a beacon to guide viewers through the murky waters of potential deceit.
The iron fist within the velvet glove of YouTube's policy manifests through a graded punitive protocol. Should a creator falter in disclosing the machine's influence, repercussions follow, ensuring that the ecosystem remains vigilant against hidden manipulation.
But YouTube's policy is one that distinguishes between malevolence and benign use. Artistic endeavours, satirical commentary, and other legitimate expositions are spared the policy's wrath, provided they adhere to the overarching principle of transparency.
The Symbiosis of Technology and Policy in a Morphing Domain
YouTube's commitment to refining its coordination between human insight and computerized examination is unwavering. As AI's role in both the generation and moderation of content deepens, YouTube—which, like a skilled cartographer, must redraw its policies increasingly—traverses this ever-mutating landscape with a proactive stance.
In a Comparative Light: Tracing the Convergence of Giants
Although Facebook and YouTube choreograph their steps to different rhythms, together they compose an intricate dance aimed at nurturing trust and authenticity. Facebook leans into the proactive might of its AI algorithms, reinforced by continual updates and human interjection, while YouTube wields the virtue of transparency as its sword, cutting through masquerades and empowering its users to partake in storylines that are continually rewritten.
Together on the Stage of Our Digital Epoch
The sum of Facebook and YouTube's policies is integral to the pastiche of our digital experience, a multifarious quilt shielding the sanctum of factuality from the interloping specters of deception. As humanity treads the line between the veracious and the fantastic, these platforms stand as vigilant sentinels, guiding us in our pursuit of an age-old treasure within our novel digital bazaar—the treasure of truth. In this labyrinthine quest, it is not merely about unmasking deceivers but nurturing a wisdom that appreciates the shimmering possibilities—and inherent risks—of our evolving connection with the machine.
Conclusion
The struggle against deepfakes is a complex, many-headed challenge that will necessitate a united front spanning technologists, lawmakers, and the public. In this digital epoch, where the veneer of authenticity is perilously thin, the valiant endeavours of these tech goliaths serve as a lighthouse in a storm-tossed sea. These efforts echo the importance of evergreen vigilance in discerning truth from artfully crafted deception.
References
- https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/
- https://indianexpress.com/article/technology/artificial-intelligence/google-sheds-light-on-how-its-fighting-deep-fakes-and-ai-generated-misinformation-in-india-9047211/
- https://techcrunch.com/2023/11/14/youtube-adapts-its-policies-for-the-coming-surge-of-ai-videos/
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/youtube-twitter-hunt-down-deepfakes

The recent Promotion and Regulation of Online Gaming Act, 2025, which came into force in August, has been one of the most widely anticipated regulations in the digital entertainment industry. Among provisions such as the promotion of esports and the licensing of online gaming, the legislation notably introduces a blanket ban on real-money gaming (RMG). The stated rationale was to curb addiction, protect minors, and limit the circulation of black money. In practice, however, the Act has spawned apprehension about the legislative process, regulatory redundancy, and unintended consequences that could shift users and revenue to offshore operators.
From Debate to Prohibition: How the Act was Passed
The Promotion and Regulation of Online Gaming Act was passed as a central law, providing the earlier fragmented state laws on online betting and gambling with an overarching framework. Proponents argue that a unified national framework was needed to deal with the scale of online betting, given its detrimental impact on young users. The Act marks an abrupt transition to criminalisation, departing from the self-regulation and partial restrictions that characterised the previous decade of incremental regulatory experiments. Industry stakeholders believe that this kind of sudden, blanket action creates uncertainty and erodes long-term confidence in the system. Critics have further pointed out that the Bill was passed without adequate Parliamentary deliberation, raising questions about whether procedural safeguards were upheld.
Prohibition of Online RMG
Within the Indian context, a distinction has long been drawn between games of skill and games of chance: the latter, like lotteries or casinos, are strictly prohibited under state laws, whereas the former, like rummy or fantasy sports, have generally been allowed after the courts recognised them as skill-based. The Online Gaming Act of 2025 abolishes this distinction on the internet, banning all RMG activities involving cash transactions, regardless of skill or chance. The Act also criminalises the advertising, facilitation, and hosting of such services, thereby penalising offshore operators targeting Indian customers and subjecting their payment gateways, app stores, and advertisers to penalties under its jurisdiction.
The Problem of Overlap
One potential issue the Act presents is its overlap with existing laws. The IT Rules 2023 already require gaming intermediaries to appoint compliance officers, submit monthly reports, and carry out due diligence. The new Act introduces a three-tier classification of games, while advisories of the Central Consumer Protection Authority (CCPA) under the Consumer Protection Act treat online betting as an unfair trade practice.
This multiplicity of regulations creates a maze in which different Ministries and state governments hold overlapping jurisdiction. Policy experts caution that such overlap can create enforcement challenges, penalise market players who act within the law, and leave offshore malefactors undetected.
Unintended Consequences: Driving Users Offshore
Outright prohibition rarely removes demand; it merely displaces it. Offshore sites have taken advantage of the situation as Indian operators like Dream11 shut down their money games after the ban. Aggressive advertising by foreign betting companies not registered in India has already been reported, many of which run backend infrastructure beyond the Act's regulatory reach (Storyboard18).
This diversion of users to unregulated markets carries two main risks. First, Indian players lose the consumer protection that local regulation would offer, and their data can end up with dubious foreign entities. Second, the government loses visibility over money flows, which can move through informal channels, cryptocurrencies, or other opaque systems. Industry analysts warn that such developments may worsen the black-money problem instead of solving it (IGamingBusiness).
Advertising, Age Gating, and Digital Rights
The Act also strengthens advertising regulations, aligning with advisories issued by the Advertising Standards Council of India that prohibit the targeting of minors. Critics note, however, that enforcement remains inadequate and that children can access unregulated overseas applications with comparative ease. In the absence of complementary digital literacy programmes and strong parental controls, these restrictions risk being superficial rather than effective.
Privacy advocates also warn that frequent prompts, vague messages, or invasive surveillance can weaken users' digital rights instead of strengthening them. In global contexts, overregulation has been found to create "banner blindness", where users dismiss warnings without ever reading them.
Enforcement Challenges
The Act places substantial responsibilities on many stakeholders, including the Ministry of Information and Broadcasting (MIB) and the Reserve Bank of India (RBI). Platforms like Google Play and the Apple App Store are expected to verify government-approved lists of compliant gaming apps and remove non-compliant or banned ones, as directed by the MIB and the RBI. While this pressure may push intermediaries to cooperate, it also carries a risk of overreach if applied unevenly or politically.
According to experts, the solution should be underpinned by technology itself. Artificial intelligence can be used to identify illegal advertisements, detect underage gaming, and trace payment streams, as sketched below. At the same time, regulators should publish authoritative lists of compliant and non-compliant applications to guide consumers and intermediaries alike. Without such practical provisions, enforcement risks remaining patchy.
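As a rough illustration of the first of those ideas, the sketch below shows how a simple text classifier might flag advertisement copy that resembles real-money-gaming promotions. The tiny labelled corpus is invented purely for this example; a real screening system would need a large curated dataset, multilingual coverage, and human review of every flag.

```python
# A toy sketch of ML-assisted ad screening: flag copy that resembles
# real-money-gaming promotions. The labelled examples below are
# invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

ads = [
    "Win real cash daily! Deposit now and get a 200% bonus",      # RMG-style
    "Bet on live cricket matches and withdraw instant winnings",  # RMG-style
    "Download our free puzzle game, no purchases required",       # benign
    "Esports tournament this weekend, registration is free",      # benign
]
labels = [1, 1, 0, 0]  # 1 = likely RMG promotion, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(ads, labels)

new_ad = "Deposit 100 rupees, win 10000 in cash prizes tonight"
prob = model.predict_proba([new_ad])[0][1]
print(f"Probability this is an RMG promotion: {prob:.2f}")
# High-probability ads would be queued for human regulatory review,
# not blocked automatically.
```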
Online Gaming Rules
On 1 October 2025, the government issued draft Online Gaming Rules under the Promotion and Regulation of Online Gaming Act. The Rules focus on creating compliance frameworks, define the classification of permitted gaming activities, and prescribe grievance-redressal mechanisms aimed at player protection and procedural transparency. However, the draft does not revisit or soften the blanket prohibition on real-money gaming (RMG), so questions about enforcement effectiveness and regulatory clarity remain open (Times of India, 2025).
Protecting Consumers Without Stifling Innovation
The ban highlights a larger conflict: protecting vulnerable users without stifling an industry that has contributed innovation, jobs, and tax revenue. Online gaming has added significantly to GST collections, and the sudden shake-up raises fiscal concerns (Reuters).
Several legal challenges to the Act have already been filed, questioning its constitutionality, particularly whether its restrictions are proportionate to the right to trade. The outcome of these cases will shape the future trajectory of India’s digital economy (Reuters).
Way Forward
Instead of outright prohibition, experts suggest a more balanced approach that combines regulation with consumer protection. Key measures could include:
- A clear distinction between games of skill and games of chance, with proportionate regulation of each.
- Age verification and digital literacy campaigns to protect minors.
- Stronger advertising and payments compliance requirements, with enforceable penalties for non-compliance.
- Coordinated oversight among ministries to prevent duplication and regulatory conflict.
- Leveraging AI and fintech to track illicit financial flows (black money) while fostering innovation.
Conclusion
The Online Gaming Act 2025 addresses social issues, such as addiction, monetary risk, and child safety, that genuinely require governance interventions. However, the path it takes, total prohibition, is more likely to spawn a new set of problems than to solve the existing ones: it will push consumers to offshore sites, undermine consumer rights, and slow innovation.
For India, the real challenge is not whether to prohibit online money gaming but how to create a balanced, transparent, and enforceable framework that protects users while fostering a responsible gaming ecosystem. With better coordination, judicious use of technology, and balanced protections, India can reduce the adverse consequences of online betting without pushing the industry into the shadows.
References:
- India's Dream11, top gaming apps halt money-based games after ban
- India online gambling ban could drive punters to black market
- Offshore betting firms with backend ops in India not covered by online gaming law
- The Great Gamble: India’s Online Gaming Ban, The GST Battle, And What Lies Ahead.
- Game Over for Online Money Games? An Analysis of the Online Gaming Act 2025
- Government gambles heavily on prohibiting online money gaming
- Online gaming regulation: New rules to take effect from October 1; government stresses consultative approach with industry

Introduction
In a hyperconnected world, cyber incidents can no longer be treated as sporadic disruptions; they have become an everyday occurrence. Today's attack landscape has multiplied in both frequency and consequence, with ransomware incapacitating health systems, phishing campaigns hitting financial institutions, and state-sponsored attacks targeting critical infrastructure. Traditional approaches alone cannot counter such threats, because they rely heavily on manual research and human effort. Attackers operate with speed, scale, and stealth, leaving defenders perpetually several steps behind. With this widening gap, incident response and crisis management must be augmented with automation and artificial intelligence (AI) to enable faster detection, context-driven decision-making, and a collaborative response beyond unaided human capability.
Incident Response and Crisis Management
Incident response is the structured process by which organisations detect, contain, and recover from security incidents. Crisis management goes further, dealing not only with the technical fallout of a breach but also with its business, reputational, and regulatory implications. Organisations once depended on teams of analysts manually sorting through logs, cross-correlating alarms, and crafting responses, a paradigm that worked at small scale but is quickly overwhelmed in today's threat climate. Modern adversaries attack at machine speed, employing automation to launch their campaigns. Under such circumstances, slow, manual responses mean delays and severe consequences. The introduction of AI and automation is a paradigm shift that allows organisations to respond to incidents with the same pace and precision with which attackers strike.
How Automation Reinvents Response
Security automation liberates analysts from the repetitive, time-consuming tasks that dominate their day. Where an analyst might manually triage hundreds of potential threats daily, automated systems sift through the noise and surface only the genuine ones. Infected computers can be disconnected from the network automatically to stop malware from spreading, and suspicious account permissions can be revoked without human intervention. Security orchestration systems go further by introducing playbooks: predefined steps describing how incidents of a certain type (e.g., phishing attempts or malware infections) should be handled. This ensures fast containment while maintaining consistency and minimising human error amid the urgency of dealing with thousands of alerts.
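To make the playbook idea concrete, here is a minimal sketch of what a phishing-response playbook might look like in code. The step functions are stubs that only print their actions; in a real SOAR deployment each would call the actual API of the mail gateway, firewall, or identity system.

```python
# A minimal sketch of a SOAR-style playbook for a phishing alert.
# Each step function is a stub standing in for a real security-tool API call.
from dataclasses import dataclass, field

@dataclass
class Alert:
    alert_type: str
    user: str
    sender_domain: str
    indicators: list = field(default_factory=list)

def quarantine_message(alert: Alert) -> None:
    print(f"[contain] Quarantining mail from {alert.sender_domain} for {alert.user}")

def block_sender(alert: Alert) -> None:
    print(f"[contain] Blocking sender domain {alert.sender_domain} at the mail gateway")

def reset_credentials(alert: Alert) -> None:
    print(f"[recover] Forcing a credential reset for {alert.user}")

def notify_analyst(alert: Alert) -> None:
    print(f"[escalate] Opening a ticket for human review: {alert.indicators}")

# Playbooks map an incident type to an ordered list of response steps.
PLAYBOOKS = {
    "phishing": [quarantine_message, block_sender, reset_credentials, notify_analyst],
}

def run_playbook(alert: Alert) -> None:
    # Unknown incident types fall back to human escalation.
    for step in PLAYBOOKS.get(alert.alert_type, [notify_analyst]):
        step(alert)

run_playbook(Alert("phishing", "j.doe", "login-verify.example",
                   ["credential-harvest URL"]))
```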
Automation takes care of threat detection, prioritisation, and containment, allowing human analysts to refocus on more complex decision-making. Instead of drowning in a sea of trivial alerts, security teams can devote their efforts to more strategic areas: threat hunting and longer-term resilience. Automation is a strong defensive tool, cutting response times from hours to minutes.
The Intelligence Layer: AI in Action
If automation provides the speed, AI provides the brain: intelligence and adaptability. Unlike older, fixed-rule systems, AI-enabled solutions learn from experience, adapt to changing threats, and discover hidden patterns that human analysts would miss. For instance, machine learning algorithms learn what normal behaviour looks like on a corporate network and raise alerts on anomalies that could indicate an insider attack or an advanced persistent threat. Similarly, AI systems sift through global threat intelligence to predict likely attack vectors, so organisations can fix their vulnerabilities before they are exploited.
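The sketch below illustrates the anomaly-detection idea with scikit-learn's Isolation Forest. The "network flow" features (bytes sent, connection count, off-hours logins) are synthetic numbers generated only for this example; a production system would engineer its features from real flow and authentication logs.

```python
# A hedged sketch of behavioural anomaly detection with an Isolation Forest.
# The feature values are synthetic, generated purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline: typical workstation behaviour (bytes sent, connections, off-hours logins).
normal = rng.normal(loc=[50_000, 20, 0], scale=[10_000, 5, 0.5], size=(500, 3))

# A few suspicious observations: heavy exfiltration-like traffic at night.
suspicious = np.array([[900_000, 300, 6], [750_000, 250, 5]])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)  # learn the envelope of normal behaviour

for row, verdict in zip(suspicious, model.predict(suspicious)):
    label = "ANOMALY" if verdict == -1 else "normal"
    print(f"bytes={row[0]:.0f} conns={row[1]:.0f} offhours={row[2]:.0f} -> {label}")
```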
AI also boosts forensic analysis. Instead of combing through evidence for days, analysts can let AI-driven systems trace an event back to its origin, identify the vulnerabilities attackers exploited, and flag systems that are still compromised. During a crisis, AI serves as decision support, predicting the outcomes of different scenarios and recommending the best response. In a ransomware attack, for example, AI might advise, depending on context, isolating a single network segment, restoring from backups, or alerting law enforcement.
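A toy example of the trace-back idea: given parent-child process records from endpoint logs, a short script can walk the chain upward from the alerting process to its origin. The log entries are fabricated for illustration; real forensic tooling ingests endpoint detection and response (EDR) telemetry at far larger scale.

```python
# A toy illustration of automated trace-back: walk parent-child process
# records from endpoint logs to find the root of a suspicious chain.
events = {
    # pid: (parent_pid, process_name)
    100: (1,   "explorer.exe"),
    205: (100, "outlook.exe"),
    311: (205, "winword.exe"),        # user opened an attachment
    412: (311, "powershell.exe"),     # macro spawned a shell
    523: (412, "ransom_payload.exe"), # the process that triggered the alert
}

def trace_to_origin(pid: int) -> list[str]:
    """Follow parent links upward to reconstruct the attack chain."""
    chain = []
    while pid in events:
        parent, name = events[pid]
        chain.append(name)
        pid = parent
    return list(reversed(chain))

# Start from the alerting process and print the reconstructed chain.
print(" -> ".join(trace_to_origin(523)))
# explorer.exe -> outlook.exe -> winword.exe -> powershell.exe -> ransom_payload.exe
```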
Real-World Applications and Case Studies
Real-world applications of automation and AI already demonstrate this promise. IBM Watson for Cybersecurity, for example, has been applied to analysing unstructured threat intelligence, giving analysts actionable results in minutes rather than days. Similarly, the AI-driven systems in DARPA's Cyber Grand Challenge demonstrated the ability to identify and patch vulnerabilities automatically and in real time, revealing the potential of self-healing systems. AI-powered fraud detection systems stop suspicious transactions mid-execution and operate around the clock to prevent losses. Common to all these examples is that automation and AI reduce human effort, increase accuracy, and, in the event of a cyberattack, buy precious time.
Challenges and Limitations
While promising, the technology is still not fully mature. The quality of an AI system depends heavily on its training data; poor training can generate false positives that drown teams in noise or, worse, false negatives that let attackers proceed unnoticed. Attackers have also started targeting AI itself, poisoning datasets or designing malware that evades detection. Beyond these technical risks, the operational and financial costs of advanced AI-based systems put them out of reach for many organisations. Organisations must invest not only in the technology but also in training staff to use these tools well. Ethical and privacy issues arise too: because such systems may process sensitive personal data, they can run up against data protection laws such as the GDPR or India's DPDP Act.
Creating Human-AI Collaboration
The future will not be one of substitution by machines but of human-AI synergy. Automation can do the drudgery, AI can provide the smarts, and human professionals can supply judgment, imagination, and ethical decision-making. The goal is AI-fuelled Security Operations Centres where technology and human experts work in tandem. AI models need continuous training to reduce false alarms and to harden them against adversarial attacks. Regular crisis drills that combine AI tools and human teams can ensure preparedness for real incidents. Likewise, embedding ethical AI guidelines into security frameworks strengthens defence while respecting privacy and regulatory compliance.
Conclusion
Cyber-attacks are an eventuality in the modern era, but their impact need not be severe. By systematically integrating automation and AI into incident response and crisis management, organisations can shift from reactive firefighting to proactive resilience. Automation brings speed and efficiency while AI brings intelligence and foresight, putting defenders on par with, and potentially ahead of, the speed and sophistication of attackers. But even the best system would remain incomplete without human inquisitiveness, ethical reasoning, and strategic foresight. The strongest defence lies in a symbiotic human-machine relationship in which automation and AI absorb the speed and volume of incoming cyber threats, while human intellect ensures that every response aligns with larger organisational goals. That synergy is where the future of cybersecurity resilience resides: defenders will not merely react to emergencies but will drive the way forward.
References
- https://www.sisainfosec.com/blogs/incident-response-automation/
- https://stratpilot.ai/role-of-ai-in-crisis-management-and-its-critical-importance/
- https://www.juvare.com/integrating-artificial-intelligence-into-crisis-management/
- https://www.motadata.com/blog/role-of-automation-in-incident-management/