#FactCheck - Old Video of Khamenei Manipulated With AI Voice, Viral Claim Misleading
Executive Summary
Claims are circulating that Iran’s Supreme Leader Ayatollah Ali Khamenei was killed in a major attack allegedly carried out by Israel and the United States. Amid these claims, a video is being widely shared on social media in which Khamenei can be heard saying, “Beware of fake news, I am alive.” Research conducted by CyberPeace has found the viral claim to be false. Our research revealed that the video being shared is old and that Khamenei’s voice has been altered using artificial intelligence to support a misleading narrative.
Claim
On March 1, 2026, an Instagram user shared the viral video in which Ayatollah Ali Khamenei is heard saying, “Beware of fake news, I am alive.” The link to the post and its archived version are provided above along with a screenshot.

Fact Check:
To verify the authenticity of the claim, we extracted key frames from the viral video and conducted a reverse image search using Google Lens. During the research, we found the same video on the YouTube channel of Sky News Australia, published on June 19, 2025. In the approximately 43-minute-long video, the portion used in the viral clip appears around the 10-minute mark.
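Google Lens's matching internals are proprietary, so as a rough illustration only: the kind of frame fingerprinting that underlies reverse image search can be sketched with a perceptual "average hash." The snippet below is a minimal pure-Python sketch under that assumption; the 16×16 lists of grayscale values are synthetic stand-ins for real video frames, not actual footage.

```python
# Minimal "average hash" sketch: downscale a frame, threshold against the
# mean brightness, and compare fingerprints by Hamming distance. Near-zero
# distance means near-identical frames; large distance means different content.

def average_hash(frame, size=8):
    """Block-average a frame down to size x size, then emit a bit string:
    '1' where a cell is brighter than the mean, '0' otherwise."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // size, w // size
    cells = []
    for r in range(size):
        for c in range(size):
            block = [frame[r * bh + i][c * bw + j]
                     for i in range(bh) for j in range(bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return "".join("1" if v > mean else "0" for v in cells)

def hamming(h1, h2):
    """Count of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

if __name__ == "__main__":
    # Synthetic 16x16 "frames": bright-left/dark-right, the same frame with
    # mild noise added, and an unrelated mirrored frame.
    base = [[200 if x < 8 else 30 for x in range(16)] for _ in range(16)]
    noisy = [[min(255, v + (x + y) % 5) for x, v in enumerate(row)]
             for y, row in enumerate(base)]
    other = [[30 if x < 8 else 200 for x in range(16)] for _ in range(16)]
    print(hamming(average_hash(base), average_hash(noisy)))  # prints 0
    print(hamming(average_hash(base), average_hash(other)))  # prints 64
```

Production systems use far more robust fingerprints, but the principle is the same: a clip that merely re-encodes or lightly edits an original still hashes close to it, which is why keyframes from a manipulated clip can lead straight back to the source broadcast.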

According to Sky News Australia’s report, Iran’s Supreme Leader Ayatollah Ali Khamenei had rejected US President Donald Trump’s demand for unconditional surrender and warned that any American military intervention would result in “irreparable damage.” On listening closely to the viral clip, we noticed that Khamenei’s voice sounded robotic, raising suspicion that it may have been AI-generated. We then analyzed the video using the AI detection tool AURGIN AI, which indicated that the audio in the viral clip had been generated using artificial intelligence.

Conclusion
Our research establishes that the viral video is old and has been digitally manipulated. Ayatollah Ali Khamenei’s voice has been altered using artificial intelligence and the clip is being shared with a misleading claim.
Related Blogs

Over the last decade, battlefields have expanded from mountains, deserts, jungles, seas, and skies into the invisible networks of code and cables. Cyberwarfare is no longer a distant possibility but today’s reality. The 2007 cyberattacks on Estonia, the crippling of Iran’s nuclear program by the Stuxnet worm, and the more recent SolarWinds and Colonial Pipeline breaches have proved one thing: nations can now paralyze economies and infrastructure without firing a bullet. Cyber operations typically fall below the traditional threshold of war, allowing aggressors to exploit a grey zone where full-scale retaliation is unlikely.
At the same time, this ambiguity has given rise to the concept of cyber deterrence, borrowed from Cold War nuclear strategy and adapted to the digital age. At its core, cyber deterrence seeks to alter the adversary’s cost-benefit calculation so that attacks become either too costly or too pointless to pursue. While power blocs like the US, Russia, and China continue to build up their cyber arsenals, smaller nations can hold unique advantages, most importantly in resilience, if not firepower.
Understanding the concept of Cyber Deterrence
Deterrence, in its classic sense, is about preventing action through the fear of consequences. In cyberspace it usually operates through four mechanisms:
- Punishment: threatening to impose costs on attackers through counter-attacks, economic sanctions, or even conventional force.
- Denial: making attacks futile through hardened defences and systems built to resist, recover, and continue functioning.
- Entanglement: leveraging interdependence in trade, finance, and technology so that attacks become costly for the attacker as well as the defender.
- Norms: stigmatizing reckless cyber actions and imposing reputational costs that can exceed any gains.
Great powers, however, have tended to emphasize punishment, wielding offensive cyber arsenals to exert psychological pressure on their rivals. Yet in cyberspace, punishment has inherent flaws: attribution is slow and uncertain, and retaliation risks escalation.
The Advantage of Asymmetry
For small states, limited geographical size can itself be a benefit, in at least three ways:
- With fewer critical infrastructures to protect, resources can be concentrated. Denmark, for example, with a modest cyber budget of roughly $40 million, is considered among the most cyber-secure nations, even as the US spends billions.
- Smaller bureaucracies enable faster response. Singapore’s centralised cyber command allows rapid coordination between government and the private sector.
- Smaller populations make it easier to foster public awareness of, and participation in, cyber hygiene, amplifying national resilience.
In short, defending a small digital fortress can be easier than securing a sprawling empire of interconnected systems.
Lessons from Estonia and Singapore
Estonia’s 2007 crisis remains a case study in cyber resilience. Although its government, banking, and media websites were knocked offline, Estonia emerged stronger by investing heavily in cyber defence. It went on to host NATO’s Cooperative Cyber Defence Centre of Excellence and to build one of the world’s most resilient e-governance models.
Singapore is another case. Recognising its vulnerability as a global financial hub, it has adopted a defence-centric deterrence strategy focused on redundancy, cyber education, and international partnerships rather than offensive capacity. Both approaches show that deterrence is not always about scaring attackers with retaliation; it can be about making attacks meaningless.
Cyber deterrence and Asymmetric Warfare
Cyber conflict is often understood through the lens of asymmetric warfare, in which weaker actors use unconventional means to exploit stronger foes. Just as guerrillas outmanoeuvred superpowers in Vietnam and Afghanistan, small states can frustrate cyber giants by turning their size into a shield. The essence of asymmetric cyber defence lies in three principles:
- Resilience over retaliation: ensuring rapid recovery so that attackers’ goals are neutralised.
- Smart investment: focusing limited budgets on critical assets, not sprawling infrastructures.
- Leveraging norms: shaping international opinion to stigmatize aggressors and raise the reputational costs of attacks.
Together, these turn cyber deterrence into a game of endurance rather than escalation, and endurance is a game at which small states can excel.
Challenges remain. Attribution problems persist, and smaller nations still depend on foreign technology that adversaries have sought to exploit. Talent shortages also plague small states, as cyber professionals migrate to more lucrative jobs abroad. Moreover, building deterrence through norms requires active multilateral cooperation, which not all small nations can sustain.
Conclusion
Cyberwarfare represents a new frontier of asymmetric conflict where size guarantees neither safety nor supremacy. Great powers may dominate offensive cyber arsenals, but small states have carved their own path to security by focusing on defence, resilience, and international collaboration. The examples of Singapore and Estonia demonstrate that a state’s small size can be a hidden strength in cyberspace, allowing nimbleness, concentration of resources, and societal cohesion. In the long run, cyber deterrence for small states will rest not on fearsome retaliation but on making attacks futile and recovery inevitable.
References
- https://bluegoatcyber.com/blog/asymmetric-warfare/
- https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2268&context=jss
- https://www.linkedin.com/pulse/rising-tide-cyberwarfare-battle-between-superpowers-hussain/
- https://digitalcommons.odu.edu/cgi/viewcontent.cgi?article=1243&context=gpis_etds
- https://www.scirp.org/journal/paperinformation?paperid=141708

Executive Summary:
Assembly elections are underway in several Indian states, including West Bengal, Assam, Kerala, Tamil Nadu, and Puducherry. While voting has already taken place in Assam, Kerala, and Puducherry, polling is still pending in West Bengal. In view of the elections, central security forces have been deployed across West Bengal. Amid this, a video showing a group of people pelting stones at a security vehicle is being widely shared on social media. Some users claim that the incident took place in West Bengal and allege that Muslims attacked an army vehicle. However, research by CyberPeace found the claim to be false. The viral video is from Pakistan and has no connection to West Bengal.
Claim
A social media user shared the video on April 5, 2026, claiming that an army vehicle was attacked in West Bengal.
Post links:

Fact Check
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search using Google Lens. This led us to a video posted on a Facebook page on October 13, 2025. The caption of that post indicated that the video was from Lahore, showing clashes between members of Tehreek-e-Labbaik and the police.

Further clues in the video also pointed to Pakistan. A shop sign reading “Lovely Drink Corner” is visible in the footage. A Google search confirmed that this establishment is located in Lahore, Pakistan.

Conclusion
The viral claim is misleading. Although central forces have been deployed in West Bengal for the ongoing elections, the video showing stone-pelting on a security vehicle is not from the state. It is an old video from Lahore, Pakistan, and is being falsely shared with a communal angle to mislead users.

In a recent ruling, a U.S. federal judge sided with Meta in a copyright lawsuit brought by a group of prominent authors who alleged that their works were illegally used to train Meta’s LLaMA language model. While this looks like a significant legal victory for the tech giant, it is narrower than it appears. Rather, it is a useful case study for creators in the U.S. to refine their legal strategies, and for policymakers worldwide to act quickly to shape the rules of engagement between AI and intellectual property.
The Case: Meta vs. Authors
In Kadrey v. Meta, the plaintiffs alleged that Meta trained its LLaMA models on pirated copies of their books, violating copyright law. However, U.S. District Judge Vince Chhabria ruled that the authors failed to prove two critical things: that their copyrighted works had been used in a way that harmed their market and that such use was not “transformative.” In fact, the judge ruled that converting text into numerical representations to train an AI was sufficiently transformative under the U.S. fair use doctrine. He also noted that the authors’ failure to demonstrate economic harm undermined their claims. Importantly, he clarified that this ruling does not mean that all AI training data usage is lawful, only that the plaintiffs didn’t make a strong enough case.
Meta even admitted that some of the data was sourced from pirate sites such as LibGen, but the judge still found that fair use could apply because the usage was transformative and non-exploitative.
A Tenuous Win
Chhabria’s decision emphasised that this is not a blanket endorsement of using copyrighted content in AI training. The judgment leaned heavily on the procedural weaknesses of the case, not on the inherent legality of Meta’s practices.
Policy experts are warning that U.S. courts are currently interpreting AI training as fair use in narrow cases, but the rulings may not set the strongest judicial precedent. The application of law could change with clearer evidence of commercial harm or a more direct use of content.
Moreover, the ruling does not address whether authors or publishers should have the right to opt out of AI model training, a concern that is gaining momentum globally.
Implications for India
The case highlights a glaring gap in India’s copyright regime: it is outdated. Because most AI companies are located in the U.S., American courts have already had occasion to examine copyright in the context of AI training; India has yet to start. Recently, the news agency ANI filed a copyright-infringement case against OpenAI for training on its material, but that case is only at an interim stage. Its final outcome will have a significant impact on whether language models may lawfully use copyrighted material for training.
Considering that India aims to develop “state-of-the-art foundational AI models trained on Indian datasets” under the IndiaAI Mission, the lack of clear legal guidance on what constitutes fair dealing when using copyrighted material for AI training is a significant gap.
Thus, key points of consideration for policymakers include:
- Need for Fair Dealing Clarity: India’s fair-dealing provisions under the Copyright Act, 1957, are narrower than U.S. fair use. The doctrine may have to be reviewed to strike a balance between this law and the requirement of diverse datasets to develop foundational models rooted in Indian contexts. A parallel concern regarding data privacy also arises.
- Push for Opt-Out or Licensing Mechanisms: India should consider whether to introduce a framework that requires companies to license training data or provide an opt-out system for creators, especially given the volume of Indian content being scraped by global AI systems.
- Digital Public Infrastructure for AI: India’s policymakers could take this opportunity to invest in public datasets, especially in regional languages, that are both high quality and legally safe for AI training.
- Protecting Local Creators: India needs to ensure that its authors, filmmakers, educators and journalists are protected from having their work repurposed without compensation, since power asymmetries between Big Tech and local creators can lead to exploitation of the latter.
Conclusion
The ruling in Meta’s favour is just one win for the developer. The real questions about consent, compensation and creative control remain unanswered. Meanwhile, the lesson for India is urgent: it needs AI policies that balance innovation with creator rights and provide legal certainty and ethical safeguards as it accelerates its AI ecosystem. Further, as global tech firms race ahead, India must not remain a passive data source; it must set the terms of its digital future. This will help the country move a step closer to achieving its goal of building sovereign AI capacity and becoming a hub for digital innovation.
References
- https://www.theguardian.com/technology/2025/jun/26/meta-wins-ai-copyright-lawsuit-as-us-judge-rules-against-authors
- https://www.wired.com/story/meta-scores-victory-ai-copyright-case/
- https://www.cnbc.com/2025/06/25/meta-llama-ai-copyright-ruling.html
- https://www.mondaq.com/india/copyright/1348352/what-is-fair-use-of-copyright-doctrine
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2113095#:~:text=One%20of%20the%20key%20pillars,models%20trained%20on%20Indian%20datasets.
- https://www.ndtvprofit.com/law-and-policy/ani-vs-openai-delhi-high-court-seeks-responses-on-copyright-infringement-charges-against-chatgpt