#FactCheck - AI-Generated Video Falsely Claims Death of Iran’s Supreme Leader
Executive Summary
Claims circulated widely on social media that Iran’s Supreme Leader Ayatollah Ali Khamenei had been killed in a major attack carried out by Israel and the United States, with some posts asserting that Iranian state media confirmed his death early Sunday morning. Amid these claims, a video showing a body trapped under debris is being widely shared, with users alleging that the body seen in the footage is that of Ayatollah Ali Khamenei. However, research conducted by CyberPeace found the viral claim to be false: the video is not authentic but AI-generated.
Claim:
On March 1, 2026, an Instagram user shared the viral video with the caption: “Shaheed Ayatollah Sayyid Ali Hosseini Khamenei — Neither fled nor hid in a bunker, embraced death like a brave man.” The link to the post and its archived version are provided below along with a screenshot.

Fact Check:
Upon closely examining the viral video, we noticed several visual irregularities and technical inconsistencies. This raised suspicion about its authenticity. We then scanned the video using the AI detection tool Hive Moderation. The results indicated that approximately 83 percent of the content showed signs of being AI-generated.

To further verify the claim, we also analyzed the video using another AI detection tool, WasItAI. The findings similarly suggested that the video was generated using artificial intelligence.

Conclusion:
Our research establishes that the viral video is not real. It has been artificially generated using AI and is being shared with misleading claims.
Introduction
According to Statista, the global artificial intelligence software market is forecast to grow to around 126 billion US dollars by 2025, building on a 270% increase in enterprise adoption over the past four years. The top three verticals in the AI market are BFSI (Banking, Financial Services, and Insurance), Healthcare & Life Sciences, and Retail & e-commerce. These sectors benefit from vast data generation and a critical need for advanced analytics: AI is used for fraud detection, customer service, and risk management in BFSI; diagnostics and personalised treatment plans in healthcare; and marketing and inventory management in retail.
The Chairperson of the Competition Commission of India (CCI), Smt. Ravneet Kaur, has raised the concern that artificial intelligence could aid cartelisation by automating collusive behaviour through predictive algorithms. She explained that the mere use of algorithms is not in itself anti-competitive, but if algorithms are manipulated, that becomes a valid concern for competition in markets.
This blog focuses on how policymakers can balance fostering innovation and ensuring fair competition in an AI-driven economy.
What is the Risk Created by AI-driven Collusion?
AI systems rely on predictive algorithms, which could aid cartelisation by automating collusive behaviour. AI-driven collusion could take place through:
- The use of predictive analytics to coordinate pricing strategies among competitors.
- A lack of human oversight in algorithmic decision-making, leading to tacit collusion (competitors coordinating their actions without explicitly communicating or agreeing to do so).
AI has been raising antitrust concerns, and the most recent example is the partnership between Microsoft and OpenAI, which has drawn the attention of several national competition authorities. While the partnership is expected to accelerate innovation, it also raises concerns about potential anticompetitive effects, such as market foreclosure or the creation of barriers to entry for competitors, and has therefore been under review by the German and UK competition authorities. The core difficulty lies in detecting and proving whether collusion is taking place.
The Role of Policy and Regulation
The uncertainty AI introduces into competitive markets creates a need for algorithmic transparency and accountability to mitigate the risks of AI-driven collusion. This calls for regulatory frameworks that mandate the disclosure of algorithmic methodologies and establish clear guidelines for the development and deployment of AI. These frameworks should encourage collaboration between competition watchdogs and AI experts.
The global best practices and emerging trends in AI regulation already include respect for human rights, sustainability, transparency and strong risk management. The EU AI Act could serve as a model for other jurisdictions, as it outlines measures to ensure accountability and mitigate risks. The key goal is to tailor AI regulations to address perceived risks while incorporating core values such as privacy, non-discrimination, transparency, and security.
Promoting Innovation Without Stifling Competition
Policymakers need to balance regulatory measures with room for innovation so that the two priorities do not hinder each other.
- Create adaptive, forward-thinking regulatory approaches that keep pace with technological advancement and allow for quick adjustments in response to new AI capabilities and market behaviours.
- Competition watchdogs need to recruit domain experts to assess competition amid rapid changes in the technology landscape. Create a multi-stakeholder approach that involves regulators, industry leaders, technologists and academia who can create inclusive and ethical AI policies.
- Businesses can be provided incentives such as recognition through certifications, grants or benefits in acknowledgement of adopting ethical AI practices.
- Launch studies, such as the CCI’s market study, to assess the impact of AI on competition. This can help drive sustainable growth alongside technological advancement.
Conclusion: AI and the Future of Competition
We must promote a multi-stakeholder approach that enhances regulatory oversight and incentivises ethical AI practices. This is needed to strike the delicate balance that safeguards competition and drives sustainable growth. As AI continues to redefine industries, embracing collaborative, inclusive, and forward-thinking policies will be critical to building an equitable and innovative digital future.
Lawmakers and policymakers drafting these frameworks need to ensure that they are adaptive to change and foster innovation. Fair competition and innovation are not mutually exclusive goals; they are complementary. Therefore, a regulatory framework that promotes transparency, accountability, and fairness in AI deployment must be established.
References
- https://www.thehindu.com/sci-tech/technology/ai-has-potential-to-aid-cartelisation-fair-competition-integral-for-sustainable-growth-cci-chief/article69041922.ece
- https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-market-74851580.html
- https://www.ey.com/en_in/insights/ai/how-to-navigate-global-trends-in-artificial-intelligence-regulation#:~:text=Six%20regulatory%20trends%20in%20Artificial%20Intelligence&text=These%20include%20respect%20for%20human,based%20approach%20to%20AI%20regulation.
- https://www.business-standard.com/industry/news/ai-has-potential-to-aid-fair-competition-for-sustainable-growth-cci-chief-124122900221_1.html

INTRODUCTION:
The Ministry of Defence has recently designated the Additional Directorate General of Strategic Communication in the Indian Army as the nodal officer authorised to send removal requests and notices to social media intermediaries regarding posts containing unlawful content concerning the Army. Earlier, such requests were routed through the Ministry of Electronics and Information Technology (MeitY). The new designation gives the Army the autonomy to bypass the old process and send notices directly (as deemed appropriate by the government and its agency). Let us look at the legal framework that allows this and its policy implications.
BACKGROUND AND LEGAL FRAMEWORK:
Section 69 of the IT Act, 2000 gives the government the power to issue directions for the interception, monitoring or decryption of any data/information through any computer resource. Such directions may be issued on six grounds:
- Upholding the sovereignty or integrity of India
- Security of the state
- Defence of India
- Friendly relations with foreign states
- Public order or preventing incitement to the commission of any cognisable offence
- Investigations of offences related to the aforementioned reasons
Section 79(3)(b) of the Information Technology Act, 2000 addresses the removal of content upon notification. Section 79 grants intermediaries (including internet service providers and social media platforms) safe harbour from liability for content put out by third parties/users on their platforms. This protection, however, holds only if the intermediary, upon receiving actual knowledge or a notification from the appropriate government or its agency that data on its platform is being used for unlawful acts, promptly removes or disables access to that material without tampering with evidence.
PLAUSIBLE REASONS FOR POLICY DECISION:
Cases related to the Indian Army are sensitive for a number of reasons, rooted in the fact that they directly pertain to the nation’s security, integrity and sovereignty. The impact of the spread of misinformation and disinformation is almost instantaneous and the stakes are high in any circumstance, but exceptionally so when it comes to the Armed Forces and the nation’s security. A mechanism for cases of this sensitivity should allow for quick action from the authorities. Since it can now notify intermediaries directly rather than through another ministry, the Army can deal with these concerns promptly as they arise. One immediate benefit is that the forces can respond quickly when foreign states and actors with malicious intent put out information that can harm the nation’s interests, image and integrity.
This step helps the forces counter misinformation, safeguard national security, and address online propaganda. An example of sensitive content about the Army leading to legal intervention is the case of the Delhi-based magazine The Caravan. The Defence Ministry, along with the Intelligence Bureau and the Jammu and Kashmir Police, ordered the publication to remove an article alleging the murder and torture of civilians by the Indian Army in Jammu and Kashmir, citing the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The magazine challenged the instruction in court.
CONCLUSION:
This move brings potential benefits along with risks, and the focus should remain on a balanced approach. Transparency and accountability are imperative, and checks on the related guidelines, designed to prevent misuse while protecting national security, should be at the centre of the policy objective. Misinformation in and about the armed forces must be dealt with immediately.
REFERENCES:
- https://www.hindustantimes.com/india-news/army-can-now-directly-issue-notices-to-remove-online-posts-101730313177838.html
- https://www.hindustantimes.com/india-news/inside-79-3-b-the-content-blocking-provision-with-many-legal-grey-areas-101706987924882.html
- https://www.thehindu.com/news/national/govt-orders-magazine-to-take-down-article-on-army-torture-and-murder-in-jammu/article67840790.ece
- https://myind.net/Home/viewArticle/army-gains-authority-to-directly-issue-notice-to-take-down-online-posts

The Illusion of Digital Serenity
In the age of technology, our email accounts have turned into overcrowded spaces, full of newsletters, special offers, and unwanted updates. To most, the presence of an "unsubscribe" link brings a minor feeling of empowerment, a chance to declutter and restore digital serenity. Yet behind this harmless-seeming tool lurks a developing cybersecurity threat. Recent research and expert discussions indicate that the "unsubscribe" button is being used by cybercriminals to carry out phishing campaigns, confirm active email accounts, and distribute malware. This new threat not only undermines individual users but also has wider implications for trust, behaviour, and governance in cyberspace.
Exploiting User Behaviour
The main challenge is the manipulation of user behaviour. Cybercriminals have learned to analyse typical user habits, most notably the instinctive act of unsubscribing from spam mail. Taking advantage of this, they embed malicious links in emails that pose as legitimate subscription messages. These links may redirect victims to fake websites that harvest credentials, trigger the installation of malware, or simply record the click as verification that the recipient’s email address is active. Once confirmed, these addresses tend to be resold on the dark web or added to further spam lists, elevating the threat of subsequent attacks.
A Social Engineering Trap
This type of cyber deception is a prime example of social engineering, where the weakest link in the security chain is the human factor. Just as misinformation campaigns exploit cognitive biases such as confirmation and familiarity, unsubscribe traps exploit user convenience and habit. The bait is simple, and that is exactly what makes it work: someone attempting to combat spam may unknowingly walk into a sophisticated cyber threat. Unlike phishing messages impersonating banks or government agencies, which tend to elicit suspicion, spoofed unsubscribe links are embedded in routine digital habits, making them more difficult to recognise and resist.
Professional Disguise, Malicious Intent
Technical analysis shows that most of these messages come from suspicious domains or spoofed versions of legitimate ones, such as "@offers-zomato.ru" in place of the authentic "@zomato.com." The email looks professional, complete with logos and styling copied from reputable businesses, but behind the HTML lie redirection code and obfuscated scripts with a very different agenda. At times, users are redirected to sites that mimic login pages or questionnaire forms, capturing sensitive information under the guise of email preference management.
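The lookalike-domain trick described above can be caught with a simple allowlist check. The sketch below is illustrative only: the trusted domains are assumptions for the example, not a real brand registry, and production filters would use far richer signals (SPF/DKIM results, reputation feeds) than a string comparison.

```python
import re

# Illustrative allowlist of known-good sender domains (an assumption for
# this example; real deployments would source this from verified records).
TRUSTED_DOMAINS = {"zomato.com", "example.com"}

def sender_domain(from_header: str) -> str:
    """Extract the domain portion of a From: header value."""
    match = re.search(r"@([\w.-]+)", from_header)
    return match.group(1).lower() if match else ""

def is_suspicious(from_header: str) -> bool:
    """Flag senders whose domain is not on the allowlist.

    Catches lookalikes such as 'offers-zomato.ru' impersonating
    'zomato.com': the string resembles the brand, the domain does not match.
    """
    return sender_domain(from_header) not in TRUSTED_DOMAINS

print(is_suspicious("Zomato Offers <deals@offers-zomato.ru>"))  # True
print(is_suspicious("Zomato <noreply@zomato.com>"))             # False
```

The point of the sketch is that the deciding signal is the registered domain after the "@", not the display name or the branding, which attackers copy freely.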
Beyond the Inbox: Broader Consequences
The consequences of this attack go beyond the individual user. The compromise of a personal email account can be used to carry out more extensive spamming campaigns, engage in botnets, or even execute identity theft. Furthermore, the compromised devices may become entry points for ransomware attacks or espionage campaigns, particularly if the individual works within sensitive sectors such as finance, defence, or healthcare. In this context, what appears to be a personal lapse becomes a national security risk. This is why the issue posed by the weaponised unsubscribe button must be considered not just as a cybersecurity risk but also as a policy and public awareness issue.
Platform Responsibility
Platform responsibility is another important aspect. Email providers such as Gmail, Outlook, and ProtonMail offer native unsubscribe capabilities built on the List-Unsubscribe header mechanism. These tools let users remove themselves from legitimate mailing lists safely, without engaging with the content of the original email. Yet many users are unaware of these safer options and instead resort to in-body unsubscribe links that are easier to find but riskier. Email platforms therefore need to do more, not only hardening backend security but also steering user behaviour through simple interfaces, safety messages, and digital hygiene alerts.
Education as a Defence
Education plays a central role in mitigation. Just as cyber hygiene campaigns have been launched to teach users not to click on suspicious links or download unknown attachments, similar efforts are needed to highlight the risks associated with casual unsubscribing. Cybersecurity literacy must evolve to match changing threat patterns. Rather than only targeting clearly malicious activity, awareness campaigns should start tackling deceptive tactics that disguise themselves as beneficial, including unsubscribe traps or simulated customer support conversations. Partnerships between public and private institutions might be vital in helping with this by leveraging their resources for mass digital education.
Practical Safeguards for Users
Users are advised to always check the sender's domain before clicking any link, avoid unknown promotional emails, and hover over any link to preview its true destination. Rather than clicking "unsubscribe," users can simply mark such emails as spam or junk so that their email providers can automatically filter similar messages in the future. For enhanced security, embracing mechanisms such as mail client sandboxing, two-factor authentication (2FA) support, and alias email addresses for sign-ups can also help create layered defences.
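The "hover to preview the true destination" advice above can also be automated: the deception relies on the visible link text saying one thing while the underlying href says another. A small sketch with Python's standard html.parser, using a hypothetical email body whose domains are invented for illustration:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (display text, actual href) pairs from an HTML email body
    so the true destination of each link can be inspected before clicking."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None   # href of the <a> currently open, if any
        self._text = []     # text fragments seen inside that <a>

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# Illustrative body: the visible text names one domain,
# the underlying href points somewhere else entirely.
body = ('<p>Tired of these emails? '
        '<a href="http://offers-zomato.ru/u?x=1">Unsubscribe at zomato.com</a></p>')

auditor = LinkAuditor()
auditor.feed(body)
for text, href in auditor.links:
    print(f"shown: {text!r}  ->  actually goes to: {href}")
```

Any pair where the displayed text mentions a domain that does not match the href's domain is exactly the mismatch the hover check is meant to expose.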
Policy and Regulatory Implications
Policy implications are also significant. Governments and data protection regulators must study the increasing misuse of misleading unsubscribe hyperlinks under electronic communication and consent laws. In India, the new Digital Personal Data Protection Act, 2023 (DPDPA), provides a legislative framework to counter such deceptive practices, especially under the principles of legitimate processing and purpose limitation. The law requires that the processing of data should be transparent and fair, a requirement that malicious emails obviously breach. Regulatory agencies like CERT-In can also release periodic notifications warning users against such trends as part of their charter to encourage secure digital practices.
The Trust Deficit
The vulnerability also relates to broader issues of trust in digital infrastructure. When widely used tools such as an unsubscribe feature become points of exploitation, user trust in digital platforms erodes. Such a trust deficit can lead to generalised distrust of email systems, digital communication, and even legitimate marketing. Restoring and maintaining such trust demands a unified response that includes technical measures, user education, and regulatory action.
Conclusion: Inbox Hygiene with Caution
The "unsubscribe button trap" is a parable of the modern age. It illustrates how mundane digital interactions, when manipulated, can do great damage not only to individual users but also to the larger ecosystem of online security and trust. As cyber-attacks grow increasingly psychologically advanced and behaviorally focused, our response must similarly become more sophisticated, interdisciplinary, and user-driven. Getting your inbox in order should never involve putting yourself in cyber danger. But as things stand, even that basic task requires caution, context, and clear thinking.