#FactCheck - Viral Photos Falsely Linked to Iranian President Ebrahim Raisi's Helicopter Crash
Executive Summary:
On 20 May 2024, Iranian President Ebrahim Raisi and several others died in a helicopter crash in northwestern Iran. Images circulated on social media claiming to show the crash site have been found to be false. The CyberPeace Research Team's investigation revealed that these images actually show the wreckage of a training plane that crashed in Iran's Mazandaran province in 2019 or 2020. Reverse image searches, along with confirmations from the Tehran-based Rokna Press and Ten News, verified that the viral images originated from an incident involving a police force's two-seater training plane, not the recent helicopter crash.
Claims:
The images circulating on social media claim to show the site of Iranian President Ebrahim Raisi's helicopter crash.



Fact Check:
After receiving the posts, we ran a reverse image search on each picture. All of them, except the image of the blue plane, traced back to a 2020 air crash. We found a website that had published the viral plane crash images on April 22, 2020.

According to that website, a police training plane crashed in the forests of Mazandaran, near Swan Motel. We also found the images on another Iranian news outlet, ‘Ten News’.

The photos uploaded to this website were posted in May 2019. The news report reads, “A training plane that was flying from Bisheh Kolah to Tehran. The wreckage of the plane was found near Salman Shahr in the area of Qila Kala Abbas Abad.”
Hence, we concluded that the viral photos do not show Iranian President Ebrahim Raisi's helicopter crash; the claim is false and misleading.
Conclusion:
The images being shared on social media as evidence of the helicopter crash involving Iranian President Ebrahim Raisi are misattributed. They actually show the aftermath of a training plane crash that occurred in Mazandaran province in 2019 or 2020 (the exact date remains uncertain). This has been confirmed through reverse image searches that traced the images back to their original publication by Rokna Press and Ten News. Consequently, the claim that these images are from the site of President Ebrahim Raisi's helicopter crash is false and misleading.
- Claim: Viral images of Iranian President Raisi's fatal chopper crash.
- Claimed on: X (Formerly known as Twitter), YouTube, Instagram
- Fact Check: Fake & Misleading
Introduction
The advent of AI-driven deepfake technology has facilitated the creation of counterfeit explicit images and videos for sextortion, and this misuse of Artificial Intelligence is increasing at an alarming rate.
What is AI Sextortion and Deepfake Technology?
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. At the same time, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions that develop and deploy advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, along with collaboration with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated deepfake content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this exposure to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.

Introduction
The pervasive issue of misinformation in India is a multifaceted challenge with profound implications for democratic processes, public awareness, and social harmony. The Election Commission of India (ECI) has taken measures to counter misinformation during the 2024 elections. The ECI has launched campaigns to educate people and urge them to verify election-related content and share it responsibly on social media. In response to the proliferation of fake news and misinformation online, the ECI has introduced initiatives such as ‘Myth vs. Reality’ and 'VerifyBeforeYouAmplify' to clear the air around fake news being spread on social media. These measures aim to curb the spread of misinformation, especially during election time, when voters consume a great deal of information from social media. It is of the utmost importance that voters consume facts and reliable information and avoid manipulative or fake content that can negatively impact the election process.
EC Collaboration with Tech Platforms
In this new age of technology, the Internet and social media continue to witness a surge in the spread of misinformation, disinformation, synthetic media content, and deepfake videos. This has rightly raised serious concerns. The responsible use of social media is instrumental in maintaining the accuracy of information and curbing misinformation incidents.
The ECI has collaborated with Google to empower the citizenry by making it easy to find critical voting information on Google Search and YouTube. In this way, Google supports the 2024 Indian General Election by providing high-quality information to voters, safeguarding platforms from abuse, and helping people navigate AI-generated content. The company connects voters to helpful information through product features that show data from trusted organisations across its portfolio. YouTube showcases election information panels, including how to register to vote, how to vote, and candidate information. YouTube's recommendation system prominently features content from authority sources on the homepage, in search results, and in the "Up Next" panel. YouTube highlights high-quality content from authoritative news sources during key moments through its Top News and Breaking News shelves, as well as the news watch page.
Google has also implemented strict policies and restrictions regarding who can run election-related advertising campaigns on its platforms. They require all advertisers who wish to run election ads to undergo an identity verification process, provide a pre-certificate issued by the ECI or anyone authorised by the ECI for each election ad they want to run where necessary, and have in-ad disclosures that clearly show who paid for the ad. Additionally, they have long-standing ad policies that prohibit ads from promoting demonstrably false claims that could undermine trust or participation in elections.
CyberPeace Countering Misinformation
CyberPeace Foundation, a leading organisation in the field of cybersecurity, works to promote digital peace for all. CyberPeace works across the wider ecosystem to counter misinformation and develop a safer and more responsible Internet. CyberPeace has collaborated with Google.org to run a pan-India awareness-building program and a comprehensive multilingual digital resource hub, with content available in up to 15 Indian languages, to empower over 40 million netizens in building resilience against misinformation and practising responsible online behaviour. This step is crucial in creating a strong foundation for a trustworthy Internet and a secure digital landscape.
Myth vs Reality Register by ECI
The Election Commission of India (ECI) has launched the 'Myth vs Reality Register' to combat misinformation and ensure the integrity of the electoral process during the general elections 2024. The 'Myth vs Reality Register' can be accessed through the Election Commission's official website (https://mythvsreality.eci.gov.in/). All stakeholders are urged to verify and corroborate any dubious information they receive through any channel with the information provided in the register. The register provides a one-stop platform for credible and authenticated election-related information, with the factual matrix regularly updated to include the latest busted fakes and fresh FAQs. The ECI has identified misinformation as one of the challenges, along with money, muscle, and Model Code of Conduct violations, for electoral integrity. The platform can be used to verify information, prevent the spread of misinformation, debunk myths, and stay informed about key issues during the General Elections 2024.
The ECI has taken proactive steps to combat the challenge of misinformation, which could cripple the democratic process. The EC has issued directives urging vigilance and responsibility from all stakeholders, including political parties, to verify information before amplifying it. The EC has also urged responsible behaviour on social media platforms and discourse that inspires unity rather than division. The commission has stated that originators of false information will face severe consequences, and nodal officers across states will remove unlawful content. Parties are encouraged to engage in issue-based campaigning and refrain from disseminating unverified or misleading advertisements.
Conclusion
The steps taken by the ECI have been designed to empower citizens and help them affirm the accuracy and authenticity of content before amplifying it. All citizens must be well-educated about the entire election process in India. This includes information on how the electoral rolls are made, how candidates are monitored, a complete database of candidates and candidate backgrounds, party manifestos, etc. For informed decision-making, active reading and seeking information from authentic sources is imperative. The partnership between government agencies, tech platforms and civil societies helps develop strategies to counter the widespread misinformation and promote online safety in general, and electoral integrity in particular.
References
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2016941#:~:text=To%20combat%20the%20spread%20of,the%20ongoing%20General%20Elections%202024
- https://www.business-standard.com/elections/lok-sabha-election/ls-elections-2024-ec-uses-social-media-to-nudge-electors-to-vote-124040700429_1.html
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://blog.google/intl/en-in/partnering-indias-success-in-a-new-digital-paradigm/

Introduction
Holi 2025 is just around the corner. In fact, in the Braj region of Mathura and Vrindavan, the celebrations have already begun, starting from Basant Panchami on 2nd February 2025. Temples in Vrindavan are sprinkling flowers on devotees, creating mesmerising scenes filled with the spirit of devotion. Meanwhile, cities like Delhi, Bangalore, and Mumbai are all set, with pre-bookings open for Holi events, parties, and music festivals.
However, in the current digital era, cybercriminals run manipulative campaigns to deceive innocent people. They push fake cashback offers, freebies, lucrative deals, giveaways, and phishing scams under the guise of Holi deals. Ahead of the festival of colors, knowing the warning signs will help you stay alert and safeguard yourself against digital scams.
How Scammers Might Target You
Holi is a time for joy, colors, and celebrations, but cybercriminals see it as the perfect opportunity to trick people into falling for scams. With increased online shopping, event bookings, and digital transactions, scammers exploit the festive mood to steal money and personal information. Here are some common Holi-related cyber scams and how they operate:
- Exclusive Fake Holi Offers
Scammers send out promotional messages via WhatsApp, SMS, or email claiming to offer exclusive Holi discounts. For example, you might receive a message like:
"Get 70% off on Holi color packs! Limited-time deal! Click here to order now."
However, clicking the link leads to a fraudulent website designed to steal your card details or make unauthorized transactions.
- Fake Holi Cashback Offers
You may get an SMS that reads:
"Congratulations! You’ve won ₹500 cashback for your Holi purchases. Claim now by clicking this link."
The link may take you to a phishing page that asks for your UPI PIN or bank login credentials, allowing scammers to siphon off your money.
- Fake Quizzes to Win Freebies
Scammers circulate links to Holi-themed quizzes or surveys promising free gifts like branded clothing, sweets, or smart gadgets. These often ask users to enter personal details such as phone numbers, email addresses, or even Aadhaar numbers. Once entered, the scammers misuse this information for identity theft or further phishing attempts.
- Fake Social Media Giveaways
Many fraudsters create fake Instagram and Facebook pages mimicking well-known brands, announcing contests with tempting prizes. For example:
"Holi Giveaway! Win a free Bluetooth speaker or chance to win smartphone by following us and sending a small registration fee!"
Once you pay, the page disappears, leaving you with nothing but regret.
- Targeted Phishing Scams
During Holi, phishing attempts surge as scammers disguise themselves as banks, e-wallet services, or e-commerce platforms. You might receive an email with a subject like:
"Urgent: Your Holi order needs confirmation, update your details now!"
The email contains a fake link that, when clicked, prompts you to enter sensitive login information, which the scammers then use to access your account.
- Clickbait Links on Social Media
Cybercriminals circulate enticing headlines such as:
"This New Holi Color Is Banned – Find Out Why!"
These links often lead to malware-infected pages that compromise your device security or steal browsing data.
- Bogus Online Booking Platforms
With many people looking for Holi event tickets or holiday stays, scammers set up fake booking websites. Imagine you come across a site advertising "Holi Pool Party – Entry Just INR 299!" You eagerly make the payment, only to find out later that the event never existed.
How to Stay Safe This Festive Season
- Verify offers directly from official websites instead of clicking on random links.
- Avoid sharing personal or banking details on unfamiliar platforms.
- Look for HTTPS in website URLs before making any payments.
- Be cautious of unsolicited messages, even if they appear to be from known contacts.
- If an offer seems too good to be true, it is likely a scam.
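
The HTTPS and "verify the source" checks above can be sketched as a simple heuristic. This is only an illustrative example: the allow-listed domains and the scam keywords below are assumptions for demonstration, not a real detection system, and no filter replaces careful human judgment.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of official domains; in practice, always open the
# brand's own website or app directly rather than trusting a forwarded link.
OFFICIAL_DOMAINS = {"amazon.in", "flipkart.com"}

# Keywords commonly dressed into festive scam links (illustrative only).
SUSPICIOUS_KEYWORDS = ("cashback", "free-gift", "holi-offer", "claim-now")

def looks_suspicious(url: str) -> bool:
    """Return True if the URL fails basic festive-season safety heuristics."""
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    # 1. Insecure scheme: never enter payment details over plain HTTP.
    if parsed.scheme != "https":
        return True
    # 2. Unknown domain combined with scam-style bait keywords.
    if host not in OFFICIAL_DOMAINS and any(
        kw in url.lower() for kw in SUSPICIOUS_KEYWORDS
    ):
        return True
    return False
```

For example, `looks_suspicious("http://amazon.in/deal")` flags the plain-HTTP link, while an HTTPS link on an allow-listed domain passes. Real scam links often evade such simple rules, so treat this as a first filter, not a verdict.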
Conclusion:
As Holi 2025 approaches, make sure your online security remains a priority. Keep an eye out for frauds that attempt to take advantage of festive seasons like Holi, and protect yourself against cyber threats. Before engaging with any Internet content, prioritise verifying its source. Let us safeguard our celebrations with sound cybersecurity precautions. Wishing you all a cyber-safe and Happy Holi 2025!