#FactCheck - Old Video Misleadingly Claimed as Footage of Iranian President Before Crash
Executive Summary:
A video circulated on social media claiming to show Iranian President Ebrahim Raisi inside a helicopter moments before the tragic crash on May 20, 2024. Verification leaves no doubt that the video was actually shot in January 2024, during Raisi's visit to the Nemroud Reservoir Dam project. To trace the video's origin, the CyberPeace Research Team conducted a reverse image search and analyzed information obtained from the Islamic Republic News Agency, Mehran News, and the Iranian Students' News Agency. Further, the Associated Press pointed out inconsistencies between the viral clip and the segment aired by Iranian state television. The snowy background in the viral clip is incongruent with the green landscape and river seen in footage of the crash area, confirming that the video is old and unrelated to the tragedy.

Claims:
A video circulating on social media claims to show Iranian President Ebrahim Raisi inside a helicopter an hour before his fatal crash.



Fact Check:
On examining the posts, we found that some of them carried watermarks of the IRNA News Agency and Nouk-e-Qalam News.

Taking a cue from this, we performed a keyword search for any credible source of the shared video, but found no such video on the IRNA News Agency's website, nor any recent upload from them related to the viral claim.
On closely analyzing the video, President Ebrahim Raisi can be seen looking out over snow-covered mountains, whereas the footage of the accident available on the internet shows green forest and no snow-covered mountains.
We then checked for any social media posts uploaded by IRNA News Agency and found that they had uploaded the same video on X on January 18, 2024. The post clearly indicates the President’s aerial visit to Nemroud Dam.

The viral video is therefore old and does not show the moments before the tragic chopper crash involving President Raisi.
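The frame-matching step above can be illustrated with a toy example. This is a minimal sketch, not the CyberPeace team's actual tooling: it compares video frames using a difference hash (dHash), one common way fact-checkers match a viral clip against archived footage. Frames are modeled here as small 2D brightness grids; a real workflow would decode actual frames from the two videos (e.g. with OpenCV) before hashing.

```python
# Illustrative only: difference-hash comparison of video frames.
# A real pipeline would extract keyframes from the viral clip and the
# archived January 2024 footage, then compare their hashes.

def dhash(frame):
    """Bit list: 1 where a pixel is brighter than its right neighbour."""
    bits = []
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Count of differing bits; a small distance suggests the same scene."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical frames: a snowy scene, the same scene re-encoded with small
# pixel noise, and a very different green-landscape scene.
snowy      = [[200, 210, 215, 220], [198, 205, 212, 219], [190, 200, 210, 218]]
snowy_reup = [[201, 209, 216, 221], [197, 206, 211, 220], [191, 199, 211, 217]]
green      = [[40, 35, 90, 20], [60, 10, 120, 30], [25, 80, 15, 100]]

same_clip = hamming(dhash(snowy), dhash(snowy_reup))  # 0: same scene survives re-encoding
diff_clip = hamming(dhash(snowy), dhash(green))       # 5: clearly a different scene
print(same_clip, diff_clip)
```

Because dHash encodes only relative brightness gradients, it tolerates the compression and re-uploading that viral clips typically undergo, while still separating genuinely different scenes such as snowy mountains versus a green river valley.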
Conclusion:
The viral clip is not related to the fatal crash of Iranian President Ebrahim Raisi's helicopter and is actually from a January 2024 visit to the Nemroud Reservoir Dam project. The claim that the video shows visuals before the crash is false and misleading.
- Claim: Viral Video of Iranian President Raisi was shot before fatal chopper crash.
- Claimed on: X (Formerly known as Twitter), YouTube, Instagram
- Fact Check: Fake & Misleading

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit images and videos, and their use for sextortion has increased alarmingly.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images, as the underlying algorithms have grown more sophisticated and produce more seamless, realistic manipulations. Moreover, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time for strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals' privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.
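One building block of the "early detection" measure above can be sketched in a few lines. This is purely illustrative and not any platform's real system: it matches uploads against a registry of hashes of known abusive media, so that re-uploads of previously reported content can be blocked automatically. The function names and the registry are invented for this example; real deployments also rely on perceptual hashes and ML classifiers, since re-encoding defeats exact-hash matching.

```python
import hashlib

# Hypothetical registry of hashes of media already confirmed as abusive
# (e.g. reported deepfake sextortion content).
KNOWN_ABUSIVE_HASHES = set()

def register_abusive(media_bytes: bytes) -> str:
    """Record a confirmed-abusive item so future uploads can be matched."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    KNOWN_ABUSIVE_HASHES.add(digest)
    return digest

def screen_upload(media_bytes: bytes) -> str:
    """Return 'blocked' for known abusive content, else 'allowed'."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return "blocked" if digest in KNOWN_ABUSIVE_HASHES else "allowed"

# A victim (or moderator) reports a clip once; identical re-uploads are blocked.
register_abusive(b"fake-explicit-clip-bytes")
print(screen_upload(b"fake-explicit-clip-bytes"))  # blocked
print(screen_upload(b"harmless-holiday-photo"))    # allowed
```

The design choice matters for victims: once content is reported a single time, every identical re-upload across the platform can be stopped without requiring the victim to report it again.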

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated manipulations. This violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: Targeting teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable due to their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this vulnerability to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.

Introduction
Meta is the leader among social media platforms, with a widespread network of users and services across global cyberspace, and has been revolutionising messaging and connectivity since 2004. The platform has brought people closer together, but its very popularity is also a liability: popular platforms are favoured by cybercriminals seeking unauthorised data or chatrooms that preserve anonymity and evade tracking. These bad actors often operate under fake names or accounts to avoid being caught, and platforms like Facebook and Instagram have often made headlines as portals where cybercriminals operate and commit crimes.
Meta’s Cybersecurity
Meta has some of the best cyber security in the world, but that doesn't mean it cannot be breached. The social media giant is particularly vulnerable to data breaches because various third parties are also involved; as seen in the Cambridge Analytica case, a huge chunk of user data became available to influence users during elections. To stay ahead of the curve, Meta has deployed various AI- and ML-driven crawlers and software that work on keeping the platform safe for its users while identifying which accounts may be used by bad actors and removing the criminal accounts. This is supported by keen user participation through the reporting mechanism. Meta-Cyber provides visibility of all OT activities, continuously observes the PLC and SCADA for changes and configuration, and checks authorisation and its levels. Meta also runs various penetration-testing and bug bounty programmes to reduce vulnerabilities in its systems and applications; the testers are paid handsomely depending upon the severity of the vulnerability they find.
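The kind of automated account-flagging described above can be sketched with a toy heuristic. This is purely illustrative and in no way Meta's actual system: the features, thresholds, and function names below are invented, and real systems use learned models over far richer behavioural signals. It simply shows the shape of the idea, scoring accounts on suspicious signals and queuing high scorers for review.

```python
# Hypothetical heuristic for flagging possibly fraudulent accounts.
# All features and thresholds are invented for illustration.

def risk_score(account: dict) -> float:
    score = 0.0
    if account.get("profile_photo_reused"):        # same photo across many accounts
        score += 0.4
    if account.get("messages_per_hour", 0) > 100:  # bot-like messaging rate
        score += 0.4
    if account.get("account_age_days", 365) < 7:   # freshly created account
        score += 0.2
    return score

def needs_review(account: dict, threshold: float = 0.5) -> bool:
    """Queue the account for human or ML review when the score is high."""
    return risk_score(account) >= threshold

suspicious = {"profile_photo_reused": True, "messages_per_hour": 250,
              "account_age_days": 2}
ordinary = {"messages_per_hour": 3, "account_age_days": 900}
print(needs_review(suspicious), needs_review(ordinary))  # True False
```

In practice such heuristics only triage: flagged accounts feed into the review and takedown pipelines, alongside the user reports the text mentions.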
CyberRoot Risk Investigation
Social media giant Meta has taken down over 40 accounts operated by an Indian firm, CyberRoot Risk Analysis, allegedly involved in hack-for-hire services. Alongside this, Meta took down 900 fraudulently run accounts, said to be operated from China by an unknown entity. CyberRoot Risk Analysis shared malware over the platform and impersonated its targets, i.e., lawyers, doctors, entrepreneurs, and figures from industries such as cosmetic surgery, real estate, investment firms, pharmaceuticals, and private equity, as well as environmental and anti-corruption activists. The firm would get in touch with such personalities and share malware hidden in files, often leading to data breaches and subsequently to different types of cybercrime.
Meta and its team are working tirelessly to eradicate the influence of such bad actors from their platforms, and the use of AI- and ML-based tools has increased exponentially.
Paytm CyberFraud Cover
Paytm is offering customers protection against cyber fraud through an insurance policy available for fraudulent mobile transactions up to Rs 10,000 for a premium of Rs 30. The cover ‘Paytm Payment Protect’ is provided through a group insurance policy issued by HDFC Ergo. The company said that the plan is being offered to increase the trust in digital payments, which will push up adoption. The insurance cover protects transactions made through UPI across all apps and wallets. The insurance coverage has been obtained by One97 Communications, which operates under the Paytm brand.
The exponential increase in the use of digital payments during the pandemic has made more people susceptible to cyber fraud. While UPI has all the digital safeguards in place, most UPI-related frauds are undertaken by confidence tricksters who get their victims to authorise a transaction by passing off collect requests as payments. Many fraudsters also collect payments by pretending to be merchants. These frauds resulted in losses of more than Rs 63 crore in the previous financial year. The idea of data insurance is new to India but is indeed the need of the hour: the majority of netizens are unaware of the value of their data and hence remain ignorant of data protection. Such steps will result in safer data management and protection mechanisms, thus safeguarding Indian cyberspace.
Conclusion
Cyberspace is at a critical juncture in terms of data protection and privacy; with new legislation coming out on the same, we can expect stronger policies to prevent cybercrimes and cyber-attacks. Efforts by tech giants like Meta need to gain more speed in improving the cyber safety of both the platform and the user, so that the future of these platforms remains strongly secured. The concept of data insurance needs to be shared with netizens to raise awareness about the subject. Paytm's initiative could prove monumental, encouraging more platforms and banks to commit to coverage for cybercrimes. With cybercrime cases rising, such financial coverage comes as a light of hope and security for netizens.

Introduction
The most significant change in Indian cyber law this year was the passing of the Digital Personal Data Protection Act, 2023 (DPDP Act) in Parliament. The DPDP Act is the first concrete legislation focused on protecting the digital personal data of Indian netizens in all aspects; it is analogous to what the GDPR is for Europe. The act lays down heavy compliance mandates for intermediaries and data fiduciaries, which has made things difficult for tech companies: a lot of policy, legal, and technical changes have to be made in order to implement the act to full efficiency. Recently, the big techs addressed a letter to the Minister and Minister of State of MeitY requesting an extension of the act's implementation timeline. In other news, the Union Cabinet has given the green light to the much-awaited MoC with Japan focused on establishing a long-term semiconductor supply chain partnership.
Letter to Meity
This week, the big-tech lobby, represented by a trade body named the Asia Internet Coalition (AIC), wrote to the Ministry of Electronics and Information Technology (MeitY), addressing Minister Ashwini Vaishnaw and Minister of State (MoS) Rajeev Chandrasekhar and recommending a 12-18 month extension for the implementation of the Digital Personal Data Protection Act. This request comes at a time when the government has been voicing its urgency to implement the act in order to safeguard Indian data at the earliest. The trade body represents big names including Meta, Google, Microsoft, Apple, and many more. These big techs essentially comprise the segment recognised under the DPDP Act as Significant Data Fiduciaries due to the sheer volume of data they process, host, and store. In the protective sense, the act has been designed to prevent the exploitation of Indian netizens' personal data by the big techs, who hence form an integral part of the Indian data ecosystem. The letter highlighted the following complications concerning the implementation of the act:
- Unrealistic Timelines: The AIC expressed that the current timeline for the implementation of the act seems unrealistic for the big techs to establish the technological, policy, and legal mechanisms needed to comply with Section 5 of the act, which covers the obligations of a data fiduciary and the particular notice to be shared with data principals in accordance with the act.
- Technical Requirements: Members of AIC expressed that the duration for the implementation of the act is much less in comparison to the time required by the tech companies to set up/deploy relevant technical critical infrastructure, SoPs and capacity building for the same. This will cause a major hindrance in establishing the efficiency of the act.
- Data Rights: The rights to erasure, correction, deletion, nomination, etc., are guaranteed under the DPDP Act, but the big techs are unsure about implementing these rights efficiently; fundamental changes to their platforms' technology architecture will be needed, hence their concern about early implementation of the act.
- Equivalency to GDPR: The DPDP Act is taken to be congruent with the European GDPR, but it covers additional aspects, such as cross-border data flow and compliance mandates for the right to erasure; hence, even GDPR-compliant big techs need to establish more robust mechanisms to comply with the Indian DPDP Act.
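The erasure plumbing the letter alludes to can be sketched minimally. This is a hypothetical illustration of the idea, not the DPDP Act's prescribed mechanism or any real platform's architecture: names like `UserRecord`, `PersonalDataStore`, and `erase` are invented for the example. It shows why honouring erasure is an architectural question: every store holding a principal's data must be able to locate and delete it on request.

```python
from dataclasses import dataclass, field

# Hypothetical in-memory personal-data store that can honour an
# erasure request from a data principal. Real systems must do this
# across many databases, caches, backups, and third-party processors.

@dataclass
class UserRecord:
    user_id: str
    email: str
    posts: list = field(default_factory=list)

class PersonalDataStore:
    def __init__(self):
        self._records = {}  # user_id -> UserRecord

    def add(self, record: UserRecord) -> None:
        self._records[record.user_id] = record

    def erase(self, user_id: str) -> bool:
        """Honour an erasure request; returns True if data existed."""
        return self._records.pop(user_id, None) is not None

    def holds_data_for(self, user_id: str) -> bool:
        return user_id in self._records

store = PersonalDataStore()
store.add(UserRecord("u1", "a@example.com", ["post-1"]))
erased = store.erase("u1")
print(erased, store.holds_data_for("u1"))  # True False
```

The hard part the AIC letter points at is not this single store but propagating such a deletion consistently across an entire platform, which is why the big techs argue the timeline requires re-architecting.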
Indo-Japan MoC
A Memorandum of Cooperation (MoC) on the Japan-India Semiconductor Supply Chain Partnership was signed in July 2023 between the Ministry of Electronics and Information Technology (MeitY) of India and the Ministry of Economy, Trade and Industry (METI) of Japan, and the Union Cabinet, led by Prime Minister Narendra Modi, was apprised of it. The MoC aims to expand collaboration between Japan and India in order to improve the semiconductor supply chain, semiconductors being critical to the development of industries and digital technologies. The parties agree that the MoC takes effect on the date of signature and remains in effect for five years. It provides for bilateral cooperation at both business-to-business and G2G levels on developing a robust semiconductor supply chain and making use of complementary skills, and is aimed at harnessing indigenous talent and creating higher employment avenues.
MeitY's mandate also includes promoting international cooperation within bilateral and regional frameworks in the frontier and emerging fields of information technology. MeitY has entered into Memoranda of Understanding (MoUs), Memoranda of Cooperation (MoCs), and agreements with counterpart organisations and agencies of other nations with the aim of fostering bilateral collaboration and information sharing. Additionally, MeitY aims to establish supply chain resilience, which would enable India to become a reliable partner. The strengthening of mutual collaboration between Japanese and Indian enterprises through this MoC is a further step towards mutually advantageous semiconductor-related commercial prospects and collaborations between India and Japan. The "India-Japan Digital Partnership" (IJDP), introduced during PM Modi's October 2018 visit to Japan, was created in light of the two countries' complementary and synergistic efforts; its goal is to advance both current areas of cooperation and new initiatives within the scope of S&T/ICT cooperation, with a particular emphasis on "Digital ICT Technologies".
Conclusion
As we move ahead into the digital age, it is pertinent to stay aware of and educated about the latest technological advancements, new forms of cybercrime and threats, and the legal aspects of digital rights and responsibilities. Whether it is the recommendation to extend the implementation of the DPDP Act or the Indo-Japan MoC, both of these developments affect Indian netizens and their interests. Hence, netizens need to take a keen interest in protecting the Indian cyber-ecosystem to create a safer future. In our war against the misuse of technology, our best weapons are technology and awareness, and implementing both in our daily digital lifestyles and routines is a must.
References
- https://www.eetindia.co.in/cabinet-approves-moc-on-japan-india-semiconductor-supply-chain-partnership/
- https://www.moneycontrol.com/news/business/startup/trade-body-representing-big-tech-urges-govt-to-extend-dpdp-act-implementation-by-1-5-years-11605431.html