#FactCheck - Old Video Misleadingly Claimed as Footage of Iranian President Before Crash
Executive Summary:
A video that circulated on social media claiming to show Iranian President Ebrahim Raisi inside a helicopter moments before the tragic crash on May 20, 2024, has been proven to be misleading. Verification leaves no doubt that the video was shot in January 2024, during Raisi’s visit to the Nemroud Reservoir Dam project. To trace the origin of the video, the CyberPeace Research Team conducted a reverse image search and analyzed information obtained from the Islamic Republic News Agency, Mehran News, and the Iranian Students’ News Agency. Further, the Associated Press pointed out inconsistencies between the viral video and the segment shown by Iranian state television. The original video is old and unrelated to the tragic crash, as evidenced by the incongruence between the snowy background in the clip and the green landscape with a river at the crash site.

Claims:
A video circulating on social media claims to show Iranian President Ebrahim Raisi inside a helicopter an hour before his fatal crash.



Fact Check:
Upon reviewing the posts, we found watermarks of the IRNA News Agency and Nouk-e-Qalam News in some of them.

Taking a cue from this, we performed a keyword search to find any credible source of the shared video, but found no such video on the IRNA News Agency website; the agency has not recently uploaded any video related to the viral claim.
On closely analyzing the video, President Ebrahim Raisi can be seen looking out over snow-covered mountains, whereas the available footage of the crash site shows no snow-covered mountains, only green forest.
We then checked for any social media posts uploaded by IRNA News Agency and found that they had uploaded the same video on X on January 18, 2024. The post clearly indicates the President’s aerial visit to Nemroud Dam.

The viral video is old and does not show the moments before the tragic helicopter crash involving President Raisi.
Conclusion:
The viral clip is not related to the fatal crash of Iranian President Ebrahim Raisi's helicopter and is actually from a January 2024 visit to the Nemroud Reservoir Dam project. The claim that the video shows visuals before the crash is false and misleading.
- Claim: Viral Video of Iranian President Raisi was shot before fatal chopper crash.
- Claimed on: X (formerly known as Twitter), YouTube, Instagram
- Fact Check: Fake & Misleading

Introduction
Deepfake technology, a portmanteau of "deep learning" and "fake," uses highly developed artificial intelligence, specifically generative adversarial networks (GANs), to produce remarkably lifelike computer-generated content, including audio and video recordings. Because it can produce credible false information, there are concerns about its misuse, including identity theft and the spread of fake information. Cybercriminals leverage AI tools and technologies for malicious activities and various cyber frauds; through the misuse of advanced technologies such as AI, deepfakes, and voice cloning, new cyber threats have emerged.
India Among the Top Destinations for Deepfake Attacks
According to the 2023 Identity Fraud Report by Sumsub, a well-known digital identity verification company headquartered in the UK, India, Bangladesh, and Pakistan have become significant participants in the Asia-Pacific identity fraud landscape, with India’s fraud rate growing by 2.99% from 2022 to 2023. They are among the top ten nations most affected by the misuse of deepfake technology. The report notes that deepfake technology is being used in a significant number of cybercrimes, a trend expected to continue in the coming year. This highlights the need for increased cybersecurity awareness and safeguards, as identity fraud poses a growing concern in the region.
How Deepfakes Work
Deepfakes are a fascinating and worrisome phenomenon of the modern digital landscape. These realistic-looking but wholly artificial videos have become quite popular in recent months and are now ingrained in the very fabric of our digital civilisation. The attraction is irresistible, and the consequences are enormous.
Deep Learning Algorithms
Deepfakes are built by analysing large datasets, frequently pictures or videos of a target person, with deep learning techniques, especially Generative Adversarial Networks. By learning from and mimicking gestures, speech patterns, and facial expressions, these algorithms extract the defining characteristics of the target. Using sophisticated approaches, generative models then create material that blends seamlessly into the target context. Misuse of this technology, including the dissemination of false information, is a worry, and sophisticated detection techniques are becoming ever more necessary to separate real content from manipulated content as deepfake capabilities improve.
Generative Adversarial Networks
Deepfake technology is based on GANs, which use a dual-network design. Made up of a generator and a discriminator, the two networks participate in an ongoing cycle of competition: the generator aims to create fake material, such as realistic voice patterns or facial expressions, while the discriminator assesses how authentic the generated material is. This continuous cycle of creation and evaluation steadily improves the deepfake's realism over time, as the discriminator becomes more perceptive and the generator adapts to produce ever more convincing content.
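To make the generator-discriminator cycle concrete, the following is a minimal sketch of a GAN training loop in PyTorch. It is an illustration only: the toy one-dimensional "data", network sizes, and hyperparameters are invented placeholders, not a real deepfake pipeline.

```python
# Toy GAN training loop: a generator learns to produce samples the
# discriminator cannot distinguish from "real" data.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator: maps random noise to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) + 3.0  # stand-in for a real dataset
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator step: learn to tell real samples from generated ones.
    d_loss = loss_fn(D(real), torch.ones(batch, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(batch, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: learn to fool the just-updated discriminator.
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

The key design point is the alternation: the discriminator is updated on detached fake samples so only its own weights move, and the generator is then updated against the freshly improved discriminator, which is what drives both networks to improve over time.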
Effect on Community
The extensive use of deepfake technology has serious ramifications for several industries. As the technology develops, immediate action is required to manage its effects responsibly and to promote the ethical use of such technologies, including through strict laws and technological safeguards. Deepfakes can convincingly mimic prominent politicians' statements or videos, a serious issue because it can spread instability and make it difficult for the public to discern the true state of politics. In the entertainment industry, deepfake technology could generate entirely new characters or bring stars back to life for posthumous roles. As it gets harder and harder to tell fake content from authentic content, it becomes simpler for malicious actors to trick people and businesses.
Ongoing Deepfake Attacks in India
Deepfake videos continue to target popular celebrities, with Priyanka Chopra the most recent victim of this unsettling trend. Priyanka's deepfake adopts a different strategy from other examples involving actresses such as Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance the same but modifies her voice, replacing real interview quotes with made-up commercial phrases. The deceptive video shows Priyanka promoting a product and talking about her yearly salary, highlighting the worrying development of deepfake technology and its possible effects on prominent personalities.
Actions Considered by Authorities
A PIL was filed in the Delhi High Court requesting that access to websites that produce deepfakes be blocked. The petitioner's attorney argued in court that the government should at the very least establish some guidelines to hold individuals accountable for the misuse of deepfake and AI technology. He also proposed that websites be required to label information produced through AI as such, and that they be prevented from producing unlawful content. A division bench highlighted how complicated the problem is and suggested that the government (Centre) arrive at a balanced solution without infringing on the right to freedom of speech and expression on the internet.
Information Technology Minister Ashwini Vaishnaw stated that new laws and guidelines would be implemented by the government to curb the dissemination of deepfake content. He presided over a meeting with social media companies to discuss the problem of deepfakes. "We will begin drafting regulation immediately, and soon we are going to have a fresh set of regulations for deepfakes. This might come in the way of amending the current framework or ushering in new rules, or a new law," he stated.
Prevention and Detection Techniques
To effectively combat the growing threat posed by the misuse of deepfake technology, people and institutions should place a high priority on developing critical thinking abilities, carefully examining visual and auditory cues for discrepancies, making use of tools like reverse image searches, keeping up with the latest developments in deepfake trends, and rigorously fact-checking against reputable media sources. Important actions to improve resilience against deepfake threats include putting strong security policies in place, integrating cutting-edge deepfake detection technologies, supporting the development of ethical AI, and encouraging candid communication and cooperation. By combining these tactics and adapting to the constantly changing terrain, we can collectively manage the problems presented by deepfake technology effectively and mindfully.
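As one concrete illustration of the reverse-image-search idea mentioned above, the sketch below compares a suspect video frame against a reference frame using perceptual hashing with the Pillow and imagehash libraries. The file names and the distance threshold are assumptions for illustration; real investigations combine many such signals.

```python
# Compare a frame from a viral clip against archival footage with a
# perceptual hash. File names below are placeholders.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("viral_frame.jpg"))      # frame from the viral clip
reference = imagehash.phash(Image.open("archive_frame.jpg"))  # frame from known old footage

# Perceptual hashes of visually similar images differ in only a few bits,
# so a small Hamming distance suggests recycled footage.
distance = suspect - reference
if distance <= 8:  # threshold chosen for illustration, not a standard
    print(f"Likely the same footage (hash distance {distance})")
else:
    print(f"Probably different footage (hash distance {distance})")
```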
Conclusion
Advanced artificial-intelligence-powered deepfake technology produces extraordinarily lifelike computer-generated content, raising both creative and moral questions. Misuse of deepfakes presents major difficulties such as identity theft and the propagation of misleading information, as demonstrated by examples in India like the latest deepfake video involving Priyanka Chopra. To counter this danger, it is important to develop critical thinking abilities, use detection strategies such as analysing audio quality and facial expressions, and keep up with current trends. A thorough strategy incorporating fact-checking, preventative tactics, and awareness-raising is necessary to protect against the negative effects of deepfake technology, alongside strong security policies, cutting-edge detection tools, ethical AI development, and open communication and cooperation. Together, these measures can help create a truly cyber-safe environment for netizens.
References:
- https://yourstory.com/2023/11/unveiling-deepfake-technology-impact
- https://www.indiatoday.in/movies/celebrities/story/deepfake-alert-priyanka-chopra-falls-prey-after-rashmika-mandanna-katrina-kaif-and-alia-bhatt-2472293-2023-12-05
- https://www.csoonline.com/article/1251094/deepfakes-emerge-as-a-top-security-threat-ahead-of-the-2024-us-election.html
- https://timesofindia.indiatimes.com/city/delhi/hc-unwilling-to-step-in-to-curb-deepfakes-delhi-high-court/articleshow/105739942.cms
- https://www.indiatoday.in/india/story/india-among-top-targets-of-deepfake-identity-fraud-2472241-2023-12-05
- https://sumsub.com/fraud-report-2023/
Starting on 16th February 2025, Google changed its advertisement platform program policy to permit advertisers to employ device fingerprinting techniques for user tracking. Organizations that use its advertising services are now permitted to use fingerprinting techniques to track their users' data. Originally announced on 18th December 2024, this rule change has sparked yet another debate regarding privacy and profits.
The Issue
Fingerprinting is a technique that allows for the collection of information about a user’s device and browser details, ultimately enabling the creation of a profile of the user. Not only used for or limited to targeting advertisements, data procured in such a manner can be used by private entities and even government organizations to identify individuals who access their services. If information on customization options, such as language settings and a user’s screen size, is collected, it becomes easier to identify an individual when combined with data points like browser type, time zone, battery status, and even IP address.
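To illustrate how such mundane signals can add up to an identifier, here is a minimal sketch in Python of combining a handful of signals into a stable hash. The signal values are invented stand-ins for what a real site would read from request headers and client-side APIs; this is not Google's mechanism, only the general principle.

```python
# Combine a few illustrative device/browser signals into one identifier.
import hashlib

signals = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",  # placeholder value
    "language": "en-IN",
    "screen": "1920x1080",
    "timezone": "Asia/Kolkata",
    "platform": "Win32",
}

# Individually these values are unremarkable, but hashed together in a fixed
# order they can single out a device, and clearing cookies changes none of them.
fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(signals.items())).encode()
).hexdigest()

print(fingerprint[:16])  # a compact, stable device identifier
```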
What makes this technique contentious at the moment is the lack of awareness regarding the information being collected from the user and the inability to opt out once permissions are granted.
This is unlike Google’s standard system of data collection through permission requests, such as accepting website cookies (small text files sent to the browser when a user visits a particular website). While contextual and first-party cookies limit data collection to enhancing the user experience, third-party cookies allow advertisements to follow users as they browse across different platforms. Due to this functionality, companies can engage in targeted advertising.
This issue has been addressed in laws like the General Data Protection Regulation (GDPR) of the European Union (EU) and the Digital Personal Data Protection (DPDP) Act, 2023 (India), which mandate strict rules and regulations regarding advertising, data collection, and consent, among other things. One of the major requirements in both laws is obtaining clear, unambiguous consent. This also includes the option to opt out of previously granted permissions for cookies.
However, in the case of fingerprinting, the mechanism of data collection relies on signals that users cannot easily erase. While clearing all data from the browser or refusing cookies might seem like appropriate steps to take, they do not prevent tracking through fingerprinting, as users can still be identified using system details that a website has already collected. This applies to all IoT products as well. People usually do not frequently change the devices they use, and once a system is identified, there are no available options to stop tracking, as fingerprinting relies on device characteristics rather than data-collecting text files that could otherwise be blocked.
Google’s Changing Stance
According to Statista, Google’s revenue is largely made up of the advertisement services it provides (amounting to 264.59 billion U.S. dollars in 2024). Any change in its advertisement program policies draws significant attention due to its economic impact.
In 2019, Google claimed in a blog post that fingerprinting was a technique that “subverts user choice and is wrong.” It is in this context that the recent policy shift comes as a surprise. In response, the ICO (Information Commissioner’s Office), the UK’s data privacy watchdog, has stated that this change is irresponsible. Google, however, is eager to have further discussions with the ICO regarding the policy change.
Conclusion
The debate regarding privacy in targeted advertising has been ongoing for quite some time. Concerns about digital data collection and storage have led to new and evolving laws that mandate strict fines for non-compliance.
Google’s shift in policy raises pressing concerns about user privacy and transparency. Fingerprinting, unlike cookies, offers no opt-out mechanism, leaving users vulnerable to continuous tracking without consent. This move contradicts Google’s previous stance and challenges global regulations like the GDPR and DPDP Act, which emphasize clear user consent.
With regulators like the ICO expressing disapproval, the debate between corporate profits and individual privacy intensifies. As digital footprints become harder to erase, users, lawmakers, and watchdogs must scrutinize such changes to ensure that innovation does not come at the cost of fundamental privacy rights.
References
- https://www.techradar.com/pro/security/profit-over-privacy-google-gives-advertisers-more-personal-info-in-major-fingerprinting-u-turn
- https://www.ccn.com/news/technology/googles-new-fingerprinting-policy-sparks-privacy-backlash-as-ads-become-harder-to-avoid/
- https://www.emarketer.com/content/google-pivot-digital-fingerprinting-enable-better-cross-device-measurement
- https://www.lewissilkin.com/insights/2025/01/16/google-adopts-new-stance-on-device-fingerprinting-102ju7b
- https://www.lewissilkin.com/insights/2025/01/16/ico-consults-on-storage-and-access-cookies-guidance-102ju62
- https://www.bbc.com/news/articles/cm21g0052dno
- https://www.techradar.com/features/browser-fingerprinting-explained
- https://fingerprint.com/blog/canvas-fingerprinting/
- https://www.statista.com/statistics/266206/googles-annual-global-revenue/#:~:text=In%20the%20most%20recently%20reported,billion%20U.S.%20dollars%20in%202024
Introduction
Misinformation has the potential to impact people, communities and institutions alike, and the ramifications can be far-ranging. From influencing voter behaviours and consumer choices to shaping personal beliefs and community dynamics, the information we consume in our daily lives affects every aspect of our existence. And so, when this very information is flawed or incomplete, whether accidentally or deliberately so, it has the potential to confuse and mislead people.
‘Debunking’ is the process of exposing false information and countering inaccuracies and manipulation by presenting actual facts. The goal is to minimise the harmful effects of misinformation by informing and educating people. Debunking initiatives work to expose false information, counter conspiracy theories, catalogue evidence of false information, clearly distinguish sources of misinformation from sources of accurate information, and assert the truth. Debunking treats building capacity and educating people as both a strategy and a goal.
Debunking is most effective when it comes from trusted sources, provides detailed explanations, and offers verifiable guidance and advice. It is reactive in nature: it focuses on specific instances of misinformation, is closely tied to fact-checking, and aims to mitigate the impact of misinformation that has already spread. As such, the approach is to contain and correct, post-occurrence. The most common method of debunking is collaboration between fact-checking groups and social media companies: when journalists or other fact-checkers identify false or misleading content, social media sites flag or label it as such so that audiences are alerted. Debunking is thus an essential method for reducing the impact and incidence of misinformation by providing real facts and increasing the overall accuracy of content in the digital information ecosystem.
Role of Debunking in Countering Misinformation
Debunking fights false or misleading information by correcting false claims, myths, and misinformation with evidence-based rebuttals, and by disseminating the debunked evidence to the public. By presenting evidence that contradicts misleading claims, debunking encourages individuals to develop fact-checking habits and to proactively check for authenticated sources. It plays a vital role in boosting trust in credible sources by offering evidence-based corrections and enhancing the credibility of online information. By exposing falsehoods and endorsing qualities like information completeness and evidence-backed logic, debunking efforts help create a culture of well-informed, constructive public conversation and analytical exchange. Effectively dispelling myths and misinformation can help build communities and societies that are more educated, resilient, and goal-oriented.
Debunking as a Tailored Strategy to Counter Misinformation
Understanding the information environment and source trustworthiness is critical for developing effective debunking techniques. Successful debunking efforts use clear messages, appealing formats, and targeted distribution to reach a wide range of netizens. Effective debunking involves analysing successful past efforts, fact-checking, relying on reputable sources for corrections, and using scientific communication. Fact-checking plays a critical role in ensuring information accuracy and holding people accountable for misleading claims, and collaborative efforts and transparent techniques can boost the credibility, legitimacy, and effectiveness of both fact-checking activities and debunking initiatives at scale. Scientific communication is also critical for debunking myths about different topics and concerns by providing evidence-based information; clear and understandable framing of scientific knowledge is essential for engaging broad audiences and effectively refuting misinformation.
CyberPeace Policy Recommendations
- It is recommended that debunking initiatives must highlight core facts, emphasising what is true over what is wrong and establishing a clear contrast between the two. This is crucial as people are more likely to believe familiar information even if they learn later that it is incorrect. Debunking must provide a comprehensive explanation, filling the ‘information gap’ created by the myth. This can be done by explaining things as clearly as possible, as people may stop paying attention if they are faced with an overload of competing information. The use of visuals to illustrate core facts is an effective way to help people understand the issue and clearly tell the difference between information and misinformation.
- Individuals can play a role in debunking misinformation on social media by highlighting inconsistencies, recommending related articles with corrections or sharing trusted sources and debunking reports in their communities.
- Governments and regulatory agencies can improve information openness by demanding explicit source labelling and technical measures to be implemented on platforms. This can increase confidence in information sources and equip people to practice discernment when they consume content online. Governments should also support and encourage independent fact-checking organisations that are working to disprove misinformation. Digital literacy programmes may teach the public how to critically assess information online and spot any misinformation.
- Tech businesses may enhance algorithms for detecting and flagging misinformation, thereby reducing the propagation of misleading information (a minimal sketch of such a flagging pipeline follows this list). Offering options for people to report suspicious or doubtful information can empower them to play an active role in identifying and rectifying inaccurate information online, fostering a more responsible information environment on these platforms.
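As a minimal sketch of the flagging pipeline suggested in the last point, the toy text classifier below uses scikit-learn to score posts for review. The training examples, labels, and threshold are invented placeholders; a real system would need large labelled datasets, multilingual models, and human review rather than a bag-of-words model alone.

```python
# Toy misinformation-flagging pipeline: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Officials confirm the dam visit took place in January.",     # plausible
    "SHOCKING leaked video they don't want you to see!!!",        # suspicious
    "The agency published the full footage on its website.",      # plausible
    "Share before this gets deleted - 100% proof of a cover-up!", # suspicious
]
labels = [0, 1, 0, 1]  # 0 = not flagged, 1 = flagged for review

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Content scoring above the threshold is routed to human fact-checkers,
# not removed automatically.
score = model.predict_proba(["Forwarded: secret video proves everything!"])[0][1]
print(f"flag for review: {score > 0.5} (score={score:.2f})")
```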
Conclusion
Debunking is an effective strategy to counter widespread misinformation through a combination of fact-checking, scientific evidence, factual explanations, and verified corrections. It can play an important role in fostering a culture where people look for authenticity in the information they consume and place a high value on trusted, verified sources. A collaborative strategy can increase the legitimacy and reach of debunking efforts, making them more effective at reaching larger audiences and easier to understand for a wide range of demographics. In a complex and ever-evolving digital ecosystem, it is important to build information resilience both at the macro level, for the ecosystem as a whole, and at the micro level, with the individual consumer. Only then can we ensure a culture of mindful, responsible content creation and consumption.
References