#FactCheck - Old Video Misleadingly Claimed as Footage of Iranian President Before Crash
Executive Summary:
A video circulated on social media claiming to show Iranian President Ebrahim Raisi inside a helicopter moments before the tragic crash on May 20, 2024. Verification has established that the video is misleading: it was shot in January 2024, during Raisi's visit to the Nemroud Reservoir Dam project. To trace the origin of the video, the CyberPeace Research Team conducted a reverse image search and analysed reports from the Islamic Republic News Agency (IRNA), Mehr News, and the Iranian Students' News Agency. The Associated Press also pointed out inconsistencies between the viral clip and the segment aired by Iranian state television: the snowy mountains in the viral video do not match the green landscape and river seen in footage of the crash area. The original video is old and unrelated to the tragic crash.

Claims:
A video circulating on social media claims to show Iranian President Ebrahim Raisi inside a helicopter an hour before his fatal crash.



Fact Check:
On examining the social media posts, we found that some of the shared videos carried watermarks of the IRNA news agency and Nouk-e-Qalam News.

Taking a cue from this, we performed a keyword search for any credible source of the shared video, but found no such video on the IRNA news agency's website; the agency had not recently uploaded any video related to the viral claim.
On closely analysing the video, we observed President Ebrahim Raisi looking out over snow-covered mountains. In the available footage of the accident, however, no snow-covered mountains can be seen, only green forest.
We then checked for any social media posts uploaded by IRNA News Agency and found that they had uploaded the same video on X on January 18, 2024. The post clearly indicates the President’s aerial visit to Nemroud Dam.
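For readers who wish to replicate this kind of verification, the sketch below shows one way to pull still frames from a clip so they can be submitted to a reverse image search engine. This is a minimal illustration using the OpenCV library; the filename, frame interval, and output folder are hypothetical placeholders, not details of our actual workflow.

```python
# Minimal sketch: extract every Nth frame from a video so the stills can be
# uploaded to a reverse image search engine (e.g. Google Images or TinEye).
# "viral_clip.mp4", the interval, and the output folder are hypothetical.
import os

import cv2  # pip install opencv-python

VIDEO_PATH = "viral_clip.mp4"  # assumed local copy of the viral video
OUTPUT_DIR = "frames"          # folder that will hold the extracted stills
FRAME_INTERVAL = 30            # ~one frame per second for 30 fps footage

os.makedirs(OUTPUT_DIR, exist_ok=True)
capture = cv2.VideoCapture(VIDEO_PATH)

frame_index = 0
saved = 0
while True:
    ok, frame = capture.read()
    if not ok:  # end of video (or a read error)
        break
    if frame_index % FRAME_INTERVAL == 0:
        out_path = os.path.join(OUTPUT_DIR, f"frame_{frame_index:06d}.jpg")
        cv2.imwrite(out_path, frame)
        saved += 1
    frame_index += 1

capture.release()
print(f"Saved {saved} frames to '{OUTPUT_DIR}/' for reverse image search")
```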

The viral video is old and was not filmed before the tragic helicopter crash involving President Raisi.
Conclusion:
The viral clip is not related to the fatal crash of Iranian President Ebrahim Raisi's helicopter and is actually from a January 2024 visit to the Nemroud Reservoir Dam project. The claim that the video shows visuals before the crash is false and misleading.
- Claim: A viral video of Iranian President Raisi was shot before his fatal helicopter crash.
- Claimed on: X (Formerly known as Twitter), YouTube, Instagram
- Fact Check: Fake & Misleading
Related Blogs
Introduction
Union Minister of State for Electronics and IT, Rajeev Chandrasekhar, announced that the rules for the Digital Personal Data Protection (DPDP) Act are expected to be released by the end of January. The rules will be subject to a month-long consultation process, but their notification may be delayed until after the general elections in April-May 2024. Chandrasekhar also mentioned that changes to the current IT rules would be made in the next few days to address the problem of deepfakes on social media platforms.
The government has observed a varied response from platforms to its advisory measures on deepfakes, leading to the decision to enforce more specific rules. During the Digital India Dialogues, platforms were made aware of existing provisions and the consequences of non-compliance. An advisory has been issued, and amended IT rules will be released if compliance remains unsatisfactory.
When Sachin Tendulkar reported a deepfake video in which he appeared to endorse a gaming application, it raised concerns about the exploitation of deepfakes. Tendulkar urged the reporting of such incidents and underlined the need for social media companies to be watchful, receptive to grievances, and quick to address disinformation and deepfakes.
The DPDP Act, 2023
The Digital Personal Data Protection (DPDP) Act, 2023 is a new framework that aims to protect individuals' digital personal data and ensure compliance by the platforms that collect it. The Act mandates consent-based data collection: it requires express consent for the acquisition, administration, and processing of personal data, and seeks to guarantee that organisations process data only for the stated purpose for which user consent was granted. This proactive approach is an important step toward protecting individual privacy, aligns with global data protection trends, and demonstrates India's commitment to safeguarding user information in the digital era.
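To make the consent-first principle concrete, here is a minimal sketch of how a data fiduciary's systems might record and enforce purpose-bound consent. The class, purposes, and method names are hypothetical illustrations, not drawn from the Act's text or any official implementation.

```python
# Hypothetical sketch of purpose-bound consent in the spirit of the DPDP Act:
# personal data may be processed only for purposes the user has consented to.
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    user_id: str
    granted_purposes: set = field(default_factory=set)

    def grant(self, purpose):
        self.granted_purposes.add(purpose)

    def withdraw(self, purpose):
        self.granted_purposes.discard(purpose)

    def allows(self, purpose):
        return purpose in self.granted_purposes


def process_data(record, purpose):
    # Refuse any processing that falls outside the consented purposes.
    if not record.allows(purpose):
        raise PermissionError(f"No consent recorded for purpose: {purpose}")
    print(f"Processing data of {record.user_id} for '{purpose}'")


consent = ConsentRecord(user_id="user-42")
consent.grant("order_fulfilment")
process_data(consent, "order_fulfilment")        # allowed
# process_data(consent, "targeted_advertising")  # would raise PermissionError
```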
Amendments to IT rules
Minister Chandrasekhar declared that existing IT rules would be amended to combat the rising problem of deepfakes and disinformation on social media platforms. These amendments, which will be published over the next few days, are primarily aimed at countering the spread of false information and deepfakes. The decision follows a range of responses from platforms to the deepfake recommendations made during the Digital India Dialogues.
The government's stance: blocking non-compliant platforms
Minister Chandrasekhar reaffirmed the government's commitment to enforcing the updated guidelines. If platforms fail to comply, the government may consider blocking them. This firm position demonstrates the government's commitment to safeguarding Indian residents from the potential harm caused by false information.
Empowering Users with Education and Awareness
In addition to the upcoming DPDP Act Rules/recommendations and IT regulation changes, the government recognises the critical role that user education plays in establishing a robust digital environment. Minister Rajeev Chandrasekhar emphasised the necessity for comprehensive awareness programs to educate individuals about their digital rights and the need to protect personal information.
These instructional programs seek to equip users to make informed decisions about consenting to the use of their data. By fostering a culture of digital literacy, the government hopes to ensure that citizens have the knowledge to protect themselves in an increasingly connected digital environment.
Balancing Innovation with User Protection
As India continues to explore its digital frontier, the intersection of technological innovation and user safety remains a delicate balance. The upcoming rules under the DPDP Act and the modifications to existing IT rules represent the government's proactive efforts to build a strong framework that supports innovation while protecting user privacy and combating disinformation. Recognising the changing nature of the digital world, the government is actively engaged in ongoing discussions with stakeholders such as industry professionals, academia, and civil society. These conversations promote a collaborative approach to policymaking, ensuring that legislation can adapt to evolving cyber risks and technological breakthroughs. Such inclusive talks demonstrate the government's dedication to transparent and participatory governance, in which many viewpoints contribute to the creation of effective and nuanced policy. These developments mark an important milestone in India's digital journey, as the country prepares to set a good example by creating responsible and safe digital ecosystems for its residents.
References:
- https://economictimes.indiatimes.com/tech/technology/govt-may-release-personal-data-bill-rules-in-a-fortnight/articleshow/106162669.cms?from=mdr
- https://www.business-standard.com/india-news/dpdp-rules-expected-to-be-released-by-end-of-the-month-mos-chandrasekhar-124011600679_1.html
Introduction
The automotive industry is expanding fast, with vehicles becoming sophisticated, interconnected devices equipped with cutting-edge digital technology. This integration improves convenience, safety, and efficiency, while also exposing vehicles to a new set of cyber risks. Electric vehicles (EVs) are equipped with sophisticated computer systems that manage functions such as acceleration, braking, and steering. If these systems are compromised, the result could be hazardous situations, including remote control of the vehicle or unauthorized access to sensitive data. As connected-car stakeholders multiply, the sector exposes new vulnerabilities for hackers to exploit.
Why Automotive Cybersecurity is required
Cybersecurity threats to vehicles stem from hardware and software vulnerabilities and overall system complexity. Additional concerns include broad privacy clauses that justify collecting and transferring data to “third-party vendors” without explicitly disclosing who those third parties are or how they process personal data. For example, infotainment platform data may reveal a user’s music preferences, which the music industry could use to improve marketing strategies. Similarly, it is lesser known that behavioural tracking data, such as driving patterns, is also logged by the original equipment manufacturer.
Hacking is not limited to attackers gaining control of an electric vehicle; it also includes malicious actors compromising charging stations. In Russia, EV charging stations near Moscow were hacked to display pro-Ukraine and anti-Putin messages such as “Glory to Ukraine” and “Death to the enemy” against the backdrop of the Russia-Ukraine war. Other examples include instances from the Isle of Wight, where hackers took control of EV charger displays to show inappropriate content and high-voltage fault codes, preventing owners with empty batteries from charging their vehicles.
UN Economic Commission for Europe releases Regulation 155 for Automobiles
UN Economic Commission for Europe Regulation 155 lays down uniform provisions concerning the approval of vehicles with regard to cybersecurity and cybersecurity management systems (CSMS). It was originally developed under the Commission's World Forum for Harmonization of Vehicle Regulations (WP.29), which aims to harmonise regulations for vehicles and vehicle equipment. Regulation 155 has a two-pronged objective: first, to ensure cybersecurity at the organisational level, and second, to ensure adequate design of the vehicle architecture. A critical aspect in this context is the implementation of a certified CSMS by all companies that bring vehicles to market. Notably, this requirement alters the perspective of manufacturers: their responsibilities no longer conclude at the start of production (SOP). Instead, manufacturers must continuously monitor and assess a vehicle's safety systems throughout its entire life cycle, including making any necessary improvements.
This Regulation reflects the highly dynamic nature of software development and assurance. Moreover, the management system is designed to ensure compliance with safety requirements across the entire supply chain. This is a significant challenge, considering that suppliers currently account for over 70 per cent of the software volume.
The Regulation, which is binding on 64 member countries, came into force in 2021. UNECE countries were required to comply by July 2022 for all new vehicle types, and from July 2024 the Regulation was set to apply to all newly produced vehicles. The Regulation is expected to become a de facto global standard, since vehicles authorised in one country may not be brought into the global market or the market of any UNECE member country on the basis of a different authorisation. In such a scenario, OEMs from non-member countries may be required to give a “self-declaration” of the equipment’s conformity with cybersecurity standards.
Conclusion
To compete and ensure trust, global carmakers must deliver a robust cybersecurity framework that meets evolving regulations. The UNECE Regulation is driving this direction by requiring automotive original equipment manufacturers (OEMs) to integrate vehicle cybersecurity throughout the entire value chain. This ‘security by design’ approach aims to build connected cars that users can trust. Automotive cybersecurity involves the measures and technologies that protect connected vehicles and their onboard systems from growing digital threats.
References:
- “Electric vehicle cyber security risks and best practices (2023)”, Cyber Talk, 1 August 2023. https://www.cybertalk.org/2023/08/01/electric-vehicle-cyber-security-risks-and-best-practices-2023/#:~:text=EVs%20are%20equipped%20with%20complex,unauthorized%20access%20to%20sensitive%20data.
- Gordon, Aaron, “Russian Electric Vehicle Chargers Hacked, Tell Users “PUTIN IS A D*******D”, Vice, 28 February 2022. https://www.vice.com/en/article/russian-electric-vehicle-chargers-hacked-tell-users-putin-is-a-dickhead/
- “Isle of Wight: Council’s electric vehicle chargers hacked to show porn site”, BBC, 6 April 2022. https://www.bbc.com/news/uk-england-hampshire-61006816
- Sandler, Manuel, “UN Regulation No. 155: What You Need to Know about UN R155”, Cyres Consulting, 1 June 2022. https://www.cyres-consulting.com/un-regulation-no-155-requirements-what-you-need-to-know/?srsltid=AfmBOopV1pH1mg6M2Nn439N1-EyiU-gPwH2L4vq5tmP0Y2vUpQR-yfP7#A_short_overview_Background_knowledge_on_UN_Regulation_No_155
- https://unece.org/wp29-introduction

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate strand. This promising technological advancement has the potential either to enrich the nest of our society or to destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and peril. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the EU's AI Act comes into effect. Meanwhile, the market size of AI in India is projected to reach US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the possible threats related to manipulative chatbots in the context of the general election in 2024.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users what they are for and that they are automated. Such transparency ensures that people know they are interacting with automated systems.
Second, obtaining user consent is crucial. Before collecting user data for any purpose, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy opt-in and opt-out mechanisms gives them control over their data; a minimal sketch of these first two safeguards follows below.
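As a rough illustration only, the following sketch shows a chatbot wrapper that discloses its automated nature up front and refuses to log conversation data until the user explicitly opts in. Every name in it is a hypothetical placeholder; it is not modelled on any real product's API.

```python
# Hypothetical sketch: a chatbot front end that (1) discloses that it is
# automated and (2) stores conversation data only after explicit opt-in.

DISCLOSURE = (
    "You are chatting with an automated assistant. "
    "Replies are generated by software, not by a human."
)


class ConsentAwareChatbot:
    def __init__(self):
        self.data_collection_consented = False
        self.log = []  # conversation history, kept only with consent

    def start_session(self):
        # Transparency first: every session opens with the disclosure.
        return (DISCLOSURE +
                " May we store this conversation to improve the service? (yes/no)")

    def record_consent(self, answer):
        # Opt-in only: anything other than an explicit "yes" means no logging.
        self.data_collection_consented = answer.strip().lower() == "yes"

    def handle_message(self, message):
        if self.data_collection_consented:
            self.log.append(message)  # stored only because the user opted in
        return f"(automated reply to: {message!r})"


bot = ConsentAwareChatbot()
print(bot.start_session())
bot.record_consent("no")
print(bot.handle_message("Who should I vote for?"))
print("Stored messages:", bot.log)  # empty list: the user did not opt in
```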
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
Important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collecting. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political reasons.
At all costs, one should steer clear of fake identities. Impersonating people or political figures is not something chatbots should do because it can result in manipulation and false information.
It is essential to be impartial. Bots shouldn't advocate for or take part in political activities that give preference to one political party over another. In encounters, impartiality and equity are crucial.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India does not have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategy and is working towards a policy framework for AI. NITI Aayog has presented seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References:
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india