#FactCheck: Misleading Clip of Nepal Crash Shared as Air India’s AI-171 Ahmedabad Accident
Executive Summary:
A viral video circulating on social media, claimed to show the final moments inside the cabin of an Air India flight just before it crashed near Ahmedabad on June 12, 2025, is misleading. On further research, the footage was found to originate from the Yeti Airlines Flight 691 crash that occurred in Pokhara, Nepal, on January 15, 2023. The full details follow in the report below.

Claim:
Viral videos circulating on social media claim to show the final moments inside Air India flight AI‑171 before it crashed near Ahmedabad on June 12, 2025. The footage appears to have been recorded by a passenger during the flight and is being shared as real-time visuals from the recent tragedy. Many users have believed the clip to be genuine and linked it directly to the Air India incident.


Fact Check:
To verify the viral video allegedly depicting the final moments of Air India's AI-171, which crashed near Ahmedabad on 12 June 2025, we conducted a comprehensive reverse image search and keyframe analysis. This revealed that the footage dates back to January 2023 and shows Yeti Airlines Flight 691, which crashed in Pokhara, Nepal. The visuals in the viral video, including cabin and passenger details, match the original livestream recorded by a passenger aboard the Nepal flight, confirming that the video is being reused out of context.
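For readers who want to reproduce this kind of verification, the sketch below illustrates the keyframe step: sampling frames from a clip with OpenCV so that each frame can be uploaded to a reverse image search engine. This is a minimal sketch of the general technique, not the exact tooling used in this investigation; the file name and sampling interval are illustrative assumptions.

```python
# A minimal keyframe-extraction sketch (assumes OpenCV: pip install opencv-python).
# The input file name and 2-second sampling interval are illustrative only.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> int:
    """Save one frame every `every_n_seconds` for manual reverse image search."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = int(fps * every_n_seconds)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    count = extract_keyframes("viral_clip.mp4")  # hypothetical file name
    print(f"Saved {count} keyframes for reverse image search")
```

Each saved frame can then be run through a reverse image search engine; in this case, matches trace back to the January 2023 livestream from the Pokhara crash.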

Moreover, reputable news organisations, including the New York Post and NDTV, have published reports confirming that the video originated from the 2023 Nepal plane crash and has no relation to the recent Air India incident. The Press Information Bureau (PIB) also released a clarification dismissing the video as disinformation. Earlier reliable reports, visual evidence, and reverse search verification all confirm that the viral video is falsely attributed to the AI-171 tragedy.


Conclusion:
The viral footage does not show the AI-171 crash near Ahmedabad on 12 June 2025. It is an unrelated, previously recorded livestream from the January 2023 Yeti Airlines crash in Pokhara, Nepal, falsely repurposed as breaking news. It is essential to rely on verified and credible news agencies, and to refer to official investigation reports when discussing such sensitive events.
- Claim: A dramatic clip of passengers inside a crashing plane is being falsely linked to the recent Air India tragedy in Ahmedabad.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
What is a Deepfake?
Deepfakes are a fascinating but unsettling phenomenon that is now prominent in the digital age. These incredibly convincing videos have drawn attention and blended seamlessly into our high-tech surroundings. Their lifelike but completely fabricated quality has become an inescapable part of the digital environment we traverse. While these creations have an undeniably captivating charm, they carry serious ramifications. Join us as we examine the deep effects that the misuse of deepfakes can have on our globalised digital culture. After several actors, business tycoon Ratan Tata has become the latest victim of a deepfake: he called out a post that used a fake interview of him in a video recommending investments.
Case Study
The nuisance of deepfakes spares no one; from actors and politicians to entrepreneurs, everyone is getting caught in the trap. Soon after actresses Rashmika Mandanna, Katrina Kaif, Kajol and others fell prey to the rising wave of deepfakes, a new case emerged that targeted Mr. Ratan Tata. The business tycoon took to social media, sharing a screenshot of an Instagram post containing a fake interview that urged people to invest money in a project. In the video posted by the Instagram user, the industrialist appeared to offer his followers a once-in-a-million opportunity to "exaggerate investments risk-free."
In the said video, Ratan Tata appeared to advise the Indian public about an opportunity to grow their money with no risk and a 100% guarantee. The caption of the video clip stated, "Go to the channel right now."
Tata annotated both the video and the screenshot of the caption with the word "FAKE."
Ongoing Deepfake Assaults in India
Deepfake videos continue to target celebrities, and Priyanka Chopra is a recent victim of this unsettling trend. Her deepfake adopts a different strategy from earlier examples involving actresses like Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance unchanged but modifies her voice, replacing real interview quotes with fabricated commercial phrases. The deceptive video shows Priyanka promoting a product and discussing her yearly salary, highlighting the worrying evolution of deepfake technology and its possible effects on prominent personalities.
Prevention and Detection
To effectively combat the growing threat posed by deepfake technology, people and institutions should prioritise developing critical thinking skills, carefully examining visual and auditory cues for discrepancies, using tools like reverse image searches, keeping up with the latest deepfake trends, and rigorously fact-checking claims against reputable media sources. Important steps to improve resistance against deepfake threats include putting strong security policies in place, integrating cutting-edge deepfake detection technologies, supporting the development of ethical AI, and encouraging open communication and cooperation. By combining these tactics and adapting to a constantly changing landscape, we can collectively and mindfully manage the problems deepfake technology presents.
Conclusion
The recent instance involving Ratan Tata illustrates how the emergence of deepfake technology poses an imminent danger to our digital society. The fake video, posted to Instagram, showed the business tycoon giving financial advice and luring followers with low-risk investment options. Tata quickly called out the footage as "FAKE," highlighting the need for careful media consumption. The incident is a reminder of the damage deepfakes can do to prominent people's reputations, and it demands that public figures be more mindful of the possible misuse of their virtual identities. By emphasising preventive measures such as strict safety regulations and state-of-the-art deepfake detection technologies, we can collectively strengthen our defences against this insidious phenomenon and preserve the trustworthiness of our internet-based culture in the face of ever-changing technological challenges.
References
- https://economictimes.indiatimes.com/magazines/panache/ratan-tata-slams-deepfake-video-that-features-him-giving-risk-free-investment-advice/articleshow/105805223.cms
- https://www.ndtv.com/india-news/ratan-tata-flags-deepfake-video-of-his-interview-recommending-investments-4640515
- https://www.businesstoday.in/bt-tv/short-video/viralvideo-business-tycoon-ratan-tata-falls-victim-to-deepfake-408557-2023-12-07
- https://www.livemint.com/news/india/false-ratan-tata-calls-out-a-deepfake-video-of-him-giving-investment-advice-11701926766285.html
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to an increase in misinformation in India. False stories spread quickly and can cause significant harm, from political propaganda to health-related mis- and disinformation. Programs that teach people how to use social media responsibly and check facts are essential, but they do not always engage people deeply. Traditional media literacy programs rely on passive learning methods such as reading articles, attending lectures, and using fact-checking tools.
Adding game-like features to non-game settings, known as "gamification," could be a new and engaging way to address this problem. Gamification engages people by making them active players rather than passive consumers of information. Research shows that interactive learning improves interest, thinking skills, and memory. By turning fact-checking into a game, people can learn to recognise fake news in a safe environment before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and avoid false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020); a minimal sketch of this mechanic appears after this list.
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
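As a rough illustration of the Factitious-style mechanic, here is a minimal command-line sketch. The headlines, labels, and scoring are invented placeholders for illustration, not content from the actual game.

```python
# A minimal sketch of a Factitious-style real-vs-fake quiz.
# Headlines and labels below are invented placeholders for illustration.
QUESTIONS = [
    ("Scientists confirm drinking hot water cures all viral infections", False),
    ("Election commission announces dates for upcoming state polls", True),
]

def play(questions) -> None:
    score = 0
    for headline, is_real in questions:
        answer = input(f'"{headline}"\nReal or fake? (r/f): ').strip().lower()
        guessed_real = answer.startswith("r")
        if guessed_real == is_real:
            score += 1
            print("Correct!\n")
        else:
            print(f"Wrong - that headline is {'real' if is_real else 'fake'}.\n")
    print(f"Final score: {score}/{len(questions)}")

if __name__ == "__main__":
    play(QUESTIONS)
```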
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread in regional languages and diverse cultural contexts. AI-based voice interaction and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information in India, interactive quizzes and chatbot-powered games can educate users directly within the app they use most frequently. A game with a WhatsApp-like interface, where players face realistic choices about whether to ignore, fact-check, or forward viral messages, could be especially effective, as sketched below.
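Here is a minimal sketch of such a decision game. The messages, actions, and scoring rule (fact-checking earns points, forwarding falsehoods costs them) are invented assumptions; a production version would run as a WhatsApp chatbot rather than a console script.

```python
# A minimal sketch of a WhatsApp-style misinformation decision game.
# Messages and point values are invented assumptions for illustration.
import random

MESSAGES = [
    {"text": "Forward to 10 people: banks will close for a week!", "false": True},
    {"text": "Weather department issues official cyclone warning.", "false": False},
]

def play_round(message: dict, action: str) -> int:
    """Score one decision: 'forward', 'check', or 'ignore'."""
    if action == "check":
        return 2  # fact-checking is always rewarded
    if action == "forward":
        return -3 if message["false"] else 1  # spreading falsehoods costs points
    return 0  # ignoring is neutral

if __name__ == "__main__":
    score = 0
    for msg in random.sample(MESSAGES, len(MESSAGES)):
        action = input(f'New message: "{msg["text"]}"\n(forward/check/ignore): ').strip()
        score += play_round(msg, action)
    print(f"Your credibility score: {score}")
```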
- Detecting False Information
In a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify viral stories using real-world tools such as reverse image searches and reliable fact-checking websites. Research shows that interactive tasks for spotting fake news increase people's awareness of it over time (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by offering rewards, such as badges, certificates, or even mobile-data incentives, for completing misinformation challenges. Partnerships with telecom providers could make this easier. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) found that students are more likely to participate and retain what they learn when learning includes competitive and interactive elements. Misinformation games can be used in media studies classes at schools and universities, using simulations to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns employ.
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in misinformation games. AI-powered misinformation detection bots could guide participants through scenarios matched to their learning level, ensuring they are consistently challenged. Recent developments in natural language processing (NLP) enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently across languages and regions.
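As a toy illustration of that adaptation loop, the sketch below serves harder items once a player's running accuracy crosses a threshold. The item pools, the 70% threshold, and the accuracy rule are invented assumptions; a real system would use an NLP model to select and generate items rather than fixed lists.

```python
# A toy sketch of difficulty adaptation for a misinformation game.
# Item pools, the 70% threshold, and the accuracy rule are invented assumptions.
EASY = ["Obviously fabricated miracle-cure headline"]
HARD = ["Subtly miscaptioned but otherwise real news photo"]

class AdaptiveTrainer:
    def __init__(self):
        self.correct = 0
        self.attempts = 0

    def record(self, was_correct: bool) -> None:
        """Update the running tally after each player answer."""
        self.attempts += 1
        self.correct += int(was_correct)

    def next_item(self) -> str:
        """Serve harder items once running accuracy exceeds 70%."""
        accuracy = self.correct / self.attempts if self.attempts else 0.0
        pool = HARD if accuracy > 0.7 else EASY
        return pool[self.attempts % len(pool)]

if __name__ == "__main__":
    trainer = AdaptiveTrainer()
    for outcome in [True, True, True, False]:  # simulated player answers
        print("Next item:", trainer.next_item())
        trainer.record(outcome)
```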
Possible Opportunities
Augmented reality (AR) scavenger hunts for misinformation, interactive misinformation events, and educational misinformation tournaments are all examples of game formats that can help fight misinformation. By making media literacy fun and interesting, India can help millions, especially young people, think critically and resist the spread of false information. Using Artificial Intelligence (AI) in gamified interventions against misinformation could be a fascinating area of future study: AI-powered bots could simulate real-time cases of misinformation and give immediate feedback, helping learners progress faster.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it also comes with challenges that must be considered:
- Ethical Concerns: Games that simulate how fake news spreads must ensure players do not inadvertently learn how to create and spread false information.
- Scalability: Although worldwide misinformation initiatives exist, developing and expanding localised versions for India's varied linguistic and cultural contexts presents significant challenges.
- Assessing Impact: Rigorous research methods are needed to evaluate how effectively gamified interventions change misinformation-related behaviours, while accounting for cultural and socio-economic contexts.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation, can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.

AI-generated content occupies a growing space in today's ever-changing tech landscape. Generative AI has emerged as a powerful tool enabling the creation of hyper-realistic audio, video, and images. While advantageous, this capability has downsides too, particularly for content authenticity and manipulation.
The impact of such content spans ethical, psychological, and social harms, as seen over the past couple of years. A major concern is the creation of non-consensual explicit content, including fake nudes, in which an individual's face is superimposed onto explicit images or videos without their consent. This is not just a violation of individual privacy; it can have enormous consequences for victims' professional and personal lives. This blog examines the existing laws and whether they are equipped to deal with the challenges this content poses.
Understanding the Deepfake Technology
A deepfake is a media file (image, video, or speech), typically representing a human subject, that has been altered deceptively using deep neural networks (DNNs). The technology is used to alter a person's identity, usually in the form of a "face swap," where the identity of a source subject is transferred onto a destination subject. The destination's facial expressions and head movements remain the same, but the appearance in the video is that of the source. In videos, identities can be substituted through replacement or reenactment.
Such superimposition produces realistic fabricated content, including fake nudes. Creating a deepfake is no longer a costly endeavour: it requires a Graphics Processing Unit (GPU); free, open-source, easy-to-download software; and basic graphics editing and audio-dubbing skills. Common applications for creating deepfakes include DeepFaceLab and FaceSwap, both public, open-source projects supported by thousands of users who actively participate in the evolution and development of this software and its models.
Legal Gaps and Challenges
Multiple gaps and challenges exist in the legal space for deepfakes and their regulation. They are:
- Inadequate legal definitions of AI-generated explicit content often lead to enforcement challenges.
- Jurisdictional challenges arise from the cross-border nature of these crimes, and international cooperation mechanisms for AI-generated content are still in their early stages.
- Current consent-based and harassment laws do not squarely cover AI-generated nudes.
- Gathering evidence and proving the intent and identity of perpetrators in digital crimes remains a challenge yet to be overcome.
Policy Responses and Global Trends
Presently, the global response to deepfakes is still developing. The UK has the Online Safety Bill, the EU has the AI Act, the US has federal laws such as the National AI Initiative Act of 2020, and India is currently developing the India AI Act as specific legislation dealing with AI and its correlated issues.
The IT Rules, 2021, and the DPDP Act, 2023, regulate digital platforms by mandating content governance, privacy policies, grievance redressal, and compliance with removal orders. Emphasising intermediary liability and safe harbour protections, these laws play a crucial role in tackling harmful content like AI-generated nudes, while the DPDP Act focuses on safeguarding privacy and personal data rights.
Bridging the Gap: CyberPeace Recommendations
- Initiate legislative reforms by advocating clear and precise definitions and consent frameworks, and by instituting high penalties for AI-based offences, particularly those involving sexually explicit material.
- Advocate for global cooperation and collaboration by setting up international standards and bilateral and multilateral treaties that address the cross-border nature of these offences.
- Push for stricter platform accountability in detecting and removing harmful AI-generated content; platforms should introduce strong screening mechanisms to counter the huge influx of harmful material.
- Run public awareness campaigns that educate users about their rights and the resources available to them if they are targeted by such an act.
Conclusion
The rapid advancement of AI-generated explicit content demands immediate and decisive action. As this technology evolves, the gaps in existing legal frameworks become increasingly apparent, leaving individuals vulnerable to profound privacy violations and societal harm. Addressing this challenge requires adaptive, forward-thinking legislation that prioritises individual safety while fostering technological progress. Collaborative policymaking is essential and requires uniting governments, tech platforms, and civil society to develop globally harmonised standards. By striking a balance between innovation and societal well-being, we can ensure that the digital age is not only transformative but also secure and respectful of human dignity. Let’s act now to create a safer future!
References
- https://etedge-insights.com/technology/artificial-intelligence/deepfakes-and-the-future-of-digital-security-are-we-ready/
- https://odsc.medium.com/the-rise-of-deepfakes-understanding-the-challenges-and-opportunities-7724efb0d981
- https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/