#FactCheck: Misleading Clip of Nepal Crash Shared as Air India’s AI-171 Ahmedabad Accident
Executive Summary:
A viral video circulating on social media platforms claims to show the final moments inside the cabin of an Air India flight just before it crashed near Ahmedabad on June 12, 2025. The claim is false: the footage actually originates from the Yeti Airlines Flight 691 crash that occurred in Pokhara, Nepal, on January 15, 2023. The full analysis follows in the report below.

Claim:
Viral videos circulating on social media claim to show the final moments inside Air India flight AI‑171 before it crashed near Ahmedabad on June 12, 2025. The footage appears to have been recorded by a passenger during the flight and is being shared as real-time visuals from the recent tragedy. Many users have accepted the clip as genuine and linked it directly to the Air India incident.


Fact Check:
To verify the viral video allegedly depicting the final moments of Air India flight AI-171, which crashed near Ahmedabad on 12 June 2025, we conducted a comprehensive reverse image search and keyframe analysis. This showed that the footage dates back to January 2023 and was recorded aboard Yeti Airlines Flight 691, which crashed in Pokhara, Nepal. The cabin and passenger details in the viral clip match the original livestream recorded by a passenger aboard the Nepal flight, confirming that the video is being reused out of context.
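For readers who want to reproduce the keyframe step, below is a minimal Python sketch, assuming OpenCV is installed (pip install opencv-python) and the clip has been saved locally (the filename viral_clip.mp4 is hypothetical). It samples one frame every two seconds; each saved frame can then be uploaded to a reverse image search tool such as Google Lens or TinEye.

```python
# Minimal sketch: extract keyframes from a viral clip so they can be fed
# into a reverse image search engine. The filename below is hypothetical.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> list:
    """Grab one frame every `every_n_seconds` of video."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = int(fps * every_n_seconds)
    keyframes, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            keyframes.append(frame)
        index += 1
    capture.release()
    return keyframes

# Save the frames as images that can be uploaded to a reverse image search.
for i, frame in enumerate(extract_keyframes("viral_clip.mp4")):
    cv2.imwrite(f"keyframe_{i:03d}.jpg", frame)
```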

Moreover, reputable news organisations, including the New York Post and NDTV, have published reports confirming that the video originated from the 2023 Nepal plane crash and has no connection to the recent Air India incident. The Press Information Bureau (PIB) also released a clarification dismissing the video as disinformation. Earlier reporting, visual evidence, and reverse search verification all agree that the viral video is falsely attributed to the AI-171 tragedy.


Conclusion:
The viral footage does not show the AI-171 crash near Ahmedabad on 12 June 2025. It is an unrelated, previously recorded livestream from the January 2023 Yeti Airlines crash in Pokhara, Nepal, falsely repurposed as breaking news. It is essential to rely on verified and credible news agencies and to refer to official investigation reports when discussing such sensitive events.
- Claim: A dramatic clip of passengers inside a crashing plane is being falsely linked to the recent Air India tragedy in Ahmedabad.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In the boundless world of the internet—a digital frontier rife with both the promise of connectivity and the peril of deception—a new spectre stealthily traverses the electronic pathways, casting a shadow of fear and uncertainty. This insidious entity, cloaked in the mantle of supposed authority, preys upon the unsuspecting populace navigating the virtual expanse. And in the heart of India's vibrant tapestry of diverse cultures and ceaseless activity, Mumbai stands out—a sprawling metropolis of dreams and dynamism, yet also the stage for a chilling saga, a cyber charade of foul play and fraud.
The city's relentless buzz and hum were punctuated by a harrowing tale that unwound within the unassuming confines of a Kharghar residence, where a 46-year-old individual's brush with this digital demon would unfold. His typical day veered into the remarkable as his laptop screen lit up with an ominous pop-up, infusing his routine with shock and dread. This deceptive pop-up, masquerading as an official communication from the National Crime Records Bureau (NCRB), demanded an exorbitant fine of Rs 33,850 for ostensibly browsing adult content, an offence he had not committed.
The Cyber Deception
This tale of deceit and psychological warfare is not unique, nor is it the first of its kind. It finds echoes in the tragic narrative that unfurled in September 2023, far south in the verdant land of Kerala, where a young life was tragically cut short. A 17-year-old boy from Kozhikode, caught in the snare of similar fraudulent claims of NCRB admonishment, was driven to the extreme despair of taking his own life after being coerced to dispense Rs 30,000 for visiting an unauthorised website, as the pop-up falsely alleged.
Sewn with a seam of dread and finesse, the pop-up that appeared in this recent case from Navi Mumbai highlights the virtual tapestry of psychological manipulation, woven with threatening threads designed to entrap and frighten. The message came from a fake NCRB website created to dupe people. Declaring that the user had engaged in illicit browsing activity, it delivers an ultimatum: pay the fine within six hours, or face the critical implications of a criminal case. The panacea it offers is simple: settle the demanded amount and the shackles on the browser shall be lifted.
It was amidst this web of lies that the man from Kharghar found himself entangled. The story, as retold by his brother, an IT professional, reveals the close brush with disaster that was narrowly averted. The panicked call to his brother, and the rush of relief upon realising the scam, underscore the ruthless efficiency of these cyber predators. They leverage sophisticated deceptive tactics, even specifying convenient online payment methods to ensnare their prey into swift compliance.
A glimmer of reason pierced through the narrative as Maharashtra State cyber cell special inspector general Yashasvi Yadav illuminated the fraudulent nature of such claims. With authoritative clarity, he revealed that no legitimate government agency would solicit fines in such an underhanded fashion. Rather, official procedures involve FIRs or court trials—a structured route distant from the scaremongering of these online hoaxes.
Expert Take
Concurring with this perspective, cyber experts note that such pop-ups are mere facsimiles of official notices. By tapping into primal fears and conjuring up grave consequences, the fraudsters follow a time-worn strategy, cloaking their ill intentions in the guise of governmental or legal authority, a phantasm of legitimacy that prompts hasty financial decisions.
To pierce the veil of this deception, D. Sivanandhan, the former Mumbai police commissioner, categorically denounced the absurdity of the hoax. With a voice tinged by experience and authority, he made it abundantly clear that the NCRB's role did not encompass the imposition of fines without due process of law—a cornerstone of justice grossly misrepresented by the scam's premise.
New Lesson
This scam, a devilish masquerade given weight by deceit, might surge with the pretence of novelty, but its underpinnings are far from new. The manufactured pop-ups that propagate across corners of the internet issue fabricated pronouncements, feign browser lockdowns, and raise the spectre of being implicated in taboo behaviours. The elaborate ruse doesn't halt at mere declarations; it painstakingly fabricates a semblance of procedural legitimacy by preemptively setting penalties and detailing methods for immediate financial redress.
Yet another dimension of the scam further bolsters the illusion—the ominous ticking clock set for payment, endowing the fraud with an urgency that can disorient and push victims towards rash action. With a spurious 'Payment Details' section, complete with options to pay through widely accepted credit networks like Visa or MasterCard, the sham dangles the false promise of restored access, should the victim acquiesce to their demands.
Conclusion
In an era where the demarcation between illusion and reality is nebulous, the impetus for individual vigilance and scepticism is ever-critical. The collective consciousness, the shared responsibility we hold as inhabitants of the digital domain, becomes paramount to withstand the temptation of fear-inducing claims and to dispel the shadows cast by digital deception. It is only through informed caution, critical scrutiny, and a steadfast refusal to capitulate to intimidation that we may successfully unmask these virtual masquerades and safeguard the integrity of our digital existence.
References:
- https://www.onmanorama.com/news/kerala/2023/09/29/kozhikode-boy-dies-by-suicide-after-online-fraud-threatens-him-for-visiting-unauthorised-website.html
- https://timesofindia.indiatimes.com/pay-rs-33-8k-fine-for-surfing-porn-warns-fake-ncrb-pop-up-on-screen/articleshow/106610006.cms
- https://www.indiatoday.in/technology/news/story/people-who-watch-porn-receiving-a-warning-pop-up-do-not-pay-it-is-a-scam-1903829-2022-01-24
Introduction
A Senate bill introduced in the United States on 19 March 2024 would require online platforms to obtain consumer consent before using their data for Artificial Intelligence (AI) model training. Under the Artificial Intelligence Consumer Opt-In, Notification Standards, and Ethical Norms for Training (AI CONSENT) bill, a company's failure to obtain this consent would be considered a deceptive or unfair practice and would result in enforcement action from the Federal Trade Commission (FTC). The legislation aims to strengthen consumer protection and give Americans the power to determine how their data is used by online platforms.
The proposed bill also seeks to create standards for disclosures, including requiring platforms to provide instructions to consumers on how they can affirm or rescind their consent. The option to grant or revoke consent should be made available at any time through an accessible and easily navigable mechanism, and the selection to withhold or reverse consent must be at least as prominent as the option to accept while taking the same number of steps or fewer as the option to accept.
The AI Consent bill directs the FTC to implement regulations to improve transparency by requiring companies to disclose when the data of individuals will be used to train AI and receive consumer opt-in to this use. The bill also commissions an FTC report on the technical feasibility of de-identifying data, given the rapid advancements in AI technologies, evaluating potential measures companies could take to effectively de-identify user data.
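To make the opt-in mechanics concrete, here is a minimal Python sketch of how a platform backend might store such consent, assuming a hypothetical data model (ConsentRecord, grant, revoke, and may_use_for_training are invented names, not anything from the bill). It reflects the bill's key ideas: consent defaults to off, can be affirmed or rescinded at any time, and revocation takes no more steps than granting.

```python
# Minimal sketch, not the bill's text: one way a platform could record
# AI-training consent so that it can be affirmed or rescinded at any time.
# All names here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    granted: bool = False                        # opt-in: default is "no"
    history: list = field(default_factory=list)  # audit trail of changes

    def _log(self, action: str) -> None:
        self.history.append((action, datetime.now(timezone.utc)))

    def grant(self) -> None:
        """Affirm consent; a single step, mirroring revocation."""
        self.granted = True
        self._log("granted")

    def revoke(self) -> None:
        """Rescind consent; must take no more steps than granting it."""
        self.granted = False
        self._log("revoked")

def may_use_for_training(record: ConsentRecord) -> bool:
    """Training pipelines check consent before touching user data."""
    return record.granted

record = ConsentRecord(user_id="user-123")
assert not may_use_for_training(record)  # no data use before opt-in
record.grant()
assert may_use_for_training(record)
record.revoke()
assert not may_use_for_training(record)
```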
The definition of ‘Artificial Intelligence System’ under the proposed bill
ARTIFICIAL INTELLIGENCE SYSTEM - The term "artificial intelligence system" means a machine-based system that—
- (1) Is capable of influencing the environment by producing an output, including predictions, recommendations or decisions, for a given set of objectives; and
- (2) Uses machine or human-based data and inputs to
(i) Perceive real or virtual environments;
(ii) Abstract these perceptions into models through analysis in an automated manner (such as by using machine learning) or manually; and
(iii) Use model inference to formulate options for outcomes.
Importance of the proposed AI Consent Bill USA
1. Consumer Data Protection: The AI Consent bill primarily upholds the privacy rights of individuals. Consent is required from the consumer before data is used for AI training; the bill aims to empower individuals with full autonomy over the use of their personal information. The scope of the bill aligns with the greater objective of data protection laws globally, stressing the criticality of privacy rights and autonomy.
2. Prohibition Measures: The proposed bill intends to prohibit covered entities from exploiting the data of consumers for training purposes without their consent. This prohibition extends to the sale of data, transfer to third parties, and usage. Such measures aim to prevent data misuse and exploitation of personal information. The bill aims to ensure that companies cannot leverage consumer information for the development of AI without a transparent process of consent.
3. Transparent Consent Procedures: The bill calls for clear and conspicuous disclosures to be provided by the companies for the intended use of consumer data for AI training. The entities must provide a comprehensive explanation of data processing and its implications for consumers. The transparency fostered by the proposed bill allows consumers to make sound decisions about their data and its management, hence nurturing a sense of accountability and trust in data-driven practices.
4. Regulatory Compliance: The bill's guidelines set strict requirements for procuring an individual's consent. Entities must follow a prescribed mechanism for consent solicitation, making the process streamlined and accessible for consumers. Moreover, the acquisition of consent must be independent, i.e. not bundled with terms of service or other contractual obligations. These provisions underscore the importance of active and informed consent in data processing activities, reinforcing the principles of data protection and privacy.
5. Enforcement and Oversight: To enforce compliance with the provisions of the bill, robust mechanisms for oversight and enforcement are established. Violations of the prescribed regulations are treated as unfair or deceptive acts, empowering regulatory bodies like the FTC to ensure adherence to data privacy standards. By holding covered entities accountable for compliance, the bill fosters a culture of accountability and responsibility in data handling practices, thereby enhancing consumer trust and confidence in the digital ecosystem.
Importance of Data Anonymization
Data anonymisation is the process of concealing or removing personal or private information from a dataset to safeguard the privacy of the individuals associated with it. It is a form of information sanitisation in which techniques such as encryption or deletion strip personally identifying information from datasets to protect the privacy of the data subject. This reduces the danger of unintentional exposure during information transfer across borders and allows for easier assessment and analytics after anonymisation. When personal information is compromised, the organisation suffers not just a security breach but also a breach of confidence from the client or consumer. Such attacks can result in a wide range of privacy infractions, including breach of contract, discrimination, and identity theft.
The AI consent bill asks the FTC to study data de-identification methods. Data anonymisation is critical to improving privacy protection since it reduces the danger of re-identification and unauthorised access to personal information. Regulatory bodies can increase privacy safeguards and reduce privacy risks connected with data processing operations by investigating and perhaps implementing anonymisation procedures.
The AI Consent bill emphasises de-identification methods. India's DPDP Act 2023, while not specifically addressing data de-identification, emphasises data minimisation principles, which highlights a potential future focus on data anonymisation processes or techniques in India.
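As an illustration of the basic moves described above, the following Python sketch de-identifies a small, invented user table with pandas: it deletes direct identifiers and hashes a linking key. The column names and salt are hypothetical, and real-world anonymisation requires much stronger measures (k-anonymity, differential privacy, re-identification testing) than this toy example.

```python
# Minimal sketch of basic de-identification on a pandas DataFrame of
# user records; column names are hypothetical. This shows only the two
# basic moves the text describes: deleting direct identifiers and
# one-way hashing (pseudonymising) a linking key.
import hashlib
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "email", "phone"]  # columns to delete
PSEUDONYMISE = ["user_id"]                       # columns to hash

def pseudonymise(value: str, salt: str = "per-dataset-secret") -> str:
    """One-way hash so records stay linkable without exposing identity."""
    return hashlib.sha256((salt + str(value)).encode()).hexdigest()[:16]

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    for col in PSEUDONYMISE:
        if col in out.columns:
            out[col] = out[col].map(pseudonymise)
    return out

users = pd.DataFrame({
    "user_id": ["u1", "u2"],
    "name": ["Asha", "Ravi"],
    "email": ["a@example.com", "r@example.com"],
    "age": [34, 29],
})
print(deidentify(users))  # no names or emails; user_id is now a hash
```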
Conclusion
The proposed AI Consent bill in the US represents a significant step towards enhancing consumer privacy rights and data protection in the context of AI development. Through its stringent prohibitions, transparent consent procedures, regulatory compliance measures, and robust enforcement mechanisms, the bill strives to strike a balance between fostering innovation in AI technologies and safeguarding the privacy and autonomy of individuals.
References:
- https://fedscoop.com/consumer-data-consent-training-ai-models-senate-bill/#:~:text=%E2%80%9CThe%20AI%20CONSENT%20Act%20gives,Welch%20said%20in%20a%20statement
- https://www.dataguidance.com/news/usa-bill-ai-consent-act-introduced-house#:~:text=USA%3A%20Bill%20for%20the%20AI%20Consent%20Act%20introduced%20to%20House%20of%20Representatives,-ConsentPrivacy%20Law&text=On%20March%2019%2C%202024%2C%20US,the%20U.S.%20House%20of%20Representatives
- https://datenrecht.ch/en/usa-ai-consent-act-vorgeschlagen/
- https://www.lujan.senate.gov/newsroom/press-releases/lujan-welch-introduce-billto-require-online-platforms-receive-consumers-consent-before-using-their-personal-data-to-train-ai-models/

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) came into force 20 days after publication, setting harmonized rules across the EU and amending key regulations and directives to ensure a robust framework for AI technologies. The AI Act, in development for two years, entered into force across all 27 EU Member States on 1 August 2024, and the majority of its provisions will become enforceable on 2 August 2026. The law prohibits certain uses of AI tools that threaten citizens' rights, such as biometric categorization, untargeted scraping of faces, emotion-recognition systems in the workplace and schools, and social scoring systems. It also prohibits the use of predictive policing tools in some instances. The law takes a phased approach to implementing the EU's AI rulebook, with various deadlines as different legal provisions start to apply.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
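As a worked example of that ceiling, the short sketch below computes the maximum fine as the higher of the two amounts; the function name is ours, not the Act's, and lower tiers of the Act carry smaller ceilings than this top tier.

```python
# Minimal sketch of the Act's fine ceiling for the most serious breaches:
# the higher of EUR 35 million or 7% of total worldwide annual turnover.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Example: a company with EUR 1 billion turnover faces up to EUR 70 million,
# since 7% of turnover exceeds the EUR 35 million floor.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 70,000,000
```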
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk AI such as social scoring systems and manipulative AI, and the regulation mostly addresses high-risk AI systems (a compact sketch of the four tiers follows this list).
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The majority of obligations fall on providers (developers) of high-risk AI systems that intend to place them on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country; the same applies to third-country providers where the high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-licence GPAI models only need to comply with copyright and publish the training data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but not necessarily be limited to, these obligations: particularly the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type and nature of systemic risks and their sources, and the modalities of risk management, accounting for the specific challenges in addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers and independent experts may support the process.
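As promised above, here is a compact, illustrative lookup table of the four risk tiers and their obligations, expressed as a small Python data structure; the summaries paraphrase this article's description, not the legal text.

```python
# Illustrative only -- a compact summary of the Act's four risk tiers as a
# lookup table. Tier names follow the Act; obligation summaries are
# paraphrased from this article, not quoted from the regulation.
RISK_TIERS = {
    "unacceptable": "Banned outright (e.g. social scoring, manipulative AI).",
    "high": "Allowed with strict duties: data quality, anti-bias, documentation.",
    "limited": "Lighter transparency duties (e.g. disclose chatbots, deepfakes).",
    "minimal": "Freely usable (e.g. spam filters, AI-enabled video games).",
}

def obligations(tier: str) -> str:
    """Look up the paraphrased obligation summary for a risk tier."""
    return RISK_TIERS.get(tier.lower(), "Unknown tier")

print(obligations("high"))
```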
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts will be applicable sooner, for instance the ban on AI systems posing unacceptable risks will apply six months after the entry into force. The Codes of Practice will apply nine months after entry into force. Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force. High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force. The expected timeline for the same is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; the prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as "Operators"). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, such as providers outside the EU that make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models and adopts a risk-based approach to governance, categorizing potential risks into four tiers: unacceptable, high, limited, and low, with stiff penalties for noncompliance. Violations of banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation can be seen as strengthening the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies. It sets a global benchmark for regulating AI, and the companies to which the Act applies will need to make sure their practices align with it. The Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging; the implementation of regulatory checks and compliance by the concerned companies pose a conundrum. However, in the end, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; hence flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide