#FactCheck: Misleading Clip of Nepal Crash Shared as Air India’s AI-171 Ahmedabad Accident
Executive Summary:
A viral video circulating on social media platforms, claiming to show the final moments inside the cabin of an Air India flight just before it crashed near Ahmedabad on June 12, 2025, is false. Upon further research, the footage was found to originate from the Yeti Airlines Flight 691 crash that occurred in Pokhara, Nepal, on January 15, 2023. For full details, please read the report below.

Claim:
Viral videos circulating on social media claim to show the final moments inside Air India flight AI‑171 before it crashed near Ahmedabad on June 12, 2025. The footage appears to have been recorded by a passenger during the flight and is being shared as real-time visuals from the recent tragedy. Many users have believed the clip to be genuine and linked it directly to the Air India incident.


Fact Check:
To verify the viral video allegedly depicting the final moments of Air India flight AI-171, which crashed near Ahmedabad on 12 June 2025, we conducted a comprehensive reverse image search and keyframe analysis. This revealed that the footage dates back to January 2023 and shows Yeti Airlines Flight 691, which crashed in Pokhara, Nepal. The cabin and passenger details in the viral clip match the original livestream recorded by a passenger aboard the Nepal flight, confirming that the video is being reused out of context.
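The keyframe matching used in this kind of verification can be illustrated with a simple perceptual-hashing sketch. The pure-Python example below is a hypothetical illustration: real fact-checking workflows use tools such as OpenCV and reverse-image-search engines on actual video frames, whereas the tiny 4x4 grayscale "frames" here are synthetic stand-ins.

```python
# Minimal average-hash sketch (assumption: synthetic 4x4 grayscale frames
# stand in for real video keyframes extracted with a library like OpenCV).

def average_hash(frame):
    """Compute a simple average-hash: 1 bit per pixel, set if pixel > mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the frames match."""
    return sum(a != b for a, b in zip(h1, h2))

# A keyframe from the "viral" clip and one from the original 2023 footage.
viral_frame     = [[10, 200, 30, 220], [15, 210, 25, 230],
                   [12, 205, 35, 215], [18, 198, 28, 225]]
original_frame  = [[11, 201, 29, 221], [14, 212, 26, 229],
                   [13, 204, 34, 216], [17, 199, 27, 224]]  # near-identical
unrelated_frame = [[200, 10, 220, 30], [210, 15, 230, 25],
                   [205, 12, 215, 35], [198, 18, 225, 28]]  # inverted pattern

d_match = hamming_distance(average_hash(viral_frame), average_hash(original_frame))
d_diff  = hamming_distance(average_hash(viral_frame), average_hash(unrelated_frame))
print(d_match, d_diff)  # a low distance indicates reused footage
```

A Hamming distance near zero between hashes of a "new" viral clip and archived footage is exactly the kind of signal that links recycled video to its original source.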

Moreover, reputable news organisations, including the New York Post and NDTV, have published reports confirming that the video originated from the 2023 Nepal plane crash and has no relation to the recent Air India incident. The Press Information Bureau (PIB) also issued a clarification dismissing the video as disinformation. Earlier reliable reports, the visual evidence, and reverse-search verification all agree that the viral video is falsely attributed to the AI-171 tragedy.


Conclusion:
The viral footage does not show the AI-171 crash near Ahmedabad on 12 June 2025. It is an unrelated, previously recorded livestream from the January 2023 Yeti Airlines crash in Pokhara, Nepal, falsely repurposed as breaking news. It is essential to rely on verified, credible news agencies and to refer to official investigation reports when discussing such sensitive events.
- Claim: A dramatic clip of passengers inside a crashing plane is being falsely linked to the recent Air India tragedy in Ahmedabad.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Misinformation is no longer a challenge limited to major global platforms or widely spoken languages. In India and many other countries, false information is increasingly disseminated in local and vernacular languages, allowing it to reach communities more directly and intimately. While regional-language content has played a crucial role in expanding access to information, it has also become a powerful vehicle for misinformation spread by bad actors, one that is often harder to detect and counter. The challenge of local-language misinformation is not merely digital in nature; it is deeply social, cultural, and shaped by specific local contexts.
Why Local-Language Misinformation Is More Impactful
A person’s mother tongue can be a highly effective medium for misinformation because it carries emotional resonance and a sense of authenticity. Information that aligns with an individual’s linguistic and cultural background is often trusted the most. When false narratives are framed using familiar expressions, local references, or community-specific concerns, they are more readily accepted and shared more widely.
Misinformation in a heavily moderated language like English usually has less impact than content in vernacular languages, which tends to circulate within closed networks such as family WhatsApp groups, regional Facebook pages, local YouTube channels, and community forums. These spaces are often perceived as safe or trusted, which lowers scepticism and encourages the spread of unverified information.
The Role of Digital Platforms and Algorithms
Although social media platforms have expanded access to regional-language content, their moderation mechanisms have not kept pace. Automated content-moderation systems are frequently trained mainly on dominant languages and thus fail to detect vernacular speech, slang, dialects, and code-mixing.
This creates an enforcement gap in which misinformation in local languages:
- Escapes automated fact-checking tools
- Is reviewed by human moderators more slowly
- Is less likely to be reported or flagged
- Circulates unchecked for longer periods
The problem is further magnified by algorithmic amplification. Content that triggers strong emotional reactions, such as fear, anger, pride, or outrage, has a higher chance of being promoted, irrespective of its truthfulness. In regional contexts, such content can quickly sway public opinion even within closely knit communities.
Forms of Vernacular Misinformation
Local-language misinformation appears in various forms:
- Health misinformation, such as fake remedies, vaccine myths, and misleading medical advice
- Political misinformation, often tied to regional identity, local grievances, or community narratives
- Disaster rumours that spread panic and hatred during floods, earthquakes, or other public emergencies and are very hard to contain
- Economic and financial frauds that impersonate local authorities or trusted institutions
- Cultural and religious falsehoods that exploit deeply held beliefs
The regional character of such misinformation makes it very difficult to correct, because fact-checks published in other languages may never reach the affected audience.
Community-Level Consequences
The effect of misinformation in local languages goes beyond misleading individuals. It can also:
- Erode trust in public institutions
- Fuel social polarisation and communal strife
- Undermine public health measures
- Influence electoral decision-making at the grassroots level
- Exploit digitally less-literate and vulnerable populations
In many cases, the damage is not immediate but cumulative, gradually shifting perceptions and entrenching false worldviews.
Why Countering Vernacular Misinformation Is Difficult
Multiple structural layers make it difficult to respond effectively:
- Linguistic Diversity: India alone has hundreds of languages and dialects, making universal monitoring extremely difficult.
- Cultural Nuance: Local languages often carry deeply cultural meanings, such as sarcasm or historical references, that automated systems cannot interpret correctly.
- Low Reporting Rates: Users may not recognise misinformation, or may be reluctant to flag content shared by trusted members of their community.
- Limited Fact-Checking Capacity: Fact-checking organisations often lack the resources to operate effectively across many languages.
Building a Community-Centric Response
Countering misinformation in local languages requires a community-driven resilience approach rather than a purely platform-centric one. Key actions include:
- Boosting Digital Literacy: Regional-language awareness campaigns can equip users to question, verify, and pause before sharing content.
- Empowering Local Fact-Checkers: Local journalists, educators, and NGOs are best placed to provide context-aware verification.
- Platform Accountability: Technology companies must invest in multilingual moderation, hire local-language experts, and implement transparent enforcement mechanisms.
- Policy and Governance: Regulatory frameworks should enable proactive risk assessment while safeguarding the right to free expression.
- Building Trusted Local Intermediaries: Community leaders, health workers, teachers, and local organisations can counter misinformation within the networks where they are trusted.
The Way Forward
Misinformation in local languages is not a minor concern; it is an issue that directly affects the future of digital trust. As the number of users accessing the internet through local language interfaces continues to grow, the volume and influence of regional content will also increase. If measures do not include all language groups, misinformation will remain least corrected and most influential at the community level, where it is also the hardest to identify and address.
The problem persists wherever the power of language goes unrecognised. Protecting the quality of information in local languages is therefore essential not only for digital safety but also for social cohesion, democratic participation, and public well-being.
Conclusion
Vernacular content has great power to inform, include, and empower; left unmonitored, it has equal power to mislead, divide, and harm. Countering mis- and disinformation in local languages requires cooperation among platforms, regulators, NGOs, and the communities involved. A trustworthy digital ecosystem must speak all languages, not only for communication but also for protection.
References
- https://www.mdpi.com/2304-6775/10/2/15
- https://afpr.in/regional-languages-shaping-indias-online-discourse/
- https://medium.com/@pratikgsalvi03/how-indias-misinformation-surge-and-media-credibility-crisis-are-undermining-democracy-public-dc8ad7be8e12
- https://projectshakti.in/
- https://journals.sagepub.com/doi/10.1177/02683962211037693
- https://rsisinternational.org/journals/ijriss/Digital-Library/volume-8-issue-11/505-518.pdf
- https://www.irjmets.com/upload_newfiles/irjmets71200016652/paper_file/irjmets71200016652.pdf

Introduction
The ongoing debate over whether AI scaling has hit a wall has been reignited by the underwhelming response to OpenAI's GPT-5. AI scaling laws, which hold that machine learning models perform better with more training data, more model parameters, and more computational resources, have guided the rapid progress of Large Language Models (LLMs) so far. But many AI researchers suggest that further improvements in LLMs will require computational costs that are orders of magnitude higher, with returns that may not justify the expense. The question, then, is whether scaling remains a viable path or whether the field must explore new approaches. This is not just a technical issue but a profound innovation challenge for countries like India that are charting their own AI course.
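The diminishing-returns intuition behind scaling laws can be made concrete. The sketch below uses the parametric loss form L(N, D) = E + A/N^α + B/D^β fitted by Hoffmann et al. (the "Chinchilla" paper), with their published constants; treat it as an illustrative approximation, not a prediction for any specific model.

```python
# Chinchilla-style scaling law sketch (assumption: constants are the fits
# reported by Hoffmann et al. 2022; real models deviate from this curve).

def loss(n_params, n_tokens, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Predicted pretraining loss as a function of parameters N and tokens D."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scale parameters (and data proportionally) 10x per step:
# each step buys a smaller loss reduction than the one before.
prev = None
for n in [1e9, 1e10, 1e11, 1e12]:
    l = loss(n, 20 * n)  # ~20 tokens per parameter, a common heuristic
    if prev is not None:
        print(f"N={n:.0e}: loss {l:.3f} (improvement {prev - l:.3f})")
    prev = l
```

Because the loss asymptotes toward the irreducible term E, every further 10x of compute purchases less improvement, which is the pattern driving the scaling-wall debate.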
The Scaling Wall: Gaps and Innovation Opportunities
Escalating costs, data scarcity, and diminishing gains mean that simply building larger AI models may no longer guarantee breakthroughs. In such a scenario, LLM developers will have to devise new approaches to training these models, for example by diversifying data types and rethinking training techniques.
This global challenge has a bearing on India’s AI ambitions. For India, where compute and data resources are relatively scarce, this scaling slowdown poses both a challenge and an opportunity. While the India AI Mission embodies smart priorities such as democratising compute resources and developing local datasets, looming scaling challenges could prove a roadblock. Realising these ambitions requires strong input from research and academia, and improved coordination between policymakers and startups. The scaling wall highlights systemic innovation gaps where sustained support is needed, not only in hardware but also in talent development, safety research, and efficient model design.
Way Forward
To truly harness AI’s transformative power, India must prioritise policy actions and ecosystem shifts that support smarter, safer, and context-rich research through the following measures:
- Driving Efficiency and Compute Innovation: Instead of relying on brute-force scaling, India should invest in research and startups working on efficient architectures, energy-conscious training methods, and compute optimisation.
- Investing in Multimodal and Diverse Data: While indigenous datasets are being developed under the India AI Mission through AI Kosha, they must be ethically sourced from speech, images, video, sensor data, and regional content, apart from text, to enable context-rich AI models truly tailored to Indian needs.
- Addressing Core Problems for Trustworthy AI: LLMs from all major developers, including OpenAI, xAI's Grok, and DeepSeek, suffer from unreliability, hallucinations, and bias, since they are primarily built by scaling datasets and parameters, which has inherent limits. India should invest in capabilities to address these issues and design more trustworthy LLMs.
- Supporting Talent Development and Training: Despite its substantial AI talent pool, India faces an impending demand-supply gap. It will need to launch national programs and incentives to upskill engineers, researchers, and students in advanced AI skills such as model efficiency, safety, interpretability, and new training paradigms.
Conclusion
The AI scaling wall debate is a reminder that the future of LLMs will depend not on ever-larger models but on smarter, safer, and more sustainable innovation. A new generation of AI is approaching us, and India can help shape its future. The country’s AI Mission and startup ecosystem are well-positioned to lead this shift by focusing on localised needs, efficient technologies, and inclusive growth, if implemented effectively. How India approaches this new set of challenges and translates its ambitions into action, however, remains to be seen.
References
- https://blogs.nvidia.com/blog/ai-scaling-laws/
- https://www.marketingaiinstitute.com/blog/scaling-laws-ai-wall
- https://fortune.com/2025/02/19/generative-ai-scaling-agi-deep-learning/
- https://indiaai.gov.in/
- https://www.deloitte.com/in/en/about/press-room/bridging-the-ai-talent-gap-to-boost-indias-tech-and-economic-impact-deloitte-nasscom-report.html

Introduction
February marks the beginning of Valentine’s Week, the time when we transcend from the season of smog to the season of love. This is a time when young people are more active on social media and dating apps in the hope of finding a partner to celebrate the occasion. Dating apps, to capitalise on the occasion, launch special offers and campaigns to attract new users and keep current users engaged in the pursuit of their ideal partner. However, with the growing popularity of online dating, the tactics of cybercriminals have also penetrated this sphere. Scammers are becoming increasingly sophisticated at manipulating individuals on digital platforms, often engaging in scams, identity theft, and financial fraud under the guise of romance. As love fills the air, netizens must stay vigilant and cautious while searching for a connection online and not fall into a scammer’s trap.
Here Are Some CyberPeace Tips To Avoid Romance Scams
- Recognize Red Flags of Romance Scams:- Online dating has made it easier to connect with people, but it has also become a tool for scammers to exploit the emotions of netizens for financial gain. They create fake profiles, build trust quickly, and then manipulate victims into sending money. Understanding their tactics can help you stay safe.
- Warning Signs of a Romance Scam:- If someone expresses strong feelings too soon, it’s a red flag. Scammers often claim to have fallen in love within days or weeks, despite never meeting in person. They use emotional pressure to create a false sense of connection. Their messages might seem off. Scammers often copy-paste scripted responses, making conversations feel unnatural. Poor grammar, inconsistencies in their stories, or vague answers are warning signs. Asking for money is the biggest red flag. They might have an emergency, a visa issue, or an investment opportunity they want you to help with. No legitimate relationship starts with financial requests.
- Manipulative Tactics Used by Scammers:- Scammers use love bombing to gain trust. They flood you with compliments, calling you their soulmate or destiny. This is meant to make you emotionally attached. They often share fake sob stories, anything from losing a loved one to facing a medical emergency or being stuck in a foreign country. These are designed to make you feel sorry for them and more willing to help. Some scammers even pretend to be wealthy, posing as investors or successful business owners and showing off a fabricated luxury lifestyle to appear credible. Eventually, they’ll try to lure you into a fake investment. They create a sense of urgency. Whether it’s sending money, investing, or sharing personal details, scammers will push you to act fast. This prevents you from thinking critically or verifying their claims.
- Financial Frauds Linked to Romance Scams:- Romance scams have often led to financial fraud. Victims may be tricked into sending money directly or get roped into elaborate schemes. One common scam is the disappearing date, where someone insists on dining at an expensive restaurant, only to vanish before the bill arrives. Crypto scams are another major concern. Scammers convince victims to invest in fake cryptocurrency platforms, promising huge returns. Once the money is sent, the scammer disappears, leaving the victim with nothing.
- AI & Deepfake Risks in Online Dating:- Advancements in AI have made scams even more convincing. Scammers use AI-generated photos to create flawless, yet fake, profile pictures. These images often lack natural imperfections, making them hard to spot. Deepfake technology is also being used for video calls. Some scammers use pre-recorded AI-generated videos to fake live interactions. If a person’s expressions don’t match their words or their screen glitches oddly, it could be a deepfake.
- How to Stay Safe:-
- Always verify the identities of those who contact you on these sites. A simple reverse image search can reveal if someone’s profile picture is stolen.
- Avoid clicking suspicious links or downloading unknown apps sent by strangers. These can be used to steal your personal information.
- Trust your instincts. If something feels off, it probably is. Stay alert and protect yourself from online romance scams.
Best Online Safety Practices
- Prioritize Social Media Privacy:- Review and update your privacy settings regularly. Think before you share and be mindful of who can see your posts/stories. Avoid oversharing personal details.
- Report Suspicious Activities:- Even if a scam attempt doesn’t succeed, report it. The Indian Cyber Crime Coordination Centre’s (I4C) 'Report Suspect' feature allows users to flag potential threats, helping prevent cybercrimes.
- Think Before You Click or Download:- Avoid clicking on unknown links or downloading attachments from unverified sources. These can be traps leading to phishing scams or malware attacks.
- Protect Your Personal Information:- Be cautious with whom and how you share your sensitive details online. Cybercriminals exploit even the smallest data points to orchestrate fraud.