#FactCheck: Misleading Clip of Nepal Crash Shared as Air India’s AI-171 Ahmedabad Accident
Executive Summary:
A viral video circulating on social media platforms claims to show the final moments inside the cabin of the Air India flight that crashed near Ahmedabad on June 12, 2025. The claim is false: on further research, the footage was found to originate from the Yeti Airlines Flight 691 crash that occurred in Pokhara, Nepal, on January 15, 2023. Full details follow in the report below.

Claim:
Viral videos circulating on social media claim to show the final moments inside Air India flight AI‑171 before it crashed near Ahmedabad on June 12, 2025. The footage appears to have been recorded by a passenger during the flight and is being shared as real-time visuals from the recent tragedy. Many users have believed the clip to be genuine and linked it directly to the Air India incident.


Fact Check:
To verify the viral video said to depict the final moments of Air India's AI-171, which crashed near Ahmedabad on 12 June 2025, we carried out a comprehensive reverse image search and keyframe analysis. The footage actually dates back to January 2023 and shows Yeti Airlines Flight 691, which crashed in Pokhara, Nepal. The cabin and passenger details in the viral clip match the original livestream recorded by a passenger aboard the Nepal flight, confirming that the video is being reused out of context.
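For readers who want to attempt this kind of verification themselves, the short sketch below shows one way to extract keyframes from a locally saved copy of a clip so they can be submitted to a reverse image search engine. The filename and frame interval are illustrative, and this is not the exact tooling used in the fact check.

```python
import cv2  # pip install opencv-python

VIDEO_PATH = "viral_clip.mp4"   # hypothetical local copy of the clip
FRAME_INTERVAL = 30             # roughly one frame per second for 30 fps footage

cap = cv2.VideoCapture(VIDEO_PATH)
frame_index = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:                  # end of video or unreadable file
        break
    if frame_index % FRAME_INTERVAL == 0:
        # Save the keyframe as a JPEG for manual reverse image search
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1
cap.release()
print(f"Extracted {saved} keyframes for reverse image search")
```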

Moreover, well-established news organisations, including the New York Post and NDTV, have published reports confirming that the video originated from the 2023 Nepal plane crash and has no connection to the recent Air India incident. The Press Information Bureau (PIB) also released a clarification dismissing the video as disinformation. Earlier reliable reports, the visual evidence, and the reverse-search verification all agree that the viral video is falsely attributed to the AI-171 tragedy.


Conclusion:
The viral footage does not show the AI-171 crash near Ahmedabad on 12 June 2025. It is an unrelated, previously recorded livestream from the January 2023 Yeti Airlines crash in Pokhara, Nepal, falsely repurposed as breaking news. It is essential to rely on verified and credible news agencies and to refer to official investigation reports when discussing such sensitive events.
- Claim: A dramatic clip of passengers inside a crashing plane is being falsely linked to the recent Air India tragedy in Ahmedabad.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
Artificial Intelligence (AI) driven autonomous weapons are reshaping military strategy, acting as force multipliers that can independently assess threats, adapt to dynamic combat environments, and execute missions with minimal human intervention, pushing the boundaries of modern warfare tactics. AI has become a critical component of modern warfare and has simultaneously impacted many other spheres of a technology-driven world. Nations often prioritise defence for significant investment, supporting its growth and modernisation. AI has become a prime area of investment and development for technological superiority in defence forces. India’s focus on defence modernisation is evident through initiatives like the Defence AI Council and the Task Force on Strategic Implementation of AI for National Security.
The defining requirement of Autonomous Weapons Systems (AWS) is “autonomy”: the ability to perform their functions without direction or input from a human actor. AI is not a prerequisite for the functioning of AWS, but, when incorporated, it can further enable such systems. While militaries seek to apply increasingly sophisticated AI and automation to weapons technologies, several questions arise. Ethical concerns are the issue raised most prominently by many states, international organisations, civil society groups and distinguished public figures.
Ethical Concerns Surrounding Autonomous Weapons
The delegation of life-and-death decisions to machines is the central ethical dilemma surrounding AWS. A major concern is the lack of human oversight, which raises questions about accountability: what if an AWS malfunctions or violates international law, potentially committing war crimes? This ambiguity fuels debate over the dangers of entrusting lethal force to non-human actors. Additionally, AWS poses humanitarian risks, particularly to civilians, as flawed algorithms could make disastrous decisions. The dehumanisation of warfare and the violation of human dignity are also critical concerns, as targets become reduced to mere data points. The impact on operators’ moral judgment and empathy is equally troubling, alongside the risk of algorithmic bias leading to unjust or disproportionate targeting. These ethical challenges are deeply concerning.
Balancing Ethical Considerations and Innovations
It is immaterial how advanced a computer becomes at simulating human emotions such as compassion, empathy or altruism; the machine will only be imitating them, not experiencing them as a human would. A potential solution to this ethical predicament is a ‘human-in-the-loop’ or ‘human-on-the-loop’ semi-autonomous system, which would act as a compromise between autonomy and accountability.
A “human-on-the-loop” system is designed to provide human operators with the ability to intervene and terminate engagements before unacceptable levels of damage occur. For example, defensive weapon systems could autonomously select and engage targets based on their programming, during which a human operator retains full supervision and can override the system within a limited period if necessary.
In contrast, a “human-in-the-loop” system is intended to engage individual targets or specific target groups pre-selected by a human operator. Examples include homing munitions that, once launched to a particular target location, search for and attack preprogrammed categories of targets within the area.
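As a purely illustrative aid, and not a description of any real weapon system, the sketch below expresses the generic “human-on-the-loop” supervisory pattern in a domain-neutral way: an automated process proposes an action and proceeds only if a human supervisor does not veto it within a fixed window. All names and timings are hypothetical.

```python
import queue
import threading

VETO_WINDOW_SECONDS = 5.0   # hypothetical supervision window

def run_with_human_on_the_loop(proposed_action: str, veto_queue: queue.Queue) -> bool:
    """Execute the proposed action only if no human veto arrives within the window."""
    print(f"System proposes: {proposed_action} (vetoable for {VETO_WINDOW_SECONDS}s)")
    try:
        reason = veto_queue.get(timeout=VETO_WINDOW_SECONDS)  # block until veto or timeout
        print(f"Human veto received ({reason}); action aborted.")
        return False
    except queue.Empty:
        print(f"No veto received; executing: {proposed_action}")
        return True

# Usage: a supervisor vetoes the proposal two seconds into the window.
vetoes = queue.Queue()
threading.Timer(2.0, lambda: vetoes.put("operator override")).start()
run_with_human_on_the_loop("flag object for automated action", vetoes)
```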
International Debate and Regulatory Frameworks
The regulation of autonomous weapons that employ AI is a pressing global issue because of the ethical, legal, and security concerns it raises. Several efforts to regulate such weapons are under discussion at the international level. One such example is the initiative under the United Nations Convention on Certain Conventional Weapons (CCW), where member states, India being an active participant, debate the limits of AI in warfare. However, existing international laws, such as the Geneva Conventions, offer legal protection by prohibiting indiscriminate attacks and mandating the distinction between combatants and civilians. The key challenge lies in achieving global consensus, as different nations have varied interests and levels of technological advancement. Some countries advocate a preemptive ban on fully autonomous weapons, while others prioritise military innovation. The complexity of defining human control and accountability further complicates efforts to establish binding regulations, making global cooperation both essential and challenging.
The Future of AI in Defence and the Need for Stronger Regulations
The evolution of autonomous weapons poses complex ethical and security challenges. As AI-driven systems become more advanced, the risk of their misuse in warfare also grows, with lethal decisions potentially made without human oversight. Proactive regulation is crucial to prevent unethical uses of AI, such as indiscriminate attacks or violations of international law. Setting clear boundaries on autonomous weapons now can help avoid future humanitarian crises. India’s defence policy already recognises the importance of regulating the use of AI and AWS, as evidenced by the formation of bodies like the Defence AI Project Agency (DAIPA) for enabling AI-based processes in defence organisations. Global cooperation is essential for creating robust regulations that balance technological innovation with ethical considerations. Such collaboration would ensure that autonomous weapons are used responsibly, protecting civilians and combatants, while encouraging innovation within a framework prioritising human dignity and international security.
Conclusion
AWS and AI in warfare present significant ethical, legal, and security challenges. While these technologies promise enhanced military capabilities, they raise concerns about accountability, human oversight, and humanitarian risks. Balancing innovation with ethical responsibility is crucial, and semi-autonomous systems offer a potential compromise. India’s efforts to regulate AI in defence highlight the importance of proactive governance. Global cooperation is essential in establishing robust regulations that ensure AWS is used responsibly, prioritising human dignity and adherence to international law, while fostering technological advancement.
References
- https://indianexpress.com/article/explained/reaim-summit-ai-war-weapons-9556525/
Introduction
In today's digital economy, data is not only a business asset but also the fuel for innovation, decision-making, and consumer trust. However, the digitisation of services has made personal and sensitive data a top target for cybercriminals. The stakes are high: a data breach can cost millions in fines, damage reputation and devastate consumer confidence. Regulatory compliance and data protection have therefore become a strategic imperative.
From the General Data Protection Regulation (GDPR) in the EU and the Digital Personal Data Protection (DPDP) Act in India to sector-specific regulations like HIPAA for healthcare in the US, companies are now subject to a web of data protection and compliance laws. The challenge is to balance compliance efforts with strong security, a balance that demands both policy discipline and technical resilience. This blog examines the key pillars, emerging trends and actionable best practices for mastering data protection and compliance in 2025 and beyond.
Why Data Protection and Compliance Matter More Than Ever
Data protection isn't just about keeping fines at bay; it's about preserving the relationship with customers, partners and regulators. A 2024 IBM report puts the average cost of a data breach at more than USD 4.5 million, with regulatory fines constituting a large portion of that cost. Beyond the economics, breaches often result in intellectual property loss, customer churn and long-term brand erosion. Compliance ensures organisations meet the legislative requirements for collecting, storing, transferring and processing personal and sensitive information. Failure to comply can lead to serious penalties: under the GDPR, fines can reach 4% of a company's global annual turnover or €20 million, whichever is higher. In regulated sectors like banking and healthcare, compliance breaches can also lead to the suspension of licences.
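As a quick illustration of the GDPR ceiling mentioned above, the snippet below computes the higher of 4% of global annual turnover or €20 million; the turnover figure is purely illustrative.

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine for the most serious infringements:
    the higher of 4% of global annual turnover or EUR 20 million."""
    return max(0.04 * annual_turnover_eur, 20_000_000)

# Illustrative turnover of EUR 1.2 billion gives a ceiling of EUR 48 million.
print(f"EUR {gdpr_max_fine(1_200_000_000):,.0f}")
```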
Important Regulatory Frameworks Shaping 2025
- GDPR and Its Global Ripple Effect
GDPR was enacted in 2018 and continues to have a ripple effect on privacy legislation worldwide. Its tenets of lawfulness, transparency, data minimisation and purpose limitation have been replicated in many jurisdictions such as Brazil's LGPD and South Korea's PIPA.
- India's DPDP Act
The DPDP Act, 2023 places strong emphasis on consent-based processing of data, transparent notice requirements and the responsibilities of data fiduciaries. With penalties of up to INR 250 crore for breaches of its obligations, it is among the most impactful laws for digital personal data protection.
- Sectoral Regulations
- HIPAA for healthcare information in the US.
- PCI DSS for payment card security.
- DORA (Digital Operational Resilience Act) in the EU for financial organisations.
- These industry-specific frameworks create overlapping compliance responsibilities, making cross-enterprise compliance programmes vital.
Key Pillars of a Sound Data Protection & Compliance Program
- Data Governance and Classification
Knowing what data you store, where it is stored and who can access it is the keystone of compliance. Organisations need data classification policies that group information by sensitivity and impose more rigorous controls on the most sensitive data.
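A minimal sketch of how such a classification policy might be expressed in code is shown below; the field names and sensitivity labels are hypothetical and would come from an organisation's own data inventory.

```python
# Hypothetical sensitivity scheme and field names; a real inventory would be richer.
SENSITIVITY_RULES = {
    "email": "SENSITIVE",
    "phone": "SENSITIVE",
    "national_id": "SENSITIVE",
    "purchase_history": "INTERNAL",
    "product_catalogue": "PUBLIC",
}

def classify_record(record: dict) -> dict:
    """Tag each field with a sensitivity label; unknown fields default to INTERNAL."""
    return {field: SENSITIVITY_RULES.get(field, "INTERNAL") for field in record}

record = {"email": "a@example.com", "purchase_history": ["sku-1"], "notes": "call back"}
print(classify_record(record))
# {'email': 'SENSITIVE', 'purchase_history': 'INTERNAL', 'notes': 'INTERNAL'}
```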
- Security Controls and Privacy by Design
Strong technical controls such as encryption, multi-factor authentication and intrusion detection form the first line of defence. Privacy by design, integrated into product development, ensures compliance is considered from the outset rather than bolted on afterwards.
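As one concrete example of a privacy-by-design control, the sketch below encrypts a sensitive field before it is stored, using the cryptography library's Fernet recipe; key management (rotation, a key management service) is deliberately out of scope here, and the field value is illustrative.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()        # in production the key would live in a key manager
cipher = Fernet(key)

plaintext = b"national_id=ABCD1234"      # illustrative sensitive field
token = cipher.encrypt(plaintext)        # store only the ciphertext at rest
restored = cipher.decrypt(token)         # decrypt on an authorised read path

assert restored == plaintext
print("field encrypted and decrypted successfully")
```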
- Consent and Transparency
Contemporary data legislation emphasises informed consent. This entails simple, non-technical privacy notices, granular opt-in choices, and straightforward withdrawal options. Transparency builds trust and reduces legal risk.
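A minimal sketch of a consent record that supports granular opt-in and straightforward withdrawal might look like the following; the field names and purposes are illustrative rather than drawn from any specific statute.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                      # e.g. "marketing_email", "analytics"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        """Record a withdrawal; processing for this purpose must then stop."""
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def is_active(self) -> bool:
        return self.withdrawn_at is None

consent = ConsentRecord("user-42", "marketing_email", datetime.now(timezone.utc))
consent.withdraw()
print(consent.is_active)   # False
```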
- Incident Response and Breach Notification
Most laws demand timely breach notification; the GDPR requires reporting within 72 hours. A documented incident response plan helps meet legal deadlines and reduces harm.
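As a simple illustration, the snippet below computes the 72-hour notification deadline from the moment a breach is detected; the timestamps are illustrative.

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)   # GDPR-style reporting window

def notification_deadline(detected_at: datetime) -> datetime:
    """Latest time by which the supervisory authority should be notified."""
    return detected_at + NOTIFICATION_WINDOW

detected = datetime(2025, 3, 10, 14, 30, tzinfo=timezone.utc)   # illustrative timestamp
print(f"Detected {detected.isoformat()}, notify by {notification_deadline(detected).isoformat()}")
```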
- Employee Training and Awareness
Human error is the leading source of data breaches. Ongoing training in phishing prevention, password management, basic cyber hygiene and compliance requirements is crucial.
Upcoming Trends in 2025
- AI-Powered Compliance Monitoring
Organisations are embracing AI-powered solutions to systematically monitor data flows, identify policy breaches and automatically generate compliance reports. These solutions help close the loop between IT security teams and compliance officers.
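A heavily simplified sketch of this kind of monitoring is shown below: scanning outbound log lines for patterns that look like personal data. The patterns and log lines are illustrative, and production tools are far more sophisticated.

```python
import re

# Illustrative patterns only; real monitoring covers many more data types.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "indian_mobile": re.compile(r"\b[6-9]\d{9}\b"),
}

def find_policy_violations(log_lines):
    """Yield (line_number, pii_type) for every line that appears to contain personal data."""
    for lineno, line in enumerate(log_lines, start=1):
        for pii_type, pattern in PII_PATTERNS.items():
            if pattern.search(line):
                yield lineno, pii_type

logs = ["user login ok", "exported report to priya@example.com", "callback 9876543210"]
for lineno, pii_type in find_policy_violations(logs):
    print(f"line {lineno}: possible {pii_type} in outbound log")
```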
- Cross-Border Data Transfer Mechanisms
With increasingly severe regulations, companies are spending more on secure cross-border data transfer frameworks like Standard Contractual Clauses (SCCs) and Binding Corporate Rules (BCRs).
- Privacy-Enhancing Technologies (PETs)
Methods such as homomorphic encryption and differential privacy are picking up steam, enabling organisations to sift through datasets without revealing sensitive personal data.
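As a small illustration of one such PET, the sketch below adds Laplace noise to an aggregate count in the spirit of differential privacy; the epsilon value and the count are illustrative, and a real deployment would also track a privacy budget across queries.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Publish a count with Laplace(sensitivity / epsilon) noise instead of the exact value."""
    scale = sensitivity / epsilon
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# Illustrative: the published figure is close to, but not exactly, the true count of 1280.
print(round(dp_count(1280, epsilon=0.5), 2))
```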
- ESG and Data Ethics
Data handling is increasingly becoming a part of Environmental, Social and Governance (ESG) reporting. Ethical utilisation of customer data, not just compliance, has become a reputational differentiator.
Challenges in Implementation
Despite the existence of clear frameworks, data protection programmes face real challenges: jurisdictions impose competing requirements, and global compliance is becoming expensive. Emerging technologies such as generative AI bring privacy risks that legislation has not yet fully addressed, and small and micro enterprises often lack the budget and skills to implement enterprise-level compliance programmes. Navigating these challenges usually requires a risk-based strategy, allocating resources to the highest-impact areas and automating compliance chores wherever possible.
Best Practices for 2025 and Beyond
In 2025, regulatory compliance and data protection are no longer a precaution or a response to a breach but strategic drivers of resilience and trust. As regulatory scrutiny rises, cyber threats evolve, and consumer expectations grow, organisations need to integrate compliance into the very fabric of their operations. By bringing governance and technology together, organisations can break free from a "checklist" mentality and instead adopt a proactive, risk-sensitive approach. Ultimately, data protection is not just about staying out of trouble; it's about building the kind of trust that succeeds in the digital era.
References
- GDPR – Official EU Regulation Page: https://gdpr.eu
- India’s DPDP Act Overview – MeitY: https://www.meity.gov.in/data-protection-framework
- HIPAA – US Department of Health & Human Services: https://www.hhs.gov/hipaa
- PCI DSS Standards: https://www.pcisecuritystandards.org
- IBM Cost of a Data Breach Report 2024: https://www.ibm.com/reports/data-breach
- OECD – Privacy Guidelines: https://www.oecd.org/sti/privacy-guidelines

Introduction
Romance scams have been on the rise in India. A staggering 66 percent of individuals in India have been ensnared by the siren songs of deceitful online dating schemes. These are not the attempts of yesteryear but a new breed of scams, seamlessly weaving the threads of traditional deceit with the sinew of cutting-edge technologies such as generative AI and deepfakes. A report by Tenable highlights this rise: over 69 percent of Indians struggle to distinguish between artificial and authentic human voices, and scammers are using celebrity impersonations and platforms like Facebook to lure victims into a false sense of security.
The Romance Scam
A report by Tenable, the exposure management company, illuminates the disturbing evolution of these romance scams. It reveals a stark reality: AI-generated deepfakes have attained a level of sophistication where an astonishing 69 percent of Indians confess to struggling to discern between artificial and authentic human voices. This technological prowess has armed scammers with the tools to craft increasingly convincing personas, enabling them to perpetrate their nefarious acts with alarming success.
In 2023 alone, 43 percent of Indians reported falling victim to AI voice scams, with a staggering 83 percent of those targeted suffering financial loss. The scammers, like puppeteers, manipulate their digital marionettes with a deftness that is both awe-inspiring and horrifying. They have mastered the art of impersonating celebrities and fabricating personas that resonate with their targets, particularly preying on older demographics who may be more susceptible to their charms.
Social media platforms, which were once heralded as the town squares of the 21st century, have unwittingly become fertile grounds for these fraudulent activities. They lure victims into a false sense of security before the scammers orchestrate their deceitful symphonies. Chris Boyd, a staff research engineer at Tenable, issues a stern warning against the lure of private conversations, where the protective layers of security are peeled away, leaving individuals exposed to the machinations of these digital charlatans.
The Vulnerability of Individuals
The report highlights the vulnerability of certain individuals, especially those who are older, widowed, or experiencing memory loss. These individuals are systematically targeted by heartless criminals who exploit their longing for connection and companionship. The importance of scrutinising requests for money from newfound connections is underscored, as is the need for meticulous examination of photographs and videos for any signs of manipulation or deceit.
Increasing awareness and maintaining vigilance are our strongest weapons against these heartless manipulations, safeguarding love seekers from the treacherous web of AI-enhanced deception.
The landscape of love has been irrevocably altered by the prevalence of smartphones and the widespread proliferation of mobile internet. Finding love has morphed into a digital odyssey, with more and more Indians turning to dating apps like Tinder, Bumble, and Hinge. Yet, as with all technological advancements, there lurks a shadowy underbelly. The rapid adoption of dating sites has provided potential scammers with a veritable goldmine of opportunity.
It is not uncommon these days to hear tales of individuals who have lost their life savings to a person they met on a dating site or who have been honey-trapped and extorted by scammers on such platforms. A new study, titled 'Modern Love' and published by McAfee ahead of Valentine's Day 2024, reveals that such scams are rampant in India, with 39 percent of users reporting that their conversations with a potential love interest online turned out to be with a scammer.
The study also found that 77 percent of Indians have encountered fake profiles and photos that appear AI-generated on dating websites or apps or on social media, while 26 percent later discovered that they were engaging with AI-generated bots rather than real people. 'The possibilities of AI are endless, and unfortunately, so are the perils,' says Steve Grobman, McAfee’s Chief Technology Officer.
Steps to Safeguard
Scammers have not limited their hunting grounds to dating sites alone. A staggering 91 percent of Indians surveyed for the study reported that they, or someone they know, have been contacted by a stranger through social media or text message and began to 'chat' with them regularly. Cybercriminals exploit the vulnerability of those seeking love, engaging in long and sophisticated attempts to defraud their victims.
McAfee offers some steps to protect oneself from online romance and AI scams:
- Scrutinise any direct messages you receive from a love interest via a dating app or social media.
- Be on the lookout for consistent, AI-generated messages which often lack substance or feel generic.
- Avoid clicking on any links in messages from someone you have not met in person.
- Perform a reverse image search of any profile pictures used by the person (see the sketch after this list).
- Refrain from sending money or gifts to someone you haven’t met in person, even if they send you money first.
- Discuss your new love interest with a trusted friend. It can be easy to overlook red flags when you are hopeful and excited.
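For the reverse image search tip above, the sketch below shows one simple, local way to compare a profile photo against a suspected source image using a perceptual hash. The filenames are hypothetical, and a full reverse search still requires an engine such as Google Images or TinEye.

```python
from PIL import Image     # pip install pillow
import imagehash          # pip install imagehash

# Hypothetical filenames: the dating-profile photo and a stock image you suspect it copies.
profile = imagehash.average_hash(Image.open("profile_photo.jpg"))
suspect = imagehash.average_hash(Image.open("suspected_source.jpg"))

distance = profile - suspect          # Hamming distance between the perceptual hashes
print(f"hash distance: {distance}")
if distance <= 5:                     # small distances suggest the same underlying image
    print("Photos are likely the same image or a near-duplicate")
```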
Conclusion
The path is fraught with illusions, and only by arming oneself with knowledge and scepticism can one hope to find true connection without falling prey to the mirage of deceit. As we navigate this treacherous terrain, let us remember that the most profound connections are often those that withstand the test of time and the scrutiny of truth.
References
- https://www.businesstoday.in/technology/news/story/valentine-day-alert-deepfakes-genai-amplifying-romance-scams-in-india-warn-researchers-417245-2024-02-13
- https://www.indiatimes.com/amp/news/india/valentines-day-around-40-per-cent-indians-have-been-scammed-while-looking-for-love-online-627324.html
- https://zeenews.india.com/technology/valentine-day-deepfakes-in-romance-scams-generative-ai-in-scams-romance-scams-in-india-online-dating-scams-in-india-ai-voice-scams-in-india-cyber-security-in-india-2720589.html
- https://www.mcafee.com/en-us/consumer-corporate/newsroom/press-releases/2023/20230209.html