#FactCheck: Viral AI-generated image passed off as Air India Flight AI-171 on fire after the crash
Executive Summary:
A dramatic image circulating online, showing a Boeing 787 of Air India engulfed in flames after crashing into a building in Ahmedabad, is not a genuine photograph from the incident. Our research has confirmed it was created using artificial intelligence.

Claim:
Social media posts and forwarded messages allege that the image shows the actual crash of Air India Flight AI‑171 near Ahmedabad airport on June 12, 2025.

Fact Check:
To verify the authenticity of the viral image, we conducted a reverse image search and analysed it using AI-detection tools such as Hive Moderation. The image showed clear signs of manipulation: distorted details and inconsistent lighting. Hive Moderation flagged it as “Likely AI-generated”, confirming it was synthetically created and not a real photograph.

In contrast, verified visuals and information about the Air India Flight AI-171 crash have been published by credible news agencies such as The Indian Express and Hindustan Times and confirmed by aviation authorities. Authentic reports include on-ground video footage and official statements, none of which feature the viral image. This confirms that the circulating photo is unrelated to the actual incident.

Conclusion:
The viral photograph is an AI-generated fabrication, not a real depiction of the Ahmedabad crash. It does not represent factual visuals from the tragedy. It is essential to rely on verified images from credible news agencies and official investigation reports when discussing such sensitive events.
- Claim: An Air India Boeing aircraft crashed into a building near Ahmedabad airport
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
The Central Electricity Authority (CEA) has released the Draft Central Electricity Authority (Cyber Security in Power Sector) Regulations, 2024, inviting comments from stakeholders, including the general public, to be submitted by 10 September 2024. The new regulations are intended to make India's power sector more cyber-resilient and responsive in countering emerging cyber threats and safeguarding the nation's power infrastructure.
Key Highlights of the CEA’s New (Cyber Security in Power Sector) Regulations, 2024
- The Central Electricity Authority has framed the ‘Cyber Security in Power Sector Regulations, 2024’ in exercise of the powers conferred by sub-section (1) of Section 177 of the Electricity Act, 2003, in order to make regulations for measures relating to Cyber Security in the power sector.
- The scope of the regulations entails that they will apply to all Responsible Entities, Regional Power Committees, the Appropriate Commission, the Appropriate Government, Associated Power Sector Government Organizations, Training Institutes recognized by the Authority, the Authority itself, and Vendors.
- One key aspect of the proposed regulations is the establishment of a dedicated Computer Security Incident Response Team (CSIRT) for the power sector. This team will coordinate a unified cyber defense strategy throughout the sector, establish security frameworks, and serve as the main agency for handling incident response and recovery. The CSIRT will also be responsible for developing Standard Operating Procedures (SOPs), security policies, and best practices for incident response activities in consultation with CERT-In and NCIIPC. The detailed roles and responsibilities of the CSIRT are outlined under Chapter 2 of the said regulations.
- All responsible entities in the power sector, as mentioned under the scope of the regulations, are mandated to appoint a Chief Information Security Officer (CISO) and an alternate CISO, both of whom must be Indian nationals and senior management employees. The regulations specify that these officers must report directly to the CEO/Head of the Responsible Entity, emphasising the critical nature of the CISO's role in safeguarding the nation's power grid assets.
- All Responsible Entities shall establish an Information Security Division (ISD) dedicated to ensuring Cyber Security, headed by the CISO and operational around the clock. The schedule under the regulations specifies a minimum workforce of four officers, including the CISO, plus four officers/officials for shift operations; sufficient workforce and infrastructure support shall be ensured for the ISD. The detailed functions and responsibilities of the ISD are outlined under Chapter 5, regulation 10. Furthermore, the ISD shall be staffed by a sufficient number of officers holding valid certificates of successful completion of domain-specific Cyber Security courses.
- The regulations oblige entities to have a defined, documented, and maintained Cyber Security Policy approved by the Board or Head of the entity, as well as a Cyber Crisis Management Plan (CCMP) approved by higher management.
- As regards upskilling and empowerment, the regulations advocate organising periodic Cyber Security awareness programs and Cyber Security exercises, including mock drills and tabletop exercises.
CyberPeace Policy Outlook
CyberPeace Policy & Advocacy Vertical has submitted its detailed recommendations on the proposed ‘Cyber Security in Power Sector Regulations, 2024’ to the Central Electricity Authority, Government of India. We have advised on various aspects of the regulations, including their harmonisation with existing rules issued by CERT-In and NCIIPC, since it needs to be clarified which set of guidelines will prevail in case of any discrepancy. Additionally, we advised incorporating or modifying specific provisions under the regulations for a more robust framework. We have also emphasised legal mandates and penalties for non-compliance with cyber security requirements, so that the regulations not only act as guiding principles but also provide stringent measures in case of non-compliance.
Digitisation in Agriculture
The traditional way of doing agriculture has undergone massive digitization in recent years, whereby several agricultural processes have been linked to the Internet. This globally prevalent transformation, driven by smart technology, encompasses the use of sensors, IoT devices, and data analytics to optimize and automate labour-intensive farming practices. Smart farmers in the country and abroad now leverage real-time data to monitor soil conditions, weather patterns, and crop health, enabling precise resource management and improved yields. The integration of smart technology in agriculture not only enhances productivity but also promotes sustainable practices by reducing waste and conserving resources. As a result, the agricultural sector is becoming more efficient, resilient, and capable of meeting the growing global demand for food.
Digitisation of Food Supply Chains
There has also been an increase in the digitisation of food supply chains across the globe, since it enables both suppliers and consumers to track each stage of food processing from farm to table and ensures the authenticity of food products. The latest generation of agricultural robots is being tested to minimise human intervention. AI-run processes are thought to mitigate labour shortages, improve warehousing and storage, and make transportation more efficient by running continuous evaluations and adjusting conditions in real time while increasing yield. The company Muddy Machines is currently trialling an autonomous asparagus-harvesting robot called Sprout that not only addresses labour shortages but also selectively harvests green asparagus, which traditionally requires careful picking. However, Chris Chavasse, co-founder of Muddy Machines, highlights that hackers and malicious actors could potentially break into the robot's servers and prevent it from operating, or drive it into a ditch or a hedge, thereby impeding core crop activities like seeding and harvesting. Hacking agricultural machinery also means damaging a farmer's produce and, in turn, profitability for the season.
Cascading Impacts on Food Supply Chains
A cyber attack on digitised agricultural processes has a cascading impact on online food supply chains. The risks are non-exhaustive, spilling over into poor protection of cargo in transit, increased manufacturing of counterfeit products, manipulation of data, poor warehousing facilities, and product-specific fraud, among others. Suppliers face additional impacts: some have delivered food products but failed to receive their payments. These cyber threats include malware (primarily ransomware), which accounts for 38% of attacks; Internet of Things (IoT) attacks, which comprise 29%; Distributed Denial of Service (DDoS) attacks; SQL injections; phishing attacks, etc.
Prominent Cyber Attacks and Their Impacts
Ransomware attacks are the most common form of cyber threat to food supply chains and may involve malicious contamination and the deliberate damage or destruction of tangible assets (like infrastructure) or intangible assets (like reputation and brand). In 2017, the NotPetya malware disrupted Maersk, the world's largest logistics company, and destroyed all end-user devices in more than 60 countries. Notably, NotPetya was also linked to the malfunction of freezers connected to control systems: the compromised control systems caused freezer failures and potential spoilage of food, highlighting the vulnerability of industrial control systems to cyber threats.
Further Case Studies
NotPetya also impacted Mondelez, the maker of Oreos, disrupting its email systems, file access, and logistics for weeks. Mondelez's insurance claim was denied because the NotPetya malware was described as a “war-like” action, falling outside the purview of the insurance coverage. In April 2021, over the Easter weekend, Bakker Logistiek, a Netherlands-based logistics company that provides air-conditioned warehousing and food transportation for Dutch supermarkets, experienced a ransomware attack. The incident disrupted its supply chain for several days, leaving empty shelves at Albert Heijn supermarkets, particularly for products such as packed and grated cheese. Despite the severity of the attack, the company restored operations within a week using backups. JBS, one of the world's biggest meat processing companies, paid $11 million in ransom via Bitcoin to resolve a cyber attack in the same year, after its computer networks were hacked, temporarily shutting down operations and endangering consumer data. The disruption threatened food supplies and risked higher food prices for consumers. Additional cascading impacts include reduced food security and hindrances in processing payments at retail stores.
Credible Threat Agents and Their Targets
Any cyber attack is usually carried out by credible threat agents, classified as either internal or external. Internal threat agents may include contractors, visitors to business sites, former or current employees, and individuals who work for suppliers. External threat agents may include activists, cyber criminals, terror cells, etc. These threat agents target large organisations owing to their greater ransom-paying capacity, but may also target small companies due to their vulnerability and inexperience, especially when such companies are migrating from analogue methods to digitised processes.
The Federal Bureau of Investigation warns that food and agricultural systems are most vulnerable to cyber security threats during critical planting and harvesting seasons. It noted an increase in cyber attacks against six agricultural cooperatives in 2021, with ancillary core functions such as food supply and distribution being impacted. As a result, cyber attacks may lead to mass shortages of food meant not only for human consumption but also for animals.
Policy Recommendations
To safeguard digital food supply chains, food defence emerges as one of the top countermeasures to prevent and mitigate the effects of intentional incidents and threats to the food chain. While food defence vulnerability assessments previously focused on product adulteration and food fraud, vulnerability assessments of agricultural technology are now equally relevant.
Food supply organisations must prioritise regular backups of data using air-gapped and password-protected offline copies, and ensure critical data copies cannot be modified or deleted from the main system. Blockchain-based food supply chain solutions may be deployed for this purpose: they provide tamper-resistant records and allow suppliers and even consumers to track produce. Companies like Ripe.io, Walmart Global Tech, Nestle and Wholechain deploy blockchain for food supply management since it provides overall process transparency, improves trust in transactions, enables traceable and tamper-resistant records, and allows accessibility and visibility of data provenance. Extensive recovery plans, with multiple copies of essential data and servers in secure, physically separated locations such as hard drives, storage devices, the cloud, or distributed ledgers, should be adopted, in addition to operations plans for critical functions in case of system outages. For core processes that are not labour-intensive, manual operation methods may be retained to reduce digital dependence. Network segmentation and timely updates or patches for operating systems, software, and firmware are additional steps that can be taken to secure smart agricultural technologies.
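The tamper-evidence that makes such ledgers useful for traceability comes from hash-chaining: each record embeds a cryptographic hash of the previous record, so altering any historical entry invalidates every hash after it. The following is a minimal, illustrative Python sketch of that property (not any vendor's actual implementation; the record fields are hypothetical):

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Hash a canonical (sorted-key) JSON serialisation so the digest is reproducible
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, data: dict) -> None:
    # Each new record embeds the hash of the previous record
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"data": data, "prev_hash": prev}
    record["hash"] = entry_hash({"data": data, "prev_hash": prev})
    chain.append(record)

def verify_chain(chain: list) -> bool:
    # Recompute every link; any tampered record breaks the chain from that point on
    prev = "0" * 64
    for record in chain:
        if record["prev_hash"] != prev:
            return False
        if record["hash"] != entry_hash({"data": record["data"], "prev_hash": prev}):
            return False
        prev = record["hash"]
    return True

# Hypothetical farm-to-table trail for one produce lot
chain = []
append_record(chain, {"stage": "harvest", "lot": "A12", "temp_c": 4})
append_record(chain, {"stage": "warehouse", "lot": "A12", "temp_c": 3})
append_record(chain, {"stage": "retail", "lot": "A12", "temp_c": 5})
assert verify_chain(chain)

# Silently editing an earlier stage is detected on the next verification pass
chain[1]["data"]["temp_c"] = 12
assert not verify_chain(chain)
```

Production systems add distributed consensus on top of this structure so that no single party can rewrite the chain, but the hash-linking above is what makes records traceable and tamper-resistant.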
References
- Muddy Machines website, Accessed 26 July 2024. https://www.muddymachines.com/
- “Meat giant JBS pays $11m in ransom to resolve cyber-attack”, BBC, 10 June 2021. https://www.bbc.com/news/business-57423008
- Marshall, Claire & Prior, Malcolm, “Cyber security: Global food supply chain at risk from malicious hackers.”, BBC, 20 May 2022. https://www.bbc.com/news/science-environment-61336659
- “Ransomware Attacks on Agricultural Cooperatives Potentially Timed to Critical Seasons.”, Private Industry Notification, Federal Bureau of Investigation, 20 April 2022. https://www.ic3.gov/Media/News/2022/220420-2.pdf
- Manning, Louise & Kowalska, Aleksandra. (2023). “The threat of ransomware in the food supply chain: a challenge for food defence”, Trends in Organized Crime. https://doi.org/10.1007/s12117-023-09516-y
- “NotPetya: the cyberattack that shook the world”, Economic Times, 5 March 2022. https://economictimes.indiatimes.com/tech/newsletters/ettech-unwrapped/notpetya-the-cyberattack-that-shook-the-world/articleshow/89997076.cms?from=mdr
- Abrams, Lawrence, “Dutch supermarkets run out of cheese after ransomware attack.”, Bleeping Computer, 12 April 2021. https://www.bleepingcomputer.com/news/security/dutch-supermarkets-run-out-of-cheese-after-ransomware-attack/
- Pandey, Shipra; Gunasekaran, Angappa; Kumar Singh, Rajesh & Kaushik, Anjali, “Cyber security risks in globalised supply chains: conceptual framework”, Journal of Global Operations and Strategic Sourcing, January 2020. https://www.researchgate.net/profile/Shipra-Pandey/publication/338668641_Cyber_security_risks_in_globalized_supply_chains_conceptual_framework/links/5e2678ae92851c89c9b5ac66/Cyber-security-risks-in-globalized-supply-chains-conceptual-framework.pdf
- Daley, Sam, “Blockchain for Food: 10 Examples to Know”, Built In, 22 March 2023. https://builtin.com/blockchain/food-safety-supply-chain

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of counterfeit explicit images and videos, and their use for sextortion has increased alarmingly.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images, as the underlying algorithms have become sophisticated enough to produce seamless, realistic manipulations. Moreover, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to strengthen efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and reinforce legal frameworks to address these emerging threats to individuals' privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.
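One common building block for the detection and removal tools mentioned above is perceptual hashing, which lets platforms recognise re-uploads of known abusive images even after resizing or recompression (production systems use robust schemes such as PhotoDNA or PDQ). The sketch below is a deliberately simplified average-hash over a raw greyscale pixel grid, for illustration only; the images and thresholds are made up:

```python
def average_hash(pixels, hash_size=8):
    """Simplified perceptual hash of a greyscale image (2D list of 0-255 values).

    Downscales to a hash_size x hash_size grid by block averaging, then emits
    one bit per cell: 1 if the cell is brighter than the overall mean, else 0.
    """
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(hash_size):
        for c in range(hash_size):
            # Average the block of source pixels mapped to this grid cell
            r0, r1 = r * h // hash_size, max((r + 1) * h // hash_size, r * h // hash_size + 1)
            c0, c1 = c * w // hash_size, max((c + 1) * w // hash_size, c * w // hash_size + 1)
            block = [pixels[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def hamming(h1, h2):
    # Number of differing bits; a small distance indicates a near-duplicate image
    return sum(a != b for a, b in zip(h1, h2))

# Synthetic 16x16 "image": bright left half, dark right half
img = [[200] * 8 + [30] * 8 for _ in range(16)]
# A lightly noised re-upload of the same image
noisy = [[min(255, p + (i + j) % 5) for j, p in enumerate(row)] for i, row in enumerate(img)]
# A structurally different image: dark top half, bright bottom half
other = [[30] * 16 for _ in range(8)] + [[200] * 16 for _ in range(8)]

assert hamming(average_hash(img), average_hash(noisy)) <= 4   # near-duplicate survives noise
assert hamming(average_hash(img), average_hash(other)) > 16   # different content is far apart
```

Hash matching only catches content already reported once; detecting never-before-seen deepfakes requires separate classifiers trained on generation artifacts, which is why both approaches are typically deployed together.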

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of this issue. Teenagers are especially vulnerable to AI sextortion due to their heavy use of social media platforms for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.