#FactCheck: Viral AI-generated image shared as Air India Flight AI-171 catching fire after collision
Executive Summary:
A dramatic image circulating online, showing a Boeing 787 of Air India engulfed in flames after crashing into a building in Ahmedabad, is not a genuine photograph from the incident. Our research has confirmed it was created using artificial intelligence.

Claim:
Social media posts and forwarded messages allege that the image shows the actual crash of Air India Flight AI‑171 near Ahmedabad airport on June 12, 2025.

Fact Check:
To validate the authenticity of the viral image, we conducted a reverse image search and analysed it using AI-detection tools such as Hive Moderation. The image showed clear signs of manipulation, including distorted details and inconsistent lighting. Hive Moderation flagged it as “Likely AI-generated”, confirming it was synthetically created and not a real photograph.
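Alongside reverse image search and detection tools, a quick complementary check (not part of the original fact-check workflow, shown here only as an illustrative sketch) is to inspect an image's EXIF metadata. Genuine camera photographs usually carry camera make, model, and timestamp tags, while AI-generated or heavily re-encoded images typically carry none; the absence of EXIF is not proof of fabrication on its own, but it is a useful signal. The filename below is hypothetical.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path):
    """Return a dict of human-readable EXIF tags, or an empty dict if none."""
    img = Image.open(path)
    exif = img.getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Usage sketch (hypothetical filename):
# tags = exif_summary("viral_crash_photo.jpg")
# if not tags:
#     print("No EXIF metadata: consistent with, but not proof of, a synthetic image")
```

Note that EXIF can be stripped by messaging apps, so this check is best used to corroborate, never to replace, reverse image search and detector results.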

In contrast, verified visuals and information about the Air India Flight AI-171 crash have been published by credible news agencies like The Indian Express and Hindustan Times, confirmed by the aviation authorities. Authentic reports include on-ground video footage and official statements, none of which feature the viral image. This confirms that the circulating photo is unrelated to the actual incident.

Conclusion:
The viral photograph is a fabrication, created by AI, not a real depiction of the Ahmedabad crash. It does not represent factual visuals from the tragedy. It’s essential to rely on verified images from credible news agencies and official investigation reports when discussing such sensitive events.
- Claim: An Air India Boeing aircraft crashed into a building near Ahmedabad airport
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
The government has announced that the new criminal laws will come into force on 1st July 2024. The Union Government notified that the three recently enacted criminal laws, viz. the Bharatiya Nyaya Sanhita 2023, the Bharatiya Nagarik Suraksha Sanhita 2023, and the Bharatiya Sakshya Adhiniyam 2023, will be effective from 1st July 2024. These new laws replace the Indian Penal Code 1860, the Code of Criminal Procedure 1973, and the Indian Evidence Act 1872 respectively.
On 23 February 2024, the Ministry of Home Affairs announced the effective dates of the new criminal laws as follows:
- Bharatiya Nyaya Sanhita, 2023: effective from 1-7-2024, except Section 106(2).
- Bharatiya Sakshya Adhiniyam, 2023: effective from 1-7-2024.
- Bharatiya Nagarik Suraksha Sanhita, 2023: effective from 1-7-2024, except the entry in the First Schedule relating to Section 106(2) of the Bharatiya Nyaya Sanhita, 2023.
Section 106(2) Will Not Be Enforced
Truckers protested against this provision, which prescribes up to ten years' imprisonment and a fine for anyone who causes death by rash and negligent driving of a vehicle not amounting to culpable homicide and escapes without reporting it to a police officer. For now, the government has assured truckers and transporters that subsection (2) of Section 106 of the Bharatiya Nyaya Sanhita (BNS) will not come into force. This subsection deals with fatal hit-and-run cases and prescribes a higher penalty for not informing the authorities immediately after an accident.
Section 106(2) of the Bharatiya Nyaya Sanhita, 2023 reads as follows:
106. Causing death by negligence.—
(2) Whoever causes death of any person by rash and negligent driving of vehicle not amounting to culpable homicide, and escapes without reporting it to a police officer or a Magistrate soon after the incident, shall be punished with imprisonment of either description of a term which may extend to ten years, and shall also be liable to fine.
Bharatiya Sakshya Adhiniyam, 2023
The Bharatiya Sakshya Adhiniyam 2023 replaces the Indian Evidence Act 1872. While retaining the fundamental principles of fair legal proceedings, the Act has been significantly modified to adapt to technological advancements and changes in societal norms. It recognises electronic records as primary evidence under Section 57. It also allows oral evidence to be given electronically, enabling remote testimony and ensuring that electronic records have the same legal effect as paper records.
Bharatiya Nagarik Suraksha Sanhita, 2023
The Bharatiya Nagarik Suraksha Sanhita, 2023 replaces the Code of Criminal Procedure, 1973, introducing certain modifications. Under Section 176, the Act requires forensic investigation for offences punishable with seven years' imprisonment or more. Section 530 of the BNSS, 2023 is a newly inserted provision that envisages the use of electronic communication and audio-video electronic means for trials, inquiries, proceedings, and the service and issuance of summons. Under Section 173, information of an offence may also be given by electronic communication, and the concept of Zero FIR is codified under Section 173(1), which mandates police stations to register an FIR irrespective of jurisdiction.
Conclusion
India's new criminal laws are set to take effect on 1st July 2024. These laws modernise the country's legal framework, replacing outdated statutes and incorporating technological advancements. The concerns from stakeholders led to the withholding of enforcement of Section 106(2) of Bharatiya Nyaya Sanhita 2023. The new criminal laws aim to address contemporary society's complexities while upholding justice and fairness.
References
- https://www.indiatoday.in/india/video/new-criminal-laws-to-come-into-effect-from-july-1-2506664-2024-02-24
- https://www.lawrbit.com/article/ipc-crpc-evidence-act-replaced-by-new-criminal-laws/

Introduction
Online dating platforms have become a common way for individuals to connect in today’s digital age. For many in the LGBTQ+ community, especially in environments where offline meeting spaces are limited, these platforms offer a way to find companionship and support. However, alongside these opportunities come serious risks. Users are increasingly being targeted by cybercrimes such as blackmail, sextortion, identity theft, and online harassment. These incidents often go unreported due to stigma and concerns about privacy. The impact of such crimes can be both emotional and financial, highlighting the need for greater awareness and digital safety.
Cybercrime On LGBTQ+ Dating Apps: A Threat Landscape
According to the NCRB 2022 report, cybercrimes increased by 24.4%, but community-specific data for queer users is unavailable. Cybercrimes targeting LGBTQ+ users tend to be highly organised and predatory. In several Indian cities, gangs actively monitor dating platforms to identify potential victims, especially young queer users and those discreet about their identity. Once contact is established, perpetrators follow a standard playbook: building false trust, coercing private exchanges, and then gradually escalating to blackmail and financial exploitation. Many queer victims are blackmailed with threats of exposure to their families or workplaces, sometimes by criminals posing as police officers demanding bribes. Fear of stigma and insensitive policing discourages reporting, and cybercriminal gangs exploit these gaps on dating apps. Despite some arrests, under-reporting persists, and activists are calling for stronger platform safety measures.
Types of Cyber Crimes against Queer Community on Dating Apps
- Romance scam or “Lonely hearts scam”: Scammers build trust with false stories (military, doctors, NGO workers) and quickly express strong romantic interest. They later request money, claiming emergencies. They often try to create multiple accounts to avoid profile bans.
- Sugar daddy scam: The fraudster offers money or an allowance in exchange for chatting, sending photos, or other interactions. They usually quote a specific amount and insist on uncommon payment gateways. After promising to send a large sum, they often invent a story such as: “My last sugar baby cheated me, so you must first send me a small amount to prove you are trustworthy.” This is simply a trick to make you send money first.
- Sextortion / Blackmail scam: Scammers record explicit chats or pretend to be underage, then threaten exposure unless you pay. Some target discreet users. Never send explicit content or pay blackmailers.
- Investment Scams: Scammers posing as traders or bankers convince victims to invest in fake opportunities. Some "flip" small amounts to build trust, then disappear with larger sums. Real investors won’t approach you on dating apps. Don’t share financial info or transfer money.
- Pay-Before-You-Meet scam: Scammer demands upfront payment (gift cards, gas money, membership fees) before meeting, then vanishes. Never pay anyone before meeting in person.
- Security app registration scam: Scammers ask you to register on fake "security apps" to steal your info, claiming it ensures your safety. Research apps before registering. Be wary of quick link requests.
- The Verification code scam: Scammers trick you into giving them SMS verification codes, allowing them to hijack your accounts. Never share verification codes with anyone.
- Third-party app links: Mass spam messages with suspicious links that steal info or infect devices. Don’t click suspicious links or “Google me” messages.
- Support message scam: Messages pretending to be from application support, offering prizes or fake shows to lure you to malicious sites.
Platform Accountability & Challenges
Online dating platforms in India are characterised by weak grievance redressal, poor takedown of abusive profiles, and limited moderation. Most platforms appoint grievance officers or offer an in-app complaint portal, but complaints often go unanswered or receive only automated, AI-generated responses. This highlights the gap between policy and enforcement on the ground.
Abusive or fake profiles, often used for scams, hate crimes, and outing LGBTQ+ individuals, remain active long after being reported. In India, organised extortion gangs have exploited such profiles to lure, assault, rob, and blackmail queer men. Moderation teams often struggle with backlogs and lack the resources needed to handle even the most serious complaints.
Despite offering privacy settings and restricting profile visibility, moderation practices in India are still weak, leaving large segments of users vulnerable to impersonation, catfishing, and fraud. The concept of pseudonymisation can help protect vulnerable communities, but it is difficult to distinguish authentic users from malicious actors without robust, privacy-respecting verification systems.
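The pseudonymisation idea mentioned above can be sketched in code. This is an illustrative example under assumptions of my own, not a description of any platform's actual system: the platform derives a stable pseudonym from a user's real identifier with a keyed hash, so moderation teams can link repeat abuse reports to the same account without ever handling the raw identity. The `SERVER_KEY` value below is a placeholder.

```python
import hashlib
import hmac

# Server-side secret; in practice this would live in a key-management
# system, never in source code (placeholder value for illustration).
SERVER_KEY = b"example-secret-key"

def pseudonym(real_identifier: str) -> str:
    """Derive a stable pseudonym from a real identifier via keyed hashing.

    The same input always yields the same pseudonym, so abusive accounts
    can be tracked across reports, but the identifier cannot be recovered
    or brute-forced without the server key.
    """
    return hmac.new(SERVER_KEY, real_identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Same user -> same pseudonym; different users -> different pseudonyms.
assert pseudonym("user@example.com") == pseudonym("user@example.com")
assert pseudonym("user@example.com") != pseudonym("other@example.com")
```

A keyed hash (HMAC) is used instead of a plain hash because identifiers like email addresses have low entropy: without the key, an attacker could enumerate likely addresses and reverse a plain hash by dictionary lookup.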
Since many LGBTQ+ individuals prefer to maintain their confidentiality, while others are more vocal about their identities, in either case, the data shared by an individual with an online dating platform must be vigilantly protected. The Digital Personal Data Protection Act, 2023, mandates the protection of personal data. Section 8(4) provides: “A Data Fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of this Act and the rules made thereunder.” Accordingly, digital platforms collecting such data should adopt the necessary technical and organisational measures to comply with data protection laws.
Recommendations
The Supreme Court has been proactive in this regard through decisions like Navtej Singh Johar v. Union of India, which decriminalised consensual same-sex relations; Justice K.S. Puttaswamy (Retd.) v. Union of India and Ors., which recognised the right to privacy as a fundamental right; and, most recently, the 2025 affirmation of the right to digital access. However, more robust legal frameworks are still required to protect LGBTQ+ people online.
There is a need for a dedicated commission or an empowered LGBTQ+ cell. Like the National Commission for Women (NCW), which safeguards the rights of women, such a commission would address community-specific issues, including cybercrime, privacy violations, and discrimination on digital platforms. It could serve as an institutional link between victims, digital platforms, the government, and the police. Dating platforms must also enhance their security features and grievance mechanisms to safeguard users.
Best Practices
Scammers profile and target individuals based on what they seek, whether love, sex, money, or companionship. Avoid financial transactions prompted by matches, such as signing up for third-party platforms or services. Scammers may also create accounts in others' names, which can then be used to access dating platforms and harm legitimate users. Be vigilant about sharing sensitive information, such as private images, contact details, or addresses, as scammers can use it to threaten you. Stay smart, stay cyber safe.
References
- https://www.hindustantimes.com/htcity/cinema/16yearold-queer-child-pranshu-dies-by-suicide-due-to-bullying-did-we-fail-as-a-society-mental-health-expert-opines-101701172202794.html#google_vignette
- https://www.ijsr.net/archive/v11i6/SR22617213031.pdf
- https://help.grindr.com/hc/en-us/articles/1500009328241-Scam-awareness-guide
- http://meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://mib.gov.in/sites/default/files/2024-02/IT%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20English.pdf
Introduction
Artificial Intelligence (AI)-driven autonomous weapons are reshaping military strategy, acting as force multipliers that can independently assess threats, adapt to dynamic combat environments, and execute missions with minimal human intervention, pushing the boundaries of modern warfare. AI has become a critical component of technology-driven warfare and now touches nearly every sphere of defence. Nations often prioritise defence for significant investment, supporting its growth and modernisation, and AI has become a prime area of investment in the pursuit of technological superiority. India's focus on defence modernisation is evident through initiatives like the Defence AI Council and the Task Force on Strategic Implementation of AI for National Security.
The defining requirement of Autonomous Weapons Systems (AWS) is “autonomy”: the ability to perform their functions without direction or input from a human actor. AI is not a prerequisite for the functioning of AWS, but, when incorporated, it can further enable such systems. As militaries apply increasingly sophisticated AI and automation to weapons technologies, several questions arise. Ethical concerns are the most prominent issue raised by many states, international organisations, civil society groups, and distinguished public figures.
Ethical Concerns Surrounding Autonomous Weapons
The delegation of life-and-death decisions to machines is the ethical dilemma that surrounds AWS. A major concern is the lack of human oversight, raising questions about accountability. What if AWS malfunctions or violates international laws, potentially committing war crimes? This ambiguity fuels debate over the dangers of entrusting lethal force to non-human actors. Additionally, AWS poses humanitarian risks, particularly to civilians, as flawed algorithms could make disastrous decisions. The dehumanisation of warfare and the violation of human dignity are critical concerns when AWS is in question, as targets become reduced to mere data points. The impact on operators’ moral judgment and empathy is also troubling, alongside the risk of algorithmic bias leading to unjust or disproportionate targeting. These ethical challenges are deeply concerning.
Balancing Ethical Considerations and Innovations
It is immaterial how advanced a computer becomes at simulating human emotions like compassion, empathy, or altruism: the machine will only be imitating them, not experiencing them as a human would. A potential answer to this ethical predicament is a 'human-in-the-loop' or 'human-on-the-loop' semi-autonomous system, which acts as a compromise between autonomy and accountability.
A “human-on-the-loop” system is designed to provide human operators with the ability to intervene and terminate engagements before unacceptable levels of damage occur. For example, defensive weapon systems could autonomously select and engage targets based on their programming, during which a human operator retains full supervision and can override the system within a limited period if necessary.
In contrast, a 'human-in-the-loop' system is intended to engage individual targets or specific target groups pre-selected by a human operator. Examples include homing munitions that, once launched to a particular target location, search for and attack preprogrammed categories of targets within the area.
International Debate and Regulatory Frameworks
The regulation of autonomous weapons that employ AI is a pressing global issue because of the ethical, legal, and security concerns it raises. Several international efforts to regulate such weapons are under discussion. One example is the initiative under the United Nations Convention on Certain Conventional Weapons (CCW), where member states, India being an active participant, debate the limits of AI in warfare. Existing international law, such as the Geneva Conventions, offers some protection by prohibiting indiscriminate attacks and mandating the distinction between combatants and civilians. The key challenge lies in achieving global consensus, as nations have varied interests and levels of technological advancement: some advocate a preemptive ban on fully autonomous weapons, while others prioritise military innovation. The difficulty of defining human control and accountability further complicates efforts to establish binding regulations, making global cooperation both essential and challenging.
The Future of AI in Defence and the Need for Stronger Regulations
The evolution of autonomous weapons poses complex ethical and security challenges. As AI-driven systems become more advanced, the risk of their misuse in warfare grows, with lethal decisions potentially made without human oversight. Proactive regulation is crucial to prevent unethical uses of AI, such as indiscriminate attacks or violations of international law, and setting clear boundaries on autonomous weapons now can help avoid future humanitarian crises. India's defence policy already recognises the importance of regulating AI and AWS, as evidenced by the formation of bodies like the Defence AI Project Agency (DAIPA) for enabling AI-based processes in defence organisations. Global cooperation is essential to create robust regulations that balance technological innovation with ethical considerations. Such collaboration would ensure that autonomous weapons are used responsibly, protecting civilians and combatants alike, while encouraging innovation within a framework that prioritises human dignity and international security.
Conclusion
AWS and AI in warfare present significant ethical, legal, and security challenges. While these technologies promise enhanced military capabilities, they raise concerns about accountability, human oversight, and humanitarian risks. Balancing innovation with ethical responsibility is crucial, and semi-autonomous systems offer a potential compromise. India’s efforts to regulate AI in defence highlight the importance of proactive governance. Global cooperation is essential in establishing robust regulations that ensure AWS is used responsibly, prioritising human dignity and adherence to international law, while fostering technological advancement.
References
- https://indianexpress.com/article/explained/reaim-summit-ai-war-weapons-9556525/