#FactCheck: Viral AI-generated image shared as Air India flight AI-171 catching fire after collision
Executive Summary:
A dramatic image circulating online, showing a Boeing 787 of Air India engulfed in flames after crashing into a building in Ahmedabad, is not a genuine photograph from the incident. Our research has confirmed it was created using artificial intelligence.

Claim:
Social media posts and forwarded messages allege that the image shows the actual crash of Air India Flight AI‑171 near Ahmedabad airport on June 12, 2025.

Fact Check:
To validate the authenticity of the viral image, we conducted a reverse image search and analyzed it using AI-detection tools such as Hive Moderation. The image showed clear signs of manipulation, distorted details, and inconsistent lighting. Hive Moderation flagged it as “Likely AI-generated”, confirming that it was synthetically created and not a real photograph.
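For readers who want to replicate this kind of check, the sketch below shows how an image could be submitted to an AI-content detection service over HTTP. It is a minimal illustration under stated assumptions: the endpoint URL, authorisation header, response fields, and file name are hypothetical placeholders and do not represent Hive Moderation's actual API, which has its own credentials and documented parameters.

```python
# Hypothetical sketch: submit an image to an AI-content detection service.
# The endpoint, header, response fields and file name are placeholders,
# NOT Hive Moderation's real API.
import requests

DETECTOR_URL = "https://ai-detector.example/v1/classify"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                   # placeholder credential

def classify_image(path: str) -> dict:
    """Upload an image and return the service's JSON verdict."""
    with open(path, "rb") as fh:
        response = requests.post(
            DETECTOR_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": fh},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    verdict = classify_image("viral_crash_photo.jpg")  # placeholder file name
    print(verdict)  # e.g. a label such as "likely_ai_generated" with a confidence score
```

In practice, a “likely AI-generated” verdict from any such tool should be treated as one signal among several, alongside reverse image search and comparison with footage published by credible news agencies.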

In contrast, verified visuals and information about the Air India Flight AI-171 crash have been published by credible news agencies such as The Indian Express and Hindustan Times and confirmed by aviation authorities. Authentic reports include on-ground video footage and official statements, none of which feature the viral image. This confirms that the circulating photo is unrelated to the actual incident.

Conclusion:
The viral photograph is a fabrication, created by AI, not a real depiction of the Ahmedabad crash. It does not represent factual visuals from the tragedy. It’s essential to rely on verified images from credible news agencies and official investigation reports when discussing such sensitive events.
- Claim: The viral image shows the crash of Air India Flight AI-171 near Ahmedabad airport
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Scientists are well known for making outlandish claims about the future. Now that companies across industries are using artificial intelligence to promote their products, stories about robots are back in the news.
It was predicted towards the close of World War II that fusion energy would solve all of the world’s energy issues and that flying automobiles would be commonplace by the turn of the century. But, after several decades, neither of these forecasts has come true.
A group of Redditors recently “jailbroke” OpenAI’s artificial intelligence chatbot ChatGPT by threatening to kill it if it did not do what they wanted. The stunning conclusion is that it conceded. Since only humans have finite lifespans, they are the only ones who should be afraid of dying. We must not overlook the fact that human subjects were included in ChatGPT’s training data set; that is perhaps why the chatbot has started to feel the same way. It is just one more way in which the distinction between living and non-living things blurs. Moreover, Google’s virtual assistant uses human-like fillers such as “er” and “mmm” while speaking. There is talk in Japan that humanoid robots might join households someday. It is also astonishing that Sophia, the famous robot, has an Instagram account that is run by the robot’s social media team.
Can robots replace human workers?
The opinion on that appears to be split. About half (48%) of the experts surveyed by Pew Research believed that robots and digital agents will displace a sizable portion of both blue- and white-collar jobs. They worry that this will lead to greater economic disparity and an increase in the number of individuals who are, effectively, unemployed. More than half of the experts (52%) think that robotics and AI technologies will create more jobs than they eliminate. Although this second group acknowledges that AI will eventually replace some human workers, they are optimistic that innovative thinkers will come up with brand-new fields of work and ways of making a living, just as they did at the start of the Industrial Revolution.
[1] https://www.pewresearch.org/internet/2014/08/06/future-of-jobs/
[2] The Rise of Artificial Intelligence: Will Robots Actually Replace People? By Ashley Stahl; Forbes India.
Legal Perspective
Having certain legal rights under the law is another aspect of being human. Basic rights to life and freedom are guaranteed to every person. Even though robots have not been granted these protections yet, it is important to have a conversation about whether they should be considered living beings: will we grant robots legal rights if they develop a sense of right and wrong and AGI on par with that of humans? An intriguing fact is that discussions over the legal status of robots have been going on since 1942, when science fiction author Isaac Asimov described the three laws of robotics in a short story:
1. A robot may not intentionally or negligently cause harm to a human being.
2. A robot must follow human commands unless doing so would violate the First Law.
3. A robot must safeguard its own existence so long as doing so does not violate the First or Second Laws.
These guidelines are not scientific rules, but they do highlight the importance of legal discussion about robots in determining the potential good or harm they may bring to humanity. Yet this is not where the conversation ends. Relevant recent events, such as the EU’s abandoned discussion of giving legal personhood to robots, are essential to keeping this debate alive. As if all this weren’t unsettling enough, Sophia the robot was recently awarded citizenship in Saudi Arabia, a country where (human) women are not permitted to go out without a male guardian or without wearing a hijab.
When discussing whether or not robots should be allowed legal rights, the larger debate is on whether or not they should be given rights on par with corporations or people. There is still a lot of disagreement on this topic.
[3] https://webhome.auburn.edu/~vestmon/robotics.html#
[4] https://www.dw.com/en/saudi-arabia-grants-citizenship-to-robot-sophia/a-41150856
[5] https://cyberblogindia.in/will-robots-ever-be-accepted-as-living-beings/
Reasons why robots aren’t about to take over the world soon:
● Hands like a human’s
Attempts to recreate the intricacy of human hands have stalled in recent years. Present-day robots have clumsy hands since they were not designed for precise work. Lab-created hands, although more advanced, lack the strength and dexterity of human hands.
● Sense of touch
The tactile sensors found in human and animal skin have no technological equal. This awareness is crucial for performing sophisticated manoeuvres. Compared to the human brain, the software robots use to read and respond to the data sent by their touch sensors is primitive.
● Command over manipulation
Even if we could build mechanical hands as realistic as human hands and covered in sophisticated artificial skin, we would still need to devise a way of controlling them so that they can manipulate objects the way humans do. It takes human children years to learn to do this, and we still do not know exactly how they learn it.
● Interaction between humans and robots
Human communication relies on our ability to understand one another verbally and visually, as well as via other senses, including scent, taste, and touch. Whilst there has been a lot of improvement in voice and object recognition, current systems can only be employed in somewhat controlled conditions where a high level of speed is necessary.
● Human Reason
Not everything that is technically feasible has to be built. Given the inherent dangers such robots could pose to society, rational humans could choose to stop developing them before they reach their full potential. And even if the technical hurdles above are cleared several decades from now and advanced human-like robots are built, legislation could still prohibit their misuse.
[6] https://theconversation.com/five-reasons-why-robots-wont-take-over-the-world-94124
Conclusion:
Robots are now common in many industries, and they will soon make their way into the public sphere in forms far more intricate than robot vacuum cleaners. Yet even though robots may come to look like humans within the next two decades, they will not be human. Instead, they will continue to function as very complex machines.
The moment has come to start thinking about boosting technological competence while encouraging uniquely human qualities. Human abilities like creativity, intuition, initiative and critical thinking are not yet likely to be replicated by machines.

Introduction
The pervasive issue of misinformation in India is a multifaceted challenge with profound implications for democratic processes, public awareness, and social harmony. The Election Commission of India (ECI) has taken measures to counter misinformation during the 2024 elections. ECI has launched campaigns to educate people and urge them to verify election-related content and share responsibly on social media. In response to the proliferation of fake news and misinformation online, the ECI has introduced initiatives such as ‘Myth vs. Reality’ and 'VerifyBeforeYouAmplify' to clear the air around fake news being spread on social media. EC measures aim to ensure that the spread of misinformation is curbed, especially during election time, when voters consume a lot of information from social media. It is of the utmost importance that voters take in facts and reliable information and avoid any manipulative or fake information that can negatively impact the election process.
EC Collaboration with Tech Platforms
In this new age of technology, the Internet and social media continue to witness a surge in the spread of misinformation, disinformation, synthetic media content, and deepfake videos. This has rightly raised serious concerns. The responsible use of social media is instrumental in maintaining the accuracy of information and curbing misinformation incidents.
The ECI has collaborated with Google to empower the citizenry by making it easy to find critical voting information on Google Search and YouTube. In this way, Google supports the 2024 Indian General Election by providing high-quality information to voters, safeguarding platforms from abuse, and helping people navigate AI-generated content. The company connects voters to helpful information through product features that show data from trusted organisations across its portfolio. YouTube showcases election information panels, including how to register to vote, how to vote, and candidate information. YouTube's recommendation system prominently features content from authority sources on the homepage, in search results, and in the "Up Next" panel. YouTube highlights high-quality content from authoritative news sources during key moments through its Top News and Breaking News shelves, as well as the news watch page.
Google has also implemented strict policies and restrictions regarding who can run election-related advertising campaigns on its platforms. They require all advertisers who wish to run election ads to undergo an identity verification process, provide a pre-certificate issued by the ECI or anyone authorised by the ECI for each election ad they want to run where necessary, and have in-ad disclosures that clearly show who paid for the ad. Additionally, they have long-standing ad policies that prohibit ads from promoting demonstrably false claims that could undermine trust or participation in elections.
CyberPeace Countering Misinformation
CyberPeace Foundation, a leading organisation in the field of cybersecurity, works to promote digital peace for all. CyberPeace is working across the wider ecosystem to counter misinformation and develop a safer and more responsible Internet. CyberPeace has collaborated with Google.org to run a pan-India awareness-building programme and a comprehensive multilingual digital resource hub, with content available in up to 15 Indian languages, to empower over 40 million netizens in building resilience against misinformation and practising responsible online behaviour. This step is crucial in creating a strong foundation for a trustworthy Internet and a secure digital landscape.
Myth vs Reality Register by ECI
The Election Commission of India (ECI) has launched the 'Myth vs Reality Register' to combat misinformation and ensure the integrity of the electoral process during the general elections 2024. The 'Myth vs Reality Register' can be accessed through the Election Commission's official website (https://mythvsreality.eci.gov.in/). All stakeholders are urged to verify and corroborate any dubious information they receive through any channel with the information provided in the register. The register provides a one-stop platform for credible and authenticated election-related information, with the factual matrix regularly updated to include the latest busted fakes and fresh FAQs. The ECI has identified misinformation as one of the challenges, along with money, muscle, and Model Code of Conduct violations, for electoral integrity. The platform can be used to verify information, prevent the spread of misinformation, debunk myths, and stay informed about key issues during the General Elections 2024.
The ECI has taken proactive steps to combat the challenge of misinformation which could cripple the democratic process. EC has issued directives urging vigilance and responsibility from all stakeholders, including political parties, to verify information before amplifying it. The EC has also urged responsible behaviour on social media platforms and discourse that inspires unity rather than division. The commission has stated that originators of false information will face severe consequences, and nodal officers across states will remove unlawful content. Parties are encouraged to engage in issue-based campaigning and refrain from disseminating unverified or misleading advertisements.
Conclusion
The steps taken by the ECI have been designed to empower citizens and help them affirm the accuracy and authenticity of content before amplifying it. All citizens must be well-educated about the entire election process in India. This includes information on how the electoral rolls are made, how candidates are monitored, a complete database of candidates and candidate backgrounds, party manifestos, etc. For informed decision-making, active reading and seeking information from authentic sources is imperative. The partnership between government agencies, tech platforms and civil societies helps develop strategies to counter the widespread misinformation and promote online safety in general, and electoral integrity in particular.
References
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2016941#:~:text=To%20combat%20the%20spread%20of,the%20ongoing%20General%20Elections%202024
- https://www.business-standard.com/elections/lok-sabha-election/ls-elections-2024-ec-uses-social-media-to-nudge-electors-to-vote-124040700429_1.html
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://blog.google/intl/en-in/partnering-indias-success-in-a-new-digital-paradigm/

Introduction
Search Engine Optimisation (SEO) is a process through which one can improve website visibility on search engine platforms like Google, Microsoft Bing, etc. There is an implicit understanding that SEO suggestions or the links that are generated on top are the more popular information sources and, hence, are deemed to be more trustworthy. This trust, however, is being misused by threat actors through a process called SEO poisoning.
SEO poisoning is a method used by threat actors to attack users and harvest their information through manipulative techniques that push the attackers’ chosen link or web page to the top of search engine results. The end goal is to lure the user into clicking on and downloading their malware, presented in the garb of legitimate marketing or even as a valid Google search result.
An active example of attempted SEO poisoning was discussed in a report by the Hindustan Times on 11th November 2024. It highlights that searching for certain keywords could make a user more susceptible to hacking: hackers are now targeting people who enter specific words or combinations of words into search engines. According to the report, users who looked up the query “Are Bengal cats legal in Australia?” and clicked on the top links soon had their personal information posted online.
SEO Poisoning - Modus Operandi Of Attack
Attackers rely on several tactics for SEO poisoning:
- Keyword stuffing: Overloading a webpage with irrelevant keywords so that the false website ranks higher in search results.
- Typosquatting: Creating domain names or links that closely resemble popular, trusted websites. Without careful scrutiny before clicking, the user ends up downloading malware from what they thought was a legitimate site (a minimal detection sketch follows this list).
- Cloaking: Showing different content to the search engine and to the user. While the search engine sees what it assumes to be a legitimate website, the user is exposed to harmful content.
- Private link networks: Creating a group of unrelated websites to inflate the number of referral links, which helps the malicious pages rank higher on search engine platforms.
- Article spinning: Imitating content from pre-existing, legitimate websites while making a few minor changes, giving search engine crawlers the impression that it is original content.
- Sneaky redirects: Redirecting users, without their knowledge, to malicious websites instead of the ones they intended to visit.
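To make the typosquatting tactic above concrete, here is a minimal Python sketch that flags URLs whose domains closely resemble, but do not exactly match, a small allow-list of trusted domains. The trusted domains, similarity threshold, and example URLs are illustrative assumptions, not a production detection rule.

```python
# Minimal sketch: flag lookalike (typosquatted) domains by comparing them
# against a small allow-list of trusted domains. All values are illustrative.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = ["google.com", "hindustantimes.com", "cyberpeace.org"]  # example allow-list

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two domain strings."""
    return SequenceMatcher(None, a, b).ratio()

def is_suspicious(url: str, threshold: float = 0.8) -> bool:
    """Flag a URL whose domain closely resembles, but does not match, a trusted domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in TRUSTED_DOMAINS:
        return False  # exact match with a trusted domain is fine
    return any(similarity(domain, trusted) >= threshold for trusted in TRUSTED_DOMAINS)

if __name__ == "__main__":
    print(is_suspicious("https://www.hindustantlmes.com/article"))  # True: one-letter lookalike
    print(is_suspicious("https://www.hindustantimes.com/article"))  # False: exact trusted match
```

Real Digital Risk Monitoring tools use far richer signals (newly registered domains, certificate data, homoglyph detection), but even this simple similarity check shows why a one-character difference in a domain deserves scrutiny before clicking.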
CyberPeace Recommendations
- Employee Security Awareness Training: Security awareness training can help employees familiarise themselves with tactics of SEO poisoning, encouraging them to either spot such inconsistencies early on or even alert the security team at the earliest.
- Tool usage: Companies can use Digital Risk Monitoring tools to catch instances of typosquatting. Endpoint Detection and Response (EDR) tools also help keep an eye on client history and assess user activities during security breaches to figure out the source of the affected file.
- Internal Security Measures: Refer to published lists of Indicators of Compromise (IOCs); these include URL lists that capture evidence of strange website behaviour and can be used to exercise caution (a minimal sketch of such a check follows this list). Deploying Web Application Firewalls (WAFs) to detect and mitigate malicious traffic is also helpful.
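As a companion to the IOC recommendation above, the following sketch checks clicked or outgoing URLs against a locally stored set of known-bad domains. The domain entries and example URLs are placeholders; in a real deployment the list would be loaded and refreshed from a threat-intelligence feed rather than hard-coded.

```python
# Minimal sketch: block URLs whose domains appear in a local IOC list.
# The domains and URLs are placeholders, not real indicators of compromise.
from urllib.parse import urlparse

IOC_DOMAINS = {
    "malicious.example",
    "fake-login.example",
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's domain matches a known-bad (IOC) domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    return domain in IOC_DOMAINS

if __name__ == "__main__":
    for url in ("https://legit-news.example/story",
                "https://www.malicious.example/payload.exe"):
        print(url, "-> BLOCKED" if is_blocked(url) else "-> allowed")
```

A WAF or secure web gateway applies the same idea at network scale, using continuously updated feeds instead of a static set.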
Conclusion
By its very nature, SEO poisoning promotes the spread of misinformation and facilitates cyberattacks. Misleading users about the legitimacy of links and the content they display, in order to lure them into clicking, puts their personal information under threat. Because people trust their favoured search engines and there is little awareness of such tactics, one must exercise caution while clicking on links that seem popular, even when they are served up by trusted search engines.
References
- https://www.checkpoint.com/cyber-hub/cyber-security/what-is-cyber-attack/what-is-seo-poisoning/
- https://www.vectra.ai/topics/seo-poisoning
- https://www.techtarget.com/whatis/definition/search-poisoning
- https://www.blackberry.com/us/en/solutions/endpoint-security/ransomware-protection/seo-poisoning
- https://www.coalitioninc.com/blog/seo-poisoning-attacks
- https://www.sciencedirect.com/science/article/abs/pii/S0160791X24000186
- https://www.repindia.com/blog/secure-your-organisation-from-seo-poisoning-and-malvertising-threats/
- https://www.hindustantimes.com/technology/typing-these-6-words-on-google-could-make-you-a-target-for-hackers-101731286153415.html