#FactCheck: Viral AI image falsely shown as Air India Flight AI-171 catching fire after collision
Executive Summary:
A dramatic image circulating online, showing a Boeing 787 of Air India engulfed in flames after crashing into a building in Ahmedabad, is not a genuine photograph from the incident. Our research has confirmed it was created using artificial intelligence.

Claim:
Social media posts and forwarded messages allege that the image shows the actual crash of Air India Flight AI‑171 near Ahmedabad airport on June 12, 2025.

Fact Check:
In our research to validate the authenticity of the viral image, we conducted a reverse image search and analyzed it using AI-detection tools like Hive Moderation. The image showed clear signs of manipulation, distorted details, and inconsistent lighting. Hive Moderation flagged it as “Likely AI-generated”, confirming it was synthetically created and not a real photograph.
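The workflow above boils down to thresholding a classifier's confidence score. The sketch below shows that decision step only; the score names, threshold value, and sample numbers are illustrative assumptions, not Hive Moderation's documented API or its actual output for this image.

```python
# Sketch: turning an AI-detection classifier's scores into a verdict.
# The score keys, threshold, and sample values are hypothetical placeholders,
# not Hive Moderation's documented response format.

def is_likely_ai_generated(scores: dict, threshold: float = 0.7) -> bool:
    """Return True when the 'ai_generated' score crosses the threshold."""
    return scores.get("ai_generated", 0.0) >= threshold

if __name__ == "__main__":
    # Illustrative classifier output, not the real result for the viral image.
    sample_scores = {"ai_generated": 0.93, "not_ai_generated": 0.07}
    verdict = ("Likely AI-generated"
               if is_likely_ai_generated(sample_scores)
               else "Likely authentic")
    print(verdict)
```

In practice the threshold is a trade-off: set it too low and genuine photos get flagged, too high and synthetic images slip through, which is why detection tools are best paired with reverse image search and provenance checks.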

In contrast, verified visuals and information about the Air India Flight AI-171 crash have been published by credible news agencies like The Indian Express and Hindustan Times, confirmed by the aviation authorities. Authentic reports include on-ground video footage and official statements, none of which feature the viral image. This confirms that the circulating photo is unrelated to the actual incident.

Conclusion:
The viral photograph is a fabrication, created by AI, not a real depiction of the Ahmedabad crash. It does not represent factual visuals from the tragedy. It’s essential to rely on verified images from credible news agencies and official investigation reports when discussing such sensitive events.
- Claim: An Air India Boeing aircraft crashed into a building near Ahmedabad airport
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Romance scams are on the rise in India. A staggering 66 percent of individuals in India have been ensnared by deceitful online dating schemes. These are not the crude attempts of yesteryear but a new breed of scam that weaves traditional deceit together with cutting-edge technologies such as generative AI and deepfakes. A report by Tenable highlights this shift: over 69 percent of Indians struggle to distinguish between artificial and authentic human voices, and scammers are using celebrity impersonations and platforms like Facebook to lure victims into a false sense of security.
The Romance Scam
A report by Tenable, the exposure management company, illuminates the disturbing evolution of these romance scams. It reveals a sobering reality: AI-generated deepfakes have attained a level of sophistication where an astonishing 69 percent of Indians confess to struggling to discern between artificial and authentic human voices. This technological prowess has armed scammers with the tools to craft increasingly convincing personas, enabling them to perpetrate their nefarious acts with alarming success.
In 2023 alone, 43 percent of Indians reported falling victim to AI voice scams, with a staggering 83 percent of those targeted suffering financial loss. The scammers, like puppeteers, manipulate their digital marionettes with a deftness that is both awe-inspiring and horrifying. They have mastered the art of impersonating celebrities and fabricating personas that resonate with their targets, particularly preying on older demographics who may be more susceptible to their charms.
Social media platforms, which were once heralded as the town squares of the 21st century, have unwittingly become fertile grounds for these fraudulent activities. They lure victims into a false sense of security before the scammers orchestrate their deceitful symphonies. Chris Boyd, a staff research engineer at Tenable, issues a stern warning against the lure of private conversations, where the protective layers of security are peeled away, leaving individuals exposed to the machinations of these digital charlatans.
The Vulnerability of Individuals
The report highlights the vulnerability of certain individuals, especially those who are older, widowed, or experiencing memory loss. These individuals are systematically targeted by heartless criminals who exploit their longing for connection and companionship. The importance of scrutinising requests for money from newfound connections is underscored, as is the need for meticulous examination of photographs and videos for any signs of manipulation or deceit.
'Increasing awareness and maintaining vigilance are our strongest weapons against these heartless manipulations,' safeguarding love seekers from the treacherous web of AI-enhanced deception.
The landscape of love has been irrevocably altered by the prevalence of smartphones and the widespread proliferation of mobile internet. Finding love has morphed into a digital odyssey, with more and more Indians turning to dating apps like Tinder, Bumble, and Hinge. Yet, as with all technological advancements, there lurks a shadowy underbelly. The rapid adoption of dating sites has provided potential scammers with a veritable goldmine of opportunity.
It is not uncommon these days to hear tales of individuals who have lost their life savings to a person they met on a dating site or who have been honey-trapped and extorted by scammers on such platforms. A new study, titled 'Modern Love' and published by McAfee ahead of Valentine's Day 2024, reveals that such scams are rampant in India, with 39 percent of users reporting that their conversations with a potential love interest online turned out to be with a scammer.
The study also found that 77 percent of Indians have encountered fake profiles and photos that appear AI-generated on dating websites or apps or on social media, while 26 percent later discovered that they were engaging with AI-generated bots rather than real people. 'The possibilities of AI are endless, and unfortunately, so are the perils,' says Steve Grobman, McAfee’s Chief Technology Officer.
Steps to Safeguard
Scammers have not limited their hunting grounds to dating sites alone. A staggering 91 percent of Indians surveyed for the study reported that they, or someone they know, have been contacted by a stranger through social media or text message and began to 'chat' with them regularly. Cybercriminals exploit the vulnerability of those seeking love, engaging in long and sophisticated attempts to defraud their victims.
McAfee offers some steps to protect oneself from online romance and AI scams:
- Scrutinise any direct messages you receive from a love interest via a dating app or social media.
- Be on the lookout for consistent, AI-generated messages which often lack substance or feel generic.
- Avoid clicking on any links in messages from someone you have not met in person.
- Perform a reverse image search of any profile pictures used by the person.
- Refrain from sending money or gifts to someone you haven’t met in person, even if they send you money first.
- Discuss your new love interest with a trusted friend. It can be easy to overlook red flags when you are hopeful and excited.
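The reverse-image-search tip above works by checking whether a profile picture is a near-duplicate of a photo already published elsewhere. The toy sketch below illustrates only the underlying idea, using a simple average hash over images represented as flat lists of grayscale pixel values; real services such as Google Images or TinEye are far more sophisticated.

```python
# Toy illustration of perceptual hashing, the idea behind reverse image
# search: reduce each picture to a tiny binary fingerprint, then compare
# fingerprints. Images here are flat lists of grayscale pixel values.

def average_hash(pixels: list[int]) -> list[int]:
    """1 where a pixel is brighter than the image mean, else 0."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1: list[int], h2: list[int]) -> int:
    """Number of positions where the two fingerprints disagree."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_like_same_photo(a: list[int], b: list[int],
                          max_distance: int = 2) -> bool:
    return hamming_distance(average_hash(a), average_hash(b)) <= max_distance

profile_photo = [10, 200, 30, 220, 15, 210, 25, 205, 12]
stock_photo   = [12, 198, 28, 225, 14, 212, 27, 204, 10]  # near-duplicate
unrelated     = [200, 10, 220, 30, 210, 15, 205, 25, 202]

print(looks_like_same_photo(profile_photo, stock_photo))  # True: a match
print(looks_like_same_photo(profile_photo, unrelated))    # False
```

Because the hash depends only on which pixels are brighter than average, small edits like recompression or mild brightness changes leave the fingerprint nearly intact, which is exactly why a stolen stock photo on a scam profile can still be traced back to its source.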
Conclusion
The path is fraught with illusions, and only by arming oneself with knowledge and scepticism can one hope to find true connection without falling prey to the mirage of deceit. As we navigate this treacherous terrain, let us remember that the most profound connections are often those that withstand the test of time and the scrutiny of truth.
References
- https://www.businesstoday.in/technology/news/story/valentine-day-alert-deepfakes-genai-amplifying-romance-scams-in-india-warn-researchers-417245-2024-02-13
- https://www.indiatimes.com/amp/news/india/valentines-day-around-40-per-cent-indians-have-been-scammed-while-looking-for-love-online-627324.html
- https://zeenews.india.com/technology/valentine-day-deepfakes-in-romance-scams-generative-ai-in-scams-romance-scams-in-india-online-dating-scams-in-india-ai-voice-scams-in-india-cyber-security-in-india-2720589.html
- https://www.mcafee.com/en-us/consumer-corporate/newsroom/press-releases/2023/20230209.html

There has been a struggle to create legal frameworks that can define where free speech ends and harmful misinformation begins, specifically in democratic societies where the right to free expression is a fundamental value. Platforms like YouTube, Wikipedia, and Facebook have gained a huge consumer base by focusing on hosting user-generated content. This content includes anything a visitor puts on a website or social media pages.
The legal and ethical landscape surrounding misinformation is dependent on creating a fine balance between freedom of speech and expression while protecting public interests, such as truthfulness and social stability. This blog is focused on examining the legal risks of misinformation, specifically user-generated content, and the accountability of platforms in moderating and addressing it.
The Rise of Misinformation and Platform Dynamics
Misinformation is amplified by algorithmic recommendations and social sharing mechanisms. The intent of spreading false information is closely interwoven with the assessment of user data to identify the target groups needed for targeted political advertising. The disseminators of fake news benefit from social networks to reach more people, and from technology that enables faster distribution and makes it harder to distinguish fake from hard news.
Multiple challenges emerge that are unique to social media platforms regulating misinformation while balancing freedom of speech and expression and user engagement. The scale at which content is created and published, the different regulatory standards, and moderating misinformation without infringing on freedom of expression complicate moderation policies and practices.
Misinformation has social, political, and economic consequences, influencing public opinion, electoral outcomes, and market behaviour. This underscores the urgent need for effective regulation, as the consequences of inaction can be profound and far-reaching.
Legal Frameworks and Evolving Accountability Standards
Safe harbour principles allow for the functioning of a free, open and borderless internet. This principle is embodied under the US Communications Decency Act and the Information Technology Act in Sections 230 and 79 respectively. They play a pivotal role in facilitating the growth and development of the Internet. The legal framework governing misinformation around the world is still in nascent stages. Section 230 of the CDA protects platforms from legal liability relating to harmful content posted on their sites by third parties. It further allows platforms to police their sites for harmful content and protects them from liability if they choose not to.
By granting exemptions to intermediaries, these safe harbour provisions help nurture an online environment that fosters free speech and enables users to freely express themselves without arbitrary intrusions.
A shift in regulations has been observed in recent times. An example is the enactment of the Digital Services Act of 2022 in the European Union. The Act requires companies having at least 45 million monthly users to create systems to control the spread of misinformation, hate speech and terrorist propaganda, among other things. If not followed through, they risk penalties of up to 6% of the global annual revenue or even a ban in EU countries.
Challenges and Risks for Platforms
There are multiple challenges and risks faced by platforms that surround user-generated misinformation.
- Moderating user-generated misinformation is a big challenge, primarily because of the quantity of data in question and the speed at which it is generated. It further leads to legal liabilities, operational costs and reputational risks.
- Platforms can face backlash in cases of both over-moderation and under-moderation. Over-moderation can be perceived as censorship and is often burdensome, while under-moderation can be seen as insufficient governance that fails to protect the rights of users.
- Another challenge is technical: AI and algorithmic moderation have limitations in detecting nuanced misinformation. This points to the need for human oversight to sift through misinformation, including AI-generated content.
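The trade-off described in these challenges is often handled as a triage: automation decides the clear-cut cases, and uncertain ones are routed to human reviewers rather than auto-removed. The sketch below illustrates that routing pattern only; the word-list scorer is a stand-in for a real ML model, and the thresholds are invented for illustration.

```python
# Minimal sketch of moderation triage: automated decisions for clear cases,
# human review for uncertain ones. The scorer is a toy stand-in for a real
# ML classifier; thresholds are illustrative.

def score_post(text: str) -> float:
    """Toy misinformation score: fraction of words on a watch list."""
    watch_list = {"miracle", "cure", "hoax", "guaranteed"}
    words = text.lower().split()
    return sum(w.strip(".,!") in watch_list for w in words) / max(len(words), 1)

def route(text: str, remove_above: float = 0.5,
          review_above: float = 0.2) -> str:
    score = score_post(text)
    if score >= remove_above:
        return "remove"
    if score >= review_above:
        return "human_review"  # nuanced cases get human oversight
    return "allow"

print(route("miracle cure guaranteed"))      # remove
print(route("a miracle cure they found"))    # human_review
print(route("weather is nice today"))        # allow
```

The design choice worth noting is the middle band: rather than forcing every post into allow/remove, the uncertain range preserves free expression while still flagging content for scrutiny, which mirrors the balance the regulatory approaches above are trying to strike.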
Policy Approaches: Tackling Misinformation through Accountability and Future Outlook
Regulatory approaches to misinformation each present distinct strengths and weaknesses. Government-led regulation establishes clear standards but may risk censorship, while self-regulation offers flexibility yet often lacks accountability. The Indian framework, including the IT Act and the Digital Personal Data Protection Act of 2023, aims to enhance data-sharing oversight and strengthen accountability. Establishing clear definitions of misinformation and fostering collaborative oversight involving government and independent bodies can balance platform autonomy with transparency. Additionally, promoting international collaborations and innovative AI moderation solutions is essential for effectively addressing misinformation, especially given its cross-border nature and the evolving expectations of users in today’s digital landscape.
Conclusion
A balance between protecting free speech and safeguarding public interest is needed to navigate the legal risks that user-generated misinformation poses. As digital platforms like YouTube, Facebook, and Wikipedia continue to host vast amounts of user content, accountability measures are essential to mitigate the harms of misinformation. Establishing clear definitions and collaborative oversight can enhance transparency and build public trust. Furthermore, embracing innovative moderation technologies and fostering international partnerships will be vital in addressing this cross-border challenge. As we advance, the commitment to creating a responsible digital environment must remain a priority to ensure the integrity of information in our increasingly interconnected world.
References
- https://www.thehindu.com/opinion/op-ed/should-digital-platform-owners-be-held-liable-for-user-generated-content/article68609693.ece
- https://hbr.org/2021/08/its-time-to-update-section-230
- https://www.cnbctv18.com/information-technology/deepfakes-digital-india-act-safe-harbour-protection-information-technology-act-sajan-poovayya-19255261.htm

One of the best forums for many video producers is YouTube, and it offers a real chance of generating substantial profits. YouTube content producers compete to get the most views, likes, comments, and subscribers for their videos and channels. As a result, some use YouTube bots to artificially inflate their rankings on the platform, which can help them gain more organic views and reach a larger audience. However, this strategy is typically seen as unfair and can violate YouTube's terms of service.
As YouTube grows in popularity, so does the usage of YouTube bots. These bots are software programs that can automate operations on the YouTube platform, such as watching, liking, or disliking videos, subscribing to or unsubscribing from channels, making comments, and adding videos to playlists, among others. YouTube bots have been around for a while. Many YouTubers use these programs to increase the number of views on their videos and accounts, which helps them rank higher in YouTube's algorithm. Researchers have now discovered a new bot that steals private information from YouTube users' accounts.
CRIL (Cyble Research and Intelligence Labs) has been monitoring new and active malware families. CRIL has discovered a new YouTube bot malware capable of viewing, liking, and commenting on YouTube videos. Furthermore, it is capable of stealing sensitive information from browsers and acting as a bot that accepts orders from the Command and Control (C&C) server to carry out other harmful operations.
The Bot Insight
This YouTube bot has the same capabilities as other YouTube bots, including the ability to view, like, and comment on videos. Additionally, it can steal private data from browsers and act as a bot that takes commands from a Command and Control (C&C) server for various malicious purposes. Researchers from Cyble analysed a sample with the hash (SHA256) e9dac8b677a670e70919730ee65ab66cc27730378b9233d944ad7879c530d312 and found that it is a 32-bit executable file created using the .NET compiler.
- The virus runs an AntiVM check as soon as it is executed to thwart researchers’ attempts to find and analyze malware in a virtual environment.
- It stops execution if it finds that it is operating in a controlled environment. If not, it carries out the tasks listed in the argument strings.
- Additionally, the virus creates a mutex, copies itself to the %appdata% folder as AvastSecurity.exe, and then uses cmd.exe to run.
- The malware also creates a task scheduler entry, which helps ensure it runs persistently, while the mutex prevents multiple instances from running at once.
- The AvastSecurity.exe program harvests cookies, autofill data, and login credentials from the Chromium-based browsers installed on the victim's system.
- In order to view the chosen video, the virus runs the YouTube Playwright function, passing the previously indicated arguments along with the browser’s path and cookie data.
- The YouTube bot uses the YouTube Playwright function to launch the browser environment with the specified parameters and automate actions like watching, liking, and commenting on YouTube videos. This feature depends on Microsoft's Playwright framework.
- The malware establishes a connection to a C2 server and receives instructions to delete the scheduled task entry and terminate its own process, send log files to the C2 server, download and run other files, and start or stop watching a YouTube video.
- Additionally, it verifies that the victim’s PC has the required dependencies, including the Playwright package and the Chrome browser, installed. When it gets the command “view,” it will download and install these dependencies if they are missing.
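From a defender's point of view, the persistence behaviour above leaves a concrete artifact: a copy of the bot masquerading as AvastSecurity.exe under the %appdata% folder. The sketch below checks for that single indicator; it is a hypothetical defensive helper, not Cyble's tooling, and a match is only an indicator of compromise, since a real scanner would also verify file hashes and scheduled-task entries.

```python
# Defensive sketch: look for the persistence artifact described above,
# a file named AvastSecurity.exe dropped under the %appdata% folder.
# A hit is an indicator, not proof of infection.

import os

SUSPICIOUS_NAME = "avastsecurity.exe"

def find_indicator(appdata_dir: str) -> list[str]:
    """Return paths under appdata_dir whose file name matches the indicator."""
    hits = []
    for root, _dirs, files in os.walk(appdata_dir):
        for name in files:
            if name.lower() == SUSPICIOUS_NAME:
                hits.append(os.path.join(root, name))
    return hits

if __name__ == "__main__":
    appdata = os.environ.get("APPDATA", "")
    if appdata:
        for path in find_indicator(appdata):
            print("Possible YouTube-bot artifact:", path)
```

Note that the comparison is case-insensitive, since Windows file systems are, and that a legitimate Avast installation lives under Program Files rather than %appdata%, which is what makes this location suspicious.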
Recommendations
The following is a list of some of the most critical cybersecurity best practices that serve as the first line of defense against intruders. We propose that our readers follow the advice provided below:
- Downloading pirated software from warez/torrent websites should be avoided. Such a virus is commonly found in “Hack Tools” available on websites such as YouTube, pirate sites, etc.
- When feasible, use strong passwords and impose multi-factor authentication.
- Enable automatic software updates on your laptop, smartphone, and other linked devices.
- Use a reputable antivirus and internet security software package on your linked devices, such as your computer, laptop, and smartphone.
- Avoid clicking on suspicious links and opening email attachments without verifying they are legitimate. Educate staff members on how to guard against threats like phishing and unsafe URLs.
- Block URLs, such as torrent/warez sites, that might be used to propagate malware. Monitor beaconing activity at the network level to prevent malware or threat actors (TAs) from exfiltrating data.
Conclusion
Using YouTube bots may be a seductive strategy for content producers looking to increase their rankings and expand their viewership on the site. However, the employment of bots is typically regarded as unfair and may violate YouTube's terms of service. Utilizing YouTube bots carries additional risk because they might be detected, which could lead to account suspension or termination for the user. Mitigating this pressing issue is best done through awareness drives and surveys to identify the core points of contention. Nonprofits and civil society organizations can bridge the gap between the tech giant and the end user to facilitate better understanding of these unknown bots.