#FactCheck: Fake Claim that the US Used Indian Airspace to Attack Iran
Executive Summary:
An online claim alleging that U.S. bombers used Indian airspace to strike Iran has been widely circulated, particularly on Pakistani social media. However, official briefings from the U.S. Department of Defense and visuals shared by the Pentagon confirm that the bombers flew over Lebanon, Syria, and Iraq. Indian authorities have also refuted the claim, and the Press Information Bureau (PIB) has issued a fact-check dismissing it as false. The available evidence clearly indicates that Indian airspace was not involved in the operation.
Claim:
Various Pakistani social media users [archived here and here] have alleged that U.S. bombers used Indian airspace to carry out airstrikes on Iran. One widely circulated post claimed, “CONFIRMED: Indian airspace was used by U.S. forces to strike Iran. New Delhi’s quiet complicity now places it on the wrong side of history. Iran will not forget.”

Fact Check:
Contrary to the viral social media claims, official details from U.S. authorities confirm that American B-2 bombers used a Middle Eastern flight path, flying over Lebanon, Syria, and Iraq to reach Iran during Operation Midnight Hammer.

The Pentagon released visuals and unclassified briefings showing this route, and Joint Chiefs of Staff Chairman Gen. Dan Caine explained that the bombers coordinated with support aircraft over the Middle East in a highly synchronized operation.

Additionally, Indian authorities have denied any involvement, and India’s Press Information Bureau (PIB) issued a fact-check debunking the false narrative that Indian airspace was used.

Conclusion:
In conclusion, official U.S. briefings and visuals confirm that B-2 bombers flew over the Middle East, not India, to strike Iran. Both the Pentagon and Indian authorities have denied any use of Indian airspace, and the Press Information Bureau has labeled the viral claims as false.
- Claim: U.S. bombers used Indian airspace to attack Iran
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In September 2025, social media feeds were flooded with strikingly vintage saree-style portraits. These images were not taken by professional photographers; they were AI-generated. More than a million people turned to Google Gemini's "Nano Banana" AI tool, uploading their ordinary selfies and watching them transform into cinematic, Bollywood-style 1990s posters. The popularity of this trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts regarding privacy infringement, unauthorised data sharing, and deepfake misuse.
What is the Trend?
The AI saree trend is created using Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts describing cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend marking one of its most popular uses. Photographs are uploaded to an AI system that uses machine learning to alter the pictures according to the specified description. Users then share the transformed AI portraits on Instagram, WhatsApp, and other social media platforms, contributing to the viral nature of the trend.
Law Enforcement Agency Warnings
- Several Indian police agencies have issued strong advisories against participating in such trends. IPS Officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to scams and fraud.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put users at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police have also put out warnings on social media platforms about how photo applications may appear entertaining but can pose serious risks to user privacy. They specifically warned that uploaded selfies can lead to data misuse, deepfake creation, and the generation of fake profiles, offences punishable under Sections 66C and 66D of the IT Act, 2000.
Consequences of Such Trends
The mass adoption of AI photo trends has several severe consequences for individual users and society as a whole. Identity theft and fraud are the main concerns, as uploaded biometric information can be used by hackers to generate synthetic identities, evade security measures, or commit financial fraud. The facial recognition data shared through these trends remains a digital asset that can be abused years after the trend has passed. Deepfake production is another major threat, because personal images shared on AI platforms can be used to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. Uploaded images can be leveraged to produce embarrassing or harmful content that damages personal reputation, relationships, and career prospects.
Financial exploitation also occurs when fake applications masquerading as genuine AI tools harvest users' personal data and financial details. Such malicious platforms tend to imitate well-known services in order to trick users into divulging sensitive information. Long-term privacy infringement also arises from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends may expose users to several layers of privacy threats that go far beyond the instant gratification of pleasing images. Harvesting of biometric data is the most critical issue, since facial recognition information posted on these platforms becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images may be stored temporarily for processing and retained for longer periods if used for feedback purposes or feature development.
Unauthorised data sharing happens when AI platforms provide user-uploaded content to third parties without user consent. A 2023 Mozilla Foundation study found that 80% of popular AI apps had non-transparent data policies or obscured users' ability to opt out of data collection. This opens up opportunities for personal photographs to be shared with unknown entities for commercial use. Training-data exploitation involves using uploaded personal photos to improve AI models without notifying or compensating users. Although Google provides options to turn off data sharing in privacy settings, most users are unaware of these options. Cross-platform data integration further increases privacy threats when AI applications draw on data from linked social media profiles, building detailed user profiles that can be exploited for targeted manipulation or fraud. Lack of informed consent remains a major problem, with users joining trends without understanding the full extent of the information they share. Studies show that 68% of individuals are concerned about the misuse of AI app data, yet 42% use these apps without reading the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:
1. Immediate Protection Measures for Users
Protection of personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those built around facial recognition. Instead, users can experiment with stock images or non-identifiable pictures that satisfy the application's creative features without compromising biometric security. Strong privacy settings should also be configured on every social media platform and AI app to limit who can access personal data and content.
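One concrete step users can take before sharing any photo is to strip its metadata and reduce its resolution. The minimal Python sketch below, assuming the Pillow imaging library and hypothetical file names, drops EXIF data (GPS coordinates, timestamps, device identifiers) and downscales the image; it does not stop a platform from analysing the face itself, but it limits the incidental data that travels with the picture.

```python
# A minimal sketch of pre-upload photo sanitisation, assuming the Pillow
# imaging library (pip install Pillow); file names are hypothetical.
from PIL import Image

def sanitise_photo(src: str, dst: str, max_px: int = 1024) -> None:
    """Strip metadata and downscale a photo before sharing it online."""
    img = Image.open(src)
    # Rebuilding the image from raw pixels drops EXIF metadata such as
    # GPS coordinates, timestamps, and device identifiers.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    # Downscaling limits the facial detail available for biometric reuse.
    clean.thumbnail((max_px, max_px))
    clean.save(dst)

if __name__ == "__main__":
    sanitise_photo("selfie.jpg", "selfie_clean.jpg")  # hypothetical files
```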
2. Organisational Safeguards
AI governance frameworks within organisations should set out clear policies on employees' use of AI tools, particularly regarding the upload of personal data. Companies should carry out due diligence before adopting commercially available AI products to ensure their privacy and security standards meet the organisation's requirements. Training should also educate employees about deepfake technology and the risks it poses.
3. Technical Protection Strategies
Deepfake detection software should be used. These tools, which include Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI, enable real-time detection with accuracy higher than 95%. Blockchain-based content verification can also create tamper-proof records of original digital assets, making it very difficult to pass off deepfake content as the original.
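To make the verification idea concrete, here is a minimal Python sketch of hash-chained content registration using only the standard library. The ledger file name and record format are illustrative assumptions; a production system would anchor these records on an actual blockchain or a trusted timestamping service rather than a local JSON file.

```python
# A minimal sketch of tamper-evident content registration using only the
# Python standard library; the ledger file and record format are
# illustrative assumptions, not a production blockchain.
import hashlib
import json
import os
from datetime import datetime, timezone

LEDGER = "content_ledger.json"  # hypothetical local ledger file

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_ledger() -> list:
    if not os.path.exists(LEDGER):
        return []
    with open(LEDGER) as f:
        return json.load(f)

def register(path: str) -> dict:
    """Append a hash-chained record; editing any entry breaks the chain."""
    records = load_ledger()
    entry = {
        "file": os.path.basename(path),
        "sha256": fingerprint(path),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": records[-1]["entry_hash"] if records else "0" * 64,
    }
    # Each entry's hash covers the previous one, so silent tampering
    # with earlier records becomes detectable.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    records.append(entry)
    with open(LEDGER, "w") as f:
        json.dump(records, f, indent=2)
    return entry

def verify(path: str) -> bool:
    """Check whether a file's current hash matches a registered original."""
    current = fingerprint(path)
    return any(r["sha256"] == current for r in load_ledger())
```

A registered original verifies as True, while any pixel-level edit, including a deepfake derived from it, produces a different digest and fails the check.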
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banking and identity verification systems, authentication should include voice and face liveness checks to ensure the person is real and not presenting fake or manipulated media. Digital literacy programmes should be implemented to equip users with knowledge about AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting suspected AI-enabled crimes and assisting in combating malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulatory frameworks should require transparent data policies in AI applications and give users clear rights and choices over their biometric and other personal data. Indigenous AI development addressing India-centric privacy concerns should be promoted, ensuring AI models are built in a secure, transparent, and accountable manner. On cross-border AI security, international cooperation is needed to set common standards for the ethical design, development, and use of AI. Viral AI phenomena such as the saree editing trend illustrate both the potential and the hazards of the present generation of artificial intelligence. While such tools offer new creative opportunities, they also pose grave privacy and security concerns that users, organisations, and policymakers must address. By establishing comprehensive protection mechanisms and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/

"Cybercriminals are unleashing a surprisingly high volume of new threats in this short period of time to take advantage of inadvertent security gaps as organizations are in a rush to ensure business continuity.”
Cyber security firm Fortinet on Monday announced that over the past several weeks, it has been monitoring a significant spike in COVID-19 related threats.
An unprecedented number of unprotected users and devices are now online with one or two people in every home connecting remotely to work through the internet. Simultaneously there are children at home engaged in remote learning and the entire family is engaged in multi-player games, chatting with friends as well as streaming music and video. The cybersec firm’s FortiGuard Labs is observing this perfect storm of opportunity being exploited by cybercriminals as the Threat Report on the Pandemic highlights:
A surge in Phishing Attacks: The research shows an average of about 600 new phishing campaigns every day. The content is designed either to prey on the fears and concerns of individuals or to pretend to provide essential information on the current pandemic. The phishing attacks range from scams offering to help individuals deposit their stimulus payments or obtain COVID-19 tests, to providing access to Chloroquine and other medicines or medical devices, to providing helpdesk support for new teleworkers.
Phishing Scams Are Just the Start: While the attacks start with phishing, their end goal is to steal personal information or even target businesses through teleworkers. The majority of phishing attacks contain malicious payloads, including ransomware, viruses, remote access trojans (RATs) designed to give criminals remote access to endpoint systems, and even RDP (remote desktop protocol) exploits.
A Sudden Spike in Viruses: The first quarter of 2020 documented a 17% increase in viruses in January, a 52% increase in February, and an alarming 131% increase in March compared to the same period in 2019. The significant rise in viruses is mainly attributed to malicious phishing attachments. Multiple sites illegally streaming movies still in theatres secretly deliver malware to anyone who logs on. A free game or a free movie, and the attacker is on your network.
Risks for IoT Devices Magnify: With all users connected to the home network, attackers have multiple avenues of attack targeting devices including computers, tablets, gaming and entertainment systems, and even IoT devices such as digital cameras and smart appliances, with the ultimate goal of finding a way back into a corporate network and its valuable digital resources.
Ransomware-like Attacks to Disrupt Business: If a remote worker's device can be compromised, it can become a conduit back into the organization's core network, enabling the spread of malware to other remote workers. The resulting business disruption can be just as effective as ransomware targeting internal network systems at taking a business offline. Since helpdesks are now remote, devices infected with ransomware or a virus can incapacitate workers for days while devices are mailed in for reimaging.
“Though organizations have completed the initial phase of transitioning their entire workforce to remote telework and employees are becoming increasingly comfortable with their new reality, CISOs continue to face new challenges presented by maintaining a secure teleworker business model. From redefining their security baseline, or supporting technology enablement for remote workers, to developing detailed policies for employees to have access to data, organizations must be nimble and adapt quickly to overcome these new problems that are arising”, said Derek Manky, Chief, Security Insights & Global Threat Alliances at Fortinet – Office of CISO.

Introduction
In a landmark move for India’s growing artificial intelligence (AI) ecosystem, ten cutting-edge Indian startups have been selected to participate in the prestigious Global AI Accelerator Programme in Paris. This initiative, facilitated by the Ministry of Electronics and Information Technology (MeitY) under the IndiaAI mission, aims to project India’s AI innovation on the global stage and empower startups to scale impactful solutions while fostering cross-border collaboration.
Launched in alignment with the vision of India as a global AI powerhouse, the IndiaAI initiative has been working on strengthening domestic AI capabilities. Participation in the Paris Accelerator Programme is a direct extension of this mission, offering Indian startups access to world-class mentorship, investor networks, and a thriving innovation ecosystem in France, one of Europe’s AI capitals.
Global Acceleration for Local Impact
The ten selected startups represent diverse verticals, from conversational AI to cybersecurity, edtech and surveillance intelligence. This selection was made after a rigorous evaluation of innovation potential, scalability, and societal impact. Each of these ventures represents India's technological ambition and capacity to solve real-world problems through AI.
The significance of this opportunity goes beyond business growth. It lays the foundation for collaborative policy dialogues, ethical AI development, and bilateral innovation frameworks. With rising global scrutiny of issues such as AI safety, bias, and misinformation, the need for more responsible innovation takes centre stage.
CyberPeace Outlook
India’s participation opens a pivotal chapter in its AI diplomacy. Through such initiatives, AI can be explored not merely as a commercial tool but as a cornerstone of national security, citizen safety, and digital sovereignty. As AI systems increasingly integrate with critical infrastructure, from health to law enforcement, cyber resilience becomes significant. With AI increasingly engaged in sensitive sectors such as audio-video surveillance and digital edtech, there is an urgent need for secure-by-design innovation. Building parameters such as security, ethics, and accountability into the development lifecycle is essential, aligning with the broader goal of harmonising security with digital progress.
Conclusion
India’s participation in the Paris Accelerator Programme signifies its commitment to shaping global AI norms and innovation diplomacy. As Indian startups interact with European regulators, investors, and technologists, they carry the responsibility of representing not just business acumen but the values of an open, inclusive, and secure digital future.
This global exposure also feeds directly into India’s domestic AI strategy, informing policy evolution, enhancing research and development networks, and building a robust talent pipeline. Programmes like these act as bridges, ensuring India remains adaptive in the ever-evolving AI landscape. Encouraging such global engagements while actively working with stakeholders to build frameworks that safeguard national interests, protect civil liberties, and foster innovation is paramount. As India takes this global leap, the journey ahead must be shaped by innovation, collaboration, and vigilance.
References
- https://egov.eletsonline.com/2025/05/indiaai-selects-10-innovative-startups-for-prestigious-ai-accelerator-programme-in-paris/
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2132377
- https://inc42.com/buzz/meet-the-10-indian-ai-startups-selected-for-global-acceleration-programme/
- https://www.businessworld.in/article/govt-to-send-10-ai-startups-to-paris-accelerator-in-push-for-global-reach-558251