#FactCheck: Fake Claim That the U.S. Used Indian Airspace to Attack Iran
Executive Summary:
An online claim alleging that U.S. bombers used Indian airspace to strike Iran has been widely circulated, particularly on Pakistani social media. However, official briefings from the U.S. Department of Defense and visuals shared by the Pentagon confirm that the bombers flew over Lebanon, Syria, and Iraq. Indian authorities have also refuted the claim, and the Press Information Bureau (PIB) has issued a fact-check dismissing it as false. The available evidence clearly indicates that Indian airspace was not involved in the operation.
Claim:
Various Pakistani social media users [archived here and here] have alleged that U.S. bombers used Indian airspace to carry out airstrikes on Iran. One widely circulated post claimed, “CONFIRMED: Indian airspace was used by U.S. forces to strike Iran. New Delhi’s quiet complicity now places it on the wrong side of history. Iran will not forget.”

Fact Check:
Contrary to the viral social media claims, official details from U.S. authorities confirm that American B-2 bombers used a Middle Eastern flight path, flying over Lebanon, Syria, and Iraq to reach Iran during Operation Midnight Hammer.

The Pentagon released visuals and unclassified briefings showing this route, with Joint Chiefs of Staff Chairman Gen. Dan Caine explaining that the bombers coordinated with support aircraft over the Middle East in a highly synchronized operation.

Additionally, Indian authorities have denied any involvement, and India’s Press Information Bureau (PIB) issued a fact-check debunking the false narrative that Indian airspace was used.

Conclusion:
In conclusion, official U.S. briefings and visuals confirm that B-2 bombers flew over the Middle East, not India, to strike Iran. Both the Pentagon and Indian authorities have denied any use of Indian airspace, and the Press Information Bureau has labeled the viral claims false.
- Claim: U.S. forces used Indian airspace to attack Iran
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Executive Summary:
A viral post on social media was shared with misleading captions claiming to show a National Highway being built with large bridges along a mountainside in Jammu and Kashmir. However, investigation of the claim shows that the bridge is in China. The video is therefore false and misleading.

Claim:
A video is circulating that claims to show National Highway 14 being constructed along a mountainside in Jammu and Kashmir.

Fact Check:
A reverse image search was carried out on the viral visuals, and the image of an under-construction road falsely linked to Jammu and Kashmir was proven inaccurate. Our investigation confirmed that the road is in a different location altogether: the G6911 Ankang-Laifeng Expressway in China. This highlights the need to verify information before sharing it.


Conclusion:
The viral claim that the video shows an under-construction highway in Jammu and Kashmir is false. The footage is actually from China, not J&K. Misinformation like this can mislead the public, so take a brief moment to verify the facts before sharing viral posts. This underscores the importance of relying on credible sources to combat the spread of false claims.
- Claim: Under-Construction Road Falsely Linked to Jammu and Kashmir
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading
Executive Summary
This report analyses a recent social engineering attack that abused Microsoft Teams and AnyDesk to deliver DarkGate, a Malware-as-a-Service (MaaS) tool. By approaching victims through Microsoft Teams and tricking them into installing AnyDesk, the attackers gained unauthorized remote access and deployed DarkGate, which offers features such as credential theft, keylogging, and fileless persistence. The malware was delivered through obfuscated AutoIt scripts, showing how threat actors keep changing their modus operandi. The case underscores the need for preventive security measures, such as endpoint protection, staff awareness, restricted use of remote-access tools, and network segmentation, to manage the heightened risks that contemporary cyber threats present.
Introduction
Attackers increasingly abuse reputable technologies and applications to spread their campaigns. The recent use of the Microsoft Teams and AnyDesk platforms to deliver DarkGate malware is a clear example of how hackers combine social engineering with technical exploitation to penetrate organizational defenses. This report examines the technical details of the attack, its consequences, and the preventive measures needed to counter the threat.
Technical Findings
1. Attack Initiation: Exploiting Microsoft Teams
The attackers leveraged Microsoft Teams as a trusted communication platform to deceive victims, exploiting its legitimacy and widespread adoption. Key technical details include:
- Spoofed Caller Identity: The attackers used impersonation techniques to masquerade as representatives of trusted external suppliers.
- Session Hijacking Risks: Exploiting Microsoft Teams session vulnerabilities, attackers aimed to escalate their privileges and deploy malicious payloads.
- Bypassing Email Filters: The initial email bombardment was designed to overwhelm spam filters and ensure that the malicious communication reached the victim’s inbox (a simple burst-detection sketch follows this list).
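As a rough illustration of what burst detection could look like on the defender's side, the sketch below scans a local mbox archive for abnormally dense runs of inbound mail. It is a toy heuristic, not a description of any product: the mbox filename, window size, and threshold are all arbitrary assumptions.

```python
# burst_check.py - flag "email bombardment" bursts in a local mbox archive:
# many messages arriving within a short window, as used to bury the Teams lure.
# Sketch only: the mbox path and both thresholds are arbitrary assumptions.
import mailbox
from email.utils import parsedate_to_datetime

WINDOW_SECONDS = 300   # five-minute window
THRESHOLD = 50         # messages within the window that count as a burst

times = []
for msg in mailbox.mbox("inbox.mbox"):  # placeholder archive path
    date_header = msg.get("Date")
    if not date_header:
        continue
    try:
        dt = parsedate_to_datetime(date_header)
    except (TypeError, ValueError):
        continue
    if dt.tzinfo is None:  # skip naive timestamps so sorting stays consistent
        continue
    times.append(dt)

times.sort()
for i in range(len(times)):
    j = i
    while j < len(times) and (times[j] - times[i]).total_seconds() <= WINDOW_SECONDS:
        j += 1
    if j - i >= THRESHOLD:
        print(f"[!] Burst: {j - i} messages starting at {times[i]}")
        break
```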
2. Remote Access Exploitation: AnyDesk
After convincing victims to install AnyDesk, the attackers exploited the software’s functionality to achieve unauthorized remote access. Technical observations include:
- Command and Control (C2) Integration: Once installed, AnyDesk was configured to establish persistent communication with the attacker’s C2 servers, enabling remote control.
- Privilege Escalation: Attackers exploited misconfigurations in AnyDesk to gain administrative privileges, allowing them to disable antivirus software and deploy payloads.
- Data Exfiltration Potential: With full remote access, attackers could silently exfiltrate data or install additional malware without detection.
3. Malware Deployment: DarkGate Delivery via AutoIt Script
The deployment of DarkGate malware utilized AutoIt scripting, a programming language commonly used for automating Windows-based tasks. Technical details include:
- Payload Obfuscation: The AutoIt script was heavily obfuscated to evade signature-based antivirus detection.
- Process Injection: The script employed process injection techniques to embed DarkGate into legitimate processes, such as explorer.exe or svchost.exe, to avoid detection (a defensive detection sketch follows this list).
- Dynamic Command Loading: The malware dynamically fetched additional commands from its C2 server, allowing real-time adaptation to the victim’s environment.
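EDR products detect this kind of tradecraft with far richer telemetry, but a crude version of one signal, normally quiet host processes opening outbound connections on unusual ports, can be sketched in Python. This is a heuristic illustration only: it assumes the third-party psutil package, the watched names mirror the injection targets above, and the small port allow-list is an arbitrary assumption that will produce false positives.

```python
# beacon_check.py - flag outbound connections from host processes that rarely
# talk to the network themselves, a possible sign of injected code beaconing.
import psutil  # third-party: pip install psutil

WATCHED = {"explorer.exe", "svchost.exe"}  # injection targets named above
COMMON_PORTS = {53, 80, 443}               # arbitrary allow-list assumption

for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    if name not in WATCHED:
        continue
    try:
        conns = proc.connections(kind="inet")
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue  # process exited or needs elevation; skip it
    for c in conns:
        if c.raddr and c.raddr.port not in COMMON_PORTS:
            print(f"[?] {name} (pid {proc.pid}) -> {c.raddr.ip}:{c.raddr.port}")
```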
4. DarkGate Malware Capabilities
DarkGate, now available as a Malware-as-a-Service (MaaS) offering, provides attackers with advanced features. Technical insights include:
- Credential Dumping: DarkGate used the Mimikatz module to extract credentials from memory and secure storage locations.
- Keylogging Mechanism: Keystrokes were logged and transmitted in real-time to the attacker’s server, enabling credential theft and activity monitoring.
- Fileless Persistence: Utilizing Windows Management Instrumentation (WMI) and registry modifications, the malware ensured persistence without leaving traditional file traces (see the audit sketch after this list).
- Network Surveillance: The malware monitored network activity to identify high-value targets for lateral movement within the compromised environment.
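To hunt for the WMI-based persistence described above, defenders can enumerate the standard root\subscription event classes. The sketch below shells out to PowerShell's Get-CimInstance from Python; under those assumptions it is a manual-review aid rather than a detection product, since legitimate management software also registers subscriptions.

```python
# wmi_persistence_audit.py - list WMI event-subscription artifacts, a common
# fileless persistence location. Review output by hand: legitimate agents
# also register subscriptions, so a hit is a lead, not a verdict.
import subprocess

# The three classes that together form a WMI persistence chain.
CLASSES = ["__EventFilter", "__EventConsumer", "__FilterToConsumerBinding"]

for cls in CLASSES:
    print(f"--- {cls} ---")
    result = subprocess.run(
        [
            "powershell", "-NoProfile", "-Command",
            f"Get-CimInstance -Namespace root\\subscription -ClassName {cls} "
            "| Format-List *",
        ],
        capture_output=True,
        text=True,
    )
    print(result.stdout.strip() or "(none found)")
```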
5. Attack Indicators
Trend Micro researchers identified several indicators of compromise (IoCs) associated with the DarkGate campaign; a minimal sweep sketch follows the list:
- Suspicious Domains: example-remotesupport[.]com and similar domains used for C2 communication.
- Malicious File Hashes:
  - AutoIt Script: 5a3f8d0bd6c91234a9cd8321a1b4892d
  - DarkGate Payload: 6f72cde4b7f3e9c1ac81e56c3f9f1d7a
- Behavioral Anomalies:
  - Unusual outbound traffic to non-standard ports.
  - Unauthorized registry modifications under HKCU\Software\Microsoft\Windows\CurrentVersion\Run.
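A minimal triage sweep for these specific IoCs can be scripted with Python's standard library alone. The sketch below hashes files under a placeholder directory against the two MD5 values above and dumps the current user's Run key for review; exact-hash matching only catches these samples, so treat it as a quick aid rather than real detection.

```python
# ioc_sweep.py - sweep a directory tree for the MD5 hashes from the IoC list
# above and print the current user's Run-key entries for manual review.
# Illustrative only: the scan root is a placeholder.
import hashlib
import pathlib
import winreg  # Windows-only standard-library module

KNOWN_BAD_MD5 = {
    "5a3f8d0bd6c91234a9cd8321a1b4892d",  # AutoIt script (IoC list above)
    "6f72cde4b7f3e9c1ac81e56c3f9f1d7a",  # DarkGate payload (IoC list above)
}

def md5_of(path: pathlib.Path) -> str:
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_tree(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        try:
            if path.is_file() and md5_of(path) in KNOWN_BAD_MD5:
                print(f"[!] IoC hash match: {path}")
        except OSError:
            continue  # unreadable file, skip

def dump_run_key() -> None:
    # The persistence location named in the behavioral anomalies above.
    key = winreg.OpenKey(
        winreg.HKEY_CURRENT_USER,
        r"Software\Microsoft\Windows\CurrentVersion\Run",
    )
    index = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, index)
        except OSError:  # no more values
            break
        print(f"Run entry: {name} -> {value}")
        index += 1

if __name__ == "__main__":
    scan_tree(r"C:\Users\Public")  # placeholder scan root
    dump_run_key()
```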
Broader Cyber Threat Landscape
In parallel with this campaign, other phishing and malware delivery tactics have been observed, including:
- Cloud Exploitation: Abuse of platforms like Cloudflare Pages to host phishing sites mimicking Microsoft 365 login pages.
- Quishing Campaigns: Phishing emails with QR codes that redirect users to fake login pages.
- File Attachment Exploits: Malicious HTML attachments embedding JavaScript to steal credentials.
- Mobile Malware: Distribution of malicious Android apps capable of financial data theft.
Implications of the DarkGate Campaign
This attack highlights the sophistication of threat actors in leveraging legitimate tools for malicious purposes. Key risks include:
- Advanced Threat Evasion: The use of obfuscation and process injection complicates detection by traditional antivirus solutions.
- Cross-Platform Risk: DarkGate’s modular design enables its functionality across diverse environments, posing risks to Windows, macOS, and Linux systems.
- Organizational Exposure: The compromise of a single endpoint can serve as a gateway for further network exploitation, endangering sensitive organizational data.
Recommendations for Mitigation
- Enable Advanced Threat Detection: Deploy endpoint detection and response (EDR) solutions to identify anomalous behavior like process injection and dynamic command loading.
- Restrict Remote Access Tools: Limit the use of tools like AnyDesk to approved use cases and enforce strict monitoring (see the firewall example after this list).
- Use Email Filtering and Monitoring: Implement AI-driven email filtering systems to detect and block email bombardment campaigns.
- Enhance Endpoint Security: Regularly update and patch operating systems and applications to mitigate vulnerabilities.
- Educate Employees: Conduct training sessions to help employees recognize and avoid phishing and social engineering tactics.
- Implement Network Segmentation: Limit the spread of malware within an organization by segmenting high-value assets.
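As one concrete way to apply the "restrict remote access tools" recommendation on a Windows endpoint, the sketch below adds an outbound-block firewall rule for AnyDesk via the built-in netsh CLI. The install path is an assumption for a default setup, and in a real deployment such a rule would be pushed by policy, with exclusions for approved machines, rather than run ad hoc.

```python
# block_anydesk.py - add an outbound-block firewall rule for AnyDesk using the
# built-in netsh CLI. Run from an elevated (administrator) prompt.
# The program path assumes a default install and may differ per machine.
import subprocess

rule = [
    "netsh", "advfirewall", "firewall", "add", "rule",
    "name=Block unapproved AnyDesk",
    "dir=out",
    "action=block",
    r"program=C:\Program Files (x86)\AnyDesk\AnyDesk.exe",
]
subprocess.run(rule, check=True)
```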
Conclusion
The use of Microsoft Teams and AnyDesk to spread DarkGate malware shows how steadily attackers are raising their game. The campaign underscores the need for organizations to build adequate security preparedness, including threat identification, employee training, and tight control over access rights.
DarkGate also illustrates how such attacks have evolved into MaaS offerings: the barrier to launching highly complex attacks keeps falling, which again proves why a layered defense approach is crucial. Awareness and adaptability remain central to addressing the constantly evolving threats in cyberspace.

Introduction
In an era of digital trust and technological innovation, artificial intelligence has added a new dimension to how people communicate, create, and consume content. However, like all powerful tools, AI can be misused, with terrible consequences. One recent dark example is a cybercrime in Brazil: a sophisticated online scam that used deepfake technology to impersonate celebrities of global stature, including supermodel Gisele Bündchen, in misleading Instagram ads. Having lured in millions of reais in revenue, this crime starkly illustrates how AI-generated content has been turned to the criminals' side.
Scam in Motion
Brazil's Federal Police have stated that the scheme has been in circulation since 2024, using AI-generated video and images to make the ads appear genuine. The ads showed Gisele Bündchen and other celebrities endorsing skincare products, promotional giveaways, or time-limited discounts. Victims were tricked into making small payments, mostly under 100 reais (about $19), for fake products, or were lured into paying "shipping costs" for prizes that never arrived.
The criminals scaled the approach by accumulating minor losses from each victim, a tactic investigators dubbed "statistical immunity." Because each victim lost only a few dollars, most never filed a complaint, allowing the scheme to keep running. Over time, authorities estimate the group gathered over 20 million reais ($3.9 million) through this elaborate con.
The scam came to light when a victim came forward to report that an Instagram advertisement featuring a deepfake video of Gisele Bündchen was false. The video, which showed the fake "Gisele" recommending a skincare company, was exceptionally well produced. As investigators dug further, they uncovered a whole network of deceptive social media pages, payment gateways, and laundering channels spread across five Brazilian states.
The Role of AI and Deepfakes in Modern Fraud
This is one of the first large-scale cases in Brazil in which AI-generated deepfakes have been used to perpetrate financial fraud. Deepfake technology, powered by machine learning algorithms, can realistically mimic human appearance and speech, and it has become increasingly accessible and sophisticated. Where specialized expertise and computing resources were once required, an online tool or app now suffices.
Deepfakes give criminals a psychological advantage: audiences are more willing to accept an ad as genuine when they see a familiar and trusted face, a celebrity known for integrity and success. The human brain is wired to trust such visual cues, and deepfakes exploit this cognitive bias. Unlike phishing emails riddled with spelling and grammatical errors, deepfake videos are immersive, emotional, and visually convincing.
This is the growing terrain of AI-enabled misinformation: from financial scams to political propaganda, manipulated media is eroding trust in the digital ecosystem.
Legalities and Platform Accountability
The Brazilian government has taken a proactive stance on the issue. In June 2025, the country's Supreme Court held that social media platforms can be held liable for failing to expeditiously remove criminal content, even in the absence of a formal court order. That judgment could go a long way toward shaping platform accountability in Brazil, and potentially worldwide, as other jurisdictions adopt processes to deal with AI-generated fraud.
Meta, the parent company of Instagram, has said its policies forbid "ads that deceptively use public figures to scam people." Meta claims to use advanced detection mechanisms, trained review teams, and user tools for reporting violations. Yet the persistence of such scams shows that enforcement mechanisms still lag the pace and scale of AI-based deception.
Why These Scams Succeed
There are several reasons these AI-powered scams succeed.
- Trust Through Familiarity: People tend to believe claims made by a face they know and trust.
- Micro-Fraud: Keeping each victim's loss small keeps the number of complaints low.
- Speed of Content Creation: Using AI tools, criminals generate new ads faster than platforms can review and remove them.
- Cross-Platform Propagation: Once a deepfake ad gains traction, it is reshared across other social networks, compounding the problem.
- Lack of Public Awareness: Most users still cannot recognize manipulated media, especially high-quality deepfakes.
Wider Implications on Cybersecurity and Society
The Brazilian case is a microcosm of a much bigger problem. As deepfake technology evolves, AI-generated deception threatens not only individuals but also institutions, markets, and democratic systems. From investment scams and fake charities to synthetic identities used in corporate fraud, the possibilities for abuse are endless.
Moreover, as cybercriminals adopt generative AI, law enforcement faces obstacles in attribution, evidence validation, and digital forensics. Distinguishing the authentic from the manipulated now demands forensic AI tools, even as attackers deploy AI of their own, fueling a growing technological arms race between the two sides.
Protecting Citizens from AI-Powered Scams
Public awareness remains the best defence against such scams. Gisele Bündchen's team has encouraged the public to verify any advertisement through official brand or celebrity channels before engaging with it. Consumers should be wary of offers that appear "too good to be true" and double-check URLs for authenticity before sharing any personal information.
At the individual level, a few simple practices go a long way toward reducing the risk:
- Verify an advertisement's origin before clicking or sharing it
- Never share any monetary or sensitive personal information through an unverifiable link
- Enable two-factor authentication on all your social accounts
- Periodically check transaction history for any unusual activity
- Report any deepfake or fraudulent advertisement immediately to the platform or cybercrime authorities
Collaboration between governments and technology companies will be the way ahead. Investing in AI-based detection systems, cooperating on cross-border law enforcement, and building digital literacy programs can stem this rising tide of synthetic media scams.
Conclusion
The Gisele Bündchen deepfake case in Brazil is a clarion call for citizens and legislators alike. It shows how cybercrime has evolved to profit from the very AI technologies once hailed for innovation and creativity. In this new digital frontier, the line between authenticity and manipulation grows thinner by the day.
Keeping the public safe in this new environment will certainly require strong cybersecurity measures, but it will demand equal measures of vigilance, awareness, and ethical responsibility. Deepfakes are not only a technology problem but a societal one, calling for global cooperation, media literacy, and accountability at every level of the digital ecosystem.