#FactCheck- Old US Troops Homecoming Video Falsely Linked to Iran Ceasefire
Executive Summary
Talks between the United States and Iran over a ceasefire reportedly held in Islamabad on Saturday ended without a resolution. Meanwhile, a video circulating on social media claims to show US troops returning home following a ceasefire in the Middle East conflict.
However, research by CyberPeace found the claim to be false. The viral video is not linked to any recent ceasefire. It actually dates back to March and shows the return of Iowa National Guard troops after months of deployment in the Middle East.
Claim
An X (formerly Twitter) user posted the video on April 7, 2026, claiming, “Another victory for Iran: American soldiers have started arriving home. After leaving the Middle East, American soldiers are saying, ‘Why did we fight for Israel? If Iran is talking about peace, we will also stand with them.’”

Fact Check
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search using Google Lens. This led us to posts by Newsradio 1040 WHO, which had shared the same footage on March 12 across Facebook and Instagram.


In its caption, the radio station stated that nearly 600 Iowa soldiers had returned home after a nine-month deployment in the Middle East as part of Operation Inherent Resolve. The segment, narrated by journalist Claire Burnett, explained that the soldiers belonged to the 2nd Brigade Combat Team, 34th Infantry Division, and had been deployed to Iraq and Syria. The footage was recorded at the 132nd Wing base of the Iowa Air National Guard in Des Moines.

For further confirmation, a March 12 report by KCCI 8 News also showed the same aircraft and troops, verifying the authenticity and timeline of the footage.

Operation Inherent Resolve, launched in 2014, is a US-led campaign aimed at supporting local forces in the fight against the Islamic State (ISIS) and ensuring its lasting defeat.
https://www.kcci.com/article/iowans-welcome-national-guard-unit-home-from-deployment-in-middle-east/70729105

Conclusion
The viral claim is false and misleading. The video does not show US troops returning due to any recent ceasefire between the United States and Iran. Instead, it captures the routine homecoming of Iowa National Guard soldiers in March after completing a scheduled deployment in the Middle East. There is no evidence linking the footage to current geopolitical developments or any ceasefire agreement. The claim has been taken out of context and shared with a misleading narrative to create confusion around ongoing international events.

Introduction
In the ever-evolving world of technological innovation, a new chapter is being inscribed by the bold visionaries at Figure AI, a startup that is not merely capitalising on the artificial intelligence craze but seeking to crest its very pinnacle. With the recent influx of a staggering $675 million in funding, this Sunnyvale, California-based enterprise has captured the imagination of industry giants and venture capitalists alike, all betting on a future where humanoid robots transcend the realm of science fiction to become an integral part of our daily lives.
The narrative of Figure AI's ascent is punctuated by the names of tech luminaries and corporate giants. Jeff Bezos, through his firm Explore Investments LLC, has infused a hefty $100 million into the venture. Microsoft, not to be outdone, has contributed a cool $95 million. Nvidia and an Amazon-affiliated fund have each bestowed $50 million upon Figure AI's ambitious endeavours. This surge of capital is a testament to the potential seen in the company's mission to develop general-purpose humanoid robots that promise to revolutionise industries and redefine human labour.
The Catalyst for Change
This investment craze can be traced back to the emergence of OpenAI's ChatGPT, a chatbot that caught the public eye in November 2022. Its success has not only ushered in a new era for AI but has also sparked a race among investors eager to stake their claim in startups determined to outshine their more established counterparts. OpenAI itself, once mulling over the acquisition of Figure AI, has now joined the ranks of its benefactors with a $5 million investment.
The roster of backers reads like a who's who of the tech and venture capital world. Intel's venture capital arm, LG Innotek, Samsung's investment group, Parkway Venture Capital, Align Ventures, ARK Venture Fund, Aliya Capital Partners, and Tamarack have all cast their lot with Figure AI, signalling a broad consensus on the startup's potential to disrupt and innovate.
Yet, when probed for insights, these major players—Amazon, Nvidia, Microsoft, and Intel—have maintained a Sphinx-like silence, while Figure AI and other entities mentioned in the report have refrained from immediate responses to inquiries. This veil of secrecy only adds to the intrigue surrounding the company's prospects and the transformative impact its technology may have on society.
Need For AI Robots
Figure AI's robots are not mere assemblages of metal and circuitry; they are envisioned as versatile beings capable of navigating a multitude of environments and executing a diverse array of tasks. From the aisles of warehouses to the bustling corridors of retail spaces, these humanoid automatons are being designed to fill the void of millions of jobs projected to remain vacant due to a shrinking human labour force.
The company's long-term mission statement is as audacious as it is altruistic: 'to develop general-purpose humanoids that make a positive impact on humanity and create a better life for future generations.' This noble pursuit is not just about engineering efficiency; it is about reshaping the very fabric of work, liberating humans from hazardous and menial tasks, and propelling us towards a future where our lives are enriched with purpose and fulfilment.
Conclusion
As we stand on the cusp of a new digital world, the strides of Figure AI serve as a beacon, illuminating the path towards machine and human symbiosis. The investment frenzy that has enveloped the company is a clarion call to all dreamers, pragmatists and innovators alike that the age of humanoid helpers is upon us, and the possibilities are as endless as our collective imagination.
Figure AI is forging a future where robots walk among us, not as novelties or overlords but as partners in forging a world where technology and humanity work together to unlock untold potential. The story of Figure AI is not just one of investment and innovation; it is a narrative of hope, a testament to the indomitable spirit of human ingenuity, and a preview of the wondrous epoch that lies just beyond the horizon.
References
- https://cybernews.com/tech/openai-bezos-nvidia-fund-robot-startup-figure-ai/
- https://www.thedailystar.net/business/news/bezos-nvidia-join-openai-funding-humanoid-robot-startup-3551476
- https://www.bloomberg.com/news/articles/2024-02-23/bezos-nvidia-join-openai-microsoft-in-funding-humanoid-robot-startup-figure-ai
- https://economictimes.indiatimes.com/tech/technology/bezos-nvidia-join-openai-in-funding-humanoid-robot-startup-report/articleshow/107967102.cms?from=mdr

Executive Summary:
BrazenBamboo’s DEEPDATA malware represents a new wave of advanced cyber espionage tools, exploiting a zero-day vulnerability in Fortinet FortiClient to extract VPN credentials and sensitive data through fileless malware techniques and secure C2 communications. With its modular design, DEEPDATA targets browsers, messaging apps, and password stores, while leveraging reflective DLL injection and encrypted DNS to evade detection. Cross-platform compatibility with tools like DEEPPOST and LightSpy highlights a coordinated development effort, enhancing its espionage capabilities. To mitigate such threats, organizations must enforce network segmentation, deploy advanced monitoring tools, patch vulnerabilities promptly, and implement robust endpoint protection. Vendors are urged to adopt security-by-design practices and incentivize vulnerability reporting, as vigilance and proactive planning are critical to combating this sophisticated threat landscape.
Introduction
The increased use of zero-day vulnerabilities by sophisticated threat actors reinforces the need for stronger countermeasures. One such threat actor, BrazenBamboo, exploits a zero-day vulnerability in Fortinet FortiClient for Windows through the DEEPDATA advanced malware framework. This research explores the technical details of DEEPDATA, the techniques used in its operations, and its broader implications.
Technical Findings
1. Vulnerability Exploitation Mechanism
The vulnerability in Fortinet’s FortiClient lies in its failure to securely handle sensitive information in memory. DEEPDATA capitalises on this flaw via a specialised plugin, which:
- Accesses the VPN client’s process memory.
- Extracts unencrypted VPN credentials from memory, bypassing typical security protections.
- Transfers credentials to a remote C2 server via encrypted communication channels.
2. Modular Architecture
DEEPDATA exhibits a highly modular design, with its core components comprising:
- Loader Module (data.dll): Decrypts and executes other payloads.
- Orchestrator Module (frame.dll): Manages the execution of multiple plugins.
- FortiClient Plugin: Specifically designed to target Fortinet’s VPN client.
Each plugin operates independently, allowing flexibility in attack strategies depending on the target system.
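The loader/orchestrator/plugin split described above can be illustrated with a minimal sketch. This is not actual DEEPDATA code (the real components are Windows DLLs); it simply models, in Python, how an orchestrator can manage independent, swappable plugins so that a target-specific module can be added without touching the rest.

```python
# Illustrative sketch of a modular orchestrator-and-plugins design.
# All names and behaviours here are hypothetical, for explanation only.
from typing import Callable, Dict, List


class Orchestrator:
    """Analogue of the orchestrator module: registers and runs plugins."""

    def __init__(self) -> None:
        self._plugins: Dict[str, Callable[[], str]] = {}

    def register(self, name: str, plugin: Callable[[], str]) -> None:
        self._plugins[name] = plugin

    def run_all(self) -> List[str]:
        # Each plugin runs independently, so one target-specific module
        # can be swapped or removed without affecting the others.
        return [plugin() for plugin in self._plugins.values()]


orch = Orchestrator()
orch.register("vpn_client", lambda: "collected: vpn data (simulated)")
orch.register("browser", lambda: "collected: browser store (simulated)")
results = orch.run_all()
```

The key design property, mirrored from the analysis above, is that the orchestrator knows nothing about what each plugin does internally; it only dispatches them.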
3. Command-and-Control (C2) Communication
DEEPDATA establishes secure channels to its C2 infrastructure using WebSocket and HTTPS protocols, enabling stealthy exfiltration of harvested data. Technical analysis of network traffic revealed:
- Dynamic IP switching for C2 servers to evade detection.
- Use of Domain Fronting, hiding C2 communication within legitimate HTTPS traffic.
- Time-based communication intervals to minimise anomalies in network behaviour.
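The time-based beaconing pattern noted above is also what defenders can hunt for: connections whose inter-arrival times are suspiciously regular. The following is a minimal defensive sketch, assuming connection timestamps have already been exported from a flow log; the threshold and function name are illustrative choices, not part of any vendor tool.

```python
# Defensive sketch: flag hosts whose outbound connections arrive at
# near-constant intervals (low coefficient of variation), a common
# beaconing signature. Thresholds here are illustrative assumptions.
from statistics import mean, pstdev
from typing import List


def looks_like_beacon(timestamps: List[float], max_jitter: float = 0.1) -> bool:
    """Return True if intervals between connections are suspiciously regular.

    max_jitter is the allowed coefficient of variation (stdev / mean).
    """
    if len(timestamps) < 4:
        return False  # too few samples to judge
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(intervals)
    return m > 0 and (pstdev(intervals) / m) <= max_jitter


# A host checking in every ~60 s is far more regular than normal browsing.
beacon_times = [0, 60, 120.2, 179.9, 240.1]
browsing_times = [0, 5, 90, 91, 400]
```

In practice this heuristic would be combined with destination reputation and payload analysis, since legitimate software (update checks, NTP) also beacons.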
4. Advanced Credential Harvesting Techniques
Beyond VPN credentials, DEEPDATA is capable of:
- Dumping password stores from popular browsers, such as Chrome, Firefox, and Edge.
- Extracting application-level credentials from messaging apps like WhatsApp, Telegram, and Skype.
- Intercepting credentials stored in local databases used by apps like KeePass and Microsoft Outlook.
5. Persistence Mechanisms
To maintain long-term access, DEEPDATA employs sophisticated persistence techniques:
- Registry-based persistence: Modifies Windows registry keys to reload itself upon system reboot.
- DLL Hijacking: Substitutes legitimate DLLs with malicious ones to execute during normal application operations.
- Scheduled Tasks and Services: Configures scheduled tasks to periodically execute the malware, ensuring continuous operation even if detected and partially removed.
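Each of the persistence mechanisms above leaves an enumerable artifact (a registry Run key, a replaced DLL path, a scheduled-task action). A hunting sketch, assuming autorun entries have already been exported from a host (e.g. via Autoruns or an EDR agent), might flag entries that reference the loader or orchestrator DLL names from this report. The entry data below is invented for illustration.

```python
# Hunting sketch (illustrative, not vendor tooling): scan exported autorun
# entries for commands referencing the IOC DLL names from this report.
from typing import Dict, List

IOC_DLLS = {"data.dll", "frame.dll"}  # loader and orchestrator module names


def suspicious_autoruns(entries: Dict[str, str]) -> List[str]:
    """Return names of autorun entries whose command references an IOC DLL."""
    hits = []
    for name, command in entries.items():
        cmd = command.lower()
        if any(ioc in cmd for ioc in IOC_DLLS):
            hits.append(name)
    return hits


autorun_entries = {
    "OneDrive": r"C:\Users\a\AppData\Local\Microsoft\OneDrive\OneDrive.exe",
    "Updater": r"rundll32.exe C:\ProgramData\cache\data.dll,Start",
}
```

Because DLL hijacking replaces a legitimate file rather than adding a new entry, this check should be paired with file-hash comparison against known-good baselines.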
Additional Tools in BrazenBamboo’s Arsenal
1. DEEPPOST
A complementary tool used for data exfiltration, DEEPPOST facilitates the transfer of sensitive files, including system logs, captured credentials, and recorded user activities, to remote endpoints.
2. LightSpy Variants
- The Windows variant includes a lightweight installer that downloads orchestrators and plugins, expanding espionage capabilities across platforms.
- Shellcode-based execution ensures that LightSpy’s payload operates entirely in memory, minimising artifacts on the disk.
3. Cross-Platform Overlaps
BrazenBamboo’s shared codebase across DEEPDATA, DEEPPOST, and LightSpy points to a centralised development effort, possibly linked to a Digital Quartermaster framework. This shared ecosystem enhances their ability to operate efficiently across macOS, iOS, and Windows systems.
Notable Attack Techniques
1. Memory Injection and Data Extraction
Using Reflective DLL Injection, DEEPDATA injects itself into legitimate processes, avoiding detection by traditional antivirus solutions.
- Memory Scraping: Captures credentials and sensitive information in real-time.
- Volatile Data Extraction: Extracts transient data that only exists in memory during specific application states.
2. Fileless Malware Techniques
DEEPDATA leverages fileless infection methods, where its payload operates exclusively in memory, leaving minimal traces on the system. This complicates post-incident forensic investigations.
3. Network Layer Evasion
By utilising encrypted DNS queries and certificate pinning, DEEPDATA ensures that network-level defenses like intrusion detection systems (IDS) and firewalls are ineffective in blocking its communications.
Recommendations
1. For Organisations
- Apply Network Segmentation: Isolate VPN servers from critical assets.
- Enhance Monitoring Tools: Deploy behavioral analysis tools that detect anomalous processes and memory scraping activities.
- Regularly Update and Patch Software: Although Fortinet has yet to patch this vulnerability, organisations must remain vigilant and apply fixes as soon as they are released.
2. For Security Teams
- Harden Endpoint Protections: Implement tools like Memory Integrity Protection to prevent unauthorised memory access.
- Use Network Sandboxing: Monitor and analyse outgoing network traffic for unusual behaviours.
- Threat Hunting: Proactively search for indicators of compromise (IOCs) such as unauthorised DLLs (data.dll, frame.dll) or C2 communications over non-standard intervals.
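The threat-hunting step above can be made concrete with a small sketch. Assuming per-process loaded-module lists have already been collected (for example, from an EDR export or a memory snapshot), the check reduces to matching module base names against the IOC names given in this report; the snapshot data below is invented for illustration.

```python
# Proactive-hunting sketch: flag processes whose loaded-module list
# contains an IOC DLL name. Module lists are assumed pre-collected.
from typing import Dict, List, Set

IOCS: Set[str] = {"data.dll", "frame.dll"}


def processes_with_iocs(modules_by_pid: Dict[int, List[str]]) -> List[int]:
    """Return PIDs whose loaded modules include a known IOC base name."""
    return [
        pid
        for pid, modules in modules_by_pid.items()
        if IOCS & {m.rsplit("\\", 1)[-1].lower() for m in modules}
    ]


snapshot = {
    101: [r"C:\Windows\System32\kernel32.dll"],
    202: [r"C:\Windows\System32\kernel32.dll", r"C:\Temp\frame.dll"],
}
```

Since DEEPDATA also uses reflective injection (no file-backed module), a name match is only one signal; memory-region scanning for unbacked executable pages is the complementary check.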
3. For Vendors
- Implement Security by Design: Adopt advanced memory protection mechanisms to prevent credential leakage.
- Bug Bounty Programs: Encourage researchers to report vulnerabilities, accelerating patch development.
Conclusion
DEEPDATA represents the next generation of cyber espionage tools: more advanced, and tuned for stealth, modularity, and persistence. As BrazenBamboo continues to refine its tactics, organisations and vendors must remain vigilant and ready to respond. Continuous patching, strong threat-detection capabilities, and a well-rehearsed incident-response plan are crucial to combating such attacks.

Introduction
In today’s hyper-connected world, information spreads faster than ever before. But while much attention is focused on public platforms like Facebook and Twitter, a different challenge lurks in the shadows: misinformation circulating on encrypted and closed-network platforms such as WhatsApp and Telegram. Unlike open platforms where harmful content can be flagged in public, private groups operate behind a digital curtain. Here, falsehoods often spread unchecked, gaining legitimacy because they are shared by trusted contacts. This makes encrypted platforms a double-edged sword: they are essential for privacy and free expression, yet uniquely vulnerable to misuse.
As Prime Minister Narendra Modi rightly reminded,
“Think 10 times before forwarding anything,” warning that even a “single fake news has the capability to snowball into a matter of national concern.”
The Moderation Challenge with End-to-End Encryption
Encrypted messaging platforms were built to protect personal communication. Yet, the same end-to-end encryption that shields users’ privacy also creates a blind spot for moderation. Authorities, researchers, and even the platforms themselves cannot view content circulating in private groups, making fact-checking nearly impossible.
Trust within closed groups makes the problem worse. When a message comes from family, friends, or community leaders, people tend to believe it without questioning and quickly pass it along. Features like large group chats, broadcast lists, and “forward to many” options further speed up its spread. Unlike open networks, there is no public scrutiny, no visible counter-narrative, and no opportunity for timely correction.
During the COVID-19 pandemic, false claims about vaccines spread widely through WhatsApp groups, undermining public health campaigns. Even more alarming, WhatsApp rumors about child kidnappers and cow meat in India triggered mob lynchings, leading to the tragic loss of life.
Encrypted platforms, therefore, represent a unique challenge: they are designed to protect privacy, but, unintentionally, they also protect the spread of dangerous misinformation.
Approaches to Curbing Misinformation on End-to-End Platforms
- Regulatory: Governments worldwide are exploring ways to access encrypted data on messaging platforms, creating tensions between the right to user privacy and crime prevention. Approaches like traceability requirements on WhatsApp, data-sharing mandates for platforms in serious cases, and stronger obligations to act against harmful viral content are also being considered.
- Technological Interventions: Platforms like WhatsApp have introduced features such as “forwarded many times” labels and limits on mass forwarding. These tools can be expanded further by introducing AI-driven link-checking and warnings for suspicious content.
- Community-Based Interventions: Ultimately, no regulation or technology can succeed without public awareness. People need to be inoculated against misinformation through pre-bunking efforts and digital literacy campaigns, and taught to use fact-checking websites and tools.
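The forwarding limits and "forwarded many times" labels mentioned above amount to a simple rule: track how often a message has been forwarded, label it once it crosses a threshold, and restrict how widely a viral message can be re-shared. The sketch below is an illustrative model of that logic, not WhatsApp's actual implementation; the threshold values are assumptions.

```python
# Illustrative model of forwarding friction: a per-message forward counter
# that triggers a label and a recipient cap. Thresholds are assumed values,
# not the real platform's parameters.
LABEL_THRESHOLD = 5        # assumed: label after this many forwards
VIRAL_RECIPIENT_CAP = 1    # assumed: viral messages go to one chat at a time


class Message:
    def __init__(self, text: str) -> None:
        self.text = text
        self.forward_count = 0

    @property
    def label(self) -> str:
        if self.forward_count >= LABEL_THRESHOLD:
            return "Forwarded many times"
        return ""

    def forward(self, recipient_count: int) -> bool:
        """Forward to recipient_count chats; refuse bulk shares once viral."""
        if self.forward_count >= LABEL_THRESHOLD and recipient_count > VIRAL_RECIPIENT_CAP:
            return False  # viral message: force one chat at a time
        self.forward_count += 1
        return True


msg = Message("breaking news!!!")
for _ in range(5):
    msg.forward(1)
```

The point of such friction is not to block sharing outright but to slow amplification and signal to recipients that a message did not originate with the sender.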
Best Practices for Netizens
Experts recommend simple yet powerful habits that every user can adopt to protect themselves and others. By adopting these, ordinary users can become the first line of defence against misinformation in their own communities:
- Cross-Check Before Forwarding: Verify claims from trusted platforms & official sources.
- Beware of Sensational Content: Headlines that sound too shocking or dramatic probably need checking. Consult multiple sources for a piece of news. If only one platform or channel is carrying sensational news, it is likely to be clickbait or outright false.
- Stick to Trusted News Sources: Verify news through national newspapers and expert commentary. Remember, not everything on the internet/television is true.
- Look Out for Manipulated Media: Now, with AI-generated deepfakes, it becomes more difficult to tell the difference between original and manipulated media. Check for edited images, cropped videos, or voice messages without source information. Always cross-verify any media received.
- Report Harmful Content: Report misinformation to the platform it is being circulated on and PIB’s Fact Check Unit.
Conclusion
In closed, unmonitored groups, platforms like WhatsApp and Telegram often become safe havens where people trust and forward messages from friends and family without question. Once misinformation takes root, it becomes extremely difficult to challenge or correct, and over time, such actions can snowball into serious social, economic and national concerns.
Preventing this is a matter of shared responsibility. Governments can frame balanced regulations, but individuals must also take initiative: pause, think, and verify before sharing. Ultimately, the right to privacy must be upheld, but with reasonable safeguards to ensure it is not misused at the cost of societal trust and safety.
References
- India WhatsApp ‘child kidnap’ rumours claim two more victims (BBC)
- The people trying to fight fake news in India (BBC)
- Press Information Bureau – PIB Fact Check
- Brookings Institution – Encryption and Misinformation Report (2021)
- Curtis, T. L., Touzel, M. P., Garneau, W., Gruaz, M., Pinder, M., Wang, L. W., Krishna, S., Cohen, L., Godbout, J.-F., Rabbany, R., & Pelrine, K. (2024). Veracity: An Open-Source AI Fact-Checking System. arXiv.
- NDTV – PM Modi cautions against fake news (2022)
- Times of India – Govt may insist on WhatsApp traceability (2019)
- Medianama – Telegram refused to share ISIS channel data (2019)