#FactCheck - AI-Generated Video Falsely Shows US Soldiers Surrendering to Iranian Forces
Executive Summary:
Amid the ongoing conflict between the United States, Israel, and Iran, a video circulating widely on social media claims to show American soldiers kneeling and surrendering to Iranian forces. In the clip, several soldiers appear to be kneeling in front of armed personnel, creating the impression that they have been captured on the battlefield.
The video is being shared with the claim that the Iranian military has taken US soldiers prisoner during the war.
However, research by CyberPeace found that the claim is false. The viral clip is not authentic; it was generated using artificial intelligence. There is no credible evidence to support the claim that American soldiers have been captured by Iranian forces.
Claim
A Facebook user named “News Tick” shared the video on March 12, 2026, claiming that Iran had released footage of captured US soldiers. In the clip, the soldiers can be seen kneeling while armed personnel stand around them, giving the scene a highly dramatic appearance.

Fact Check
To verify the claim, we first searched the internet using relevant keywords. We found no credible reports from reputable news organizations confirming that US soldiers had been captured by Iran during the conflict. A closer examination of the video revealed several visual inconsistencies. The weapons carried by the soldiers appear unclear and oddly shaped. Additionally, the background looks unusually blurred and overly dramatic. The lighting and textures in the footage also appear artificial—common indicators of AI-generated visuals.
To confirm this suspicion, we analyzed the clip using multiple AI detection tools. The tool Hive Moderation indicated a 99% probability that the video was created using artificial intelligence.

Further analysis using Sightengine also suggested that the video was likely AI-generated, estimating an 80% probability of AI creation.
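These web tools can also be reached programmatically. As a rough illustration of how such a check might be scripted, here is a minimal Python sketch using the requests library; the endpoint, parameters, and response fields are hypothetical placeholders, not the actual Hive Moderation or Sightengine API, so consult your chosen provider's documentation for the real interface.

```python
# Sketch: submitting a media file to a hosted AI-content detection API.
# The endpoint, field names, and response shape are illustrative placeholders.
import requests

API_URL = "https://api.example-detector.com/v1/analyze"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

with open("viral_clip.mp4", "rb") as media:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"media": media},
        data={"models": "ai_generated"},  # hypothetical parameter
        timeout=60,
    )

response.raise_for_status()
result = response.json()
# Hypothetical response shape: {"ai_generated_probability": 0.99}
print(f"AI-generated probability: {result['ai_generated_probability']:.0%}")
```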

Conclusion
Our research shows that the viral video claiming to depict American soldiers surrendering and being captured by Iranian forces is fake. The footage has been generated using AI and does not represent a real incident.
Related Blogs

Introduction
Ransomware is one of the most serious cyber threats, causing financial losses, data loss, and reputational damage. In 2023, a new strain called Akira ransomware surfaced. It has targeted enterprises across industries such as BFSI, construction, education, healthcare, manufacturing, real estate, and consulting, primarily in the United States. Akira employs a double-extortion technique: it exfiltrates sensitive data before encrypting it, then threatens to leak or sell the data on the dark web if the ransom is not paid. The Akira ransomware gang has demanded ransoms ranging from $200,000 to millions of dollars.
Uncovering Akira ransomware operations and their targets
The Akira ransomware gang gains unauthorised access to computer systems and then uses sophisticated encryption algorithms to encrypt the data. Once the encryption process is complete, the affected device or network can no longer access its files or use its data.
Files affected by Akira ransomware carry the extension “.akira”, and their icons appear as blank white pages. The gang also operates a data leak site to extort victims and drops a ransom note named “akira_readme.txt”.
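These indicators lend themselves to a simple triage scan. Below is a minimal, read-only Python sketch that looks for the “.akira” extension and the akira_readme.txt note; the scan root is an illustrative placeholder, and a hit is only a signal for further investigation, not a complete detection.

```python
# Sketch: read-only scan for common Akira artifacts: files ending in
# ".akira" and ransom notes named "akira_readme.txt".
import os

SCAN_ROOT = "/data"  # illustrative placeholder; point at the volume to check

encrypted, notes = [], []
for dirpath, _dirnames, filenames in os.walk(SCAN_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if name.lower().endswith(".akira"):
            encrypted.append(path)
        elif name.lower() == "akira_readme.txt":
            notes.append(path)

print(f"{len(encrypted)} '.akira' files, {len(notes)} ransom notes found")
for path in (encrypted + notes)[:20]:  # show at most 20 hits
    print("  ", path)
```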
The Akira gang has stolen corporate data from various organisations and used it as leverage, threatening to publish victims' sensitive or corporate data if the demanded ransom is not paid. It has already leaked the data of four organisations, with leak sizes ranging from 5.9 GB to 259 GB.
How the Akira ransomware gang communicates with victims
The Akira gang provides each victim with a unique negotiation password to initiate communication through a chat system it has deployed for negotiating and demanding ransom. The ransom note, akira_readme.txt, explains how the victim's files were affected and links to the Akira data leak site and negotiation site.
How Akira ransomware differs from Pegasus spyware
Pegasus, first developed in 2011, belongs to one of the most powerful families of spyware. Once it infects a phone, it can read text messages and emails, harvest photos, and record calls; it can even record you through the phone's camera, capture conversations through the microphone, and track your pinpoint location, effectively turning the phone into a surveillance device. In contrast, Akira ransomware encrypts your files and blocks access to your data, then demands a ransom under the threat of leaking the data or withholding decryption.
How to recover from malware attacks
If you are affected by this type of malware, you can use anti-malware tools such as SpyHunter 5 or Malwarebytes to scan your system; this security software can detect and remove suspicious malware files and entries. If malware prevents you from running the scan in normal mode, boot into Safe Mode and run it there. Then look for a relevant decryptor that may help you recover your files. Do not fall into the ransomware gang's trap: there is no guarantee that, after you pay, they will help you recover your data or refrain from leaking it.
Best practices to be safe from such ransomware attacks
- Maintain regular backups of critical data on an offline or otherwise isolated medium, and test that they restore.
- Keep operating systems and software patched and up to date.
- Enable multi-factor authentication, especially for VPN and other remote-access accounts.
- Restrict or disable exposed remote-desktop (RDP) services that are not needed.
- Train employees to recognise phishing emails and suspicious attachments.
- Use reputable endpoint protection and monitor the network for unusual activity.
Conclusion
The Akira ransomware operation poses a serious threat to organisations worldwide, and robust cybersecurity measures are needed to safeguard networks and sensitive data. Organisations must keep their software systems updated and back up data to a secure location on a regular basis. Paying the ransom is discouraged and may even be unlawful in some jurisdictions; instead, report the incident to law enforcement agencies and consult cybersecurity professionals about recovery options.

Introduction
As technology develops, new scams follow, and voice cloning is one such issue that has recently come to light. Scammers have embraced AI, and their methods for deceiving people have changed with it. Deepfake technology can create realistic imitations of a person's voice that are used to commit fraud, dupe someone into giving up crucial information, or impersonate a person for illegal purposes. This post looks at the dangers and risks of AI voice-cloning fraud, how scammers operate, and how you can protect yourself.
What is Deepfake?
A “deepfake” is fake or altered audio, video, or imagery produced with artificial intelligence (AI) that can pass for the real thing; the name combines “deep learning” and “fake”. Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data with machine learning algorithms. Con artists use it to portray someone doing or saying something that never happened; widely circulated voice imitations of the US President are among the best-known examples. The same techniques can be used maliciously, for voice fraud or for spreading false information, so there is growing concern about the potential influence of deepfake technology on society and a need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
In deepfake speech fraud, artificial intelligence (AI) is used to create synthetic audio recordings that sound like real people. Con artists can impersonate someone over the phone and pressure victims into providing personal information or paying money. A scammer may pose as a bank employee, a government official, or a friend or relative by using a deepfaked voice, conveying a false sense of familiarity and urgency to earn the victim's trust and raise the likelihood that they will fall for the hoax. Deepfake speech fraud is increasing in frequency as the underlying technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deepfakes?
Cybercriminals use AI voice deepfakes to pose as trusted people or organisations and mislead victims into handing over private information, money, or system access. With voice-cloning tools they can produce audio that mimics real people, such as CEOs, government officials, or bank employees, and use it to push victims into actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. In phishing attacks, fraudsters craft audio messages that impersonate genuine communications from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Fabricated audio can also be produced to support false claims or accusations, which is particularly dangerous in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfakes give con artists a potent tool for deceiving and manipulating victims, and organisations and the general public alike need to understand the risk and adopt appropriate safety measures.
How to spot voice deepfakes and avoid them
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people, giving rise to a new scam: the deepfake voice scam. The con artist assumes another person's identity and uses a fake voice to trick the victim into handing over money or private information. How can you protect yourself? Here are some guidelines to help you spot these scams and keep away from them; a technical sketch for comparing voices follows the list:
- Steer clear of unsolicited calls: Deepfake voice con artists commonly make unsolicited phone calls while pretending to be bank personnel or government officials.
- Listen closely to the voice: If someone phones you claiming to be a person you know, pay special attention to their voice. Peculiar pauses or inflections in their speech can be a sign of voice fraud.
- Verify the caller's identity: When in doubt, ask for the caller's name, job title, and employer, then check independently that they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal details such as your Aadhaar number, bank account information, or passwords over the phone. Legitimate companies and organisations never request personal or financial information this way; if a caller does, treat it as a warning sign.
- Report any suspicious activity: If you think you have fallen victim to a voice-deepfake fraud, inform the appropriate authorities, such as your bank, credit card company, local police station, or the nearest cyber cell. Reporting the fraud could prevent others from becoming victims.
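Alongside these human checks, one programmatic aid is speaker verification: comparing a voice embedding of a suspicious recording with a known-genuine sample of the same person. The sketch below uses the open-source resemblyzer library; the file names and the similarity threshold are illustrative assumptions, and a low score signals only a voice mismatch, not proof of cloning.

```python
# Sketch: flag a suspicious recording whose voice embedding diverges from a
# known-genuine sample. Requires: pip install resemblyzer numpy
from pathlib import Path

import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()  # pretrained speaker-embedding model

# Hypothetical file paths; replace with your own recordings.
genuine = preprocess_wav(Path("known_genuine.wav"))
suspect = preprocess_wav(Path("suspicious_call.wav"))

emb_a = encoder.embed_utterance(genuine)
emb_b = encoder.embed_utterance(suspect)

# Cosine similarity between the two speaker embeddings.
similarity = float(np.dot(emb_a, emb_b) /
                   (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

# 0.75 is an illustrative threshold, not a calibrated one.
if similarity < 0.75:
    print(f"Voices differ (similarity={similarity:.2f}); treat with suspicion.")
else:
    print(f"Voices match closely (similarity={similarity:.2f}).")
```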
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and harmful effects. It could be used for good, such as improving speech recognition systems or making voice assistants sound more natural, but it can also be used for deepfake voice fraud and impersonation to fabricate stories. As the technology develops and becomes harder to detect, users must be aware of the hazards and take the necessary precautions to protect themselves. Ongoing research into efficient techniques to identify and control the associated risks is also necessary. AI must be deployed responsibly and ethically so that voice deepfake technology benefits society rather than harming or deceiving it.

Introduction
Meta has announced that end-to-end encryption (E2EE) for Instagram direct messages is ending entirely. Every day, billions of people send messages they consider private. A medical update to a family member. A photograph meant for one person. A conversation they would never have in public. For years, E2EE was the technology that made that privacy possible: the digital equivalent of a sealed envelope that only the sender and receiver could open. After May 8, 2026, this will change.
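To make the sealed-envelope analogy concrete, here is a minimal sketch of public-key authenticated encryption using the PyNaCl library. It illustrates the E2EE principle only; it is not Instagram's actual protocol. Note, as the comments point out, that the metadata around the envelope stays visible to whoever relays it.

```python
# Sketch: the "sealed envelope" idea behind E2EE, using PyNaCl's public-key
# authenticated encryption (pip install pynacl). Names are illustrative.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; private keys never leave their device.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice seals the envelope with her private key and Bob's public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"medical update for you only")

# A relaying server sees only ciphertext, plus metadata such as sender,
# recipient, and timestamp, which E2EE does not hide.
print(ciphertext.hex()[:32], "...")

# Only Bob's private key (with Alice's public key) can open the envelope.
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
print(plaintext.decode())
```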
Understanding the Adoption Gap
Meta pointed to low user adoption as the reason for this change. Few people were using encrypted messaging on Instagram, the company said, so the feature was not worth keeping. That explanation raises some questions. Encryption was never switched on by default. Users had to find it and turn it on themselves. It was not advertised. And it was only available in certain regions to begin with, something Meta noted on its own Help Centre page. Features that require users to actively seek them out tend to get used far less than those that simply work from the start. WhatsApp demonstrates this clearly; encryption has been on by default since 2016, for every user, with no action required. Back in 2019, Mark Zuckerberg spoke publicly about building privacy into Meta’s messaging platforms as a core direction for the company. The current decision shows a different vision for the company.
The Commercial Dimension
Encrypted message content is not accessible for advertising purposes by design. In December 2025, Meta updated its privacy policy to allow interactions with its Meta AI assistant to inform personalized advertising recommendations across its platforms. With encryption removed from Instagram direct messages, the content of those conversations enters a data environment that already serves Meta’s advertising systems. Meta has not made a direct public statement connecting these two decisions, but technology analysts and privacy researchers have noted the commercial implications of making previously inaccessible message content available within that ecosystem.
What This Means for Users
From May 8, 2026, the content of Instagram direct messages will be accessible to Meta’s systems. This includes messages relating to personal matters that users may have previously sent under the assumption of encryption. A related concern is the question of data security. Unencrypted message content stored on platform servers creates a larger surface area of sensitive information that could be exposed in the event of a security breach. As platforms collect and retain greater volumes of personal data, the potential consequences of unauthorised access grow correspondingly.
But there is an argument on the other side. Law enforcement agencies and child safety organisations have long maintained that end-to-end encryption limits their ability to detect and act on harmful content. Removing encryption does make certain forms of platform-level content moderation technically feasible where they were not before.
India’s Supreme Court: The Warning Nobody Heeded
India’s Supreme Court said it plainly when hearing the case against Meta’s 2021 WhatsApp privacy policy, which forced hundreds of millions of users to accept data sharing with Facebook or lose access entirely. Chief Justice Surya Kant called it “a decent way of committing theft of private information” and asked how ordinary people could meaningfully consent to policies written in language they cannot understand. He made it human with one line: “A poor woman selling fruits on the streets — will she understand the terms of your policy?” The court ordered Meta not to share a single word of user data until the case is resolved. When Meta’s lawyers argued that encryption protects users anyway, the bench pushed back: encryption protects message content, not the metadata surrounding it. Who you talk to, how often, at what time, from where: all of it is still harvested. The Competition Commission’s own advocate summarised the entire arrangement in four words: “We are the products.”
WhatsApp: A Question Worth Asking
Instagram, Messenger, and WhatsApp are three products inside one ecosystem, owned by Meta, serving one business model. Instagram's encryption is already gone. Is WhatsApp next in line?
WhatsApp has over 850 million monthly active users in India alone. People do not use it for entertainment; it is how families talk, how businesses run, how essential daily communication happens. It is infrastructure, not an app. Meta acquired it in 2014 promising no ads and no data exploitation. By 2021 that promise was already bending. By 2025 ads appeared in the Status section. Both original co-founders had long since left the company over exactly these concerns. Instagram's encryption survived until it conflicted with revenue and regulation. WhatsApp's encryption exists today under the same ownership, the same business model, and the same tightening global regulatory pressure. That is not a reason to panic. It is a reason to pay attention.
Conclusion
Encryption is not permanent. It is a design choice, and like any design choice, it can be undone when priorities shift. After May 8, 2026, Instagram direct messages will no longer be protected the way they once were. For most users, this change will pass unnoticed. But the data those conversations contain will now be accessible in ways it previously was not. What platforms do with user data is rarely announced loudly. Paying attention to the quiet changes matters.
References
- https://help.instagram.com/491565145294150
- https://www.theguardian.com/technology/2026/mar/18/instagram-to-remove-end-to-end-encryption-for-private-messages-in-may
- https://www.androidpolice.com/why-meta-is-getting-rid-of-e2ee/
- https://digitalpolicyalert.org/change/13307
- https://www.skadden.com/insights/publications/2025/06/take-it-down-act
- https://timesofindia.indiatimes.com/india/you-cant-play-with-right-of-privacy-of-citizens-scs-big-warning-to-whatsapp-meta-over-take-it-or-leave-it-policy/articleshow/127878524.cms
- https://proton.me/blog/instagram-end-to-end-encryption
- https://www.forbes.com/sites/parmyolson/2018/09/26/exclusive-whatsapp-cofounder-brian-acton-gives-the-inside-story-on-deletefacebook-and-why-he-left-850-million-behind/