#FactCheck: ‘Israel Apologizes to Iran’ Video Is AI-Generated
Executive Summary:
A viral video claiming to show Israelis pleading with Iran to "stop the war" is not authentic. Our research shows that the footage is AI-generated, likely created using tools like Google’s Veo, and is not evidence of a real protest. The video features unnatural visuals and errors typical of AI fabrication. It is part of a broader wave of misinformation surrounding the Israel-Iran conflict, in which AI-generated content is widely used to manipulate public opinion. This incident underscores the growing challenge of distinguishing real events from digital fabrications in global conflicts and highlights the importance of media literacy and fact-checking.
Claim:
An X-verified user with the handle "Iran, stop the war, we are sorry" posted a video featuring people holding placards and the Israeli flag. The caption suggests that Israeli citizens are calling for peace and expressing remorse, stating, "Stop the war with Iran! We apologize! The people of Israel want peace." The user further claims that Israel, having allegedly initiated the conflict by attacking Iran, is now seeking reconciliation.

Fact Check:
The bottom-right corner of the video displays a "VEO" watermark, suggesting it was generated using Google's AI tool, VEO 3. The video exhibits several noticeable inconsistencies such as robotic, unnatural speech, a lack of human gestures, and unclear text on the placards. Additionally, in one frame, a person wearing a blue T-shirt is seen holding nothing, while in the next frame, an Israeli flag suddenly appears in their hand, indicating possible AI-generated glitches.

We further analyzed the video using the AI detection tool Hive Moderation, which revealed a 99% probability that the video was generated using artificial intelligence. To validate this finding, we examined a keyframe from the video separately, which likewise showed a 99% probability of being AI-generated. These results strongly indicate that the video is not authentic and was most likely created using advanced AI tools.
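Detection tools like Hive return a probability score that still has to be interpreted before publishing a verdict. The sketch below shows one way to map such a score onto the labels used in this fact-check; the threshold values are illustrative assumptions, not figures published by Hive Moderation.

```python
def verdict_from_ai_score(score: float) -> str:
    """Map an AI-detection probability (0.0-1.0) to a verdict label.

    Thresholds are illustrative assumptions for this sketch, not
    values documented by Hive Moderation or any other detector.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    if score >= 0.90:
        return "likely AI-generated"
    if score >= 0.50:
        return "possibly AI-generated"
    return "likely authentic"

# Both the full video and the extracted keyframe scored 0.99 (99%).
print(verdict_from_ai_score(0.99))  # likely AI-generated
```

In practice a fact-checker would treat such a score as one signal among several, alongside watermarks and visual inconsistencies, rather than as conclusive proof on its own.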

Conclusion:
The video is highly likely to be AI-generated, as indicated by the VEO watermark, visual inconsistencies, and a 99% probability from HIVE Moderation. This highlights the importance of verifying content before sharing, as misleading AI-generated media can easily spread false narratives.
- Claim: AI-generated video of Israelis saying "Stop the War, Iran, We Are Sorry".
- Claimed On: Social Media
- Fact Check: AI-Generated and Misleading

Introduction
In this ever-evolving world of technology, cybercriminals continue to explore new and innovative methods to exploit and intimidate their victims. One recent shocking incident was reported from Bharatpur, Rajasthan, where cyber crooks staged a mock court session to frighten their targets. This elaborate operation, designed to induce fear and force compliance, exemplifies the audacity and ingenuity of modern cybercriminals. In this blog article, we delve deeper into this concerning incident to shed light on the strategies used and the ramifications for cybersecurity.
The Setup
The case was reported from Gopalgarh village in Bharatpur, Rajasthan, and unfolded with a shocking twist: a father-son duo, Tahir Khan and his son Talim Khano, had been defrauding people for monetary gain by staging a mock court setting and recording the proceedings to intimidate their victims into paying hefty sums. In this case alone, they extorted Rs 2.69 crore through sextortion. The duo would trace their targets on social media platforms, blackmail them, and extract large amounts of money.
An official complaint was filed by a 69-year-old victim who was singled out through his social media accounts, his friends, and his posts. Initially, the duo contacted the victim with a pre-recorded video featuring a nude woman, coaxing him into a compromising situation. Posing as officials from the Delhi Crime Branch and the CBI, they threatened the victim, claiming that a girl had approached them intending to file a complaint against him. Later, masquerading as YouTubers, they threatened to release the incriminating video online. Adding to the charade, they impersonated a local MLA and presented the victim with a forged stamp paper alleging molestation charges. Eventually, posing as Delhi Crime Branch officials again, they demanded money to settle the case after falsely stating that they had apprehended the girl. To further manipulate the victim, the accused staged a court proceeding, recorded it, and sent it to him, creating the illusion that the matter was concluded. This case of sextortion stands out as the only known instance in which the culprits went to such lengths, staging and recording a mock court to extort money. Furthermore, it was discovered that the accused had fabricated a letter from the Delhi High Court, adding another layer of deception to their scheme.
The Investigation
The complaint was filed with a cyber cell. The subsequent investigation revealed this to be one of the most significant sextortion cases in the country. The father-son pair skillfully assumed five different roles, meticulously executing their plan, which included creating a simulated court environment. Investigators also managed to recover Rs 25 lakh from the accused duo, some from their residence in Gopalgarh and the rest from the bank account where it was deposited.
The Tricks used by the duo
The fake court scene was a meticulously built web of deception designed to instil fear and a sense of vulnerability in the victim. Let’s look at the tricks the duo used to fool people.
- Social Engineering Strategies: Cybercriminals are skilled at using social engineering to acquire the trust of their victims. In this case, they may have employed phishing emails or phone calls to gather personal information about the victim. By posing as reputable individuals or organisations, the crooks tricked the victim into disclosing vital information, giving them the ammunition they needed to appear trustworthy.
- Making a False Narrative: To make the fictitious court scenario more credible, the cyber hackers concocted a captivating story based on the victim’s purported legal problems. They might have created plausible papers to give their plan authority, such as forged court summonses, legal notifications, or warrants. They attempted to create a sense of impending danger and an urgent necessity for the victim to comply with their demands by deploying persuasive language and legal jargon.
- Psychological Manipulation: The perpetrators of the fictitious court scenario were well aware of the power of psychological manipulation in coercing their victims. They hoped to emotionally overwhelm the victim by using fear, uncertainty, and the possible implications of legal action. The offenders probably used threats of incarceration, fines, or public exposure to increase the victim’s fear and hinder their capacity to think critically. The idea was to use desperation and anxiety to force the victim to comply.
- Use of Technology to Strengthen Deception: Technological advancements have given cyber thieves tremendous tools to strengthen their misleading methods. The simulated court scenario might have included speech modulation software or deep fake technology to impersonate the voices or appearances of legal experts, judges, or law enforcement personnel. This technology made the deception even more believable, blurring the border between fact and fiction for the victim.
The use of technology in cybercriminals’ misleading techniques has considerably increased their capacity to fool and influence victims. Cybercriminals may develop incredibly realistic and persuasive simulations of judicial processes using speech modulation software, deep fake technology, digital evidence alteration, and real-time communication tools. Individuals must be attentive, gain digital literacy skills, and practice critical thinking when confronting potentially misleading circumstances online as technology advances. Individuals can better protect themselves against the expanding risks posed by cyber thieves by comprehending these technological breakthroughs.
What to do?
Seeking Help and Reporting Incidents: If you or anyone you know falls victim to cybercrime or is deceived by cybercrooks, it is essential to seek help and act quickly. When confronted with disturbing scenarios such as the mock court scene staged by these criminals, victims must report the incident promptly. Prompt reporting serves various purposes, including raising awareness, assisting with investigations, and preventing similar crimes from occurring again. Victims should take the following steps:
- Contact your local law enforcement: Inform local law enforcement about the cybercrime incident. Provide them with pertinent facts and evidence, since they have the experience and resources to investigate cybercrime and catch the offenders involved.
- Seek Assistance from a Cybersecurity specialist: Consult a cybersecurity specialist or respected cybersecurity business to analyse the degree of the breach, safeguard your digital assets, and obtain advice on minimising future risks. Their knowledge and forensic analysis can assist in gathering evidence and mitigating the consequences of the occurrence.
- Preserve Evidence: Keep any evidence relating to the event, including emails, texts, and suspicious actions. Avoid erasing digital evidence, and consider capturing screenshots or creating copies of pertinent exchanges. Evidence preservation is critical for investigations and possible legal procedures.
Conclusion
The fake court scene incident shows how far cybercriminals will go to deceive and abuse their victims. These criminals sought to exploit the victim's fear and vulnerability through social engineering, the fabrication of a false narrative, the manipulation of personal information, psychological pressure, and the use of technology. Individuals can better defend themselves against cybercrooks by remaining watchful and sceptical.

Executive Summary:
A viral photo of a Maldivian building with an alleged oversized portrait of Indian Prime Minister Narendra Modi and the word "SURRENDER" spread widely on social media, prompting fear, indignation, and anxiety. Our research, however, showed that the image was manipulated and not authentic.

Claim:
A viral image claims that the Maldives displayed a huge portrait of PM Narendra Modi on a building front, along with the phrase “SURRENDER,” implying an act of national humiliation or submission.

Fact Check:
After a thorough examination of the viral post, we found that it had been altered. While the image showed the same building, the claim that it displayed Prime Minister Modi’s portrait together with the word “SURRENDER,” as seen in the viral version, is false. We also checked the image with the Hive AI Detector, which marked it as 99.9% fake, further confirming that the viral image had been digitally altered.

During our research, we also found several images from Prime Minister Modi’s visit, including one of the same building displaying his portrait, shared by the official X handle of the Maldives National Defence Force (MNDF). The post stated, “His Excellency Prime Minister Shri @narendramodi was warmly welcomed by His Excellency President Dr.@MMuizzu at Republic Square, where he was honored with a Guard of Honor by #MNDF on his state visit to Maldives.” This image, captured from a different angle, also does not feature the word “SURRENDER.”
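Comparing a viral image against a known original, as done above with the MNDF photo, can be partly automated with perceptual hashing. The sketch below implements a minimal difference hash (dHash) in pure Python, using synthetic checkerboard data in place of real photos; real verification pipelines use image libraries and more robust hashes.

```python
def dhash_bits(gray, hash_w=8, hash_h=8):
    """Difference hash (dHash): sample the grayscale image down to a
    (hash_w + 1) x hash_h grid by nearest-neighbour lookup, then record
    whether each cell is brighter than its right-hand neighbour."""
    h, w = len(gray), len(gray[0])
    bits = []
    for y in range(hash_h):
        row = gray[y * h // hash_h]
        for x in range(hash_w):
            left = row[x * w // (hash_w + 1)]
            right = row[(x + 1) * w // (hash_w + 1)]
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Synthetic stand-ins for real photos: a checkerboard "original" and a
# copy with one region whitened (mimicking an added banner or overlay).
original = [[255 if (x // 8 + y // 8) % 2 == 0 else 0 for x in range(64)]
            for y in range(64)]
altered = [row[:] for row in original]
for y in range(16, 48):
    for x in range(16, 48):
        altered[y][x] = 255

h_orig = dhash_bits(original)
h_alt = dhash_bits(altered)
print(hamming(h_orig, h_alt))  # > 0: the two images differ perceptually
```

A Hamming distance of zero means the two images are perceptually identical at this resolution; a large distance flags a likely alteration worth closer manual inspection.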


Conclusion:
The claim that the Maldives showed a picture of PM Modi with a surrender message is incorrect and misleading. The image is altered and is being spread to mislead people and stir up controversy. Users should check the authenticity of photos before sharing.
- Claim: Viral image shows the Maldives mocking India with a surrender sign
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation on a large scale. These technologies can create manipulated audio and video content, propagate political propaganda, defame individuals, and incite societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order: it has the potential to affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation requires expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's content-generation capabilities have grown exponentially in recent years. Legitimate uses of AI often take a backseat, resulting in the exploitation of content that already exists on the internet. A prime example is AI-powered bots flooding social media platforms with fake news at a scale and speed that makes it impossible for humans to track, let alone verify, what is true or false.
The netizens in India are greatly influenced by viral content on social media. AI-generated misinformation can have particularly negative consequences. Being literate in the traditional sense of the word does not automatically guarantee one the ability to parse through the nuances of social media content authenticity and impact. Literacy, be it social media literacy or internet literacy, is under attack and one of the main contributors to this is the rampant rise of AI-generated misinformation. Some of the most common examples of misinformation that can be found are related to elections, public health, and communal issues. These issues have one common factor that connects them, which is that they evoke strong emotions in people and as such can go viral very quickly and influence social behaviour, to the extent that they may lead to social unrest, political instability and even violence. Such developments lead to public mistrust in the authorities and institutions, which is dangerous in any economy, but even more so in a country like India which is home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
Generative AI (GAI) is a technology that has entered the realm of autonomous content production and language creation, which is linked to the issue of misinformation. It is often difficult to determine if content originates from humans or machines and if we can trust what we read, see, or hear. This has led to media users becoming more confused about their relationship with media platforms and content and highlighted the need for a change in traditional journalistic principles.
We have seen a number of different examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling the Democrats in the U.S. not to vote. The consequences of such content and the impact it could have on life as we know it are almost too vast to even comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? As such, the safe and ethical use and applications of this technology needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content makes it difficult to hold perpetrators accountable, given the massive amount of data generated. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the protection of free speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already-complex issue, as AI-generated misinformation can be tailored to different languages and cultural contexts, making it harder to detect and counter. Strategies catering to this multilingual population are therefore necessary.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework that specifically addresses AI-generated content. It should include stricter penalties for the originators and disseminators of fake content, in proportion to its consequences. The framework should also establish clear, concise guidelines for social media platforms, ensuring that proactive measures are taken to detect and remove AI-generated misinformation.
- Investing in tools that are driven by AI for customised detection and flagging of misinformation in real time. This can help in identifying deepfakes, manipulated images, and other forms of AI-generated content.
- Encouraging collaboration between tech companies, cybersecurity organisations, academic institutions, and government agencies to develop solutions for combating misinformation.
- Digital literacy programs will empower individuals by training them to evaluate online content. Educational programs in schools and communities teach critical thinking and media literacy skills, enabling individuals to better discern between real and fake content.
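As a toy illustration of the AI-driven flagging tools mentioned above, the sketch below trains a tiny multinomial Naive Bayes text classifier in pure Python. The labeled headlines are invented for this example, and production systems use far richer features, larger corpora, and modern language models; this only shows the basic shape of automated content triage.

```python
import math
from collections import Counter

def train(docs):
    """docs: iterable of (text, label) pairs. Returns per-label word
    counts and per-label document totals."""
    counts, totals = {}, Counter()
    for text, label in docs:
        c = counts.setdefault(label, Counter())
        c.update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(totals.values())
    best, best_lp = None, float("-inf")
    for label, c in counts.items():
        lp = math.log(totals[label] / n_docs)  # log prior
        denom = sum(c.values()) + len(vocab)   # smoothed denominator
        for w in text.lower().split():
            lp += math.log((c[w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented toy training data, purely for illustration.
docs = [
    ("shocking secret video leaked share now", "suspect"),
    ("miracle cure doctors hate share before deleted", "suspect"),
    ("government announces new budget allocation", "credible"),
    ("court hears arguments in ongoing case", "credible"),
]
counts, totals = train(docs)
print(classify("shocking video share now", counts, totals))  # suspect
```

Even this crude model illustrates why flagging must be paired with human review: it can only echo patterns in its training data, and adversaries adapt their wording faster than static word lists.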
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks it poses are growing in step with the nation's rapid technological development. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Indian policymakers need to rise to the challenge by developing comprehensive strategies that not only focus on regulation and technological innovation but also encourage public education. Bad actors misuse AI technologies to create hyper-realistic fake content, including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, and digital defense frameworks, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62