#FactCheck: AI-Generated Image Falsely Shows Donald Trump Raising Fist During Washington Incident
Executive Summary
A photo of Donald Trump is going viral on social media, showing him raising his fist. Users claim the image was taken during a press event in Washington, when security personnel were escorting him out amid reports of gunfire. Research by CyberPeace Research Wing found that the viral image is AI-generated and is being shared with misleading claims.
Claim
On April 26, 2026, an X user shared the image with the caption: “Thank You, Lord our God, for protecting our President.” The post suggests that Trump made the gesture during a chaotic evacuation at a Washington event.

Fact Check
Reports confirm that Trump and senior officials were hurried away from the White House Correspondents’ Association dinner on April 25 after gunshots were reportedly heard from a floor above the ballroom. However, no authentic visuals show Trump raising his fist during the evacuation.
- https://www.nytimes.com/2024/07/14/arts/design/trump-photo-raised-fist.html
- https://edition.cnn.com/2025/04/11/politics/trump-obama-portrait-white-house


Further analysis of the viral image indicates signs of digital manipulation. Google’s SynthID detection tool flagged the file as containing SynthID—an invisible watermark embedded in content generated using Google’s AI tools.
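SynthID's actual watermarking scheme is proprietary and far more robust than anything shown here, but the general idea of an "invisible watermark" can be illustrated with a toy least-significant-bit (LSB) embedding. The sketch below is purely illustrative: it hides a short bit string in pixel intensities without visibly changing them, then recovers it. Unlike SynthID, an LSB mark would not survive recompression or editing.

```python
# Toy illustration of an invisible watermark (NOT SynthID's actual,
# proprietary scheme). One bit is hidden per pixel in the
# least-significant bit of the grayscale intensity.

def embed_watermark(pixels, bits):
    """Hide one bit per pixel in the least-significant bit."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract_watermark(pixels, n_bits):
    """Read back the first n_bits least-significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [200, 57, 134, 90, 12, 240, 77, 33]  # grayscale intensities 0-255
mark = [1, 0, 1, 1]                           # 4-bit watermark
stamped = embed_watermark(pixels, mark)

assert extract_watermark(stamped, 4) == mark                   # recovered
assert all(abs(a - b) <= 1 for a, b in zip(pixels, stamped))   # imperceptible
```

Because each pixel changes by at most one intensity level, the mark is invisible to the eye yet trivially machine-readable, which is the property detection tools exploit.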

Additionally, AI detection platform Hive Moderation assessed that the image is likely AI-generated or a deepfake.

Conclusion
The research confirms that the viral image of Donald Trump raising his fist during a Washington incident is not real. It was created using AI and is being circulated with a misleading narrative.

Introduction
The sexual harassment of minors in cyberspace has become a matter of grave concern that needs to be addressed. Sextortion is the practice of extorting individuals into sharing explicit sexual content under the threat of exposure. This grim activity has evolved into a pervasive issue on several social media platforms, particularly Instagram. To combat it, corporate giants such as Meta have deployed a comprehensive ‘nudity protection’ feature that leverages AI (Artificial Intelligence) algorithms to identify and curb the rapid distribution of unsolicited explicit content.
The Meta Initiative presented a multifaceted approach to improve user safety, especially for young people online, who are more vulnerable to predatory behavior.
The Salient Feature
Instagram’s use of advanced AI algorithms to automatically identify and blur out explicit images shared within direct messages is the driving force behind this initiative. This new safety measure serves two essential purposes.
- Preventing dissemination of sensitive content - The feature, when enabled, obstructs the visibility of sensitive personal pictures and also limits their dissemination.
- Empowering minors to exercise more control over their social media - This cutting-edge feature can be disabled at the user’s discretion, allowing users, including minors, to regulate their exposure to age-inappropriate and harmful material online. The nudity protection feature is enabled by default for all users under 18 on Instagram globally, guaranteeing a baseline standard of security for the most vulnerable demographic of users. Adults retain more autonomy over the feature and receive periodic prompts for its voluntary activation.

When the feature detects an explicit image, it automatically blurs it with a cautionary overlay, enabling recipients to make an informed decision about whether they wish to view the flagged content. The decision to introduce this feature is a sensitive approach to balancing individual agency with institutionalised online protection.
Comprehensive Safety Measures Beyond Nudity Detection
The cutting-edge nudity protection feature is a crucial element of Instagram’s new strategy and is supported by a comprehensive set of measures devised to tackle sextortion and ensure a safe cyber environment for its users:
- Awareness Drives and Safety Tips - Users sending and receiving sexually explicit content are directed to a screen with curated safety tips to ensure complete user awareness and inspire due diligence. These safety tips are critical in raising awareness about the risks of sharing sensitive content and inculcating responsible online behaviour.
- New Technology to Identify Sextortionists - Meta's platforms are constantly evolving, and new sophisticated algorithms are being introduced to better detect malicious accounts engaged in possible sextortion. These proactive measures check for predatory behaviour so that such threats can be neutralised before they escalate and do grave harm.
- Superior Reporting and Support Mechanisms - Instagram is implementing new technology to bolster its reporting mechanisms so that users reporting concerns pertaining to nudity, sexual exploitation and threats are instantaneously directed to local child safety authorities for necessary support and assistance.
This sophisticated approach highlights Instagram's commitment to forging a safer haven for users by addressing various aspects of this grim issue through the three-pronged strategy of detection, prevention and support.
User’s Safety and Accountability
The implementation of the nudity protection feature and various associated safety measures is Meta’s way of tackling the growing concern about user safety in a more proactive manner, especially when it concerns minors. Instagram’s experience with this feature will likely be the sandbox in which Meta tests its new user protection strategy and refines it before extending it to other platforms like Facebook and WhatsApp.
Critical Reception and Future Outlook
The nudity protection feature has been met with positive feedback from experts and online safety advocates, commending Instagram for taking a proactive stance against sextortion and exploitation. However, critics also emphasise the need for continued innovation, transparency, and accountability to effectively address evolving threats and ensure comprehensive protection for all users.
Conclusion
As digital spaces continue to evolve, Meta Platforms must demonstrate an ongoing commitment to adapting its safety measures and collaborating with relevant stakeholders to stay ahead of emerging challenges. Ongoing investment in advanced technology, user education, and robust support systems will be crucial in maintaining a secure and responsible online environment. Ultimately, Instagram's nudity protection feature represents a significant step forward in the fight against online sexual exploitation and abuse. By leveraging cutting-edge technology, fostering user awareness, and implementing comprehensive safety protocols, Meta Platforms is setting a positive example for other social media platforms to prioritise user safety and combat predatory behaviour in digital spaces.
References
- https://www.nbcnews.com/tech/tech-news/instagram-testing-blurring-nudity-messages-protect-teens-sextortion-rcna147402
- https://techcrunch.com/2024/04/11/meta-will-auto-blur-nudity-in-instagram-dms-in-latest-teen-safety-step/
- https://hypebeast.com/2024/4/instagram-dm-nudity-blurring-feature-teen-safety-info

AI-generated content has become increasingly prevalent in the ever-changing dynamics of today's tech landscape. Generative AI has emerged as a powerful tool that enables the creation of hyper-realistic audio, video, and images. While advantageous, this ability has downsides too, particularly around content authenticity and manipulation.
The impact of this content spans the ethical, psychological and social harms seen over the past couple of years. A major concern is the creation of non-consensual explicit content, including fake nudes, in which an individual’s face is superimposed onto explicit images or videos without their consent. This is not just a violation of individual privacy; it can have severe consequences for victims' professional and personal lives. This blog examines the existing laws and whether they are equipped to deal with the challenges that this content poses.
Understanding the Deepfake Technology
A deepfake is a media file (image, video, or speech), typically representing a human subject, that has been altered deceptively using deep neural networks (DNNs). The technology is used to alter a person’s identity, usually in the form of a “face swap” where the identity of a source subject is transferred onto a destination subject. The destination’s facial expressions and head movements remain the same, but the appearance in the video is that of the source. In the case of videos, identities can be substituted by way of replacement or reenactment.
This superimposition can produce realistic fabrications, such as fake nudes. Presently, creating a deepfake is not a costly endeavour. It requires a Graphics Processing Unit (GPU); software that is free, open-source, and easy to download; and graphics editing and audio-dubbing skills. Some of the common apps used to create deepfakes are DeepFaceLab and FaceSwap, both public and open source, supported by thousands of users who actively participate in the evolution and development of this software and its models.
Legal Gaps and Challenges
Multiple gaps and challenges exist in the legal space for deepfakes and their regulation. They are:
- Inadequate legal definitions of AI-generated explicit content often lead to enforcement challenges.
- Jurisdictional challenges arise from the cross-border nature of these crimes, and international cooperation mechanisms for AI-generated content are still in their early stages.
- Current consent-based and harassment laws leave a gap in coverage for AI-generated nudes.
- Proving intent and identifying perpetrators in digital crimes remains a challenge yet to be overcome.
Policy Responses and Global Trends
Presently, the global response to deepfakes is still developing. The UK has enacted the Online Safety Act, the EU has the AI Act, the US has federal measures such as the National AI Initiative Act of 2020, and India is currently developing the India AI Act as specific legislation dealing with AI and its related issues.
The IT Rules, 2021, and the DPDP Act, 2023, regulate digital platforms by mandating content governance, privacy policies, grievance redressal, and compliance with removal orders. Emphasising intermediary liability and safe harbour protections, these laws play a crucial role in tackling harmful content like AI-generated nudes, while the DPDP Act focuses on safeguarding privacy and personal data rights.
Bridging the Gap: CyberPeace Recommendations
- Initiate legislative reforms by advocating for clear and precise definitions within consent frameworks and instituting high penalties for AI-based offences, particularly those involving sexually explicit material.
- Advocate for global cooperation and collaborations by setting up international standards and bilateral and multilateral treaties that address the cross-border nature of these offences.
- Strengthen platform accountability by mandating stricter responsibility for the detection and removal of harmful AI-generated content. Platforms should introduce robust screening mechanisms to counter the large influx of harmful content.
- Run public campaigns that spread awareness and educate users about their rights and the resources available to them if they are targeted by such an act.
Conclusion
The rapid advancement of AI-generated explicit content demands immediate and decisive action. As this technology evolves, the gaps in existing legal frameworks become increasingly apparent, leaving individuals vulnerable to profound privacy violations and societal harm. Addressing this challenge requires adaptive, forward-thinking legislation that prioritises individual safety while fostering technological progress. Collaborative policymaking is essential and requires uniting governments, tech platforms, and civil society to develop globally harmonised standards. By striking a balance between innovation and societal well-being, we can ensure that the digital age is not only transformative but also secure and respectful of human dignity. Let’s act now to create a safer future!
References
- https://etedge-insights.com/technology/artificial-intelligence/deepfakes-and-the-future-of-digital-security-are-we-ready/
- https://odsc.medium.com/the-rise-of-deepfakes-understanding-the-challenges-and-opportunities-7724efb0d981
- https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/

Executive Summary
Amid the ongoing tensions in West Asia between the United States–Israel alliance and Iran since February 28, 2026, a video is rapidly going viral on social media. The clip shows buildings engulfed in flames and thick plumes of smoke following an attack. Several users are sharing it with the claim that it depicts Iran’s recent strike on Tel Aviv, Israel. However, research by CyberPeace found the claim to be misleading. The viral video is actually from August 2025, when Israel carried out airstrikes in Sanaa, the capital of Yemen. It has no connection to the current conflict.
Claim
An Instagram user ‘iran_.news24’ posted the video on March 27, 2026, with the caption: “Iran has turned Israel’s largest city Tel Aviv into hell—fears that 200,000 people have died in the war so far.”
Fact Check
To verify the viral claim, keyframes of the video were extracted and searched using Google Lens. The same video was found posted on August 24, 2025, by a Facebook user ‘Mhmdmhywbalshrby5’. The accompanying text, when translated, stated that it showed Israeli bombardment of Sanaa, Yemen.
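Reverse image search works by matching compact "fingerprints" of images rather than raw pixels, so a keyframe can match earlier uploads even after recompression. The sketch below illustrates this with a minimal average-hash (aHash) in pure Python; real services such as Google Lens use far more robust features, and the 4x4 "frames" here stand in for keyframes that real tooling would first extract and downscale (e.g. with OpenCV or ffmpeg).

```python
# Minimal average-hash (aHash) sketch of near-duplicate frame matching,
# the general idea behind reverse image search. Illustrative only:
# frames are modeled as tiny 4x4 grayscale grids.

def average_hash(frame):
    """One bit per pixel: 1 if the pixel exceeds the frame's mean intensity."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes (lower = more similar)."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 20, 200, 210],
            [15, 25, 205, 215],
            [12, 22, 198, 212],
            [18, 28, 202, 208]]

# The same frame after mild recompression noise (+/- a few intensity levels)
recompressed = [[12, 18, 203, 207],
                [13, 27, 201, 218],
                [10, 24, 196, 214],
                [20, 26, 205, 206]]

unrelated = [[250, 240, 10, 5],
             [245, 235, 12, 8],
             [248, 238, 15, 3],
             [252, 232, 11, 7]]

h_orig = average_hash(original)
assert hamming(h_orig, average_hash(recompressed)) <= 2   # near-duplicate: close
assert hamming(h_orig, average_hash(unrelated)) >= 8      # different scene: far
```

Because the hash thresholds against the frame's own mean, small uniform intensity shifts from recompression leave most bits unchanged, which is why old footage resurfaces in searches despite being re-encoded many times.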

Similarly, another Instagram user ‘ae5ce’ had also shared the same video on August 24, 2025, identifying it as footage from Sanaa.

Media reports further support this finding. According to a report published by Egypt Today on August 24, 2025, Israel carried out multiple airstrikes in Sanaa targeting key locations, including an oil station, a power facility, and the presidential palace. Casualties were also reported. The strikes were said to be in response to attacks by Houthi forces.

Additionally, the New York Post shared another video of the same incident from a different angle on its X (formerly Twitter) handle on August 25, 2025.

Conclusion
The video being circulated with the claim of Iran attacking Tel Aviv is actually old footage from Israeli airstrikes in Yemen in August 2025. It is unrelated to the ongoing conflict.