#FactCheck - Misleading Video of Dubai Airport Attack Circulates Online, Found AI-Generated
Executive Summary
Amid rising tensions in the Middle East following attacks on Iran by the United States and Israel, a video is being shared on social media claiming to show a recent attack at Dubai International Airport. Research by CyberPeace found the viral claim to be false: the video is not real but was created using artificial intelligence technology.
Claim:
An Instagram user shared the viral video on March 1, 2026, claiming it shows an attack at Dubai Airport. The link to the post, the archive link, and a screenshot are provided below.

Fact Check:
To verify the viral claim, we searched Google using relevant keywords but found no credible media report confirming it. On closely examining the viral video, we noticed several unusual visuals and technical inconsistencies, raising suspicion that it might be AI-generated. To verify this, we scanned the video using the AI detection tool Sightengine. According to the results, the video shows an approximately 74 percent likelihood of being AI-generated.
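As a rough illustration of how a percentage like this can be turned into a verdict, the sketch below aggregates hypothetical per-frame likelihood scores into a simple summary. The threshold, field names, and score format are illustrative assumptions, not the actual output of Sightengine or any other tool:

```python
def aggregate_ai_likelihood(frame_scores, threshold=0.5):
    """Summarise per-frame AI-generation likelihoods (0.0-1.0) into a verdict.

    Both the threshold and the output fields are illustrative assumptions,
    not the real output format of any specific detection service.
    """
    if not frame_scores:
        return {"likely_ai_fraction": 0.0, "verdict": "inconclusive"}
    flagged = sum(1 for score in frame_scores if score >= threshold)
    fraction = flagged / len(frame_scores)
    verdict = "likely AI-generated" if fraction >= 0.5 else "inconclusive"
    return {"likely_ai_fraction": fraction, "verdict": verdict}

# Example: hypothetical per-frame scores where most frames exceed the threshold.
scores = [0.9, 0.8, 0.85, 0.7, 0.3, 0.2, 0.95, 0.88, 0.6, 0.75]
result = aggregate_ai_likelihood(scores)
```

In practice, a single aggregate score is only one signal; fact-checkers combine it with keyword searches, reverse image searches, and manual inspection, as described above.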

Conclusion:
Our research found that the viral video is not real but has been created using artificial intelligence technology.
Related Blogs

The more deeply the internet eases its way into our lives, the more obscure parasites linger alongside it, menacing our privacy and data. Among these digital parasites, cyber espionage, hacking, and ransomware never fail to grab the headlines. These hostilities, carried out by cyber criminals, corporate juggernauts, and several state and non-state actors, give them broad access to customers' data, damaging the digital fabric and wellbeing of netizens.
As technology continues to evolve, so does the need for robust safety measures. To tackle these emerging challenges, Korea-based Samsung Electronics has introduced a cutting-edge security tool called Auto Blocker. Introduced in the One UI 6 update, Auto Blocker boasts an array of additional security features, granting users the ability to customize their device's security as per their requirements. Among these features is sandboxing, also known as an 'advanced sandbox' or 'virtual quarantine': a safety measure that separates running programs to prevent the spread of digital vulnerabilities and prohibits the automatic execution of malicious code embedded in images. This shield now extends to third-party apps like WhatsApp and Facebook Messenger, providing better resilience against cyber-attacks on Samsung devices.
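To make the sandboxing idea concrete, here is a toy sketch of the general principle: untrusted input is handed to a separate worker process, so that even if the risky "decoder" crashes or misbehaves, the failure is contained in the child rather than the main application. This is only a conceptual illustration, not Samsung's implementation, which uses far stronger OS-level isolation:

```python
import subprocess
import sys

# Toy illustration of the sandboxing principle (NOT Samsung's implementation):
# the untrusted payload is processed in a separate worker process, so any
# crash or misbehaviour in the "decoder" stays contained in the child.
WORKER = r"""
import sys
data = sys.stdin.buffer.read()
# Stand-in for a risky image decoder: accept only a valid PNG signature.
if data.startswith(b"\x89PNG\r\n\x1a\n"):
    print("decoded")
else:
    sys.exit(1)
"""

def parse_in_sandbox(payload: bytes, timeout: float = 5.0) -> bool:
    """Run the decoder in a child process; a failure there never propagates."""
    proc = subprocess.run(
        [sys.executable, "-c", WORKER],
        input=payload,
        capture_output=True,
        timeout=timeout,
    )
    return proc.returncode == 0

safe = parse_in_sandbox(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8)   # True
unsafe = parse_in_sandbox(b"definitely not an image")         # False
```

The key design choice is that the parent never parses the untrusted bytes itself; it only inspects the child's exit status, which is exactly the boundary a sandbox is meant to enforce.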
Matter of Choice
Dr. Seungwon Shin, EVP & Head of Security Team, Mobile eXperience Business at Samsung Electronics, emphasizes the significance of user safety. He stated “At Samsung, we constantly strive to keep our users safe from security attacks, and with the introduction of Auto Blocker, users can continue to enjoy the benefits of our open ecosystem, knowing that their mobile experience is secured.”
Auto Blocker is a matter of choice. It is not a cookie-cutter solution; instead, its USP is the ability to customize the security measures of your device. Auto Blocker can be accessed through the device's settings and is activated via a toggle.
Your Personal Digital Armor
One of Auto Blocker's salient features is its ability to prevent apps from being installed from unknown sources, a practice known as sideloading. While sideloading provides greater control and better customization, it also exposes users to potential threats, such as malicious file downloads. Auto Blocker's proactive approach disables sideloading by default, serving as an extra line of defense, especially against social engineering attacks such as voice phishing (vishing). Auto Blocker also includes an essential tool called 'Message Guard', engineered to combat zero-click attacks: sophisticated attacks that execute merely when a message containing a malicious image is viewed, with no user interaction required.
Auto Blocker also offers a wide variety of new controls to enhance the device's safety, including security scans to detect malware. Additionally, it prevents the installation of malware via USB cable, ensuring the device's security even when someone gains physical access to it, such as when it is being charged in a public place.
Raising the Bar for Cyber Security
Auto Blocker testifies to Samsung's unwavering commitment to the safety and privacy of its users. It acts as an essential part of Samsung's suite of security and privacy innovations, improving the overall mobile experience within the Galaxy ecosystem. It provides a safer mobile experience while allowing users superior control over their device's protection. In comparison, Apple offers a more standardized approach to privacy and security, with an emphasis on user-friendly design and a closed ecosystem. Samsung disables sideloading by default to combat threats, while Apple is more flexible in this regard on macOS.
In this dynamic digital space, Auto Blocker offers a tool to maintain cyber peace and resilience. It protects against a broad spectrum of digital hostilities while allowing us to embrace the new digital ecosystem crafted by Galaxy. It is a security feature that puts you in control, allowing you to determine how you fortify your digital fort against digital specters like zero-click exploits, voice phishing (vishing), and malware downloads.
Samsung's new product emerges as sturdy armor shielding users against cyber hostilities. With its customizable security features within the Galaxy ecosystem, it allows users to exercise greater control over their digital space, promoting a more secure and peaceful cyberspace.
Reference:
HT News Desk. (2023, November 1). Samsung unveils new Auto Blocker feature to protect devices. How does it work? Hindustan Times. https://www.hindustantimes.com/technology/samsung-unveils-new-auto-blocker-feature-to-protect-devices-how-does-it-work-101698805574773.html
Executive Summary:
Our team recently came across a post on X (formerly Twitter) in which a photo was widely shared with misleading captions claiming it showed a Hindu priest performing a Vedic prayer at the White House after the recent US elections. After investigating, we found that it actually shows a ritual performed by a Hindu priest at a private event at the White House in 2020, held to pray for an end to the COVID-19 pandemic. Always verify claims before sharing.

Claim:
An image circulating after Donald Trump’s win in the US election shows Pujari Harish Brahmbhatt at the White House recently.

Fact Check:
Our analysis found that the image is from an old post uploaded in May 2020. Through a reverse image search, we traced it to the sacred Vedic Shanti Path, or peace prayer, recited by a Hindu priest in the Rose Garden of the White House on the occasion of the National Day of Prayer Service. Alongside other religious leaders, he prayed for the health, safety, and well-being of everyone affected by the coronavirus pandemic during those difficult days, and for an end to COVID-19.

Conclusion:
The viral claim that a Hindu priest recently performed a Vedic prayer at the White House following Donald Trump's election win isn't true. The photo is actually from a private event in 2020 and is being shared with misleading context.
Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
- Claim: Hindu priest held a Vedic prayer at the White House under Trump
- Claimed On: Instagram and X (Formerly Known as Twitter)
- Fact Check: False and Misleading

Artificial Intelligence (AI) provides a wide range of services and continues to attract intrigue and experimentation. It has altered how we create and consume content: specific prompts can now be used to create desired images, enhancing storytelling and even education. However, as this content can influence public perception, its potential to cause misinformation must be noted as well. The realistic nature of these images can make it hard for the untrained eye to discern that they are artificially generated. Because AI operates by analysing the data it was trained on, a lack of contextual knowledge and human biases (while framing prompts) also come into play. The stakes are higher when dabbling with subjects such as history, as there is a fine line between creating content merely for entertainment and spreading misinformation when biases and veracity are left unchecked. For instance, an AI-generated image of London during the Black Death might include inaccurate details, misleading viewers about the past.
The Rise of AI-Generated Historical Images as Entertainment
Recently, AI-generated images and videos of various historical events, rendered from the point of view of the people present, have been circulating all over the internet. Some of them depict the streets of London during the Black Death in 1300s England, the eruption of Mount Vesuvius at Pompeii, and more. Hogne and Dan, two creators who operate the accounts POV Lab and Time Traveller POV on TikTok, state that they create such videos because seeing the past through a first-person perspective is an interesting way to bring history back to life while highlighting its most striking parts, helping the audience learn something new. Mostly sensationalised for visual impact and storytelling, such content has been called out by historians for inconsistencies with details particular to the period. For now, the artists admit their creations are inaccurate, describing them as artistic interpretations rather than fact-checked documentaries.
It is important to note that AI models may inaccurately depict objects (issues with lateral inversion), people (anatomical implausibilities), or scenes due to "present-ist" bias. As noted by Lauren Tilton, an associate professor of digital humanities at the University of Richmond, many AI models rely primarily on data from the last 15 years, making them prone to modern-day distortions, especially when analysing and creating historical content. The idea is to spark interest rather than replace genuine historical facts, and engagement with these images and videos is presumably driven in part by fascination with emerging AI tools. Apart from this, there are also chatbots like Hello History and Character.ai, which enable simulated interactions with historical figures and have piqued curiosity.
Although it makes for an interesting perspective, one cannot ignore that our inherent biases play a role in how we perceive the information presented. Dangerous consequences include feeding into conspiracy theories and the erasure of facts as information is geared particularly toward garnering attention and providing entertainment. Furthermore, exposure of such content to an impressionable audience with a lesser attention span increases the gravity of the matter. In such cases, information regarding the sources used for creation becomes an important factor.
Acknowledging the risks posed by AI-generated images and their susceptibility to creating misinformation, the Government of Spain has taken a step toward regulating AI-generated content. It has passed a bill that mandates the labelling of AI-generated images; failure to comply would warrant massive fines (up to $38 million or 7% of a company's global turnover). The idea is to ensure that content creators label their content, which would help viewers distinguish artificially created images from genuine ones.
The Way Forward: Navigating AI and Misinformation
While AI-generated images make for exciting possibilities for storytelling and enabling intrigue, their potential to spread misinformation should not be overlooked. To address these challenges, certain measures should be encouraged.
- Media Literacy and Awareness – In this day and age, critical thinking and media literacy among consumers of content are imperative. Awareness, understanding, and access to tools that aid in detecting AI-generated content can prove helpful.
- AI Transparency and Labeling – Implementing regulations similar to Spain's labelling bill could guide people who have yet to learn to tell AI-generated content apart from the rest.
- Ethical AI Development – AI developers must prioritise ethical considerations in training, using diverse and historically accurate datasets and sources to minimise biases.
As AI continues to evolve, balancing innovation with responsibility is essential. By taking proactive measures at this early stage, we can harness AI's potential while safeguarding the integrity of, and trust in, the sources used when generating images.
References:
- https://www.npr.org/2023/06/07/1180768459/how-to-identify-ai-generated-deepfake-images
- https://www.nbcnews.com/tech/tech-news/ai-image-misinformation-surged-google-research-finds-rcna154333
- https://www.bbc.com/news/articles/cy87076pdw3o
- https://newskarnataka.com/technology/government-releases-guide-to-help-citizens-identify-ai-generated-images/21052024/
- https://www.technologyreview.com/2023/04/11/1071104/ai-helping-historians-analyze-past/
- https://www.psypost.org/ai-models-struggle-with-expert-level-global-history-knowledge/
- https://www.youtube.com/watch?v=M65IYIWlqes&t=2597s
- https://www.vice.com/en/article/people-are-creating-records-of-fake-historical-events-using-ai/
- https://www.reuters.com/technology/artificial-intelligence/spain-impose-massive-fines-not-labelling-ai-generated-content-2025-03-11/
- https://www.theguardian.com/film/2024/sep/13/documentary-ai-guidelines