#FactCheck - Misleading Video of Dubai Airport Attack Circulates Online, Found AI-Generated
Executive Summary
Amid rising tensions in the Middle East following attacks on Iran by the United States and Israel, a video is being shared on social media claiming to show a recent attack at Dubai International Airport. Research by CyberPeace found the claim to be false: the viral video is not real but was created using artificial intelligence technology.
Claim:
An Instagram user shared the viral video on March 1, 2026, claiming it shows an attack at Dubai Airport. The link to the post, the archive link, and a screenshot are provided below.

Fact Check:
To verify the viral claim, we searched Google using relevant keywords. However, we did not find any credible media report confirming the claim. On closely examining the viral video, we noticed several unusual visuals and technical inconsistencies, raising suspicion that it might be AI-generated. To verify this, we scanned the video using the AI detection tool Sightengine. According to the results, around 74 percent of the video is likely AI-generated.
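Sightengine's internal scoring is proprietary, and a published figure like "around 74 percent" may be an overall probability rather than a share of frames. Purely as an illustration of how a frame-level detector's outputs can be rolled up into a single whole-video percentage, here is a hypothetical aggregation sketch; the function name and threshold are our assumptions, not the tool's actual method:

```python
# Hypothetical illustration: aggregate per-frame AI-likelihood scores
# into one whole-video figure. This is NOT Sightengine's actual method.

def share_of_ai_frames(frame_scores, threshold=0.5):
    """Fraction of sampled frames whose AI-likelihood meets `threshold`."""
    if not frame_scores:
        raise ValueError("no frame scores supplied")
    flagged = sum(1 for s in frame_scores if s >= threshold)
    return flagged / len(frame_scores)

# Example: 100 sampled frames, 74 of them scoring above the threshold
scores = [0.9] * 74 + [0.1] * 26
print(round(share_of_ai_frames(scores) * 100))  # 74
```

In practice a detection service would also report per-frame confidence and model metadata; the point here is only that frame-level scores and a whole-video percentage are related by a simple aggregation step.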

Conclusion:
Our research found that the viral video is not real but has been created using artificial intelligence technology.
Related Blogs

Executive Summary
A video of senior Congress leader Shashi Tharoor is widely circulating on social media, allegedly showing him praising Pakistan’s diplomatic stance over the ICC T20 World Cup issue. Many users are sharing the clip believing it to be genuine. However, research by CyberPeace found the claim to be false. The viral video of Tharoor is a deepfake, and the Congress leader himself has described it as fabricated and fake.
Claim
A Facebook page named “Vok Sports” shared the video on February 11, 2026, claiming that Tharoor praised Pakistan. In the viral clip, he is purportedly heard saying in English that Pakistan’s diplomatic handling of the matter was “brilliant” and that it had outmanoeuvred the Indian cricket board, adding that good diplomacy could make a weak nation appear powerful.
The video was widely shared by social media users as authentic. (Archive links and post details provided.)
Fact Check
To verify the claim, we first scanned Tharoor’s official X (formerly Twitter) handle. We found a post dated February 12 in which he responded to a Pakistani journalist who had shared the video. Tharoor stated that the clip was AI-generated “fake news,” adding that neither the language nor the voice in the video was his.

A reverse image search using Google Lens led us to a video uploaded on February 10, 2026, by India Today on its official YouTube channel. The visuals in this original video exactly matched those seen in the viral clip showing Tharoor speaking to the media. However, on analysing the original footage, we found that Tharoor was speaking in Hindi about the controversy surrounding the T20 World Cup. He stated that politics should not be mixed with cricket or sports, and at no point did he praise Pakistan or the Pakistan Cricket Board. This indicates that the audio in the viral clip had been manipulated and replaced. In the original video, Tharoor said that politicians should conduct politics, diplomats should handle diplomacy, and cricket players should focus on the game, expressing hope that cricket would move forward with the match.
- https://www.youtube.com/watch?v=GkA1mLlAT8Q&t=3s
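The frame matching described above was done manually via Google Lens, but the same idea can be sketched programmatically with perceptual hashing: visually identical frames yield near-identical hashes even after lossy re-encoding, while unrelated frames differ in many bits. The following is an illustrative average-hash (aHash) sketch over stand-in 8x8 grayscale grids, not our actual workflow; a real pipeline would decode and downscale frames with an image library first:

```python
# Minimal average-hash (aHash) sketch for comparing video frames.
# Frames here are stand-in 64-pixel grayscale grids (ints 0-255);
# real pipelines decode and downscale frames with an image library.

def average_hash(pixels):
    """Return a 64-bit perceptual hash: each bit is 1 if the pixel
    is at or above the mean brightness of the frame."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; small distances mean near-identical frames."""
    return bin(h1 ^ h2).count("1")

original  = [200] * 32 + [50] * 32   # bright top half, dark bottom half
reencoded = [198] * 32 + [52] * 32   # same frame after lossy re-encoding
unrelated = [50, 200] * 32           # alternating pattern, different frame

assert hamming_distance(average_hash(original), average_hash(reencoded)) == 0
assert hamming_distance(average_hash(original), average_hash(unrelated)) > 10
```

Because the hash thresholds on the frame's own mean brightness, small uniform shifts from compression do not flip bits, which is why a re-encoded viral clip can still be matched against the original broadcast footage.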

To further verify the authenticity of the video, several AI detection tools were used. Analysis through Aurigin.ai suggested a 78 percent probability that the audio in the viral clip was AI-generated.

Conclusion
CyberPeace confirmed that the viral video is a deepfake. Tharoor did not praise Pakistan’s diplomatic stance during the T20 World Cup controversy, and the circulating clip has been digitally manipulated.

Introduction
Misinformation is no longer a challenge limited to major global platforms or widely spoken languages. In India and many other countries, false information is increasingly disseminated in local and vernacular languages, allowing it to reach communities more directly and intimately. While regional-language content has played a crucial role in expanding access to information, it has also become a powerful channel for misinformation spread by bad actors, one that is often harder to detect and counter. The challenge of local-language misinformation is not merely digital; it is deeply social, cultural, and shaped by specific local contexts.
Why Local-Language Misinformation Is More Impactful
A person’s mother tongue can be a highly effective medium for misinformation because it carries emotional resonance and a sense of authenticity. Information that aligns with an individual’s linguistic and cultural background is often trusted the most. When false narratives are framed using familiar expressions, local references, or community-specific concerns, they are more readily accepted and shared more widely.
Misinformation in a language like English, which is more heavily moderated, does not usually have the same impact as content in vernacular languages. In the latter case, such content tends to circulate within closed networks such as family WhatsApp groups, regional Facebook pages, local YouTube channels, and community forums. These spaces are often perceived as safe or trusted, which lowers scepticism and encourages the spread of unverified information.
The Role of Digital Platforms and Algorithms
Although social media platforms have expanded access to regional-language content, their moderation mechanisms have not kept pace. Automated content-moderation systems are frequently trained mainly on dominant languages, and therefore fail to detect vernacular speech, slang, dialects, and code-mixing.
This results in an enforcement gap, where misinformation in local languages:
- Often escapes automated fact-checking tools
- Is reviewed by human moderators more slowly
- Is less likely to be reported or flagged
- Circulates unchecked for longer than anticipated
The problem is further magnified by algorithmic amplification. Content that triggers strong emotional reactions, such as fear, anger, pride, or outrage, has a higher chance of being promoted, irrespective of its truthfulness. In regional contexts, such content can quickly sway public opinion, even within closely knit communities.
Forms of Vernacular Misinformation
Local-language misinformation appears in various forms:
- Health misinformation, such as bogus remedies, vaccine myths, and misleading medical advice
- Political misinformation, often tied to regional identity, local grievances, or community narratives
- Disaster rumours, which are hard to contain and can stoke panic or hatred during floods, earthquakes, or other public emergencies
- Economic and financial frauds carried out in the local dialect while impersonating authorities or trusted institutions
- Cultural and religious falsehoods that exploit deeply held beliefs
The regional character of such misinformation makes it difficult to correct, because fact-checks published in other languages may never reach the affected audience.
Community-Level Consequences
The impact of local-language misinformation goes beyond misleading individuals. It can also:
- Erode trust in public institutions
- Deepen social polarisation and communal strife
- Undermine public health measures
- Influence electoral decision-making at the grassroots level
- Exploit digitally less-literate and economically vulnerable people
In many cases, the damage is not immediate but cumulative, gradually shifting perceptions and entrenching false worldviews.
Why Countering Vernacular Misinformation Is Difficult
Multiple structural layers make it difficult to respond effectively:
- Variety of Languages: India alone has hundreds of languages and dialects, making universal monitoring extremely difficult.
- Culturally Embedded Meaning: Local languages often carry culturally rooted meanings, such as sarcasm or historical references, that automated systems cannot interpret correctly.
- Infrequent Reporting: Users may not recognise misinformation, or may be reluctant to report content shared by trusted members of their community.
- Insufficient Fact-Checking Capacity: Fact-checking organisations often lack the resources to operate effectively across many languages.
Building a Community-Centric Response
Countering misinformation in local languages requires a community-driven resilience approach rather than a purely platform-centric one. Key actions include:
- Boosting Digital Literacy: Regional-language awareness campaigns can equip users to question, verify, and pause before sharing content.
- Facilitating Local Fact-Checkers: Local journalists, educators, and NGOs are best placed to provide context-aware verification.
- Platform Accountability: Technology companies need to invest in multilingual moderation, hire local-language experts, and implement transparent enforcement mechanisms.
- Policy and Governance: Regulatory frameworks should enable proactive risk assessment while safeguarding the right to free expression.
- Trusted Local Intermediaries: Community leaders, health workers, teachers, and local organisations can counter misinformation within the networks where they are trusted.
The Way Forward
Misinformation in local languages is not a minor concern; it is an issue that directly affects the future of digital trust. As the number of users accessing the internet through local language interfaces continues to grow, the volume and influence of regional content will also increase. If measures do not include all language groups, misinformation will remain least corrected and most influential at the community level, where it is also the hardest to identify and address.
This problem persists largely because the power of language goes unrecognised. Protecting the quality of information in local languages is therefore essential, not only for digital safety but also for social cohesion, democratic participation, and public well-being.
Conclusion
Vernacular content has the power to inform, include, and empower; left unmonitored, it has equal power to mislead, divide, and harm. Countering mis- and disinformation in local languages calls for the cooperation of platforms, regulators, NGOs, and the communities involved. A resilient digital ecosystem has to speak all languages, not only for communication but also for protection.

Executive Summary
A video of Dr. Samir V. Kamat, Chairman of the Defence Research and Development Organisation (DRDO), is going viral on social media. In the clip, he appears to claim that Prime Minister Narendra Modi instructed scientists to wash the Agni-6 missile with cow urine, and later use a mixture of cow dung and urine to prevent rusting. Research by the CyberPeace Research Wing found that the video is a deepfake, created by manipulating original footage using AI tools. It was also shared by an account previously known for posting anti-India misinformation, one that is reportedly banned in India.
Claim
An X user named “Lovely” shared the video on May 1, 2026, alleging that Indian scientists were using cow urine and dung in missile development under government direction. The post used derogatory language and criticized India’s scientific community.

Fact Check
To verify the claim, we searched relevant keywords on Google but found no credible media reports supporting such statements by the DRDO chief. We then extracted keyframes from the viral clip and conducted a reverse image search using Google Lens. This led us to the original video posted by ANI on April 30, 2026. The footage is from the National Security Summit 2.0, where Dr. Kamat spoke about India’s missile development programs.
In the authentic video, Dr. Kamat discusses short-range ballistic missiles like ‘Pralay’, and advancements in hypersonic glide and cruise missile technologies, including scramjet propulsion. There is no mention of cow urine, cow dung, or any such practices.

Further analysis using the AI detection tool Aurigin indicated an 88 percent probability that the viral video was AI-generated or manipulated.

Conclusion
Our research confirms that the viral video is fake and AI-manipulated. Dr. Samir V. Kamat never made any statement about washing missiles with cow urine. The clip is a deepfake created to spread misinformation and mislead viewers.