#FactCheck - Viral Video Claiming Iran’s Attack on US Airbase Debunked as 9/11 Footage
Executive Summary
A video showing thick smoke rising from a building and people running in panic is being shared on social media with the claim that it shows Iran launching a missile attack on the United States. CyberPeace’s research found the claim to be misleading. Our probe revealed that the video is not related to any recent incident: the viral clip is actually from the September 11, 2001 terrorist attacks on the World Trade Center in the United States and is being falsely shared as footage of an alleged Iranian missile strike on the US.
Claim:
An Instagram user shared the video claiming, “Iran has attacked a US airbase in Qatar. Iran has fired six ballistic missiles at the Al Udeid Airbase in Qatar. Al Udeid Airbase is the largest US military base in West Asia.”
Links to the post and its archived version are provided below.

Fact Check:
To verify the claim, we extracted key frames from the viral video and ran a reverse image search using Google Lens. During the search, we found visuals matching the viral clip in a report published by Wion on September 11, 2021. The report, titled “In pics | A look back at the scenes from the 9/11 attacks,” included an image that closely resembled the visuals seen in the viral video. The caption of the image stated that it was a file photo from September 11, 2001, showing pedestrians running as one of the World Trade Center towers collapsed in New York City.
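The workflow described above, sampling frames from a clip and then matching them against known imagery, can be sketched in a few lines. The helper names below are illustrative and not CyberPeace’s actual tooling; in practice a service like Google Lens performs the matching, but a simple perceptual hash (aHash) shows the underlying idea of flagging near-duplicate frames:

```python
from typing import List

def sample_frame_indices(total_frames: int, fps: float,
                         every_seconds: float = 1.0) -> List[int]:
    # Pick one frame per `every_seconds` of video; these are the
    # key frames one would feed into a reverse image search.
    step = max(1, round(fps * every_seconds))
    return list(range(0, total_frames, step))

def average_hash(gray8x8: List[List[int]]) -> int:
    # Classic aHash: threshold each pixel of an 8x8 grayscale
    # thumbnail against the mean, packing the bits into one int.
    flat = [p for row in gray8x8 for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits; a small distance suggests the two
    # frames are near-duplicates of each other.
    return bin(a ^ b).count("1")
```

For a 30 fps clip, `sample_frame_indices(300, 30.0)` yields every 30th frame; hashing each sampled frame and comparing Hamming distances against hashes of archival footage is the same matching principle a reverse image search applies at scale.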

Further research led us to the same footage on the YouTube channel CBS 8 San Diego. At the 01:11 timestamp of the video, visuals matching the viral clip can be clearly seen.

We also found an Al Jazeera report dated June 23, 2025, which confirmed that Iran had attacked US forces stationed at the Al Udeid airbase in Qatar in retaliation for US strikes on Iran’s nuclear facilities. However, the visuals used in the viral video do not correspond to this incident.

Conclusion
The viral video does not show a recent Iranian attack on a US airbase in Qatar. The clip actually dates back to the September 11, 2001 terrorist attacks on the World Trade Center in the United States. Old 9/11 footage has been falsely shared with a misleading claim linking it to Iran’s alleged missile strike on the US.
Related Blogs
Introduction
The spread of misinformation online has become a significant concern, with far-reaching social, political, economic and personal implications. Vulnerability to misinformation differs from person to person, depending on psychological elements such as personality traits, familial background and digital literacy, combined with contextual factors like information source, repetition, emotional content and topic. How to reduce misinformation susceptibility in real-world environments, where misinformation is regularly consumed on social media, remains an open question. Inoculation theory has been proposed as a way to reduce susceptibility to misinformation by informing people about how they might be misinformed. Psychological inoculation campaigns on social media have been shown to improve misinformation resilience at scale.
Prebunking has gained prominence as a means to preemptively build resilience against anticipated exposure to misinformation. This approach, grounded in Inoculation Theory, helps people build generalised resilience so that they can analyse and avoid manipulation without prior knowledge of the specific misleading content. We may draw a parallel here with broad-spectrum antibiotics, which can fight infection and protect the body before the particular pathogen at play has been identified.
Inoculation Theory and Prebunking
Inoculation theory is a promising approach to combat misinformation in the digital age. It involves exposing individuals to weakened forms of misinformation before encountering the actual false information. This helps develop resistance and critical thinking skills to identify and counter deceptive content.
Inoculation theory has been established as a robust framework for countering unwanted persuasion and can be applied within the modern context of online misinformation:
- Preemptive Inoculation: Preemptive inoculation entails exposing people to weakened forms of misinformation before they encounter genuinely erroneous information. By being exposed to typical misinformation methods and strategies, individuals can build resistance and critical thinking abilities.
- Technique/Logic-Based Inoculation: Individuals can educate themselves about the manipulative strategies commonly used in online misinformation, such as emotionally manipulative language, conspiratorial reasoning, trolling and logical fallacies. Learning to recognise these tactics as indicators of misinformation is an important first step towards rejecting them: through logical reasoning, individuals can see such tactics for what they are, namely attempts to distort the facts or spread misleading information. Those equipped to discern weak arguments and misleading methods can properly evaluate the reliability and validity of the information they encounter on the Internet.
- Educational Campaigns: Educational initiatives that increase awareness about misinformation, its consequences, and the tactics used to manipulate information can be useful inoculation tools. These programmes equip individuals with the knowledge and resources they need to distinguish between reputable and fraudulent sources, allowing them to navigate the online information landscape more successfully.
- Interactive Games and Simulations: Online games and simulations, such as ‘Bad News,’ have been created as interactive aids to protect people from misinformation methods. These games immerse users in a virtual world where they may learn about the creation and spread of misinformation, increasing their awareness and critical thinking abilities.
- Joint Efforts: Combining inoculation tactics with other anti-misinformation initiatives, such as accuracy primes, building resilience on social media platforms, and media literacy programmes, can improve the overall efficacy of our attempts to combat misinformation. Expert organisations and people can build a stronger defence against the spread of misleading information by using many actions at the same time.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
Implementation of the Inoculation Theory on social media platforms can be seen as an effective strategy point for building resilience among users and combating misinformation. Tech/social media platforms can develop interactive and engaging content in the form of educational prebunking videos, short animations, infographics, tip sheets, and misinformation simulations. These techniques can be deployed through online games, collaborations with influencers and trusted sources that help design and deploy targeted campaigns whilst also educating netizens about the usefulness of Inoculation Theory so that they can practice critical thinking.
The approach will inspire self-monitoring amongst netizens so that people consume information mindfully. It is a powerful tool in the battle against misinformation because it not only seeks to prevent harm before it occurs, but also actively empowers the target audience. In other words, Inoculation Theory helps build people up, and takes them on a journey of transformation from ‘potential victim’ to ‘warrior’ in the battle against misinformation. Through awareness-building, this approach makes people more aware of their own vulnerabilities and attempts to exploit them so that they can be on the lookout while they read, watch, share and believe the content they receive online.
Widespread adoption of Inoculation Theory may well inspire systemic and technological change that goes beyond individual empowerment: these interventions on social media platforms can be utilized to advance digital tools and algorithms so that such interventions and their impact are amplified. Additionally, social media platforms can explore personalized inoculation strategies, and customized inoculation approaches for different audiences so as to be able to better serve more people. One such elegant solution for social media platforms can be to develop a dedicated prebunking strategy that identifies and targets specific themes and topics that could be potential vectors for misinformation and disinformation. This will come in handy, especially during sensitive and special times such as the ongoing elections where tools and strategies for ‘Election Prebunks’ could be transformational.
Conclusion
Applying Inoculation Theory in the modern context of misinformation can be an effective method of establishing resilience against misinformation, developing critical thinking and empowering individuals to discern fact from fiction in the digital information landscape. The need of the hour is to prioritize extensive awareness campaigns that encourage critical thinking, educate people about manipulation tactics, and pre-emptively counter false narratives. Inoculation strategies can help people build mental armour, or mental defences, against malicious content and malintent they may encounter in the future, by learning about it in advance. As they say, forewarned is forearmed.
References
- https://www.science.org/doi/10.1126/sciadv.abo6254
- https://stratcomcoe.org/publications/download/Inoculation-theory-and-Misinformation-FINAL-digital-ISBN-ebbe8.pdf
Introduction
The rapid advancement of technology, including generative AI, offers immense benefits but also raises concerns about misuse. The Internet Watch Foundation reported that, as of July 2024, over 3,500 new AI-generated child sexual abuse images appeared on the dark web. The UK’s National Crime Agency records 800 monthly arrests for online child threats and estimates 840,000 adults as potential offenders. In response, the UK is introducing legislation to criminalise AI-generated child exploitation imagery, which will be a part of the Crime and Policing Bill when it comes to parliament in the next few weeks, aligning with global AI regulations like the EU AI Act and the US AI Initiative Act. This policy shift strengthens efforts to combat online child exploitation and sets a global precedent for responsible AI governance.
Current Legal Landscape and the Policy Gap
The UK’s Online Safety Act 2023 aims to combat CSAM and deepfake pornography by holding social media and search platforms accountable for user safety. It mandates these platforms to prevent children from accessing harmful content, remove illegal material, and offer clear reporting mechanisms. For adults, major platforms must be transparent about harmful content policies and provide users control over what they see.
However, the Act has notable limitations, including concerns over content moderation overreach, potential censorship of legitimate debates, and challenges in defining "harmful" content. It may disproportionately impact smaller platforms and raise concerns about protecting journalistic content and politically significant discussions. While intended to enhance online safety, these challenges highlight the complexities of balancing regulation with digital rights and free expression.
The Proposed Criminalisation of AI-Generated Sexual Abuse Content
The proposed UK law criminalises the creation, distribution, and possession of AI-generated CSAM and deepfake pornography. It mandates enforcement agencies and digital platforms to identify and remove such content, with penalties for non-compliance. Perpetrators may face up to two years in prison for taking intimate images without consent or installing equipment to facilitate such offences. Currently, sharing or threatening to share intimate images, including deepfakes, is an offence under the Sexual Offences Act 2003, as amended by the Online Safety Act 2023. The government plans to repeal certain voyeurism offences, replacing them with broader provisions covering unauthorised intimate recordings. This aligns with its September 2024 decision to classify sharing intimate images as a priority offence under the Online Safety Act, reinforcing its commitment to balancing free expression with harm prevention.
Implications for AI Regulation and Platform Responsibility
The UK's move aligns with its AI Safety Summit commitments, placing responsibility on platforms to remove AI-generated sexual abuse content or face Ofcom enforcement. The Crime and Policing Bill is expected to tighten AI regulations, requiring developers to integrate safeguards against misuse, and the licensing frameworks may enforce ethical AI standards, restricting access to synthetic media tools. Given AI-generated abuse's cross-border nature, enforcement will necessitate global cooperation with platforms, law enforcement, and regulators. Bilateral and multilateral agreements could help harmonise legal frameworks, enabling swift content takedown, evidence sharing, and extradition of offenders, strengthening international efforts against AI-enabled exploitation.
Conclusion and Policy Recommendations
The Crime and Policing Bill marks a crucial step in criminalising AI-generated CSAM and deepfake pornography, strengthening online safety and platform accountability. However, balancing digital rights and enforcement remains a challenge. For effective implementation, industry cooperation is essential, with platforms integrating detection tools and transparent reporting systems. AI ethics frameworks should prevent misuse while allowing innovation, and victim support mechanisms must be prioritised. Given AI-driven abuse's global nature, international regulatory alignment is key for harmonised laws, evidence sharing, and cross-border enforcement. This legislation sets a global precedent, emphasising proactive regulation to ensure digital safety, ethical AI development, and the protection of human dignity.
References
- https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
- https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/
- https://www.financialexpress.com/life/technology-uk-set-to-ban-ai-tools-for-creating-child-sexual-abuse-images-with-new-laws-3735296/
- https://www.gov.uk/government/publications/national-crime-agency-annual-report-and-accounts-2023-to-2024/national-crime-agency-annual-report-and-accounts-2023-to-2024-accessible#part-1--performance-report

Introduction
The geographical world has physical boundaries, but the digital one has a different architecture, and institutions are underprepared when it comes to addressing cybersecurity breaches. Cybercrime, which may lead to economic losses, privacy violations, national security threats and psycho-social consequences, is forecast to increase continuously between 2024 and 2029, reaching an estimated cost of at least 6.4 trillion U.S. dollars (Statista). As cyber threats become persistent and ubiquitous, they are becoming a critical governance challenge. Lawmakers around the world need to collaborate on addressing this emerging issue.
Cybersecurity Governance and its Structural Elements
Cybersecurity governance refers to the strategies, policies, laws, and institutional frameworks that guide national and international preparedness and responses to cyber threats to governments, private entities, and individuals. Effective cybersecurity governance ensures that digital risks are managed proactively while balancing security with fundamental rights like privacy and internet freedom. It includes, but is not limited to:
- Policies and Legal Frameworks: Laws that define the scope of cybercrime, cybersecurity responsibilities, and mechanisms for data protection. E.g., India’s National Cybersecurity Policy (NCSP) of 2013, Information Technology Act, 2000, and Digital Personal Data Protection Act, 2023; the EU’s Cybersecurity Act (2019), Cyber Resilience Act (2024), Cyber Solidarity Act (2025), and NIS2 Directive (2022); South Africa’s Cyber Crimes Act (2021), etc.
- Regulatory Bodies: Government agencies such as data protection authorities, cybersecurity task forces, and other sector-specific bodies. E.g., India’s Computer Emergency Response Team (CERT-In) and Indian Cyber Crime Coordination Centre (I4C), the European Union Agency for Cybersecurity (ENISA), and others.
- Public-Private Knowledge Sharing: The sharing of the private sector’s expertise and the government’s resources plays a crucial role in improving enforcement and securing critical infrastructure. This model of collaboration is followed in the EU, Japan, Turkey, and the USA.
- Research and Development: Apart from the technical, the cyber domain also includes military, politics, economy, law, culture, society, and other elements. Robust, multi-sectoral research is necessary for formulating international and regional frameworks on cybersecurity.
Challenges to Cybersecurity Governance
Governments face several challenges in securing cyberspace and protecting critical assets and individuals, despite the growing focus on cybersecurity. This is because the focus so far has been on cybersecurity management, which, considering the scale of attacks in the recent past, is not enough. Stakeholders must start deliberating on the aspect of governance in cyberspace while ensuring that this process is multi-consultative (Savaş & Karataş 2022). Prominent challenges which need to be addressed are:
- Dynamic Threat Landscape: The threat landscape in cyberspace is ever-evolving. Bad actors are constantly coming up with new ways to carry out attacks, using elements of surprise, adaptability, and asymmetry aided by AI and quantum computing. While cybersecurity measures help mitigate risks and minimize damage, they can’t always provide definitive solutions. E.g., the pace of malware development is much faster than that of the legal norms, legislation, and security strategies meant to protect information technology (IT) (Efe & Bensghir 2019).
- Regulatory Fragmentation and Compliance Challenges: Different countries, industries, or jurisdictions may enforce varying or conflicting cybersecurity laws and standards, which are still evolving and require rapid upgrades. This makes it harder for businesses to comply with regulations, increases compliance costs, and jeopardizes the security posture of the organization.
- Trans-National Enforcement Challenges: Cybercriminals operate across jurisdictions, making threat intelligence collection, incident response, evidence-gathering, and prosecution difficult. Without cross-border agreements between law enforcement agencies and standardized compliance frameworks for organizations, bad actors have an advantage in getting away with attacks.
- Balancing Security with Digital Rights: Striking a balance between cybersecurity laws and privacy concerns (e.g., surveillance laws vs. data protection) remains a profound challenge, especially in areas of CSAM prevention and identifying terrorist activities. Without a system of checks and balances, it is difficult to prevent government overreach into domains like journalism, which are necessary for a healthy democracy, and Big Tech’s invasion of user privacy.
The Road Ahead: Strengthening Cybersecurity Governance
All domains of human life (economy, culture, politics, and society) now occur in digital and cyber environments. It follows naturally that governance in the physical world translates into governance in cyberspace, which must be underpinned by the principles of openness, transparency, participation, and accountability, while also protecting human rights. Cyberspace is stateless, and its threats evolve rapidly with innovations in modern computing. Thus, cybersecurity governance requires a global, multi-sectoral approach that draws on the rules of international law to chart out problems and solutions and to carry out detailed risk analyses (Savaş & Karataş 2022).
References
- https://www.statista.com/forecasts/1280009/cost-cybercrime-worldwide#statisticContainer
- https://link.springer.com/article/10.1365/s43439-021-00045-4#citeas
- https://digital-strategy.ec.europa.eu/en/policies/cybersecurity-policies#ecl-inpage-cybersecurity-strategy