# Fact Check: Viral Video Falsely Claims Israel Is Preparing a Nuclear Attack on Iran
Executive Summary:
A viral video circulating on social media falsely suggests that it shows Israel moving nuclear weapons in preparation for an attack on Iran. Detailed research has established that the footage actually shows a SpaceX Starship rocket (Starship 36) being towed for a pre-planned test in Texas, USA. The video provides no evidence to back up the claim of an Israeli military action or a nuclear missile.

Claim:
Multiple social media posts shared a video clip of what appears to be a large, missile-like object being towed to an unknown location by a very large vehicle, claiming that it shows Israel preparing a nuclear attack on Iran.
The caption of the video read: "Israel is going to launch a nuclear attack on Iran! #Israel". The viral post received a great deal of engagement, helping to spread misinformation and unfounded fear about the escalating conflict in the Middle East.

Fact check:
A reverse image search using key frames from the viral footage led us to a Facebook post dated June 16, 2025.
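For readers who want to replicate this kind of verification, below is a minimal Python sketch, assuming OpenCV is installed and the clip has been saved locally, that extracts evenly spaced frames from a video so they can be uploaded to a reverse image search engine. The file name `viral_clip.mp4`, the output folder and the frame count are illustrative assumptions, not part of our actual workflow.

```python
# Minimal sketch: extract evenly spaced frames from a video for reverse image search.
# Assumptions: "viral_clip.mp4" is a locally saved copy of the clip and
# OpenCV (pip install opencv-python) is available.
import os
import cv2

def extract_key_frames(video_path: str, out_dir: str, num_frames: int = 5) -> list[str]:
    """Save `num_frames` evenly spaced frames from `video_path` into `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = []
    for i in range(num_frames):
        # Pick frame indices spread evenly across the clip.
        frame_index = int(i * max(total - 1, 0) / max(num_frames - 1, 1))
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
        ok, frame = cap.read()
        if not ok:
            continue
        path = os.path.join(out_dir, f"frame_{frame_index}.jpg")
        cv2.imwrite(path, frame)
        saved.append(path)
    cap.release()
    return saved

if __name__ == "__main__":
    frames = extract_key_frames("viral_clip.mp4", "key_frames")
    print("Frames ready for reverse image search:", frames)
```

Each saved frame can then be uploaded manually to a reverse image search service such as Google Lens, TinEye or Yandex Images.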

A YouTube livestream from NASASpaceflight, dated June 15, 2025, shows the same scene. Both sources clearly identify the object as SpaceX's Starship 36, which was being towed at SpaceX's Texas facility ahead of a static fire test, as part of preparations for the rocket's tenth test flight. The video shows no military ordnance, no military personnel, and no markings connecting it to Israel or Iran.
Our conclusion is further supported by several SPACE.com articles, which reported that the Starship exploded shortly thereafter during subsequent testing.



Additionally, no reputable media outlet or defence agency has reported any Israeli nuclear mobilization. The visual resemblance between a large rocket and a missile likely added to the confusion. A video describing the difference is included below, but its context and upload location have no relation to the State of Israel or Iran.

Conclusion:
The viral claim that the video shows Israel preparing to launch a nuclear attack on Iran is false and misleading. The footage was filmed in Texas and shows the civilian transport of SpaceX's Starship 36. This case highlights how easily unrelated videos can be repurposed to create panic and spread misinformation. Before sharing claims like this, verify them using trusted websites and fact-checking tools.
- Claim: Viral video shows Israel preparing to launch a nuclear attack on Iran
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
The Central Electricity Authority (CEA) has released the Draft Central Electricity Authority (Cyber Security in Power Sector) Regulations, 2024, inviting comments from stakeholders, including the general public, to be submitted by 10 September 2024. The new regulations are intended to make India's power sector more cyber-resilient, better able to counter emerging cyber threats, and to safeguard the nation's power infrastructure.
Key Highlights of the CEA’s New (Cyber Security in Power Sector) Regulations, 2024
- The Central Electricity Authority has framed the ‘Cyber Security in Power Sector Regulations, 2024’ in exercise of the powers conferred by sub-section (1) of Section 177 of the Electricity Act, 2003, in order to make regulations for measures relating to cyber security in the power sector.
- The scope of the regulations entails that they will apply to all Responsible Entities, Regional Power Committees, Appropriate Commissions, Appropriate Governments and Associated Power Sector Government Organizations, Training Institutes recognized by the Authority, the Authority itself, and Vendors.
- One key aspect of the proposed regulation is the establishment of a dedicated Computer Security Incident Response Team (CSIRT) for the power sector. This team will coordinate a unified cyber defense strategy throughout the sector, establishing security frameworks, and serving as the main agency for handling incident response and recovery. The CSIRT will also be responsible for creating/developing Standard Operating Procedures (SOPs), security policies, and best practices for incident response activities in consultation with CERT-In and NCIIPC. The detailed roles and responsibilities of CSIRT are outlined under Chapter 2 of the said regulations.
- All responsible entities in the power sector, as mentioned under the scope of the regulations, are mandated to appoint a Chief Information Security Officer (CISO) and an alternate CISO, both of whom must be Indian nationals and senior management employees. The regulations specify that these officers must report directly to the CEO/Head of the Responsible Entity, emphasizing the critical nature of the CISO's role in safeguarding the nation's power grid assets.
- All Responsible Entities shall establish an Information Security Division (ISD) dedicated to ensuring cyber security, headed by the CISO and operational around the clock. The schedule under the regulations specifies that the minimum workforce required for setting up an ISD is four officers, including the CISO, plus four officers/officials for shift operations, and sufficient workforce and infrastructure support shall be ensured for the ISD. The detailed functions and responsibilities of the ISD are outlined under Chapter 5, Regulation 10. Furthermore, the ISD shall be staffed by a sufficient number of officers holding valid certificates of successful completion of domain-specific cyber security courses.
- The regulations oblige entities to have a defined, documented and maintained Cyber Security Policy approved by the Board or Head of the entity, as well as a Cyber Crisis Management Plan (CCMP) approved by higher management.
- As regards upskilling and empowerment, the regulations advocate organising periodic cyber security awareness programs and cyber security exercises, including mock drills and tabletop exercises.
CyberPeace Policy Outlook
The CyberPeace Policy & Advocacy Vertical has submitted its detailed recommendations on the proposed ‘Cyber Security in Power Sector Regulations, 2024’ to the Central Electricity Authority, Government of India. We have advised on various aspects of the regulations, including their harmonisation with existing rules issued by CERT-In and NCIIPC, as it needs to be clarified which set of guidelines will prevail in case of any discrepancy. Additionally, we advised incorporating or modifying specific provisions under the regulations for a more robust framework. We have also emphasised legal mandates and penalties for non-compliance with cyber security requirements, so that these regulations do not merely act as guiding principles but also provide stringent measures in case of non-compliance.
Introduction
The term ‘super spreader’ is used to refer to social media and digital platform accounts that can quickly transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals is able to rapidly amplify the spread of an infection across a huge population. The fact that a handful of accounts can impact and influence so many is attributed to a number of factors, such as large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to political influence, and from intentionally spreading propaganda to seeking financial gain. Given the exponential reach of these accounts, identifying, tracing and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread as the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary drivers of widespread misinformation across different topics. A study[1] by a team of social media analysts at Indiana University found that a significant portion of tweets spreading misinformation is sent by a small percentage of a given user base. The researchers collected ten months of data, amounting to 2,397,388 tweets flagged as having low credibility, sent by 448,103 users on Twitter (now X), and reviewed who was sending them. They found that approximately a third of the low-credibility tweets had been posted by just 10 accounts, and that just 1,000 accounts were responsible for approximately 70% of such tweets.[2] The study concludes that it does not take many influencers to sway the beliefs and opinions of large numbers of people, an effect the researchers attribute to these superspreaders.
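To illustrate the kind of concentration analysis the study describes, here is a hedged Python sketch that computes the share of low-credibility posts attributable to the top N most prolific accounts in a dataset. The CSV file and column names (`account_id`, `low_credibility`) are hypothetical; this is not the study's actual code or methodology.

```python
# Illustrative sketch (not the study's code): measure how concentrated
# low-credibility posting is among a small number of accounts.
# Assumes a hypothetical CSV with columns `account_id` and `low_credibility` (0/1).
import pandas as pd

def top_account_share(df: pd.DataFrame, top_n: int) -> float:
    """Fraction of low-credibility posts attributable to the `top_n` most prolific accounts."""
    low_cred = df[df["low_credibility"] == 1]
    counts = low_cred["account_id"].value_counts()
    return counts.head(top_n).sum() / counts.sum()

if __name__ == "__main__":
    posts = pd.read_csv("flagged_posts.csv")  # hypothetical export of flagged posts
    for n in (10, 1000):
        share = top_account_share(posts, n)
        print(f"Top {n} accounts produced {share:.0%} of low-credibility posts")
```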
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of misinformation related to the election, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation by experts surveyed for the World Economic Forum's Global Risks Report 2024. In today's digital age, misinformation, deepfakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important, because any grey areas or gaps in information can be manipulated very quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media platforms, including Twitter (now X). Some prominent accounts and popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate Inc (US), the "disinformation dozen," a group of 12 prominent anti-vaccine accounts[4], was found to be responsible for a large share of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly spread misinformation by forwarding content that they did not create; such content is often propagated by amplifiers drawing on other sources, websites, or YouTube videos that aid dissemination. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users are not necessarily the ones creating or deliberately popularising the misinformation, but because of their broad reach they expose many more people to it. This was observed during the pandemic, when a handful of people were able to create a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause volatility in markets, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos, and fake endorsements, with multiple social media profiles providing unsanctioned investing advice and directing followers to particular channels. This leads investors into dangerous financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to boost their reputation and can actually shape people’s financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can latch onto trending topics or hashtags to inject misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can contribute to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation within closed online groups. There are also reported incidents where bots were found to be the sharers of content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. Bots, by automating the distribution of misleading information, can make it impossible to trace the misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combatting disinformation and increasing digital literacy among social media users.
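As a rough illustration of the behavioural signals such detection can rely on, the sketch below scores an account on three simple heuristics: posting rate, the share of near-duplicate posts, and the follower-to-following ratio. The thresholds and data structure are assumptions for illustration only, not a production bot-detection system.

```python
# Toy sketch of heuristic bot scoring based on posting behaviour.
# Thresholds and fields are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    posts_per_day: float            # average posting rate
    duplicate_ratio: float          # share of posts that are near-duplicates (0..1)
    follower_following_ratio: float # followers divided by accounts followed

def bot_likeness_score(activity: AccountActivity) -> float:
    """Return a crude 0..1 score; higher means more bot-like behaviour."""
    score = 0.0
    if activity.posts_per_day > 50:               # unusually high posting rate
        score += 0.4
    if activity.duplicate_ratio > 0.6:            # mostly repetitive content
        score += 0.4
    if activity.follower_following_ratio < 0.01:  # follows many, followed by few
        score += 0.2
    return min(score, 1.0)

if __name__ == "__main__":
    suspicious = AccountActivity(posts_per_day=120, duplicate_ratio=0.8,
                                 follower_following_ratio=0.005)
    print("Bot-likeness score:", bot_likeness_score(suspicious))
```

Real platforms combine far richer signals (network structure, timing patterns, content embeddings), but the principle of scoring behavioural anomalies is the same.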
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media as it might attempt to provoke strong reactions or to mold public opinions. Netizens must consider questioning the credibility of information, verifying its sources, and developing cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to share only accurate information and to fact-check and debunk any misinformation they encounter. They can rely on reputable fact-checking experts/entities who regularly produce prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites and resources and to verify information before sharing it.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious social media accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, fake or deliberately deceptive. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other violations of community guidelines as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, detecting malevolent bots that spread misleading information, and implementing prebunking and debunking strategies. Social media sites can employ similar algorithms internally to remove accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable users to critically analyse information, verify sources, and report suspect content. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
References:
- [1] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201
- [2] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette
- [3] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette
- [4] https://counterhate.com/research/the-disinformation-dozen/
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html
Data has become a critical asset for the advancement of a nation’s economic, social, and technological development. India’s emergence as a global digital economy hub makes it necessary to create a robust framework that addresses the challenges and opportunities of digital transformation. The Indian government introduced the Draft National Data Governance Framework Policy in 2022, aiming to create a comprehensive data handling and governance framework. The draft policy addresses key challenges in data management, privacy, and digital economy growth. As per recent media reports, the Draft National Data Governance Policy is in the finalisation stage, as the government specified in its implementation document for the Budget 2023-24 announcement. The policy also aims to support the country's AI adoption and address the lack of datasets by providing widespread access to anonymized data.
Background and Need for the Policy
India has a robust digital economy, with the Digital India Initiative, Aadhaar digital identification, UPI for seamless payments and much more. As of January 2024, India had 751.5 million internet users and 462.0 million social media users, equivalent to 32.2% of its total population (DataReportal 2024). This growth has brought challenges, including data privacy concerns, cybersecurity threats, digital exclusion, and the need for better regulatory frameworks. To address them, the Draft National Data Governance Policy has been designed to provide institutional frameworks for data rules, standards, guidelines, and protocols for sharing non-personal data sets in a manner that ensures privacy, security, and trust, so that data remains secure, transparent, and accountable.
Objectives of the Framework
The objective of the Framework Policy is to accelerate digital governance in India. The framework will standardize data management and security standards across the Government. It will promote transparency, accountability, and ownership in non-personal data and dataset access, and build a platform to receive and process data requests. It will also set quality standards and promote the expansion of the datasets program and the overall non-personal data ecosystem. Further, it aims to advance India’s digital government goals and build capacity, knowledge, and competency in Government departments and entities. All this would be done while ensuring greater citizen awareness, participation, and engagement.
Key Provisions of the Draft Policy
The Draft Framework Policy aims to establish a cohesive digital governance ecosystem in India that balances the need for data utilization with protecting citizens' privacy rights. It sets up an institutional framework, the India Data Management Office (IDMO), under the Digital India Corporation (DIC), which will be responsible for developing rules, standards, and guidelines under this Policy.
The key provisions of the framework policy include:
- Promoting interoperability among government digital platforms, ensuring data privacy through data anonymization and security, and enhancing citizen access to government services through digital means.
- The policy emphasizes the creation of unified digital IDs, the standardisation of digital processes, and data-sharing guidelines across ministries to improve efficiency.
- It also focuses on building digital infrastructure, such as cloud services and data centres in order to support e-governance initiatives.
- Furthermore, it encourages public-private partnerships and sets guidelines for accountability and transparency in digital governance.
Implications and Concerns of the Framework
- The policy potentially impacts data sharing in India because it relies on data anonymization. The volume of data that would need to be anonymised in India is very large, which could become a significant implementation challenge (a minimal illustration of record-level anonymisation follows this list).
- Data localization and cross-border transfers have raised concerns among global tech companies and trade partners. They argue that such requirements could increase operational costs and hinder cross-border data flows. Striking a balance between protecting national interests and facilitating business operations remains a critical challenge.
- Another challenge associated with the policy is the over-centralization of data under the IDMO and the potential risk of government overreach in data access.
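As a minimal illustration of what record-level anonymisation involves (referenced in the first point above), the Python sketch below pseudonymises direct identifiers with a salted hash. The column names (`consumer_id`, `meter_no`) are hypothetical, and salted hashing alone amounts to pseudonymisation rather than full anonymisation; real-world compliance would also require techniques such as generalisation or k-anonymity checks.

```python
# Minimal sketch: pseudonymise direct identifiers with a salted SHA-256 hash.
# Column names ("consumer_id", "meter_no") are hypothetical; salted hashing is
# pseudonymisation, not full anonymisation, and is shown only for illustration.
import hashlib
import secrets
import pandas as pd

SALT = secrets.token_hex(16)  # keep secret and rotate per data release

def pseudonymise(value: str, salt: str = SALT) -> str:
    """Return a truncated salted hash of a direct identifier."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

def anonymise_dataset(df: pd.DataFrame, id_columns: list[str]) -> pd.DataFrame:
    """Replace each identifier column with its pseudonymised form."""
    out = df.copy()
    for col in id_columns:
        out[col] = out[col].astype(str).map(pseudonymise)
    return out

if __name__ == "__main__":
    records = pd.DataFrame({
        "consumer_id": ["C-1001", "C-1002"],
        "meter_no": ["M-77", "M-78"],
        "monthly_units": [240, 310],
    })
    print(anonymise_dataset(records, ["consumer_id", "meter_no"]))
```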
Key Takeaways and Recommendations
Data privacy laws are already in force in many jurisdictions, including the GDPR in the European Union and India’s Digital Personal Data Protection Act, 2023. The policy needs to be aligned with the DPDP Act, 2023 and updated in line with recent developments. It also needs to maintain transparency over the sharing of data and preserve users’ control over it. Finally, the policy needs engagement with industry experts, privacy advocates, and civil society to ensure a balance of innovation with privacy and security.
Conclusion
The Draft National Data Governance Framework Policy, 2022 represents a significant step in shaping India's digital future, seeking to ensure that data governance evolves alongside technological advancements. The framework policy seeks to foster a robust digital ecosystem that benefits citizens, businesses, and the government alike by focusing on the essentials of data privacy, transparency, and security. However, achieving this vision requires addressing concerns such as data centralisation, cross-border data flows, and alignment with global privacy standards. Continued engagement with stakeholders and timely updates to the draft policy will be crucial to balancing innovation with user rights and data integrity. The final version of the policy is expected to be released soon.
References
- https://meity.gov.in/writereaddata/files/National-Data-Governance-Framework-Policy.pdf
- https://datareportal.com/?utm_source=DataReportal&utm_medium=Country_Article_Hyperlink&utm_campaign=Digital_2024&utm_term=India&utm_content=Home_Page_Link
- https://www.imf.org/en/Publications/fandd/issues/2023/03/data-by-people-for-people-tiwari-packer-matthan
- https://inc42.com/buzz/draft-national-data-governance-policy-under-finalisation-centre/
- https://legal.economictimes.indiatimes.com/news/industry/government-unveiled-national-data-governance-policy-in-budget-2023/97680515