# Fact Check: Viral Video Falsely Claims Israel Launched Nuclear Attack on Iran
Executive Summary:
A viral video circulating on social media falsely claims to show Israel moving nuclear weapons in preparation for an attack on Iran. Detailed research establishes that it actually shows a SpaceX Starship rocket (Starship 36) being towed for a pre-planned test in Texas, USA. The footage provides no evidence of any Israeli action or of a nuclear missile.

Claim:
Multiple social media posts shared a video clip of a large, missile-like object being towed by a very large vehicle, claiming that it showed Israel preparing a nuclear attack on Iran.
The caption of the video read: "Israel is going to launch a nuclear attack on Iran! #Israel". The viral post received significant engagement, helping to spread misinformation and unfounded fear about the rising conflicts in the Middle East.

Fact check:
A reverse image search using key frames of the viral footage led us to a Facebook post dated June 16, 2025.

A YouTube livestream from NASASpaceflight is dated June 15, 2025. Both sources clearly identify the object as SpaceX Starship 36. The rocket was being towed at SpaceX's Texas facility ahead of a static fire test, as part of preparations for the 10th test flight. The video shows no military ordnance, no military personnel, and no markings connected to Israel or Iran.
Several articles from SPACE.com, which reported on the Starship's subsequent explosion during testing, further support this conclusion.



Additionally, no reputable media outlet or defence agency has reported any Israeli nuclear mobilisation. The resemblance between a large rocket and a missile likely contributed to the confusion, but the footage's context and upload location have no connection to the State of Israel or Iran.

Conclusion:
The viral claim that the video shows Israel preparing to launch a nuclear attack on Iran is false and misleading. The video was in fact filmed in Texas and shows the civilian transport of SpaceX's Starship 36. This case highlights how easily unrelated footage can be used to create panic and spread misinformation. Before sharing such claims, verify them using trusted websites and tools.
- Claim: Misleading video claims Israel is preparing a nuclear attack on Iran
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
The Digital Covenant: Aligning Communication with SDG Goals
“Rethinking Communication, Cyber Responsibility, and Sustainability in a Connected World”
Introduction
It is rightly said by Antonio Guterres, United Nations Secretary-General, "Everyone should be able to express themselves freely without fear of attack. Everyone should be able to access a range of views and information sources." In 2024, the Global Alliance for PR and Communication Management, recognising an era of digital transformation in which technology advances at breakneck speed and brings various risks and threats, called on global leaders and stakeholders to proclaim 'Responsible Communication' as the 18th Sustainable Development Goal (SDG). On May 17th, as we celebrate World Telecommunication and Information Society Day (WTISD) 2025, we must align our personal, professional, and virtual spaces with a safe and sustainable information age.
In terms of digital growth, India is indubitably advancing at a brisk and consistent pace, on par with its South Asian and Western counterparts, and has incorporated international covenants on digital personal data and cybercrime into its domestic regime.
UN Global Principles for Information Integrity
The United Nations has displayed its constant commitment to the seventeen SDGs, a process initiated at the 2012 United Nations Conference in Rio de Janeiro. It recognises that digital transformation, technology, and digitisation cannot be isolated from other areas covered by the SDGs, such as health, education, and poverty. In June 2023, the UN Secretary-General released Policy Brief 8, which seeks to empirically assess the threats posed to information integrity and to develop norms that guide member states, digital platforms, and other stakeholders. These norms must conform with the right to freedom of opinion and expression and the right of access to information.
In line with its agenda, it has formulated Global Principles of Information Integrity, which include “Societal Trust and Resilience”, “Healthy Incentives”, “Public Empowerment”, “Independent, Free and Pluralistic Media” and “Transparency and Research”. The principles recognise the harm caused by hatred, misinformation, and disinformation propagated by the misuse of advances in Artificial Intelligence Technology (AI).
Breaking the Binary: Bridging the Gender Digital Divide
How far we have come, and how far we have to go, can be captured in a single phrase: using digital technologies to promote gender equality. This is both a paradox and a pressing call to action. WTISD 2025 highlights the fundamental role of Information and Communication Technologies (ICTs) in accelerating progress and in bringing those excluded from the digital transformation into this change, especially the female population that remains isolated from mainstream growth. As per ITU data, "Out of the world population, 70 per cent of men are using the internet, compared with 65 per cent of women."
This exclusion is not merely a technical gap but a societal and economic chasm, reinforcing existing inequalities. Including this goal in the day's theme marks a critical moment for forming gender-sensitive digital policies, promoting digital literacy among women and girls, and ensuring safe, affordable, and meaningful connectivity. We can then explore a future in which technology is a true instrument of gender parity, not a mirror of old hierarchies.
India and its courts have time and again proven their commitment to cultivating digital transformation as an inherent strength to bridge this digital divide, and the recent judgement where the court declared the right to digital access an intrinsic part of the right to life and liberty is a single instance among many.
CyberPeace Resolution on World Telecommunication and Information Society Day
CyberPeace is actively bridging the gap between digital safety and sustainable development through its initiatives, aligning with the principles of the Sustainable Development Goals (SDGs). The ‘CyberPeace Corps’ empowers communities by fostering cyber hygiene awareness and building digital resilience. The ‘CyberPeace Initiative’, a project with Google.org, tackles digital misinformation, promoting informed online engagement. Additionally, Digital Shakti, now in its fifth phase, empowers women by enhancing their digital literacy and safety. These are just a few of the many impactful initiatives by CyberPeace, aimed at creating a safer and more inclusive digital future. Together, we are spreading awareness and strengthening the foundation for a safer and more inclusive digital future and promoting responsible tech use. Let us be resolute on this World Telecommunication and Information Society Day for “Clean Data. Safe Clicks. Stronger Future. Pledge to Cyber Hygiene Today!”
Introduction
"an intermediary, on whose computer resource the information is stored, hosted or published, upon receiving actual knowledge in the form of an order by a court of competent jurisdiction or on being notified by the Appropriate Government or its agency under clause (b) of sub-section (3) of section 79 of the Act, shall not host, store or publish any unlawful information, which is prohibited under any law for the time being in force in relation to the interest of the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation; incitement to an offence relating to the above, or any information which is prohibited under any law for the time being in force"
Law grows by confronting its absences; it heals itself through its own gaps. The most recent notification from MeitY, G.S.R. 775(E) dated October 22, 2025, is an illustration of that self-correction. On November 15, 2025, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, will come into effect. They accomplish two crucial things: they restrict who can invoke "actual knowledge" to initiate a takedown, and they require senior-level scrutiny of those directives. In doing so, they preserve genuine security requirements while guiding India's content governance system towards more transparent due process.
When Regulation Learns Restraint
To understand the jurisprudence of revision, one must understand that regulation, in its truest form, must know when to pause. The 2025 amendment marks that rare moment when the government chooses precision over power, when regulation learns restraint. The amendment revises Rule 3(1)(d) of the 2021 Rules. Social media sites, hosting companies, and other digital intermediaries are still required to take action within 36 hours of receiving "actual knowledge" that a piece of content is illegal (e.g. poses a threat to public order, sovereignty, decency, or morality). However, "actual knowledge" now arises only in the following situations:
(i) a court order from a court of competent jurisdiction, or
(ii) a reasoned written intimation from a duly authorised government officer not below Joint Secretary rank (or equivalent)
In matters involving the police, the authorised authority "must not be below the rank of Deputy Inspector General of Police (DIG)". This creates a well-defined, senior-accountable channel in place of a diffuse trigger.
The amendment adds two further structural guardrails. First, the Rules establish a monthly assessment of all takedown notifications by a Secretary-level officer of the relevant government, testing necessity, proportionality, and compliance with India's safe harbour provision under Section 79(3) of the IT Act. Second, so that platforms act precisely rather than expansively, takedown requests must be accompanied by a legal justification, a description of the illegal act, and precise URLs or identifiers. The cumulative result is that every removal carries a proportionality check and a paper trail.
Due Process as the Law’s Conscience
Indian jurisprudence has been debating what constitutes “actual knowledge” for over a decade. The Supreme Court in Shreya Singhal (2015) connected an intermediary’s removal obligation to notifications from official channels or court orders rather than vague notice. But over time, that line became hazy due to enforcement practices and some court rulings, raising concerns about over-removal and safe-harbour loss under Section 79(3). Even while more recent decisions questioned the “reasonable efforts” of intermediaries, the 2025 amendment institutionally pays homage to Shreya Singhal’s ethos by refocusing “actual knowledge” on formal reviewable communications from senior state actors or judges.
The amendment also introduces an internal constitutionalism to executive orders by mandating monthly audits at the Secretary level. The state is required to re-justify its own orders on a rolling basis, evaluating them against proportionality and necessity, criteria that Indian courts are increasingly demanding for speech restrictions. For intermediaries, the results are clearer triggers, better logs, and fewer of the vague "please remove" communications that previously left compliance teams in legal limbo.
The Court’s Echo in the Amendment
The essence of this amendment is echoed in the Karnataka High Court's ruling that the Sahyog Portal, a government portal used to coordinate takedown orders under Section 79(3)(b), is constitutional. In September, the HC rejected X's (formerly Twitter's) petition contesting the legitimacy of the portal. The company had claimed that by giving nodal officers the authority to issue takedown orders without court review, the portal permitted arbitrary content removals. The court disagreed, holding that the officers' acts were in accordance with Section 79(3)(b) and that they were "not dropping from the air but emanating from statutes." By conforming to the Sahyog Portal verdict, the amendment turns compliance into conscience, reiterating that due process is the moral grammar of governance rather than just a formality.
Conclusion: The Necessary Restlessness of Law
Law cannot afford stillness; it survives through self-doubt and reinvention. The 2025 amendment, too, is not a destination but a pause before the next question, a reminder that justice breathes through revision. As befits a constitutional democracy, India's path to content governance has been combative and iterative. The next rule-making cycle has been sharpened by the stays, split judgments, and strike-downs that have resulted from strategic litigation centred on the IT Rules, safe harbour, government fact-checking, and blocking orders. The lessons learnt are reflected in the 2025 amendment: review triumphs over opacity; specificity triumphs over vagueness; and due process triumphs over discretion. This is how a digital republic balances freedom and force.
Sources
- https://pressnews.in/law-and-justice/government-notifies-amendments-to-it-rules-2025-strengthening-intermediary-obligations/
- https://www.meity.gov.in/static/uploads/2025/10/90dedea70a3fdfe6d58efb55b95b4109.pdf
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2181719
- https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/
- https://www.medianama.com/2025/10/223-content-takedown-rules-online-platforms-36-hr-deadline-officer-rank/#:~:text=It%20specifies%20that%20government%20officers,Deputy%20Inspector%20General%20of%20Police%E2%80%9D.

Introduction
In the digital realm of social media, Meta Platforms, the driving force behind Facebook and Instagram, faces intense scrutiny following The Wall Street Journal's investigative report. This exploration delves deeper into critical issues surrounding child safety on these widespread platforms, unravelling algorithmic intricacies, enforcement dilemmas, and the ethical maze surrounding monetisation features. Instances of "parent-managed minor accounts" leveraging Meta's subscription tools to monetise content featuring young individuals have raised eyebrows. While skirting the line of legality, this practice prompts concerns due to its potential appeal to adults and the associated inappropriate interactions. It's a nuanced issue demanding nuanced solutions.
Failed Algorithms
The very heartbeat of Meta's digital ecosystem, its algorithms, has come under intense scrutiny. These algorithms, designed to curate and deliver content, were found to actively promote accounts featuring explicit content to users with known pedophilic interests. The revelation sparks a crucial conversation about the ethical responsibilities tied to the algorithms shaping our digital experiences. Striking the right balance between personalised content delivery and safeguarding users is a delicate task.
While algorithms play a pivotal role in tailoring content to users' preferences, Meta needs to reevaluate the algorithms to ensure they don't inadvertently promote inappropriate content. Stricter checks and balances within the algorithmic framework can help prevent the inadvertent amplification of content that may exploit or endanger minors.
Major Enforcement Challenges
Meta's enforcement challenges have come to light as previously banned parent-run accounts resurface, gaining official verification and accumulating large followings. The struggle to remove associated backup profiles deepens concerns about the effectiveness of Meta's enforcement mechanisms. It underscores the need for a robust system capable of swift and thorough action against policy violators.
To enhance enforcement mechanisms, Meta should invest in advanced content detection tools and employ a dedicated team for consistent monitoring. This proactive approach can mitigate the risks associated with inappropriate content and reinforce a safer online environment for all users.
Monetisation Concerns
The financial dynamics of Meta's ecosystem raise concerns about the exploitation of videos eligible for cash gifts from followers. The decision to expand the subscription feature before implementing adequate safety measures poses ethical questions. Prioritising financial gains over user safety risks tarnishing the platform's reputation and trustworthiness. A re-evaluation of this strategy is crucial for maintaining a healthy and secure online environment.
To address safety concerns tied to monetisation features, Meta should consider implementing stricter eligibility criteria for content creators. Verifying the legitimacy and appropriateness of content before allowing it to be monetised can act as a preventive measure against the exploitation of the system.
Meta's Response
In the aftermath of the revelations, Meta's spokesperson, Andy Stone, took centre stage to defend the company's actions. Stone emphasised ongoing efforts to enhance safety measures, asserting Meta's commitment to rectifying the situation. However, critics argue that Meta's response lacks the decisive actions required to align with industry standards observed on other platforms. The debate continues over the delicate balance between user safety and the pursuit of financial gain. A more transparent and accountable approach to addressing these concerns is imperative.
To rebuild trust and credibility, Meta needs to implement concrete and visible changes. This includes transparent communication about the steps taken to address the identified issues, continuous updates on progress, and a commitment to a user-centric approach that prioritises safety over financial interests.
The formation of a task force in June 2023 was a commendable step to tackle child sexualisation on the platform. However, the effectiveness of these efforts remains limited. Persistent challenges in detecting and preventing potential child safety hazards underscore the need for continuous improvement. Legislative scrutiny adds an extra layer of pressure, emphasising the urgency for Meta to enhance its strategies for user protection.
To overcome ongoing challenges, Meta should collaborate with external child safety organisations, experts, and regulators. Open dialogues and partnerships can provide valuable insights and recommendations, fostering a collaborative approach to creating a safer online environment.
Drawing a parallel with competitors such as Patreon and OnlyFans reveals stark differences in child safety practices. While Meta grapples with its challenges, these platforms maintain stringent policies against certain content involving minors. This comparison underscores the need for universal industry standards to safeguard minors effectively. Collaborative efforts within the industry to establish and adhere to such standards can contribute to a safer digital environment for all.
To align with industry standards, Meta should actively participate in cross-industry collaborations and adopt best practices from platforms with successful child safety measures. This collaborative approach ensures a unified effort to protect users across various digital platforms.
Conclusion
Navigating the intricate landscape of child safety concerns on Meta Platforms demands a nuanced and comprehensive approach. The identified algorithmic failures, enforcement challenges, and controversies surrounding monetisation features underscore the urgency for Meta to reassess and fortify its commitment to being a responsible digital space. As the platform faces this critical examination, it has an opportunity to not only rectify the existing issues but to set a precedent for ethical and secure social media engagement.
This comprehensive exploration aims not only to shed light on the existing issues but also to provide a roadmap for Meta Platforms to evolve into a safer and more responsible digital space. The responsibility lies not just in acknowledging shortcomings but in actively working towards solutions that prioritise the well-being of its users.
References
- https://timesofindia.indiatimes.com/gadgets-news/instagram-facebook-prioritised-money-over-child-safety-claims-report/articleshow/107952778.cms
- https://www.adweek.com/blognetwork/meta-staff-found-instagram-tool-enabled-child-exploitation-the-company-pressed-ahead-anyway/107604/
- https://www.tbsnews.net/tech/meta-staff-found-instagram-subscription-tool-facilitated-child-exploitation-yet-company