#FactCheck - Old Wedding Fire Video Misleadingly Shared as Iranian Hypersonic Missile Strike in Tel Aviv
Executive Summary:
Amid the ongoing conflict involving the United States, Israel, and Iran, a video showing a building engulfed in flames is being widely circulated on social media. In the clip, a large fire can be seen inside a building while several people appear to be running in panic. The video is being shared with the claim that Iran fired a hypersonic missile targeting a ceremony in Tel Aviv, Israel, allegedly killing several Israeli military generals and other prominent figures.
However, research by CyberPeace found that the claim is false. The video being circulated as footage of an attack in Israel actually predates the current conflict and shows a fire that broke out during a wedding ceremony.
Claim
A Facebook user named “Syed Asif Raza Jafri” shared the video on March 13, 2026, claiming that an Iranian hypersonic missile had struck a grand ceremony in Tel Aviv, where several Israeli military officers, generals, soldiers, and other important personalities were present. According to the post, the attack resulted in multiple casualties.
Source:
- https://www.facebook.com/reel/902182825912364
- https://ghostarchive.org/archive/rZryr

Fact Check
To verify the claim, we began our research using the Google Lens reverse image search tool. Several key frames from the viral video were extracted and searched online.
During the search, we found the same video shared earlier on multiple foreign social media accounts. A Facebook user named “Es de Bombero” from Chile had posted the video on January 17, 2026, describing it in Spanish as footage of a fire that broke out during a wedding celebration.
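The matching step behind a reverse image search can be illustrated with a perceptual "average hash": near-duplicate frames produce nearly identical fingerprints even after compression. The sketch below is purely illustrative and is not the algorithm Google Lens uses; images are modelled as small grayscale pixel grids (real tools decode actual image files, which requires an imaging library), and all frame data here is invented for the example.

```python
# Illustrative sketch of perceptual "average hash" matching, the kind of
# fingerprinting that underlies near-duplicate image search. NOT the
# actual Google Lens algorithm; pixel grids below are invented examples.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means a likely match."""
    return sum(a != b for a, b in zip(h1, h2))

# Two nearly identical 4x4 "frames" (e.g. the same scene, re-encoded)
frame_a = [[10, 10, 200, 200],
           [10, 10, 200, 200],
           [200, 200, 10, 10],
           [200, 200, 10, 10]]
frame_b = [[12, 11, 198, 205],   # same scene with slight compression noise
           [9, 13, 201, 199],
           [202, 197, 11, 12],
           [198, 203, 9, 10]]
# An unrelated "frame"
frame_c = [[200, 10, 200, 10],
           [10, 200, 10, 200],
           [200, 10, 200, 10],
           [10, 200, 10, 200]]

ha, hb, hc = (average_hash(f) for f in (frame_a, frame_b, frame_c))
print(hamming_distance(ha, hb))  # 0  -> near-duplicates match
print(hamming_distance(ha, hc))  # 8  -> unrelated image, large distance
```

Because the hash depends only on coarse brightness structure, the same fingerprint survives re-uploads and re-compressions, which is why an old clip can be traced to earlier posts.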

Our research shows that the viral video had been circulating on social media since at least January 15, 2026, well before the escalation of the current conflict. According to a report published on March 1, 2026, by the BBC, the large-scale attacks on Iran by the United States and Israel began on February 28, 2026, after which Iran’s Supreme Leader Ali Khamenei was reported dead.
Additionally, a March 12, 2026 report by Al Jazeera stated that a house near Tel Aviv in central Israel was damaged by a rocket reportedly fired by Hezbollah, which has previously carried out joint attacks in coordination with Iran.

Conclusion
The viral video being shared as footage of an Iranian hypersonic missile strike in Tel Aviv is misleading. The clip is an older video of a fire that reportedly broke out during a wedding ceremony and was circulating online before the current conflict began.
While the exact location of the incident shown in the video cannot be independently verified, it is clear that the footage has no connection to the ongoing war between the United States, Israel, and Iran.
Related Blogs

The Rise of Tech Use Amongst Children
Technology today has become an invaluable resource for children: a means to research issues, stay informed about events, gather data, and share views and experiences with others. Technology is no longer limited to certain age groups or professions; children today use it for learning and entertainment, engaging with their friends, online games and much more. With increased digital access, children are also exposed to online mis/disinformation and other forms of cybercrime, far more than their parents, caregivers, and educators were in their own childhoods, or even are today. Children are particularly vulnerable to mis/disinformation because their maturity and cognitive capacities are still evolving. The innocence of youth is a major cause for concern when it comes to digital access, because children simply do not possess the discernment and caution required to navigate the Internet safely. They are active users of online resources, and their presence on social media is an important avenue for social, political and civic engagement, but young people and children often lack the cognitive and emotional capacity needed to distinguish between reliable and unreliable information. As a result, they can be targets of mis/disinformation. A UNICEF survey in 10 countries [1] reveals that up to three-quarters of children reported feeling unable to judge the veracity of the information they encounter online.
Social media has become a crucial part of children's lives, with children spending significant time on digital platforms such as YouTube, Facebook, Instagram and more. All these platforms act as sources of news, educational content, entertainment, peer communication and more. These platforms host a wide variety of content across a diverse range of subject matters, and each platform’s content and privacy policies are different. Despite age restrictions under the Children's Online Privacy Protection Act (COPPA) and other applicable laws, it is easy for children to falsify their birth date or use their parents' accounts to access content that might not be age-appropriate.
The Impact of Misinformation on Children
In virtual settings, inaccurate information can come in the form of text, images, or videos shared through traditional and social media channels. In this age, online misinformation is a significant cause for concern, especially for children, because it can cause anxiety, damage self-esteem, shape beliefs, and skew their worldview. It can distort children's understanding of reality, hinder their critical thinking skills, and cause confusion and cognitive dissonance. The growing infodemic can even lead to information overload. Misinformation can also influence children's social interactions, leading to misunderstandings, conflicts, and mistrust among peers. Children from low-literacy backgrounds are more susceptible to fabricated content. Mis/disinformation can exacerbate social divisions amongst peers and lead to unwanted behavioural patterns. Sometimes children themselves can unwittingly spread or share misinformation. Therefore, it is important to educate and empower children to build cognitive defenses against online misinformation risks, promote media literacy skills, and equip them with the necessary tools to critically evaluate online information.
CyberPeace Policy Wing Recommendations
- Role of Parents & Educators to Build Cognitive Defenses
One way parents shape their children's values, beliefs and actions is through modelling. Children observe how their parents use technology, handle challenging situations, and make decisions. For example, parents who demonstrate honesty, encourage healthy use of social media and show kindness and empathy are more likely to raise children who hold these qualities in high regard. Hence parents and educators play an important role in shaping the minds of their young charges and their behaviours, whether in offline or online settings. It is important for parents and educators to pay close attention to how online content consumption is impacting the cognitive skills of their child. Parents and educators should teach children about authentic sources of information: instructing them on the importance of relying on credible sources when researching any topic, and on using verification mechanisms to test suspected information. This may sound like a challenging ideal to meet, but the earlier we teach children about prebunking and debunking strategies and the ability to differentiate between fact and misleading information, the sooner we can help them build cognitive defenses so that they may use the Internet safely. Hence it is of paramount importance that parents and educators encourage children to question the validity of information, verify sources, and critically analyse content. Developing these skills is essential for navigating the digital world effectively and making informed decisions.
- The Role of Tech & Social Media Companies to Fortify their Steps in Countering Misinformation
It is worth noting that all major tech and social media companies have privacy policies in place to discourage the spread of harmful content or misinformation. Social media platforms have already initiated efforts to counter misinformation by introducing new features such as adding context to content, labelling content, AI watermarks, and collaboration with civil society organisations. In light of this, social media platforms must prioritise both the design and the practical implementation of policies to counter misinformation. These strategies can be further improved through government support and regulatory controls. It is recommended that social media platforms further increase their efforts against the growing spread of online mis/disinformation and apply advanced techniques to counter it, including filtering, automated removal, detection and prevention, watermarking, strengthening reporting mechanisms, providing context to suspected content, and promoting authenticated and reliable sources of information.
Social media platforms should consider developing children-specific help centres that host educational content in attractive, easy-to-understand formats so that children can learn about misinformation risks and tactics, how to spot red flags and how to increase their information literacy and protect themselves and their peers. Age-appropriate, attractive and simple content can go a long way towards fortifying young minds and making them aware and alert without creating fear.
- Laws and Regulations
It is important that the government and the social media platforms work in sync to counteract misinformation. The government must consult with the concerned platforms and enact rules and regulations that strengthen the platforms' age verification mechanisms at the sign-up/account creation stage whilst also respecting user privacy. Content moderation, removal of harmful content, and strengthening reporting mechanisms are all important factors that must be prioritised at both the regulatory level and the platform operational level. Additionally, in order to promote healthy and responsible use of technology by children, the government should collaborate with other institutions to design information literacy programs at the school level. The government must make it a key priority to work with civil society organisations and expert groups that run programs to fight misinformation and co-create a safe cyberspace for everyone, including children.
- Expert Organisations and Civil Societies
Cybersecurity experts and civil society organisations possess a unique blend of large-scale impact potential and technical expertise. We have the ability to educate and empower huge numbers of people, along with the skills and policy acumen needed not just to make people aware of the problem but also to teach them how to solve it for themselves. True, sustainable solutions to any social concern only come about when capacity-building and empowerment are at the heart of the initiative. Programs that prioritise resilience, teach prebunking and debunking, and understand the unique concerns, needs and abilities of children, designing solutions accordingly, are the best suited to implement the administration’s mission to create a safe digital society.
Final Words
Online misinformation significantly impacts child development and can hinder their cognitive abilities, color their viewpoints, and cause confusion and mistrust. It is important that children are taught not just how to use technology but how to use it responsibly and positively. This education can begin at a very young age and parents, guardians and educators can connect with CyberPeace and other similar initiatives on how to define age-appropriate learning milestones. Together, we can not only empower children to be safe today, but also help them develop into netizens who make the world even safer for others tomorrow.
References:
- [1] Digital misinformation / disinformation and children
- [2] Children's Privacy | Federal Trade Commission
Executive Summary:
A viral image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man; however, this claim is false. A thorough investigation by the CyberPeace Research team found that the image has been digitally manipulated. The original photo, posted by Balmukund Acharya, a BJP MLA from Jaipur, on his official Facebook account in December 2023, shows him posing with a Muslim man in his election office. The man wearing the Muslim skullcap is featured in several other photos on Acharya's Instagram account, where the MLA expressed gratitude for the support of the Muslim community. Thus, the claimed image of a marriage between a Hindu Sadhvi and a Muslim man is digitally altered.

Claims:
An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.


Fact Check:
Upon receiving the posts, we reverse-searched the image to find any credible sources. We found a photo posted by Balmukund Acharya Hathoj Dham on his Facebook page on 6 December 2023.

This photo was digitally altered and posted on social media to mislead. We also found several other photos featuring the same man in the skullcap.

We also checked the viral image for AI fabrication, using a detection tool named “content@scale” AI Image Detection. This tool found the image to be 95% AI-manipulated.

For further validation, we checked with another detection tool named “isitai”. It found the image to contain 38.50% AI-generated content. Taken together, these results indicate that the image has been manipulated and does not support the claim made. Hence, the viral image is fake and misleading.
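When two detectors disagree as sharply as 95% and 38.50%, a fact-checker can treat any score above a chosen cut-off as a manipulation signal rather than averaging the scores away. The sketch below is a hypothetical decision rule, not the logic of either tool; the 0.30 threshold is an invented editorial choice for illustration.

```python
# Illustrative decision rule for combining AI-image-detector scores.
# NOT the logic of "content@scale" or "isitai"; the threshold is a
# hypothetical editorial choice used only for this example.

def flag_as_manipulated(scores, threshold=0.30):
    """Flag an image if any detector's score exceeds the threshold."""
    return any(s > threshold for s in scores)

detector_scores = [0.95, 0.385]  # the two scores reported above, as fractions
print(flag_as_manipulated(detector_scores))  # True -> treat as manipulated
```

The rationale for "any detector fires" rather than averaging is that detectors miss differently: a single confident positive is stronger evidence of tampering than a low average is evidence of authenticity.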

Conclusion:
The lack of a credible source and the detection of AI manipulation in the image show that the viral image claiming to depict a Hindu Sadhvi marrying a Muslim man is false. It has been digitally altered. The original image features BJP MLA Balmukund Acharya posing with a Muslim man, and there is no evidence of the claimed marriage.
- Claim: An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
Rajeev Chandrasekhar, Minister of State at the Ministry of Electronics and Information Technology, has emphasised the need for an open internet. He stated that no platform can deny content creators access to distribute and monetise content and that large technology companies have begun to play a significant role in the digital evolution. Chandrasekhar emphasised that the government does not want the internet or monetisation to be in the purview of just one or two companies and does not want 120 crore Indians on the internet in 2025 to be catered to by big islands on the internet.
The Voice for Open Internet
India's Minister of State for IT, Rajeev Chandrasekhar, has stated that no technology company or social media platform can deny content creators access to distribute and monetise their content. Speaking at the Digital News Publishers Association Conference in Delhi, Chandrasekhar emphasized that the government does not want the internet or monetization of the internet to be in the hands of just one or two companies. He argued that the government does not like monopoly or duopoly and does not want 120 crore Indians on the Internet in 2025 to be catered to by big islands on the internet.
Chandrasekhar highlighted that large technology companies have begun to exert influence when it comes to the dissemination of content, which has become an area of concern for publishers and content creators. He stated that if any platform finds it necessary to block any content, they need to give reasons or grounds to the creators, stating that the content is violating norms.
As India tries to establish itself as an innovator in the technology sector, a corpus of Rs 1 lakh crore was recently announced by the government in the interim Budget of 2024-25. As big companies continue to tighten their stronghold on the sector, content moderation has become crucial. Under the IT Rules, 11 categories of content are unlawful under the IT Act and criminal law. Platforms must ensure no user posts content that falls under these categories, take down any such content, and act as the gateway to either de-platforming or prosecuting offending users. Chandrasekhar believes that the government has to protect the fundamental rights of people and emphasises legislative guardrails to ensure platforms are accountable for the correctness of the content.
Monetizing Content on the Platform
'No platform can deny a content creator access to the platform to distribute and monetise it,' Chandrasekhar declared, boldly laying down a gauntlet that defies the prevailing norms. This tenet signals a nascent dawn where creators may envision reaping the rewards borne of their creative endeavours, unfettered by platform restrictions.
An increasingly contentious issue that shadows this debate is the moderation of content within the digital realm. In this vast uncharted expanse, the powers that be within these monolithic platforms assume the mantle of vigilance, policing the digital avenues for transgressions against a prescribed code of conduct. Under the stipulations of India's IT Rules, for example, platforms are duty-bound to interdict user content that strays into the 11 delineated unlawful categories. Violations span the gamut from the infringement of intellectual property rights to the propagation of misinformation, each category necessitating swift and decisive intervention. He raised the alarm against misinformation, a malignant growth fed by the fertile soils of innovation, a phenomenon wherein media reports chillingly suggest that up to half of the information circulating on the internet might be a mere fabrication, a misleading simulacrum of authenticity.
The government's stance, as expounded by Chandrasekhar, pivots on an axis of safeguarding citizens' fundamental rights, compelling digital platforms to shoulder the responsibility of arbiters of truth. 'We are a nation of over 90 crores today, a nation progressing with vigour, yet we find ourselves beset by those who wish us ill,' he said.
Upcoming Digital India Act
Waiting on the horizon, India's proposed Digital India Act (DIA), still in its embryonic stage of pre-consultation deliberation, seeks to sculpt these asymmetries into a more balanced form. Chandrasekhar hinted at the potential inclusion within the DIA of regulatory measures that would shape the interactions between platforms and the mosaic of content creators who inhabit them. Although specifics await the crucible of public discourse and the formalities of consultation, indications of a maturing framework are palpable.
Conclusion
It is essential that the fable of digital transformation reverberates with the voices of individual creators, the very lifeblood propelling the vibrant heartbeat of the internet's culture. These are the voices that must echo at the centre stage of policy deliberations and legislative assembly halls; these are the visions that must guide us, and these are the rights that we must uphold. As we stand upon the precipice of a nascent digital age, the decisions we forge at this moment will cascade into the morrow and define the internet of our future. This internet must eternally stand as a bastion of freedom, of ceaseless innovation and as a realm of boundless opportunity for every soul that ventures into its infinite expanse with responsible use.
References
- https://www.financialexpress.com/business/brandwagon-no-platform-can-deny-a-content-creator-access-to-distribute-and-monetise-content-says-mos-it-rajeev-chandrasekhar-3386388/
- https://indianexpress.com/article/india/meta-content-monetisation-social-media-it-rules-rajeev-chandrasekhar-9147334/
- https://www.medianama.com/2024/02/223-rajeev-chandrasekhar-content-creators-publishers/