#FactCheck: Misleading Claim Amid West Asia Conflict: Old Yemen Video Shared as Iran’s Attack on Tel Aviv
Executive Summary
Amid the ongoing tensions in West Asia between the United States–Israel alliance and Iran since February 28, 2026, a video is rapidly going viral on social media. The clip shows buildings engulfed in flames and thick plumes of smoke following an attack. Several users are sharing it with the claim that it depicts Iran’s recent strike on Tel Aviv, Israel. However, research by CyberPeace found the claim to be misleading. The viral video is actually from August 2025, when Israel carried out airstrikes in Sanaa, the capital of Yemen. It has no connection to the current conflict.
Claim:
An Instagram user ‘iran_.news24’ posted the video on March 27, 2026, with the caption: “Iran has turned Israel’s largest city Tel Aviv into hell—fears that 200,000 people have died in the war so far.”
Fact Check
To verify the viral claim, keyframes of the video were extracted and searched using Google Lens. The same video was found posted on August 24, 2025, by a Facebook user ‘Mhmdmhywbalshrby5’. The accompanying text, when translated, stated that it showed Israeli bombardment of Sanaa, Yemen.

Similarly, another Instagram user ‘ae5ce’ had also shared the same video on August 24, 2025, identifying it as footage from Sanaa.

Media reports further support this finding. According to a report published by Egypt Today on August 24, 2025, Israel carried out multiple airstrikes in Sanaa targeting key locations, including an oil station, a power facility, and the presidential palace. Casualties were also reported. The strikes were said to be in response to attacks by Houthi forces.

Additionally, the New York Post shared another video of the same incident from a different angle on its X (formerly Twitter) handle on August 25, 2025.

Conclusion
The video being circulated with the claim of Iran attacking Tel Aviv is actually old footage from Israeli airstrikes in Yemen in August 2025. It is unrelated to the ongoing conflict.
Related Blogs
The Digital Covenant: Aligning Communication with SDG Goals
“Rethinking Communication, Cyber Responsibility, and Sustainability in a Connected World”
Introduction
It is rightly said by Antonio Guterres, United Nations Secretary-General, “Everyone should be able to express themselves freely without fear of attack. Everyone should be able to access a range of views and information sources.” In 2024, the Global Alliance for PR and Communication Management, recognising that technology in this era of digital transformation is advancing at breakneck speed and bringing with it various risks and threats, called on global leaders and stakeholders to proclaim ‘Responsible Communication’ as the 18th Sustainable Development Goal (SDG). On May 17th, as we celebrate World Telecommunication and Information Society Day (WTISD) 2025, we must align our personal, professional, and virtual spaces with a safe and sustainable information age.
In terms of digital growth, India is indubitably advancing at a consistently brisk pace, keeping step with its South Asian and Western counterparts, and has incorporated international covenants on digital personal data and cyber crimes into its domestic regime.
UN Global Principles for Information Integrity
The United Nations has displayed its constant commitment to the achievement of the seventeen SDGs, adopted in 2015 through a process launched at the 2012 United Nations Conference on Sustainable Development in Rio de Janeiro. It recognises that digital transformation, technology, and digitisation cannot be isolated from other areas covered by the SDGs, such as health, education, and poverty. In June 2023, the UN Secretary-General released Policy Brief 8, which seeks to empirically derive data on the threats posed to information integrity and then develop norms to guide member states, digital platforms, and other stakeholders. These norms must conform with the right to freedom of opinion and expression and the right to access information.
In line with this agenda, it has formulated the Global Principles for Information Integrity, which include “Societal Trust and Resilience”, “Healthy Incentives”, “Public Empowerment”, “Independent, Free and Pluralistic Media” and “Transparency and Research”. The principles recognise the harm caused by hatred, misinformation, and disinformation propagated through the misuse of advances in Artificial Intelligence (AI) technology.
Breaking the Binary: Bridging the Gender Digital Divide
How far we have come, and how far we have to go, can be captured in a single phrase: using digital technologies to promote gender equality. This can be seen both as a paradox and as a pressing call to action. As we celebrate WTISD 2025, the day highlights the fundamental role of Information and Communication Technologies (ICTs) in accelerating progress and bringing those still excluded from this digital transformation, especially the female population isolated from mainstream growth, into the fold. As per the data given by the ITU, “Out of the world population, 70 per cent of men are using the internet, compared with 65 per cent of women.”
This exclusion is not merely a technical gap but a societal and economic chasm that reinforces existing inequalities. Including such an important goal in the theme of this day marks a critical moment for the formation of gender-sensitive digital policies, the promotion of digital literacy among women and girls, and the assurance of safe, affordable, and meaningful connectivity. We can then explore a future in which technology is a true instrument for gender parity, not a mirror of old hierarchies.
India and its courts have time and again proven their commitment to cultivating digital transformation as an inherent strength to bridge this digital divide, and the recent judgement where the court declared the right to digital access an intrinsic part of the right to life and liberty is a single instance among many.
CyberPeace Resolution on World Telecommunication and Information Society Day
CyberPeace is actively bridging the gap between digital safety and sustainable development through its initiatives, aligning with the principles of the Sustainable Development Goals (SDGs). The ‘CyberPeace Corps’ empowers communities by fostering cyber hygiene awareness and building digital resilience. The ‘CyberPeace Initiative’, a project with Google.org, tackles digital misinformation, promoting informed online engagement. Additionally, Digital Shakti, now in its fifth phase, empowers women by enhancing their digital literacy and safety. These are just a few of the many impactful initiatives by CyberPeace, aimed at creating a safer and more inclusive digital future. Together, we are spreading awareness and strengthening the foundation for a safer and more inclusive digital future and promoting responsible tech use. Let us be resolute on this World Telecommunication and Information Society Day for “Clean Data. Safe Clicks. Stronger Future. Pledge to Cyber Hygiene Today!”

Introduction
Twitter Inc.’s appeal against blocking orders for specific accounts issued by the Ministry of Electronics and Information Technology was dismissed by a single judge of the Karnataka High Court. Justice Krishna Dixit also imposed a fine of Rs. 50 lakh on Twitter Inc., observing that the social media corporation had approached the court while defying government directives.
The government questioned Twitter’s locus standi as a foreign corporation, arguing that it was ineligible to invoke Articles 19 and 21. Additionally, the government claimed that because Twitter was only designed to serve as an intermediary, there was no “jural relationship” between Twitter and its users.
The Issue
The Ministry issued the directives under Section 69A of the Information Technology Act. Twitter, however, had argued in its appeal that the orders “fall foul of Section 69A both substantially and procedurally.” It contended that Section 69A requires account holders to be notified before their tweets and accounts are taken down, yet the Ministry provided no such notices to these account holders.
On June 4, 2022, and again on June 6, 2022, the government sent letters to Twitter’s compliance officer requesting that they come before them and provide an explanation for why the Blocking Orders were not followed and why no action should be taken against them.
Twitter replied on June 9 that the content for which it had not complied with the blocking orders did not appear to violate Section 69A. On June 27, 2022, the Government issued another notice stating Twitter was violating its directions. On June 29, Twitter replied, asking the Government to reconsider the direction on the basis of the doctrine of proportionality. On June 30, 2022, the Government withdrew blocking orders on ten account-level URLs but gave an additional list of 27 URLs to be blocked. On July 10, more accounts were blocked. Complying with the orders “under protest,” Twitter approached the HC with the petition challenging the orders.
Legality
Government attorney Additional Solicitor General R Sankaranarayanan argued that tweets mentioning “Indian Occupied Kashmir” and the survival of LTTE commander Velupillai Prabhakaran were serious enough to undermine the integrity of the nation.
Twitter, on the other hand, claimed that it was asserting these rights on behalf of its users. It also maintained that, even as a foreign company, it was entitled to certain rights under Article 14 of the Constitution, such as the right to equality. Twitter further argued that no reasons were stated for the account blocking in each case, and that Section 69A’s provision for blocking should apply only to the offending URL rather than the entire account: blocking an entire account prevents the creation of future information, whereas blocking an offending tweet applies only to information already created.
Conclusion
The evolution of cyberspace has been substantially shaped by big tech companies like Facebook, Google, Twitter, Amazon and many more. These companies have been instrumental in leading the spectrum of emerging technologies and creating a blanket of ease and accessibility for users. Compliance with laws and policies is of utmost priority for the government, and the new bills and policies are empowering Indian cyberspace. Non-compliance will be taken very seriously, as legalised under the Intermediary Guidelines 2021 and 2022 by MeitY. Referring to Section 79 of the Information Technology Act, which provides an exemption from liability for intermediaries in some instances, the court said, “Intermediary is bound to obey the orders which the designated authority/agency of the government fixes from time to time.”

Introduction
A Reuters investigation has exposed serious shortcomings in Meta Platforms' internal measures to address online fraud and illicit advertising. The confidential documents that Reuters reviewed disclosed that Meta projected approximately 10% of its 2024 revenue, i.e., USD 16 billion, would come from ads related to scams and prohibited goods. The findings point to a disturbing paradox: on the one hand, Meta is a vocal advocate for digital safety and platform integrity, while on the other, the company's internal records indicate broad tolerance of fraudulent advertising activity that exploits users throughout the world.
The Scale of the Problem
Internal Meta projections show that its platforms, Facebook, Instagram, and WhatsApp, are displaying a staggering 15 billion scam ads per day combined. The advertisements include deceitful e-commerce promotions, fake investment schemes, counterfeit medical products, and unlicensed gambling platforms.
Meta has developed sophisticated detection tools, but it does not ban advertisers unless its systems are at least 95% certain they are fraudsters. Because the threshold for removal is set so high, the company forfeits little revenue. As a result, instead of turning fraud-adjacent advertisers away, it charges them higher ad rates, a strategy known internally as “penalty bids.”
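The reported threshold-and-surcharge policy can be sketched as a toy model. The function name, the 0.5 "suspected" cut-off, and the surcharge multiplier below are illustrative assumptions, not Meta's actual system:

```python
# Toy model of the reported "penalty bids" policy: advertisers are banned
# only above a high fraud-confidence threshold; suspected-but-unproven
# accounts are instead charged a surcharge. All names and figures except
# the 95% ban threshold are illustrative assumptions.

BAN_THRESHOLD = 0.95       # confidence required before an advertiser is removed
SUSPECT_THRESHOLD = 0.5    # hypothetical cut-off for "fraud-adjacent" accounts
PENALTY_MULTIPLIER = 1.5   # hypothetical surcharge applied to suspected fraudsters

def handle_advertiser(fraud_confidence: float, base_rate: float):
    """Return (action, effective_ad_rate) for a given fraud-confidence score."""
    if fraud_confidence >= BAN_THRESHOLD:
        return ("ban", 0.0)                                  # removed from the platform
    if fraud_confidence >= SUSPECT_THRESHOLD:
        return ("penalty_bid", base_rate * PENALTY_MULTIPLIER)  # pays more, stays on
    return ("allow", base_rate)

print(handle_advertiser(0.97, 10.0))  # ('ban', 0.0)
print(handle_advertiser(0.80, 10.0))  # ('penalty_bid', 15.0)
print(handle_advertiser(0.10, 10.0))  # ('allow', 10.0)
```

The striking design choice, per the reporting, is that the middle branch monetises suspicion rather than acting on it.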
Internal Acknowledgements & Business Dependence
Internal documents that date between 2021 and 2025 reveal that the financial, safety, and lobbying divisions of Meta were cognizant of the enormity of revenues generated from scams. One of the 2025 strategic papers even describes this revenue source as "violating revenue," which implies that it includes ads that are against Meta's policies regarding scams, gambling, sexual services, and misleading healthcare products.
The company's top executives weighed the cost-benefit scenario of stricter enforcement. According to a 2024 internal projection, Meta's half-yearly earnings from high-risk scam ads were estimated at USD 3.5 billion, whereas regulatory fines for such violations would not exceed USD 1 billion, making it a tolerable trade-off from a commercial viewpoint. At the same time, the company intends to scale down scam ad revenue gradually, from 10.1% in 2024 to 7.3% in 2025 and 6% by 2026; however, the documents also reveal a planned slowdown in enforcement to avoid "abrupt reductions" that could affect business forecasts.
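The trade-off in that 2024 projection reduces to simple arithmetic. Annualising the half-yearly revenue figure is an assumption made here purely for illustration:

```python
# Back-of-the-envelope version of the reported 2024 internal projection:
# high-risk scam-ad revenue vs. the worst-case regulatory fines.
# Doubling the half-yearly figure to get an annual one is an assumption.
half_yearly_scam_revenue = 3.5e9   # USD, per the reported projection
annual_scam_revenue = 2 * half_yearly_scam_revenue
max_expected_fines = 1.0e9         # USD, reported upper bound on fines

net_upside = annual_scam_revenue - max_expected_fines
print(f"Annualised high-risk scam-ad revenue: ${annual_scam_revenue / 1e9:.1f}B")
print(f"Worst-case regulatory fines:          ${max_expected_fines / 1e9:.1f}B")
print(f"Net upside of lax enforcement:        ${net_upside / 1e9:.1f}B")
```

On these assumed numbers the downside risk is a fraction of the revenue at stake, which is what makes the trade-off "tolerable" in commercial terms.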
Algorithmic Amplification of Scams
One of the most alarming findings is that Meta's own advertising algorithms amplify scam content. It has been reported that users who click on fraudulent ads are shown more similar ads, as the platform's personalisation engine interprets the click as user "interest."
This scenario creates a self-reinforcing feedback loop where the user engagement with scam content dictates the amount of such content being displayed. Thus, a digital environment is created which encourages deceptive engagement and consequently, user trust is eroded and systemic risk is amplified.
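The feedback loop described above can be illustrated with a minimal model in which each scam-ad click multiplies the weight the ranker assigns to scam content. The starting share and boost factor are assumed values, not figures from the reporting:

```python
# Minimal model of the self-reinforcing loop: each click on a scam ad
# multiplies the "interest" weight the personalisation engine assigns to
# scam content. The initial share and boost factor are assumptions.

def scam_exposure_after(clicks: int, initial_share: float = 0.05,
                        boost: float = 1.5) -> float:
    """Share of a user's ad feed that is scam content after `clicks`
    scam-ad clicks, capped at 1.0 (the whole feed)."""
    return min(1.0, initial_share * boost ** clicks)

for clicks in (0, 3, 6, 9):
    print(clicks, "clicks ->", round(scam_exposure_after(clicks), 3))
```

Even with a modest boost per click, exposure grows geometrically, which is the "self-reinforcing feedback loop" the documents describe.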
An internal presentation in May 2025 was said to put a number on how deeply the platform's ad ecosystem was intertwined with the global fraud economy, estimating that one-third of the scams that succeeded in the U.S. were due to advertising on Meta's platforms.
Regulatory & Legal Implications
The disclosures arrived just as the US and UK governments began scrutinising the company's activities more closely than ever before.
- The U.S. Securities and Exchange Commission (SEC) is said to be looking into whether Meta has had any part in the promotion of fraudulent financial ads.
- The UK’s Financial Conduct Authority (FCA) found that Meta’s platforms were the main source of scams related to online payments, with reported losses in 2023 exceeding those of all other social platforms combined.
Meta’s spokesperson, Andy Stone, at first denied the accusations, stating that the figures mentioned in the leak were “rough and overly-inclusive”; nevertheless, he conceded that the company’s consistent efforts toward enforcement had negatively impacted revenue and would continue to do so.
Operational Challenges & Policy Gaps
The internal documents also reveal the weaknesses in Meta's day-to-day operations when it comes to the implementation of its own policies.
- Because of the large number of employees laid off in 2023, the whole department that dealt with advertiser-brand impersonation was said to have been dissolved.
- Scam ads were categorised as a "low severity" issue, which was more of a "bad user experience" than a critical security risk.
- At the end of 2023, users were submitting around 100,000 legitimate scam reports per week, of which Meta dismissed or rejected 96%.
Human Impact: When Fraud Becomes Personal
The financial and ethical issues have tangible human consequences. The Reuters investigation documented multiple cases of individuals defrauded through hijacked Meta accounts.
One striking example involves a Canadian Air Force recruiter, whose hacked Facebook account was used to promote fake cryptocurrency schemes. Despite over a hundred user reports, Meta failed to act for weeks, during which several victims, including military colleagues, lost tens of thousands of dollars.
The case underscores not just platform negligence, but also the difficulty of law enforcement collaboration. Canadian authorities confirmed that funds traced to Nigerian accounts could not be recovered due to jurisdictional barriers, a recurring issue in transnational cyber fraud.
Ethical and Cybersecurity Implications
The investigation raises critical questions, not least from a cyber-policy perspective:
- Platform Accountability: By prioritising revenue over user safety, Meta is acting against the principles of responsible digital governance.
- Transparency in Ad Ecosystems: The lack of transparency in digital advertising systems makes it very easy for dishonest actors to use automated processes with very little supervision.
- Algorithmic Responsibility: Algorithms that boost the visibility and targeting of misleading content can be considered directly implicated in the fraud.
- Regulatory Harmonisation: The presence of different and disconnected enforcement frameworks across jurisdictions is a drawback to the efforts in dealing with cross-border cybercrime.
- Public Trust: Users’ trust in the digital world is mainly dependent on the safety level they see and the accountability of the companies.
Conclusion
Meta’s records show a very unpleasant mix of profit, laxity, and failure in the policy area concerning scam-related ads. The platform’s readiness to accept and even profit from fraudulent players, though admitting the damage they cause, calls for an immediate global rethinking of advertising ethics, regulatory enforcement, and algorithmic transparency.
With the expansion of its AI-driven operations and advertising networks, protecting the users of Meta must evolve from being just a public relations goal to being a core business necessity, thus requiring verifiable accountability measures, independent audits, and regulatory oversight. It is an undeniable fact that there are billions of users who count on Meta’s platforms for their right to digital safety, which is why this right must be respected and enforced rather than becoming optional.
References
- https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/
- https://www.indiatoday.in/technology/news/story/leaked-docs-claim-meta-made-16-billion-from-scam-ads-even-after-deleting-134-million-of-them-2815183-2025-11-07