#FactCheck - "Viral Video Claimed as Evidence of Attacks in Bangladesh Is False & Misleading"
Executive Summary:
A video of a child covered in ash is circulating as alleged evidence of attacks against Hindu minorities in Bangladesh. However, our investigation revealed that the video is actually from Gaza, Palestine, and was filmed in the aftermath of an Israeli airstrike in July 2024. The claim linking the video to Bangladesh is false and misleading.

Claim:
A viral video claims to show a child in Bangladesh covered in ash as evidence of attacks on Hindu minorities.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video, which led us to an X post by the Quds News Network. The post identified the video as footage from Gaza, Palestine, specifically capturing the aftermath of an Israeli airstrike on the Nuseirat refugee camp in July 2024.
The caption of the post reads, “Journalist Hani Mahmoud reports on the deadly Israeli attack yesterday which targeted a UN school in Nuseirat, killing at least 17 people who were sheltering inside and injuring many more.”
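For readers who want to replicate this step, keyframes can be extracted from a clip programmatically before uploading them to a reverse image search tool such as Google Lens. The sketch below is a minimal illustration using OpenCV; the filename and the one-frame-per-second sampling rate are placeholder assumptions, not details from the investigation.

```python
# Minimal sketch: extract periodic keyframes from a clip with OpenCV so they
# can be uploaded to a reverse image search tool such as Google Lens.
# "viral_clip.mp4" and the one-frame-per-second sampling rate are
# illustrative assumptions.
import cv2

video = cv2.VideoCapture("viral_clip.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 25  # fall back if FPS metadata is missing

frame_index, saved = 0, 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    # Save roughly one frame per second of footage.
    if frame_index % int(fps) == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} keyframes for reverse image search")
```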

To verify further, we examined the video footage and noticed the watermark of Al Jazeera News. We then found the same video posted on Instagram on 14 July 2024, which confirmed that the child in the video had survived the massacre caused by the Israeli airstrike on a school shelter in Gaza.

Additionally, we found the same video uploaded to CBS News' YouTube channel, where it was clearly captioned as "Video captures aftermath of Israeli airstrike in Gaza", further confirming its true origin.
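Once candidate matches are found across sources, frames can also be compared programmatically to confirm that two clips show the same footage. Below is a minimal sketch using perceptual hashing with the Pillow and ImageHash libraries; the filenames and the distance threshold are illustrative assumptions, not artefacts from the investigation.

```python
# Minimal sketch, assuming the Pillow and ImageHash libraries are installed:
# compare a frame from the viral clip with a frame from verified footage
# using perceptual hashing. A small Hamming distance suggests the same imagery.
# The filenames and the threshold of 8 are illustrative placeholders.
from PIL import Image
import imagehash

viral = imagehash.phash(Image.open("keyframe_viral.jpg"))
verified = imagehash.phash(Image.open("keyframe_verified.jpg"))

distance = viral - verified  # Hamming distance between the two 64-bit hashes
print(f"Hamming distance: {distance}")
if distance <= 8:  # an illustrative, conservative threshold
    print("Frames very likely come from the same footage")
```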

We found no credible reports or evidence linking this video to any incident in Bangladesh, which clearly indicates that the viral video was falsely attributed to Bangladesh.
Conclusion:
The video circulating on social media, which shows a child covered in ash as alleged evidence of attacks against Hindu minorities, is false and misleading. Our investigation found that the video originated in Gaza, Palestine, and documents the aftermath of an Israeli airstrike in July 2024.
- Claim: A video shows a child in Bangladesh covered in ash as evidence of attacks on Hindu minorities.
- Claimed On: Facebook
- Fact Check: False & Misleading

Introduction
In July 2025, the Digital Trust & Safety Partnership (DTSP) achieved a significant milestone with the formal acceptance of its Safe Framework Specification as an international standard, ISO/IEC 25389. This is the first globally recognised standard focused exclusively on ensuring a safe online experience for people using digital products and services.
Significance of the New Framework
Fundamentally, ISO/IEC 25389 gives organisations a structured framework for identifying, managing, and mitigating risks associated with conduct or content. Developed under the direction of ISO/IEC's Joint Technical Committee 1 (JTC 1), the standard integrates DTSP's best practices and offers a precise way to evaluate organisational maturity in trust and safety. Crucially, it provides the first unified international benchmark, allowing organisations globally to coordinate on common safety commitments and assess progress consistently.
Other Noteworthy Standards and Frameworks
While ISO/IEC 25389 is pioneering, it’s not the only framework shaping digital trust and safety:
- One of the main outcomes of the United Nations’ 2024 Summit for the Future was the UN's Global Digital Compact, which describes cross-border cooperation on secure and reliable digital environments with an emphasis on countering harmful content, upholding online human rights, and creating accountability standards.
- The World Economic Forum’s Digital Trust Framework defines the goals and values implicit in the concept of digital trust, such as cybersecurity, privacy, transparency, redressability, auditability, fairness, interoperability, and safety. It also provides a roadmap to digital trustworthiness that incorporates these dimensions.
- The Framework for Integrity, Security and Trust (FIST), launched at the CyberPeace Summit 2023 at the USI of India in New Delhi, calls for a multistakeholder approach to co-create solutions and best practices for digital trust and safety.
- While its implementation rollout is still being finalised, India's Digital Personal Data Protection Act, 2023 (DPDP Act) and its Rules (2025) aim to strike a balance between individual rights and data-processing needs by laying the groundwork for data security and privacy.
- India is also developing frameworks for cutting-edge technologies like artificial intelligence. Using a hub-and-spoke model under the IndiaAI Mission, the AI Safety Institute was established in early 2025 with the goal of creating standards for safe, ethical, and trustworthy AI systems. Furthermore, the Bureau of Indian Standards (BIS) is drafting AI standards with an emphasis on safety and dependability.
- Google's DigiKavach program (2023) and Google Safety Engineering Centre (GSEC) in Hyderabad are concrete efforts to support digital safety and fraud prevention in India's tech sector.
What It Means for India
India is already claiming its place in discussions about safety and trust around the world. Google's June 2025 safety charter for India, for example, highlights how India's distinct digital scale, diversity, and vast threat landscape provide insights that inform global cybersecurity strategies.
For India's digital ecosystem, ISO/IEC 25389 comes at a critical juncture. The rapid adoption of digital technologies, including the growth of digital payments, e-governance, and artificial intelligence, and a concomitant rise in digital harms have made global best practices in trust and safety urgently needed. Through its guidelines, ISO/IEC 25389 provides a reference benchmark that Indian startups, government agencies, and tech companies can use to improve their safety standards.
Conclusion
A global trust-and-safety standard like ISO/IEC 25389 is essential for making technology safer for people, complementing the broader adoption of security- and safety-by-design principles in technological product development. By implementing this framework in tandem with its growing domestic regulatory framework (such as the DPDP Act and AI safety policies), India can improve user protection, build its reputation globally, and solidify its position as a key player in the creation of a safer, more resilient digital future.
References
- https://dtspartnership.org/the-safe-framework-specification/
- https://dtspartnership.org/press-releases/dtsps-safe-framework-published-as-an-international-standard/
- https://www.weforum.org/stories/2024/04/united-nations-global-digital-compact-trust-security/
- https://economictimes.indiatimes.com/tech/technology/google-releases-safety-charter-for-india-senior-exec-details-top-cyber-threat-actors-in-the-country/articleshow/121903651.cms
- https://initiatives.weforum.org/digital-trust/framework
- https://government.economictimes.indiatimes.com/news/secure-india/the-launch-of-fist-framework-for-integrity-security-and-trust/103302090

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are lending their expertise, investors are injecting money, and companies ranging from small financial firms to tech and payments giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of technology have made it easier—and more profitable—to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfakes report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious had the perpetrators decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfakes report estimates that at least 98 percent of all deepfakes are pornographic and that 99 percent of the victims are women. A study by Harvard University refrained from using the term “pornography” for the creation, sharing, or threatened creation/sharing of sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews with victims of deepfake porn conducted last year, the study said 63 percent of participants described experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are broadly two types of deepfakes: one featuring the faces of real humans and another featuring computer-generated, hyper-realistic faces of non-existent people. The first category is particularly concerning and is created by superimposing the faces of real people on existing pornographic images and videos—a task made simple by AI tools.
During the investigation, platforms hosting deepfake porn of film stars such as Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna, as well as TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain, were encountered. It takes a few minutes and as little as Rs 40 for a user to create a high-quality, 15-second fake porn video on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations and hide behind frivolous disclaimers such as: the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. The irony of these disclaimers is lost on no one, especially when the platforms host thousands of non-consensual deepfake pornographic videos.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and for creating porn of whomever a user wants, even taking requests for celebrities. To encourage creators further, they enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and creators are not targeting only celebrities: ordinary people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a gateway process payment for sexual content that lacks consent? It seems financial institutions and banks are not paying much attention to this legal question. The investigation found many such websites accepting payments through services like VISA, Mastercard, and Stripe.
Those who have failed to register or partner with these fintech giants have found a way out. While some direct users to third-party sites, others manually collect money through the personal PayPal accounts of their employees or stakeholders, which potentially violates the platform's terms of use banning the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently on its way to designing dedicated legislation to address issues arising from deepfakes, though existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecuting and convicting offenders is extremely difficult for law enforcement agencies, as this is a borderless crime that sometimes involves several countries.
A victim can register a police complaint under Sections 66E and 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect the digital personal data of users. The Union Government has also issued an advisory to social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnaw is expected to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/

Executive Summary:
Recently, a viral social media post alleged that the Delhi Metro Rail Corporation Ltd. (DMRC) had increased ticket prices following the BJP’s victory in the Delhi Legislative Assembly elections. After thorough research and verification, we have found this claim to be misleading and entirely baseless. Authorities have asserted that no fare hike has been declared.
Claim:
Viral social media posts have claimed that the Delhi Metro Rail Corporation Ltd. (DMRC) increased metro fares following the BJP's victory in the Delhi Legislative Assembly elections.


Fact Check:
After thorough research, we conclude that the claims regarding a fare hike by the Delhi Metro Rail Corporation Ltd. (DMRC) following the BJP’s victory in the Delhi Legislative Assembly elections are misleading. Our review of DMRC’s official website and social media handles found no mention of any fare increase. Furthermore, the official X (formerly Twitter) handle of DMRC has clarified that no such price hike has been announced. We urge the public to rely on verified sources for accurate information and to refrain from spreading misinformation.

Conclusion:
Upon examining the alleged fare hike, it is evident that the increase pertains to Bengaluru, not Delhi. To verify this, we reviewed the official website of Bangalore Metro Rail Corporation Limited (BMRCL) and cross-checked the information with appropriate evidence, including relevant images. Our findings confirm that no fare hike has been announced by the Delhi Metro Rail Corporation Ltd. (DMRC).

- Claim: Delhi Metro fare hike after BJP’s victory in the Delhi Assembly elections
- Claimed On: X (formerly known as Twitter)
- Fact Check: False and Misleading