#FactCheck - "Viral Video Claimed as Evidence of Attacks in Bangladesh Is False & Misleading"
Executive Summary:
A video of a child covered in ash is circulating as alleged evidence of attacks against Hindu minorities in Bangladesh. However, our investigation revealed that the video is actually from Gaza, Palestine, and was filmed in the aftermath of an Israeli airstrike in July 2024. The claim linking the video to Bangladesh is false and misleading.

Claims:
A viral video claims to show a child in Bangladesh covered in ash as evidence of attacks on Hindu minorities.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video, which led us to an X post by the Quds News Network. The post identified the video as footage from Gaza, Palestine, specifically capturing the aftermath of an Israeli airstrike on the Nuseirat refugee camp in July 2024.
The caption of the post reads, “Journalist Hani Mahmoud reports on the deadly Israeli attack yesterday which targeted a UN school in Nuseirat, killing at least 17 people who were sheltering inside and injuring many more.”
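The keyframe step in this workflow can be reproduced programmatically. Below is a minimal sketch of a scene-change heuristic for picking frames worth reverse-searching; it assumes the clip has already been decoded into a list of numpy arrays (e.g. via OpenCV), and the threshold value is an illustrative choice, not a standard.

```python
import numpy as np

def select_keyframes(frames, threshold=20.0):
    """Keep indices of frames whose mean absolute pixel difference
    from the last kept frame exceeds `threshold` (a simple
    scene-change heuristic for reverse-image searching)."""
    keyframes = []
    last = None
    for i, frame in enumerate(frames):
        f = frame.astype(np.float32)
        if last is None or np.abs(f - last).mean() > threshold:
            keyframes.append(i)
            last = f
    return keyframes

# Synthetic clip: 10 dark frames, then 10 bright frames (a hard cut).
clip = [np.zeros((48, 48), dtype=np.uint8)] * 10 + \
       [np.full((48, 48), 200, dtype=np.uint8)] * 10
print(select_keyframes(clip))  # → [0, 10]
```

Each selected frame can then be exported as an image and run through a tool such as Google Lens or a reverse-image search engine.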

To further verify, we examined the footage and noticed the watermark of Al Jazeera News. We found the same post on Al Jazeera's Instagram account, dated 14 July 2024, which confirmed that the child in the video had survived the massacre caused by the Israeli airstrike on a school shelter in Gaza.

Additionally, we found the same video uploaded to CBS News' YouTube channel, where it was clearly captioned as "Video captures aftermath of Israeli airstrike in Gaza", further confirming its true origin.

We found no credible reports or evidence linking this video to any incident in Bangladesh. This clearly shows that the viral video was falsely attributed to Bangladesh.
Conclusion:
The video circulating on social media showing a child covered in ash, shared as evidence of attacks against Hindu minorities in Bangladesh, is false and misleading. The investigation shows that the video originated in Gaza, Palestine, and documents the aftermath of an Israeli airstrike in July 2024.
- Claims: A video shows a child in Bangladesh covered in ash as evidence of attacks on Hindu minorities.
- Claimed by: Facebook
- Fact Check: False & Misleading

Introduction
AI is transforming the way work is done and redefining the nature of jobs over the next decade. For India, the question is not just which duties machines will take over, but how millions of workers will move to other sectors, which skills will become more sought after, and how policy will have to change in response. This article draws on recent labour data from India's Periodic Labour Force Survey (PLFS, 2023-24), discusses vulnerabilities to disruption by location and social group, and recommends viable actions to minimise risks and maximise economic benefits.
India’s Labour Market and Its Automation Readiness
According to India’s Periodic Labour Force Survey (PLFS), the labour market is changing and growing. Labour force participation improved to 60.1 per cent in 2023-24 versus 57.9 per cent the year before, and the worker-population ratio also improved, signalling increased employment uptake in both rural and urban areas (PLFS, 2023-24). Female participation has also risen. However, a large portion of the job market remains low-wage and informal, with most jobs being routine and thus highly vulnerable to automation. The statistics indicate a two-tiered reality in the Indian labour market: more people are working, but structural weakness persists.
AI-Driven Automation’s Impact on Tasks and Emerging Opportunities
AI-driven automation, for the most part, affects the task components of jobs rather than wiping out whole jobs. The most automatable tasks are routine and manual, and more recent developments in AI have extended to non-routine cognitive tasks like document review, customer query handling, basic coding and first-level decision-making. Global studies point to two concurrent findings. First, a share of existing tasks will be automated or accelerated. Second, entirely new tasks and roles will emerge around data annotation, the operation of AI systems, prompt engineering, algorithmic supervision and AI compliance (World Bank, 2025; McKinsey, 2017).
In India, this change will be uneven across sectors. Manufacturing, back-office IT services, retail and parts of financial services will see the highest rates of disruption, given their concentration of routine processes and ease of technology adoption. In contrast, healthcare, education, high-tech manufacturing and AI safety auditing are positioned to create new skilled jobs. NITI Aayog estimates large GDP gains from the adoption of AI but emphasises that India has to invest simultaneously in job creation and reskilling to realise those gains (NITI Aayog, 2025).
Groups with Highest Vulnerability in the Transition to Automation
The PLFS emphasises that a large portion of the Indian population lacks formal employment, with minimal social protection and little access to formal training. The risk of displacement is likely to be greatest for informal workers, who make up almost 90% of India’s labour force and carry out low-skilled, repetitive jobs in the manufacturing and retail industries (PLFS, 2023-24). Women and young people in low-level service jobs also face greater transition pressure unless reskilling and placement efforts are tailored to them. Meanwhile, major cities and urban centres are likely to capture most of the new skilled opportunities, widening the geographic and social divide.
The Skills and Supply Challenge
While India’s education and research ecosystem is expanding, there remain significant gaps in preparing the workforce for AI-driven change. Given the vulnerabilities highlighted earlier, AI-focused reskilling must be a priority to equip workers with practical skills that meet industry needs. Short modular programs in areas such as cloud technologies, AI operations, data annotation, human-AI interaction, and cybersecurity can provide workers with employable skills. Particular attention should be given to routine-intensive sectors like manufacturing, retail, and back-office services, as well as to regions with high informal employment or lower access to formal training. Public-private partnerships and localised training initiatives can help ensure that reskilling translates into concrete job opportunities rather than purely theoretical knowledge (NITI Aayog, 2025).
The Way Forward
To facilitate the transition, policy should focus on three interconnected goals: safeguarding the vulnerable, developing skills at scale, and directing innovation so that its benefits are widely shared.
- Protect the vulnerable through social buffers. Provide informal workers with social protection in the form of portable benefits, temporary income insurance based on reskilling, and earned training leave. While the new labour codes provide essential protections such as unemployment allowances and minimum wage standards, they could be strengthened by incorporating explicit provisions for reskilling. This would better support informal workers during job transitions and enhance workforce adaptability.
- Short modular courses on cloud computing, cybersecurity, data annotation, AI operations, and human-AI interaction should be planned through collaboration between public and private training providers. Special preference should be given to industry-certified certifications and apprenticeship-based placements. These apprenticeships should be made accessible in multiple languages to ensure inclusivity. Existing government initiatives, such as NASSCOM’s Future Skills Prime, need better outreach and marketing to reach the workforce effectively.
- Strengthen local labour-market intermediaries. Close the gap between local demand and labour supply by enhancing placement services and government-subsidised internship programmes for displaced workers, and by encouraging firms to hire and train locally.
- Invest in AI literacy, AI ethics, and basic education. Democratise access to research and learning by introducing AI literacy in schools, increasing STEM seats in universities, and creating regional AI labs (NITI Aayog, 2025).
- Encourage AI adoption that creates jobs rather than replaces them. Fiscal and regulatory incentives should prioritise AI tools that augment worker productivity in routine roles instead of eliminating positions. Public procurement can support firms that demonstrate responsible and inclusive deployment of AI, ensuring technology benefits both business and workforce.
- Supervise and oversee the transition. Use PLFS and real-time administrative data to monitor shrinking and expanding occupations. High-frequency labour-market dashboards will enable targeted interventions in regions where displacement accelerates.
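The monitoring idea in the last point can be illustrated with a toy calculation: compare employment counts for each occupation across two survey rounds and rank them by change. The occupation names and figures below are hypothetical placeholders, not PLFS data.

```python
# Hypothetical employment counts (in thousands) from two survey rounds.
round_prev = {"data entry": 820, "nursing": 310, "retail cashier": 540}
round_curr = {"data entry": 700, "nursing": 360, "retail cashier": 490}

def occupation_trends(prev, curr):
    """Return (occupation, percent change) pairs, most-shrinking first,
    so a dashboard can surface occupations needing intervention."""
    change = {
        occ: round(100 * (curr[occ] - prev[occ]) / prev[occ], 1)
        for occ in prev
    }
    return sorted(change.items(), key=lambda kv: kv[1])

for occ, pct in occupation_trends(round_prev, round_curr):
    print(f"{occ}: {pct:+.1f}%")
# → data entry: -14.6%, retail cashier: -9.3%, nursing: +16.1%
```

A real dashboard would run the same comparison per district and per quarter, flagging regions where the most negative changes concentrate.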
Conclusion
The integration of AI will significantly impact the future of the Indian workforce, but policy will determine its effect on the labour market. The PLFS indicates rising employment alongside the structural weakness of informal and routine work. Evidence from the Indian market and international research shows that the right combination of social protection, skills building and responsible technology deployment can turn disruption into a path of upward mobility. The window for action is limited. Whether India realises the productivity and GDP gains projected by national research will depend on the investments it makes now in labour-market infrastructure; it is crucial that these efforts capture those gains and deliver a fair, inclusive transition for workers.
References
- Annual Report Periodic Labour Force Survey (PLFS) JULY 2022 - JUNE 2023.
- Future Jobs: Robots, Artificial Intelligence, and Digital Platforms in East Asia and Pacific, World Bank.
- Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills, and Wages, McKinsey Global Institute
- Roadmap for Job Creation in the AI Economy, NITI Aayog
- India central bank chief warns of financial stability risks from growing use of AI, Reuters
- AI Cyber Attacks Statistics 2025, SQ Magazine.

Introduction
“an intermediary, on whose computer resource the information is stored, hosted or published, upon receiving actual knowledge in the form of an order by a court of competent jurisdiction or on being notified by the Appropriate Government or its agency under clause (b) of sub-section (3) of section 79 of the Act, shall not host, store or publish any unlawful information, which is prohibited under any law for the time being in force in relation to the interest of the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation; incitement to an offence relating to the above, or any information which is prohibited under any law for the time being in force”
Law grows by confronting its absences; it heals itself through its own gaps. The most recent notification from MeitY, G.S.R. 775(E) dated October 22, 2025, is an illustration of that self-correction. On November 15, 2025, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, will come into effect. They accomplish two crucial things: they restrict who can invoke "actual knowledge” to initiate takedowns and require senior-level scrutiny of those directives. By doing this, they maintain genuine security requirements while guiding India’s content governance system towards more transparent due process.
When Regulation Learns Restraint
To understand the jurisprudence of revision, one must recognise that regulation, in its truest form, must know when to pause. The 2025 amendment marks that rare moment when the government chooses precision over power, when regulation learns restraint. The amendment revises Rule 3(1)(d) of the 2021 Rules. Social media sites, hosting companies, and other digital intermediaries are still required to take action within 36 hours of receiving “actual knowledge” that a piece of content is illegal (e.g. poses a threat to public order, sovereignty, decency, or morality). However, “actual knowledge” now arises only in the following situations:
(i) a court order from a court of competent jurisdiction, or
(ii) a reasoned written intimation from a duly authorised government officer not below Joint Secretary rank (or equivalent)
The authorised authority in matters involving the police “must not be below the rank of Deputy Inspector General of Police (DIG)”. This creates a well-defined, senior-accountable channel in place of a diffuse trigger.
There are two more new structural guardrails. The Rules first establish a monthly assessment of all takedown notifications by a Secretary-level officer of the relevant government to test necessity, proportionality, and compliance with India’s safe harbour provision under Section 79(3) of the IT Act. Second, in order for platforms to act precisely rather than in an expansive manner, takedown requests must be accompanied by legal justification, a description of the illegal act, and precise URLs or identifiers. The cumulative result of these guardrails is that each removal has a proportionality check and a paper trail.
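As a rough illustration (not an official schema), the notice requirements above can be thought of as a structural check: every request needs a legal basis, a description of the unlawful act, specific URLs or identifiers, and a sufficiently senior issuing authority. The field names and rank labels below are hypothetical.

```python
REQUIRED_FIELDS = ("legal_basis", "act_description", "urls", "issuer_rank")
# Issuers deemed senior enough under the amendment (illustrative labels).
AUTHORISED_RANKS = {"joint_secretary", "dig_police", "court_order"}

def validate_takedown(notice: dict) -> list:
    """Return a list of problems; an empty list means the notice
    carries everything the amended Rule 3(1)(d) expects."""
    problems = [f"missing field: {f}"
                for f in REQUIRED_FIELDS if not notice.get(f)]
    if notice.get("issuer_rank") and notice["issuer_rank"] not in AUTHORISED_RANKS:
        problems.append("issuer below authorised rank")
    return problems

# A vague notice: no legal basis, no URLs, issued by a junior officer.
vague = {"legal_basis": "", "act_description": "objectionable post",
         "urls": [], "issuer_rank": "inspector"}
print(validate_takedown(vague))
```

A compliance team could run a check of this shape before acting on a request, documenting each rejection as part of the paper trail the Rules now require.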
Due Process as the Law’s Conscience
Indian jurisprudence has been debating what constitutes “actual knowledge” for over a decade. The Supreme Court in Shreya Singhal (2015) connected an intermediary’s removal obligation to notifications from official channels or court orders rather than vague notice. But over time, that line became hazy due to enforcement practices and some court rulings, raising concerns about over-removal and safe-harbour loss under Section 79(3). Even while more recent decisions questioned the “reasonable efforts” of intermediaries, the 2025 amendment institutionally pays homage to Shreya Singhal’s ethos by refocusing “actual knowledge” on formal reviewable communications from senior state actors or judges.
The amendment also introduces an internal constitutionalism to executive orders by mandating monthly audits at the Secretary level. The state is required to re-justify its own orders on a rolling basis, evaluating them against proportionality and necessity, which are criteria that Indian courts are increasingly requesting for speech restrictions. Clearer triggers, better logs, and less vague “please remove” communications that previously left compliance teams in legal limbo are the results for intermediaries.
The Court’s Echo in the Amendment
The essence of this amendment is echoed in the Karnataka High Court’s ruling that the Sahyog Portal, a government portal used to coordinate takedown orders under Section 79(3)(b), is constitutional. The HC rejected X’s (formerly Twitter’s) petition contesting the legitimacy of the portal in September. The company had claimed that by giving nodal officers the authority to issue takedown orders without court review, the portal permitted arbitrary content removals. The court disagreed, holding that the officers’ acts were in accordance with Section 79(3)(b) and that they were “not dropping from the air but emanating from statutes.” The amendment turns compliance into conscience by conforming to the Sahyog Portal verdict, reiterating that due process is the moral grammar of governance rather than just a formality.
Conclusion: The Necessary Restlessness of Law
Law cannot afford stillness; it survives through self-doubt and reinvention. The 2025 amendment, too, is not a destination, it’s a pause before the next question, a reminder that justice breathes through revision. As befits a constitutional democracy, India’s path to content governance has been combative and iterative. The next rule-making cycle has been sharpened by the stays, split judgments, and strike-downs that have resulted from strategic litigation centred on the IT Rules, safe harbour, government fact-checking, and blocking orders. Lessons learnt are reflected in the 2025 amendment: review triumphs over opacity; specificity triumphs over vagueness; and due process triumphs over discretion. A digital republic balances freedom and force in this way.
Sources
- https://pressnews.in/law-and-justice/government-notifies-amendments-to-it-rules-2025-strengthening-intermediary-obligations/
- https://www.meity.gov.in/static/uploads/2025/10/90dedea70a3fdfe6d58efb55b95b4109.pdf
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2181719
- https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/
- https://www.medianama.com/2025/10/223-content-takedown-rules-online-platforms-36-hr-deadline-officer-rank/#:~:text=It%20specifies%20that%20government%20officers,Deputy%20Inspector%20General%20of%20Police%E2%80%9D.

Introduction
The 2023-24 annual report of the Union Home Ministry states that WhatsApp is among the primary platforms being targeted for cyber fraud in India, followed by Telegram and Instagram. Cybercriminals have been conducting frauds like lending and investment scams, digital arrests, romance scams, job scams and online phishing through these platforms, traumatising victims and overburdening law enforcement, which is not always equipped to recover their money. WhatsApp’s scale, end-to-end encryption, and ease of mass messaging make it both a powerful medium of communication and a vulnerable target for bad actors. With over 500 million users in India, it is a prime target for scammers running illegal lending apps, phishing schemes, and identity fraud.
Action Taken by WhatsApp
In response to this worrying trend, and in keeping with Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [updated as of 6.4.2023], WhatsApp has been banning millions of Indian accounts through automated tools, AI-based detection systems, and behaviour analysis that can detect suspicious activity and misuse. In July 2021, it banned over 2 million accounts. By February 2025, this number had shot up to over 9.7 million, with 1.4 million accounts removed proactively, that is, before any user reported them. This may mean that attacks have increased, that WhatsApp’s detection systems have improved, or both; what it surely signals is the acknowledgement of a deeper, systemic challenge to India’s digital ecosystem and the growing scale and sophistication of cyber fraud, especially on encrypted platforms.
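WhatsApp's actual detection systems are proprietary, but behaviour analysis of the kind described above typically combines simple rate and novelty signals that work on metadata alone, preserving encryption. The following is a purely illustrative toy heuristic; the thresholds and field names are assumptions, not WhatsApp's.

```python
def flag_account(events, rate_limit=100, novelty_ratio=0.8):
    """Flag an account whose hourly message volume is high AND mostly
    aimed at recipients it has never messaged before -- a pattern
    typical of mass-messaging scams (toy heuristic, not a real system)."""
    sent = len(events)
    new_recipients = len({e["to"] for e in events if e["first_contact"]})
    return sent > rate_limit and new_recipients / max(sent, 1) > novelty_ratio

# A burst of 150 messages in one hour, 140 of them to brand-new contacts.
burst = [{"to": f"user{i}", "first_contact": i < 140} for i in range(150)]
print(flag_account(burst))  # → True
```

The point of the sketch is that such signals require no access to message content, which is how proactive moderation can coexist with end-to-end encryption.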
CyberPeace Insights
- Under Rule 4(1)(d) of the IT Rules, 2021, significant social media intermediaries (SSMIs) are required to implement automated tools to detect harmful content, but enforcement has been uneven. WhatsApp’s action, given its scale and transparency, demonstrates what effective compliance with proactive moderation can look like.
- Platforms must treat fraud not just as a content violation but as a systemic abuse of the platform’s infrastructure.
- India is not alone in facing this challenge. The EU’s Digital Services Act (DSA), for instance, mandates large platforms to conduct regular risk assessments, maintain algorithmic transparency, and allow independent audits of their safety mechanisms. These steps go beyond just removing bad content by addressing the design of the platform itself. India can draw from this by codifying a baseline standard for fraud detection, requiring platforms to publish detailed transparency reports, and clarifying the legal expectations around proactive monitoring. Importantly, regulators must ensure this is done without compromising encryption or user privacy.
- WhatsApp’s efforts are part of a broader, emerging ecosystem of threat detection. The Indian Cyber Crime Coordination Centre (I4C) is now sharing threat intelligence with platforms like Google and Meta to help take down scam domains, malicious apps, and sponsored Facebook ads promoting illegal digital lending. This model of public-private intelligence collaboration should be institutionalized and scaled across sectors.
Conclusion: Turning Enforcement into Policy
WhatsApp’s mass account ban is not just about enforcement but an example of how platforms must evolve. As India becomes increasingly digital, it needs a forward-looking policy framework that supports proactive monitoring, ethical AI use, cross-platform coordination, and user safety. The digital safety of users in India and those around the world must be built into the architecture of the internet.
References
- https://scontent.xx.fbcdn.net/v/t39.8562-6/486805827_1197340372070566_282096906288453586_n.pdf?_nc_cat=104&ccb=1-7&_nc_sid=b8d81d&_nc_ohc=BRGwyxF87MgQ7kNvwHyyW8u&_nc_oc=AdnNG2wXIN5F-Pefw_FTt2T4K6POllUyKpO7nxwzCWxNgQEkVLllHmh81AHT2742dH8&_nc_zt=14&_nc_ht=scontent.xx&_nc_gid=iaQzNQ8nBZzxuIS4rXLOkQ&oh=00_AfEnbac47YDXvymJ5vTVB-gXteibjpbTjY5uhP_sMN9ouw&oe=67F95BF0
- https://scontent.xx.fbcdn.net/v/t39.8562-6/217535270_342765227288666_5007519467044742276_n.pdf?_nc_cat=110&ccb=1-7&_nc_sid=b8d81d&_nc_ohc=aj6og9xy5WQQ7kNvwG9Vzkd&_nc_oc=AdnDtVbrQuo4lm3isKg5O4cw5PHkp1MoMGATVpuAdOUUz-xyJQgWztGV1PBovGACQ9c&_nc_zt=14&_nc_ht=scontent.xx&_nc_gid=gabMfhEICh_gJFiN7vwzcA&oh=00_AfE7lXd9JJlEZCpD4pxW4OOc03BYcp1e3KqHKN9-kaPGMQ&oe=67FD6FD3
- https://www.hindustantimes.com/india-news/whatsapp-is-most-used-platform-for-cyber-crimes-home-ministry-report-101735719475701.html
- https://www.indiatoday.in/technology/news/story/whatsapp-bans-over-97-lakhs-indian-accounts-to-protect-users-from-scam-2702781-2025-04-02