# Fact Check: Viral Footage from Bangladesh Incorrectly Portrayed as Immigrant March for Violence in Assam
Executive Summary:
We conducted a comprehensive fact check of a viral social media video using reverse image search. The video circulated with the claim that it shows illegal Bangladeshi immigrants in Assam's Goalpara district carrying homemade spears and attacking police and government officials. Our findings confirm that this claim is false. The video was filmed in the Kishoreganj district of Bangladesh on July 1, 2025, during a clash between two rival factions of the Bangladesh Nationalist Party (BNP). The footage has been deliberately taken out of context and attributed to Assam in order to spread false information.

Claim:
The viral video shows illegal Bangladeshi immigrants armed with spears marching in Goalpara, Assam, with the intention of attacking police or officials.

Fact Check:
To verify the claim, we performed a reverse image search on key frames from the video. This led us to a number of news articles and social media posts from Bangladeshi sources. These reports confirm that the events took place in Ashtagram, in the Kishoreganj district of Bangladesh, during a violent political confrontation between factions of the Bangladesh Nationalist Party (BNP) on July 1, 2025, which left about 40 people injured.
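For readers interested in the mechanics, the sketch below shows one simple way key frames can be pulled from a clip before they are submitted to a reverse image search. It is a minimal illustration only, assuming Python with OpenCV; the video file name is a placeholder. The saved frames can then be uploaded manually to services such as Google Lens, Yandex Images, or TinEye.

```python
# Minimal sketch: extract evenly spaced frames from a video for reverse image search.
# Assumes OpenCV is installed (pip install opencv-python); file names are illustrative.
import cv2

def extract_key_frames(video_path: str, out_prefix: str = "frame", every_n_seconds: int = 5) -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25   # fall back if FPS metadata is missing
    step = int(fps * every_n_seconds)       # save one frame every N seconds
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:03d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    # "viral_clip.mp4" is a placeholder for the video being checked.
    count = extract_key_frames("viral_clip.mp4")
    print(f"Saved {count} frames; upload them to a reverse image search service.")
```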

Local media, in particular Channel i News, published full accounts of the incident and carried images matching the viral video. The individuals seen in the footage were engaged in a local political fight and wielding makeshift spears; this was not a cross-border attack. The Assam Police also issued an official response on X (formerly Twitter) denying the claim and noting that no such incident occurred in Goalpara or in any other district of Assam.


Conclusion:
Based on our research, we conclude that the viral video does not show unlawful Bangladeshi immigrants in Assam. It depicts a political clash in Kishoreganj, Bangladesh, on July 1, 2025. The claim attached to the video is false and is intended to mislead the public about where the incident occurred and what it depicts.
Claim: Video shows illegal migrants with spears moving in groups to assault police!
Claimed On: Social Media
Fact Check: False and Misleading
Related Blogs

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are lending their expertise, investors are injecting money, and companies ranging from small financial firms to tech giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of technology have made it easier, and more profitable, to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfake report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious if the perpetrators had decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfake Report estimates that at least 98 percent of all deepfakes are porn and 99 percent of their victims are women. A study by Harvard University refrained from using the term “pornography” for the creation, sharing, or threatened creation/sharing of sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews with victims of deepfake porn conducted last year, the study said 63 percent of participants described experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online and offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are broadly two types of deepfakes: one featuring the faces of real people and another featuring computer-generated, hyper-realistic faces of non-existent people. The first category is particularly concerning: it is created by superimposing the faces of real people onto existing pornographic images and videos, a task made simple by AI tools.
During the investigation, platforms were encountered hosting deepfake porn of stars like Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna, as well as TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain. It takes a few minutes and as little as Rs 40 for a user to create a high-quality fake porn video of 15 seconds on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations and hide behind frivolous declarations such as: the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. The irony of these disclaimers is not lost on anyone, especially when the platforms host thousands of non-consensual deepfake pornographic videos.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and for creating porn of whoever a user wants, including taking requests for celebrities. To encourage creators further, these platforms enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and creators are not only targeting celebrities; ordinary people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a gateway process payment for sexual content that lacks consent? It seems financial institutions and banks are not paying much attention to this legal question. During the investigation, many such websites were found accepting payments through services like VISA, Mastercard, and Stripe.
Those who have failed to register or partner with these fintech giants have found a way out. While some direct users to third-party sites, others manually collect money through the personal PayPal accounts of their employees or stakeholders, which potentially violates the platform's terms of use banning the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently working towards dedicated legislation to address issues arising out of deepfakes, though existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecuting and convicting offenders is extremely difficult for law enforcement agencies, as this is a borderless crime that often involves several countries.
A victim can register a police complaint under Sections 66E and 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect the digital personal data of users, and the Union Government has recently issued an advisory to social media intermediaries directing them to identify misinformation and deepfakes. A comprehensive law promised by Union IT Minister Ashwini Vaishnaw is expected to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/

Introduction
The Telecom Regulatory Authority of India (TRAI), on March 13, 2023, published a new rule to regulate telemarketing firms. TRAI has taken a strict stance against bombarding users with intrusive marketing pitches, stating in a report that 10-digit mobile numbers cannot be used for advertising; in practice, separate number series are allotted for regular calls and for telemarketing calls. This is therefore an appropriate and much-needed move to suppress and eradicate phishing scammers and to secure the Indian cyber ecosystem at large.
What are the new rules?
The rules state that unregistered 10-digit mobile numbers used for promotional purposes will be shut down within the following five days. The directive banning calls from unregistered mobile numbers was published on February 16; promotional calling and messaging from 10-digit numbers must therefore end within five days. This step comes roughly 6-8 months after the release of the Telecommunication Bill, 2022, which is focused on creating a stable Indian telecom market and reducing phoney calls and messages by bad actors, thereby curbing cyber crimes like phishing. The measure is intended to help users distinguish between legitimate and promotional calls. According to certain reports, some telemarketing firms allegedly break the law by using 10-digit mobile numbers to make unwanted calls and send promotional messages. All telecom service providers must implement the requirements under the recent TRAI directive within five days.
How will the new rules help?
Promotional use of 10-digit cellphone numbers had been allowed from the start; however, the latest NCRB report on cyber crimes and the rising number of reported frauds aimed at monetary gain point to the problem of unregulated promotional messages. This move is a critical step towards eradicating scammers from the cyber ecosystem. TRAI has been attentive to the dynamics and shortcomings in the regulation of the telecom spectrum and networks in India and has shown keen interest in suppressing the technologies used by scammers. The invention of a technology does not define its use; the policy around it does. It is therefore important to draft and enact policies that better regulate existing and emerging technologies.
What to avoid?
In pursuance of the rules enacted by TRAI, business owners running promotional services through 10-digit numbers will have to keep the following in mind:
- It is against the law to use a 10-digit cellphone number for promotional calls.
- Businesses doing so should stop immediately.
- Otherwise, the mobile number will be blocked within the following five days.
- Employees of telemarketing firms are encouraged to refrain from using such numbers in these circumstances.
- Those working for telemarketing firms are encouraged not to make promotional calls from their personal mobile numbers.
- Promotional calls should be made only from the company’s registered numbers.
Conclusion
Indian netizens were exposed to technology somewhat later than the Western world. This changed drastically during the Covid-19 pandemic, as internet and technology penetration rates grew exponentially within a couple of months. Bad actors have used this to their advantage, so it was pertinent for the government and its institutions to take effective and efficient steps to safeguard people from financial fraud. Since these frauds largely occur due to a lack of knowledge and awareness, we need to work on preventive solutions rather than merely precautionary steps, and the new TRAI rules point towards a safe, secure and sustainable future for cyberspace in India.

Introduction
Criminal justice in India is mainly governed by three laws: the Indian Penal Code, the Criminal Procedure Code and the Indian Evidence Act. On Friday, 11th August 2023, the Centre proposed new bills in Parliament to replace these major criminal laws.
The following three bills are being proposed to replace major criminal laws in the country:
- The Bharatiya Nyaya Sanhita Bill, 2023 to replace Indian Penal Code 1860.
- The Bharatiya Nagrik Suraksha Sanhita Bill, 2023, to replace The Code Of Criminal Procedure, 1973.
- The Bharatiya Sakshya Bill, 2023, to replace The Indian Evidence Act 1872.
Cyber law-oriented view of the new shift in criminal law
Notable changes: Bharatiya Nyaya Sanhita Bill, 2023 vs. the Indian Penal Code, 1860
Way ahead for digitalisation
The new laws aim to enhance the use of digital services in court systems: they facilitate online registration of FIRs, online filing of charge sheets, serving of summons in electronic mode, and trials and proceedings in electronic mode. The new bills also allow the virtual appearance of witnesses, accused, experts, and victims in some instances. This shift will drive the adoption of technology in courts, with all courts to be computerised in the coming years.
Enhanced recognition of electronic records
With the shift of everyday life into the digital sphere, significance is given to recognising electronic records as equal to paper records.
Conclusion
The criminal laws of a country play a significant role in establishing law and order and providing justice. India's criminal laws were old laws dating back to British rule. There have been several amendments to deal with growing crime and new offences, but well-established criminal laws suited to the present era were needed. The legislature's step of consolidating the criminal laws in their new form and introducing three bills is a good approach which will ultimately strengthen the criminal justice system in India and facilitate the use of technology in the court system.