Fact Check: Viral Footage from Bangladesh Falsely Portrayed as an Armed Immigrant March in Assam
Executive Summary:
While researching a viral social media video, we conducted a comprehensive fact check using reverse image search. The video circulated with the claim that it shows illegal Bangladeshi immigrants in Assam's Goalpara district carrying homemade spears and attacking police or government officials. Our findings confirm that this claim is false. The video was filmed in the Kishoreganj district of Bangladesh on July 1, 2025, during a violent clash between two rival factions of the Bangladesh Nationalist Party (BNP). The footage has been deliberately misrepresented as being from Assam in order to spread false information.

Claim:
The viral video shows illegal Bangladeshi immigrants armed with spears marching in Goalpara, Assam, with the intention of attacking police or officials.

Fact Check:
To establish whether the claim was valid, we performed a reverse image search on key frames from the video. This led us to a number of news articles and social media posts from Bangladeshi sources. These reports confirmed that the events took place in Ashtagram, Kishoreganj district, Bangladesh, during a violent political confrontation between factions of the Bangladesh Nationalist Party (BNP) on July 1, 2025, which left about 40 people injured.
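For readers who want to replicate this kind of verification, the sketch below shows one way to pull frames out of a downloaded video so they can be uploaded to a reverse image search engine (for example Google Lens or Yandex). It is a minimal illustration only: it assumes the opencv-python package is installed, and the file name is a placeholder.

```python
# Minimal sketch: extract frames from a downloaded video so they can be
# uploaded manually to a reverse image search engine.
# Assumes the opencv-python package; "viral_clip.mp4" is a placeholder name.
import cv2

def extract_key_frames(video_path: str, every_n_seconds: int = 2) -> list[str]:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            name = f"frame_{frame_idx:06d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        frame_idx += 1
    cap.release()
    return saved

if __name__ == "__main__":
    frames = extract_key_frames("viral_clip.mp4")
    print(f"Saved {len(frames)} frames for manual reverse image search")
```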

Local media coverage corroborated this. In particular, Channel i News published detailed accounts of the incident and carried images matching the viral video. The individuals seen in the footage were participants in a local political clash wielding makeshift spears, not migrants mounting a cross-border attack. The Assam Police also issued an official response on X (formerly Twitter) denying the claim and confirming that no such incident occurred in Goalpara or in any other district of Assam.


Conclusion:
Based on our research, we conclude that the viral video does not show illegal Bangladeshi immigrants in Assam. It depicts a political clash in Kishoreganj, Bangladesh, on July 1, 2025. The claim attached to the video is entirely untrue and is intended to mislead the public about where the incident took place and what it shows.
Claim: Video shows illegal migrants with spears moving in groups to assault police!
Claimed On: Social Media
Fact Check: False and Misleading

Executive Summary
A viral video allegedly featuring cricketer Virat Kohli endorsing a betting app named ‘Aviator’ is being shared widely across social media platforms. The CyberPeace Research Team's investigation revealed that the video was created using deepfake technology. The viral video contains anomalies characteristic of synthetic media, and no genuine celebrity endorsement for the app exists. We have previously debunked similar deepfake videos of Virat Kohli that misused the same technology. The spread of such content underscores the need for social media platforms to implement robust measures to combat online scams and misinformation.

Claims:
The claim is that a video circulating on social media depicts Indian cricketer Virat Kohli endorsing a betting app called "Aviator." The video appears to feature the Indian news channel India TV, with a journalist endorsing the betting app, followed by Virat Kohli describing his experience with it.

Fact Check:
Upon receiving the claim, we watched the video closely and found anomalies typically seen in deepfake videos: the journalist's lip movements are out of sync and do not match the audio, and the same is true when Virat Kohli speaks.

We then divided the video into keyframes and reverse-searched one of the frames from the segment featuring Kohli. This led us to a similar video, uploaded on his verified Instagram handle, in which Kohli wears the same brown jacket: an ad promotion made in collaboration with American Tourister.

After going through the entire video, it is evident that Virat Kohli is not endorsing any betting app, rather he is talking about an ad promotion collaborating with American Tourister.
We also ran keyword searches to check whether India TV had published any report matching the viral video, but we found no credible source.
Given these major anomalies and our further analysis, we conclude that the video was created using synthetic media and is fake and misleading.
Conclusion:
The video of Virat Kohli promoting a betting app is fake and does not actually feature the celebrity endorsing the app. This brings up many concerns regarding how Artificial Intelligence is being used for fraudulent activities. Social media platforms need to take action against the spread of fake videos like these.
Claim: Video surfacing on social media shows Indian cricket star Virat Kohli promoting a betting application known as "Aviator."
Claimed on: Facebook
Fact Check: Fake & Misleading

What are Decentralised Autonomous Organizations (DAOs)?
A Decentralised Autonomous Organisation, or DAO, is a unique take on democracy on the blockchain. It is a set of rules encoded into a self-executing contract (also known as a smart contract) that operates autonomously on a blockchain system. A DAO imitates a traditional company, although, in its more literal sense, it is a contractually created entity. In theory, DAOs have no centralised authority making decisions for the system; they are communally run systems in which all decisions (be they for internal governance or for the development of the blockchain system) are voted upon by the community members. DAOs are primarily characterised by a decentralised form of operation, with no single entity, group, or individual running the system. They are self-sustaining entities, having their own currency, economy, and even governance, that do not depend on a group of individuals to operate. Blockchain systems, and DAOs especially, are characterised by pure autonomy, designed to evade external coercion or manipulation by sovereign powers. DAOs follow a mutually agreed set of rules, created by the community, that dictates all actions, activities, and participation in the system’s governance. There may also be provisions that regulate the decision-making power of the community.
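As a purely illustrative sketch of how such rules might look, the Python below models a token-weighted proposal vote. It is not how production DAOs are written (their rules are normally encoded in a smart-contract language such as Solidity and executed on-chain), and all names and numbers are hypothetical.

```python
# Minimal, hypothetical model of DAO-style governance rules:
# members hold governance tokens, proposals are voted on, and a proposal
# passes once a quorum of token-weighted votes approves it.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0

@dataclass
class ToyDAO:
    token_balances: dict[str, int]              # member -> governance tokens held
    quorum: int                                 # minimum total tokens that must vote
    proposals: list[Proposal] = field(default_factory=list)

    def submit(self, description: str) -> int:
        self.proposals.append(Proposal(description))
        return len(self.proposals) - 1

    def vote(self, proposal_id: int, member: str, support: bool) -> None:
        weight = self.token_balances.get(member, 0)   # one token = one vote
        p = self.proposals[proposal_id]
        if support:
            p.votes_for += weight
        else:
            p.votes_against += weight

    def passes(self, proposal_id: int) -> bool:
        p = self.proposals[proposal_id]
        turnout = p.votes_for + p.votes_against
        return turnout >= self.quorum and p.votes_for > p.votes_against

dao = ToyDAO(token_balances={"alice": 60, "bob": 30, "carol": 10}, quorum=50)
pid = dao.submit("Fund a community security audit")
dao.vote(pid, "alice", True)
dao.vote(pid, "bob", False)
print(dao.passes(pid))  # True: alice's 60 tokens outweigh bob's 30
```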
The white paper for “The DAO”, an early DAO built on Ethereum, described it as “The first implementation of a [DAO Entity] code to automate organisational governance and decision making. It can be used by individuals working together collaboratively outside of a traditional corporate form. It can also be used by a registered corporate entity to automate formal governance rules contained in corporate bylaws or imposed by law.” The white paper thus proposes an entity that would use smart contracts to solve governance issues inherent in traditional corporations. DAOs attempt to redesign corporate governance with blockchain such that contractual terms are “formalised, automated and enforced using software.”
Cybersecurity threats under DAOs
While DAOs offer increased transparency and efficiency, they are not immune to cybersecurity threats. Cybersecurity risks in DAOs, primarily in governance, stem from vulnerabilities in the underlying blockchain technology and the DAO's smart contracts. Smart contract flaws, code vulnerabilities, and weaknesses in the underlying blockchain protocol can be exploited by malicious actors, leading to unauthorised access, fund manipulation, or disruptions in the governance process. Additionally, DAOs may face challenges related to phishing attacks, where individuals are tricked into revealing sensitive information, such as private keys, compromising the integrity of the governance structure. As DAOs continue to evolve, addressing and mitigating cybersecurity threats is crucial to ensuring the trust and reliability of decentralised governance mechanisms.
Centralisation/Concentration of Power
DAOs today actively try to leverage on-chain governance, where governance votes and transactions are taken directly on the blockchain. But such governance is often plutocratic rather than democratic: only those who possess the requisite number of tokens are allowed to vote, and each additional token staked gives the same individual an additional vote, so the wealthy hold the influence. This concentration of power in the hands of “whales” disadvantages newer entrants who may have deep expertise but lack the funds to cast a meaningful vote. Voting in the blockchain sphere presently lacks the “one person, one vote” principle that is critical in democratic societies.
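To make the plutocracy point concrete, the hypothetical figures below compare plain token-weighted voting with quadratic voting, one commonly discussed mitigation in which voting power grows with the square root of tokens staked. The numbers are illustrative only and do not describe any particular DAO.

```python
# Illustrative comparison: a "whale" vs. many small holders under
# token-weighted voting and under quadratic voting (weight = sqrt(tokens)).
import math

holders = {"whale": 10_000, **{f"member_{i}": 100 for i in range(50)}}

token_weighted = {name: tokens for name, tokens in holders.items()}
quadratic = {name: math.isqrt(tokens) for name, tokens in holders.items()}

whale_share_linear = token_weighted["whale"] / sum(token_weighted.values())
whale_share_quadratic = quadratic["whale"] / sum(quadratic.values())

print(f"Whale share, token-weighted: {whale_share_linear:.0%}")     # ~67%
print(f"Whale share, quadratic:      {whale_share_quadratic:.0%}")  # ~17%
```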
Smart contract vulnerabilities and external threats
Smart contracts, self-executing pieces of code on a blockchain, are integral to decentralised applications and platforms. Despite their potential, smart contracts are susceptible to various vulnerabilities, such as coding errors, where mistakes in the code can lead to funds being locked or released erroneously. Some of these vulnerabilities are discussed below:
Smart contracts are especially prone to reentrancy attacks, in which untrusted external code is allowed to execute within a smart contract. This scenario occurs when a smart contract invokes an external contract and the external contract re-invokes the initial contract before the first call has finished. The sequence can repeat in a loop, and a reentrancy attack exploits this vulnerability: it enables an attacker to repeatedly invoke a function within the contract, such as a withdrawal, before the contract has updated its internal state, thereby gaining unauthorised access to funds.
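Real reentrancy attacks target contracts written in smart-contract languages such as Solidity (the 2016 attack on The DAO is the best-known example). The plain-Python sketch below only simulates the control-flow pattern, with hypothetical names and amounts, to show why making an external call before updating state is dangerous.

```python
# Simulation of the reentrancy pattern in plain Python.
# The vulnerable "contract" pays out via a callback *before* updating the
# caller's balance, so a malicious callback can withdraw repeatedly.

class VulnerableVault:
    def __init__(self) -> None:
        self.balances: dict[str, int] = {}
        self.vault_funds = 0

    def deposit(self, user: str, amount: int) -> None:
        self.balances[user] = self.balances.get(user, 0) + amount
        self.vault_funds += amount

    def withdraw(self, user: str, receive_callback) -> None:
        amount = self.balances.get(user, 0)
        if amount > 0 and self.vault_funds >= amount:
            receive_callback(amount)          # external call happens first...
            self.balances[user] = 0           # ...state is updated too late

class Attacker:
    def __init__(self, vault: VulnerableVault) -> None:
        self.vault = vault
        self.stolen = 0

    def receive(self, amount: int) -> None:
        self.stolen += amount
        self.vault.vault_funds -= amount
        if self.vault.vault_funds >= amount:  # re-enter while the balance is stale
            self.vault.withdraw("attacker", self.receive)

vault = VulnerableVault()
vault.deposit("honest_user", 300)
vault.deposit("attacker", 100)
attacker = Attacker(vault)
vault.withdraw("attacker", attacker.receive)
print(attacker.stolen)  # 400: the attacker drained far more than their 100 deposit
```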
Additionally, smart contracts are also prone to oracle problems. Oracles refer to third-party services or mechanisms that provide smart contracts with real-world data. Since smart contracts on blockchain networks operate in a decentralised, isolated environment, they do not have direct access to external information, such as market prices, weather conditions, or sports scores. Oracles bridge this gap by acting as intermediaries, fetching and delivering off-chain data to smart contracts, enabling them to execute based on real-world conditions. The oracle problem within blockchain pertains to the difficulty of securely incorporating external data into smart contracts. The reliability of external data poses a potential vulnerability, as oracles may be manipulated or provide inaccurate information. This challenge jeopardises the credibility of blockchain applications that rely on precise and timely external data.
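One commonly used mitigation for the oracle problem is to aggregate several independent feeds and take the median, so that a single manipulated or faulty feed cannot move the reported value; decentralised oracle networks such as Chainlink follow this general idea. The sketch below illustrates only the aggregation step, with hypothetical price feeds.

```python
# Illustrative sketch: aggregating several independent price oracles and
# taking the median, so one manipulated or faulty feed cannot skew the result.
# All feed values are hypothetical.
from statistics import median

def aggregate_price(feeds: dict[str, float]) -> float:
    """Return the median of the prices reported by the oracle feeds."""
    if not feeds:
        raise ValueError("no oracle feeds available")
    return median(feeds.values())

feeds = {
    "oracle_a": 2_001.5,
    "oracle_b": 1_998.0,
    "oracle_c": 2_000.2,
    "oracle_d": 950.0,   # manipulated or faulty feed
}
print(aggregate_price(feeds))  # ~1999.1: the outlier barely moves the median
```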
Sybil Attack: A Sybil attack involves a single node managing multiple active fake identities, known as Sybil identities, concurrently within a peer-to-peer network. The objective of such an attack is to weaken the authority or influence within a trustworthy system by acquiring the majority of control in the network. The fake identities are utilised to establish and exert this influence. A successful Sybil attack allows threat actors to perform unauthorised actions in the system.
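The toy example below shows why Sybil identities matter in any system that counts one vote per identity rather than tying influence to something costly to forge: a single operator who registers enough fake identities simply outvotes the honest participants. All names and counts are hypothetical.

```python
# Toy illustration of a Sybil attack on a one-identity-one-vote system:
# a single operator registers many fake identities to win a majority vote.
honest_voters = {f"honest_{i}" for i in range(20)}
sybil_identities = {f"sybil_{i}" for i in range(30)}   # all controlled by one actor

votes = {voter: "reject" for voter in honest_voters}
votes.update({voter: "approve" for voter in sybil_identities})

approvals = sum(1 for choice in votes.values() if choice == "approve")
print(f"approve: {approvals} / {len(votes)}")  # 30 / 50: the attacker wins
print("Decision:", "approve" if approvals > len(votes) / 2 else "reject")
```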
Distributed Denial of Service Attacks: A Distributed Denial of Service (DDoS) attack is a malicious attempt to disrupt the regular functioning of a network, service, or website by overwhelming it with a flood of traffic. In a typical DDoS attack, multiple compromised computers or devices, often part of a botnet (a network of infected machines controlled by a single entity), are used to generate a massive volume of requests or data traffic. The targeted system becomes unable to respond to legitimate user requests due to the excessive traffic, leading to a denial of service.
Conclusion
Decentralised Autonomous Organisations (DAOs) represent a pioneering approach to governance on the blockchain, relying on smart contracts and community-driven decision-making. Despite their potential for increased transparency and efficiency, DAOs are not immune to cybersecurity threats. Vulnerabilities in smart contracts, such as reentrancy attacks and oracle problems, pose significant risks, and the concentration of voting power among wealthy token holders raises concerns about democratic principles. As DAOs continue to evolve, addressing these challenges is essential to ensuring the resilience and trustworthiness of decentralised governance mechanisms. Efforts to enhance security measures, promote inclusivity, and refine governance models will be crucial in establishing DAOs as robust and reliable entities in the broader landscape of blockchain technology.
References:
https://www.imperva.com/learn/application-security/sybil-attack/
https://www.linkedin.com/posts/satish-kulkarni-bb96193_what-are-cybersecurity-risk-to-dao-and-how-activity-7048286955645677568-B3pV/
https://www.geeksforgeeks.org/what-is-ddosdistributed-denial-of-service/
Report of Investigation Pursuant to Section 21(a) of the Securities Exchange Act of 1934: The DAO, Securities and Exchange Commission, Release No. 81207, July 25, 2017
https://www.sec.gov/litigation/investreport/34-81207.pdf
https://www.legalserviceindia.com/legal/article-10921-blockchain-based-decentralized-autonomous-organizations-daos-.html

Introduction
According to a new McAfee survey, 88% of American consumers believe that cybercriminals will use artificial intelligence to "create compelling online scams" over the festive period. Meanwhile, 31% believe it will be more difficult to determine whether messages from merchants or delivery services are genuine, and 57% believe phishing emails and texts will become more credible. The study, conducted in September 2023 in the United States, Australia, India, the United Kingdom, France, Germany, and Japan, gathered 7,100 responses. Some people may cut back on their online shopping as a result of their worries about AI; 19% of those surveyed stated they would do so this year.
In 2024, McAfee predicts a rise in AI-driven scams on social media, with cybercriminals using advanced tools to create convincing fake content, exploiting celebrity and influencer identities. Deepfake technology may worsen cyberbullying, enabling the creation of realistic fake content. Charity fraud is expected to rise, leveraging AI to set up fake charity sites. AI's use by cybercriminals will accelerate the development of advanced malware, phishing, and voice/visual cloning scams targeting mobile devices. The 2024 Olympic Games are seen as a breeding ground for scams, with cybercriminals targeting fans for tickets, travel, and exclusive content.
AI Scams' Increase on Social Media
Cybercriminals are expected to use powerful artificial intelligence capabilities to manipulate social media in 2024. These platforms are a goldmine for scammers because AI tools make it possible to create realistic fake images, videos, and audio. Expect cybercriminals to exploit the identities of influencers and other popular figures.
AI-powered Deepfakes and the Rise in Cyberbullying
One worrying trend is the negative turn cyberbullying may take in 2024 with the use of deepfake technology. This cutting-edge technology is freely accessible to young people, who can use it to produce eerily convincing synthetic content that compromises victims' privacy, identity, and well-being.
In addition to spreading false information, cyberbullies can alter public photographs and re-share manipulated, detailed versions of them, which exacerbates the suffering caused to children and their families. The study warns that the increasing realism of these deceptive images and messages can cause serious, long-lasting harm, impairing victims' identity, privacy, and overall happiness.
Evolvement of GenAI Fraud in 2023
Persistent scams and fake emails are nothing new, and most people have become fairly adept at recognising the ones that are used widely. But as scams become more precise, for example by using AI-generated audio to mimic a loved one's distress call or details that are highly personal to the target, users need to be far more cautious. The rise of generative AI adds a new wrinkle, as attackers can use these systems to refine their attacks:
- Writing messages more skillfully in order to deceive consumers into sending sensitive information, clicking on a link, or uploading a file.
- Recreating emails and business websites as realistically as possible so they arouse no suspicion in the minds of recipients.
- Cloning people's faces and voices to create audio or video deepfakes that are undetectable to the target audience, a problem with the potential to greatly amplify schemes such as CEO fraud.
- Holding conversations and responding to victims efficiently, something generative AI chatbots can now do convincingly.
- Conducting psychological manipulation campaigns more quickly, more cheaply, and with greater sophistication, making them harder to detect. Generative AI tools already on the market can write text, clone voices, generate images, and build websites.
AI Hastens the Development of Malware and Scams
While artificial intelligence (AI) has many legitimate uses, it is also making cybercriminals more dangerous. AI facilitates the rapid creation of sophisticated malware, illicit web pages, and plausible phishing and smishing messages. As these capabilities become more accessible, mobile devices will be attacked more frequently, with a particular emphasis on audio and visual impersonation schemes.
Olympic Games: A Haven for Scammers
Cybercriminals are skilled at profiting from big occasions, and the worldwide buzz around the 2024 Olympic Games will make it an ideal time for scams. Con artists will take advantage of fans' excitement, targeting those eager to purchase tickets, arrange travel, obtain exclusive content, and take part in giveaways. During this high-profile event, vigilance is essential to avoid exposing personal records and financial data.
Development of McAfee’s own bot to help users screen potential scams and authenticate the messages they receive
McAfee is developing precisely this kind of technology. It is important to emphasise that solving the issue is a continuous process: bad actors also use AI, and one trick con artists can pull is to treat the ruses consumers fall for as training data for more advanced algorithms. They can thus deploy these tools, test them on large user bases, and improve them over time.
Conclusion
According to the McAfee report, 88% of American consumers are concerned about AI-driven internet fraud targeting them around the holidays. Social networking poses a growing threat to users' privacy, and in 2024 attackers are expected to take advantage of AI capabilities and use deepfake technology to exacerbate harassment. By mimicking voices and faces for intricate schemes, generative AI advances complex fraud. Charity fraud is expected to surge, harming victims both socially and financially, and the 2024 Olympic Games could serve as a haven for scammers. The creation of McAfee's screening bot highlights the ongoing struggle against evolving AI threats and the need for continuous adaptation and greater user awareness to combat increasingly sophisticated cyber deception.
References
- https://www.fonearena.com/blog/412579/deepfake-surge-ai-scams-2024.html
- https://cxotoday.com/press-release/mcafee-reveals-2024-cybersecurity-predictions-advancement-of-ai-shapes-the-future-of-online-scams/
- https://timesofindia.indiatimes.com/gadgets-news/deep-fakes-ai-scams-and-other-tools-cybercriminals-could-use-to-steal-your-money-and-personal-details-in-2024/articleshow/106126288.cms
- https://digiday.com/media-buying/mcafees-cto-on-ai-and-the-cat-and-mouse-game-with-holiday-scams/