#FactCheck: Viral video claims BSF personnel thrashing a person selling Bangladesh National Flag in West Bengal
Executive Summary:
A video circulating online claims to show a man being assaulted by BSF personnel in India for selling Bangladesh flags at a football stadium. The footage has stirred strong reactions and cross-border concerns. However, our research confirms that the video is neither recent nor connected to India at all. The content has been wrongly framed and shared with misleading claims, misrepresenting the actual incident.
Claim:
It is being claimed through a viral post on social media that a Border Security Force (BSF) soldier physically attacked a man in India for allegedly selling the national flag of Bangladesh in West Bengal. The viral video further implies that the incident reflects political hostility towards Bangladesh within Indian territory.

Fact Check:
After conducting thorough research, including visual verification, reverse image searching, and confirmation of elements in the video background, we determined that the video was filmed outside Bangabandhu National Stadium in Dhaka, Bangladesh, during the crowd buildup prior to an AFC Asian Cup match featuring Bangladesh against Singapore.

A second layer of research confirmed that the man seen being assaulted is a local flag-seller named Hannan. Eyewitness accounts and local news sources indicate that Bangladesh Army officials were present to manage the crowd on the day in question. During the crowd-control effort, a soldier assaulted the vendor with excessive force. The incident sparked outrage, to which the Army responded by identifying the soldier responsible and taking disciplinary measures. The victim was reportedly offered reparations for the misconduct.

Conclusion:
Our research confirms that the viral video does not depict any incident in India. The claim that a BSF officer assaulted a man for selling Bangladesh flags is completely false and misleading. The real incident occurred in Bangladesh and involved a Bangladesh Army soldier during a football event crowd-control situation. This case highlights the importance of verifying viral content before sharing, as misinformation can lead to unnecessary panic, tension, and international misunderstanding.
- Claim: Viral video claims BSF personnel thrashing a person selling Bangladesh National Flag in West Bengal
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Meta is a leader among social media platforms and has built a widespread network of users and services across global cyberspace. The company has been revolutionising messaging and connectivity since 2004. Its platforms have brought people closer together, but such popularity carries risks of its own: popular platforms are frequently exploited by cybercriminals to obtain data without authorisation or to create chatrooms that preserve anonymity and evade tracking. These bad actors often operate under fake names or accounts to avoid being caught, and platforms like Facebook and Instagram have repeatedly made headlines as portals through which cybercriminals operate and commit crimes.
Meta’s Cybersecurity
Meta runs some of the best cybersecurity operations in the world, but that does not mean its platforms cannot be breached. The social media giant remains highly vulnerable to data breaches because numerous third parties are also involved, as seen in the Cambridge Analytica case, where a huge chunk of user data was made available to influence users during elections. To stay ahead of the curve, Meta has deployed various AI- and ML-driven crawlers and tools that work on keeping the platform safe for its users, identify accounts that may be used by bad actors, and remove criminal accounts. This is supported by users' own participation through the reporting mechanism. Meta-Cyber provides visibility of OT activities, continuously observes PLC and SCADA systems for changes and configuration, and checks authorisation and its levels. Meta also runs various penetration-testing and bug bounty programmes to reduce vulnerabilities in its systems and applications; testers are paid handsomely depending upon the severity of the vulnerabilities they find.
CyberRoot Risk Investigation
Social media giant Meta has taken down over 40 accounts operated by CyberRoot Risk Analysis, an Indian firm allegedly involved in hack-for-hire services. Alongside this, Meta has taken down 900 fraudulently run accounts, said to be operated from China by an unknown entity. CyberRoot Risk Analysis shared malware over the platform and used fake accounts to impersonate its targets: lawyers, doctors, entrepreneurs, and people in industries such as cosmetic surgery, real estate, investment, pharmaceuticals, and private equity, as well as environmental and anti-corruption activists. They would get in touch with such individuals and then share malware hidden in files, which would often lead to data breaches and, subsequently, different types of cybercrime.
Meta and its teams are working tirelessly to eradicate the influence of such bad actors from their platforms, and the use of AI- and ML-based tools for this purpose has increased exponentially.
Paytm CyberFraud Cover
Paytm is offering customers protection against cyber fraud through an insurance policy available for fraudulent mobile transactions up to Rs 10,000 for a premium of Rs 30. The cover ‘Paytm Payment Protect’ is provided through a group insurance policy issued by HDFC Ergo. The company said that the plan is being offered to increase the trust in digital payments, which will push up adoption. The insurance cover protects transactions made through UPI across all apps and wallets. The insurance coverage has been obtained by One97 Communications, which operates under the Paytm brand.
The exponential increase in the use of digital payments during the pandemic has made more people susceptible to cyber fraud. While UPI has all the digital safeguards in place, most UPI-related frauds are carried out by confidence tricksters who get their victims to authorise a transaction by passing off collect requests as payments. Many fraudsters also collect payments by pretending to be merchants. Such frauds resulted in losses of more than Rs 63 crore in the previous financial year. Data insurance is new to India but is the need of the hour: the majority of netizens are unaware of the value of their data and hence remain ignorant of data protection. Steps like this will result in safer data management and protection mechanisms, thus safeguarding Indian cyberspace.
Conclusion
Cyberspace is at a critical juncture in terms of data protection and privacy, and with new legislation on the horizon we can expect stronger policies to prevent cybercrimes and cyber-attacks. Efforts by tech giants like Meta need to gain more speed so that the cyber safety of both the platform and the user remains strongly secured. The concept of data insurance needs to be shared with netizens to raise awareness about the subject. Paytm's initiative is a monumental one, as it will encourage more platforms and banks to commit to coverage for cybercrimes. With cybercrime cases increasing, such financial coverage comes as a light of hope and security for netizens.

Introduction
In today's era of digitalised communities and connections, social media has become an integral part of our lives. A large number of teenagers are active on social media and have their own accounts, which they use to connect with friends and family. Social media makes it easy to connect and communicate with larger communities and even showcase one's creativity. On the other hand, it also poses challenges such as inappropriate content, online harassment, online stalking, misuse of personal information, and abusive or hurtful content. Such threats, and the overuse of social media itself, can have unintended consequences for teenagers' mental health. Data shows that some teens spend hours a day on social media, so it has a large impact on them whether we notice it or not. Social media addiction and its negative repercussions, including overuse by teens and exposure to online threats and vulnerabilities, are growing concerns that need to be taken seriously by social media platforms, regulatory policy, and users themselves. Recently, Colorado and California led a joint lawsuit filed by 33 states in the U.S. District Court for the Northern District of California against Meta over concerns about child safety.
Meta and the concern for child users' safety
Recently Meta, the company that owns Facebook, Instagram, WhatsApp, and Messenger, has been sued by more than three dozen states for allegedly using features to hook children to its platforms. The lawsuit claims that Meta violated consumer protection laws and deceived users about the safety of its platforms. The states accuse Meta of designing manipulative features to induce young users' compulsive and extended use, pushing them into harmful content. However, Meta has responded by stating that it is working to provide a safer environment for teenagers and expressing disappointment in the lawsuit.
According to the complaint filed by the states, Meta “designed psychologically manipulative product features to induce young users’ compulsive and extended use" of platforms like Instagram. The states allege that Meta's algorithms were designed to push children and teenagers into rabbit holes of toxic and harmful content, with features like "infinite scroll" and persistent alerts used to hook young users. Meta, however, responded to the lawsuit with disappointment, stating that it has been working productively with companies across the industry to create clear, age-appropriate standards for the many apps teenagers use.
Unplug for some time
Overuse of social media is associated with increased mental health repercussions, along with online threats and risks. Social media's effect on teenagers is driven by factors such as inadequate sleep, exposure to cyberbullying and other online threats, and lack of physical activity. Admittedly, social media can help teens feel more connected to their friends and support systems and showcase their creativity to the online world. However, social media overuse by teens is often linked with underlying issues that require attention. To help teenagers, encourage them to use social media responsibly and to unplug from it for some time: get outside in nature, do physical activities, and express themselves creatively.
Understanding the threats & risks
- Psychological effects
- Addiction: Excessive use of social media can lead to procrastination, and because it triggers the brain's reward system, it can also lead to physical and psychological addiction.
- Mental conditions associated: Excessive social media use can harm mental well-being and may lead to depression, anxiety, self-consciousness, and even social anxiety disorder.
- Eye strain and carpal tunnel syndrome: Excessive screen time can put a real strain on your eyes; eye problems caused by computer or phone screens fall under computer vision syndrome (CVS). Carpal tunnel syndrome, caused by pressure on the median nerve in the wrist, can result from prolonged, repetitive device use.
- Cyberbullying: Cyberbullying is one of the major concerns faced in online interactions on social media. Cyberbullying takes place using the internet or other digital communication technology to bully, harass, or intimidate others and it has become a major concern of online harassment on popular social media platforms. Cyberbullying may include spreading rumours or posting hurtful comments. Cyberbullying has emerged as a phenomenon that has a socio-psychological impact on the victims.
- Online grooming: Online grooming refers to the tactics abusers deploy through the internet to sexually exploit children. The average time for a bad actor to lure a child into their trap is reported to be just 3 minutes, a very alarming figure.
- Ransomware/Malware/Spyware: Cybercrooks impose threats such as ransomware, malware and spyware by deploying malicious links on social media. This poses serious cyber threats, and it causes consequences such as financial losses, data loss, and reputation damage. Ransomware is a type of malware which is designed to deny a user or organisation access to their files on the computer. On social media, cyber crooks post malicious links which contain malware, and spyware threats. Hence it is important to be cautious before clicking on any such suspicious link.
- Sextortion: Sextortion is a crime in which the perpetrator demands ransom or sexual favours by threatening to expose or reveal the victim's sexual activity. It is a kind of sexual blackmail; it can take place on social media, and youngsters are the most frequent targets. Cyber crooks also misuse advanced AI deepfake technology, which can create realistic-looking images or videos that are in fact generated by machine algorithms. Because deepfake technology is easily accessible, fraudsters misuse it to commit crimes including sextortion, and to deceive and scam people through fake but realistic-looking images or videos.
- Child sexual abuse material (CSAM): CSAM is illicit content prohibited by law and regulatory guidelines. While using the internet, children may also encounter age-restricted or inappropriate content that can be harmful to them. Regulatory guidelines require internet service providers to refrain from hosting CSAM on websites and to block such inappropriate content.
- In-app purchases: Teen users also make in-app purchases on social media or in online games, where they may fall prey to financial fraud or easy-money scams. Fraudsters target victims with exciting offers such as part-time or work-from-home jobs, small investments, or earning money by liking content on social media. This is prevalent on social media: fraudsters target innocent people, ask for their personal and financial information, and commit financial fraud by scamming them on the pretext of exciting offers.
Safety tips:
To stay safe while using social media, teens and other users are encouraged to stay aware of online threats and follow best practices such as the following:
- Browse the web safely.
- Use the privacy settings of your social media accounts.
- Use strong passwords and enable two-factor authentication.
- Be careful about what you post or share.
- Familiarise yourself with the privacy policies of social media platforms.
- Be selective about adding unknown users to your social media network.
- Report any suspicious activity to the platform or the relevant forum.
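The "strong passwords" tip above can be made concrete with a small check. The sketch below is purely illustrative, a hypothetical rating heuristic rather than the method any platform actually uses; real systems additionally check leaked-password corpora, estimate entropy, and rate-limit login attempts.

```python
import re

def password_strength(password: str) -> str:
    """Return a rough rating ('weak', 'fair' or 'strong') for a password.

    Illustrative heuristic only: scores length plus character variety.
    """
    score = 0
    if len(password) >= 12:                                   # reasonable length
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1                                            # mixed case
    if re.search(r"\d", password):                            # at least one digit
        score += 1
    if re.search(r"[^A-Za-z0-9]", password):                  # at least one symbol
        score += 1
    if score <= 1:
        return "weak"
    return "fair" if score <= 3 else "strong"
```

For example, `password_strength("sunshine")` rates "weak", while a long passphrase mixing cases, digits, and symbols rates "strong".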
Conclusion:
Child safety is a major concern on social media platforms. Social media-related offences such as cyberstalking, hacking, online harassment and threats, sextortion, and financial fraud are among the most common cybercrimes on these platforms. Tech giants must ensure the safety of teen users by implementing and adopting the best protective mechanisms on their platforms. CyberPeace Foundation is advocating for a Child-friendly SIM to protect children from the illicit influence of the internet and social media.
References:
- https://www.scientificamerican.com/article/heres-why-states-are-suing-meta-for-hurting-teens-with-facebook-and-instagram/
- https://www.nytimes.com/2023/10/24/technology/states-lawsuit-children-instagram-facebook.html

The World Economic Forum's Global Risks Report ranked AI-generated misinformation and disinformation as the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents surveyed (Sept. 2023). Artificial intelligence is automating the creation of fake news at a rate that far outstrips fact-checking. It is spurring an explosion of web content that mimics factual articles but instead disseminates false information about grave themes such as elections, wars, and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI's ChatGPT 3.5 and 4.0, and Microsoft's AI interface CoPilot were inaccurate one-third of the time when queried about election data. An innovative regulatory approach, such as regulatory sandboxes, is therefore needed to address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge is in the detection and management of AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
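One reason detection is so hard is that no single signal is reliable on its own; real systems combine many weak signals. As a minimal, purely illustrative sketch of one such signal, the function below flags coordinated amplification, i.e. many distinct accounts posting near-identical text, as in the bot example above. The `(account, text)` feed format and the threshold of three accounts are assumptions for illustration, not any platform's actual method.

```python
from collections import defaultdict

def flag_amplified_texts(posts, min_accounts=3):
    """Flag texts posted (after light normalisation) by at least
    `min_accounts` distinct accounts -- one crude amplification signal.

    posts: iterable of (account_id, text) pairs (hypothetical feed format).
    """
    accounts_by_text = defaultdict(set)
    for account, text in posts:
        # Lowercase and collapse whitespace so trivial edits don't evade the check.
        key = " ".join(text.lower().split())
        accounts_by_text[key].add(account)
    return {text for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts}
```

A production system would add timing windows, fuzzy matching, account-age features, and human review; the point here is only that simple heuristics like this are easy for adversaries to evade, which is why detection remains difficult.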
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it is yet to achieve true acceptance and actually fulfill its potential in a positive manner because there is widespread cynicism about the technology - and rightly so. The general public sentiment about AI is laced with concern and doubt regarding the technology’s trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulations.
Regulatory sandboxes have been in use for many industries and the most recent example is their use in sectors like fintech, such as the UK’s Financial Conduct Authority sandbox. These models have been known to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that the benefits of regulatory sandboxes include facilitating firm financing and market entry and increasing speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Looking at the success in the fintech industry, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies that have the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
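Concretely, the sandbox evaluation described above could begin with an offline comparison of a detector's flags against human-labelled samples. The sketch below is a hypothetical minimal example; the boolean data format is an assumption for illustration, not any regulator's actual protocol.

```python
def evaluate_detector(predictions, labels):
    """Compute precision, recall and F1 for a batch of flag decisions.

    predictions: list of bools, True where the tool flagged misinformation.
    labels:      list of bools, True where human reviewers agreed.
    """
    tp = sum(1 for p, y in zip(predictions, labels) if p and y)
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    fn = sum(1 for p, y in zip(predictions, labels) if not p and y)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of flags that were correct
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of misinformation caught
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

An evaluation like this lets regulators see, before wide deployment, whether a tool errs toward over-flagging legitimate speech (low precision) or toward missing misinformation (low recall), which is exactly the trade-off a sandbox is meant to surface.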
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to test solutions that can help regulate the misinformation that AI technology creates. Some policy recommendations are as follows:
- Create guidelines for a global regulatory-sandbox standard that can be adapted locally, helping ensure consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation, and publicising the role of regulatory sandboxes can help manage public expectations.
- Conduct periodic reviews and updates of sandbox frameworks to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions