#FactCheck: Viral video claims BSF personnel thrashed a person selling the Bangladesh national flag in West Bengal
Executive Summary:
A video circulating online claims to show a man being assaulted by BSF personnel in India for selling Bangladesh flags at a football stadium. The footage has stirred strong reactions and cross-border concerns. However, our research confirms that the video is neither recent nor related to any incident in India. The content has been taken out of context and shared with misleading claims, misrepresenting the actual incident.
Claim:
It is being claimed through a viral post on social media that a Border Security Force (BSF) soldier physically attacked a man for allegedly selling the national flag of Bangladesh in West Bengal, India. The viral video further implies that the incident reflects political hostility towards Bangladesh within Indian territory.

Fact Check:
After conducting thorough research, including visual verification, reverse image searches, and confirmation of elements in the video's background, we determined that the video was filmed outside Bangabandhu National Stadium in Dhaka, Bangladesh, during the crowd buildup before the AFC Asian Cup match between Bangladesh and Singapore.

A second layer of research confirmed that the man seen being assaulted is a local flag-seller named Hannan. Eyewitness accounts and local news sources indicate that Bangladesh Army personnel were present to manage the crowd that day. During the crowd-control effort, a soldier assaulted the vendor with excessive force. The incident caused outrage, and the Army responded by identifying the soldier responsible and taking disciplinary measures. The victim was reportedly offered compensation for the misconduct.

Conclusion:
Our research confirms that the viral video does not depict any incident in India. The claim that a BSF officer assaulted a man for selling Bangladesh flags is false and misleading. The real incident occurred in Bangladesh and involved a Bangladesh Army soldier during crowd control at a football event. This case highlights the importance of verifying viral content before sharing it, as misinformation can cause unnecessary panic, tension, and international misunderstanding.
- Claim: A viral video shows BSF personnel thrashing a person selling the Bangladesh national flag in West Bengal
- Claimed On: Social Media
- Fact Check: False and Misleading

What are Deepfakes?
A deepfake is essentially a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology is a method for manipulating videos, images, and audio using powerful computers and deep learning. It is used to generate fake news and commit financial fraud, among other wrongdoings. Cybercriminals use Artificial Intelligence to overlay a digital composite onto an existing video, picture, or audio clip. The term 'deepfake' was first coined in 2017 by an anonymous Reddit user who went by the handle 'deepfakes'.
Deepfakes work on a combination of AI and ML, which makes the technology hard for Web 2.0 applications to detect, and it is almost impossible for a layperson to tell whether an image or video is genuine or has been created using deepfake techniques. In recent times, we have seen a wave of AI-driven tools that have impacted all industries and professions across the globe. Deepfakes are often created to spread misinformation. There is a key difference between image morphing and deepfakes: image morphing is primarily used for evading facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as their impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues are seen to originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, when a deepfake of Ukraine’s president, Mr Zelensky, surfaced on the internet and caused mass confusion and propaganda-driven misinformation among Ukrainians.
- Instigation against the Union of India: Deepfake poses a massive threat to the integrity of the Union of India, as this is one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of anti-national attacks against the nation.
- Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, which is often circulated on online gaming platforms, where children are most active.
- Threat to Digital Privacy: Deepfakes are created from existing videos. Bad actors often use photos and videos taken from social media accounts to create deepfakes, which directly threatens the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: Most nations currently lack a concrete policy to address deepfakes. Hence, it is of paramount importance to establish legal and industry-based grievance redressal mechanisms for victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern for Indian netizens, as it keeps them from understanding the technology and results in the under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes look like the original video at first glance, but as we progress into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, in order to stay protected and to address this issue before it is too late. The following aspects can be kept in mind while differentiating between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions; unnatural eye movement or a momentary twitch on the face can be signs that a video is a deepfake.
- Listen to the audio: The audio in a deepfake also shows variations, as it is imposed on an existing video, so check whether the sound is consistent with the actions and gestures shown in the video.
- Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background. In most deepfakes you can spot irregularities there, because the background is usually created with virtual effects, which leaves an element of artificiality.
- Context and Content: Most instances of deepfakes have focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber safety and digital hygiene protocol, always fact-check every piece of information you come across on social media. As a preventive measure, make sure to fact-check any information or post before sharing it with your contacts.
- AI Tools: When in doubt, check it out; do not hesitate to use deepfake detection tools such as Sentinel, Intel’s real-time deepfake detector FakeCatcher, WeVerify, and Microsoft’s Video Authenticator to analyse videos, combating technology with technology (a minimal sketch of this workflow follows this list).
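To make the "combat technology with technology" idea concrete, the sketch below samples frames from a video and passes them to an image-classification model. It is only an illustration of the general workflow, not an implementation of any of the tools named above; the model identifier is a hypothetical placeholder you would replace with a detector of your choice.

```python
# Minimal sketch: sample frames from a video and score each one with an
# image-classification model. The model id below is a hypothetical placeholder,
# not one of the specific tools mentioned in this article.
import cv2
from PIL import Image
from transformers import pipeline

detector = pipeline("image-classification",
                    model="example-org/deepfake-detector")  # hypothetical model id


def score_video(path, every_n_frames=30):
    """Return the classifier's scores for every Nth frame of the video."""
    capture = cv2.VideoCapture(path)
    results, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            # OpenCV gives BGR frames; convert to RGB before classification.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            results.append(detector(Image.fromarray(rgb)))
        index += 1
    capture.release()
    return results


if __name__ == "__main__":
    for frame_scores in score_video("suspect_clip.mp4"):
        print(frame_scores)
```

Frame-level scores are only one signal; they should be combined with the manual checks above rather than treated as a verdict.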
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not explicitly recognised by law in India, they are indirectly addressed by Section 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy. The maximum penalty for this violation is a fine of ₹2 lakh or three years in prison. With the enactment of the DPDP Act in 2023, the creation of deepfakes directly affects an individual's right to digital privacy and also contravenes the Intermediary Guidelines, as platforms are required to exercise caution regarding misinformation disseminated and published through deepfakes. Beyond this, the only legal remedies available for deepfakes are the indirect provisions of the Indian Penal Code, which cover the sale and dissemination of derogatory publications, songs and actions, deception in the delivery of property, cheating and dishonestly inducing the delivery of property, and forgery with the intent to defame. Deepfakes must be recognised legally, given the growing power of misinformation. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an aftermath of the advancements of Web 3.0 and hence are just the tip of the iceberg in terms of the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so that they stay safe in the future. At the same time, developing and developed nations need to create policies and laws to efficiently regulate deepfakes and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.

Introduction
In a world teeming with digital complexities, where information wends through networks with the speed and unpredictability of quicksilver, companies find themselves grappling with the paradox of our epoch: the vast potential of artificial intelligence (AI) juxtaposed with glaring vulnerabilities in data security. It's a terrain fraught with risks, but in the intricacies of this digital age emerges a profound alchemy—the application of AI itself to transmute vulnerable data into a repository as secure and invaluable as gold.
The deployment of AI technologies comes with its own set of challenges, chief among them being concerns about the integrity and safety of data—the precious metal of the information economy. Companies cannot afford to remain idle as the onslaught of cyber threats threatens to fray the fabric of their digital endeavours. Instead, they are rallying, invoking the near-miraculous capabilities of AI to transform the very nature of cybersecurity, crafting an armour of untold resilience by empowering the hunted to become the hunter.
The AI’s Untapped Potential
Industries spanning the globe, varied in their scopes and scales, recognize AI's potential to hone their processes and augment decision-making capabilities. Within this dynamic lies fertile ground for AI-powered security technologies to flourish, serving not merely as auxiliary tools but as essential components of contemporary business infrastructure. Dynamic solutions, such as anomaly detection mechanisms, highlight the subtle and not-so-subtle deviations in application behaviour, shedding light on potential points of failure or intrusion, turning what was once a prelude to chaos into a symphony of preemptive intelligence.
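To ground the idea, here is a minimal sketch of anomaly detection over application behaviour, assuming a simple feature table of request latency and error counts per minute. It illustrates the general technique with scikit-learn's IsolationForest and synthetic data; it is not a representation of any vendor's product.

```python
# Minimal anomaly-detection sketch over synthetic application metrics.
# Features per observation: [request latency in ms, errors per minute].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic "normal" behaviour used to fit the detector.
normal_traffic = np.column_stack([
    rng.normal(120, 15, size=500),  # typical latency around 120 ms
    rng.poisson(2, size=500),       # a couple of errors per minute
])

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# New observations: two ordinary minutes and one latency spike with an error burst.
incoming = np.array([[118, 1], [135, 3], [900, 40]])
print(detector.predict(incoming))  # 1 = looks normal, -1 = flagged as anomalous
```

In practice, such scores would feed an alerting pipeline rather than a print statement, but the fit/predict loop is the core of the approach.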
In the era of advanced digital security, AI, exemplified by Dynatrace, stands as the pinnacle, swiftly navigating complex data webs to fortify against cyber threats. These digital fortresses, armed with cutting-edge AI, ensure uninterrupted insights and operational stability, safeguarding the integrity of data in the face of relentless cyber challenges.
India’s AI Stride
India, a burgeoning hub of technology and innovation, evidences AI's transformative powers within its fast-growing intelligent automation market. Driven by the voracious adoption of groundbreaking technological paradigms such as machine learning (ML), natural language processing (NLP), and Automated Workflow Management (AWM), sectors as disparate as banking, finance, e-commerce, healthcare, and manufacturing are swept up in an investment maelstrom. This is further bolstered by the Indian government’s supportive policies like 'Make in India' and 'Digital India'—bold initiatives underpinning the accelerating trajectory of intelligent automation in this South Asian powerhouse.
Consider the velocity at which the digital universe expands: IDC posits that the 5 billion internet denizens, along with the nearly 54 billion smart devices they use, generate about 3.4 petabytes of data each second. The implications for enterprise IT teams, caught in a fierce vice of incoming cyber threats, are profound. AI's emergence as the bulwark against such threats provides the assurance they desperately seek to maintain the seamless operation of critical business services.
The AI integration
The list of industries touched by the chilling specter of cyber threats is as extensive as it is indiscriminate. We've seen international hotel chains ensnared by nefarious digital campaigns, financial institutions laid low by unseen adversaries, Fortune 100 retailers succumbing to cunning scams, air traffic controls disrupted, and government systems intruded upon and compromised. Cyber threats stem from a tangled web of origins—be it an innocent insider's blunder, a cybercriminal's scheme, the rancor of hacktivists, or the cold calculation of state-sponsored espionage. The damage dealt by data breaches and security failures can be monumental, staggering corporations with halted operations, leaked customer data, crippling regulatory fines, and the loss of trust that often follows in the wake of such incidents.
However, the revolution is upon us—a rising tide of AI and accelerated computing that reduces the time and cost required to counter cyberattacks. With critical resources freed, businesses can now turn their energies toward primary operations and the cultivation of avenues for revenue generation. Let us embark on a detailed expedition, traversing various industry landscapes to witness firsthand how AI's protective embrace enables the fortification of databases, the acceleration of threat neutralization, and the staunching of cyber wounds to preserve the sanctity of service delivery and the trust between businesses and their clientele.
Public Sector
Examine the public sector, where AI is not merely a tool for streamlining processes but stands as a vigilant guardian of a broad spectrum of securities—physical, energy, and social governance among them. Federal institutions, laden with the responsibility of managing complicated digital infrastructures, find themselves at the confluence of rigorous regulatory mandates, exacting public expectations, and the imperative of protecting highly sensitive data. The answer, increasingly, resides in the AI pantheon.
Take the U.S. Department of Energy's (DOE) Office of Cybersecurity, Energy Security, and Emergency Response (CESER) as a case in point. An investment exceeding $240 million in cybersecurity R&D since 2010 manifests in pioneering projects, including AI applications that automate and refine security vulnerability assessments, and those employing cutting-edge software-defined networks that magnify the operational awareness of crucial energy delivery systems.
Financial Sector
Next, pivot our gaze to financial services—a domain where approximately $6 million evaporates with each data breach incident, compelling the sector to harness AI not merely for enhancing fraud detection and algorithmic trading but for its indispensability in preempting internal threats and safeguarding vaults of valuable data. Ventures like the FinSec Innovation Lab, born from the collaborative spirits of Mastercard and Enel X, demonstrate AI's facility in real-time threat response—a lifeline in preventing service disruptions and the erosion of consumer confidence.
Retail giants, repositories of countless payment credentials, stand at the threshold of this new era, embracing AI to fortify themselves against the theft of payment data—a grim statistic that accounts for 37% of confirmed breaches in their industry. Best Buy's triumph in refining its phishing detection rates while simultaneously dialling down false positives is a testament to AI's defensive prowess.
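As a concrete, simplified illustration of the kind of phishing detection referenced above, the toy classifier below learns from a handful of invented messages using TF-IDF features and logistic regression. It is a sketch of the general technique under those assumptions, not a description of Best Buy's system, which relies on far larger datasets and many more signals.

```python
# Toy phishing-email classifier: TF-IDF features plus logistic regression.
# The messages and labels below are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password at this link immediately",
    "Quarterly report attached, please review before Friday's meeting",
    "You have won a gift card, confirm your card number to claim it",
    "Lunch at noon tomorrow to discuss the project roadmap?",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

suspect = ["Urgent: confirm your password to avoid account suspension"]
print(classifier.predict(suspect))  # [1] -> flagged as likely phishing
```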
Smart Cities
Consider, too, the smart cities and connected spaces that epitomize technological integration. Their webs of intertwined IoT devices and analytical AI, which scrutinize the flows of urban life, are no strangers to the drumbeat of cyber threats. AI-driven defense mechanisms not only predict but also quarantine threats, ensuring the continuous, safe hum of civic life in the aftermath of intrusions.
Telecom Sector
Telecommunications entities, stewards of crucial national infrastructures, dial into AI for anticipatory maintenance, network optimization, and ensuring impeccable uptime. By employing AI to monitor the edges of IoT networks, they stem the tide of anomalies, deftly handle false users, and parry the blows of assaults, upholding the sanctity of network availability and individual and enterprise data security.
Automobile Industry
Similarly, the automotive industry finds AI an unyielding ally. As vehicles become complex, mobile ecosystems unto themselves, AI's cybersecurity role is magnified, scrutinizing real-time in-car and network activities, safeguarding critical software updates, and acting as the vanguard against vulnerabilities—the linchpin for the assured deployment of autonomous vehicles on our transit pathways.
Conclusion
The inclination towards AI-driven cybersecurity permits industries not merely to cope, but to flourish by reallocating their energies towards innovation and customer experience enhancement. Through AI's integration, developers spanning a myriad of industries are equipped to construct solutions capable of discerning, ensnaring, and confronting threats to ensure the steadfastness of operations and consumer satisfaction.
In the crucible of digital transformation, AI is the philosopher's stone—an alchemic marvel transmuting the raw data into the secure gold of business prosperity. As we continue to sail the digital ocean's intricate swells, the confluence of AI and cybersecurity promises to forge a gleaming future where businesses thrive under the aegis of security and intelligence.
References
- https://timesofindia.indiatimes.com/gadgets-news/why-adoption-of-ai-may-be-critical-for-businesses-to-tackle-cyber-threats-and-more/articleshow/106313082.cms
- https://blogs.nvidia.com/blog/ai-cybersecurity-business-resilience/
Introduction: The Internet’s Foundational Ideal of Openness
The Internet was built as a decentralised network to foster open communication and global collaboration. Unlike traditional media or state infrastructure, no single government, company, or institution controls the Internet. Instead, it has historically been governed by a consensus of the multiple communities, like universities, independent researchers, and engineers, who were involved in building it. This bottom-up, cooperative approach was the foundation of Internet governance and ensured that the Internet remained open, interoperable, and accessible to all. As the Internet began to influence every aspect of life, including commerce, culture, education, and politics, it required a more organised governance model. This compelled the rise of the multi-stakeholder internet governance model in the early 2000s.
The Rise of Multistakeholder Internet Governance
Representatives from governments, civil society, technical experts, and the private sector congregated at the United Nations World Summit on Information Society (WSIS), and adopted the Tunis Agenda for the Information Society. Per this Agenda, internet governance was defined as “… the development and application by governments, the private sector, and civil society in their respective roles of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet.” Internet issues are cross-cutting across technical, political, economic, and social domains, and no one actor can manage them alone. Thus, stakeholders with varying interests are meant to come together to give direction to issues in the digital environment, like data privacy, child safety, cybersecurity, freedom of expression, and more, while upholding human rights.
Internet Governance in Practice: A History of Power Shifts
While the idea of democratizing Internet governance is a noble one, the Tunis Agenda has been criticised for reflecting geopolitical asymmetries and relegating the roles of technical communities and civil society to the sidelines. Throughout the history of the internet, certain players have wielded more power in shaping how it is managed. Accordingly, internet governance can be said to have undergone three broad phases.
In the first phase, the Internet was managed primarily by technical experts in universities and private companies, which contributed to building and scaling it up. The standards and protocols set during this phase are in use today and make the Internet function the way it does. This was the time when the Internet was a transformative invention and optimistically hailed as the harbinger of a utopian society, especially in the USA, where it was invented.
In the second phase, the ideal of multistakeholderism was promoted, in which all those who benefit from the Internet work together to create processes that will govern it democratically. This model also aims to reduce the Internet’s vulnerability to unilateral decision-making, an ideal that has been under threat because this phase has seen the growth of Big Tech. What started as platforms enabling access to information, free speech, and creativity has turned into a breeding ground for misinformation, hate speech, cybercrime, Child Sexual Abuse Material (CSAM), and privacy concerns. The rise of generative AI only compounds these challenges. Tech giants like Google, Meta, X (formerly Twitter), OpenAI, Microsoft, Apple, etc. have amassed vast financial capital, technological monopoly, and user datasets. This gives them unprecedented influence not only over communications but also culture, society, and technology governance.
The anxieties surrounding Big Tech have fed into the third phase, with increasing calls for government regulation and digital nationalism. Governments worldwide are scrambling to regulate AI, data privacy, and cybersecurity, often through processes that lack transparency. An example is India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which was passed without parliamentary debate. Governments are also pressuring platforms to take down content through opaque takedown orders. Laws like the UK’s Investigatory Powers Act, 2016, are criticised for giving the government the power to indirectly mandate encryption backdoors, compromising the strength of end-to-end encryption systems. Further, the internet itself is fragmenting into the “splinternet” amid rising geopolitical tensions, in the form of Russia’s “sovereign internet” or through China’s Great Firewall.
Conclusion
While multistakeholderism is an ideal, Internet governance is a playground of contesting power relations in practice. As governments assert digital sovereignty and Big Tech consolidates influence, the space for meaningful participation of other stakeholders has been negligible. Consultation processes have often been symbolic. The principles of openness, inclusivity, and networked decision-making are once again at risk of being sidelined in favour of nationalism or profit. The promise of a decentralised, rights-respecting, and interoperable internet will only be fulfilled if we recommit to the spirit of Multi-Stakeholder Internet Governance, not just its structure. Efficient internet governance requires that the multiple stakeholders be empowered to carry out their roles, not just talk about them.
References
- https://www.newyorker.com/magazine/2024/02/05/can-the-internet-be-governed
- https://www.internetsociety.org/wp-content/uploads/2017/09/ISOC-PolicyBrief-InternetGovernance-20151030-nb.pdf
- https://itp.cdn.icann.org/en/files/government-engagement-ge/multistakeholder-model-internet-governance-fact-sheet-05-09-2024-en.pdf
- https://nrs.help/post/internet-governance-and-its-importance/
- https://daidac.thecjid.org/how-data-power-is-skewing-internet-governance-to-big-tech-companies-and-ai-tech-guys/