#FactCheck: Viral video claims BSF personnel thrashed a man selling the Bangladesh national flag in West Bengal
Executive Summary:
A video circulating online claims to show a man being assaulted by BSF personnel in India for selling Bangladesh flags at a football stadium. The footage has stirred strong reactions and cross-border concerns. However, our research confirms that the video is neither recent nor related to any incident in India. The content has been wrongly framed and shared with misleading claims, misrepresenting the actual incident.
Claim:
It is being claimed through a viral post on social media that a Border Security Force (BSF) soldier physically attacked a man in India for allegedly selling the national flag of Bangladesh in West Bengal. The viral video further implies that the incident reflects political hostility towards Bangladesh within Indian territory.

Fact Check:
After conducting thorough research, including visual verification, reverse image searches, and confirmation of elements in the video's background, we determined that the video was filmed outside Bangabandhu National Stadium in Dhaka, Bangladesh, during the crowd buildup before the AFC Asian Cup match between Bangladesh and Singapore.

Further research confirmed that the man seen being assaulted is a local flag-seller named Hannan. Eyewitness accounts and local news sources indicate that Bangladesh Army personnel were deployed to manage the crowd that day. During the crowd-control effort, a soldier assaulted the vendor with excessive force. The incident sparked public outrage, and the Army responded by identifying the soldier responsible and taking disciplinary measures. The victim was reportedly offered compensation for the misconduct.

Conclusion:
Our research confirms that the viral video does not depict any incident in India. The claim that a BSF officer assaulted a man for selling Bangladesh flags is false and misleading. The real incident occurred in Bangladesh and involved a Bangladesh Army soldier during crowd control at a football event. This case highlights the importance of verifying viral content before sharing it, as misinformation can cause unnecessary panic, tension, and international misunderstanding.
- Claim: A viral video shows BSF personnel thrashing a man selling the Bangladesh national flag in West Bengal
- Claimed On: Social Media
- Fact Check: False and Misleading

Executive Summary:
An image of the April 8 solar eclipse, which was AI-generated and not a real photograph of the astronomical event, has been spreading on social media. Despite claims of its authenticity, CyberPeace's analysis showed that the image was made using Artificial Intelligence image-generation algorithms. The total solar eclipse on April 8 was observable only from places on the North American continent located in the path of totality, with partial visibility elsewhere. NASA live-streamed the eclipse for people outside the path of totality. The spread of false information about rare celestial events underscores the need to rely on trustworthy sources like NASA for accurate information.
Claims:
An image making the rounds on social networks purports to be a real photograph of the solar eclipse of April 8.




Fact Check:
After receiving the claim, we first ran a keyword search to check whether NASA had posted any similar image of the viral photo, or of any celestial event that might have produced it, on its official social media accounts or website. The total eclipse on April 8 was experienced in certain parts of North America that lay in the path of totality. Mazatlán, Mexico, was among the first places to witness it. A partial eclipse was also visible to those outside the path of totality.
Next, we ran the image through Hive Moderation's AI image detection tool, which found it to be 99.2% likely AI-generated.

Following that, we applied another AI image detection tool, Isitai, which found the image to be 96.16% likely AI-generated.

With the help of AI detection tools, we came to the conclusion that the claims made by different social media users are fake and misleading. The viral image is AI-generated and not a real photograph.
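For readers who want to reproduce this kind of check programmatically, the two detectors' percentage scores can be combined into a single verdict. The sketch below is purely illustrative: the scores are the ones reported above, but the 0.9 threshold and the averaging rule are our own assumptions, not part of either tool's methodology.

```python
# Illustrative sketch: combining AI-image-detection scores into a verdict.
# The 0.9 threshold and the simple averaging rule are assumptions for
# demonstration only, not part of Hive Moderation's or Isitai's tooling.

def classify_image(scores, threshold=0.9):
    """Return a verdict from a dict of detector name -> probability
    (in [0, 1]) that the image is AI-generated."""
    avg = sum(scores.values()) / len(scores)
    if avg >= threshold:
        return "likely AI-generated"
    if avg <= 1 - threshold:
        return "likely authentic"
    return "inconclusive"

# Scores reported in this fact check (99.2% and 96.16%).
detections = {"hive": 0.992, "isitai": 0.9616}
print(classify_image(detections))  # likely AI-generated
```

In practice, an inconclusive result should trigger further manual checks (reverse image search, source verification) rather than a definitive call either way.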
Conclusion:
Hence, the image circulating on the internet as a real photo of the April 8 eclipse is AI-generated. In spite of claims to the contrary, our analysis showed that the photo was created using an artificial intelligence algorithm. The total eclipse was not visible everywhere in North America, but only along the eclipse path, with partial visibility elsewhere. Through AI detection tools, we were able to establish that the image is fake. When discussing rare celestial phenomena, it is important to rely on information provided by trusted sources like NASA.
- Claim: A viral image of a solar eclipse claiming to be a real photograph of the celestial event on April 08
- Claimed on: X, Facebook, Instagram, website
- Fact Check: Fake & Misleading
Introduction
The judiciary as an institution has always been kept on a pedestal and is often seen as the embodiment of justice. From dictatorship to democracy, the judiciary plays a central role; even where the judiciary is controlled, the legitimacy of policies, in one sense or another, is derived from it. In democracies around the world, the independence and well-being of the judiciary are seen as the barometer of democracy’s strength. In this global age, where technology is omnipresent, the judiciary is no exception. Now more than ever, when the judiciary is at the centre of evaluative focus, it becomes imperative to make the judiciary transparent. Digitisation of the judiciary is not just an administrative reform; it is an extension of constitutionalism into the technological realm, an effort to ensure that justice is accessible, transparent, and efficient. July 25, the International Day on Judicial Well-being, is commemorated every year with a clear message: judicial well-being supports “anti-corruption, access to justice, and sustainable peace.”
Digitisation by Design: Justice in the Age of Transformation
The Prime Minister of India envisioned the future of the Indian legal system in alignment with a digitised world when he said, “Technology will integrate police, forensics, jails, and courts, and will speed up their work as well. We are moving towards a justice system that will be fully future-ready.” Although this future faces many challenges, various initiatives ease the transition. India is streamlining operations, reducing delays, and enhancing access to justice for all by integrating AI into legal research, case management, judicial procedures, and law enforcement. Machine Learning (ML), Natural Language Processing (NLP), Optical Character Recognition (OCR), and predictive analytics are just a few of the AI-powered technologies currently being used to strengthen crime prevention, automate administrative duties, and improve case monitoring.
The digitisation of Indian courts is a structural necessity rather than just a question of contemporary convenience. Miscarriages of justice have frequently resulted from the growing backlog of cases, challenges with maintaining records, and the loss of physical files. In the seminal case of State of U.P. v. Abhay Raj Singh, the courts acknowledged that a conviction could be overturned by missing records alone. With millions of legal documents at risk, digitisation becomes a shield against such a collapse and a tool for preserving judicial memory.
Judicial Digitalisation in India: Institutional Initiatives and Infrastructural Advancements
For centuries, towering bundles of courtroom files stood as dusty monuments to knowledge: sacred, chaotic, and accessible to a select few. But as we stand in 2025, the physical boundaries of a traditional courtroom have blurred, and the Indian government is actively working to transform the legal system. The e-Courts Mission Mode Project is a flagship initiative that aims to utilise Information and Communication Technology (ICT) to modernise and advance the Indian judiciary. This effort, led by the Department of Justice, Government of India, is being carried out in close coordination with the Supreme Court of India’s e-Committee. As the Ministry of Law and Justice stated in response to a query in the Rajya Sabha, the Supreme Court (SC) held 7.5 lakh hearings through video conferencing between 2020 and 2024. Technological tools such as the Supreme Court Vidhik Anuvaad Software (SUVAS), the Case Information Software (CIS), and the Supreme Court Portal for Assistance in Court’s Efficiency (SUPACE) were established to make all pertinent case facts easily available. In another move, the SC Registry, in close coordination with IIT Madras, has created and implemented AI- and ML-based technologies integrated with the Registry’s electronic filing software. This underscores that cybersecurity and digital infrastructure are no longer administrative add-ons but essential building blocks for ensuring judicial transparency, efficiency, and resilience.
E-Governance and Integrity: The Judiciary in Transition
The United Nations recognises the fundamentals of the judiciary’s well-being and how corruption, like rust, taints not just a single judge but the perceived integrity of the whole institution. This threat is recognised by the United Nations Convention against Corruption (UNCAC), particularly Article 11, which urges the protection of the judiciary’s independence and integrity. Digitisation, while it cannot operate in a vacuum, acts as a structural antidote to corruption by embedding transparency into the fabric of justice delivery: automated registry systems, e-filing, and real-time access to case data drastically reduce discretionary power and the potential for behind-the-scenes manipulation. However, digital systems are only as ethical as the people who design, maintain, and oversee them, and they bring their own limitations.
Conclusion: CyberPeace and the Future of Ethical Digital Justice
The potential of digitalisation resides not just in efficiency but also in equity, as India’s judiciary balances tradition and change. A robust democracy, where justice is lit by code rather than hidden under files, is built on a foundation of an open, accessible, and technologically advanced court. This change is not risk-free, though. Secure justice must also be a component of digital justice. The very values that digitisation seeks to preserve are at risk from algorithmic opacity, data breaches, and insecure technologies.
Our vision is not just of a digitalised court system but of a digitally just society, one where judicial data is protected, legal processes are democratised, and innovation upholds constitutionalism. Therefore, as a step forward, CyberPeace resolves to support AI upskilling for legal professionals, advocate for secure-by-design court infrastructure, and facilitate dialogue between technologists and judicial actors to build trust in the digital justice ecosystem. CyberPeace is dedicated to cyber transparency, privacy protection, and ethical AI.
References
- https://www.un.org/en/observances/judicial-well-being
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2106239
- https://www.barandbench.com/view-point/facilitating-legal-access-digitalization-of-supreme-court-high-court-records
- https://www.pib.gov.in/PressReleaseIframePage.aspx?PRID=2085127
- https://www.medianama.com/2024/12/223-supreme-court-seven-lakh-video-conferences-four-year-rajya-sabha/

Artificial Intelligence (AI) provides a varied range of services and continues to attract intrigue and experimentation. It has altered how we create and consume content. Specific prompts can now be used to create desired images, enhancing experiences of storytelling and even education. However, as this content can influence public perception, its potential to cause misinformation must also be noted. The realistic nature of these images can make it hard for the untrained eye to discern them as artificially generated. Because AI generates output by analysing the data it was trained on, gaps in contextual knowledge and human biases (introduced while framing prompts) also come into play. The stakes are higher when dabbling with subjects such as history, as there is a fine line between creating content for mere entertainment and spreading misinformation when biases and veracity are left unchecked. For instance, an AI-generated image of London during the Black Death might include inaccurate details, misleading viewers about the past.
The Rise of AI-Generated Historical Images as Entertainment
Recently, AI-generated images and videos of various historical events, presented from the point of view of people present, have been circulating all over the internet. Some depict the streets of London during the Black Death in the 1300s, the eruption of Mount Vesuvius at Pompeii, etc. Hogne and Dan, two creators who operate the TikTok accounts POV Lab and Time Traveller POV, state that they create such videos because they feel a first-person perspective is an interesting way to bring history back to life while highlighting its fascinating parts and helping the audience learn something new. Mostly sensationalised for visual impact and storytelling, such content has been called out by historians for inconsistencies in period detail. The artists themselves admit their creations are inaccurate, describing them as artistic interpretations rather than fact-checked documentaries.
It is important to note that AI models may inaccurately depict objects (issues with lateral inversion), people (anatomical implausibilities), or scenes due to a “present-ist” bias. As noted by Lauren Tilton, an associate professor of digital humanities at the University of Richmond, many AI models primarily rely on data from the last 15 years, making them prone to modern-day distortions, especially when analysing and creating historical content. The stated idea is to spark interest rather than replace genuine historical facts, and engagement with these images and videos is assumed to be partly a product of fascination with emerging AI tools. Apart from this, there are also chatbots like Hello History and Character.ai, which enable simulated interactions with historical figures and have piqued curiosity.
Although it makes for an interesting perspective, one cannot ignore that our inherent biases play a role in how we perceive the information presented. Dangerous consequences include feeding into conspiracy theories and the erasure of facts, as information is geared toward garnering attention and providing entertainment. Furthermore, exposure of such content to an impressionable audience with a short attention span increases the gravity of the matter. In such cases, disclosure of the sources used for creation becomes an important factor.
Acknowledging the risks posed by AI-generated images and their susceptibility to creating misinformation, the Government of Spain has taken a step towards regulating AI-generated content. It has passed a bill that mandates the labelling of AI-generated images; failure to do so would warrant massive fines (up to $38 million or 7% of a company’s turnover). The idea is to ensure that creators label their content, which would help viewers distinguish artificially created images from real ones.
The Way Forward: Navigating AI and Misinformation
While AI-generated images make for exciting possibilities for storytelling and enabling intrigue, their potential to spread misinformation should not be overlooked. To address these challenges, certain measures should be encouraged.
- Media Literacy and Awareness – In this day and age, critical thinking and media literacy among consumers of content are imperative. Awareness, understanding, and access to tools that aid in detecting AI-generated content can prove helpful.
- AI Transparency and Labelling – Implementing regulations similar to Spain’s labelling bill could serve as a useful guide for people who have yet to learn to tell AI-generated content apart from authentic material.
- Ethical AI Development – AI developers must prioritise ethical considerations, training models on diverse and historically accurate datasets and sources to minimise biases.
As AI continues to evolve, balancing innovation with responsibility is essential. By taking proactive measures early, we can harness AI’s potential while safeguarding historical integrity and public trust.
References:
- https://www.npr.org/2023/06/07/1180768459/how-to-identify-ai-generated-deepfake-images
- https://www.nbcnews.com/tech/tech-news/ai-image-misinformation-surged-google-research-finds-rcna154333
- https://www.bbc.com/news/articles/cy87076pdw3o
- https://newskarnataka.com/technology/government-releases-guide-to-help-citizens-identify-ai-generated-images/21052024/
- https://www.technologyreview.com/2023/04/11/1071104/ai-helping-historians-analyze-past/
- https://www.psypost.org/ai-models-struggle-with-expert-level-global-history-knowledge/
- https://www.youtube.com/watch?v=M65IYIWlqes&t=2597s
- https://www.vice.com/en/article/people-are-creating-records-of-fake-historical-events-using-ai/
- https://www.reuters.com/technology/artificial-intelligence/spain-impose-massive-fines-not-labelling-ai-generated-content-2025-03-11/
- https://www.theguardian.com/film/2024/sep/13/documentary-ai-guidelines