#FactCheck - AI-Generated Image Falsely Shared as ‘Border 2’ Shooting Photo
Executive Summary
Border 2 is set to hit theatres today, January 23. Meanwhile, a photograph is going viral on social media showing actors Sunny Deol, Suniel Shetty, Akshaye Khanna and Jackie Shroff sitting together and having a meal, while a woman serves them food. Social media users are sharing the image with the claim that it captures a moment from the sets of Border 2, with the actors eating during a break in shooting. However, CyberPeace research has found the viral claim to be false: our investigation revealed that users are sharing an AI-generated image with a misleading claim.
Claim
On Instagram, a user shared the viral image on January 9, 2026, with the caption: “During the shooting of Border 2.” The link to the post, its archive link and screenshots can be seen below.

Fact Check:
To verify the claim, we first checked Google for the official star cast of the film Border 2. Our search showed that the names of the actors seen in the viral image are not part of the film’s officially announced cast. Next, upon closely examining the image, we noticed that the facial structure and expressions of the actors appeared unnatural and distorted. The facial features did not look realistic, raising suspicion that the image might have been created using Artificial Intelligence (AI). We then scanned the viral image using the AI-generated content detection tool HIVE Moderation. The results indicated that the image is 95 per cent AI-generated.

In the final step of our investigation, we analysed the image using another AI-detection tool, Undetectable AI. According to the results, the viral image was confirmed to be AI-generated.
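The cross-check described above, accepting an image as AI-generated only when multiple detectors agree, can be sketched as a simple aggregation step. This is an illustrative toy, not the actual HIVE Moderation or Undetectable AI APIs; the detector names and the 0.0–1.0 score format are assumptions for the example.

```python
def flag_ai_generated(scores: dict[str, float], threshold: float = 0.9) -> bool:
    """Flag an image as likely AI-generated only when EVERY detector's
    confidence score (0.0-1.0) meets the threshold, mirroring the
    practice of confirming one tool's verdict with a second tool."""
    return all(score >= threshold for score in scores.values())

# Hypothetical scores only -- real detection services return richer responses.
detector_scores = {"hive_moderation": 0.95, "undetectable_ai": 0.92}
print(flag_ai_generated(detector_scores))  # True: both detectors agree
```

Requiring agreement across tools reduces the chance that a single detector's false positive drives the verdict.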
Conclusion:
Our research confirms that social media users are sharing an AI-generated image while falsely claiming that it is from the shooting of Border 2. The viral claim is misleading and false.


Introduction
Discussions at the conference focused on cybersecurity measures, specifically addressing cybercrime in the context of emerging technologies such as Non-Fungible Tokens (NFTs), Artificial Intelligence (AI), and the Metaverse. Session 5 focused on the interconnectedness between the darknet and cryptocurrency and the challenges it poses for law enforcement agencies and regulators. The panel noted that understanding AI is necessary for enterprises: AI models have limitations, but trustworthy AI is the goal, and AI technology must be transparent.
Darknet and Cryptocurrency
The darknet refers to the hidden part of the internet where illicit activities have proliferated in recent years. It was initially developed to provide anonymity, privacy, and protection to specific individuals such as journalists, activists, and whistleblowers. However, it has now become a playground for criminal activities. Cryptocurrency, particularly Bitcoin, has been widely adopted on the darknet due to its anonymous nature, enabling money laundering and unlawful transactions.
Three major points emerge from this relationship: the integrated nature of the darknet and cryptocurrency, the need for regulations to prevent darknet-based crimes, and the importance of striking a balance between privacy and security.
Key Challenges:
- Integrated Relations: The darknet and cryptocurrency have evolved independently, with different motives and purposes. It is crucial to understand the integrated relationship between them and how criminals exploit this connection.
- Regulatory Frameworks: There is a need for effective regulations to prevent crimes facilitated through the darknet and cryptocurrency while striking a balance between privacy and security.
- Privacy and Security: Privacy is a fundamental right, and any measures taken to enhance security should not infringe upon individual privacy. A multistakeholder approach involving tech companies and regulators is necessary to find this delicate balance.
Challenges Associated with Cryptocurrency Use:
The use of cryptocurrency on the darknet poses several challenges. The risks associated with darknet-based cryptocurrency crimes are a significant concern. Additionally, regulatory challenges arise due to the decentralised and borderless nature of cryptocurrencies. Mitigating these challenges requires innovative approaches utilising emerging technologies.
Preventing Misuse of Technologies:
The discussion emphasised the need to stay a step ahead of those who seek to misuse these technologies, which were built for entirely different purposes, and to prevent their exploitation for crime.
Monitoring the Darknet:
The darknet, as explained, is an elusive part of the internet that necessitates the use of a special browser for access. Initially designed for secure communication by the US government, its purpose has drastically changed over time. The darknet’s evolution has given rise to significant challenges for law enforcement agencies striving to monitor its activities.
Around 95% of the activities carried out on the darknet are associated with criminal acts, and estimates suggest that over 50% of global cybercrime revenue originates there, implying that roughly half of cybercrime, by value, is facilitated through the darknet.
The exploitation of the darknet has raised concerns regarding the need for effective regulation. Monitoring the darknet is crucial for law enforcement, national agencies, and cybersecurity companies. The challenges associated with the darknet’s exploitation and the criminal activities facilitated by cryptocurrency emphasise the pressing need for regulations to ensure a secure digital landscape.
Use of Cryptocurrency on the Darknet
Cryptocurrency plays a central role in the activities taking place on the darknet. The discussion highlighted its involvement in various illicit practices, including ransomware attacks, terrorist financing, extortion, theft, and the operation of darknet marketplaces. These applications leverage cryptocurrency’s anonymous features to enable illegal transactions and maintain anonymity.
AI's Role in De-Anonymizing the Darknet and Monitoring Challenges:
1. AI’s Potential in De-Anonymizing the Darknet
During the discussion, it was highlighted how AI could be utilised to help in de-anonymizing the darknet. AI’s pattern recognition capabilities can aid in identifying and analysing patterns of behaviour within the darknet, enabling law enforcement agencies and cybersecurity experts to gain insights into its operations. However, there are limitations to what AI can accomplish in this context. AI cannot break encryption or directly associate patterns with specific users, but it can assist in identifying illegal marketplaces and facilitating their takedown. The dynamic nature of the darknet, with new marketplaces quickly emerging, adds further complexity to monitoring efforts.
2. Challenges in Darknet Monitoring
Monitoring the darknet poses various challenges due to its vast amount of data, anonymous and encrypted nature, dynamically evolving landscape, and the need for specialised access. These challenges make it difficult for law enforcement agencies and cybersecurity professionals to effectively track and prevent illicit activities.
3. Possible Ways Forward
To address the challenges, several potential avenues were discussed. Ethical considerations, striking a balance between privacy and security, must be taken into account. Cross-border collaboration, involving the development of relevant laws and policies, can enhance efforts to combat darknet-related crimes. Additionally, education and awareness initiatives, driven by collaboration among law enforcement, government entities, and academia, can play a crucial role in combating darknet activities.
The panel also addressed questions from the audience:
- How can law enforcement agencies and regulators use AI to detect and prevent crimes involving the darknet and cryptocurrency? The panel answered that law enforcement officers should themselves be AI- and technology-ready, and that upskilling programmes should be put in place to that end.
- How should lawyers and the judiciary understand the problem and regulate it? The panel answered that AI should be judged by its outcomes, and the law has to be clear about what is acceptable and what is not.
- Can AI be aligned with human intention? Can we create an ethical AI, rather than merely talking about using AI ethically? The panel answered that we must first understand how to behave ethically ourselves: step one is to focus on our own ethical behaviour, and step two is to bring that ethical dimension into software and technologies. AI can outperform humans at many tasks, so we have to learn it. Aligning AI with human intention and creating ethical AI remains a challenge; the focus should be on ethical behaviour both in humans and in the development of AI technologies.
Conclusion
The G20 Conference on Crime and Security shed light on the intertwined relationship between the darknet and cryptocurrency and the challenges it presents to cybersecurity. The discussions emphasised the need for effective regulations, privacy-security balance, AI integration, and cross-border collaboration to tackle the rising cybercrime activities associated with the darknet and cryptocurrency. Addressing these challenges will require the combined efforts of governments, law enforcement agencies, technology companies, and individuals committed to building a safer digital landscape.
Executive Summary:
A video circulating on social media claims that people in Balochistan, Pakistan, hoisted the Indian national flag and declared independence from Pakistan. The claim has gone viral, sparking strong reactions and spreading misinformation about the geopolitical scenario in South Asia. Our research reveals that the video is misrepresented and actually shows a celebration in Surat, Gujarat, India.

Claim:
A viral video shows people hoisting the Indian flag and allegedly declaring independence from Pakistan in Balochistan. The claim implies that Baloch nationals are revolting against Pakistan and aligning with India.

Fact Check:
After researching the viral video, it became clear that the claim was misleading. We took key screenshots from the video and performed a reverse image search to trace its origin. The search led us to an earlier social media post, which clearly shows the event taking place in Surat, Gujarat, not Balochistan.
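Reverse image search services match images by perceptual fingerprints rather than exact bytes, so near-identical frames from the same event score as matches. As an illustration of the underlying idea only (not of any specific search engine's method), a toy average-hash comparison of two grayscale frames might look like:

```python
def average_hash(pixels):
    """Toy perceptual hash: 1 where a pixel is above the mean brightness.
    `pixels` is a flat list of grayscale values (e.g. an 8x8 thumbnail)."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count of differing bits; a small distance means near-identical images."""
    return sum(a != b for a, b in zip(h1, h2))

# Illustrative 2x2 "frames": a screenshot from the viral clip vs. a frame
# from the earlier Surat post (values are made up for the example).
viral_frame = [200, 30, 180, 40]
archived_frame = [198, 35, 175, 42]
dist = hamming_distance(average_hash(viral_frame), average_hash(archived_frame))
print(dist)  # 0 -- the frames fingerprint identically despite small pixel changes
```

Because the hash only records which regions are brighter than average, compression artefacts and small edits do not change it, which is what lets a reverse search surface the original post.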

In the original clip, a music band is performing in the middle of a crowd, with people holding Indian flags and enjoying the event. The environment, the language on signboards, and the festive atmosphere all confirm that this is an Indian Independence Day celebration. Another photo we found, taken from a different angle, further supports this finding.

However, some individuals intent on spreading false information shared this video out of context, claiming it showed people in Balochistan raising the Indian flag and declaring independence from Pakistan. By attaching a fake narrative, they turned a local celebration into a political stunt: a classic example of misinformation designed to mislead and stir public emotions.
To add further clarity, The Indian Express published a report on May 15 titled ‘Slogans hailing Indian Army ring out in Surat as Tiranga Yatra held’. According to the article, “A highlight of the event was music bands of Saifee Scout Surat, which belongs to the Dawoodi Bohra community, seen leading the yatra from Bhagal crossroads.” This confirms that the video was from an event in Surat, completely unrelated to Balochistan, and was falsely portrayed by some to spread misleading claims online.

Conclusion:
The claim that people in Balochistan hoisted the Indian national flag and declared independence from Pakistan is false and misleading. The video used to support this narrative is actually from Surat, Gujarat, India, during “The Tiranga Yatra”. Social media users are urged to verify the authenticity and source of content before sharing, to avoid spreading misinformation that may escalate geopolitical tensions.
- Claim: Mass uprising in Balochistan as citizens reject Pakistan and honor India.
- Claimed On: Social Media
- Fact Check: False and Misleading

There has been a struggle to create legal frameworks that can define where free speech ends and harmful misinformation begins, especially in democratic societies where the right to free expression is a fundamental value. Platforms like YouTube, Wikipedia, and Facebook have built huge user bases by hosting user-generated content, which includes anything a visitor posts on a website or social media page.
The legal and ethical landscape surrounding misinformation is dependent on creating a fine balance between freedom of speech and expression while protecting public interests, such as truthfulness and social stability. This blog is focused on examining the legal risks of misinformation, specifically user-generated content, and the accountability of platforms in moderating and addressing it.
The Rise of Misinformation and Platform Dynamics
Misinformation is amplified by algorithmic recommendations and social sharing mechanisms. The spread of false information is closely interwoven with the analysis of user data to identify target groups for political advertising. Disseminators of fake news benefit from social networks to reach more people, and from technology that enables faster distribution and makes it harder to distinguish fake news from hard news.
Multiple challenges emerge that are unique to social media platforms regulating misinformation while balancing freedom of speech and expression and user engagement. The scale at which content is created and published, the different regulatory standards, and moderating misinformation without infringing on freedom of expression complicate moderation policies and practices.
The social, political, and economic consequences of misinformation, influencing public opinion, electoral outcomes, and market behaviours, underscore the urgent need for effective regulation, as the cost of inaction can be profound and far-reaching.
Legal Frameworks and Evolving Accountability Standards
Safe harbour principles allow for the functioning of a free, open and borderless internet. The principle is embodied in Section 230 of the US Communications Decency Act (CDA) and Section 79 of India's Information Technology Act, and it plays a pivotal role in facilitating the growth and development of the Internet. The legal framework governing misinformation around the world is still in its nascent stages. Section 230 of the CDA protects platforms from legal liability for harmful content posted on their sites by third parties. It further allows platforms to police their sites for harmful content and protects them from liability if they choose not to.
By granting exemptions to intermediaries, these safe harbour provisions help nurture an online environment that fosters free speech and enables users to freely express themselves without arbitrary intrusions.
A shift in regulations has been observed in recent times. An example is the enactment of the Digital Services Act of 2022 in the European Union. The Act requires companies having at least 45 million monthly users to create systems to control the spread of misinformation, hate speech and terrorist propaganda, among other things. If not followed through, they risk penalties of up to 6% of the global annual revenue or even a ban in EU countries.
Challenges and Risks for Platforms
There are multiple challenges and risks faced by platforms that surround user-generated misinformation.
- Moderating user-generated misinformation is a big challenge, primarily because of the quantity of data in question and the speed at which it is generated. It further leads to legal liabilities, operational costs and reputational risks.
- Platforms face potential backlash whether they over-moderate or under-moderate. Over-moderation can be perceived as censorship and is often burdensome, while under-moderation can be read as insufficient governance that fails to protect users' rights.
- Another challenge is more technical: the limitations of AI and algorithmic moderation in detecting nuanced misinformation. This points to the need for human oversight to sift through misinformation, including AI-generated content, that automated systems miss.
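The limitation in the last point can be illustrated with a deliberately naive keyword filter: it catches crude fabrications but misses genuine footage shared under a false caption, which is exactly the kind of nuanced misinformation that requires human review. This is a toy sketch, not a real moderation pipeline, and the keyword list is invented for the example.

```python
# Hypothetical blocklist -- real systems use learned classifiers, not keywords.
SUSPECT_KEYWORDS = {"hoax", "miracle cure", "100% guaranteed"}

def naive_flag(post: str) -> bool:
    """Flag a post only if it contains a known suspect keyword."""
    text = post.lower()
    return any(kw in text for kw in SUSPECT_KEYWORDS)

# A crude fabrication is caught...
print(naive_flag("This miracle cure ends all disease!"))  # True
# ...but real footage with a false caption slips straight through.
print(naive_flag("Crowds in Balochistan hoist the Indian flag"))  # False
```

The second example mirrors the Surat video discussed earlier: nothing in the text itself is a tell-tale keyword, so only contextual verification by a human (or a far richer model) can detect the false framing.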
Policy Approaches: Tackling Misinformation through Accountability and Future Outlook
Regulatory approaches to misinformation each present distinct strengths and weaknesses. Government-led regulation establishes clear standards but may risk censorship, while self-regulation offers flexibility yet often lacks accountability. The Indian framework, including the IT Act and the Digital Personal Data Protection Act of 2023, aims to enhance data-sharing oversight and strengthen accountability. Establishing clear definitions of misinformation and fostering collaborative oversight involving government and independent bodies can balance platform autonomy with transparency. Additionally, promoting international collaborations and innovative AI moderation solutions is essential for effectively addressing misinformation, especially given its cross-border nature and the evolving expectations of users in today’s digital landscape.
Conclusion
Navigating the legal risks that user-generated misinformation poses requires a balance between protecting free speech and safeguarding the public interest. As digital platforms like YouTube, Facebook, and Wikipedia continue to host vast amounts of user content, accountability measures are essential to mitigate the harms of misinformation. Establishing clear definitions and collaborative oversight can enhance transparency and build public trust. Furthermore, embracing innovative moderation technologies and fostering international partnerships will be vital in addressing this cross-border challenge. As we advance, the commitment to creating a responsible digital environment must remain a priority to ensure the integrity of information in our increasingly interconnected world.
References
- https://www.thehindu.com/opinion/op-ed/should-digital-platform-owners-be-held-liable-for-user-generated-content/article68609693.ece
- https://hbr.org/2021/08/its-time-to-update-section-230
- https://www.cnbctv18.com/information-technology/deepfakes-digital-india-act-safe-harbour-protection-information-technology-act-sajan-poovayya-19255261.htm