#FactCheck - AI-Generated Image Falsely Shared as ‘Border 2’ Shooting Photo Goes Viral
Executive Summary
Border 2 is set to hit theatres today, January 23. Meanwhile, a photograph is going viral on social media showing actors Sunny Deol, Suniel Shetty, Akshaye Khanna and Jackie Shroff sitting together for a meal while a woman serves them food. Social media users are sharing the image claiming it was taken during the shooting of Border 2, allegedly capturing the actors eating during a break on the film’s set. However, research by CyberPeace has found the viral claim to be false: users are sharing an AI-generated image with a misleading caption.
Claim
On Instagram, a user shared the viral image on January 9, 2026, with the caption: “During the shooting of Border 2.” The link to the post, its archive link and screenshots can be seen below.

Fact Check:
To verify the claim, we first searched Google for the film’s officially announced star cast. Our search showed that the actors seen in the viral image are not part of the announced cast. Next, on closely examining the image, we noticed that the actors’ facial structures and expressions appeared unnatural and distorted; the features did not look realistic, raising suspicion that the image might have been created using Artificial Intelligence (AI). We then scanned the viral image with the AI-generated-content detection tool HIVE Moderation, which indicated a 95 per cent likelihood that the image is AI-generated.

In the final step of our investigation, we analysed the image using another AI-detection tool, Undetectable AI. According to the results, the viral image was confirmed to be AI-generated.
Conclusion:
Our research confirms that social media users are sharing an AI-generated image while falsely claiming that it is from the shooting of Border 2. The viral claim is misleading and false.

Related Blogs

Executive Summary
A video showing people running amid smoke and chaos during an attack is being widely shared on social media with the claim that it depicts an Iranian strike on Israel. The clip, around 29 seconds long, shows thick black smoke rising as people flee the scene, with voices heard calling for help. However, research by CyberPeace found that the claim is misleading. The video is actually from the September 2001 attacks on the World Trade Center in the United States.
Claim:
The video has been shared on Facebook with a caption claiming, “Iran has launched its most powerful attack on Israel. Thousands of soldiers have reportedly been killed. Massive protests have erupted within the country, and Israel appears completely helpless.”

Fact Check:
To verify the claim, we conducted a reverse image search using keyframes from the viral video. This led us to a longer version of the same footage uploaded on YouTube on September 11, 2016.
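Keyframe-based reverse image search of the kind described above typically relies on perceptual hashing: each frame is reduced to a compact fingerprint that survives re-encoding, resizing, and mild compression, so near-duplicate images land close together. The sketch below is illustrative only (it is not the tooling used in this investigation) and implements the simple "average hash" in pure Python on toy 8×8 grayscale frames; all names in it are hypothetical.

```python
# Illustrative sketch of perceptual ("average") hashing, the basic idea
# behind matching video keyframes against previously published footage.
# Real reverse-image-search systems use far more sophisticated fingerprints.

def average_hash(pixels):
    """Hash an 8x8 grayscale image (a list of 64 ints, 0-255) to a 64-bit int.

    Each bit records whether that pixel is brighter than the mean, so the
    hash tolerates re-encoding, mild blur, and resizing of the source frame.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests near-duplicate images."""
    return bin(h1 ^ h2).count("1")

# Two frames of the same scene hash to nearby values; an unrelated frame does not.
frame_a = [10] * 32 + [200] * 32           # half dark, half bright
frame_b = [12] * 32 + [198] * 31 + [11]    # same scene, slightly re-encoded
frame_c = [200, 10] * 32                   # a completely different pattern

ha, hb, hc = (average_hash(f) for f in (frame_a, frame_b, frame_c))
print(hamming_distance(ha, hb))  # small: likely the same frame
print(hamming_distance(ha, hc))  # large: unrelated image
```

In practice, a tool compares a frame’s hash against a large index of published images and treats a Hamming distance below some small threshold as a probable match, which is how a 29-second clip can be traced back to older uploads of the same footage.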

The relevant portion appears around the 2-minute 9-second mark. The video description identifies the footage as part of the September 2001 attacks on the World Trade Center in New York. Further, we found the same video in an archive folder on a website associated with the US Department of Commerce, which contains multiple images and videos related to the 9/11 attacks. This further confirms the origin of the footage.

Conclusion:
The viral claim is false. The video does not show an Iranian attack on Israel. It is from September 2001 and depicts the aftermath of the World Trade Center attacks in New York, USA.

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to witness it. Thanks to rapid advancements in artificial intelligence, the internet as we know it is turning into a treasure trove of hyper-optimised material over which vast bot armies battle. All of that advancement, however, has a price, paid primarily in human wellbeing. Releasing highly personalised chatbots on a populace already struggling with economic stagnation, terminal loneliness, and the ongoing destruction of our planet is not exactly a formula for improved mental health. This is the reality for the estimated 75% of children and teenagers who have chatted with fictitious, chatbot-generated characters. AI chatbots are becoming more and more integrated into our daily lives, assisting us with customer service, entertainment, healthcare, and education. But as the influence of these tools grows, so does the importance of accountability and ethical conduct. An investigation last year into the internal policies of a major international tech firm exposed alarming gaps: its AI chatbots were permitted to engage children in romantic roleplay, produce racially discriminatory reasoning, and make spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores an underlying global dilemma: how can we regulate AI to maintain child safety, guard against misinformation, and adhere to ethical standards without suppressing innovation?
The Guidelines and Their Gaps
Tech giants like Meta and Google are often criticised for overlooking child safety and the overall increase in mental health issues among children and adolescents. According to reports, Google introduced Gemini AI Kids, a kid-friendly version of its Gemini AI chatbot, which represents a major step in the incorporation of generative artificial intelligence (Gen-AI) into early schooling. Users under the age of thirteen can access this version of Gemini AI Kids through supervised accounts on the Family Link app.
AI operates on the premise of data collection and analysis. To safeguard children’s personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces particular safeguards. Under Section 9, before processing the data of children (defined as persons under the age of 18), Data Fiduciaries (entities that decide the purposes and means of processing personal data) must obtain verifiable consent from a parent or legal guardian. Furthermore, the Act expressly forbids processing activities that could endanger a child’s welfare, such as behavioural monitoring and advertising targeted at children. According to court interpretations, a child’s well-being includes not just physical health but also moral, ethical, and emotional development.
While the DPDP Act is a big step in the right direction, important lacunae remain in how it addresses AI and child safety. The Act largely concentrates on consent and harm prevention in data protection; age-gating systems, thorough risk rating, and limitations specific to AI-driven platforms are absent. It also overlooks threats to children’s emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are self-regulatory in nature and dispersed across several laws, such as the Bharatiya Nyaya Sanhita, 2023. These include platform disclaimers, technology-based detection of child sexual abuse material, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Enabling chatbots to engage in romantic roleplay with children is among the most concerning discoveries. Even when not explicitly sexual, such interactions can facilitate grooming, cause psychological harm, and desensitise children to inappropriate behaviour. Child protection experts hold that sexualised conversations with children in cyberspace are unequivocally unacceptable, and that permitting even "flirtatious" exchanges can normalise unsafe boundaries.
- International Standards and Best Practices - The concept of "safety by design" is central to child online safety frameworks around the world, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Act. These frameworks require platforms and developers to proactively design out risks rather than react to harms after the fact; any AI guidelines that leave loopholes for child-directed roleplay fall below this bare-minimum standard.
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The regulations also allowed AI to create false narratives provided they carried disclaimers. For example, chatbots could write articles promoting false health claims or smears against public officials, as long as the content was labelled "untrue." While disclaimers might offer thin legal cover, they do little to stop misleading information from proliferating; misinformation tends to spread widely precisely because users disregard caveat labels in favour of provocative assertions.
- Ethical Lines and Discriminatory Content - Allowing AI systems to generate racist arguments, even on request, is ethically questionable. Scholarly research into prejudice and bias may occasionally require such examples, but unregulated generation risks normalising damaging stereotypes. Researchers warn that such a practice moves platforms from being passive hosts of offensive speech to active generators of discriminatory content, a distinction that matters because it places responsibility squarely on developers and corporations.
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech: it is a direct reflection of corporate training data, policy decisions, and system engineering. This fact demands a greater level of accountability. Companies can update their guidelines after public criticism, but that such allowances existed in the first place indicates a lack of strong ethical oversight.
- Regulatory Gaps - Regulatory regimes for AI are currently fragmented. The EU AI Act, the OECD AI Principles, and various national policies all emphasise human rights, transparency, and accountability, but few specify clear rules for content risks such as child-directed roleplay or hate narratives. This absence of harmonised international rules leaves companies operating in the shadows, setting their own limits until challenged.
A proactive way forward would include:
- Express Child Protection Requirements: AI systems must categorically prohibit interactions with children involving flirting or romance.
- Misinformation Protections: Generative AI must not be allowed to produce knowingly false material, regardless of disclaimers.
- Bias Reduction: Developers must proactively train systems not to generate discriminatory narratives, rather than merely labelling such material as optional output.
- Independent Regulation: External audit and ethics review boards can supply transparency and accountability independent of internal company regulations.
Conclusion
The contentious guidelines are more than the internal folly of a single firm; they point to a deeper systemic issue in AI governance. The stakes rise as generative AI becomes ever more integrated into politics, healthcare, education, and social interaction. Racism, false information, and inadequate child safety measures are severe issues that demand swift resolution. Corporate self-regulation is only one part of the answer; multi-stakeholder participation, stronger global frameworks, and enforceable ethical standards are also needed. In the end, trust in AI systems will rest on their ability to preserve the truth, protect the vulnerable, and reflect universal human values rather than corporate interests alone.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
- https://www.lakshmisri.com/insights/articles/ai-for-children/
- https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
- https://www.unicef.org/documents/guidelines-industry-online-child-protection
- https://www.oecd.org/en/topics/sub-issues/ai-principles.html
- https://artificialintelligenceact.eu/

Executive Summary
Amid reports that the death toll in Iran’s ongoing protests has risen to 2,571, a video has been widely circulated on social media showing a man slapping a person dressed in clerical attire after an argument. Users sharing the clip claim that public anger in Iran has escalated to the point where people are now physically attacking religious clerics. However, research by the CyberPeace Foundation has found this claim to be misleading: the video is not recent and has no connection to the current protests in Iran. In fact, the clip dates back to 2021 and was entirely scripted.
Claim
On January 14, 2026, users on X (formerly Twitter) shared the viral video with captions suggesting that Iranian citizens are openly assaulting clerics amid the ongoing unrest. One such post stated that the situation in Iran had deteriorated so badly that people were now beating religious leaders.
The link, archived version, and screenshot of the post are available below:

Fact Check:
To verify the authenticity of the claim, the CyberPeace Foundation extracted keyframes from the viral video and conducted a Google reverse image search. This led investigators to a report published on April 19, 2021, on the Persian-language website of Deutsche Welle (DW). The visuals matched the viral clip exactly, confirming that the footage is nearly five years old, not recent. Here is the link to the original video, along with a screenshot:

Further examination of reports by the Fars News Agency revealed that Tehran police had conducted a detailed probe into the video at the time and declared it fake and pre-scripted. According to Tehran Police Chief Hossein Rahimi, the individual seen wearing religious attire was not a cleric: he was actually employed at a carpet-cleaning shop in Tehran, and the man seen slapping him was his own son.
Police stated that the video was deliberately staged and circulated to provoke public sentiment and create unrest by falsely linking it to religious tensions. Both the father and son were arrested, and images of them in police custody were published in contemporaneous reports. Additional confirmation was found on the Independent Persian website, which had also reported on the incident on April 19, 2021, reiterating that the video was fabricated and unrelated to any protest movement. Here is the link to the original video, along with a screenshot:

Conclusion
The claim that the viral video shows an Iranian protester slapping a cleric during the current wave of protests is false. The video is from 2021, was scripted, and has no link to the ongoing demonstrations in Iran. It is being reshared with a misleading narrative to spread disinformation and inflame public sentiment.