#FactCheck - AI-Generated Image Goes Viral as ‘Border 2’ Shooting Photo
Executive Summary
Border 2 is set to hit theatres today, January 23. Meanwhile, a photograph is going viral on social media showing actors Sunny Deol, Suniel Shetty, Akshaye Khanna and Jackie Shroff sitting together over a meal while a woman serves them food. Social media users are sharing the image with the claim that it captures a moment from the film’s set, with the actors eating during a break in shooting. However, CyberPeace research has found the viral claim to be false. Our investigation revealed that users are sharing an AI-generated image with a misleading claim.
Claim
On Instagram, a user shared the viral image on January 9, 2026, with the caption: “During the shooting of Border 2.” The link to the post, its archive link and screenshots can be seen below.

Fact Check:
To verify the claim, we first checked Google for the official star cast of Border 2. Our search showed that the actors seen in the viral image are not part of the film’s officially announced cast. Next, on closer examination of the image, we noticed that the actors’ facial structures and expressions appeared unnatural and distorted. The facial features did not look realistic, raising the suspicion that the image had been created using Artificial Intelligence (AI). We then scanned the viral image with the AI-generated content detection tool HIVE Moderation. The results indicated that the image is 95 per cent AI-generated.

In the final step of our investigation, we analysed the image using another AI-detection tool, Undetectable AI. According to the results, the viral image was confirmed to be AI-generated.
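Fact-checkers often run an image through more than one detector and compare the scores. As a rough sketch of how such results can be interpreted programmatically, the helper below maps a detector's 0–1 "AI-generated" probability to a verdict; the thresholds are illustrative assumptions of ours, not categories defined by HIVE Moderation or Undetectable AI.

```python
def classify_ai_score(score: float, threshold: float = 0.5) -> str:
    """Map a detector's 0-1 'AI-generated' probability to a verdict.

    The cut-offs here are illustrative, not the tools' own categories.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if score >= 0.9:
        return "very likely AI-generated"
    if score >= threshold:
        return "possibly AI-generated"
    return "likely authentic"

# The viral image scored 95 per cent on HIVE Moderation:
print(classify_ai_score(0.95))  # very likely AI-generated
```

In practice a verdict like this only flags an image for closer manual review; detector scores are probabilistic and should be corroborated, as done above, with a second tool and visual inspection.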
Conclusion:
Our research confirms that social media users are sharing an AI-generated image while falsely claiming that it is from the shooting of Border 2. The viral claim is misleading and false.

Introduction
Empowering today’s youth with the right skills is more crucial than ever in a rapidly evolving digital world. Every year on July 15th, the United Nations marks World Youth Skills Day to emphasise the critical role of skills development in preparing young people for meaningful work and resilient futures. As AI transforms industries and societies, equipping young minds with digital and AI skills is key to fostering security, adaptability, and growth in the years ahead.
Why AI Upskilling is Crucial in Modern Cyber Defence
Security in the digital age remains a complex challenge, with or without Artificial Intelligence (AI). AI is one of the great modern ironies: a paradox wrapped in code, where the cure and the curse are written in the same language. The very hand that protects the world from cyber threats can just as easily create those threats. Modern deployments of AI must therefore be designed to withstand the risks posed by AI itself and by other advanced technologies. A solid grasp of AI and machine learning mechanisms is no longer optional; it is fundamental to modern cybersecurity. Traditional cybersecurity training programmes rely on static content, which quickly becomes outdated and inadequate against new vulnerabilities. AI-powered solutions, such as intrusion detection systems and next-generation firewalls, use behavioural analysis instead of simple signature matching. Nevertheless, AI models are themselves vulnerable: malicious actors can feed them adversarial inputs or poisoned data to trick them into misclassification. According to research from Cisco, data poisoning is a major threat to AI-based defences.
As threats outpace the current understanding of cybersecurity professionals, there is a need to upskill them in advanced AI technologies so that they can fortify the security of existing systems. Two of the most important skills for professionals are AI/ML model auditing and data science. Skilled data scientists can sift through vast logs, from packet captures to user profiles, to detect anomalies, assess vulnerabilities, and anticipate attacks. A news report from Business Insider puts it well: ‘It takes a good-guy AI to fight a bad-guy AI.’ Generative AI is still a young technology; as a result, it both poses fresh security issues and faces security risks of its own, such as data exfiltration and prompt injection.
Another effective approach is Natural Language Processing (NLP), which helps machines process unstructured data such as emails, chat messages, and logs, enabling automated spam detection, sentiment analysis, and threat-context extraction. Security teams skilled in NLP can deploy systems that flag suspicious email patterns, detect malicious content in code reviews, and monitor internal networks for insider threats, all at speeds and scales humans cannot match.
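To make the idea of flagging suspicious email patterns concrete, here is a deliberately simple toy sketch: a weighted keyword filter. The phrase list and weights are invented for illustration; production systems use trained language models rather than fixed word lists, but the triage pattern (score, then flag for human review above a threshold) is the same.

```python
# Hypothetical phrase weights, for illustration only.
SUSPICIOUS_TERMS = {
    "urgent": 2,
    "verify your account": 3,
    "password": 2,
    "click here": 2,
    "wire transfer": 3,
}

def suspicion_score(text: str) -> int:
    """Sum the weights of suspicious phrases found in the text."""
    lowered = text.lower()
    return sum(w for term, w in SUSPICIOUS_TERMS.items() if term in lowered)

def flag_email(text: str, threshold: int = 4) -> bool:
    """Flag a message for human review once its score crosses a threshold."""
    return suspicion_score(text) >= threshold

print(flag_email("URGENT: verify your account password now"))  # True
print(flag_email("Lunch at noon?"))                            # False
```

The value of NLP-based tooling lies in running this kind of triage continuously across millions of messages, something no human security team can do by hand.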
These AI skills are not mere niceties; they have become essential in the current landscape. India is not far behind in this mission; alongside its western counterparts, it is committed to employing emerging technologies in its larger goal of advancement. With quiet confidence, India takes pride in its remarkable capacity to nurture exceptional talent in science and technology, with Indian minds making significant contributions across global arenas.
AI Upskilling in India
According to a March 2025 news report, Jayant Chaudhary, Minister of State, Ministry of Skill Development & Entrepreneurship, highlighted that various schemes under the Skill India Programme (SIP) ensure greater integration of emerging technologies, such as artificial intelligence (AI), cybersecurity, blockchain, and cloud computing, to meet industry demands. The SIP’s parliamentary brochure states that more than 6.15 million beneficiaries had received training as of December 2024. Other schemes that facilitate the education and training of professionals for roles such as Data Scientist, Business Intelligence Analyst, and Machine Learning Engineer include:
- Pradhan Mantri Kaushal Vikas Yojana 4.0 (PMKVY 4.0)
- Pradhan Mantri National Apprenticeship Promotion Scheme (PM-NAPS)
- Jan Shikshan Sansthan (JSS)
Another report shows how companies operating in India, such as Ernst & Young (EY), recognise both the potential of the Indian workforce and its gaps in emerging technologies, and are leading the way through internal upskilling. In response to the increasing need for AI expertise, EY has established an AI Academy, a programme designed to help businesses equip their employees with essential AI capabilities. Drawing on more than 200 real-world AI use cases, the programme offers interactive, structured learning that covers everything from basic concepts to sophisticated generative AI capabilities.
To better understand the need for these initiatives, it is worth turning to a report backed by Google.org and the Asian Development Bank, which suggests that India is at a turning point in the global adoption of AI. According to the research, “AI for All: Building an AI-Ready Workforce in Asia-Pacific,” India urgently needs to provide accessible and efficient AI upskilling despite having the largest workforce in the world. The report estimates that, by 2030, AI could boost the Asia-Pacific region’s GDP by up to USD 3 trillion, and India, with its young and fast-growing population, is key to realising that potential.
Conclusion and CyberPeace Resolution
As the world stands at the crossroads of innovation and insecurity, India finds itself uniquely poised, with its vast young population and growing technologies. But to truly safeguard its digital future and harness the promise of AI, the country must think beyond flagship schemes. Imagine classrooms where students learn not just to code but to question algorithms, workplaces where AI training is as routine as onboarding.
India’s journey towards digital resilience is not just about mastering technology but about cultivating curiosity, responsibility, and trust. CyberPeace is committed to this future and resolute in the collective pursuit of an ethically secure digital world. CyberPeace resolves to be an active catalyst in AI upskilling across India. We commit to launching specialised training modules on AI, cybersecurity, and digital ethics tailored for students and professionals. By working with educational institutions, skilling initiatives, and industry stakeholders, we seek to close the AI literacy gap and develop a workforce that is both morally aware and technologically proficient.
References
- https://www.helpnetsecurity.com/2025/03/07/ai-gamified-simulations-cybersecurity/
- https://www.businessinsider.com/artificial-intelligence-cybersecurity-large-language-model-threats-solutions-2025-5?utm
- https://apacnewsnetwork.com/2025/03/ai-5g-skills-boost-skill-india-targets-industry-demands-over-6-15-million-beneficiaries-trained-till-2024/
- https://indianexpress.com/article/technology/artificial-intelligence/india-must-upskill-fast-to-keep-up-with-ai-jobs-says-new-report-10107821/

Introduction
In today’s digitally connected era, social media has become an integral part of our lives. A large number of teenagers are active on social media, using their accounts to connect with friends and family. Social media makes it easy to communicate with larger communities and even showcase one’s creativity. On the other hand, it also poses challenges such as inappropriate content, online harassment, stalking, misuse of personal information, and abusive or disheartening content. Such threats, and the overuse of social media itself, can have unintended consequences for teenagers’ mental health. Data shows that some teens spend hours a day on social media, so it has a significant impact on them whether we notice it or not. Social media addiction and its negative repercussions, including overuse by teens and exposure to online threats and vulnerabilities, are growing concerns that must be taken seriously by social media platforms, regulators, and users themselves. Recently, Colorado and California led a joint lawsuit, filed by 33 states in the U.S. District Court for the Northern District of California, against Meta over concerns about child safety.
Meta and Concerns over Child User Safety
Recently, Meta, the company that owns Facebook, Instagram, WhatsApp, and Messenger, was sued by more than three dozen states for allegedly using features that hook children to its platforms. The lawsuit claims that Meta violated consumer protection laws and deceived users about the safety of its platforms. The states accuse Meta of designing manipulative features that induce young users’ compulsive and extended use and push them towards harmful content. Meta has responded by stating that it is working to provide a safer environment for teenagers and by expressing disappointment in the lawsuit.
According to the complaint filed by the states, Meta “designed psychologically manipulative product features to induce young users’ compulsive and extended use” of platforms like Instagram. The states allege that Meta’s algorithms were built to push children and teenagers into rabbit holes of toxic and harmful content, with features like “infinite scroll” and persistent alerts used to hook young users. Meta, in turn, expressed disappointment in the lawsuit, stating that it is working productively with companies across the industry to create clear, age-appropriate standards for the many apps teenagers use.
Unplug for Some Time
Overuse of social media is associated with mental health repercussions as well as online threats and risks. Social media’s effect on teenagers is driven by factors such as inadequate sleep, exposure to cyberbullying and online threats, and lack of physical activity. Admittedly, social media can help teens feel more connected to their friends and support systems and let them showcase their creativity to the online world. However, overuse by teens is often linked to underlying issues that require attention. To help teenagers, encourage them to use social media responsibly and to unplug from it for a while: to get outside in nature, be physically active, and express themselves creatively.
Understanding the threats & risks
- Psychological effects
- Addiction: Excessive use of social media can lead to procrastination, and it can become a physical and psychological addiction because it triggers the brain’s reward system.
- Associated mental health conditions: Excessive social media use can harm mental well-being and may contribute to depression, anxiety, self-consciousness, and even social anxiety disorder.
- Eye strain and carpal tunnel syndrome: Spending excessive time on screens can put real strain on your eyes; eye problems caused by computer or phone screen use fall under computer vision syndrome (CVS). Prolonged device use can also contribute to carpal tunnel syndrome, which is caused by pressure on the median nerve.
- Cyberbullying: Cyberbullying, the use of the internet or other digital communication technology to bully, harass, or intimidate others, is one of the major concerns in online interactions on social media. It may include spreading rumours or posting hurtful comments, and it has a lasting socio-psychological impact on victims.
- Online grooming: Online grooming refers to the tactics abusers deploy through the internet to sexually exploit children. Alarmingly, a bad actor can lure a child into their trap in as little as three minutes on average.
- Ransomware/malware/spyware: Cybercrooks spread ransomware, malware, and spyware by posting malicious links on social media, with consequences including financial loss, data loss, and reputational damage. Ransomware, for instance, is a type of malware designed to deny a user or organisation access to the files on their computer. It is therefore important to be cautious before clicking on any suspicious link.
- Sextortion: Sextortion is a crime in which the perpetrator demands money or sexual favours by threatening to expose the victim’s sexual activity. It is a form of sexual blackmail that often takes place on social media, with youngsters the most common targets. Cybercrooks also misuse advanced AI deepfake technology, which uses machine algorithms to create realistic-looking images and videos. Because deepfake technology is easily accessible, fraudsters exploit it to commit crimes including sextortion and to deceive and scam people with fake but realistic images or videos.
- Child sexual abuse material (CSAM): CSAM is illicit content prohibited by law and regulatory guidelines. While using the internet, children may also encounter age-restricted or inappropriate content that can be harmful to them. Regulatory guidelines require internet service providers to refrain from hosting CSAM and to block such inappropriate content.
- In-app purchases: Teen users also make in-app purchases on social media or in online games, where they can fall prey to financial fraud and easy-money scams. Fraudsters lure targets with enticing offers such as part-time or work-from-home jobs, small investments, or payment for liking content on social media. This is prevalent on social media: scammers ask victims for personal and financial information and commit financial fraud on the pretext of these exciting offers.
Safety tips:
To stay safe on social media, teens and other users are encouraged to stay aware of online threats and follow best practices such as:
- Browsing the web safely.
- Utilising the privacy settings of your social media accounts.
- Using strong passwords and enabling two-factor authentication.
- Being careful about what you post or share.
- Becoming familiar with the privacy policies of social media platforms.
- Being selective about adding unknown users to your social media network.
- Reporting any suspicious activity to the platform or relevant forum.
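As a small illustration of the "strong passwords" tip above, here is a toy heuristic that scores a password on length and character variety. It is a sketch for intuition only: established guidance (e.g. NIST SP 800-63B) emphasises length and uniqueness over strict composition rules, and real strength checkers also test against breached-password lists.

```python
import re

def password_strength(pw: str) -> str:
    """Classify a password as weak, moderate, or strong (rough heuristic)."""
    checks = [
        len(pw) >= 12,                               # reasonable length
        re.search(r"[a-z]", pw) is not None,         # lowercase letter
        re.search(r"[A-Z]", pw) is not None,         # uppercase letter
        re.search(r"\d", pw) is not None,            # digit
        re.search(r"[^A-Za-z0-9]", pw) is not None,  # symbol
    ]
    score = sum(checks)
    if score == 5:
        return "strong"
    if score >= 3:
        return "moderate"
    return "weak"

print(password_strength("password"))           # weak
print(password_strength("Blue-Tiger_Sings9"))  # strong
```

Even a "strong" result here says nothing about reuse: a unique password plus two-factor authentication protects far better than complexity alone.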
Conclusion:
Child safety is a major concern on social media platforms. Offences such as cyberstalking, hacking, online harassment and threats, sextortion, and financial fraud are among the most common cyber crimes on social media. The tech giants must ensure the safety of teen users by implementing the best protective mechanisms on their platforms. CyberPeace Foundation is advocating for a child-friendly SIM to protect children from the illicit influences of the internet and social media.
References:
- https://www.scientificamerican.com/article/heres-why-states-are-suing-meta-for-hurting-teens-with-facebook-and-instagram/
- https://www.nytimes.com/2023/10/24/technology/states-lawsuit-children-instagram-facebook.html

Introduction
Misinformation regarding health is sensitive and can have far-reaching consequences. These include its effect on personal medical decisions taken by individuals, lack of trust in conventional medicine, delay in seeking treatments, and even loss of life. The fast-paced nature and influx of information on social media can aggravate the situation further. Recently, a report titled Health Misinformation Vectors in India was presented at the Health of India Summit, 2024. It provided certain key insights into health-related misinformation circulating online.
The Health Misinformation Vectors in India Report
The analysis was conducted by doctors at First Check, a global health fact-checking initiative, alongside DataLEADS, a Delhi-based digital media and technology company. The report covers health-related social media content posted online between October 2023 and November 2024. It finds that misinformation about reproductive health, cancer, vaccines, and lifestyle diseases such as diabetes and obesity is the most prominent kind spread through social media. Misinformation about reproductive health includes illegal abortion methods, which often go unchecked, and even tips on conceiving a male child, among other things.
To combat this misinformation, the report encourages stricter regulation of health-related content on digital media, the incorporation of technology-aided health literacy and misinformation management into public health curricula, and recommends that tech platforms build algorithms that prioritise credible information and fact-checks. Doctors note that people affected by life-threatening diseases are particularly vulnerable to such misinformation, as they are desperate to find treatment options that give them or their family members a chance at life. In a diverse society, the lack of clear and credible information, limited access to or awareness of tools that cross-check content, and low digital literacy push people towards alternative sources of information, which also fosters a broader sense of public disengagement. The diseases mentioned in the report as prone to misinformation are life-altering and require attention from healthcare professionals.
CyberPeace Outlook
Globally, there are cases of medically unqualified social media influencers spreading false or misleading information on various health matters. The topics they cover are mostly associated with stigma and are still under active research, a gap that allows misinformation to flourish. One example is the misinformation about PCOS (Polycystic Ovary Syndrome) circulating online.
Amid all of this, YouTube has released a new feature aimed at combating health misinformation, trying to bridge the gap between healthcare professionals and Indians who look for trustworthy health-related information online. The initiative allows doctors, nurses, and other healthcare professionals to apply for a health information source licence, so that their informative videos are labelled as coming from a healthcare professional. Earlier, features such as the health source information panel and health content shelves were available only to health organisations; this step broadens verification to the licences of individual healthcare professionals.
As digital literacy continues to grow, methods of seeking credible information, especially regarding sensitive topics such as health, require a combined effort on the part of all the stakeholders involved. We need a robust strategy for battling health-related misinformation online, including more awareness programmes and proactive participation from the consumers as well as medical professionals regarding such content.
References
- https://timesofindia.indiatimes.com/india/misinformation-about-cancer-reproductive-health-is-widespread-in-india-impacting-medical-decisions-says-report/articleshow/115931612.cms
- https://www.ndtv.com/india-news/cancer-misinformation-prevalent-in-india-trust-in-medicine-crucial-report-7165458
- https://www.newindian.in/ai-driven-health-misinformation-poses-threat-to-indias-public-health-report/
- https://www.etvbharat.com/en/!health/youtube-latest-initiative-combat-health-misinformation-india-enn24121002361
- https://blog.google/intl/en-in/products/platforms/new-ways-for-registered-healthcare-professionals-in-india-to-reach-people-on-youtube/
- https://www.bbc.com/news/articles/ckgz2p0999yo