#FactCheck - AI-Generated Image of Abhishek Bachchan and Aishwarya Rai Falsely Linked to Kedarnath Visit
A photo featuring Bollywood actor Abhishek Bachchan and actress Aishwarya Rai is being widely shared on social media. In the image, the Kedarnath Temple is clearly visible in the background. Users are claiming that the couple recently visited the Kedarnath shrine for darshan.
CyberPeace Foundation’s research found the viral claim to be false. The image of Abhishek Bachchan and Aishwarya Rai is not a real photograph but an AI-generated one, and it is being misleadingly shared as genuine.
Claim
On January 14, 2026, a user on X (formerly Twitter) shared the viral image with a caption suggesting that all rumours had ended and that the couple had restarted their life together. The post further claimed that both actors were seen smiling after a long time, implying that the image was taken during their visit to Kedarnath Temple.
The post has since been widely circulated across social media platforms.

Fact Check:
To verify the claim, we first conducted a keyword search on Google related to Abhishek Bachchan, Aishwarya Rai, and a Kedarnath visit. However, we did not find any credible media reports confirming such a visit.
On closely examining the viral image, several visual inconsistencies raised suspicion about it being artificially generated. To confirm this, we scanned the image using the AI detection tool Sightengine. According to the tool’s analysis, the image was found to be 84 percent AI-generated.

Additionally, we scanned the same image using another AI detection tool, HIVE Moderation. The results showed an even stronger indication, classifying the image as 99 percent AI-generated.
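Both tools return a probability score rather than a binary verdict. The sketch below shows one way such a score might be interpreted; the function, threshold, and output format are illustrative assumptions of ours, not the actual API or schema of Sightengine or Hive Moderation:

```python
# Hypothetical interpretation of an AI-image-detection probability score.
# The threshold and verdict strings are illustrative assumptions, not the
# actual output of Sightengine or Hive Moderation.

def classify_ai_score(score: float, threshold: float = 0.5) -> str:
    """Map a 0-1 'AI-generated' probability to a human-readable verdict."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if score >= threshold:
        return f"likely AI-generated ({score:.0%} confidence)"
    return f"likely authentic ({1 - score:.0%} confidence)"

# The viral image scored 0.84 on one tool and 0.99 on the other.
print(classify_ai_score(0.84))  # likely AI-generated (84% confidence)
print(classify_ai_score(0.99))  # likely AI-generated (99% confidence)
```

In practice, fact-checkers treat such scores as one signal among several, alongside visual inconsistencies and keyword searches, rather than as conclusive proof on their own.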

Conclusion
Our research confirms that the viral image showing Abhishek Bachchan and Aishwarya Rai at Kedarnath Temple is not authentic. The picture is AI-generated and is being falsely shared on social media to mislead users.

Introduction
In recent years, the online gaming sector has seen tremendous growth and is one of the fastest-growing components of the creative economy, contributing significantly to innovation, employment generation and export earnings. India possesses a large pool of skilled young professionals, strong technological capabilities and a rapidly growing domestic market, which together give the country an opportunity to assume a leadership role in the global value chain of online gaming. At the same time, the industry has become an environment ripe for exploitation and abuse, with notable cases of fraud, money laundering and other emerging cybercrimes. To protect the interests of players, ensure fair play and competition, and provide a safe and secure online gaming environment, dedicated gaming regulation had become the need of the hour.
On 20 August 2025, the Union government introduced the Promotion and Regulation of Online Gaming Bill, 2025 in the Lok Sabha, seeking to prohibit online money gaming, including advertisements and financial transactions related to such platforms. The Lok Sabha passed the Bill at 5 PM the same day, and the Rajya Sabha, the upper house of Parliament, passed it on 21 August 2025. The Bill can be seen as a progressive step towards building safer online gaming spaces for everyone, especially the youth, and towards combating the emerging cybercrime threats in the online gaming landscape.
Key Highlights of the Bill
The Bill extends to the whole of India. It also applies to any online money gaming service offered within India or operated from outside the country but accessible in India.
- Definition of E-sports:
Section 2(1)(c) of the Bill defines e-sports as an online game which:
(i) is played as part of multi-sports events;
(ii) involves organised competitive events between individuals or teams, conducted in multiplayer formats governed by predefined rules;
(iii) is duly recognised under the National Sports Governance Act, 2025, and registered with the Authority or agency under section 3;
(iv) has outcome determined solely by factors such as physical dexterity, mental agility, strategic thinking or other similar skills of users as players;
(v) may include payment of registration or participation fees solely for the purpose of entering the competition or covering administrative costs and may include performance-based prize money by the player; and
(vi) shall not involve the placing of bets, wagers or any other stakes by any person, whether or not such person is a participant, including any winning out of such bets, wagers or any other stakes.
- Prohibition of Online Money Gaming and Advertisement thereof
The Bill prohibits the offering of online money games and online money gaming services. It also bans all forms of advertisements or promotions connected to online money games, including endorsements by individuals or entities.
- Financial Transactions
Banks, financial institutions, and other intermediaries are barred from facilitating transactions related to online money gaming services.
- Criminal Liability
Violation of the provisions on online money gaming can result in imprisonment for up to three years, a fine of up to ₹1 crore, or both. Repeat offenders face stricter punishment with higher fines and longer jail terms.
- Cognizable and Non-Bailable Offences
Offences relating to offering online money gaming services and facilitating financial transactions for such games are categorised as cognizable and non-bailable, giving law enforcement agencies the power to act without requiring prior approval.
In conversation with CyberPeace ~
Shailendra Vikram Singh, Former Deputy Secretary (Cyber & Information Security), Ministry of Home Affairs, GoI, highlighted that:
“The passage of the Promotion and Regulation of Online Gaming Bill, 2025 in the Lok Sabha highlights the government’s growing priority on national security, public safety, and health in digital regulation. Unfortunately, the real money gaming industry, despite its growth and promise, did not take proactive steps to address these concerns. The absence of safeguards and engagement left the government with no choice but to adopt a blanket ban. Having worked on this issue from both the government and industry side, the clear lesson is that in sensitive digital sectors, early regulatory alignment and constructive dialogue are not optional but essential. Going forward, collaboration is the only way to achieve a balance between innovation and responsibility.”
CyberPeace Outlook
The Promotion and Regulation of Online Gaming Bill, 2025, marks a decisive policy shift by simultaneously fostering the growth of e-sports, educational and social gaming, and imposing an absolute prohibition on online money games. By recognising e-sports as legitimate, skill-based competitive sports under the National Sports Governance Act, 2025, and establishing a central Authority for oversight, registration, and regulation, the Bill creates an institutional framework for safe and responsible development of the sector. The Bill completely bans real money games (RMGs), regardless of whether they are skill-based, chance-based or both, raising significant questions about the legal standing of RMG companies, a concern the gaming industry has already voiced. Further, it addresses urgent threats such as cybercrime, gaming addiction, online betting, money laundering, and the misuse of gaming platforms for illicit activities. The move reflects a balanced approach, encouraging innovation and digital skill-building while safeguarding public order, consumer interests, and financial integrity.
References
- https://prsindia.org/files/bills_acts/bills_parliament/2025/Bill_Text-Online_Gaming_Bill_2025.pdf
- https://prsindia.org/billtrack/the-promotion-and-regulation-of-online-gaming-bill-2025
- https://www.hindustantimes.com/india-news/rajya-sabha-clears-online-gaming-bill-a-day-after-lok-sabha-approval-101755766847840.html

Introduction
The rise of start-up culture, increasing investments, and technological breakthroughs are encouraging innovation, including the incorporation of generative Artificial Intelligence. With the growing focus on human-centred AI, its potential to transform industries like education is undeniable, enhancing experiences and opening up new ways of learning. Recently, a Delhi-based non-profit called Rocket Learning, in collaboration with Google.org, launched Appu, a personalised AI educational tool providing a multilingual and conversational learning experience for children between 3 and 6.
AI Appu
Developed in six months with the help of dedicated Google.org fellows, the interactive Appu has resonated with those the founders call “super-users”, i.e. parents and caregivers. Instead of redirecting students to standard content and instructional videos, it operates on the idea of conversational learning, which is especially important for children in the targeted age bracket. Designed in the form of an elephant, Appu acts as a personalised tutor, helping both children and parents understand concepts through dialogue. When a child is confused, the AI can generate different explanations to aid understanding. If a child answers in mixed languages rather than in one complete sentence in a single language (e.g., Hindi and English), the AI still treats it as a valid response. The AI lessons are two minutes long and draw on real-world examples. This emphasis on interactive, fun learning of concepts enhances the learning experience. Currently available only in Hindi, Appu is being extended to 20 other languages, including Punjabi and Marathi.
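The mixed-language tolerance described above can be illustrated with a toy matcher. The word lists and function below are hypothetical, invented for illustration, and are not Rocket Learning's actual implementation:

```python
# Toy sketch of mixed-language answer acceptance, as described for Appu.
# The synonym table and matching rule are illustrative assumptions only.

ACCEPTED_ANSWERS = {
    # Concept: the colour "red", accepted in English or romanised Hindi.
    "red": {"red", "lal", "laal"},
}

def accepts_answer(concept: str, utterance: str) -> bool:
    """Return True if any accepted word for the concept appears in the
    child's utterance, regardless of which language the rest is in."""
    words = utterance.lower().split()
    return any(w in ACCEPTED_ANSWERS.get(concept, set()) for w in words)

# A mixed Hindi-English reply still counts as a correct response.
print(accepts_answer("red", "yeh apple red hai"))  # True
print(accepts_answer("red", "yeh seb laal hai"))   # True
print(accepts_answer("red", "yeh seb hara hai"))   # False
```

A production system would of course need speech recognition and far richer language handling; the point of the sketch is only that accepting a keyword in any language lowers the barrier for children who naturally code-switch.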
UNESCO, AI, and Education
It is important to note that such innovations also find encouragement in UNESCO’s mandate, as AI in education contributes to achieving the 2030 Agenda for Sustainable Development (specifically SDG 4, which focuses on quality education). Under the Beijing Consensus of 2019, UNESCO encourages a human-centred approach to AI, and it has also developed “Artificial Intelligence and Education: Guidance for Policymakers”, which examines AI’s potential and opportunities in education as well as the core competencies it requires. During Digital Learning Week 2024, one of UNESCO’s flagship events, it launched AI competency frameworks for students and for teachers. These provide a roadmap for assessing the potential and risks of AI, covering common aspects such as AI ethics and a human-centred mindset, along with distinct elements such as AI system design for students and AI pedagogy for teachers.
Potential Challenges
While AI holds immense promise in education, innovation in learning remains contentious, as several risks must be carefully managed. Depending on the innovation, AI can struggle with tasks beyond the classroom, such as administrative duties and tedious grading, which require highly detailed role descriptions. This can become exhausting for developers managing such systems, since the AI must be trained extensively to produce appropriate responses. Security is another major concern, as data breaches could compromise sensitive student information. Implementation costs also present challenges, as access to AI-driven tools depends on financial resources. Furthermore, AI-driven personalised learning, while beneficial, may inadvertently reduce student motivation and compromise soft skills such as teamwork and communication, which are crucial for real-world success. These risks highlight the need for a balanced approach to AI integration in education.
Conclusion
Innovations in education, especially those that take a human-centred AI approach, have immense potential not only to enhance learning experiences but also to reshape how knowledge is accessed, understood, and applied. There is also untapped potential for similar services in this sector. However, maintaining a balance between fostering curiosity and ensuring ethical, secure AI remains imperative.
References
- https://www.unesco.org/en/articles/what-you-need-know-about-unescos-new-ai-competency-frameworks-students-and-teachers?hub=32618
- https://www.unesco.org/en/digital-education/artificial-intelligence
- https://www.deccanherald.com/technology/google-backed-rocket-learning-launches-appu-an-ai-powered-tutor-for-kids-3455078
- https://indianexpress.com/article/technology/artificial-intelligence/how-this-google-backed-ai-tool-is-reshaping-education-appu-9896391/
- https://www.thehindu.com/business/ai-appu-to-tutor-children-in-india/article69354145.ece
- https://www.velvetech.com/blog/ai-in-education-risks-and-concerns/

Introduction
According to a new McAfee survey, 88% of American consumers believe that cybercriminals will use artificial intelligence to “create compelling online scams” over the festive period. Meanwhile, 57% believe phishing emails and texts will become more credible, and 31% believe it will be more difficult to determine whether messages from merchants or delivery services are genuine. The study, conducted in September 2023 in the United States, Australia, India, the United Kingdom, France, Germany, and Japan, yielded 7,100 responses. Concerns about AI may lead some people to cut back on online shopping: 19% of those surveyed said they would do so this year.
In 2024, McAfee predicts a rise in AI-driven scams on social media, with cybercriminals using advanced tools to create convincing fake content, exploiting celebrity and influencer identities. Deepfake technology may worsen cyberbullying, enabling the creation of realistic fake content. Charity fraud is expected to rise, leveraging AI to set up fake charity sites. AI's use by cybercriminals will accelerate the development of advanced malware, phishing, and voice/visual cloning scams targeting mobile devices. The 2024 Olympic Games are seen as a breeding ground for scams, with cybercriminals targeting fans for tickets, travel, and exclusive content.
AI Scams' Increase on Social Media
McAfee expects cybercriminals to harness powerful artificial intelligence capabilities to manipulate social media in 2024. These tools make it possible to create realistic images, videos, and audio, turning social platforms into a goldmine for scammers. Expect cybercriminals to exploit the identities of influencers and other popular figures.
AI-powered Deepfakes and the Rise in Cyberbullying
One worrying trend is the negative turn cyberbullying could take in 2024 with the use of deepfake technology. This cutting-edge technique is freely accessible to young people, who can use it to produce eerily convincing synthetic content that compromises victims' privacy, identity, and wellbeing.
Beyond spreading false information, cyberbullies can manipulate public photographs and re-share edited versions, exacerbating the harm done to children and their families. The increasing realism of these deceptive images and messages can cause serious, long-lasting damage, impairing victims' identity, privacy, and overall happiness.
Evolvement of GenAI Fraud in 2023
Persistent scams and fake emails are nothing new, and people have generally become rather adept at recognising the ones in wide circulation. But as these scams become more precise, for instance by using AI-generated audio to mimic a loved one's distress call or by incorporating highly personal details, users need to be far more cautious. The rise of generative AI adds a new wrinkle, as hackers can use these systems to refine their attacks by:
- Writing messages more skilfully to deceive consumers into sending sensitive information, clicking on a link, or uploading a file.
- Recreating emails and business websites as realistically as possible so as not to arouse the targets' suspicion.
- Cloning people's faces and voices, creating deepfake audio or images that the target audience cannot detect, a problem that could greatly amplify schemes such as CEO fraud.
- Holding conversations and responding to victims efficiently, now that generative AIs are conversational.
- Conducting psychological manipulation campaigns faster, at lower cost, and with greater sophistication, making them harder to detect. Generative AI tools already on the market can write texts, clone voices, generate images, and build websites.
AI Hastens the Development of Malware and Scams
While artificial intelligence (AI) has many legitimate uses, it is making cybercriminals increasingly dangerous. AI facilitates the rapid creation of sophisticated malware, illicit web pages, and plausible phishing and smishing messages. As these capabilities become more accessible, mobile devices will be attacked more frequently, with a particular emphasis on audio and visual impersonation schemes.
Olympic Games: A Haven for Scammers
Cybercriminals are skilled at profiting from big occasions, and the worldwide buzz surrounding the 2024 Olympic Games will make it an ideal time for scams. Con artists will exploit fans' excitement, targeting those eager to purchase tickets, arrange travel, obtain exclusive content, and enter giveaways. During this prominent event, vigilance is essential to protect one's personal records and financial data.
McAfee’s Own Bot to Help Users Screen Potentially Fraudulent Messages
McAfee is developing precisely this kind of technology. It is important to emphasise that countering scams is a continuous process: bad actors also manipulate AI, and one trick scammers can pull off is to use data on which ruses consumers fall for as training input for more advanced algorithms. Con artists can then deploy these tools, test them on large user bases, and refine them over time.
Conclusion
According to the McAfee report, 88% of American consumers are concerned about AI-driven online scams targeting them around the holidays. Social media poses a growing threat to users' privacy, with cybercriminals expected to exploit AI capabilities and deepfake technology in 2024 to exacerbate harassment. Generative AI enables complex fraud by mimicking voices and faces for intricate schemes, charity fraud is on the rise, and the 2024 Olympic Games could serve as a haven for scammers. The development of McAfee's screening bot reflects the ongoing struggle against evolving AI threats, and underlines the need for continuous adaptation and greater user awareness to combat increasingly sophisticated cyber deception.
References
- https://www.fonearena.com/blog/412579/deepfake-surge-ai-scams-2024.html
- https://cxotoday.com/press-release/mcafee-reveals-2024-cybersecurity-predictions-advancement-of-ai-shapes-the-future-of-online-scams/#:~:text=McAfee%20Corp.%2C%20a%20global%20leader,and%20increasingly%20sophisticated%20cyber%20scams.
- https://timesofindia.indiatimes.com/gadgets-news/deep-fakes-ai-scams-and-other-tools-cybercriminals-could-use-to-steal-your-money-and-personal-details-in-2024/articleshow/106126288.cms
- https://digiday.com/media-buying/mcafees-cto-on-ai-and-the-cat-and-mouse-game-with-holiday-scams/