#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A viral video circulating on social media claims to show a breathtaking aerial view of Mount Kailash, presented as a rare real-life shot of Tibet's sacred mountain. We investigated its authenticity and analysed the footage for signs of digital manipulation.
CLAIMS:
The viral video claims to show a real aerial shot of Mount Kailash, capturing the natural beauty of the hallowed mountain. It was circulated widely on social media, with users crediting it as actual footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created on Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. The advanced digital techniques used give the video a realistic, lifelike appearance.
No media or geographical source has reported or published the video as authentic footage of Mount Kailash. Besides, several visual aspects, including lighting and environmental features, indicate that it is computer-generated.
For further verification, we used Hive Moderation, a deepfake detection tool, to determine whether the video is AI-generated or real. It was found to be AI-generated.

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).

Introduction
Deepfakes are media created with artificial intelligence (AI) technology that employs deep learning to generate realistic-looking but phoney videos or images. Algorithms analyse large volumes of data to discover patterns and produce convincing, realistic results. Deepfakes use this technology to modify videos or photos so that they appear to depict events or persons that never happened or existed. The procedure begins with gathering large volumes of visual and audio data about the target individual, usually obtained from publicly accessible sources such as social media or public appearances. This data is then used to train a deep-learning model to resemble the target of the deepfake.
Recent Cases of Deepfakes-
In an unusual turn of events, a man from northern China became the victim of a sophisticated deep fake technology. This incident has heightened concerns about using artificial intelligence (AI) tools to aid financial crimes, putting authorities and the general public on high alert.
During a video conversation, a scammer successfully impersonated the victim’s close friend using AI-powered face-swapping technology. The scammer duped the unwary victim into transferring 4.3 million yuan (nearly Rs 5 crore). The fraud occurred in Baotou, China.
AI ‘deep fakes’ of innocent images fuel spike in sextortion scams
Artificial intelligence-generated "deepfakes" are fuelling sextortion scams like dry brush in a raging wildfire. According to the FBI, the number of nationally reported sextortion cases rose 322% between February 2022 and February 2023, with a notable spike since April due to AI-doctored photographs. The FBI warns that innocent photographs or videos posted on social media or sent in private messages can be distorted into sexually explicit, AI-generated visuals that are "true-to-life" and practically impossible to distinguish from real images. According to the FBI, predators, often located in other countries, use doctored AI photographs of juveniles to extort money from them or their families, or to obtain actual sexually graphic images.
Deepfake Applications
- Lensa AI.
- Deepfakes Web.
- Reface.
- MyHeritage.
- DeepFaceLab.
- Deep Art.
- Face Swap Live.
- FaceApp.
Deepfake examples
There are numerous high-profile Deepfake examples available. Deepfake films include one released by actor Jordan Peele, who used actual footage of Barack Obama and his own imitation of Obama to convey a warning about Deepfake videos.
Another video shows Facebook CEO Mark Zuckerberg discussing how Facebook 'controls the future' with stolen user data, most notably on Instagram. The original footage is from a speech he delivered on Russian election meddling; only 21 seconds of that address were used to create the fake. However, the vocal impersonation fell short of Jordan Peele's Obama impression, which gave the forgery away.
The dark side of AI-Generated Misinformation
- AI-generated misinformation distorts the truth, making it difficult to distinguish fact from fiction.
- People can unmask AI content by looking for discrepancies and a lack of human touch.
- AI content detection technologies can detect and neutralise disinformation, preventing it from spreading.
Safeguards against Deepfakes-
Technology is not the only way to guard against deepfake videos; good fundamental security practices are remarkably effective at combating deepfakes. For example, incorporating automatic checks into any mechanism for disbursing payments might have prevented numerous deepfake and related frauds. You might also:
- Keep regular backups to safeguard your data from ransomware and allow you to restore damaged data.
- Use different, strong passwords for different accounts, so that the compromise of one network or service does not compromise the others. You do not want someone who gets into your Facebook account to be able to access your other accounts as well.
- To secure your home network, laptop, and smartphone against cyber dangers, use a good security package such as Kaspersky Total Security. This bundle includes anti-virus software, a VPN to prevent compromised Wi-Fi connections, and webcam security.
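As a concrete aid to the "different, strong passwords" advice above, a unique random password can be generated per account with Python's standard `secrets` module. This is a minimal sketch; the password length, character set, and account names are illustrative choices of our own, not from the original text:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a cryptographically strong random password."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A distinct password per account, so one breach does not expose the rest
passwords = {site: generate_password() for site in ["mail", "social", "bank"]}
```

In practice a password manager does the same job of generating and storing a unique credential per service; `secrets` (rather than `random`) is used here because it draws from a cryptographically secure source.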
What is the future of Deepfake?
Deepfake technology is constantly evolving. Two years ago, deepfake videos were easy to spot because of their clumsy movement and the fact that the simulated figure never seemed to blink. The most recent generation of fake videos, however, has evolved and adapted.
There are currently approximately 15,000 Deepfake videos available online. Some are just for fun, while others attempt to sway your opinion. But now that it only takes a day or two to make a new Deepfake, that number could rise rapidly.
Conclusion-
The distinction between authentic and fake content will undoubtedly become more challenging to identify as technology advances. As a result, experts feel it should not be up to individuals to discover deep fakes in the wild. “The responsibility should be on the developers, toolmakers, and tech companies to create invisible watermarks and signal what the source of that image is,” they stated. Several startups are also working on approaches for detecting deep fakes.

Introduction
The Indian Computer Emergency Response Team (CERT-In) is a nodal agency of the government, established and appointed as the national agency for cyber incidents and cyber security incidents under section 70B of the Information Technology (IT) Act, 2000. CERT-In has issued a cautionary note to Microsoft Edge, Adobe and Google Chrome users. The government's cybersecurity agency has alerted users to multiple vulnerabilities that hackers might use to obtain private data and run arbitrary code on the targeted machine. CERT-In advises users to apply the security updates right away in order to guard against these issues.
Vulnerability note
Vulnerability notes CIVN-2023-0361, CIVN-2023-0362 and CIVN-2023-0364, covering Google Chrome for Desktop, Microsoft Edge and Adobe respectively, include more information on the alert. CERT-In has categorized the problems as high-severity issues and recommends applying the security update immediately. According to the warning, there is a security risk if you use Google Chrome versions earlier than 120.0.6099.62 on Linux and Mac, or earlier than 120.0.6099.62/.63 on Windows. Similarly, the vulnerability may also affect users of Microsoft Edge browser versions earlier than 120.0.2210.61.
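The "earlier than" checks in the advisory amount to comparing dotted version numbers component by component. Below is a minimal Python sketch of that comparison; the helper names are our own, and the patched version strings are taken from the advisory text above:

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version string like '120.0.6099.62' into integer parts."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, first_patched: str) -> bool:
    """True if the installed version is earlier than the first patched build."""
    return parse_version(installed) < parse_version(first_patched)

# First patched builds per the CERT-In advisory
CHROME_PATCHED = "120.0.6099.62"   # Linux/Mac
EDGE_PATCHED = "120.0.2210.61"

print(is_vulnerable("119.0.6045.199", CHROME_PATCHED))  # True: update needed
print(is_vulnerable("120.0.6099.62", CHROME_PATCHED))   # False: already patched
```

Converting each component to an integer before comparing matters: a naive string comparison would wrongly rank "9" above "120".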
Cause of the Problem
These vulnerabilities are caused by "use after free in Media Stream, Side Panel Search, and Media Capture, and inappropriate implementation in Autofill and Web Browser UI", according to the explanation in the vulnerability note on the CERT-In website. The alert further warns that a remote attacker could target users of the susceptible Microsoft Edge and Google Chrome browsers by sending a specially crafted request. Once these vulnerabilities are successfully exploited, hackers may gain elevated privileges, obtain sensitive data, and run arbitrary code on the targeted system.
High-security issues: consequences
CERT-In has brought attention to vulnerabilities in Google Chrome, Microsoft Edge, and Adobe products that might have serious repercussions and put users and their systems at risk. The vulnerabilities found in this widely used software present serious dangers that might result in data breaches, unauthorized code execution, privilege escalation, and remote attacks. If these vulnerabilities are exploited, private information may be exposed, money may be lost, and reputational harm may result.
Additionally, the confidentiality and integrity of sensitive information may be compromised. The danger also includes the potential to interfere with services, cause outages, reduce productivity, and raise the possibility of phishing and social engineering assaults. Users may become less trusting of the impacted software as a result of the urgent requirement for security upgrades, which might make them hesitant to utilize these platforms until guarantees of thorough security procedures are provided.
Advisory
- Users should update their Google Chrome, Microsoft Edge, and Adobe software as soon as possible to protect themselves against the vulnerabilities that have been found. These updates are supplied by the individual software makers. Furthermore, use caution when browsing and refrain from downloading things from unidentified sites or clicking on dubious links.
- Make use of reliable ad-blockers and strong, often updated antivirus and anti-malware software. Maintain regular backups of critical data to reduce possible losses in the event of an attack, and keep up with best practices for cybersecurity. Maintaining current security measures with vigilance and proactiveness can greatly lower the likelihood of becoming a target for prospective vulnerabilities.

Executive Summary
A video circulating on social media allegedly shows Uttar Pradesh Chief Minister Yogi Adityanath criticizing Bollywood actor Shah Rukh Khan and asking people not to watch his films. Users sharing the clip claim that these statements are recent. CyberPeace's research found the claim to be misleading: the video is from 2015, long before Yogi Adityanath became the Chief Minister of Uttar Pradesh. At that time, he was serving as a Member of Parliament from Gorakhpur.
Claim
On January 13, 2026, a Facebook user shared the video with the caption: "A clear message from the Hon’ble Chief Minister of Uttar Pradesh, Param Pujya Mahant Yogi Adityanath, urging people not to watch Shah Rukh Khan’s movie. Share this message widely, send it to all groups you are part of, and inform the youth in your family."

Fact Check:
To verify the claim, keyframes from the viral video were extracted and reverse-searched using Google Lens. The same video was found in a Facebook post dated March 28, 2022, where it was shared with the caption: "Baba Ji’s message to not watch Shah Rukh Khan’s ‘Pathaan’ movie."

Further research traced the video to Aaj Tak’s website, which reported on November 4, 2015, that then-BJP MP Yogi Adityanath criticized Shah Rukh Khan, comparing his language to that of terrorist Hafiz Saeed, stating that there was no difference in their statements.

A Live Hindustan report from the same date confirmed that Yogi Adityanath had strongly reacted to Shah Rukh Khan’s comments on rising intolerance in India and Hafiz Saeed’s invitation for him to stay in Pakistan. The reports make it clear that Yogi Adityanath criticized Shah Rukh Khan in 2015 by highlighting the similarity between his statements and those of Hafiz Saeed. At the same time, Shah Rukh Khan had highlighted growing intolerance in the country, citing incidents where filmmakers, scientists, and authors were returning awards, describing it as a sign of “deep intolerance” in India.

Conclusion:
Our research found that the statement attributed to Chief Minister Yogi Adityanath circulating on social media is not recent. The video dates back to 2015, a time when Yogi Adityanath was not yet the Chief Minister of Uttar Pradesh.