#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A video claiming to capture a breathtaking aerial view of Mount Kailash, Tibet's sacred mountain, has gone viral as a supposed rare real-life shot. We investigated its authenticity and analysed the footage for signs of digital manipulation.
CLAIMS:
The viral video claims to be a real aerial shot of Mount Kailash, purporting to show the natural beauty of the hallowed mountain. It was circulated widely on social media, with users presenting it as actual footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created with Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. Advanced digital techniques gave the video its realistic, lifelike appearance.
No media or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual aspects, including the lighting and environmental features, indicate that it is computer-generated.
For further verification, we used Hive Moderation, a deepfake detection tool, to determine whether the video was AI-generated or real. The tool confirmed it is AI-generated.

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).
Related Blogs
Introduction
Empowering today’s youth with the right skills is more crucial than ever in a rapidly evolving digital world. Every year on July 15th, the United Nations marks World Youth Skills Day to emphasise the critical role of skills development in preparing young people for meaningful work and resilient futures. As AI transforms industries and societies, equipping young minds with digital and AI skills is key to fostering security, adaptability, and growth in the years ahead.
Why AI Upskilling is Crucial in Modern Cyber Defence
Security in the digital age remains a complex challenge, even with Artificial Intelligence (AI) in the picture. It is one of the great modern ironies: a paradox wrapped in code, where the cure and the curse are written in the same language. The very hand that protects the world from cyber threats can also be used to create those threats. Modern implementations of AI must therefore be designed to withstand the threats posed by AI itself and by other advanced technologies. A solid grasp of AI and machine learning mechanisms is no longer optional; it is fundamental to modern cybersecurity. Traditional cybersecurity training programmes rely on static content, which quickly becomes outdated and inadequate against new vulnerabilities. AI-powered solutions, such as intrusion detection systems and next-generation firewalls, use behavioural analysis instead of just matching signatures. Nevertheless, AI models are themselves susceptible: malevolent actors can introduce adversarial inputs or tainted data to trick models into misclassification. According to Cisco research, data poisoning is a major threat to AI defences.
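The contrast between signature matching and behavioural analysis can be made concrete with a minimal sketch. This is an illustration only, not any vendor's actual detection logic: it learns a statistical baseline from historical activity (here, hypothetical hourly login counts) and flags values that deviate sharply from it, which is the core idea behind behaviour-based detection.

```python
# Minimal sketch of behaviour-based detection (illustrative only):
# instead of matching known attack signatures, flag activity that
# deviates sharply from a learned per-user baseline. Data is synthetic.
from statistics import mean, stdev

def build_baseline(samples):
    """Learn a simple baseline (mean, std deviation) from historical counts."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical hourly login counts for one account over two weeks.
history = [4, 5, 3, 6, 4, 5, 4, 3, 5, 6, 4, 5, 3, 4]
baseline = build_baseline(history)

print(is_anomalous(5, baseline))    # typical activity -> False
print(is_anomalous(120, baseline))  # sudden burst of logins -> True
```

A signature-based system would miss this burst unless it matched a known pattern; the behavioural approach catches it because it is abnormal for this account, which is also why poisoned training data (a corrupted baseline) is such an effective attack.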
As threats surpass the current understanding of cybersecurity professionals, there is a need to upskill them in advanced AI technologies so that they can fortify the security of current systems. Two of the most important skills for professionals are AI/ML model auditing and data science. Skilled data scientists can sift through vast logs, from packet captures to user profiles, to detect anomalies, assess vulnerabilities, and anticipate attacks. A Business Insider report puts it well: 'It takes a good-guy AI to fight a bad-guy AI.' Generative AI is still a young technology; as a result, it poses fresh security issues and faces risks such as data exfiltration and prompt injection.
Another method that can prove effective is Natural Language Processing (NLP), which helps machines process this unstructured data, enabling automated spam detection, sentiment analysis, and threat context extraction. Security teams skilled in NLP can deploy systems that flag suspicious email patterns, detect malicious content in code reviews, and monitor internal networks for insider threats, all at speeds and scales humans cannot match.
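As a toy illustration of how NLP can flag suspicious email patterns, the sketch below trains a tiny Naive Bayes text classifier on a handful of made-up labelled subject lines. Real deployments use far larger corpora and richer features; everything here (the training examples, labels, and function names) is invented for illustration.

```python
# Toy illustration of NLP-based email flagging (not a production system):
# a tiny Naive Bayes classifier trained on a handful of labelled subjects.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label). Returns per-label word counts and priors."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the highest log-probability (Laplace smoothing)."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best, best_score = None, float("-inf")
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((counts[label][word] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented training data for the sketch.
training = [
    ("urgent verify your account now", "spam"),
    ("claim your free prize today", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(training)
print(classify("verify your prize account", counts, totals))  # prints "spam"
```

The point of the example is scale: once trained, such a model scores every message automatically, which is what lets security teams monitor volumes of text no human team could read.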
These AI skills are not merely desirable; they have become essential in the current landscape. India is not far behind in this mission; it is committed, along with its western counterparts, to employing emerging technologies in its larger goal of advancement. With quiet confidence, India takes pride in its remarkable capacity to nurture exceptional talent in science and technology, with Indian minds making significant contributions across global arenas.
AI Upskilling in India
As per a news report from March 2025, Jayant Chaudhary, Minister of State, Ministry of Skill Development & Entrepreneurship, highlighted that various schemes under the Skill India Programme (SIP) ensure greater integration of emerging technologies, such as artificial intelligence (AI), cybersecurity, blockchain, and cloud computing, to meet industry demands. The SIP's parliamentary brochure states that more than 6.15 million beneficiaries had received training as of December 2024. Other schemes that train professionals for roles such as Data Scientist, Business Intelligence Analyst, and Machine Learning Engineer include:
- Pradhan Mantri Kaushal Vikas Yojana 4.0 (PMKVY 4.0)
- Pradhan Mantri National Apprenticeship Promotion Scheme (PM-NAPS)
- Jan Shikshan Sansthan (JSS)
Another report shows how companies operating in India, such as Ernst & Young (EY), are recognising both the potential of the Indian workforce and its gaps in emerging technologies, and are leading the way through internal upskilling. EY has established an AI Academy, a new programme designed to help businesses equip their employees with essential AI capabilities in response to the growing demand for AI expertise. Using more than 200 real-world AI use cases, the programme offers interactive, structured learning that covers everything from basic concepts to sophisticated generative AI capabilities.
To better understand the need for these initiatives, it is worth referring to a report backed by Google.org and the Asian Development Bank, which suggests India is at a turning point in the global adoption of AI. According to the research, 'AI for All: Building an AI-Ready Workforce in Asia-Pacific', India urgently needs to provide accessible and efficient AI upskilling despite having the largest workforce in the world. The report estimates that, by 2030, AI could boost the Asia-Pacific region's GDP by up to USD 3 trillion, and that India, with its young and fast-growing population, is key to this potential.
Conclusion and CyberPeace Resolution
As the world stands at the crossroads of innovation and insecurity, India finds itself uniquely poised, with its vast young population and growing technologies. But to truly safeguard its digital future and harness the promise of AI, the country must think beyond flagship schemes. Imagine classrooms where students learn not just to code but to question algorithms, workplaces where AI training is as routine as onboarding.
India’s journey towards digital resilience is not just about mastering technology but about cultivating curiosity, responsibility, and trust. CyberPeace is committed to this future and is resolute in this collective pursuit of an ethically secure digital world. CyberPeace resolves to be an active catalyst in AI upskilling across India. We commit to launching specialised training modules on AI, cybersecurity, and digital ethics tailored for students and professionals. By working with educational institutions, skilling initiatives, and industry stakeholders, we seek to close the AI literacy gap and develop a workforce that is both morally aware and technologically proficient.
References
- https://www.helpnetsecurity.com/2025/03/07/ai-gamified-simulations-cybersecurity/
- https://www.businessinsider.com/artificial-intelligence-cybersecurity-large-language-model-threats-solutions-2025-5?utm
- https://apacnewsnetwork.com/2025/03/ai-5g-skills-boost-skill-india-targets-industry-demands-over-6-15-million-beneficiaries-trained-till-2024/
- https://indianexpress.com/article/technology/artificial-intelligence/india-must-upskill-fast-to-keep-up-with-ai-jobs-says-new-report-10107821/
Introduction
YouTube is testing a new feature called ‘Notes,’ which allows users to add community-sourced context to videos. The feature allows users to clarify if a video is a parody or if it is misrepresenting information, and builds on existing features that provide helpful content alongside videos. Currently under testing, it will be available to a limited number of eligible contributors, who will be invited to write notes on videos. These notes will appear publicly under a video if they are found to be broadly helpful. Viewers will be able to rate notes in one of three categories: ‘Helpful,’ ‘Somewhat helpful,’ or ‘Unhelpful’. Based on the ratings, YouTube will determine which notes are published. The feature will first be rolled out on mobile devices in the U.S. in English. The Google-owned platform will look at ways to improve the feature over time, including whether it makes sense to expand it to other markets.
YouTube To Roll Out The New ‘Notes’ Feature
YouTube is testing an experimental feature that allows users to add notes to provide relevant, timely, and easy-to-understand context for videos. This initiative builds on previous products that display helpful information alongside videos, such as information panels and disclosure requirements when content is altered or synthetic. YouTube in its blog clarified that the pilot will be available on mobiles in the U.S. and in the English language, to start with. During this test phase, viewers, participants, and creators are invited to give feedback on the quality of the notes.
YouTube further stated in its blog that a limited number of eligible contributors will be invited via email or Creator Studio notifications to write notes, so that they can test the feature and add value to the system before the organisation decides on next steps and whether to expand the feature. Eligibility criteria include having an active YouTube channel in good standing with YouTube's Community Guidelines.
Viewers in the U.S. will start seeing notes on videos in the coming weeks and months. In this initial pilot, third-party evaluators will rate the helpfulness of notes, which will help train the platform’s systems. As the pilot moves forward, contributors themselves will rate notes as well.
Notes will appear publicly under a video if they are found to be broadly helpful. People will be asked whether they think a note is helpful, somewhat helpful, or unhelpful, and why. For example, if a note is marked ‘Helpful,’ the evaluator will have the opportunity to specify whether that is because it cites high-quality sources or is written clearly and neutrally. A bridging-based algorithm will consider these ratings and determine which notes are published. YouTube says it is excited to explore new ways to make context-setting even more relevant, dynamic, and unique to the videos users are watching, at scale, across the huge variety of content on the platform.
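YouTube has not published the details of its bridging-based algorithm, but the core idea behind bridging (popularised by X's Community Notes) can be sketched in a simplified form: a note is published only if raters from groups that normally disagree with each other both rate it helpful, so one-sided popularity is not enough. The group structure, scores, and threshold below are all invented for illustration.

```python
# Simplified sketch of bridging-based rating (YouTube's actual algorithm is
# unpublished): a note is published only when every rater group, including
# groups that usually disagree, rates it helpful on average.
SCORES = {"helpful": 1.0, "somewhat helpful": 0.5, "unhelpful": 0.0}

def bridging_score(ratings_by_group):
    """ratings_by_group: {group_name: [rating, ...]}.
    Returns the minimum average score across groups, so a note must be
    found helpful by every group, not just by one side."""
    averages = []
    for ratings in ratings_by_group.values():
        if not ratings:
            return 0.0
        averages.append(sum(SCORES[r] for r in ratings) / len(ratings))
    return min(averages)

def should_publish(ratings_by_group, threshold=0.7):
    return bridging_score(ratings_by_group) >= threshold

# A note praised by one group but rejected by the other is not published.
one_sided = {"group_a": ["helpful", "helpful"], "group_b": ["unhelpful"]}
# A note both groups find helpful crosses the threshold.
bridging = {"group_a": ["helpful", "somewhat helpful"], "group_b": ["helpful"]}

print(should_publish(one_sided))  # False
print(should_publish(bridging))   # True
```

Taking the minimum across groups, rather than an overall average, is what makes the scheme "bridging": it rewards notes that earn cross-perspective agreement instead of notes that merely rally one audience.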
CyberPeace Analysis: How Can Notes Help Counter Misinformation
The proposed ‘Notes’ feature has significant potential to counter misinformation on YouTube. Enabling contributors to add notes to videos can offer relevant, accurate context that clarifies misleading or false information. These notes can improve viewers' comprehension of the content and help them detect misinformation. Inviting users to rate notes as helpful, somewhat helpful, or unhelpful adds a further layer of transparency and public participation in assessing the accuracy of content.
As YouTube intends to gather feedback from its various stakeholders to improve the feature over time, one can look forward to improved policy and practice: the feedback mechanism will allow continuous refinement of the feature, ensuring it effectively addresses misinformation. The platform employs algorithms to identify helpful notes that resonate with a broad audience across different perspectives, which helps surface accurate information and combat misinformation.
Furthermore, along with the Notes feature, YouTube should explore and implement prebunking and debunking strategies on the platform by promoting educational content and empowering users to distinguish facts from misleading information.
Conclusion
The new feature, currently in the testing phase, aims to counter misinformation by providing context, enabling user feedback, leveraging algorithms, promoting transparency, and continuously improving information quality. Considering the diverse audience on the platform and the high volume of daily content consumption, it is important for both platform operators and users to engage with factual, verifiable information. The fallout of misinformation on such a popular platform can be immense, so any mechanism or feature that can help counter it must be developed to its full potential. Apart from the new Notes feature, YouTube has also implemented measures in the past to counter misinformation, such as providing authenticated sources to counter election misinformation during the 2024 elections in India. These efforts are a welcome contribution to our shared responsibility as netizens to create a trustworthy, factual and truly informational digital ecosystem.
References:
- https://blog.youtube/news-and-events/new-ways-to-offer-viewers-more-context/
- https://www.thehindu.com/sci-tech/technology/internet/youtube-tests-feature-that-will-let-users-add-context-to-videos/article68302933.ece
Introduction
Social media has emerged as a leading source of communication and information, and its relevance is especially pronounced during natural disasters, when governments and disaster relief organisations rely on it to disseminate aid and relief-related resources and communications instantly. During disasters, affected populations turn to social media as a primary source of information on relief resources; community forums offering aid and official government channels have enabled efficient and timely administration of relief initiatives.
However, given the nature of social media, misinformation during natural disasters has also emerged as a primary concern, one that severely hampers aid administration. Disaster-disinformation networks run sensationalised, influential campaigns against communities at their most vulnerable. Victims seeking reliable resources during natural calamities often encounter these misleading campaigns instead and may experience delayed access, or no access, to necessary healthcare, significantly impacting their recovery and survival. This delay can worsen medical conditions and increase the death toll among those affected by the disaster. Victims may also lack clear information on which agencies to approach for assistance, causing confusion and further delays in receiving help.
Misinformation Threat Landscape during Natural Disasters
During the 2018 floods in Kerala, a fake video about water leakage from the Mullaperyar Dam created panic among citizens and negatively impacted rescue operations. Similarly, in 2017, reports claimed that Hurricane Irma had displaced sharks onto a Florida highway; similar stories, accompanied by the same image, resurfaced following Hurricanes Harvey and Florence. A disaster-affected nation may also face international criticism and fail to receive necessary support due to its perceived inability to manage the crisis effectively. This lack of confidence from the global community can further exacerbate the challenges the nation faces, leaving it more vulnerable and isolated in its time of need.
The spread of misinformation through social media severely hinders aid and relief operations during natural disasters. It undermines first responders' efforts to counteract rumours and false information, and it erodes public trust in government, media, and non-governmental organisations (NGOs), which are often the first point of contact for both victims and officials because of their familiarity with the region and the community. In Moldova, foreign influence operations have exploited an ongoing drought to create divisions between the semi-autonomous regions of Transnistria and Gagauzia and the central government in Chisinau; news coverage critical of the government leverages economic and energy insecurities to incite civil unrest in an already unstable region. Additionally, first responders may struggle to locate victims and move them to safety, complicating rescue operations. The inability to efficiently find and evacuate those in need can result in prolonged exposure to dangerous conditions and a higher risk of injury or death.
Further, international aid from other countries could be impeded, affecting the overall relief effort. Without timely and coordinated support from the global community, the disaster response may be insufficient, leaving many needs unmet. Misinformation also impedes military assistance, reducing the effectiveness of rescue and relief operations. Military assistance often plays a crucial role in disaster response, and any delays can hinder efforts to provide immediate and large-scale aid.
Misinformation also causes relief resources to be allocated to unaffected areas, which in turn impacts aid for regions in actual need. Following the April 2015 earthquake in Nepal, a Facebook post claimed that 300 houses in Dhading needed aid. Shared over 1,000 times, it reached around 350,000 people within 48 hours. The originator aimed to seek help for Ward #4’s villagers via social media; given that the average Facebook user has 350 contacts, the message was widely viewed. However, the need had already been reported a week earlier on quakemap.org, a crisis-mapping database managed by Kathmandu Living Labs. Helping Hands, a humanitarian group, was notified on May 7, and by May 11, Ward #4 had received essential food and shelter. The re-sharing and sensationalisation of outdated information could have wasted relief efforts by redirecting critical resources to a region that had already been served.
Policy Recommendations
Perhaps the most important steps in combating misinformation during natural disasters are increasing public education and ensuring the rapid, widespread dissemination of early warnings. The value of early warning was starkly demonstrated in Bangladesh: in November 1970, a tropical cyclone combined with a high tide struck the southeastern coast, leaving more than 300,000 people dead and 1.3 million homeless. In May 1985, when a comparable cyclone and storm surge hit the same area, local dissemination of disaster warnings was much improved and people were better prepared to respond. The loss of life, while still high at about 10,000, was roughly 3% of the 1970 toll. Similarly, when a devastating cyclone struck the same area of Bangladesh in May 1994, fewer than 1,000 people died. In India, the 1977 cyclone in Andhra Pradesh killed 10,000 people, but a similar storm in the same area 13 years later killed only 910. The dramatic difference in mortality was owed to a new early-warning system connected with radio stations to alert people in low-lying areas.
Additionally, location-based filtering for monitoring social media during disasters is considered another best practice for curbing misinformation. However, agencies should be aware that this method may miss local information from devices without geolocation enabled. A 2012 Georgia Tech study found that less than 1.4 percent of Twitter content is geolocated, and a study by Humanity Road and Arizona State University on Hurricane Sandy data indicated a significant decline in geolocation data during weather events.
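The coverage gap described above can be seen in a minimal sketch of location-based filtering. The posts, coordinates, and bounding box below are invented for illustration; the point is that posts without geotags are simply invisible to a purely geolocation-based monitor.

```python
# Illustration of the coverage gap in location-based filtering: posts
# without geotags never reach a geolocation-based monitor. Data is made up.
def geo_filter(posts, bounding_box):
    """Keep only posts whose coordinates fall inside the box of interest."""
    (lat_min, lat_max), (lon_min, lon_max) = bounding_box
    hits = []
    for post in posts:
        coords = post.get("coords")  # None when geolocation is disabled
        if coords is None:
            continue
        lat, lon = coords
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            hits.append(post)
    return hits

# Hypothetical disaster-time posts; only one carries coordinates.
posts = [
    {"text": "Road to the shelter is flooded", "coords": (9.93, 76.26)},
    {"text": "Need boats near the river bank", "coords": None},
    {"text": "Power is out across the district", "coords": None},
]
kerala_box = ((8.0, 13.0), (74.0, 78.0))  # rough illustrative bounding box

matched = geo_filter(posts, kerala_box)
print(len(matched), "of", len(posts), "posts matched")  # 1 of 3 posts matched
```

With geotagging rates as low as the studies cited above suggest, the matched fraction in practice is far smaller still, which is why agencies pair location filters with keyword monitoring and human triage.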
Alternatively, agencies can publish frequent updates to promote transparency and control the message. In emergency management and disaster recovery, digital volunteers, trusted agents who provide online support, can assist overwhelmed on-site personnel by managing the vast volume of social media data. Trained digital volunteers help direct affected individuals to critical resources and disseminate reliable information.
Enhancing the quality of communication requires double-verifying information to eliminate ambiguity and reduce the impact of misinformation, rumours, and false information. This approach helps prevent alert fatigue and "cry wolf" scenarios by ensuring that only accurate, relevant information is disseminated. Prioritising ground truth over assumptions, and swiftly releasing verified information or acknowledging the situation, can bolster an agency's credibility, allowing it to collaborate effectively with truth amplifiers. Prebunking and debunking methods are also effective ways to counter misinformation and build cognitive defences that help people recognise red flags. Additionally, evaluating the relevance of various social media information is crucial for maintaining clear and effective communication.
References
- https://www.nature.com/articles/s41598-023-40399-9#:~:text=Moreover%2C%20misinformation%20can%20create%20unnecessary,impacting%20the%20rescue%20operations29.
- https://www.redcross.ca/blog/2023/5/why-misinformation-is-dangerous-especially-during-disasters
- https://www.soas.ac.uk/about/blog/disinformation-during-natural-disasters-emerging-vulnerability
- https://www.dhs.gov/sites/default/files/publications/SMWG_Countering-False-Info-Social-Media-Disasters-Emergencies_Mar2018-508.pdf