#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A viral video claims to capture a breathtaking aerial view of Mount Kailash, apparently offering a rare real-life shot of Tibet's sacred mountain. We investigated its authenticity and analysed the footage for signs of digital manipulation.
CLAIMS:
The viral video claims to show a genuine aerial shot of Mount Kailash, showcasing the natural beauty of the hallowed mountain. It circulated widely on social media, with users presenting it as authentic footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created with Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. Advanced digital techniques give the video its realistic, lifelike appearance.
No media outlet or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual aspects, including the lighting and environmental features, indicate that it is computer-generated.
For further verification, we ran the video through Hive Moderation, a deepfake detection tool, to determine whether it is AI-generated or real. The tool classified it as AI-generated.
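Checks like this can also be scripted. Below is a minimal sketch of submitting a frame from a video to Hive's synchronous moderation API; the endpoint follows Hive's public v2 task API, but the exact response schema, class labels, API key, and file name here are assumptions to be confirmed against Hive's documentation.

```python
import requests

HIVE_API_KEY = "YOUR_HIVE_API_KEY"   # placeholder credential
FRAME_PATH = "kailash_frame.jpg"     # a frame extracted from the viral video

# Submit one frame to Hive's synchronous classification endpoint.
with open(FRAME_PATH, "rb") as media:
    response = requests.post(
        "https://api.thehive.ai/api/v2/task/sync",
        headers={"Authorization": f"Token {HIVE_API_KEY}"},
        files={"media": media},
        timeout=30,
    )
response.raise_for_status()
result = response.json()

# The response nests per-frame class scores (e.g. an AI-generated class);
# field names can vary by model version, so inspect the raw JSON first.
for status in result.get("status", []):
    for output in status.get("response", {}).get("output", []):
        for cls in output.get("classes", []):
            print(f'{cls["class"]}: {cls["score"]:.3f}')
```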

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).

Introduction
The recent inauguration of the Google Safety Engineering Centre (GSEC) in Hyderabad on 18 June 2025 marks a pivotal moment not just for India but for the entire Asia-Pacific region's digital future. As only the fourth such centre in the world, after Munich, Dublin, and Málaga, its presence signals a shift in how AI safety, cybersecurity, and digital trust are being decentralised, leading to a more globalised and inclusive tech ecosystem. India's digitisation has grown at a rapid scale over the years, introducing millions of first-time internet users who, depending on their awareness, are susceptible to online scams, phishing, deepfakes, and AI-driven fraud. The establishment of GSEC is not just the launch of a facility but a step towards addressing AI readiness, user protection, and ecosystem resilience.
Building a Safer Digital Future in the Global South
The GSEC is set to operationalise the Google Safety Charter, designed around three core pillars: empowering users by protecting them from online fraud, strengthening cybersecurity for government and enterprise, and advancing responsible AI in platform design and execution. This represents a shift from standard reactive safety responses to proactive, AI-driven risk mitigation. The goal is to make safety tools not only effective but tailored to threats unique to the Global South, from multilingual phishing to financial fraud via unofficial lending apps. The centre is also expected to stimulate regional cybersecurity ecosystems by creating jobs, fostering public-private partnerships, and enabling collaboration across academia, law enforcement, civil society, and startups. In doing so, it positions Asia-Pacific not as a consumer of standard Western safety solutions but as an active contributor to the next generation of digital safeguards and customised solutions.
Previously piloted solutions by Google include DigiKavach, a real-time fraud detection framework, alongside tools such as spam protection in mobile operating systems and app-vetting mechanisms. GSEC can help scale and integrate these efforts into systems-level responses, in which threat detection, safety warnings, and reporting mechanisms coordinate seamlessly across platforms. This reimagines safety as a core design principle of India's digital public infrastructure rather than an attack-by-attack response.
CyberPeace Insights
The launch aligns with events such as the AI Readiness Methodology Conference recently held in New Delhi, which brought together researchers, policymakers, and industry leaders to discuss ethical, secure, and inclusive AI implementation. As the world grapples with AI technologies ranging from generative content to algorithmic decision-making, centres like GSEC can play a critical role in defining the safeguards and governance structures that support rapid innovation without compromising public trust and safety. The region's experiences and innovations in AI governance should shape global norms, and tech firms have a significant role to play in that effort. Beyond this, efforts to build digital infrastructure alongside safety centres dedicated to protecting it resonate with India's vision of becoming a global leader in AI.
References
- https://www.thehindu.com/news/cities/Hyderabad/google-safety-engineering-centre-india-inaugurated-in-hyderabad/article69708279.ece
- https://www.businesstoday.in/technology/news/story/google-launches-safety-charter-to-secure-indias-ai-future-flags-online-fraud-and-cyber-threats-480718-2025-06-17?utm_source=recengine&utm_medium=web&referral=yes&utm_content=footerstrip-1&t_source=recengine&t_medium=web&t_content=footerstrip-1&t_psl=False
- https://blog.google/intl/en-in/partnering-indias-success-in-a-new-digital-paradigm/
- https://blog.google/intl/en-in/company-news/googles-safety-charter-for-indias-ai-led-transformation/
- https://economictimes.indiatimes.com/magazines/panache/google-rolls-out-hyderabad-hub-for-online-safety-launches-first-indian-google-safety-engineering-centre/articleshow/121928037.cms?from=mdr

Executive Summary:
As AI technologies evolved rapidly in 2024, an AI-driven phishing attack on a large Indian financial institution illustrated the threat they pose. This case study documents the attack techniques used, the impact on the institution, the response undertaken, and the eventual outcome. It also examines the challenges of building better protection against, and awareness of, automated threats.
Introduction
As AI technology has advanced, its use in cybercrime against financial institutions worldwide has grown significantly. This report analyses a serious incident from early 2024, in which a leading Indian bank was hit by a highly sophisticated AI-supported phishing operation. The attack exploited AI's strengths in data analysis and persuasive text generation, leading to a severe compromise of the bank's internal systems.
Background
The targeted institution, one of the largest banks in India, had a strong track record of rigorous cybersecurity policies. However, AI-based methods posed new threats that its existing defences could not fully counter. The attackers concentrated on the bank's senior executives, since compromising such individuals opens the door to internal systems and financial information.
Attack Execution
The attackers used AI to craft messages that closely mimicked internal emails exchanged between employees. The AI drew on publicly available material about the bank's executives, including Facebook and Twitter content, blog entries, LinkedIn connection histories, and the tenor of their emails, making its output highly specific. Many of these emails featured official formatting, internal terminology, and the CEO's writing style, which made them extremely convincing.
The phishing emails contained links leading to a fake internal portal designed to harvest login credentials. The emails were so sophisticated that the targeted individuals believed them to be genuine and readily entered their login details, giving the attackers access to the bank's network.
Impact
The attack affected the bank on every front. Several executives surrendered their credentials to the fake emails, compromising financial databases containing customer account and transaction information. The break-in allowed the criminals to take down a number of the bank's online services, disrupting its operations and its customers for several days.
The bank also suffered a devastating blow to customer trust, as the breach exposed its weakness against contemporary cyber threats. Beyond the immediate work of mitigating the breach, the institution also had to contend with long-term reputational damage.
Technical Analysis and Findings
1. AI-Powered Phishing Email Generation
The attack used powerful natural language processing (NLP) technology, most probably built on a large-scale transformer model such as GPT (Generative Pre-trained Transformer). Because such models are trained on large volumes of text, the attackers could feed them conversation snippets from social networks and emails to generate highly credible messages.
Key Technical Features:
- Contextual Understanding: The AI took prior interactions into account and wrote follow-up emails consistent with earlier discourse.
- Style Mimicry: Given samples of the CEO's emails, the AI replicated the CEO's writing, extrapolating elements such as tone, vocabulary, and the format of the signature line.
- Adaptive Learning: The AI adapted from mistakes and feedback, tweaking subsequent emails and making detection progressively harder.
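A defensive counterpart to these capabilities is stylometric screening. The sketch below is a minimal illustration, not the bank's actual tooling: it compares crude style features of an incoming message against a baseline built from a sender's known-good emails and flags large deviations. The features, sample emails, and threshold are all illustrative assumptions.

```python
import statistics

def style_features(text: str) -> list[float]:
    """Crude stylometric fingerprint: sentence length, word length, punctuation rate."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    if not words or not sentences:
        return [0.0, 0.0, 0.0]
    return [
        len(words) / len(sentences),                     # words per sentence
        sum(len(w) for w in words) / len(words),         # mean word length
        sum(text.count(c) for c in ",;:") / len(words),  # punctuation density
    ]

def deviation(baseline: list[list[float]], candidate: list[float]) -> float:
    """Mean absolute z-score of the candidate's features against the baseline corpus."""
    scores = []
    for i, value in enumerate(candidate):
        column = [features[i] for features in baseline]
        mu, sigma = statistics.mean(column), statistics.pstdev(column) or 1.0
        scores.append(abs(value - mu) / sigma)
    return statistics.mean(scores)

# known_good: genuine past emails from the purported sender (invented examples)
known_good = [
    "Please review the Q3 figures before Friday and send me your comments.",
    "Schedule the audit call for Monday morning; loop in the compliance team.",
]
suspect = "URGENT!!! Verify your credentials immediately at the portal below!!!"

baseline = [style_features(text) for text in known_good]
if deviation(baseline, style_features(suspect)) > 2.0:  # illustrative threshold
    print("Flag for manual review: style deviates from the sender's baseline")
```

A real deployment would use far richer features (or an embedding model) and a calibrated threshold, but the principle is the same: mimicry is imperfect, and deviation from an established baseline is a usable signal.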
2. Sophisticated Spear-Phishing Techniques
Unlike ordinary phishing scams, this was a spear-phishing attack, in which specific individuals were targeted with tailored emails. Machine-learning algorithms supplied the social-engineering insight that significantly increased the chances of each target responding.
Key Technical Features:
- Targeted Data Harvesting: Automated scrapers identified the organisation's employees and harvested their public profiles and posts to personalise the messages.
- Behavioural Analysis: The AI used targets' recent activity patterns on social networks and other online platforms to forecast likely actions, such as clicking links or opening attachments.
- Real-Time Adjustments: Based on how targets responded to earlier phishing emails, the AI adjusted the timing and content of subsequent ones.
3. Advanced Evasion Techniques
The attackers also leveraged AI to evade standard email filters. These techniques modified the surface form of the emails so that spam filters would not easily detect them, while preserving the meaning of the message.
Key Technical Features:
- Dynamic Content Alteration: The AI made slight variations to different aspects of the message, producing several versions of the phishing email to defeat different filtering algorithms.
- Polymorphic Attacks: The attack used polymorphic techniques, with the payloads behind the links changing frequently, making it difficult for antivirus tools to recognise and block them as threats.
- Phantom Domains: Another tactic was using AI to generate and deploy phantom domains: real websites that appear legitimate but are short-lived and created specifically for the phishing attack, further complicating detection (a minimal lookalike-domain check is sketched below).
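Phantom and lookalike domains can often be caught with a simple heuristic. As a minimal sketch (with invented domain names), the check below computes the edit distance between each link's host and the organisation's trusted domains, flagging near-misses such as character substitutions:

```python
from urllib.parse import urlparse

# Hypothetical trusted domains; a real deployment would load the
# organisation's actual allow-list.
LEGITIMATE = {"examplebank.com", "mail.examplebank.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def is_lookalike(url: str, max_distance: int = 2) -> bool:
    """Flag hosts that are close to, but not exactly, a trusted domain."""
    host = urlparse(url).hostname or ""
    if host in LEGITIMATE:
        return False
    return any(edit_distance(host, legit) <= max_distance for legit in LEGITIMATE)

print(is_lookalike("https://examp1ebank.com/login"))  # True: '1' substituted for 'l'
print(is_lookalike("https://examplebank.com/login"))  # False: exact trusted match
```

Pairing such a check with domain-age lookups (newly registered domains are inherently suspicious) also addresses the short-lived nature of phantom domains.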
4. Exploitation of Human Vulnerabilities
The attack's success rested not only on AI but also on human vulnerabilities: trust in familiar language and the tendency to defer to authority.
Key Technical Features:
- Social Engineering: The AI identified psychological levers, chiefly urgency and familiarity, that maximised the chance of recipients opening the phishing emails.
- Multi-Layered Deception: The AI staged emails in two tiers: once a target engaged with the first message, a second one followed under the pretext of being a follow-up from a genuine company or person.
Response
On detecting the breach, the bank's cybersecurity personnel sprang into action to limit the fallout. They reported the matter to the Indian Computer Emergency Response Team (CERT-In) to trace the attack's origin and block further intrusions. The bank also immediately began strengthening its security, for instance by tightening email filtering and authentication procedures.
Recognising the risks, the bank also rolled out a wide-scale cybersecurity awareness programme. It trained employees to recognise possible AI-driven phishing and to verify a sender's identity before acting on an email.
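Part of that sender verification can be automated at the gateway. As a minimal sketch, assuming the mail server stamps a standard Authentication-Results header (RFC 8601), a filter can quarantine internal-looking mail that fails SPF, DKIM, or DMARC; the message below is invented for illustration.

```python
import email
import re

# Invented raw message: it claims to come from the CEO's internal address,
# but the gateway's Authentication-Results header records failed checks.
RAW_MESSAGE = b"""\
Authentication-Results: mx.examplebank.com;
 spf=fail smtp.mailfrom=ceo@examplebank.com;
 dkim=none; dmarc=fail header.from=examplebank.com
From: "CEO" <ceo@examplebank.com>
Subject: Urgent: verify your credentials

Please log in at the portal below immediately.
"""

msg = email.message_from_bytes(RAW_MESSAGE)
auth_results = msg.get("Authentication-Results", "")

# Pull the verdict for each authentication mechanism out of the header.
verdicts = dict(re.findall(r"(spf|dkim|dmarc)=(\w+)", auth_results))

if any(verdicts.get(mechanism) != "pass" for mechanism in ("spf", "dkim", "dmarc")):
    print("Quarantine: claims an internal sender but fails authentication:", verdicts)
```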
Outcome
Although the bank restored its operations after the attack without lasting critical impact, the incident raised serious issues. The institution reported losses in the form of compensation paid to affected customers and the cost of new cybersecurity measures. More fundamentally, the incident damaged confidence in the bank, as customers and shareholders began to doubt its capacity to safeguard information in an era of advanced AI-driven cyber threats.
This case underscores the importance for financial firms of aligning their security plans against emerging threats. The attack is also a warning to other organisations: no one is immune to AI-assisted attacks, and proper countermeasures should be taken.
Conclusion
The 2024 AI-phishing attack on an Indian bank is a clear indicator of modern attackers' capabilities. As AI technology progresses, so do cyberattacks. Financial institutions and other organisations must adopt adequate AI-aware cybersecurity solutions to protect their systems and data.
Moreover, this case highlights the importance of training employees to prevent successful cyberattacks. Organisational cybersecurity awareness, secure employee behaviour, and practices that help staff recognise and report likely AI-driven attacks all reduce the risk posed by such threats.
Recommendations
- Enhanced AI-Based Defences: Financial institutions should deploy AI-driven detection and response products capable of mitigating AI-powered cyber threats in real time.
- Employee Training Programs: All employees should undergo frequent cybersecurity awareness training, including how to identify AI-generated phishing.
- Stricter Authentication Protocols: Access to sensitive accounts should require strong identity checks and multi-factor authentication (a minimal sketch follows these recommendations).
- Collaboration with CERT-In: Maintain continued engagement with authorities such as the Indian Computer Emergency Response Team (CERT-In) and its equivalents to monitor new threats and act on validated recommendations.
- Public Communication Strategies: Establish effective communication plans to keep customers informed and maintain trust even while the organisation is responding to a cyber threat.
By implementing these measures, financial institutions can better prepare for the new threats that AI-enabled attacks pose to essential financial assets in today's complex IT environments.
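As one concrete building block for the stricter-authentication recommendation, here is a minimal sketch of time-based one-time passwords (TOTP, RFC 6238) using the open-source pyotp library; the account name, issuer, and inline secret generation are illustrative only, since production secrets belong in a secrets manager.

```python
import pyotp  # pip install pyotp

# Provision a per-user secret once at enrolment (illustrative; store it securely).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user loads the secret into an authenticator app via this provisioning URI.
print(totp.provisioning_uri(name="exec@examplebank.com", issuer_name="ExampleBank"))

# At login, the portal checks the 6-digit code alongside the password.
submitted_code = totp.now()  # simulating the code the user's app would display
if totp.verify(submitted_code, valid_window=1):  # tolerate one 30s step of clock drift
    print("Second factor accepted")
else:
    print("Reject login: invalid one-time code")
```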

Executive Summary:
A photo circulating online that claims to show the future design of the Bhabha Atomic Research Centre (BARC) building has been found to be fake after fact-checking. There is no official notice or confirmation from BARC on its website or social media handles, and AI content detection tools indicate that the image is AI-generated. In short, the viral picture does not show an authentic architectural plan for the BARC building.

Claims:
A photo allegedly showing the new design of the Bhabha Atomic Research Centre (BARC) building is circulating widely on social media platforms.


Fact Check:
To begin our investigation, we checked BARC's official website, including its tender and NIT (Notice Inviting Tender) notifications, for any mention of new construction or renovation. We found no information corresponding to the claim.

Next, we searched BARC's official social media pages on Facebook, Instagram, and X for any updates about a new building. Again, there was no information about the supposed design. To test whether the viral image was AI-generated, we ran it through Hive's AI content detection tool, 'AI Classifier'. The tool classified the image as AI-generated with a score of 100%.

To double-check, we also used another AI-image detection tool, 'Is It AI?', which scored the image as 98.74% likely to be AI-generated.

Conclusion:
To conclude, the claim that the image shows the new BARC building is fake and misleading. A detailed investigation, checking BARC's official channels and using AI detection tools, showed that the picture is far more likely an AI-generated image than an original architectural design. BARC has made no announcement of any such plan, and with no credible source to support it, the claim is untrustworthy.
- Claim: Social media posts claim to show the new design of the BARC building.
Claimed on: X, Facebook
Fact Check: Misleading