#FactCheck - Old Japanese Earthquake Footage Falsely Linked to Tibet
Executive Summary:
A viral post on X (formerly Twitter) gained significant attention by presenting old footage as damage from the recent earthquake in Tibet. Our findings confirm that the clip was not filmed in Tibet; it comes from an earthquake that struck Japan in the past. This report traces the origin of the claim and presents the analysis and verified findings that clarify the misinformation surrounding the video.

Claim:
The viral video shows collapsed infrastructure and significant destruction, with captions claiming it is evidence of a recent earthquake in Tibet. Similar claims can be found here and here.

Fact Check:
The widely circulated clip, initially claimed to depict the aftermath of the recent earthquake in Tibet, has been rigorously analyzed and proven to be misattributed. A reverse image search based on keyframes extracted from the video revealed that the footage originated from a devastating earthquake in Japan. According to an article published by a Japanese news website, that incident occurred in February 2024. News agencies authenticated the video at the time, as it accurately depicted the scenes of destruction reported during that event.
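This keyframe-based approach is straightforward to reproduce with open tooling. Below is a minimal sketch, assuming OpenCV is installed (`pip install opencv-python`) and that `viral_clip.mp4` is a hypothetical local copy of the footage; the extracted frames can then be uploaded manually to a reverse image search service such as Google Lens or TinEye:

```python
# Sketch: sample candidate keyframes from a video for reverse image search.
# Assumes OpenCV is installed and "viral_clip.mp4" is a local copy of the clip.
import cv2

cap = cv2.VideoCapture("viral_clip.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
frame_idx = 0
saved = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Save roughly one frame every two seconds as a candidate keyframe.
    if frame_idx % int(fps * 2) == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} keyframes for reverse image search")
```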

Moreover, the same video had already been uploaded to a YouTube channel, confirming that it is not recent. The architecture, the signboards written in Japanese script, and the vehicles appearing in the footage further establish that it was filmed in Japan, not Tibet. The clip depicts a past event in Japan and was shared out of context to spread false information.

The video was uploaded on February 2nd, 2024.
Snap from viral video

Snap from YouTube video

Conclusion:
The viral video claiming to show the recent earthquake in Tibet is therefore misattributed: it is old footage from a previous earthquake in Japan. The case underscores the need to verify information before sharing it, so that accurate information spreads and false claims are contained.
- Claim: A viral video claims to show recent earthquake destruction in Tibet.
- Claimed On: X (formerly known as Twitter)
- Fact Check: False and Misleading

Introduction
By now, you have likely heard of many techniques of cybercrime, several of which we could never have anticipated. Reports are emerging from different parts of the country of video calls being used to defraud people. During these incidents, fraudsters capture pornographic recordings of victims with a screen recorder and then blackmail them, sending the videos and demanding money. Cybercriminals keep refining such strategies to defraud more people. In this blog post, we will explore the tactics involved in one such case, the psychological impact, and ways to combat it. Before we look at the case in detail, let's examine deepfakes, AI, and sextortion, and how fraudsters use technology to commit crimes.
Understanding Deepfake
Deepfake technology is the manipulation or fabrication of multimedia content such as videos, photos, or audio recordings using artificial intelligence (AI) algorithms and deep learning models. These algorithms process massive quantities of data to learn and imitate human-like behaviour, enabling the creation of highly realistic synthetic media.
Individuals with malicious intent can use deepfake technology to alter facial expressions, body movements, and even voices in recordings, essentially replacing a person's appearance with someone else's. The resulting footage can be practically indistinguishable from authentic video, making it difficult for viewers to tell the two apart.
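Classic face-swap pipelines often use a shared encoder with one decoder per identity: the encoder learns identity-agnostic facial features, and swapping decoders at inference renders one person's expressions onto another's face. The following is a minimal, untrained sketch of that layout, assuming PyTorch; the layer sizes and the 64x64 input are illustrative only, not taken from any specific system:

```python
# Sketch: shared-encoder / per-identity-decoder layout behind classic
# face-swap deepfakes. Untrained and illustrative only.
import torch
import torch.nn as nn

encoder = nn.Sequential(           # maps a 64x64 RGB face crop to a latent code
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 512),
    nn.ReLU(),
)
decoder_a = nn.Sequential(nn.Linear(512, 3 * 64 * 64), nn.Sigmoid())  # person A
decoder_b = nn.Sequential(nn.Linear(512, 3 * 64 * 64), nn.Sigmoid())  # person B

face_of_a = torch.rand(1, 3, 64, 64)            # stand-in for a real face crop
latent = encoder(face_of_a)                     # identity-agnostic features
swapped = decoder_b(latent).view(1, 3, 64, 64)  # re-rendered as person B
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

During training, each decoder learns to reconstruct only its own person's faces; the swap happens by feeding person A's latent code through person B's decoder.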
Sextortion and technology
Sextortion is a form of online blackmail in which offenders use graphic or compromising content to coerce victims into providing money, sexual favours, or other concessions. This content is usually obtained through hacking, social engineering, or tricking people into sharing sensitive material.
Deepfake technology combined with sextortion techniques has intensified the impact on victims. Perpetrators can now use deepfakes to create and distribute pornographic or compromising videos and photographs that appear genuine but are entirely fabricated. As the threat of exposure becomes more credible and harder to rebut, the stakes for victims rise.
How Cyber Crooks Deceive
In the present case, cybercriminals first make video calls to targets and capture the footage. They then manipulate that footage, merging it with a doctored nude video, which leaves the victim desperate to keep the matter hidden. The fraudsters then demand money as ransom, threatening to release the doctored video to the victim's contacts and on social media platforms. In this case, a further video emerged depicting the woman supposedly featured in the first clip taking her own life out of shame over the video's release. Such additional threats are intended purely to inflict psychological pressure and coerce the victims.
Sextortionists have reached a new low by profiting from the misfortunes of others, notably by invoking deceased persons. By generating deepfake videos depicting these individuals, the offenders aim to maximise emotional pain and pressure victims into compliance. They exploit the compassion and emotion naturally associated with tragedy to extract larger ransoms.
This distressing exploitation not only adds urgency to the extortion demands but also preys on the victim's sensitivity and emotional instability. The offenders even pressure victims through impersonation, threatening that the victims themselves could end up in jail.
Tactics used
The morphed death videos are deliberately constructed to heighten emotional distress and instil terror in the targeted individual. By editing photographs or videos of the deceased, the offenders create unsettling scenarios that intensify the victim's emotional response.
The psychological manipulation seeks to instil guilt, regret, and a sense of responsibility in the victim. The notion that they are somehow linked to the tragedy deepens their emotional vulnerability, making them more susceptible to the sextortionists' demands. The offenders take advantage of these emotions, coercing victims into cooperation out of fear of being implicated in the apparent tragedy.
The impact on the victim's mental well-being cannot be overstated. They may experience intense psychological trauma, including anxiety, depression, and post-traumatic stress disorder (PTSD). The guilt and shame associated with the false belief of being linked to someone's death can have long-lasting effects on their emotional health and overall quality of life, and some victims may develop lasting trust issues.
Law enforcement agencies' concerns
Law enforcement agencies have expressed concern about the growing menace of these illegal acts. The use of deepfake methods and other AI technologies to create convincing morphed videos demonstrates scammers' increasing sophistication. These tools can modify digital content in ways that depart radically from the genuine footage, making it difficult for victims to detect that a video is fake.
Defence strategies to fight back: Combating sextortion requires a proactive approach that empowers individuals and makes use of available resources. This section covers crucial anti-sextortion techniques such as reporting incidents, preserving evidence, raising awareness, and implementing digital security measures.
- Report the Incident: Sextortion victims should immediately notify law enforcement. Contact your local police or cybercrime department and supply them with all relevant information, including specifics of the extortion attempt, communication logs, and any other evidence that can assist the investigation. Reporting the incident is critical to holding criminals accountable and preventing further harm to others.
- Preserve Evidence: Preserving evidence is critical to building a solid case against sextortionists. Save and document all communications connected to the extortion, including text messages, emails, and social media conversations. Take screenshots, record phone calls (where legal), and retain any other digital material or documents that might serve as evidence. Such evidence can prove invaluable in investigations and judicial proceedings.
Digital security: Implementing comprehensive digital security measures can considerably lower vulnerability to sextortion attacks. Some important measures include:
- Use unique, complex passwords for all online accounts, and avoid reusing passwords across platforms. Consider using a password manager to generate and securely store strong passwords (a minimal sketch follows this list).
- Enable two-factor authentication (2FA) whenever possible, which adds an extra layer of protection by requiring a second verification step, such as a code delivered to your phone or email, in addition to the password.
- Regular software updates: Keep your operating system, antivirus software, and programmes up to date. Security patches are frequently included in software upgrades to defend against known vulnerabilities.
- Adjust your privacy settings on social networking platforms and other online accounts to limit the availability of personal information and restrict access to your content.
- Be cautious when clicking on links or downloading files from unfamiliar or suspect sources. When exchanging personal information online, only use trusted websites.
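As a small illustration of the first measure, a strong unique password can be generated with Python's standard `secrets` module; this is a minimal sketch of the idea, not a substitute for a dedicated password manager:

```python
# Sketch: generate a strong random password using only the standard library.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password of letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different output on every run
```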
Conclusion:
Combating sextortion demands a collaborative effort that combines proactive tactics and resources to confront this damaging practice. Individuals can actively fight back against sextortion by reporting incidents, preserving evidence, raising awareness, and implementing digital security measures. It is critical to empower victims, support their recovery, and collaborate to build a safer online environment where sextortionists are held accountable and everyone can navigate the digital world with confidence.

Introduction
In a landmark move for India's growing artificial intelligence (AI) ecosystem, ten cutting-edge Indian startups have been selected to participate in the prestigious Global AI Accelerator Programme in Paris. This initiative, facilitated by the Ministry of Electronics and Information Technology (MeitY) under the IndiaAI mission, aims to project India's AI innovation onto the global stage, empower startups to scale impactful solutions, and foster cross-border collaboration.
Launched in alignment with the vision of India as a global AI powerhouse, the IndiaAI initiative has been working to strengthen domestic AI capabilities. Participation in the Paris Accelerator Programme is a direct extension of this mission, offering Indian startups access to world-class mentorship, investor networks, and a thriving innovation ecosystem in France, home to one of Europe's leading AI hubs.
Global Acceleration for Local Impact
The ten selected startups represent diverse verticals, from conversational AI to cybersecurity, edtech, and surveillance intelligence. The selection followed a rigorous evaluation of innovation potential, scalability, and societal impact. Each of these ventures reflects India's technological ambition and its capacity to solve real-world problems through AI.
The significance of this opportunity goes beyond business growth. It lays the foundation for collaborative policy dialogues, ethical AI development, and bilateral innovation frameworks. With rising global scrutiny of issues such as AI safety, bias, and misinformation, the need for more responsible innovation takes centre stage.
CyberPeace Outlook
India's participation opens a pivotal chapter in its AI diplomacy. Through such initiatives, AI can be explored not merely as a set of commercial tools but as a cornerstone of national security, citizen safety, and digital sovereignty. As AI systems increasingly integrate with critical infrastructure, from health to law enforcement, cyber resilience becomes ever more significant. With AI's growing role in sensitive sectors such as audio-video surveillance and digital edtech, there is an urgent need for secure-by-design innovation. Building security, ethics, and accountability into the development lifecycle is essential if digital progress is to remain aligned with these broader goals.
Conclusion
India’s participation in the Paris Accelerator Programme signifies its commitment to shaping global AI norms and innovation diplomacy. As Indian startups interact with European regulators, investors, and technologists, they carry the responsibility of representing not just business acumen but the values of an open, inclusive, and secure digital future.
This global exposure also feeds directly into India's domestic AI strategy: engagement on a global platform informs policy evolution, strengthens research and development networks, and builds a robust talent pipeline. Programmes like these act as bridges, ensuring India remains adaptive in the ever-evolving AI landscape. Encouraging such global engagements, while actively working with stakeholders to build frameworks that safeguard national interests, protect civil liberties, and foster innovation, is paramount. As India takes this global leap, the journey ahead must be shaped by innovation, collaboration, and vigilance.
References
- https://egov.eletsonline.com/2025/05/indiaai-selects-10-innovative-startups-for-prestigious-ai-accelerator-programme-in-paris/#:~:text=The%2010%20startups%20selected%20for,audio%2Dvideo%20analytics%20for%20surveillance.
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2132377
- https://inc42.com/buzz/meet-the-10-indian-ai-startups-selected-for-global-acceleration-programme/
- https://www.businessworld.in/article/govt-to-send-10-ai-startups-to-paris-accelerator-in-push-for-global-reach-558251
What is a Deepfake?
Deepfakes are a fascinating but unsettling phenomenon now prominent in the digital age. These incredibly convincing videos have drawn widespread attention and blended seamlessly into our high-tech surroundings. Lifelike yet entirely fabricated, deepfake videos have become a fixture of the digital society we navigate. While such creations hold an undeniably captivating charm, their misuse carries serious ramifications. Join us as we examine the damage that misuse of deepfakes can inflict on our globalised digital culture. After several actors, business tycoon Ratan Tata has become the latest victim of a deepfake: he called out a post from a user that featured a fabricated interview of him in a video recommending investments.
Case Study
The menace of deepfakes spares no one; actors, politicians, and entrepreneurs alike are being caught in the trap. Soon after actresses Rashmika Mandanna, Katrina Kaif, Kajol, and others fell prey to the rising wave of deepfakes, a new case emerged, this time taking Mr. Ratan Tata by storm. The business tycoon took to social media, sharing a screenshot of an Instagram post that used a fake interview of him in a video recommending investments. In the video, he was purportedly seen offering his followers a once-in-a-million opportunity to "exaggerate investments risk-free."
In the video, Ratan Tata appeared to advise the Indian public about an opportunity to grow their money with no risk and a 100% guarantee. The caption of the clip read, "Go to the channel right now."
Tata annotated both the video and the screenshot of the caption with the word "FAKE."
Ongoing Deepfake Assaults in India
Deepfake videos continue to target celebrities, and Priyanka Chopra is among the recent victims of this unsettling trend. Her deepfake adopts a different strategy from other examples involving actresses like Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video leaves her appearance unchanged but modifies her voice and replaces genuine interview quotes with fabricated commercial lines. The deceptive video shows Priyanka promoting a product and discussing her annual income, highlighting the worrying evolution of deepfake technology and its potential effects on prominent personalities.
Prevention and Detection
To effectively combat the growing threat posed by deepfake technology, individuals and institutions should prioritise developing critical thinking skills, carefully examining visual and auditory cues for discrepancies, using tools like reverse image searches, keeping up with the latest deepfake trends, and rigorously fact-checking against reputable media sources. Key steps for improving resilience against deepfake threats include putting strong security policies in place, integrating advanced deepfake detection technologies, supporting ethical AI development, and encouraging open communication and cooperation. By combining these tactics and adapting to the constantly changing landscape, we can manage the problems posed by deepfake technology effectively and mindfully.
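One lightweight technique behind the reverse-image-search advice above is perceptual hashing, which flags visually near-identical frames even after re-encoding or resizing. Below is a minimal sketch, assuming the third-party `Pillow` and `imagehash` packages and two hypothetical locally saved frames; the threshold of 10 is an arbitrary heuristic, not a calibrated value:

```python
# Sketch: compare two video frames with a perceptual hash to spot reused footage.
# Assumes: pip install Pillow imagehash, plus two local frame images.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect_frame.jpg"))
reference = imagehash.phash(Image.open("original_frame.jpg"))

# Subtracting two hashes gives the Hamming distance: small means near-identical.
distance = suspect - reference
print(f"Hash distance: {distance}")
if distance <= 10:  # heuristic threshold; tune for your own use case
    print("Frames likely come from the same source footage")
```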
Conclusion
The recent instance involving Ratan Tata illustrates the danger that deepfake technology poses to our digital society. The fake video, posted to Instagram, showed the business tycoon giving financial advice and luring followers with low-risk investment options. Tata quickly called out the footage as "FAKE," highlighting the need for careful media consumption. The incident is a reminder of the damage deepfakes can do to prominent people's reputations, and it demands that public figures be more mindful of the potential misuse of their digital identities. By emphasising preventive measures such as strict safety regulations and the deployment of state-of-the-art deepfake detection technologies, we can collectively strengthen our defences against this insidious phenomenon and preserve the trustworthiness of our online culture in the face of ever-changing technological challenges.
References
- https://economictimes.indiatimes.com/magazines/panache/ratan-tata-slams-deepfake-video-that-features-him-giving-risk-free-investment-advice/articleshow/105805223.cms
- https://www.ndtv.com/india-news/ratan-tata-flags-deepfake-video-of-his-interview-recommending-investments-4640515
- https://www.businesstoday.in/bt-tv/short-video/viralvideo-business-tycoon-ratan-tata-falls-victim-to-deepfake-408557-2023-12-07
- https://www.livemint.com/news/india/false-ratan-tata-calls-out-a-deepfake-video-of-him-giving-investment-advice-11701926766285.html