#FactCheck - Viral Video Misleadingly Tied to Recent Taiwan Earthquake
Executive Summary:
Following the recent earthquake in Taiwan, a video went viral on social media, with posts claiming it was captured during the quake. Fact-checking reveals that the video is old: it dates to September 2022, when Taiwan was struck by another earthquake of magnitude 7.2. Reverse image search and comparison with older footage establish that the viral video is from the 2022 earthquake, not the recent 2024 event. Several news outlets covered the 2022 incident, providing additional confirmation of the video's origin.

Claims:
News is circulating on social media about a recent earthquake in Taiwan and Japan. A post on “X” states:
“BREAKING NEWS :
Horrific #earthquake of 7.4 magnitude hit #Taiwan and #Japan. There is an alert that #Tsunami might hit them soon”.


Fact Check:
We began our investigation by watching the video thoroughly and splitting it into individual frames. We then performed a reverse image search on those frames, which led us to an X (formerly Twitter) post in which a user had shared the same viral video on September 18, 2022. Notably, the post carried the caption:
“#Tsunami warnings issued after Taiwan quake. #Taiwan #Earthquake #TaiwanEarthquake”
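Reverse image search works by reducing each frame to a compact fingerprint and matching it against an index of known images. As a minimal illustration of that idea (not the actual tool used in this investigation, which relies on far more robust techniques), the following Python sketch computes a toy "difference hash" over grayscale pixel grids; near-identical frames yield near-identical hashes:

```python
def dhash(pixels):
    """Compute a difference hash from a 2D grid of grayscale values.

    Each bit records whether a pixel is darker than its right-hand
    neighbour, so visually similar frames yield similar hashes.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append("1" if left < right else "0")
    return int("".join(bits), 2)

def hamming_distance(a, b):
    """Number of differing bits between two hashes; small means similar."""
    return bin(a ^ b).count("1")

# Two synthetic 8x9 "frames": identical except for one swapped pixel pair.
frame_a = [list(range(9)) for _ in range(8)]
frame_b = [list(range(9)) for _ in range(8)]
frame_b[7][7], frame_b[7][8] = frame_b[7][8], frame_b[7][7]

print(hamming_distance(dhash(frame_a), dhash(frame_a)))  # 0: same frame
print(hamming_distance(dhash(frame_a), dhash(frame_b)))  # 1: near-duplicate
```

A fact-checker can hash every extracted frame once and then compare new suspect footage against the archive in constant time per frame, flagging matches whose Hamming distance falls below a small threshold.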

The same viral video was carried by several news outlets in September 2022.

The viral video was also aired on the NDTV news channel on September 18, 2022.

Conclusion:
To conclude, the viral video claimed to depict the 2024 Taiwan earthquake actually dates from September 2022. A careful comparison of the old footage with the new claims makes clear that the video does not show the recent earthquake as stated. Hence, the viral video is misleading. It is important to validate information before sharing it on social media to prevent the spread of misinformation.
Claim: Video circulating on social media captures the recent 2024 earthquake in Taiwan.
Claimed on: X, Facebook, YouTube
Fact Check: Fake & Misleading; the video actually shows an incident from 2022.
Related Blogs
Introduction
In India, children's rights to the protection of their personal data are enshrined in the Digital Personal Data Protection Act, 2023 (DPDP Act), India's newly enacted digital personal data protection law. The DPDP Act requires verifiable consent of parents or legal guardians for the processing of children's personal data; processing without such consent constitutes a violation of the Act. Under Section 2(f) of the DPDP Act, a “child” means an individual who has not completed the age of eighteen years.
Section 9 under the DPDP Act, 2023
With reference to the collection of children's data, Section 9 of the DPDP Act, 2023 provides that for children below 18 years of age, consent from parents or legal guardians is required: the Data Fiduciary shall, before processing any personal data of a child, or of a person with a disability who has a lawful guardian, obtain verifiable consent from the parent or lawful guardian. Section 9 aims to create a safer online environment for children by limiting the exploitation of their data for commercial or other purposes. By virtue of this section, parents and guardians gain greater control over their children's data and privacy, and are empowered to decide how they manage their children's online activities and the permissions they grant to various online services.
Section 9 sub-section (3) specifies that a Data Fiduciary shall not undertake tracking or behavioural monitoring of children, or targeted advertising directed at children. However, sub-section (5) leaves room for exemption by empowering the Central Government to exempt specific Data Fiduciaries or data processors from this prohibition under the forthcoming DPDP Rules, which are yet to be announced or released.
Impact on social media platforms
Social media companies are raising concerns about Section 9 of the DPDP Act and the upcoming DPDP Rules. Section 9 prohibits behavioural tracking of children and targeted advertising directed at them on digital platforms. By barring intermediaries from tracking a child's internet activities and serving targeted advertising, the law aims to preserve children's privacy. However, social media corporations contend that this limitation undermines the efficacy of safety measures intended to safeguard young users, arguing that monitoring certain user signals, including those from minors, is necessary for such measures to work.
Social media companies assert that tracking teenagers' behaviour is essential for safeguarding them from predators and harmful interactions. They believe that a complete ban on behavioural tracking is counterproductive to the government's objectives of protecting children. The scope to grant exemption leaves the door open for further advocacy on this issue. Hence it necessitates coordination with the concerned ministry and relevant stakeholders to find a balanced approach that maintains both privacy and safety for young users.
Furthermore, the impact on social media platforms also extends to the user experience and the operational costs required to implement the functioning of the changes created by regulations. This also involves significant changes to their algorithms and data-handling processes. Implementing robust age verification systems to identify young users and protect their data will also be a technically challenging step for the various scales of platforms. Ensuring that children’s data is not used for targeted advertising or behavioural monitoring also requires sophisticated data management systems. The blanket ban on targeted advertising and behavioural tracking may also affect the personalisation of content for young users, which may reduce their engagement with the platform.
For globally operating platforms, aligning their practices with the DPDP Act in India while also complying with data protection laws in other countries (such as the GDPR in Europe or COPPA in the US) can be complex and resource-intensive. Platforms might choose to implement uniform global policies for simplicity, which could affect their operations in regions not governed by similar laws. Competitive dynamics may also shift: smaller or niche platforms that cater specifically to children and comply with these regulations may gain an edge, and there may be a drive towards developing new, compliant ways of monetising user interactions that do not rely on behavioural tracking.
CyberPeace Policy Recommendations
A balanced strategy should be adopted, one that gives due weight both to the contentions of social media companies and to the protection of children's personal information. Instead of a blanket ban, platforms could be obliged to follow transparent advertising practices, ensuring that children are not exposed to misleading or manipulative marketing techniques. Self-regulation mechanisms can support ethical behaviour, accountability, and the safety of young users' personal information in platform practices. Additionally, verifiable consent should be examined and implemented in a practical manner, with platforms having a say in designing the verification mechanism. Ultimately, the issue should be handled so that behavioural tracking and targeted advertising do not compromise children's well-being, safety, or data protection in any way.
Final Words
Under Section 9 of the DPDP Act, the prohibition of behavioural tracking and targeted advertising in the processing of children's personal data will compel social media platforms to overhaul their data collection and advertising practices, ensuring compliance with stricter privacy regulations. The legislative intent behind this provision is to strengthen the protection of children's digital personal data and privacy. Children are particularly vulnerable to digital threats because of their still-developing maturity and cognitive capacities: they simply do not yet possess the discernment and caution required to navigate the Internet safely, so the protection of their privacy stands as a priority. At the same time, a balanced approach is needed, one that maintains both ‘privacy’ and ‘safety’ for young users.
References
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.firstpost.com/tech/as-govt-of-india-starts-preparing-rules-for-dpdp-act-social-media-platforms-worried-13789134.html#google_vignette
- https://www.business-standard.com/industry/news/social-media-platforms-worry-new-data-law-could-affect-child-safety-ads-124070400673_1.html

Introduction
By now, you must have heard of several techniques of cybercrime, many of which we could never have anticipated. Reports are coming in from different parts of the country of video calls being used to cheat people. Through video calls, cybercriminals are defrauding individuals: fraudsters record pornographic footage of victims using a screen recorder, then blackmail them by sending these videos and demanding money. Cybercriminals keep refining their strategies to defraud more people. In this blog post, we will explore the tactics involved in this case, their psychological impact, and ways to combat them. Before examining the case in detail, let us look at deepfakes, AI, and sextortion, and how fraudsters use technology to commit crimes.
Understanding Deepfake
Deepfake technology is the manipulation or fabrication of multimedia content such as videos, photos, or audio recordings using artificial intelligence (AI) algorithms and deep learning models. These models process massive quantities of data to learn and imitate human-like behaviour, enabling the creation of highly realistic synthetic media.
Individuals with malicious intent may use deepfake technology to alter facial expressions, body movements, and even voices in recordings, essentially replacing one person's appearance with another's. The resulting footage can be practically indistinguishable from authentic video, making it difficult for viewers to tell the two apart.
Sextortion and technology
Sextortion is a sort of internet blackmail in which offenders use graphic or compromising content to compel others into offering money, sexual favours, or other concessions. This information is usually gained by hacking, social engineering, or tricking people into providing sensitive information.
Deepfake technology combined with sextortion techniques has increased the impact on victims. Deepfakes may now be used by perpetrators to make and distribute pornographic or compromising movies or photographs that seem genuine but are completely fake. As the prospect of discovery grows increasingly credible and tougher to rebut, the stakes for victims rise.
Cyber crooks Deceive
In the present case, cyber thugs first make video calls to people and capture the footage. They then distort the footage and merge it with an explicit video. Feeling compelled to conceal the matter, the victim becomes easier to pressure; the criminals then demand money as ransom, threatening to release the doctored video to the victim's contacts and on social media platforms. In this case, a video has also emerged in which a woman supposedly featured in the first clip is depicted taking her own life out of shame over the video's release. Such additional threats are intended purely to inflict psychological pressure and coercion on the victims.
Sextortionists have reached a new low by profiting from the misfortunes of others, notably targeting deceased victims. The offenders want to maximise emotional pain and persuade the victim into acquiescence by generating deep fake films depicting these persons. They use the inherent compassion and emotion connected with tragedy to exact bigger ransoms from their victims.
This distressing exploitation not only adds urgency to the extortion demands but also preys on the victim's sensitivity and emotional instability. The criminals even pressure victims through impersonation, warning that they could end up in jail if the demands are not met.
Tactics used
The morphed death videos are precisely constructed to heighten emotional discomfort and instil terror in the targeted individual. By editing photographs or videos of the deceased, the offenders create unsettling circumstances that heighten the victim’s emotional response.
The psychological manipulation seeks to instil guilt, regret, and a sense of responsibility in the victim. The notion that they are somehow linked to the catastrophe deepens their emotional vulnerability, making them more susceptible to the sextortionists' demands. The offenders take advantage of these emotions, coercing victims into cooperation out of fear of being implicated in the apparent tragedy.
The impact on the victim’s mental well-being cannot be overstated. They may experience intense psychological trauma, including anxiety, depression, and post-traumatic stress disorder (PTSD). The guilt and shame associated with the false belief of being linked to someone’s death can have long-lasting effects on their emotional health and overall quality of life; some victims may also develop lasting trust issues.
Law enforcement agencies advised
Law enforcement agencies are concerned about the growing menace of these illegal acts. The use of deepfake methods and other AI technologies to produce convincing morphed videos demonstrates the scammers' improved capabilities. These tools can alter digital content until it departs radically from the genuine footage, while remaining difficult for victims to recognise as fake.
Defence strategies to fight back: To combat sextortion, a proactive approach that empowers individuals and utilizes resources is required. This section delves into crucial anti-sextortion techniques such as reporting events, preserving evidence, raising awareness, and implementing digital security measures.
- Report the Incident: Sextortion victims should immediately notify law enforcement. Contact your local police or cybercrime department and supply them with all relevant information, including the specifics of the extortion attempt, communication logs, and any other evidence that can assist the investigation. Reporting the incident is critical for holding criminals accountable and averting further harm to others.
- Preserve Evidence: Preserving evidence is critical in creating a solid case against sextortionists. Save and document any types of contact connected to the extortion, including text messages, emails, and social media conversations. Take screenshots, record phone calls (if legal), and save any other digital material or papers that might be used as evidence. This evidence can be useful in investigations and judicial processes.
Digital security: Implementing comprehensive digital security measures can considerably reduce vulnerability to sextortion attacks. Some important measures include:
- Use unique, complicated passwords for all online accounts, and avoid reusing passwords across platforms. Consider utilising password managers to securely store and create strong passwords.
- Enable two-factor authentication (2FA) whenever possible, which adds an extra layer of protection by requiring a second verification step, such as a code delivered to your phone or email, in addition to the password.
- Regular software updates: Keep your operating system, antivirus software, and programmes up to date. Security patches are frequently included in software upgrades to defend against known vulnerabilities.
- Adjust your privacy settings on social networking platforms and other online accounts to limit the availability of personal information and restrict access to your content.
- Be cautious when clicking on links or downloading files from unfamiliar or suspect sources. When exchanging personal information online, only use trusted websites.
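The first measure above, unique and complicated passwords, can be put into practice with Python's standard `secrets` module, which (unlike `random`) is designed for security-sensitive randomness. A minimal sketch, comparable to what a password manager does when it generates a credential for you:

```python
import secrets
import string

def generate_password(length=16):
    """Build a password from cryptographically secure random choices
    over letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())      # a fresh 16-character password each run
print(generate_password(24))    # longer is stronger; length is the main lever
```

Because each character is drawn independently from a 90-plus-symbol alphabet, a 16-character password of this kind is infeasible to brute-force, and generating a distinct one per site removes the risk of credential reuse.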
Conclusion:
Combating sextortion demands a collaborative effort that combines proactive tactics and resources to confront this damaging practice. Individuals may actively fight back against sextortion by reporting incidences, preserving evidence, raising awareness, and implementing digital security measures. It is critical to empower victims, encourage their rehabilitation, and collaborate to build a safer online environment where sextortionists are held accountable and everyone can navigate the digital environment with confidence.

Introduction
The 2023-24 annual report of the Union Home Ministry states that WhatsApp is among the primary platforms being targeted for cyber fraud in India, followed by Telegram and Instagram. Cybercriminals have been conducting frauds like lending and investment scams, digital arrests, romance scams, job scams, online phishing etc., through these platforms, creating trauma for victims and overburdening law enforcement, which is not always the best equipped to recover their money. WhatsApp’s scale, end-to-end encryption, and ease of mass messaging make it both a powerful medium of communication and a vulnerable target for bad actors. It has over 500 million users in India, which makes it a primary subject for scammers running illegal lending apps, phishing schemes, and identity fraud.
Action Taken by WhatsApp
As a response to this worrying trend and in keeping with Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, [updated as of 6.4.2023], WhatsApp has been banning millions of Indian accounts through automated tools, AI-based detection systems, and behaviour analysis, which can detect suspicious activity and misuse. In July 2021, it banned over 2 million accounts. By February 2025, this number had shot up to over 9.7 million, with 1.4 million accounts removed proactively, that is, before any user reported them. While this may mean that the number of attacks has increased, or WhatsApp’s detection systems have improved, or both, what it surely signals is the acknowledgement of a deeper, systemic challenge to India’s digital ecosystem and the growing scale and sophistication of cyber fraud, especially on encrypted platforms.
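WhatsApp's actual detection systems are proprietary, but the behaviour-analysis idea described above can be illustrated with a simple rule-based toy in Python. Everything here is an assumption for illustration: the thresholds, the event fields, and the two signals chosen (message rate per minute and count of distinct recipients) are not WhatsApp's real criteria:

```python
from collections import Counter, defaultdict

def flag_suspicious(events, per_minute_limit=30, recipient_limit=25):
    """Flag accounts whose messaging pattern looks like bulk spam.

    events: iterable of (account_id, recipient_id, minute) tuples.
    An account is flagged if it exceeds a per-minute message rate or
    messages an unusually large number of distinct recipients.
    Both thresholds are illustrative, not real platform values.
    """
    per_account = defaultdict(list)
    for account, recipient, minute in events:
        per_account[account].append((recipient, minute))

    flagged = set()
    for account, msgs in per_account.items():
        peak_rate = max(Counter(minute for _, minute in msgs).values())
        recipients = {recipient for recipient, _ in msgs}
        if peak_rate > per_minute_limit or len(recipients) > recipient_limit:
            flagged.add(account)
    return flagged

# A bulk sender blasting 40 strangers in one minute vs. a normal user.
events = [("bulk_sender", f"user{i}", 0) for i in range(40)]
events += [("normal_user", "friend", 0), ("normal_user", "friend", 5)]
print(flag_suspicious(events))  # {'bulk_sender'}
```

Real systems layer many more signals (report rates, account age, graph structure) and machine-learned models on top, but the core principle is the same: proactive flagging from metadata patterns alone, which is why such detection can operate without breaking end-to-end encryption of message content.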
CyberPeace Insights
- Under Rule 4(1)(d) of the IT Rules, 2021, significant social media intermediaries (SSMIs) are required to deploy automated tools to detect harmful content, but enforcement across the industry has been uneven. The scale and transparency of WhatsApp's action demonstrate what effective compliance with proactive moderation can look like.
- Platforms must treat fraud not just as a content violation but as a systemic abuse of the platform’s infrastructure.
- India is not alone in facing this challenge. The EU’s Digital Services Act (DSA), for instance, mandates large platforms to conduct regular risk assessments, maintain algorithmic transparency, and allow independent audits of their safety mechanisms. These steps go beyond just removing bad content by addressing the design of the platform itself. India can draw from this by codifying a baseline standard for fraud detection, requiring platforms to publish detailed transparency reports, and clarifying the legal expectations around proactive monitoring. Importantly, regulators must ensure this is done without compromising encryption or user privacy.
- WhatsApp’s efforts are part of a broader, emerging ecosystem of threat detection. The Indian Cyber Crime Coordination Centre (I4C) is now sharing threat intelligence with platforms like Google and Meta to help take down scam domains, malicious apps, and sponsored Facebook ads promoting illegal digital lending. This model of public-private intelligence collaboration should be institutionalized and scaled across sectors.
Conclusion: Turning Enforcement into Policy
WhatsApp’s mass account ban is not just about enforcement but an example of how platforms must evolve. As India becomes increasingly digital, it needs a forward-looking policy framework that supports proactive monitoring, ethical AI use, cross-platform coordination, and user safety. The digital safety of users in India and those around the world must be built into the architecture of the internet.
References
- https://scontent.xx.fbcdn.net/v/t39.8562-6/486805827_1197340372070566_282096906288453586_n.pdf?_nc_cat=104&ccb=1-7&_nc_sid=b8d81d&_nc_ohc=BRGwyxF87MgQ7kNvwHyyW8u&_nc_oc=AdnNG2wXIN5F-Pefw_FTt2T4K6POllUyKpO7nxwzCWxNgQEkVLllHmh81AHT2742dH8&_nc_zt=14&_nc_ht=scontent.xx&_nc_gid=iaQzNQ8nBZzxuIS4rXLOkQ&oh=00_AfEnbac47YDXvymJ5vTVB-gXteibjpbTjY5uhP_sMN9ouw&oe=67F95BF0
- https://scontent.xx.fbcdn.net/v/t39.8562-6/217535270_342765227288666_5007519467044742276_n.pdf?_nc_cat=110&ccb=1-7&_nc_sid=b8d81d&_nc_ohc=aj6og9xy5WQQ7kNvwG9Vzkd&_nc_oc=AdnDtVbrQuo4lm3isKg5O4cw5PHkp1MoMGATVpuAdOUUz-xyJQgWztGV1PBovGACQ9c&_nc_zt=14&_nc_ht=scontent.xx&_nc_gid=gabMfhEICh_gJFiN7vwzcA&oh=00_AfE7lXd9JJlEZCpD4pxW4OOc03BYcp1e3KqHKN9-kaPGMQ&oe=67FD6FD3
- https://www.hindustantimes.com/india-news/whatsapp-is-most-used-platform-for-cyber-crimes-home-ministry-report-101735719475701.html
- https://www.indiatoday.in/technology/news/story/whatsapp-bans-over-97-lakhs-indian-accounts-to-protect-users-from-scam-2702781-2025-04-02