#FactCheck - Old Japanese Earthquake Footage Falsely Linked to Tibet
Executive Summary:
A viral post on X (formerly Twitter) gained significant attention by creating a false narrative of recent damage caused by the earthquake in Tibet. Our findings confirm that the clip was not filmed in Tibet; it comes from a past earthquake in Japan. This report traces the origin of the claim and presents the analysis and verified findings that clarify the misinformation surrounding the video.

Claim:
The viral video shows collapsed infrastructure and significant destruction, with captions claiming it is evidence of a recent earthquake in Tibet. Similar claims can be found here and here.

Fact Check:
The widely circulated clip, initially claimed to depict the aftermath of the recent earthquake in Tibet, has been rigorously analyzed and proven to be misattributed. A reverse image search based on keyframes of the video revealed that the footage originated from an earthquake in Japan. According to an article published by a Japanese news website, the incident occurred in February 2024. The video was authenticated by news agencies, as it accurately depicted the scenes of destruction reported during that event.

Moreover, the same video had already been uploaded to a YouTube channel, which proves the footage is not recent. The architecture, the signboards written in Japanese script, and the vehicles appearing in the video further confirm that the footage belongs to Japan, not Tibet. The video shows a past event in Japan, proving it was shared out of context to spread false information.

The video was uploaded on February 2nd, 2024.
Snap from viral video

Snap from YouTube video

Conclusion:
The viral video attributed to the recent earthquake in Tibet is therefore misattributed: it is old footage from a previous earthquake in Japan. This underscores the need to verify information before sharing it, so that accurate information spreads and false claims are contained.
- Claim: A viral video claims to show recent earthquake destruction in Tibet.
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading
Introduction
On September 27, 2024, the Indian government took a significant step toward enhancing national security by amending business allocation rules through an extraordinary gazette notification. This amendment, which assigns specific roles to different Union Ministries and Departments regarding telecom network security, cybersecurity, and cybercrime, aims to clarify and streamline efforts in these critical areas. With India's evolving cybersecurity landscape, the need for a structured regulatory framework is pressing, as threats grow in complexity. Recent developments, such as the July 2024 global cyber outage and increasing cyber crimes like SMS scams, highlight the urgency of such reforms. Under Article 77 clause (3), the President amended the Government of India (Allocation of Business) Rules, 1961, to designate clearer responsibilities, reinforcing India's readiness to tackle emerging digital threats.
Key Highlights of the Gazette Notification
- Telecom Networks Security: A new entry "1A. Matters relating to the security of telecom networks" has been added under the Department of Telecommunications, highlighting an increased focus on securing the nation's telecom infrastructure.
- Cyber Security Responsibilities: A new entry "5B. Matters relating to Cyber Security" has been added under the Ministry of Electronics and Information Technology (MeitY). This assigns MeitY responsibility for cybersecurity issues under the Information Technology Act, 2000, giving the ministry the mandate to support other ministries and departments on cybersecurity matters.
- Oversight for Cyber Crime: Under the Ministry of Home Affairs, Department of Internal Security, a new entry "36A Matters relating to Cyber Crime" is introduced. This emphasises that the MHA will handle cybercrime issues, highlighting the government's attention toward enhancing internal security against cyber threats.
- Cyber Security Strategic Coordination: Any matter related to the "overall coordination and strategic direction for Cyber Security," has been given to the National Security Council Secretariat (NSCS). This consolidates the role of the NSCS in guiding cybersecurity strategies at the national level.
Impact on Policy and Governance
The amendments introduced through the notification are poised to significantly enhance the Indian government's cybersecurity framework by clarifying the roles of various ministries. The clear separation of responsibilities (telecom network security to the Department of Telecommunications, cybercrime to the Ministry of Home Affairs, and overall cyber strategy to the National Security Council Secretariat) should enable better coordination between ministries. This clarity is expected to reduce bureaucratic delays, allowing quicker responses to cyber threats, cybercrimes, and telecom vulnerabilities. Such efficient handling is crucial in an evolving landscape of digital threats. These changes have been largely welcomed, as they promise improved regulatory oversight, faster policy implementation, and a step forward in bolstering India's cyber resilience.
Conclusion
The amendments to the Government of India (Allocation of Business) Rules, 1961, mark a critical step in strengthening India's cybersecurity framework. By setting out specific responsibilities for telecom network security, cybercrime, and overall cybersecurity strategy among key ministries, the government seeks to improve coordination and reduce bureaucratic delays. This policy shift is poised to enhance India's digital resilience, providing a foundation for rapid responses to emerging cyber threats. However, success hinges on effective implementation, resource allocation, and collaboration across ministries. Addressing concerns like potential jurisdictional overlap and ensuring the inclusion of bodies like NCIIPC will be pivotal to comprehensive cyber protection. As cyber crimes and threats grow more complex every day, the government's preparedness to handle them through regulatory oversight remains a high priority.
References
- https://egazette.gov.in/(S(4r5oclueuwrjypfvr5b4vtzg))/ViewPDF.aspx
- https://www.ptinews.com/story/national/govt-specifies-roles-on-matters-related-to-security-of-telecom-network-cyber-security-and-cyber-crime/1856627
- https://www.thehindubusinessline.com/economy/centre-to-further-streamline-mechanism-to-deal-with-cyber-security-cyber-crime/article68694330.ece
- https://telecom.economictimes.indiatimes.com/news/policy/govt-specifies-roles-on-matters-related-to-security-of-telecom-network-cyber-security-and-cyber-crime/113754501

Introduction
In January 2026, the Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness came into effect in South Korea, establishing one of the first national AI laws in the world. The bill, enacted by the National Assembly of Korea in December 2024 and implemented from January 22, 2026, aims to balance the rapid advancement of technology with clear safeguards against risk, along with transparency, accountability, and responsible AI use. It places South Korea, alongside the European Union, at the forefront of developing legal frameworks for artificial intelligence and signals the country's long-term ambition to become a global AI power.
What the AI Basic Act Covers
The AI Basic Act merges 19 separate AI bills into a single piece of legislation covering the full AI lifecycle: research and development, deployment, and utilisation. Its coverage is broad: it applies to any AI system that affects the Korean market or users inside the country, irrespective of where it was created. The law does not apply to national defence and security applications.
The law defines key concepts like artificial intelligence, generative AI, and high-impact AI and establishes the principles of ethical AI, safety, user rights, industry support, and national policy coordination. It also offers a legal foundation for the activities of the government to promote AI innovation without jeopardising the common good.
Fundamentally, the AI Basic Act is designed to establish a culture of trust between businesses and the government/citizens. It does not prohibit AI technologies and does not excessively limit innovation. Instead, it creates the framework of responsible development and economic growth.
Guardrails for Safety and Accountability
One of the defining features of the AI Basic Act is its risk-based approach. Rather than considering all AI systems as similar, it makes a distinction between ordinary and high-impact AI systems, the ones applied in sectors where the wrong or unsafe decision can have a major impact on the safety, rights, or critical infrastructure of the population. Some of them can be seen in healthcare, transportation, financial services, education, and public services.
High-impact AI operators must implement risk management plans, human oversight, and monitoring systems. In critical decision-making situations, human oversight must be available at all times: machines can assist, but they cannot override humans where safety or other fundamental rights are at stake.
The law enables the regulators to perform on-site checks, demand documentation, and conduct compliance investigations. Fines for breaches may go up to 30 million Korean won (approximately 21,000 US dollars). It has a one-year period of transition that is based on guidance but not enforcement, thus allowing companies time to implement compliance measures before imposing fines.
These requirements enhance accountability by defining who is responsible for safety outcomes. South Korea's law embeds this responsibility in the ecosystem itself, rather than relying on industry self-governance alone.
Transparency and Labelling Requirements
Transparency is a cornerstone of the AI Basic Act. The legislation requires that users be notified when an AI system is operating, particularly when it generates outputs that could be mistaken for human-created material. For example, AI-generated text, images, video, or audio that is difficult to distinguish from authentic content must carry clear labels or watermarks so users can understand its source.
The labelling requirement is meant to combat misinformation, deceptive practices, and undue influence on public perception. It responds to international concern over AI-generated content, such as deepfakes, manipulated media, and misleading online advertisements, issues South Korea has already addressed separately in policy alongside discussions of data governance.
Transparency also extends to the decision-making processes of AI systems. Developers and operators should be able to explain how high-impact systems reach their conclusions, so that those affected by automated decisions can obtain meaningful explanations. Although specific explainability criteria are still being developed, the law establishes the principle that AI cannot operate behind the scenes where crucial decisions are being made.
Data Privacy and User Protection
South Korea's AI governance complements its existing data protection law, the Personal Information Protection Act (PIPA), which is broadly regarded as comparable to major international data protection regimes such as the GDPR. The AI Basic Act clarifies how data may be gathered, processed, and used within AI systems in ways that respect privacy rights, particularly in high-impact areas.
The law does not supersede personal data protection rules, but it sets conditions on how AI developers must handle data used for training, testing, and operating AI systems. Operators will be required to document their data workflows and demonstrate how they protect user privacy, including through transparency and consent mechanisms where necessary. This helps ensure that data used in AI is governed by clear norms, making it harder to sidestep privacy requirements in the name of innovation.
Accountability and Governance Infrastructure
The AI Basic Act establishes a national policy framework for AI governance. At the top sits the National Artificial Intelligence Strategy Committee, chaired by the President, which proposes overall AI policy and aligns it with national objectives. Supporting it are specialised bodies for safety, risk assessment, and research, along with a policy centre that analyses AI's effects on society and assists its adoption by industry.
This institutional structure provides both strategic guidance and operational oversight. By embedding AI governance in public administration rather than leaving it to market forces, South Korea aims to make ethical and societal concerns part of every sector and agency.
Promoting Innovation and Industrial Support
Although the AI Basic Act imposes regulation, it is not merely a law of restrictions. It also provides a legal basis for research and development, human capital, and the growth of the AI industry, with special consideration for startups and small and medium-sized businesses. The legislation promotes AI clusters, long-term funding programmes, and policies to attract foreign talent to the Korean AI ecosystem.
This two-pronged approach of compliance and support reflects Korea's broader ambition to become one of the world's leading AI powers, alongside the US and China. The government has emphasised that clear, predictable rules will build trust, attract investment, and sustain innovation rather than stifle it.
What This Means Globally
South Korea's AI Basic Act is notable not only for its contents but also for its timing. It is among the first comprehensive AI laws to come into force anywhere in the world, ahead of the gradual regulatory rollouts in other parts of the globe, such as in the European Union. Its framework combines principle-based rules, transparency requirements, accountability regulations, and industrial support, offering a contrasting model to both purely prescriptive risk regulation and lax self-regulation elsewhere.
Critics, including industry groups and civil society organisations, have suggested that some protections could be more explicit, particularly remedies for those harmed by AI systems and the criteria defining high-impact categories. Nonetheless, the framework sets a benchmark that many nations will watch closely as they establish their own AI regimes.
Conclusion
The AI Basic Act puts South Korea at the forefront of national AI regulation, with well-developed guardrails that enforce transparency, ethical control, accountability, and data protection while also fostering innovation. It recognises that AI can bring economic and social benefits yet also real risks, particularly when systems are opaque, autonomous, or widely deployed. South Korea has taken a holistic approach to responsible AI governance, integrating human oversight, labelling requirements, risk management planning, and governance infrastructure into law, an approach other countries may emulate in the years to come.
Sources
- https://www.theguardian.com/world/2026/jan/29/south-korea-world-first-ai-regulation-laws
- https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/10/artificial-intelligence-and-the-labour-market-in-korea_af668423/68ab1a5a-en.pdf
- https://asianintelligence.ai/south-korea
- https://aibasicact.kr/
- https://aibusinessweekly.net/p/south-korea-ai-basic-act-takes-effect-jan22-2026
- https://asiadaily.org/news/12112/

Introduction
The Digital Personal Data Protection (DPDP) Act 2023 of India is a significant transition for privacy legislation in this age of digital data. A key element of this new law is a requirement for organisations to have appropriate, user-friendly consent mechanisms in place for their customers so that collection, use or removal of an individual's personal data occurs in a clear and compliant manner. As a means of putting this requirement into practice, the Ministry of Electronics and Information Technology (MeitY) issued a comprehensive Business Requirements Document (BRD) in June 2025 to guide organizations, as well as Consent Managers, on how to create a Consent Management System (CMS). This document establishes the technical and functional framework by which organizations and individuals (Data Principals) will exercise control over the way their data is gathered, used and removed.
Understanding the BRD and Its Purpose
The BRD is a voluntary guide created as part of MeitY's "Code for Consent" programme in India. Its purpose is to guide startups, digital platforms, and other enterprises in building technology systems that manage user consent in line with the requirements of the DPDP Act. Although the BRD carries no legal weight, it lays out a clear path for organisations to build their own consent mechanisms using best practices aligned with the DPDP Act's principles of transparency, accountability, and purpose limitation.
The goal is threefold:
- Enable complete consent lifecycle management from collection to withdrawal.
- Empower individuals to manage their consents actively and transparently.
- Support data fiduciaries and processors with an interoperable system that ensures compliance.
Key Components of the Consent Management System
The BRD proposes the development of a modular Consent Management System (CMS) that provides users with secure APIs and user-friendly interfaces. This system will allow for a variety of features and modules, including:
- Consent Lifecycle Management – consent should be specific, informed and tied to an explicit purpose. The CMS will manage the collection, validation, renewal, updates and withdrawal of consent. Each transaction of consent will create a tamper-proof “consent artifact,” which will include the timestamp of creation as well as an ID identifying the purpose for which it was given.
- User Dashboard – A user will be able to view and modify the status of their active, expired or withdrawn consent and revoke access at any time via the multilingual user-friendly interface. This would make the system accessible to people from different regions and cultures.
- Notification Engine – The CMS will automatically notify users, fiduciaries and processors of any action taken with respect to consent, in order to ensure real-time updates and accountability.
- Grievance Redress Mechanism – The CMS will include a complaints mechanism that allows users to submit complaints related to the misuse of consent or the denial of their rights. This will enable tracking of the complaint resolution status, and will allow for escalation if necessary.
- Audit and Logging – As part of the CMS's internal controls for compliance and regulatory purposes, the CMS must maintain an immutable record of every instance of consent for auditing and regulatory review. The records must be encrypted, time-stamped, and linked permanently to a user and purpose ID.
- Cookie Consent Management – A separate module will enable users to manage cookie consent for websites separately from any other consents.
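To make the "consent artifact" idea concrete, here is a minimal sketch of how such a record might be represented. The field names (`user_id`, `purpose_id`, etc.) are hypothetical illustrations, not identifiers defined by the BRD; the BRD only requires that each artifact be tamper-evident, time-stamped, and tied to a purpose.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentArtifact:
    """Hypothetical record of one consent transaction (names are illustrative)."""
    user_id: str
    purpose_id: str
    status: str  # e.g. "granted", "withdrawn", "expired"
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def digest(self) -> str:
        """Content hash of the artifact; any later edit changes the digest,
        which is one simple way to make the record tamper-evident."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


# Example: record a grant and capture its digest for the audit trail.
artifact = ConsentArtifact("user-123", "purpose-marketing-01", "granted")
original_digest = artifact.digest()
```

Storing the digest alongside the artifact (or chaining digests, as in a ledger) lets an auditor detect after-the-fact modification, which is the property the BRD's "tamper-proof" language points at.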
Roles and Responsibilities
The BRD identifies the various stakeholders involved and their associated responsibilities.
- Data Principals (Users): The user has full authority to give, withhold, amend, or revoke their consent for the use of their personal data, at any time.
- Data Fiduciaries (Companies): Companies (the fiduciaries) must collect the data principals' consents for each particular reason and must only begin processing a data subject's personal data after validating that consent through the CMS. Companies must also provide the data principals with any information or notifications needed, as well as how to resolve their complaints.
- Data Processors: Data Processors must strictly adhere to the consent stated in the CMS, and Data Processors may only process personal data on behalf of the Data Fiduciary.
- Consent Managers: The Consent Managers are independent entities that are registered with the Data Protection Board. They are responsible for administering the CMS, allowing users to manage their consent across different platforms.
This layered structure ensures transparency and shared responsibility for the consent ecosystem.
Technical Specifications and Security
The BRD specifies the following technical requirements to keep the CMS compliant with the DPDP Act.
- End-to-End Encryption: All data exchanges with users must be encrypted in transit using at least TLS 1.3.
- API-First Approach: Secure APIs will be used by external systems to validate, withdraw, and update consent.
- Interoperability/Accessibility: The CMS must support multiple languages (e.g., Hindi, Tamil) and work across different mobile devices and for users of varying abilities.
- Data Retention Policy: The CMS should automatically delete consent data when consent has expired or been withdrawn, in order to comply with data retention limits.
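The API-first requirement above implies that a fiduciary's systems check consent through a service call before any processing begins. The sketch below, with an in-memory store and made-up function names (the BRD does not define a concrete API), shows the shape of such a validate/withdraw flow under those assumptions.

```python
# Hypothetical in-memory consent store; a real CMS would sit behind secure APIs.
consent_store = {
    ("user-123", "purpose-marketing-01"): {
        "status": "granted",
        "expires": "2026-12-31",  # ISO dates compare correctly as strings
    },
}


def validate_consent(user_id: str, purpose_id: str, today: str) -> bool:
    """Return True only if an active, unexpired consent exists for this purpose.
    Processing must not start unless this check passes."""
    record = consent_store.get((user_id, purpose_id))
    if record is None or record["status"] != "granted":
        return False
    return today <= record["expires"]


def withdraw_consent(user_id: str, purpose_id: str) -> None:
    """Mark a consent as withdrawn; later validations for it must fail."""
    record = consent_store.get((user_id, purpose_id))
    if record is not None:
        record["status"] = "withdrawn"
```

Keeping validation purpose-specific (the key includes `purpose_id`) mirrors the Act's purpose-limitation principle: consent for one purpose cannot authorise processing for another.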
Legal Relevance and Timelines
While the BRD itself is not enforceable, it is directly aligned with the upcoming enforcement of the DPDP Act, 2023. The Act was passed in August 2023 but is expected to come into effect in stages, once officially notified by the central government. Draft implementation rules, including those defining the role of Consent Managers, were released for public consultation in early 2025.
For businesses, the BRD serves as an early compliance tool—offering both a conceptual roadmap and technical framework to prepare before the law is enforced. Legal experts have described it as a critical resource for aligning data governance systems with emerging regulatory expectations.
Implications for Businesses
Organizations that collect and process user data will be required to overhaul their consent workflows:
- No blanket consents: Every data processing activity must have explicit, separate consent.
- Granular audit logs: Companies must maintain tamper-proof logs for every consent action.
- Integration readiness: Enterprises need to integrate their platforms with third-party or in-house CMS platforms via the specified APIs.
- Grievance redress and user support: Systems must be in place to handle complaints and withdrawal requests in a timely, verifiable manner.
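One common way to realise the tamper-proof logs mentioned above is a hash chain, where each entry commits to the previous entry's hash so that editing any past record breaks every subsequent link. This is an illustrative technique, not a mechanism mandated by the BRD:

```python
import hashlib

GENESIS = "0" * 64  # placeholder hash for the first entry


def append_entry(log: list, action: str) -> None:
    """Append a consent action, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    entry_hash = hashlib.sha256((prev_hash + action).encode()).hexdigest()
    log.append({"action": action, "prev": prev_hash, "hash": entry_hash})


def verify_chain(log: list) -> bool:
    """Recompute every link; an edited or reordered entry breaks the chain."""
    prev_hash = GENESIS
    for entry in log:
        expected = hashlib.sha256((prev_hash + entry["action"]).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

An auditor who holds only the latest hash can detect any retroactive change to the log, which is the property regulators would look for in a consent audit trail.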
Failing to comply once the DPDP Act is in force may expose companies to penalties, reputational damage, and potential regulatory action.
Conclusion
India's BRD on Consent Management is a forward-looking initiative that lays out the technological framework for an essential component of the DPDP Act: user consent. Although not legally binding, it gives companies the discipline they need to prepare. As data protection grows in importance, building consent mechanisms around security, transparency, and user needs is no longer just a regulatory requirement but a prerequisite for earning trust. Now is the time for businesses to build or adopt CMS solutions that support this objective, so they are better equipped for the future of data governance in India.
References
- https://d38ibwa0xdgwxx.cloudfront.net/whatsnew-docs/8d5409f5-d26c-4697-b10e-5f6fb2d583ef.pdf
- https://ssrana.in/articles/ministry-releases-business-requirement-document-for-consent-management-under-the-dpdp-act-2023/
- https://dpo-india.com/Blogs/consent-dpdpa/
- https://corporate.cyrilamarchandblogs.com/2025/06/the-ghost-in-the-machine-the-recent-business-requirement-document-on-consent/
- https://www.mondaq.com/india/privacy-protection/1660964/analysis-of-the-business-requirement-document-for-consent-management-system