#FactCheck - Scripted Video of Pre-Wedding Roka at Metro Station Misleads Users
Executive Summary
A video is going viral on social media showing a woman performing a pre-wedding ritual called “Roka” for a couple at a metro station. Many users are sharing the clip believing it to be a real incident. CyberPeace found in its research that the viral claim is false. The video is actually scripted.
Claim:
An Instagram user posted the video on February 7, 2026, with the caption, “A mother performed her son’s Roka with his girlfriend at a metro station.”

Fact Check:
To verify the claim, we conducted a reverse image search using Google Lens on screenshots from the viral video. We found the same video was first uploaded on February 5, 2026, by an Instagram account named “chalte_phirte098.” The profile belongs to digital content creator Aarav Mavi, who regularly posts relationship and breakup-related videos.

Although the viral clip does not include any disclaimer stating that it is scripted, an older video posted by the creator on December 16, 2025, clarifies that his content is based on real-life stories shared by people but is filmed using professional actors. Several similar staged videos are also available on his profile on Instagram.

Conclusion:
Our research clearly shows that the viral video claiming to show a pre-wedding Roka ceremony at a metro station is not real. It was created by a content creator for entertainment purposes. Therefore, the claim circulating on social media is misleading.

Introduction
The Indian government has developed the National Cybersecurity Reference Framework (NCRF) to provide an implementable framework for cybersecurity, based on existing legislation, policies, and guidelines. The National Critical Information Infrastructure Protection Centre is responsible for the framework. The government is expected to recommend that enterprises, particularly those in critical sectors like banking, telecom, and energy, use only security products and services developed in India. The NCRF aims to strengthen cybersecurity and encourage the use of made-in-India products to safeguard cyber infrastructure. The Centre is expected to emphasise the significant progress made in developing indigenous cybersecurity products and solutions.
National Cybersecurity Reference Framework (NCRF)
The Indian government has developed the National Cybersecurity Reference Framework (NCRF), a guideline that sets the standard for cybersecurity in India. The framework focuses on critical sectors and provides guidelines to help organisations develop strong cybersecurity systems. It can serve as a template for critical sector entities to develop their own governance and management systems. The government has identified telecom, power, transportation, finance, strategic entities, government entities, and health as critical sectors.
The NCRF is non-binding in nature; its recommendations serve as guidance rather than mandates. It recommends that enterprises allocate at least 10% of their total IT budget to cybersecurity, with monitoring by top-level management or the board of directors. The framework may also suggest that national nodal agencies evolve platforms and processes for machine-processing data from different sources to ensure proper audits, and rate auditors based on performance.
Regulators overseeing critical sectors may receive greater powers to set rules for information security and define information security requirements to ensure proper audits. They also need an effective Information Security Management System (ISMS) in place to assess sensitive data and operational deficiencies in the critical sector. The policy is based on a Common but Differentiated Responsibility (CBDR) approach, recognising that different organisations have varying levels of cybersecurity needs and responsibilities.
India faces a barrage of cybersecurity-related incidents, such as the high-profile attack on AIIMS Delhi in 2022. Many ministries feel hamstrung by the lack of an overarching framework on cybersecurity when formulating sector-specific legislation. In recent years, threat actors backed by nation-states and organised cyber-criminal groups have attempted to target the critical information infrastructure (CII) of the government and enterprises. The current guiding framework on cybersecurity for critical infrastructure in India is the National Cybersecurity Policy of 2013. Between 2013 and 2023, the threat landscape evolved significantly, and the emergence of new threats necessitates new strategies.
Significance in the realm of Critical Infrastructure
India faces numerous cybersecurity incidents, in part due to the lack of a comprehensive framework. Critical Information Infrastructure sectors like banking, energy, healthcare, telecommunications, transportation, strategic enterprises, and government enterprises are the most targeted by threat actors, including nation-states and cybercriminals. By their very nature, these sectors hold sensitive data, which makes them prime targets for cyber threats and attacks. Cyber-attacks can compromise patient privacy, disrupt services, compromise control systems, pose safety risks, and disrupt critical services. Hence, the NCRF is of paramount importance: it can address these emerging issues by providing sector-specific guidelines.
The Indian government is considering promoting the use of made-in-India products to enhance Cyber Infrastructure
India is preparing to recommend the use of domestically developed cybersecurity products and services, particularly for critical sectors like banking, telecom, and energy. The initiative aims to enhance national security in response to escalating cybersecurity threats.
Conclusion
Promoting locally made cybersecurity products and services in important industries shows India's commitment to strengthening national security. The National Cybersecurity Reference Framework (NCRF), which outlines duties, responsibilities, and recommendations for organisations and regulators, is a critical step towards a comprehensive cybersecurity policy framework, which is the need of the hour. By underscoring made-in-India solutions and the allocation of cybersecurity resources, the government underlines its determination to protect the country's cyber infrastructure in light of increasing cyber threats and attacks. The NCRF is expected to help draft sector-specific guidelines on cybersecurity.
References
- https://indianexpress.com/article/business/market/overhaul-of-cybersecurity-framework-to-safeguard-cyber-infra-govt-may-push-use-of-made-in-india-products-9133687/
- https://vajiramandravi.com/upsc-daily-current-affairs/mains-articles/national-cybersecurity-reference-framework-ncrf/
- https://m.toppersnotes.com/current-affairs/blog/to-push-cyber-infra-govt-may-push-use-of-made-in-india-products-DxQP
- https://appkida.in/overhaul-of-cybersecurity-framework-in-2024/

Introduction
Earlier this month, lawmakers in Colorado, a U.S. state, were summoned to a special legislative session to rewrite their newly passed Artificial Intelligence (AI) law before it even takes effect. Although the discussion taking place in Denver may seem distant, evolving regulations like this one directly address issues that India will soon encounter as we forge our own course for AI governance.
The Colorado Artificial Intelligence Act
Colorado became the first U.S. state to pass a comprehensive AI accountability law, set to come into force in 2026. It aims to protect people from bias, discrimination, and harm caused by predictive algorithms, since AI tools have been known to reproduce societal biases by sidelining women from hiring processes, penalising loan applicants from poor neighbourhoods, or through welfare systems that wrongly deny citizens their benefits. But the law met resistance from tech companies that threatened to pull out of the state, claiming it is too broad in scope in its current form and would stifle innovation. This brings critical questions about AI regulation to the forefront:
- Who should be responsible when AI causes harm? Developers, deployers, or both?
- How should citizens seek justice?
- How can tech companies be incentivised to develop safe technologies?
Colorado’s governor has called a special session to update the law before it kicks in.
What This Means for India
India is on the path towards framing a dedicated AI-specific law or directions, and discussions are underway through the IndiaAI Mission, the proposed Digital India Act, the committee set up by the Delhi High Court on deepfakes, and other measures. But the dilemmas Colorado is wrestling with are also relevant here.
- AI uptake is growing in public service delivery in India. Facial recognition systems are expanding in policing, despite accuracy and privacy concerns. Fintech apps using AI-driven credit scoring raise questions of fairness and transparency.
- Accountability is unclear. If an Indian AI-powered health app gives faulty advice, who should be liable: the global developer, the Indian startup deploying it, or the regulator who failed to set safeguards?
- India has more than 1,500 AI startups (NASSCOM), which, like Colorado’s firms, fear that onerous compliance could choke growth. But weak guardrails could undermine public trust in AI altogether.
Lessons for India
India’s Ministry of Electronics and IT (MeitY) favours a light-touch approach to AI regulation and is exploring ways to develop future-proof guidelines. Lessons from other global frameworks can further guide its way.
- Colorado’s case shows us the necessity of incorporating feedback loops in the policy-making process. India should utilise regulatory sandboxes and open, transparent consultation processes before locking in rigid rules.
- It will also need to explore proportionate obligations, lighter for low-risk applications and stricter for high-risk use cases such as policing, healthcare, or welfare delivery.
- Europe’s AI Act is heavy on compliance, the U.S. federal government leans toward deregulation, and Colorado is somewhere in between. India has the chance to create a middle path, grounded in our democratic and developmental context.
Conclusion
As AI becomes increasingly embedded in hiring, banking, education, and welfare, opportunities for ordinary Indians are being redefined. To shape how this pans out, states like Tamil Nadu and Telangana have taken early steps to frame AI policies. Lessons will emerge from their initiative in addressing AI governance. Policy and regulation will always be contested, but contestations are a part of the process.
The Colorado debate shows us how participative law-making, with room for debate, revision, and iteration, is not a weakness but a necessity. For India’s emerging AI governance landscape, the challenge will be to embrace this process while ensuring that citizen rights and inclusion are balanced well with industry concerns. CyberPeace advocates for responsible AI regulation that balances innovation and accountability.
References
- https://www.cbsnews.com/colorado/news/colorado-lawmakers-look-repeal-replace-controversial-artificial-intelligence-law/
- https://www.naag.org/attorney-general-journal/a-deep-dive-into-colorados-artificial-intelligence-act/
- https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en
- https://the-captable.com/2024/12/india-ai-regulation-light-touch/
- https://indiaai.gov.in/article/tamilnadu-s-ai-policy-six-step-tamdef-guidance-framework-and-deepmax-scorecard

Introduction
In today’s digital world, where everything revolves around data, the more data an organisation owns, the more control and influence it has over the market, which is why companies are looking for ways to use data to improve their business. But at the same time, they have to make sure they are protecting people’s privacy, and striking a balance between the two is tricky. Imagine you are trying to bake a cake where you need to use all the ingredients to make it taste great, but you also have to make sure no one can tell what’s in it. That’s roughly what companies are dealing with when it comes to data. Here, ‘pseudonymisation’ emerges as a critical technical and legal mechanism that offers a middle ground between data anonymisation and unrestricted data processing.
Legal Framework and Regulatory Landscape
Pseudonymisation, as defined by the General Data Protection Regulation (GDPR) in Article 4(5), refers to “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person”. This technique represents a paradigm shift in data protection strategy, enabling organisations to preserve data utility while significantly reducing privacy risks. The growing importance of this balance is evident in the proliferation of data protection laws worldwide, from GDPR in Europe to India’s Digital Personal Data Protection Act (DPDP) of 2023.
Its legal treatment varies across jurisdictions, but a convergent approach is emerging that recognises its value as a data protection safeguard while maintaining that pseudonymised data remains personal data. Article 25(1) of the GDPR recognises it as “an appropriate technical and organisational measure” and emphasises its role in reducing risks to data subjects. It protects personal data by reducing the risk of identifying individuals during data processing. The European Data Protection Board’s (EDPB) 2025 Guidelines on Pseudonymisation provide detailed guidance emphasising the importance of defining the “pseudonymisation domain”: who is prevented from attributing data to specific individuals, and which technical and organisational measures must be in place to block unauthorised linkage of pseudonymised data to the original data subjects. In India, while the DPDP Act does not explicitly define pseudonymisation, legal scholars argue that such data would still fall under the definition of personal data, as it remains potentially identifiable. The Act, in Section 2(t), defines personal data broadly as “any data about an individual who is identifiable by or in relation to such data,” suggesting that pseudonymised information, being reversible, would continue to require compliance with data protection obligations.
Further, the DPDP Act, 2023 also includes the principles of data minimisation and purpose limitation. Section 8(4) says that a “Data Fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of this Act and the Rules made under it.” Pseudonymisation fits here because it is a recognised technical safeguard, meaning companies can use it as one method in their compliance toolkit under Section 8(4) of the DPDP Act. However, its use should be assessed on a case-by-case basis, since encryption is also considered one of the strongest methods for protecting personal data. The suitability of pseudonymisation depends on the nature of the processing activity, the type of data involved, and the level of risk that needs to be mitigated. In practice, organisations may use pseudonymisation in combination with other safeguards to strengthen overall compliance and security.
The European Court of Justice’s recent jurisprudence has introduced nuanced considerations about when pseudonymised data might not constitute personal data for certain entities. In cases where only the original controller possesses the means to re-identify individuals, third parties processing such data may not be subject to the full scope of data protection obligations, provided they cannot reasonably identify the data subjects. The “means reasonably likely” assessment represents a significant development in understanding the boundaries of data protection law.
Corporate Implementation Strategies
Companies find that pseudonymisation is not just about following rules, but it also brings real benefits. By using this technique, businesses can keep their data more secure and reduce the damage in the event of a breach. Customers feel more confident knowing that their information is protected, which builds trust. Additionally, companies can utilise this data for their research or other important purposes without compromising user privacy.
Key Benefits of Pseudonymisation:
- Enhanced Privacy Protection: It replaces personal details such as names or IDs with artificial values or codes, reducing the risk of accidental privacy breaches.
- Preserved Data Utility: Unlike completely anonymous data, pseudonymised data keeps its usefulness by maintaining important patterns and relationships within datasets.
- Facilitated Data Sharing: It is easier to share pseudonymised data with partners or researchers because it protects privacy while still being useful.
However, implementing pseudonymisation is not easy: companies have to deal with tricky technical issues like choosing the right methods, such as encryption or tokenisation, and managing security keys safely. They have to implement strong policies to stop anyone from figuring out who the data belongs to. This can get expensive and complicated, especially when dealing with large amounts of data, and it often requires expert help and regular upkeep.
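To make the technique concrete, one common tokenisation approach is keyed hashing: a secret key, held apart from the dataset, deterministically maps each identifier to a token, so records for the same person stay linkable while re-identification requires the key. The sketch below is a minimal illustration under assumed names and sample data (the key, `pseudonymise` function, and records are all hypothetical); a real deployment would keep the key in a dedicated key management system and add access controls around it.

```python
import hmac
import hashlib

# Hypothetical secret key. In practice it would live in a key management
# service, stored separately from the pseudonymised dataset -- that
# separation is what keeps re-identification controlled.
SECRET_KEY = b"example-key-kept-apart-from-the-data"

def pseudonymise(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Replace a direct identifier with a keyed, deterministic token.

    The same input always yields the same token, so per-person patterns
    in the data survive (preserving utility), but without the key the
    token cannot be attributed back to the individual.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# Sample records containing a direct identifier (the name).
records = [
    {"name": "Asha Rao", "city": "Delhi", "spend": 1200},
    {"name": "Asha Rao", "city": "Delhi", "spend": 800},
    {"name": "Vikram Singh", "city": "Mumbai", "spend": 450},
]

# The shareable dataset keeps the analytic fields but swaps the name
# for a token, so partners can analyse it without seeing identities.
shared = [
    {"user_token": pseudonymise(r["name"]), "city": r["city"], "spend": r["spend"]}
    for r in records
]

# Both of the first person's records carry the same token, so
# per-user analysis (e.g. total spend per customer) still works.
assert shared[0]["user_token"] == shared[1]["user_token"]
assert shared[0]["user_token"] != shared[2]["user_token"]
```

Note the trade-off the sketch illustrates: because the mapping is deterministic, the data remains linkable and hence reversible by anyone holding the key, which is exactly why pseudonymised data is still treated as personal data under the GDPR and the DPDP Act.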
Balancing Privacy Rights and Data Utility
The primary challenge in pseudonymisation is striking the right balance between protecting individuals' privacy and maintaining the utility of the data. To get this right, companies need to consider several factors, such as why they are using the data, a potential attacker's level of skill, and the type of data being used.
Conclusion
Pseudonymisation offers a practical middle ground between full anonymisation and unrestricted data use, enabling organisations to harness the value of data while protecting individual privacy. Legally, it is recognised as a safeguard but still treated as personal data, requiring compliance under frameworks like the GDPR and India’s DPDP Act. For companies, it is not only a matter of regulatory adherence; it also builds trust and enhances data security. However, its effectiveness depends on robust technical methods, governance, and vigilance. Striking the right balance between privacy and data utility is crucial for sustainable, ethical, and innovation-driven data practices.
References:
- https://gdpr-info.eu/art-4-gdpr/
- https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://gdpr-info.eu/art-25-gdpr/
- https://www.edpb.europa.eu/system/files/2025-01/edpb_guidelines_202501_pseudonymisation_en.pdf
- https://curia.europa.eu/juris/document/document.jsf?text=&docid=303863&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=16466915