#FactCheck - Scripted Video of Pre-Wedding Roka at Metro Station Misleads Users
Executive Summary
A video showing a woman performing a pre-wedding ritual called “Roka” for a couple at a metro station is going viral on social media. Many users are sharing the clip believing it to be a real incident. CyberPeace's research found the viral claim to be false: the video is scripted.
Claim:
An Instagram user posted the video on February 7, 2026, with the caption, “A mother performed her son’s Roka with his girlfriend at a metro station.”

Fact Check:
To verify the claim, we conducted a reverse image search using Google Lens on screenshots from the viral video. We found the same video was first uploaded on February 5, 2026, by an Instagram account named “chalte_phirte098.” The profile belongs to digital content creator Aarav Mavi, who regularly posts relationship and breakup-related videos.

Although the viral clip does not include any disclaimer stating that it is scripted, an older video posted by the creator on December 16, 2025, clarifies that his content is based on real-life stories shared by people but is filmed using professional actors. Several similar staged videos are also available on his profile on Instagram.

Conclusion:
Our research clearly shows that the viral video claiming to show a pre-wedding Roka ceremony at a metro station is not real. It was created by a content creator for entertainment purposes. Therefore, the claim circulating on social media is misleading.

The rapid innovation of technology and its resultant proliferation in India has integrated businesses that market technology-based products with commerce. Consumer habits have now shifted from traditional to technology-based products, with many consumers opting for smart devices, online transactions and online services. This migration has increased potential data breaches, product defects, misleading advertisements and unfair trade practices.
The need to regulate the technology-based commercial industry is seen against the backdrop of the various threats that technologies pose, particularly to data. Many devices track consumer behaviour without the consumer's authorisation. Additionally, products are often defective or complex to use, and the configuration process may prove lengthy, with only a vague warranty.
It is noted that consumers also face difficulties in the technology service sector, even while attempting to purchase a product. These include vendor lock-ins (whereby a consumer finds it difficult to migrate from one vendor to another), dark patterns (deceptive strategies and design practices that mislead users and violate consumer rights), ethical concerns etc.
Against this backdrop, consumer laws are now playing catch-up to adequately cater to the new consumer rights that come with technology. Consumer laws have to evolve to become complementary with other laws and legislation that govern and safeguard individual rights. This includes emphasising compliance with data privacy regulations, creating rules for ancillary activities such as advertising standards, and setting guidelines for both products and product sellers/manufacturers.
The Legal Framework in India
Currently, consumer laws in India, while not tech-targeted, are somewhat adequate. The Consumer Protection Act 2019 (“Act”) protects the rights of consumers in India. It places liability on manufacturers, sellers and service providers for any harm caused to a consumer by faulty/defective products. As a result, manufacturers and sellers of internet- and technology-based products are brought under the ambit of this Act. The Consumer Protection Act 2019 may also be viewed in light of the Digital Personal Data Protection Act 2023 (DPDP Act), which mandates the security of an individual's digital personal data. Provisions of the DPDP Act, such as those pertaining to mandatory consent, purpose limitation, data minimisation, mandatory security measures by organisations, data localisation, accountability and compliance, can be applied to information generated by and for consumers.
Multiple regulatory authorities and departments have also tasked themselves with issuing guidelines that imbibe the principle of caveat venditor. To this effect, the Networks & Technologies (NT) wing of the Department of Telecommunications (DoT), on 2 March 2023, issued the Advisory Guidelines to M2M/IoT stakeholders for securing consumer IoT (“Guidelines”), aiming for M2M/IoT (i.e. Machine-to-Machine/Internet of Things) compliance with safety and security standards in order to protect users and the networks that connect these devices. The comprehensive Guidelines suggest the removal of universal default passwords and usernames, such as “admin”, that come preprogrammed with new devices, and mandate that the password reset process be done only after user authentication. Web services associated with the product are required to use Multi-Factor Authentication, and a duty is cast on them not to expose any unnecessary user information prior to authentication. Further, M2M/IoT stakeholders are required to provide a public point of contact for reporting vulnerabilities and security issues. Such stakeholders must also ensure that software components can be updated in a secure and timely manner. An end-of-life policy is to be published for end-point devices, stating the assured duration for which a device will receive software updates.
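Some of the Guidelines' credential requirements can be expressed directly in device-provisioning logic. The sketch below is purely illustrative; the function names, the list of blocked defaults and the overall flow are assumptions made for the example, not text from the Guidelines:

```python
# Illustrative sketch of two Guidelines requirements: no universal default
# credentials, and password (re)set only after user authentication.
# DEFAULT_CREDENTIALS and provision_device are hypothetical names.

DEFAULT_CREDENTIALS = {"admin", "root", "user", "guest", "12345"}

def is_default_credential(username: str, password: str) -> bool:
    """Flag the kind of universal defaults the Guidelines ask vendors to remove."""
    return (username.lower() in DEFAULT_CREDENTIALS
            or password.lower() in DEFAULT_CREDENTIALS)

def provision_device(username: str, password: str, user_authenticated: bool) -> bool:
    """Accept a credential set only for an authenticated user and non-default values."""
    if not user_authenticated:
        return False  # password reset must follow user authentication
    if is_default_credential(username, password):
        return False  # preprogrammed defaults like "admin" are rejected
    return True
```

A vendor's real implementation would sit behind the device's authenticated management interface; this sketch only captures the decision logic.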
The involvement of regulatory authorities depends on the nature of the technology product; a single product or technical consumer threat may attract multiple guidelines. The Advertising Standards Council of India (ASCI) noted that cryptocurrency and related products were among the most violative categories for fraud. In an attempt to protect consumer safety, it introduced guidelines to regulate the advertising and promotion of virtual digital asset (VDA) exchange and trading platforms and associated services as a necessary interim measure in February 2022. These mandate that all VDA ads carry, in a prominent and unmissable manner, the stipulated disclaimer: “Crypto products and NFTs are unregulated and can be highly risky. There may be no regulatory recourse for any loss from such transactions.”
Further, authorities such as the Securities and Exchange Board of India (SEBI) and the Reserve Bank of India (RBI) also issue cautionary notes to consumers and investors against crypto trading and ancillary activities. Even bodies like the Bureau of Indian Standards (BIS) act as a complementing authority, since they mandate compliance with prescribed standards for product quality, including that of electronic products.
It is worth noting that ASCI has proactively responded to new-age technology-induced threats to consumers by attempting to tackle “dark patterns” through its existing Code on Misleading Ads (“Code”), since it is applicable across media to include online advertising on websites and social media handles. It was noted by ASCI that 29% of advertisements were disguised ads by influencers, which is a form of dark pattern. Although the existing Code addressed some issues, a need was felt to encompass other dark patterns.
Perhaps in response, the Central Consumer Protection Authority in November 2023 released guidelines addressing “dark patterns” under the Consumer Protection Act 2019 (“Guidelines”). The Guidelines define dark patterns as deceptive strategies and design practices that mislead users and violate consumer rights. These may include creating false urgency, scarcity or popularity of a product, basket sneaking (whereby additional services are added automatically on purchase of a product or service), confirm shaming (statements such as “I will stay unsecured” when opting out of travel insurance while booking transportation tickets), etc. The Guidelines also cater to several data privacy considerations; for example, they bar platforms from nudging consumers into divulging more personal information while making purchases through difficult language and complex privacy-policy settings, thereby ensuring that technology product sellers and e-commerce platforms/vendors comply with data privacy laws in India. It is to be noted that the Guidelines apply to all platforms that systematically offer goods and services in India, as well as to advertisers and sellers.
Conclusion
Consumer laws for technology-based products in India play a pivotal role in safeguarding the rights and interests of individuals in an era marked by rapid technological advancements. These legislative frameworks, spanning facets such as data protection, electronic transactions and product liability, establish a regulatory equilibrium that addresses the nuanced challenges of the digital age. As the digital landscape evolves, regulatory frameworks must adapt to ensure that consumers remain protected from the risks associated with emerging technologies. Striking a balance between innovation and consumer safety requires ongoing collaboration between policymakers, businesses and consumers. By staying attuned to the evolving needs of the digital age, Indian consumer laws can provide a robust foundation for secure and equitable relationships between consumers and technology-based products.
References:
- https://dot.gov.in/circulars/advisory-guidelines-m2miot-stakeholders-securing-consumer-iot
- https://www.mondaq.com/india/advertising-marketing--branding/1169236/asci-releases-guidelines-to-govern-ads-for-cryptocurrency
- https://www.ascionline.in/the-asci-code/#:~:text=Chapter%20I%20(4)%20of%20the,nor%20deceived%20by%20means%20of
- https://www.ascionline.in/wp-content/uploads/2022/11/dark-patterns.pdf

Executive Summary:
Traditional Business Email Compromise (BEC) attacks have become smarter, using advanced technologies to enhance their capability. One such technology on the rise is WormGPT, a generative AI tool being leveraged by cybercriminals for BEC. This report discusses WormGPT and its features, as well as the risks associated with its use in criminal activities. The purpose is to give a general overview of how WormGPT is involved in BEC attacks and to offer advice on how to prevent them.
Introduction
BEC (Business Email Compromise), in simple terms, is a kind of cybercrime whereby attackers target a business in an effort to defraud it through the use of emails. Earlier, BEC attacks were executed through simple email scams and phishing. Recently, however, with the advancement of AI tools like WormGPT, such malicious activities have become more sophisticated and harder to identify. This paper discusses WormGPT, a generative AI tool, and how it is used in BEC attacks to make them more effective.
What is WormGPT?
Definition and Overview
WormGPT is a generative AI model designed to create human-like text. It is built on advanced machine learning algorithms, specifically leveraging large language models (LLMs). These models are trained on vast amounts of text data to generate coherent and contextually relevant content. WormGPT is notable for its ability to produce highly convincing and personalised email content, making it a potent tool in the hands of cybercriminals.
How WormGPT Works
1. Training Data: WormGPT is trained on large and varied datasets, including emails, articles, and other written material. This extensive training enables it to understand and mimic different writing styles and produce natural-sounding text.
2. Generative Capabilities: Once trained, WormGPT can generate text in response to specific prompts. For example, if a cybercriminal supplies a prompt concerning a company’s financial information, WormGPT can produce a genuine-looking email asking for more details.
3. Customization: WormGPT can be retrained at any time with a particular industry or organisation in mind. This customization enables attackers to make their emails resemble the business activities of the target, enhancing the chances that an attack will succeed.
Enhanced Phishing Techniques
Traditional phishing emails are often identifiable by their generic and unconvincing content. WormGPT improves upon this by generating highly personalised and contextually accurate emails. This personalization makes it harder for recipients to identify malicious intent.
Automation of Email Crafting
Previously, creating convincing phishing emails required significant manual effort. WormGPT automates this process, allowing attackers to generate large volumes of realistic emails quickly. This automation increases the scale and frequency of BEC attacks.
Exploitation of Contextual Information
WormGPT can be fed with contextual information about the target, such as recent company news or employee details. This capability enables the generation of emails that appear highly relevant and urgent, further deceiving recipients into taking harmful actions.
Implications for Cybersecurity
Challenges in Detection
The use of WormGPT complicates the detection of BEC attacks. Traditional email security solutions may struggle to identify malicious emails generated by advanced AI, as they can closely mimic legitimate correspondence. This necessitates the development of more sophisticated detection mechanisms.
Need for Enhanced Training
Organisations must invest in training their employees to recognize signs of BEC attacks. Awareness programs should emphasise the importance of verifying email requests for sensitive information, especially when such requests come from unfamiliar or unexpected sources.
Implementation of Robust Security Measures
- Multi-Factor Authentication (MFA): MFA can add an additional layer of security, making it harder for attackers to gain unauthorised access even if they successfully deceive an employee.
- Email Filtering Solutions: Advanced email filtering solutions that use AI and machine learning to detect anomalies and suspicious patterns can help identify and block malicious emails.
- Regular Security Audits: Conducting regular security audits can help identify vulnerabilities and ensure that security measures are up to date.
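To make the email-filtering bullet above concrete, the following is a minimal rule-based sketch of anomaly scoring. The keyword list, weights and quarantine threshold are assumptions chosen for illustration; a real AI-driven filter would learn such signals from data rather than hard-code them:

```python
# Illustrative rule-based scorer for BEC-style emails: unknown sender domains
# and urgency/payment language raise the score. All values are assumptions.

SUSPICIOUS_PHRASES = ("wire transfer", "urgent", "confidential", "gift cards")

def bec_risk_score(sender_domain: str, trusted_domains: set, body: str) -> int:
    """Score an email: +2 for an unknown sender domain, +1 per suspicious phrase."""
    score = 2 if sender_domain not in trusted_domains else 0
    lowered = body.lower()
    score += sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in lowered)
    return score

def should_quarantine(score: int, threshold: int = 3) -> bool:
    """Hold the message for review once the score crosses the threshold."""
    return score >= threshold
```

Note how a lookalike domain ("examp1e.com" instead of "example.com") immediately raises the score, which is exactly the kind of signal AI-generated, well-written phishing emails cannot hide.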
Case Studies
Case Study 1: Financial Institution
A financial institution fell victim to a BEC attack orchestrated using WormGPT. The attacker used the tool to craft a convincing email that appeared to come from the institution’s CEO, requesting a large wire transfer. The email’s convincing nature led to the transfer of funds before the scam was discovered.
Case Study 2: Manufacturing Company
In another instance, a manufacturing company was targeted by a BEC attack using WormGPT. The attacker generated emails that appeared to come from a key supplier, requesting sensitive business information. The attack exploited the company’s lack of awareness about BEC threats, resulting in a significant data breach.
Recommendations for Mitigation
- Strengthen Email Security Protocols: Implement advanced email security solutions that incorporate AI-driven threat detection.
- Promote Cyber Hygiene: Educate employees on recognizing phishing attempts and practising safe email habits.
- Invest in AI for Defense: Explore the use of AI and machine learning in developing defences against generative AI-driven attacks.
- Implement Verification Procedures: Establish procedures for verifying the authenticity of sensitive requests, especially those received via email.
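The verification-procedure recommendation above can be sketched as a simple policy check: sensitive actions requested by email are never approved until they are confirmed over a second channel. The action names and data fields below are hypothetical examples, not a prescribed standard:

```python
# Sketch of an out-of-band verification policy for sensitive email requests.
# Field names and the SENSITIVE_ACTIONS set are illustrative assumptions.

from dataclasses import dataclass

SENSITIVE_ACTIONS = {"wire_transfer", "credential_change", "data_export"}

@dataclass
class EmailRequest:
    sender: str
    action: str
    verified_out_of_band: bool = False  # e.g. confirmed by phone on a known number

def approve(request: EmailRequest) -> bool:
    """Approve routine requests directly; sensitive ones require a second channel."""
    if request.action not in SENSITIVE_ACTIONS:
        return True
    return request.verified_out_of_band
```

The point of the design is that even a perfectly written AI-generated email cannot pass the check on its own, because approval depends on a channel the attacker does not control.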
Conclusion
WormGPT is a new tool in the arsenal of cybercriminals that has improved their ability to perform Business Email Compromise attacks more efficiently and effectively. It is therefore critical to provide the defence community with information about WormGPT's potential and its implications for the threat landscape, so that protection systems can be strengthened against advanced and constantly evolving threats.
This means developing rigorous security protocols, raising general awareness of security solutions, and incorporating technologies such as artificial intelligence to mitigate, to the greatest extent possible, the risks arising from generative AI tools.

Introduction
The constantly changing technological world has brought about an age of unprecedented problems, and the misuse of deepfake technology has become a cause for concern, one that the Indian judiciary has also addressed. The Supreme Court has expressed concerns about the consequences of this rapidly developing technology, citing a variety of issues ranging from security hazards and privacy violations to the spread of disinformation. The misuse of deepfake technology is particularly dangerous because deepfakes are almost identical to the real thing and may fool even the sharpest eye.
SC Judge Expresses Concerns: A Complex Issue
During a recent speech, Supreme Court Justice Hima Kohli emphasized the various issues that deepfakes present. She conveyed grave concerns about the possibility of invasions of privacy, the dissemination of false information, and the emergence of security threats. The ability of deepfakes to be created so convincingly that they seem to come from reliable sources is especially concerning as it increases the potential harm that may be done by misleading information.
Gender-Based Harassment Enhanced
In this internet era, there is a concerning chance that gender-based harassment will become more severe, as Justice Kohli noted. She pointed out that internet platforms may develop into epicentres for the quick spread of false information by anonymous offenders who act freely and with worrying impunity. The invisibility of virtual harassment may make it difficult to lessen the negative effects of toxic online postings. In response, she advocated developing a comprehensive policy framework that modifies current legal frameworks, such as laws prohibiting online sexual harassment, to adequately handle the issues brought on by technological breakthroughs.
Judicial Stance on Regulating Deepfake Content
In a different move, the Delhi High Court voiced concerns about the misuse of deepfakes and exercised judicial intervention to limit the use of artificial intelligence (AI)-generated deepfake content. The intricacy of the matter was highlighted by a division bench, which proposed that the government, with its wider outlook, could be better qualified to handle the situation and arrive at a fair resolution. This position reflects the court's acknowledgement of the technology's global and borderless character and highlights the necessity of an all-encompassing strategy.
PIL on Deepfake
In light of these worries, an advocate from Delhi has taken it upon himself to address the unchecked use of AI, with particular emphasis on deepfake material. His Public Interest Litigation (PIL), filed before the Delhi High Court, emphasises the necessity of strict limits on AI, or an outright prohibition in the event that regulatory measures are not taken. The need to discern between real and fake information is at the centre of this case. The advocate suggests using distinguishable indicators, such as watermarks, to identify AI-generated work, reiterating the demand for openness and responsibility in the digital sphere.
The Way Ahead:
Finding a Balance
- The authorities must strike a careful balance between protecting privacy, promoting innovation, and safeguarding individual rights as they negotiate the complex world of deepfakes. The Delhi High Court's cautious stance and Justice Kohli's concerns highlight the necessity for a nuanced response that takes into account the complexity of deepfake technology.
- Because of the increased complexity with which the information may be manipulated in this digital era, the court plays a critical role in preserving the integrity of the truth and shielding people from the possible dangers of misleading technology. The legal actions will surely influence how the Indian judiciary and legislature respond to deepfakes and establish guidelines for the regulation of AI in the nation. The legal environment needs to change as technology does in order to allow innovation and accountability to live together.
Collaborative Frameworks:
- Misuse of deepfake technology poses an international problem that cuts beyond national boundaries. International collaborative frameworks might make it easier to share technical innovations, legal insights, and best practices. A coordinated response to this digital threat may be ensured by starting a worldwide conversation on deepfake regulation.
Legislative Flexibility:
- Given the speed at which technology is advancing, the legislative system must continue to adapt. It will be required to introduce new legislation expressly addressing developing technology and to regularly evaluate and update current laws. This guarantees that the judicial system can adapt to the changing difficulties brought forth by the misuse of deepfakes.
AI Development Ethics:
- Promoting moral behaviour in AI development is crucial. Tech businesses should abide by moral or ethical standards that place a premium on user privacy, responsibility, and openness. As a preventive strategy, ethical AI practices can lessen the possibility that AI technology will be misused for malevolent purposes.
Government-Industry Cooperation:
- It is essential that the public and commercial sectors work closely together. Governments and IT corporations should collaborate to develop and implement legislation. A thorough and equitable approach to the regulation of deepfakes may be ensured by establishing regulatory organizations with representation from both sectors.
Conclusion
A comprehensive strategy integrating technical, legal, and social interventions is necessary to navigate the path ahead. Governments, IT corporations, the courts, and the general public must all actively participate in the collective effort to combat the misuse of deepfakes, which goes beyond legal measures alone. By encouraging a shared commitment to tackling the issues raised by deepfakes, we can create a future where the digital ecosystem is both safe and inventive. The government is also on its way to enacting dedicated legislation to tackle the issue of deepfakes, following its recently issued advisory on misinformation and deepfakes.