Survivors Unveil the Dark Reality of Cyber Slavery
Introduction
Cyber slavery has emerged as a serious menace. Offenders target innocent individuals, luring them with false promises of employment, only to capture them and subject them to horrific torture and forced labour. According to reports, hundreds of Indians have been imprisoned in 'cyber slavery' in certain Southeast Asian countries. Indians who travelled to Southeast Asian nations such as Cambodia hoping to find work and establish themselves have instead fallen into the trap of cyber slavery. Reports indicate that 30,000 Indians who travelled to the region on tourist visas between 2022 and 2024 did not return. India Today's coverage showed how survivors of cyber slavery who escaped and returned to India have described the terrifying experiences they endured while being coerced into cybercrime operations.
Tricked by a Job Offer, Trapped in Cyber Slavery
India Today aired testimonials of cyber slavery victims who described how they were trapped. One individual shared that he had applied, through an agent in Delhi, for a well-paying job as an electrician in Cambodia. Upon arriving in Cambodia, however, he was handed over to a Chinese company and forced to participate in cyber scam operations and online fraud.
He revealed that he was given a computer and a mobile phone and compelled to use them to cheat Indian individuals and commit cyber fraud, working 12-hour shifts. After several months, he repeatedly asked his agent to help him escape. In response, the Chinese group violently loaded him into a truck, assaulted him, and left him for dead on the side of the road. He survived, contacted locals, eventually got in touch with his brother in India, and managed to return home.
This case highlights how cyber-criminal groups deceive innocent individuals with the false promise of employment and then coerce them into committing cyber fraud against their own country. According to the Ministry of Home Affairs' Indian Cyber Crime Coordination Centre (I4C), there has been a significant rise in cybercrimes targeting Indians, with approximately 45% of these cases originating from Southeast Asia.
CyberPeace Recommendations
Cyber slavery has developed into a serious problem: it begins with digital deception and escalates to physical torture and violence used to force victims into committing online fraud. It is also a grave violation of human rights. The government has taken note of the situation, and the Indian Cyber Crime Coordination Centre (I4C) is taking proactive steps to address it. It is important for netizens to exercise due care and caution, as awareness is the first line of defence. By remaining vigilant, they can recognise the digital deceit of phony job offers in foreign countries and the manipulative techniques of scammers. By staying watchful and double-checking information with reliable sources, netizens can protect themselves from threats that could endanger their lives.
References
- CyberPeace Highlights Cyber Slavery: A Serious Concern https://www.cyberpeace.org/resources/blogs/cyber-slavery-a-serious-concern
- India Today, Operation Cyber Slaves: Stories of Golden Triangle Network of Fake Job Offers https://www.indiatoday.in/india/story/india-today-operation-cyber-slaves-stories-of-golden-triangle-network-of-fake-job-offers-2642498-2024-11-29
- India Today, Cyber Slavery Survivors Narrate Harrowing Accounts of Torture https://www.indiatoday.in/india/video/cyber-slavery-survivors-narrate-harrowing-accounts-of-torture-2642540-2024-11-29

Executive Summary:
A viral social media claim suggested that India Post would discontinue all red post boxes across the country from 1 September 2025, attributing the move to the government’s Digital India initiative. However, fact-checking revealed this claim to be false. India Post’s official X (formerly Twitter) and Instagram handles clarified on 7 August 2025 that red letterboxes remain operational, calling them timeless symbols of connection and memories. No official notice or notification regarding their discontinuation exists on the Department of Posts’ website. This indicates the viral posts were misleading and aimed at creating confusion among the public.
Claim:
A claim is circulating on social media stating that India Post will discontinue all red post boxes across the country effective 1 September 2025. According to the viral posts [archived link], the move is being linked to the government's push towards Digital India, suggesting that traditional post boxes have lost their relevance in the digital era.

Fact Check:
After conducting a reverse image analysis, we found that the official X handle of India Post, in a post dated 7 August 2025, clarified that the viral claim was incorrect and misleading. The post was shared with the caption:
"I’m still right here and always will be!"
India Post is evolving with the times, but some things will remain the same- always. We have carried love, news, and stories for generations... And guess what? Our red letterboxes are here to stay.
They are symbols of connection, memories, and moments that mattered. Then. Now. Always.
Keep sending handwritten letters- we are here for you.
This directly refutes the viral claim about the discontinuation of the red post box from 1 September 2025. A similar clarification was also posted on the official Instagram handle @indiapost_dop on the same date.


Furthermore, a thorough review of the official website of the Department of Posts, Government of India, found no notice or mention of any plan to discontinue the iconic red post boxes. This complete absence of official communication reinforces that the viral claim is nothing more than a baseless and misleading rumour.

Conclusion:
The claim about the discontinuation of red post boxes from 1 September 2025 is false and misleading. India Post has officially confirmed that the iconic red letterboxes will continue to function as before and remain an integral part of India’s postal services.
- Claim: A viral claim suggests that India Post will remove all red letter boxes across the country beginning 1 September 2025.
- Claimed On: Social Media
- Fact Check: False and Misleading

The recent Promotion and Regulation of Online Gaming Act, 2025, which came into force in August, has been one of the most widely anticipated regulations in the digital entertainment industry. Among provisions promoting esports and licensing online gaming, the legislation notably introduces a blanket ban on real-money gaming (RMG). The rationale was to reduce its addictive effects, protect minors, and limit the circulation of black money. In practice, however, the Act has spawned apprehension about the legislative process, regulatory redundancy, and unintended consequences that may shift users and revenue to offshore operators.
From Debate to Prohibition: How the Act was Passed
The Promotion and Regulation of Online Gaming Act was passed as a central law, giving the earlier fragmented state laws on online betting and gambling an overarching framework. Proponents argue that a unified national framework was needed to deal with the scale of online betting and its detrimental impact on young users. But the Act marks a direct leap to criminalisation, after a decade of incremental experiments that swung between self-regulation and partial restrictions. Industry stakeholders believe that such sudden, blanket action creates uncertainty and erodes long-term confidence in the system. Critics have further pointed out that the Bill was passed without adequate parliamentary deliberation, raising questions about whether procedural safeguards were upheld.
Prohibition of Online RMG
Within the Indian context, a distinction has long been drawn between games of skill and games of chance: the latter, such as lotteries and casinos, are strictly prohibited under state laws, whereas the former, such as rummy and fantasy sports, have generally been allowed after courts recognised them as skill-based. The Online Gaming Act, 2025 abolishes this distinction online, banning all real-money games involving cash transactions, regardless of skill or chance. The Act also criminalises the advertising, facilitation, and hosting of such sites, bringing offshore operators targeting Indian customers, along with their payment gateways, app stores, and advertisers, within the reach of its penalties.
The Problem of Overlap
One potential issue the Act presents is its overlap with existing laws. The IT Rules, 2023 already require gaming intermediaries to appoint compliance officers, submit monthly reports, and carry out due diligence. The new Act adds a three-level classification of games, while advisories of the Central Consumer Protection Authority (CCPA) under the Consumer Protection Act treat online betting as an unfair trade practice.
This multiplicity of regulations creates a maze in which different ministries and state governments hold overlapping jurisdiction. Policy experts caution that such overlap can create enforcement challenges, punish players who act within the law, and leave offshore malefactors undetected.
Unintended Consequences: Driving Users Offshore
Outright prohibition rarely removes demand; it merely displaces it. Offshore sites have seized the opportunity as Indian operators like Dream11 shut down their money games after the ban. Aggressive advertising by foreign betting companies not registered in India has already been reported, and most of these firms run backend infrastructure beyond the Act's reach (Storyboard18).
This diversion of users to unregulated markets carries two main risks. First, Indian players lose the consumer protection offered by local regulation, and their data can flow to opaque foreign entities. Second, the government loses sight of money flows that can move through informal channels, cryptocurrencies, or other obscure systems. Industry analysts warn that such developments may worsen the black-money problem instead of solving it (IGamingBusiness).
Advertising, Age Gating, and Digital Rights
The Act has also strengthened advertising regulations, aligning with advisories issued by the Advertising Standards Council of India that prohibit targeting minors. Critics note, however, that enforcement remains inadequate and children can access unregulated overseas applications with relative ease. Without complementary digital literacy programmes and strong parental controls, these restrictions risk being superficial rather than effective.
Privacy advocates also warn that frequent prompts, vague messages, or invasive surveillance can weaken users' digital rights instead of strengthening them. In global contexts, overregulation has been found to create banner blindness, where users ignore warnings without ever understanding them.
Enforcement Challenges
The Act places substantial responsibilities on many stakeholders, including the Ministry of Information and Broadcasting (MIB) and the Reserve Bank of India (RBI). Platforms like Google Play and the Apple App Store are expected to check government-approved lists of compliant gaming apps and remove non-compliant or banned ones, as directed by the MIB and the RBI. While this pressure may push intermediaries to cooperate, it also risks overreach if applied unevenly or politically.
According to experts, the solution should be underpinned by technology itself. Artificial intelligence can be used to identify illegal advertisements, detect underage gaming, and trace payment streams. At the same time, regulators should publish definitive lists of compliant and non-compliant applications to guide consumers and intermediaries alike. Without such practical provisions, enforcement risks remaining patchy.
Online Gaming Rules
On 1 October 2025, the government issued draft Online Gaming Rules under the Promotion and Regulation of Online Gaming Act. The Rules focus on creating compliance frameworks, define the classification of permitted gaming activities, and prescribe grievance-redressal mechanisms aimed at player protection and procedural transparency. However, the draft does not revisit or soften the blanket prohibition on real-money gaming (RMG), so questions about enforcement effectiveness and regulatory clarity remain open (Times of India, 2025).
Protecting Consumers Without Stifling Innovation
The ban highlights a larger conflict: protecting vulnerable users without stifling an industry that has contributed innovation, jobs, and tax revenue. Online gaming has added significantly to GST collections, and the sudden shake-up raises fiscal concerns (Reuters).
Several legal challenges to the Act have already been filed, questioning its constitutionality, particularly whether the restrictions are proportionate to the right to trade. The outcome of these cases will shape the future trajectory of India's digital economy (Reuters).
Way Forward
Instead of outright prohibition, a more balanced approach that incorporates regulation and consumer protection is suggested by the experts. Key measures could include:
- A clear distinction between games of skill and games of chance, with proportionate regulation.
- Age verification and digital literacy campaigns to protect underage users.
- Stronger advertising and payments compliance requirements, with enforceable penalties for non-compliance.
- Coordinated oversight among ministries to prevent duplication and regulatory conflict.
- Leveraging AI and fintech to track illegal financial flows (black money) while enabling innovation.
Conclusion
The Online Gaming Act, 2025 addresses social issues, such as addiction, monetary risk, and child safety, that require governance interventions. However, the path it takes to this end, total prohibition, is more likely to spawn a new set of problems than to solve existing ones: it will push consumers to offshore sites, undermine consumer rights, and slow innovation.
For India, the real challenge is not whether to prohibit online money gaming but how to create a balanced, transparent, and enforceable framework that protects users while fostering a responsible gaming ecosystem. With better coordination, judicious use of technology, and balanced protections, India can reduce the adverse consequences of online betting without pushing the industry into the shadows.
References:
- India's Dream11, top gaming apps halt money-based games after ban
- India online gambling ban could drive punters to black market
- Offshore betting firms with backend ops in India not covered by online gaming law
- The Great Gamble: India’s Online Gaming Ban, The GST Battle, And What Lies Ahead
- Game Over for Online Money Games? An Analysis of the Online Gaming Act 2025
- Government gambles heavily on prohibiting online money gaming
- Online gaming regulation: New rules to take effect from October 1; government stresses consultative approach with industry

Introduction
In September 2025, social media feeds were flooded with striking vintage-style saree portraits. These images were not taken by professional photographers; they were AI-generated. More than a million people turned to Google Gemini's "Nano Banana" AI tool, uploading ordinary selfies and watching them transform into cinematic, 1990s Bollywood-style posters. The trend's popularity is evident, as are the concerns of law enforcement agencies and cybersecurity experts about privacy infringement, unauthorised data sharing, and deepfake misuse.
What is the Trend?
The AI saree trend is created with Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend among its most popular uses. Uploaded photographs are processed by an AI system that uses machine learning to alter them according to the prompt. Users then share the transformed portraits on Instagram, WhatsApp, and other social media platforms, fuelling the trend's viral spread.
Law Enforcement Agency Warnings
- A few Indian police agencies have issued strong advisories against participating in such trends. IPS officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to scams and fraud.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put users at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police also put out warnings on social media about how photo applications appear entertaining but can pose serious risks to user privacy. They specifically warned that uploaded selfies can lead to data misuse, deepfake creation, and the generation of fake profiles, offences punishable under Sections 66C and 66D of the IT Act, 2000.
Consequences of Such Trends
The mass adoption of AI photo trends has several severe effects on individual users and society as a whole. Identity theft and fraud are the main concerns, as uploaded biometric information can be used by hackers to generate fake identities, evade security measures, or commit financial fraud. Facial recognition data shared through these trends remains a digital asset that can be abused years after the trend has passed. Deepfake production is another serious threat, because personal images shared on AI platforms can be used to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. Uploaded images can be leveraged to produce embarrassing or harmful content that damages personal reputation, relationships, and career prospects.
Financial exploitation also occurs when fake applications posing as genuine AI tools strip users of their personal data and financial details. Such malicious platforms often imitate well-known services to trick users into divulging sensitive information. Long-term privacy infringement follows from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral; now this new trend has taken over. Such trends may expose users to layered privacy threats that go far beyond the instant gratification of pleasing images. Biometric data harvesting is the most critical issue, since facial recognition information posted on these sites becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images may be stored temporarily for processing and kept longer if used for feedback or feature development.
Illegal data sharing happens when AI platforms provide user-uploaded content to third parties without consent. A 2023 Mozilla Foundation study found that 80% of popular AI apps had non-transparent data policies or obscured users' ability to opt out of data collection, opening the door for personal photographs to be shared with unknown entities for commercial use. Training-data exploitation involves the use of uploaded personal photos to improve AI models without notifying or compensating users; although Google lets users turn off data sharing in its privacy settings, most users are unaware of these options.

Cross-platform data integration heightens the threat when AI applications draw on linked social media profiles, building detailed user profiles that can be exploited for targeted manipulation or fraud. Inadequate informed consent remains a major problem: users join trends without understanding the full context of what they are sharing. Studies show that 68% of individuals are concerned about the misuse of AI app data, yet 42% use these apps without reading the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:
1. Immediate Protection Measures for Users
In a nutshell, protecting personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those built on facial recognition. Instead, a person can experiment with stock images or non-identifiable pictures that exercise the tool's creative features without compromising biometric security. Strong privacy settings should be configured on every social media platform and AI app to limit access to personal data and content.
2. Organisational Safeguards
AI governance frameworks within organisations should set out policies on employees' use of AI tools, particularly regarding the upload of personal data. Companies should carry out due diligence before adopting a commercially available AI product, ensuring its privacy and security standards meet the organisation's requirements. Training should educate employees about deepfake technology and its risks.
3. Technical Protection Strategies
Deepfake detection software should be used. Tools such as Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI offer real-time detection with reported accuracy above 95%. Blockchain-based content verification can create tamper-proof records of original digital assets, making it very difficult to pass off deepfake content as original.
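The tamper-evidence idea behind such provenance records can be sketched in a few lines: hash the original asset once, store the digest in an append-only ledger, and later compare the current file against it. This is a minimal illustration only; the `register_asset`/`verify_asset` helpers and the in-memory ledger are hypothetical, and real systems (blockchain anchoring, C2PA-style manifests) add cryptographic signatures and distributed, immutable storage.

```python
import hashlib
import time


def register_asset(path: str, ledger: dict) -> str:
    """Record the SHA-256 digest of an original file in an append-only ledger."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    ledger[digest] = {"path": path, "registered_at": time.time()}
    return digest


def verify_asset(path: str, ledger: dict) -> bool:
    """Return True only if the file's current bytes match a registered original.

    Any tampering with the content changes the digest and fails verification.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in ledger
```

Because SHA-256 is collision-resistant, even a one-byte edit to an image produces a completely different digest, so doctored content cannot be passed off as the registered original. Note that exact-byte hashing also flags benign re-encoding as tampering; systems that must tolerate format changes use perceptual hashes instead.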
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banking and identity verification systems, authentication should include voice and face liveness checks to ensure the person is real and not presenting fake or manipulated media. Digital literacy programmes should be implemented to equip users with knowledge about AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting suspected AI-enabled crimes and assisting in the fight against malicious uses of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulatory systems should require transparent data policies in AI applications and give users rights and choices over biometric and other personal data. Indigenous AI development addressing India-centric privacy concerns should be promoted, ensuring AI models are built in a secure, transparent, and accountable manner. On cross-border AI security, international cooperation is needed to set common standards for the ethical design, production, and use of AI.

Viral AI phenomena such as the saree editing trend illustrate both the potential and the hazards of today's artificial intelligence. While such tools offer new creative opportunities, they pose grave privacy and security concerns that users, organisations, and policymakers must take seriously. By putting comprehensive protection mechanisms in place and keeping a watchful eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/