Launch of Central Suspect Registry to Combat Cyber Crimes
Introduction
The Indian government has introduced initiatives to enhance data sharing between law enforcement and stakeholders to combat cybercrime. Union Home Minister Amit Shah launched the Central Suspect Registry, Cyber Fraud Mitigation Center, Samanvay Platform and Cyber Commandos programme at the Indian Cyber Crime Coordination Centre (I4C) Foundation Day celebration, held on 10th September 2024 at Vigyan Bhawan, New Delhi. The ‘Central Suspect Registry’ will serve as a central-level database with consolidated data on cybercrime suspects nationwide. The Indian Cyber Crime Coordination Centre will share a list of all repeat offenders on its servers. Shri Shah added that establishing the Suspect Registry at the central level and connecting the states with it will help in the prevention of cybercrime.
Key Highlights of Central Suspect Registry
The Indian Cyber Crime Coordination Centre (I4C) has established the suspect registry in collaboration with banks and financial intermediaries to enhance fraud risk management in the financial ecosystem. The registry will serve as a central-level database with consolidated data on cybercrime suspects. Using data from the National Cybercrime Reporting Portal (NCRP), the registry makes it possible to identify cybercriminals as potential threats.
Central Suspect Registry: Need of the Hour
The Union Home Minister of India, Shri Shah, has emphasized the need for a national Cyber Suspect Registry to combat cybercrime. He argued that having separate registries for each state would not be effective, as cybercriminals have no boundaries. He emphasized the importance of connecting states to this platform, stating it would significantly help prevent future cyber crimes.
CyberPeace Outlook
There has been an alarming uptick in cybercrimes in the country, highlighting the need for proactive approaches to counter emerging threats. The recently launched initiatives under the umbrella of the Indian Cyber Crime Coordination Centre are significant steps taken by the centre to improve coordination between law enforcement agencies, strengthen user awareness, and build the technical capability to target cyber criminals, with the overall aim of combating the growing rate of cybercrime in the country.
Introduction
Bumble’s launch of its ‘Opening Move’ feature has sparked a new narrative on safety and privacy within the digital dating sphere and has garnered mixed reactions from users. It was launched against the backdrop of women stating that Bumble’s ‘message first’ policy was proving tedious. Addressing this large-scale feedback, Bumble introduced ‘Opening Move’, whereby users can either craft or select from pre-set questions that potential matches may choose to answer to start the conversation at first glance. These questions are a segue into meaningful and insightful conversation from the get-go and bypass the traditional effort required to start engaging chats between matched users. The feature is optional; users may enable it, and it does not prevent a user from exercising the autonomy previously in place.
Innovative Approach to Conversation Starters
Many users consider this feature innovative; not only does it act as a catalyst for fluid conversation, it also cultivates insightful dialogue, fostering meaningful interactions free of the constraint of superficial small talk. The ‘Opening Move’ feature may also align with scientific research indicating that individuals form their initial attractions within three seconds of intimate interaction, thereby acting as a catalyst in an individual’s decision-making within that attraction time frame.
Organizational Benefits and Data Insights
From an organisational standpoint, the feature is a unique solution to the localisation challenges faced by apps; the option of writing a personalised ‘Opening Move’ implies setting prompts that are culturally relevant and appropriate to a specific area. Moreover, it is anticipated that Bumble may enhance and improve the user experience on the platform through data analysis. Data from responses to an ‘Opening Move’ may provide valuable insights into user preferences and patterns, for instance by analysing which pre-set prompts garner more responses than others and how often a user-written ‘Opening Move’ succeeds in obtaining a response compared with Bumble’s pre-set prompts. A quick glance at Bumble’s privacy policy[1] shows that stored and transferred chat data between users is not shared with third parties, further safeguarding personal privacy. However, Bumble does use chat data for its own internal purposes after removing personally identifiable information. The manner of such review and removal has not been specified, which may raise challenges depending on whether the reviewer is a human or an algorithm.
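To illustrate the kind of aggregate analysis described above, here is a minimal sketch in Python. It assumes anonymised interaction events of the form (prompt_id, was_answered); the schema and prompt names are invented for illustration and are not Bumble’s actual data model.

```python
from collections import defaultdict

# Hypothetical, anonymised interaction events: (prompt_id, was_answered).
# The schema is invented for illustration, not Bumble's actual data model.
events = [
    ("preset_mountains_or_beaches", True),
    ("preset_mountains_or_beaches", False),
    ("custom_user_written", True),
    ("preset_dream_dinner_guest", False),
    ("custom_user_written", False),
]

def response_rates(events):
    """Return the fraction of Opening Moves of each prompt type that drew a reply."""
    sent = defaultdict(int)
    answered = defaultdict(int)
    for prompt_id, was_answered in events:
        sent[prompt_id] += 1
        if was_answered:
            answered[prompt_id] += 1
    return {prompt: answered[prompt] / sent[prompt] for prompt in sent}

print(response_rates(events))
# {'preset_mountains_or_beaches': 0.5, 'custom_user_written': 0.5, 'preset_dream_dinner_guest': 0.0}
```

Comparing such rates across pre-set and user-written prompts is one plausible way a platform could judge which Opening Moves actually lead to conversations.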
However, some users perceive the feature as counterproductive to the company’s principle of ‘women make the first move’. While Bumble aims to market the feature as a neutral ground for matched users based on the exercise of choice, users see it as a step back into the heteronormative gender expectations that most dating apps conform to, putting the onus of the ‘first move’ on men. Many male users have complained that the feature acts as a catalyst for men to opt out of the dating app and that they would most likely refrain from interacting with profiles that have the ‘Opening Move’ feature enabled, since the pressure to answer creatively is disproportionate to the likelihood of their response actually being entertained.[2] Coupled with female users terming the original protocol ‘too much effort’, the preset questions of the ‘Opening Move’ feature may actively invite users to categorise potential matches according to arbitrary questions that undermine the real-life experiences, perspectives and backgrounds of each individual.[3]
Additionally, complications are likely to arise when a notorious user sets a question that indirectly gleans personal or sensitive, identifiable information. An individual who responds to such a carefully crafted conversation prompt may be bullied or subjected to hateful slurs.
Safety and Privacy Concerns
As a corollary, the appearance of choice may translate into more challenges for women on the platform. The feature may spark an increase in the number of unsolicited, undesirable messages and images from a potential match. The most vulnerable groups at present remain individuals who identify as female and other sexual minorities.[4] There appears to be no mechanism in place to proactively monitor the content of responses; the platform relies instead on user reporting. This approach may prove impractical given the potential volume of objectionable messages, necessitating a more efficient solution. It should be noted that, even when a user does report content, the current redressal systems of online platforms remain lax, largely inadequate and ineffective in addressing user concerns or grievances. This lack of proactiveness is violative of the right to redressal provided under the Digital Personal Data Protection Act, 2023. The feature may also take away the very user autonomy that Bumble originally aimed to grant, since individuals who identify as introverted, shy, soft-spoken or non-assertive may refrain from reporting harassing messages altogether, potentially due to discomfort or reluctance to engage in confrontation. Resultantly, a sharp uptick is anticipated in cases of cyberbullying, harassment and hate speech (especially vulgar communications) towards both the user and the potential match.
From an Indian legal perspective, dating apps have to adhere to the Information Technology Act, 2000 [5], the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [6] and the Digital Personal Data Protection Act, 2023, which regulate a person’s digital privacy and set standards on the kind of content an intermediary may host. An obligation is cast upon an intermediary to apprise its users of what content is not allowed on its platform, in addition to mandating intimation of the user’s digital rights. The lack of automated checks, as mentioned above, is likely to make Bumble non-compliant with these ethics guidelines.
The optional nature of the ‘Opening Move’ grants users some autonomy. However, some technical updates may enhance the user experience of this feature. Technologies like AI are an effective aid in behavioural and predictive analysis. An upgraded ‘matching’ algorithm could analyse the number of un-matches a profile receives, thereby identifying and flagging profiles with multiple lapsed matches. Additionally, a filter option in the application’s interface to hide flagged profiles would enable users to be cautious while navigating through their matches. Another possible method of weeding out notorious profiles is a peer-review system whereby a user has a single check-box to flag a profile. Such a check-box would ideally be devoid of any option for writing personal comments and would simply record whether the profile is most or least likely to bully/harass, ensuring that a binary, precise response is captured and any coloured remarks are avoided. [7]
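As a purely illustrative sketch of how such signals might be combined, the following Python snippet flags a profile once hypothetical thresholds on lapsed matches or binary peer reports are crossed. The data model, field names and thresholds are assumptions for illustration, not Bumble’s actual implementation.

```python
from dataclasses import dataclass

# Illustrative thresholds; real values would have to be tuned on platform data.
LAPSED_MATCH_THRESHOLD = 10
PEER_REPORT_THRESHOLD = 3

@dataclass
class ProfileSignals:
    """Hypothetical per-profile safety signals (not an actual Bumble data model)."""
    lapsed_matches: int = 0   # matches that were un-matched or went silent
    peer_flags: int = 0       # binary "likely to bully/harass" check-box reports

    def is_flagged(self) -> bool:
        # Surface the profile for review, or let users filter it out,
        # once either signal crosses its threshold.
        return (self.lapsed_matches >= LAPSED_MATCH_THRESHOLD
                or self.peer_flags >= PEER_REPORT_THRESHOLD)

profile = ProfileSignals(lapsed_matches=12, peer_flags=1)
print(profile.is_flagged())  # True -> hidden behind a user's "hide flagged profiles" filter
```

Keeping the peer report as a single binary check-box, as suggested above, is what makes the signal easy to aggregate without exposing moderators or users to free-text abuse.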
Governance and Monitoring Mechanisms
From a governance point of view, a monitoring mechanism on the manner of crafting questions is critical. Systems should be designed to detect certain words, sentences and specific ways of framing questions so as to disallow prompts contrary to the national legal framework. An on-screen notification with instructions on generally acceptable conduct in conversations, reminding users to maintain cyber hygiene while chatting, is also proposed as a mandated requirement for platforms. The notification may also include guidelines on what information is safe to share, in order to safeguard user privacy. Lastly, a revised privacy policy should establish the legal basis for processing responses to ‘Opening Moves’, thereby bringing the feature into compliance with national legislation such as the Digital Personal Data Protection Act, 2023.
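A minimal sketch of such a screening step might look as follows. It assumes a short, hard-coded deny-list purely for illustration; a production system would rely on maintained lexicons, multilingual models and legal review rather than a handful of regular expressions.

```python
import re

# Illustrative deny-list of phrases that solicit personal or sensitive information.
# Invented for demonstration; not a complete or legally vetted lexicon.
BLOCKED_PATTERNS = [
    r"\bhome address\b",
    r"\bphone number\b",
    r"\bbank\b",
    r"\bpassword\b",
    r"\bwhere do you live\b",
]

def screen_opening_move(question: str) -> bool:
    """Return True if a drafted Opening Move appears to solicit sensitive information."""
    lowered = question.lower()
    return any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

print(screen_opening_move("Fun one: what's your home address?"))  # True  -> block and warn
print(screen_opening_move("Mountains or beaches?"))               # False -> allowed
```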
Conclusion
Bumble's 'Opening Move' feature marks the company’s ‘statement’ step to address user concerns regarding initiating conversations on the platform. While it has been praised for fostering more meaningful interactions, it also raises not only ethical concerns but also concerns over user safety. While the 'Opening Move' feature can potentially enhance user experience, its success is largely dependent on Bumble's ability to effectively navigate the complex issues associated with this feature. A more robust monitoring mechanism that utilises newer technology is critical to address user concerns and to ensure compliance with national laws on data privacy.
Endnotes:
- [1] Bumble’s privacy policy https://bumble.com/en-us/privacy
- [2] Discussion thread, r/bumble, Reddit https://www.reddit.com/r/Bumble/comments/1cgrs0d/women_on_bumble_no_longer_have_to_make_the_first/?share_id=idm6DK7e0lgkD7ZQ2TiTq&utm_content=2&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1&rdt=65068
- [3] Mcrea-Hedley, Olivia, “Love on the Apps: When did Dating Become so Political?”, 8 February 2024 https://www.service95.com/the-politics-of-dating-apps/
- [4] Gewirtz-Meydan, A., Volman-Pampanel, D., Opuda, E., & Tarshish, N. (2024). ‘Dating Apps: A New Emerging Platform for Sexual Harassment? A Scoping Review. Trauma, Violence, & Abuse, 25(1), 752-763. https://doi.org/10.1177/15248380231162969
- [5] Information Technology Act, 2000 https://www.indiacode.nic.in/bitstream/123456789/13116/1/it_act_2000_updated.pdf
- [6] Information Technology (Intermediary Guidelines and Digital Media Ethics) Rules 2021 https://www.meity.gov.in/writereaddata/files/Information%20Technology%20%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20%28updated%2006.04.2023%29-.pdf
- [7] Date Confidently: Engaging Features in a Dating App (Use Cases), Consaguous, 10 July 2023 https://www.consagous.co/blog/date-confidently-engaging-features-in-a-dating-app-use-cases

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are extending their expertise, investors are injecting money, and payment services ranging from small financial companies to giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of technology have made it easier, and more profitable, to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfake report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious if the perpetrators had decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfake Report estimates that at least 98 percent of all deepfakes are porn and 99 percent of their victims are women. A study by Harvard University refrained from using the term “pornography” for the creation, sharing, or threat to create or share sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews with victims of deepfake porn last year, the study said 63 percent of participants spoke of experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online and offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are broadly two types of deepfakes: one featuring the faces of real humans and another featuring computer-generated, hyper-realistic faces of non-existent people. The first category is particularly concerning and is created by superimposing the faces of real people on existing pornographic images and videos, a task made simple by AI tools.
During the investigation, platforms hosting deepfake porn of stars like Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai and Rashmika Mandanna, as well as TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa and Anveshi Jain, were encountered. It takes a few minutes and as little as Rs 40 for a user to create a high-quality fake porn video of 15 seconds on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations and hide behind frivolous declarations such as: the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. However, the irony of these disclaimers is not lost on anyone, especially when the same platforms host thousands of non-consensual deepfake pornographic videos.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and for creating porn of whoever a user wants, including taking requests for celebrities. To encourage creators further, they enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and the targets are not only celebrities: common people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a payment gateway process payment for sexual content that lacks consent? It seems financial institutions and banks are not paying much attention to this legal question. During the investigation, many such websites accepting payments through services like VISA, Mastercard, and Stripe were found.
Those who have failed to register or partner with these fintech giants have found a way out. While some direct users to third-party sites, others use the personal PayPal accounts of their employees or stakeholders to manually collect money, which potentially violates the platform’s terms of use banning the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently on its way to designing dedicated legislation to address the issues arising out of deepfakes, though existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecution and conviction of offenders is extremely difficult for law enforcement agencies, as this is a boundaryless crime that sometimes involves several countries.
A victim can register a police complaint under the provisions of Section 66E and Section 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect the digital personal data of users. The Union Government also recently issued an advisory to social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnaw is expected to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/

Introduction
When a tragedy strikes, moments are fragile, people are vulnerable, emotions run high, and every second is important. In such critical situations, information becomes as crucial as food, water, shelter, and medication. Yet the moment any piece of information is received, it can trigger stampedes and chaos. Alongside the tragedy, whether natural or man-made, emerges another threat: misinformation. People, desperate for answers, cling to whatever they can find.
Tragedies can take many forms. These may include natural disasters, mass accidents, terrorist activities, or other emergencies. During the 2023 earthquakes in Turkey, misinformation spread on social media claiming that the Yarseli Dam had cracked and was about to burst. People believed it and began migrating from the area. Panic followed, and search and rescue teams stopped operations in that zone. Precious hours were lost. Later, it was confirmed to be a rumour. By then, the damage was already done.
Similarly, after the recent plane crash in Ahmedabad, India, numerous rumours and WhatsApp messages spread rapidly. One message claimed to contain the investigation report on the crash of Air India flight AI-171. It was later called out by PIB and declared fake.
These examples show how misinformation can take control of already painful moments. During emergencies, when emotions are intense and fear is widespread, false information spreads faster and hits harder. Some people share it unknowingly, while others do so to gain attention or push a certain agenda. But for those already in distress, the effect is often the same: it brings more confusion, heightens anxiety, and adds to their suffering.
Understanding Disasters and the Role of Media in Crisis
A disaster can be defined as a natural or human-caused situation that transforms the usual life of a society into a crisis far beyond its existing response capacity. Its effects can range from minimal to severe, from mere disruption of daily life to the inability to meet basic requirements of life like food, water and shelter. Hence, a disaster is not just a sudden event; it becomes a disaster when it overwhelms a community’s ability to cope with it.
To cope with such situations, there is an organised approach called disaster management. It includes preventive measures, minimising damage and helping communities recover. Earlier, public institutions like governments used to be the main actors in disaster management, but today every entity has a role: academic institutions, media outlets and even ordinary people are involved.
Communication is an important element in disaster management. Done correctly, it saves lives: vulnerable people need to know what is happening, what they should do and where to seek help. In today’s era of instantaneous communication, however, it also carries risk, because false information can travel just as fast as verified guidance.
Research shows that the media often fails to focus on disaster preparedness. For example, studies found that during the 2019 Istanbul earthquake, the media focused more on dramatic scenes than on educating people. Similar trends were seen during the 2023 Turkey earthquakes. Rather than helping people prepare or stay calm, much of the media coverage amplified fear and sensationalised suffering. This shows a shift from preventive, helpful reporting to reactive, emotional storytelling. In doing so, the media sometimes fails in its duty to support resilience and, worse, can become a channel for spreading misinformation during already traumatic events. Fighting misinformation, however, is not just one actor’s responsibility; spreading it is penalised under the official disaster management framework. Section 54 of the Disaster Management Act, 2005 states that "Whoever makes or circulates a false alarm or warning as to disaster or its severity or magnitude, leading to panic, shall, on conviction, be punishable with imprisonment which may extend to one year or with a fine."
AI as a Tool in Countering Misinformation
AI has emerged as a powerful tool in the fight against misinformation. AI technologies like Natural Language Processing (NLP) and Machine Learning (ML) are effective in spotting and classifying misinformation with up to 97% accuracy. AI flags unverified content, leading to a 24% decrease in shares and a 7% drop in likes on platforms like TikTok. Up to 95% fewer people view content on Facebook when fact-checking labels are used. Facebook AI also removes 86% of graphic violence, 96% of adult nudity, 98.5% of fake accounts and 99.5% of terrorism-related content. These tools help rebuild public trust in addition to limiting the dissemination of harmful content. In 2023, support for tech companies acting to combat misinformation rose to 65%, indicating a positive change in public expectations and awareness.
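To make the NLP/ML approach concrete, here is a minimal, illustrative sketch in Python using scikit-learn: a TF-IDF representation feeding a logistic-regression classifier trained on a handful of toy labelled messages. The examples and labels are invented for demonstration; real systems are trained on large fact-checked corpora, and this sketch makes no claim to the accuracy figures cited above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples (1 = likely misinformation, 0 = credible update).
# Invented for illustration; real systems train on large fact-checked corpora.
texts = [
    "Dam has cracked, evacuate now, forward this to everyone before it is deleted",
    "Official bulletin: rescue teams deployed, follow district administration advisories",
    "Secret report reveals the crash was staged, media is hiding the truth",
    "Authorities confirm helpline numbers and relief camp locations for affected areas",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Forward this secret warning to every group before it is deleted"]))
# Expected output for this toy model: [1] (flagged as likely misinformation)
```

In practice, such a classifier would only flag content for human fact-checkers or for the labelling and down-ranking measures described above, rather than removing posts automatically.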
How to Counter Misinformation
Experts should step up in such situations. Social media has allowed many so-called experts to spread fake information without any real knowledge, research, or qualification. In such conditions, real experts such as authorities, doctors, scientists, public health officials, researchers, etc., need to take charge. They can directly address the myths and false claims and stop misinformation before it spreads further and reduce confusion.
Responsible journalism is crucial during crises. In times of panic, people look to the media for guidance. Hence, it is important to fact-check every detail before publishing. Reporting based on unclear tips, social media posts, or rumours can cause major harm by inciting mistrust, fear, or even dangerous behaviour. Cross-checking information, relying on credible sources and promptly correcting errors are all components of responsible journalism. Protecting the public is more important than merely disseminating the news.
Focus on accuracy rather than speed. News spreads in a blink in today's world, and media outlets and influencers often come under pressure to publish first. But in tragic situations like natural disasters and disease outbreaks, being first matters far less than being accurate: a single piece of misinformation can spark mass-scale panic, slow down emergency efforts and lead people to make rash decisions. Taking a little more time to check the facts ensures that the information being shared is helpful, not harmful. Accuracy may save numerous lives during tragedies.
Misinformation spreads quickly, and it can only be countered if people learn to critically evaluate what they hear and see. This entails being able to spot biased or deceptive headlines, cross-check claims and identify reliable sources. Digital literacy is of utmost importance; it makes people less susceptible to fear-based rumours, conspiracy theories and hoaxes.
Disaster preparedness programs should include awareness about the risks of spreading unverified information. Communities, schools and media platforms must educate people on how to respond responsibly during emergencies by staying calm, checking facts and sharing only credible updates. Spreading fake alerts or panic-inducing messages during a crisis is not only dangerous, but it can also have legal consequences. Public communication must focus on promoting trust, calm and clarity. When people understand the weight their words can carry during a crisis, they become part of the solution, not the problem.
References:
- https://dergipark.org.tr/en/download/article-file/3556152
- https://www.dhs.gov/sites/default/files/publications/SMWG_Countering-False-Info-Social-Media-Disasters-Emergencies_Mar2018-508.pdf
- https://english.mathrubhumi.com/news/india/fake-whatsapp-message-air-india-crash-pib-fact-check-fcwmvuyc