Launch of Central Suspect Registry to Combat Cyber Crimes
Introduction
The Indian government has introduced initiatives to enhance data sharing between law enforcement agencies and stakeholders in order to combat cybercrime. Union Home Minister Amit Shah launched the Central Suspect Registry, the Cyber Fraud Mitigation Centre, the Samanvay Platform and the Cyber Commandos programme at the Indian Cyber Crime Coordination Centre (I4C) Foundation Day celebration, held on 10th September 2024 at Vigyan Bhawan, New Delhi. The ‘Central Suspect Registry’ will serve as a central-level database consolidating data on cybercrime suspects nationwide. The Indian Cyber Crime Coordination Centre will share a list of all repeat offenders on its servers. Shri Shah added that a Suspect Registry at the central level, with the states connected to it, will help in the prevention of cybercrime.
Key Highlights of Central Suspect Registry
The Indian Cyber Crime Coordination Centre (I4C) has established the suspect registry in collaboration with banks and financial intermediaries to enhance fraud risk management in the financial ecosystem. The registry will serve as a central-level database with consolidated data on cybercrime suspects. Using data from the National Cybercrime Reporting Portal (NCRP), the registry makes it possible to identify cybercriminals as potential threats.
Central Suspect Registry Need of the Hour
The Union Home Minister of India, Shri Shah, has emphasized the need for a national Cyber Suspect Registry to combat cybercrime. He argued that having separate registries for each state would not be effective, as cybercriminals have no boundaries. He emphasized the importance of connecting states to this platform, stating it would significantly help prevent future cyber crimes.
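The argument for a single national registry over separate state-level ones can be illustrated with a toy sketch: a suspect reported in two different states is invisible to either state's isolated registry, but trivially flagged by a shared one. All names, identifiers and thresholds below are hypothetical, for illustration only; they do not reflect the actual design of the Central Suspect Registry.

```python
from collections import defaultdict


class SuspectRegistry:
    """Toy model of a consolidated suspect registry: agencies report
    identifiers (e.g. phone numbers) tied to complaints, and offenders
    active across states become queryable centrally."""

    def __init__(self) -> None:
        # identifier -> set of states that have reported it
        self._reports: defaultdict[str, set[str]] = defaultdict(set)

    def report(self, identifier: str, state: str) -> None:
        # A state agency files a complaint against an identifier.
        self._reports[identifier].add(state)

    def is_repeat_offender(self, identifier: str) -> bool:
        # Flag identifiers reported by more than one state -- a pattern
        # isolated state-level registries could not detect on their own.
        return len(self._reports[identifier]) > 1


registry = SuspectRegistry()
registry.report("9990001111", "Maharashtra")
registry.report("9990001111", "Delhi")       # same suspect, different state
registry.report("8880002222", "Karnataka")   # only one state so far
```

Here `is_repeat_offender("9990001111")` is true only because both states feed the same database, which is the crux of the Home Minister's point about cybercriminals having no boundaries.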
CyberPeace Outlook
There has been an alarming uptick in cybercrime in the country, highlighting the need for proactive approaches to counter emerging threats. The recently launched initiatives under the umbrella of the Indian Cyber Crime Coordination Centre are significant steps by the centre to improve coordination between law enforcement agencies, strengthen user awareness, and offer technical capabilities to target cybercriminals, with the overall aim of combating the growing rate of cybercrime in the country.
The Equitable Growth Approach of AI and Digital Twins
Digital Twins can be simply described as virtual replicas of physical assets or systems, powered by real-time data and advanced simulations. When this technology is combined with AI, its impact on real-time monitoring, predictive maintenance, optimised operations, and improved design processes becomes even greater. The greatest value of AI is its ability to make data actionable: combined with digital twins, data can be collated and analysed, inefficiencies removed, and better decisions taken to improve efficiency and quality.
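The monitoring-and-prediction loop described above can be sketched in a few lines. This is a deliberately minimal illustration of the concept, not any real product: the class name, sensor field, window size and drift tolerance are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class DigitalTwin:
    """Toy digital twin of a machine: mirrors real-time sensor readings
    from its physical counterpart and flags drift from the expected
    operating baseline (a simple predictive-maintenance rule)."""
    baseline_temp: float                       # expected operating temperature
    readings: list[float] = field(default_factory=list)

    def sync(self, temp: float) -> None:
        # Ingest a real-time sensor reading from the physical asset.
        self.readings.append(temp)

    def needs_maintenance(self, window: int = 3, tolerance: float = 5.0) -> bool:
        # Flag when the recent average drifts more than `tolerance`
        # degrees from the baseline -- a stand-in for the far richer
        # simulations real digital twins run.
        recent = self.readings[-window:]
        return bool(recent) and abs(mean(recent) - self.baseline_temp) > tolerance


twin = DigitalTwin(baseline_temp=70.0)
for t in [70.2, 71.0, 69.8]:
    twin.sync(t)
print(twin.needs_maintenance())   # stable readings: no action needed
for t in [78.0, 79.5, 80.1]:
    twin.sync(t)
print(twin.needs_maintenance())   # sustained drift: schedule maintenance
```

In practice the threshold rule would be replaced by an AI model trained on historical sensor data, but the structure, a virtual object kept in sync with a physical one and queried for decisions, is the same.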
This intersection between AI and Digital Twins holds immense potential for addressing key challenges, particularly in countries like India, which is rapidly embracing digital adoption to achieve its economic ambitions and sustainability goals. According to Salesforce’s most recent survey on generative AI use among the general population in the U.S., UK, Australia and India, 75% of generative AI users are looking to automate repetitive tasks and use generative AI for work communications. This blog discusses the intersection of equitable growth, sustainability, and AI-driven policies in India.
Sustainability and the Path Ahead: Digital Twin and AI-Driven Solutions
India faces sustainability challenges mainly associated with urban congestion, rising energy demand, climate change and environmental degradation. AI and Digital Twins provide solutions through real-time simulations and predictive analysis. Examples include applications in sustainable urban planning, such as smart cities like the Indore Smart City Initiative and traffic optimisation; energy efficiency and optimisation through AI-driven renewable energy projects and power grid optimisation; and water resource management through leak detection, equitable distribution and conservation.
The need is to balance innovation with regulation, underscoring the importance of ethical and sustainable deployment of AI and digital twins, and to address data privacy alongside AI ethics. Relevant recent developments include India’s evolving AI policy landscape, such as the National Strategy for Artificial Intelligence and its focus on ‘AI for All’, and regulatory frameworks such as the DPDP Act, along with the manner in which they address AI ethics, data privacy, and digital governance.
The need is to initiate targeted policies that promote research and development in AI and digital twin technologies, skill development and partnerships with the private sector, think tanks, nonprofits and others. Also, collaborations at the global level would include aligning our domestic policies with global AI and sustainability initiatives and leveraging the international frameworks for climate tech and smart infrastructure.
CyberPeace Outlook
As part of specific actions, policymakers need to engage in proactive governance to ensure the responsible use and development of AI. This includes enacting incentive schemes for sustainable AI projects and strengthening the enforcement of data privacy laws. Industry leaders must support equitable access to AI and digital twin technologies and develop tailored AI tools for resource-constrained settings, particularly in India. Finally, researchers need to drive innovation in alignment with sustainability goals, such as those related to agriculture and groundwater management.
References
- https://economictimes.indiatimes.com/tech/artificial-intelligence/technologies-like-ai-and-digital-twins-can-tackle-challenges-like-equitable-growth-to-sustainability-wef/articleshow/117121897.cms
- https://www.salesforce.com/news/stories/generative-ai-statistics/
- https://www.mdpi.com/2673-2688/4/3/38
- https://www.ibm.com/think/topics/generative-ai-for-digital-twin-energy-utilities

AI systems have grown in both popularity and in the complexity of the tasks they perform. They are enhancing accessibility for all, including persons with disabilities (PWDs), by revolutionising sectors including healthcare, education, and public services. We are at the stage where AI-powered solutions are being created that can help people with mental, physical, visual or hearing impairments perform everyday and complex tasks.
Generative AI is now being used to amplify human capability. Tools for speech-to-text and image recognition are facilitating communication and interaction for visually or hearing-impaired individuals, and smart prosthetics are providing tailored support. Unfortunately, even with these developments, PWDs continue to face challenges. It is therefore important to balance innovation with ethical considerations and to ensure that these technologies are designed with privacy, equity, and inclusivity in mind.
Access to Tech: the Barriers Faced by PWDs
PWDs face several barriers while accessing technology, and identifying these challenges is important given how central hardware and software have become to everyday life. Website functions that only work when users click with a mouse, self-service kiosks without accessibility features, touch screens without screen reader software or tactile keyboards, and out-of-order equipment, such as lifts, captioning mirrors and description headsets, are just some of the difficulties they face in their day-to-day lives.
While they are helpful, much of the current technology doesn’t fully address all disabilities. For example, many assistive devices focus on visual or mobility impairments, but they fall short of addressing cognitive or sensory conditions. In addition to this, these solutions often lack personalisation, making them less effective for individuals with diverse needs. AI has significant potential to bridge this gap. With adaptive systems like voice assistants, real-time translation, and personalised features, AI can create more inclusive solutions, improving access to both digital and physical spaces for everyone.
The Importance of Inclusive AI Design
Creating an inclusive AI design is important. It ensures that PWDs are not excluded from technological advancements because of their impairments. The concept of an ‘inclusive’ or ‘universal’ design promotes creating products and services that are usable by the widest possible range of people. Tech developers have an ethical responsibility to create advancements in AI that serve everyone, and accessibility features should be built into the core design and treated as standard practice rather than an afterthought. However, bias in AI development, often stemming from non-representative data or flawed assumptions, can lead to systems that overlook or poorly serve PWDs. If AI algorithms are trained on limited or biased data, they risk excluding marginalised groups, making ethical, inclusive design a necessity for equity and accessibility.
Regulatory Efforts to Ensure Accessible AI
In India, the Rights of Persons with Disabilities Act, 2016 emphasises the need to provide PWDs with equal access to technology. Subsequently, Section 9 of the DPDP Act, 2023 addresses data privacy for persons with disabilities by regulating how their personal data may be processed.
At the international level, the recently enacted EU AI Act mandates measures for transparent, safe, and fair access to AI systems, including measures related to accessibility.
In the US, the Americans with Disabilities Act of 1990 and Section 508 of the 1998 amendment to the Rehabilitation Act of 1973 are the primary legislations promoting digital accessibility in public services.
Challenges in implementing Regulations for AI Accessibility for PWDs
Defining the term ‘inclusive AI’ is itself a challenge: if this foundational concept is left undefined, creating tools and compliance mechanisms to address accessibility becomes far harder. The rapid pace of technology and AI development has often outpaced legal frameworks, creating enforcement gaps. Countries like Canada and tech industry giants like Microsoft and Google are leading forces behind accessible AI innovations; their frameworks focus on AI ethics grounded in inclusivity and on collaboration with disability rights groups.
India’s efforts in creating inclusive AI include the redesign of the Sugamya Bharat app, which was created to assist PWDs and the elderly and will now incorporate AI features specifically to assist its intended users.
Though AI development has opportunities for inclusivity, unregulated development can be risky. Regulation plays a critical role in ensuring that AI-driven solutions prioritise inclusivity, fairness, and accessibility, harnessing AI’s potential to empower PWDs and contribute to a more inclusive society.
Conclusion
AI development can offer PWDs unprecedented independence and accessibility in leading their lives. Developing AI with inclusivity and fairness in mind needs to be prioritised: AI that is free from bias, combined with robust regulatory frameworks, is essential to ensuring that AI serves everyone equitably. Collaborations between tech developers, policymakers, and disability advocates should be supported and promoted, helping to bridge accessibility gaps for PWDs. As AI continues to evolve, a steadfast commitment to inclusivity will be crucial in preventing marginalisation and advancing true technological progress for all.
References
- https://www.business-standard.com/india-news/over-1-4k-accessibility-related-complaints-filed-on-govt-app-75-solved-124090800118_1.html
- https://www.forbes.com/councils/forbesbusinesscouncil/2023/06/16/empowering-individuals-with-disabilities-through-ai-technology/
- https://hbr.org/2023/08/designing-generative-ai-to-work-for-people-with-disabilities
- https://blogs.microsoft.com/on-the-issues/2018/05/07/using-ai-to-empower-people-with-disabilities/

Introduction
We consume news from various sources, such as news channels, social media platforms and the wider Internet. In the age of the Internet and social media, misinformation has become a common concern, with fake news spreading widely across online platforms.
Misinformation on social media platforms
The wide availability of user-generated content on online social media platforms facilitates the spread of misinformation. With the vast population on these platforms, information goes viral and spreads across the internet. This has become a serious concern, as misinformation, including rumours, morphed images, unverified information, fake news, and planted stories, spreads easily online, leading to severe consequences such as public riots, lynching, communal tensions, misconceptions about facts, and defamation.
Platform-centric measures to mitigate the spread of misinformation
- Google introduced the ‘About this result’ feature, which helps users better understand search results and websites at a glance.
- During the Covid-19 pandemic, there was a surge of misinformation being shared. In April 2020, Google invested $6.5 million in funding for fact-checkers and non-profits fighting misinformation around the world, including checks on information related to the treatment, prevention, and transmission of Covid-19.
- YouTube also has a Medical Misinformation Policy, which prevents the spread of content that contradicts guidance from the World Health Organization (WHO) or local health authorities.
- During the Covid-19 pandemic, major social media platforms such as Facebook and Instagram showed awareness pop-ups connecting people to information directly from the WHO and regional health authorities.
- WhatsApp limits the number of times a message can be forwarded to prevent the spread of fake news, and labels messages that have been forwarded many times. WhatsApp has also partnered with fact-checking organisations to give users access to accurate information.
- On Instagram, when content has been rated as false or partly false, Instagram either removes it or reduces its distribution by lowering its visibility in Feeds.
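The forward-limit mechanism in the list above can be sketched as a counter carried along with a message. The names, the cap of five recipients, and the labelling threshold below are illustrative assumptions for the sketch, not WhatsApp's actual implementation.

```python
from dataclasses import dataclass

FORWARD_CAP = 5        # assumed max recipients per forward action (illustrative)
HIGHLY_FORWARDED = 5   # assumed count after which a message gets labelled


@dataclass
class Message:
    text: str
    forward_count: int = 0   # travels with the message as it spreads


def forward(msg: Message, recipients: list[str]) -> Message:
    # Refuse to fan a message out to more chats than the cap allows,
    # slowing the viral spread of unverified content.
    if len(recipients) > FORWARD_CAP:
        raise ValueError("too many recipients for a forwarded message")
    return Message(msg.text, msg.forward_count + 1)


def label(msg: Message) -> str:
    # Surface provenance to the reader, akin to a
    # "Forwarded many times" banner.
    return "Forwarded many times" if msg.forward_count >= HIGHLY_FORWARDED else ""
```

The design point is that friction (the cap) and transparency (the label) work together: the first slows distribution, the second lets recipients judge provenance before sharing further.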
Fight Against Misinformation
Misinformation is rampant across the world and needs to be addressed at the earliest. Multiple developed nations have partnered with tech companies to address this issue, and with the increasing penetration of social media and the internet, it remains a global problem. Big tech companies such as Meta and Google have undertaken various initiatives globally to address it. In India, Google, in collaboration with civil society organisations, has piloted multiple avenues for mass-scale awareness and upskilling campaigns to make an impact on the ground.
Conclusion
In the digital media space, misinformative content is widespread. Platforms like Google and other social media companies have taken proactive steps to prevent the spread of misinformation. Users should also act responsibly while sharing any information, thereby helping to create a safe digital environment for everyone.