Launch of Central Suspect Registry to Combat Cyber Crimes
Introduction
The Indian government has introduced initiatives to enhance data sharing between law enforcement and stakeholders to combat cybercrime. Union Home Minister Amit Shah launched the Central Suspect Registry, the Cyber Fraud Mitigation Centre, the Samanvay Platform and the Cyber Commandos programme at the Indian Cyber Crime Coordination Centre (I4C) Foundation Day celebration, held on 10th September 2024 at Vigyan Bhawan, New Delhi. The ‘Central Suspect Registry’ will serve as a central-level database with consolidated data on cybercrime suspects nationwide. The Indian Cyber Crime Coordination Centre will share a list of all repeat offenders on its servers. Shri Shah added that a Suspect Registry at the central level, with the states connected to it, will help in the prevention of cybercrime.
Key Highlights of Central Suspect Registry
The Indian Cyber Crime Coordination Centre (I4C) has established the suspect registry in collaboration with banks and financial intermediaries to enhance fraud risk management in the financial ecosystem. The registry will serve as a central-level database with consolidated data on cybercrime suspects. Using data from the National Cybercrime Reporting Portal (NCRP), the registry makes it possible to identify cybercriminals as potential threats.
Central Suspect Registry Need of the Hour
The Union Home Minister of India, Shri Shah, has emphasized the need for a national Cyber Suspect Registry to combat cybercrime. He argued that having separate registries for each state would not be effective, as cybercriminals have no boundaries. He stressed the importance of connecting states to this platform, stating that it would significantly help prevent future cybercrimes.
CyberPeace Outlook
There has been an alarming uptick in cybercrime in the country, highlighting the need for proactive approaches to counter emerging threats. The recently launched initiatives under the umbrella of the Indian Cyber Crime Coordination Centre are significant steps by the Centre to improve coordination between law enforcement agencies, strengthen user awareness, and provide technical capabilities to target cybercriminals, with the overall aim of combating the growing rate of cybercrime in the country.
Introduction
Language is an important part of human communication and a basic aspect of human understanding. The world is a global market, and this diversity of languages creates difficulties in effective communication and collaboration. India alone has 22 official languages and countless regional languages and dialects that change every few hundred kilometres.
AI has emerged to overcome the challenge of language barriers and has brought about a transformative shift. It is leading the charge in breaking down traditional barriers and paving the way for more inclusive and seamless global interactions. AI’s integration into language translation has revolutionised the field, addressing longstanding challenges associated with traditional human-centric approaches. The limitations of relying on human translators, such as time constraints, resource limitations, and the inability to handle large volumes of data efficiently, paved the way for AI’s transformative impact. However, challenges such as maintaining translation accuracy, addressing cultural nuances, and ensuring data privacy require careful attention to realise AI’s full potential.
AI Technologies Bridging Language Gaps
AI tools have transformed translation, transcription, and natural language processing, providing language solutions. They can instantly translate text, transcribe audio, and analyse linguistic nuances, enabling effective cross-cultural communication. Moreover, AI's adaptive capabilities have facilitated language learning, allowing individuals to grasp new languages and adapt their communication styles to diverse cultural contexts.
AI technologies are making information and services more accessible to non-native speakers and are impacting global business, allowing effective engagement. Building on this transformative potential, various AI tools are now used to bridge language gaps in real-world applications. Some examples of AI’s role in bridging the language gap are:
- Real-time translation tools enable instant communication by providing on-the-fly translations between languages, making effortless conversations with clients and partners worldwide possible (a minimal code sketch of machine translation follows this list).
- Tools such as ‘speech-to-text’ and ‘text-to-speech’ like Murf AI, Lovo AI, and ElevenLabs convert spoken language into written text and vice versa. These technologies streamline interactions, boost productivity, and bring clarity to global business dealings. Businesses can extract important information, insights, and action points from meetings, interviews, and presentations.
- AI chatbots like MyGov Corona Helpdesk, the WhatsApp Chatbot by the Government of India, Railway Food Order & Delivery by Zoop India, and the Gen AI-powered ‘Elena’ by the Indian School of Business (ISB) act as intelligent virtual assistants that engage in real-time conversations by answering queries, providing information, and facilitating transactions. They offer round-the-clock support, freeing up human resources and enhancing customer experience across language barriers.
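To make the idea of AI-assisted translation concrete, the snippet below is a minimal sketch, not a reference to any of the commercial products named above, showing how a pretrained neural translation model can be invoked in a few lines of Python. It assumes the open-source Hugging Face `transformers` library and the publicly available `Helsinki-NLP/opus-mt-en-hi` English-to-Hindi model; the tools mentioned above expose comparable capabilities through their own APIs.

```python
# Minimal illustrative sketch: neural machine translation with the
# Hugging Face `transformers` library (assumption: library and model
# chosen for illustration, not part of any product named above).
from transformers import pipeline

# Load a pretrained English-to-Hindi translation model
# (the model weights are downloaded on first use).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-hi")

text = "Our meeting has been rescheduled to Monday at 10 am."
result = translator(text)

# The pipeline returns a list of dicts with a 'translation_text' key.
print(result[0]["translation_text"])
```

Swapping the model name for another language pair (for example, an Opus-MT English-to-French checkpoint) is usually all that is needed to target a different language, which is what makes such models attractive building blocks for multilingual business workflows.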
Challenges and Limitations of AI Translation
While AI’s integration in combating language barriers is commendable, this endeavour faces several challenges and limitations:
- AI translation systems face several challenges in handling accuracy, context, nuance, and idiomatic expressions.
- These systems may struggle with complex or specialised language, as well as with regional dialects, leading to potential misinterpretations.
- Biases within the AI models can further affect the inclusivity of translations, often favouring dominant languages and cultural norms while marginalising others.
- Ethical concerns regarding privacy and data security also arise, particularly when sensitive information is processed.
- Ensuring user consent and protecting data integrity are essential to addressing these concerns. As AI continues to evolve, ongoing efforts are needed to improve fairness, transparency, and the cultural sensitivity of translation systems.
AI’s Future in Language Translation
AI technologies are moving towards improving translation accuracy and contextual understanding, allowing AI models to better grasp cultural nuances and idiomatic expressions. This can significantly enhance communication across diverse languages, fostering multilingual interactions and global collaboration in business, education, and diplomacy. Improvements in AI technology are becoming ubiquitous, and models like GPT and Google Translate are now better at capturing nuances, idioms, and cultural differences, reducing errors. AI tools like Microsoft Translator help cross-continental teams work seamlessly by enhancing their productivity and inclusivity.
AI is capable of offering real-time translation in healthcare, education, and public services, enabling more inclusive environments and bridging communication gaps. In healthcare, for example, AI-powered translation tools are helping the industry provide better care by crossing linguistic barriers: doctors can now communicate with patients who speak different languages, ensuring equitable care across linguistic boundaries.
Conclusion
We live in a world where diverse languages pose significant challenges to global communication, and AI has emerged as a powerful tool to bridge these gaps. AI is paving the way for more inclusive and seamless interactions by revolutionising language translation, transcription, and natural language processing. Its ability to break down barriers caused by linguistic diversity ensures effective communication in fields ranging from business to healthcare. Despite challenges like accuracy and cultural sensitivity, the potential for AI to continuously improve is undeniable. As AI technologies evolve, they stand as the key to overcoming language barriers and fostering a more connected and inclusive global community.
Notwithstanding AI's potential to overcome language barriers through advances in natural language processing and translation, cybersecurity and data privacy must always come first. The same technologies that make it easier to communicate globally also put private information at risk. The likelihood of data breaches, misuse of personal information, and compromised communication rises in the absence of strict cybersecurity safeguards. Thus, in order to guarantee safe and reliable international interactions as AI develops, it is crucial to strike a balance between innovation and privacy protection.
References
- https://megasisnetwork.medium.com/ai-and-language-translation-breaking-down-language-barriers-47873cfdb13b
- https://pubmed.ncbi.nlm.nih.gov/38099504/
- https://www.linkedin.com/pulse/breaking-language-barriers-ai-era-leveraging-tools-business-a-rad
- https://www.researchgate.net/publication/373842132_Breaking_Down_Barriers_With_Artificial_Intelligence_AI_Cross-Cultural_Communication_in_Foreign_Language_Education
Introduction
In a setback to the Centre, the Bombay High Court on Friday, 20th September 2024, struck down the provisions under the IT Amendment Rules, 2023, which empowered the Central Government to establish Fact Check Units (FCUs) to identify ‘fake and misleading’ information about its business on social media platforms.
Chronological Overview
- On 6th April 2023, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules, 2023). These rules introduced new provisions to establish a fact-checking unit with respect to “any business of the central government”. The amendment was made in exercise of the powers conferred by Section 87 of the Information Technology Act, 2000 (IT Act).
- On 20th March 2024, the Central Government notified the Press Information Bureau (PIB) as the FCU under Rule 3(1)(b)(v) of the IT Amendment Rules, 2023.
- The next day, on 21st March 2024, the Supreme Court stayed the Centre's decision notifying the PIB as FCU, considering the pendency of the proceedings before the High Court of Judicature at Bombay. A detailed analysis by CyberPeace of the Supreme Court's stay decision can be accessed here.
- In the latest development, on 20th September 2024, the Bombay High Court struck down the provisions under the IT Amendment Rules, 2023, which empowered the Central Government to establish Fact Check Units (FCUs) to identify ‘fake and misleading’ information about its business on social media platforms.
Brief Overview of Bombay High Court decision dated 20th September 2024
Justice AS Chandurkar was appointed as the third judge after a split verdict in January 2024 by a division bench consisting of Justices Gautam Patel and Neela Gokhale. As the ‘tie-breaker judge’, Justice Chandurkar delivered the decision striking down the provisions for setting up a Fact Check Unit under the IT Amendment Rules, 2023. Striking down the Centre's proposed fact check unit provision, Justice Chandurkar of the Bombay High Court also opined that there was no rationale for determining whether information relating to the business of the Central Government was fake, false or misleading when in digital form, while not doing the same when such information was in print. It was also contended that there is no justification for introducing an FCU only in relation to the business of the Central Government. Rule 3(1)(b)(v) has a serious chilling effect on the exercise of the freedom of speech and expression under Article 19(1)(a) of the Constitution, since communication of the view of the FCU will result in the intermediary simply pulling down the content for fear of consequences or of losing the safe harbour protection provided under the IT Act.
Justice Chandurkar held that the expressions ‘fake, false or misleading’ are ‘vague and overbroad’, and that the ‘test of proportionality’ is not satisfied. Rule 3(1)(b)(v) was held violative of Articles 14, 19(1)(a) and 19(1)(g) of the Constitution and “ultra vires”, or beyond the powers of, the IT Act.
Role of Expert Organisations in Curbing Mis/Disinformation and Fake News
In light of the recent developments and the rising incidents of Mis/Disinformation and Fake News, it becomes significantly important that we all stand together in the fight against these challenges. Action against Mis/Disinformation and Fake News should be strengthened by collective efforts; expert organisations like CyberPeace Foundation play a key role in enabling and encouraging netizens to exercise caution and rely on authenticated sources, rather than relying solely on a government FCU to block content.
Mis/Disinformation and Fake News should be stopped, identified and countered by netizens at the very first stage of their spread. The Bombay High Court's decision to strike down the provision for setting up the FCU by the Central Government indicates that the government's intention to address misinformation relating solely to its own business/operations may not have been justified effectively in the eyes of the judiciary.
It is high time to exercise collective efforts against Mis/Disinformation and Fake News and to support expert organisations that are actively engaged in proactive measures and campaigns to target these challenges, specifically in the online information landscape. CyberPeace actively publishes fact-checking reports and insights on prebunking and debunking, conducts expert sessions, and takes various key steps aimed at empowering netizens to build cognitive defences to recognise suspect information, disregard misleading claims and prevent further spread, so as to ensure a trustworthy online information landscape.
References:
- https://www.scconline.com/blog/post/2024/09/20/bombay-high-court-it-rules-amendment-2023-fact-check-units-article14-article19-legal-news/
- https://indianexpress.com/article/cities/mumbai/bombay-hc-strikes-down-it-act-amendment-fact-check-unit-9579044/
- https://www.cyberpeace.org/resources/blogs/supreme-court-stay-on-centres-notification-of-pibs-fact-check-unit-under-it-amendment-rules-2023
Introduction
Twitter Inc.’s appeal against blocking orders for specific accounts issued by the Ministry of Electronics and Information Technology was dismissed by a single judge of the Karnataka High Court. Justice Krishna Dixit also imposed a fine of Rs. 50 lakh on Twitter Inc., observing that the social media corporation had approached the court in defiance of government directives.
The government had called Twitter’s locus standi as a foreign corporation into doubt, saying it was ineligible to invoke Articles 19 and 21 in its situation. Additionally, the government claimed that because Twitter was only designed to serve as an intermediary, there was no “jural relationship” between Twitter and its users.
The Issue
The Ministry issued the directives in accordance with Section 69A of the Information Technology Act. Nevertheless, Twitter had argued in its appeal that the orders “fall foul of Section 69A both substantially and procedurally.” Twitter argued that, in accordance with Section 69A, account holders were to be notified before their tweets and accounts were taken down; however, the Ministry failed to provide these account holders with any notices.
On June 4, 2022, and again on June 6, 2022, the government sent letters to Twitter’s compliance officer requesting that the officer appear before it and explain why the Blocking Orders were not followed and why action should not be taken against the company.
Twitter replied on June 9 that the content against which it had not followed the blocking orders did not appear to violate Section 69A. On June 27, 2022, the Government issued another notice stating that Twitter was violating its directions. On June 29, Twitter replied, asking the Government to reconsider the direction on the basis of the doctrine of proportionality. On June 30, 2022, the Government withdrew blocking orders on ten account-level URLs but gave an additional list of 27 URLs to be blocked. On July 10, more accounts were blocked. Complying with the orders “under protest,” Twitter approached the High Court with a petition challenging the orders.
Legality
Appearing for the government, Additional Solicitor General R Sankaranarayanan argued that tweets mentioning “Indian Occupied Kashmir” and the survival of LTTE commander Velupillai Prabhakaran were serious enough to undermine the integrity of the nation.
Twitter, on the other hand, claimed that it was asserting these rights on behalf of its users. Additionally, Twitter maintained that, even as a foreign company, it was entitled under Article 14 of the Constitution to certain rights, such as the right to equality. It also argued that the reason for blocking was not stated in each case, and that Section 69A’s provision for blocking should apply only to the offending URL rather than to the entire account, because blocking the entire account would prevent the creation of future information, whereas blocking the offending tweet applies only to already-created information.
Conclusion
The evolution of cyberspace has been substantiated by big tech companies like Facebook, Google, Twitter, Amazon and many more. These companies have been instrumental in leading the spectrum of emerging technologies and creating a blanket of ease and accessibility for users. Compliance with laws and policies is of utmost priority for the government, and the new bills and policies are empowering Indian cyberspace. Non-compliance will be taken very seriously, and the same is formalised under the Intermediary Guidelines of 2021 and 2022 by MeitY. Referring to Section 79 of the Information Technology Act, which pertains to exemption from liability of intermediaries in certain instances, it was observed: “Intermediary is bound to obey the orders which the designate authority/agency which the government fixes from time to time.”