Using incognito mode and VPN may still not ensure total privacy, according to expert
SVIMS Director and Vice-Chancellor B. Vengamma lighting a lamp to formally launch the cybercrime awareness programme conducted by the police department for the medical students in Tirupati on Wednesday.
An awareness meet on safe Internet practices was held for the students of Sri Venkateswara University (SVU) and Sri Venkateswara Institute of Medical Sciences (SVIMS) here on Wednesday.
“Cyber criminals on the prowl can easily track our digital footprint, steal our identity and resort to impersonation,” cyber expert I.L. Narasimha Rao cautioned the college students.
Addressing the students in two sessions, Mr. Narasimha Rao, who is a Senior Manager with CyberPeace Foundation, said seemingly common acts like browsing a website, and liking and commenting on posts on social media platforms could be used by impersonators to recreate an account in our name.
Turning to the youth, Mr. Narasimha Rao said the incognito mode and Virtual Private Networks (VPNs), used as protected network connections, do not ensure total privacy, as third parties can still snoop on the websites being visited by users. He also cautioned them about tactics like ‘phishing’, ‘vishing’ and ‘smishing’ being used by cybercriminals to steal passwords and gain access to accounts.
“After cracking the whip on websites and apps that could potentially compromise our security, the Government of India has recently banned 232 more apps,” he noted.
Additional Superintendent of Police (Crime) B.H. Vimala Kumari appealed to cyber victims to call 1930 or the Cyber Mitra’s helpline 9121211100. SVIMS Director B. Vengamma stressed the need for caution with smartphones becoming an indispensable tool for students, be it for online education, seeking information, entertainment or for conducting digital transactions.

Introduction
In the age of digital advancement, where technology continually evolves, so do the methods of crime. The rise of cybercrime poses a range of threats to individuals, organisations, businesses, and government agencies. To combat such crimes, law enforcement agencies are looking for innovative solutions to these challenges. One such solution comes from the Surat Police in Gujarat, who have embraced the power of Artificial Intelligence (AI) to bolster their efforts to reduce cybercrime.
Key Highlights
The Surat Police in Gujarat have launched an AI-based WhatsApp chatbot called the "Surat Police Cyber Mitra Chatbot" to tackle growing cybercrime. The chatbot provides quick assistance to individuals dealing with various cyber issues, ranging from reporting cyber crimes to receiving safety tips. The initiative is the first of its kind in the country, showcasing the Surat Police's dedication to using advanced technology for public safety. The Surat Police Commissioner-in-Charge commended the use of AI in crime control as a positive step forward, while also stressing the need for continuous improvement in various areas, including technological advancements, data acquisition related to cybercrime, and training for police personnel.
The Surat Cyber Mitra Chatbot, available on WhatsApp number 9328523417, offers round-the-clock assistance to citizens, allowing them to access crucial information on cyber fraud and legal matters.
The Growing Cybercrime Threat
With the advancement of technology, cybercrime has become more complex due to the interconnectivity of digital devices and the internet. Criminals exploit vulnerabilities in software, networks, and human behaviour to perpetrate a wide range of malicious activities for illicit gain. Individuals and organisations face a wide range of cyber risks that can cause significant financial, reputational, and emotional harm.
Surat Police’s Strategic Initiative
The Surat Police Cyber Mitra Chatbot is an AI-powered tool for instant problem resolution. This innovative approach allows citizens to raise any issue or query from wherever they are and receive immediate, accurate responses. The chatbot is accessible 24 hours a day, seven days a week, and serves as a reliable resource for obtaining legal information related to cyber fraud.
The use of AI in police initiatives has been a topic of discussion for some time, and the Surat City Police have taken this step to leverage technology for the betterment of society. The chatbot promises to boost public trust in law enforcement and improve the legal system by addressing citizens' issues within seconds, ranging from financial disputes to cyber fraud incidents.
This accessibility extends to inquiries such as how to report financial crimes or cyber fraud incidents and how to understand legal procedures. The availability of accurate information will not only enhance citizens' trust in the police and contribute to the efficiency of law enforcement operations, but also lead to more informed interactions between citizens and the police, fostering a stronger sense of community security and collaboration.
The utilisation of this chatbot will facilitate access to information and empower citizens to engage more actively with the legal system. As trust in the police grows and legal processes become more transparent and accessible, the overall integrity and effectiveness of the legal system are expected to improve significantly.
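To make the idea concrete, here is a minimal, purely illustrative sketch of a keyword-based helpline chatbot webhook. The actual Cyber Mitra implementation is not described in the sources; the endpoint, message format, keywords and canned replies below are assumptions, and a production system would run behind the WhatsApp Business platform with far richer language understanding.

```python
# Hypothetical sketch of a keyword-based helpline chatbot webhook.
# The real Surat Police Cyber Mitra implementation is not public; the
# endpoint name, message format, and canned replies here are assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Canned answers for a few common queries (illustrative only).
FAQ = {
    "report": "To report cyber fraud, call 1930 or file a complaint on the National Cyber Crime Reporting Portal (cybercrime.gov.in).",
    "phishing": "Do not click unknown links or share OTPs; verify the sender before responding.",
    "helpline": "Cyber fraud helpline: 1930, available round the clock.",
}
DEFAULT_REPLY = "Please describe your issue (e.g. 'report fraud', 'phishing', 'helpline')."

def answer(text: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = text.lower()
    for keyword, reply in FAQ.items():
        if keyword in text:
            return reply
    return DEFAULT_REPLY

@app.route("/webhook", methods=["POST"])
def webhook():
    # Assume the messaging gateway POSTs JSON like {"from": "<number>", "text": "<message>"}.
    incoming = request.get_json(force=True)
    return jsonify({"to": incoming.get("from"), "text": answer(incoming.get("text", ""))})

if __name__ == "__main__":
    app.run(port=5000)
```

In this sketch, a messaging gateway would forward each incoming WhatsApp message to /webhook and relay the returned text back to the sender; anything a keyword lookup cannot answer would, in practice, be escalated to the AI model or a human operator.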
Conclusion
The Surat Police Cyber Mitra Chatbot is an AI-powered tool that provides round-the-clock assistance to citizens, enhancing public trust in law enforcement and streamlining access to legal information. This initiative bridges the gap between law enforcement and the community, fostering a stronger sense of security and collaboration, and driving improvements in the efficiency and integrity of the legal process.
References:
- https://www.ahmedabadmirror.com/surat-first-city-in-india-to-launch-ai-chatbot-to-tackle-cybercrime/81861788.html
- https://government.economictimes.indiatimes.com/news/secure-india/gujarat-surat-police-adopts-ai-to-check-cyber-crimes/107410981
- https://www.timesnownews.com/india/chatbot-and-advanced-analytics-surat-police-utilising-ai-technology-to-reduce-cybercrime-article-107397157
- https://www.grownxtdigital.in/technology/surat-police-ai-cyber-mitra-chatbot-gujarat/
The spread of misinformation has become a cause for concern for all stakeholders, be it the government, policymakers, business organisations or citizens. The current push to combat misinformation is rooted in the growing awareness that misinformation exploits public sentiment and can result in economic instability, personal risks, and a rise in political, regional, and religious tensions. The circulation of misinformation poses significant challenges for organisations, brands and administrators of all types. Online, it puts at risk not only the everyday content consumer but also those who share it and the platforms themselves. Sharing misinformation in the digital realm, intentionally or not, can have real consequences.
Consequences for Platforms
Platforms have been scrutinised for the content they allow to be published and what they don't. It is important to understand not only how this misinformation affects platform users, but also its impact and consequences for the platforms themselves. These consequences highlight the complex environment that social media platforms operate in, where the stakes are high from the perspective of both business and societal impact. They are:
- Legal Consequences: Platforms can be fined by regulators if they fail to comply with content moderation or misinformation-related laws. A prime example of such a law is the EU's Digital Services Act, which regulates digital services that act as intermediaries between consumers and goods, services, and content. Platforms can also face lawsuits from individuals, organisations or governments for damages caused by misinformation, with defamation suits a standard response to vectors of misinformation. In India, the Prohibition of Fake News on Social Media Bill, 2023 is in the pipeline and would establish a regulatory body for fake news on social media platforms.
- Reputational Consequences: Platforms rely on a trust model in which users trust the platform and its content. If users lose trust in the platform because of misinformation, engagement can fall; this might even lead to negative coverage that affects public opinion of the brand, its value and its viability in the long run.
- Financial Consequences: Businesses may end their engagement with platforms accused of spreading misinformation, which can lead to a drop in revenue. This can also have major consequences for the long-term financial health of the platform, such as a decline in its stock price.
- Operational Consequences: To counter the scrutiny from regulators, the platform might need to engage in stricter content moderation policies or other resource-intensive tasks, increasing operational costs for the platforms.
- Market Position Loss: If the reliability of a platform is in question, users can migrate to other platforms, leading to a loss of market share in favour of those platforms that manage misinformation more effectively.
- Freedom of Expression vs. Censorship Debate: Platforms must strike a balance between freedom of expression and the prevention of misinformation. Stricter content moderation can invite accusations of censorship if users feel their opinions are being unfairly suppressed.
- Ethical and Moral Responsibilities: Platforms' accountability extends to moral accountability, as they host content that affects different spheres of users' lives, such as public health and democracy. Misinformation can cause real-world harm, from health misinformation to incitement of violence, which means platforms bear a social responsibility as well.
Misinformation has become a global issue, and digital platforms therefore need to be vigilant as they navigate varying legal, cultural and social expectations across different jurisdictions. The diversity of approaches has complicated efforts to create standardised practices and policies, leading platforms to adopt flexible strategies for managing misinformation that align with both global and local standards.
Addressing the Consequences
These consequences can be addressed by undertaking the following measures:
- Implement a more robust content moderation system that combines AI and human oversight to identify and remove misinformation effectively (a minimal sketch of such a flow follows this list).
- Enhance transparency in platform policies for content moderation and decision-making, which would build user trust and reduce the backlash associated with perceived censorship.
- Collaborate with fact-checkers through partnerships to help verify the accuracy of content and reduce the spread of misinformation.
- Engage with regulators proactively to stay ahead of legal and regulatory requirements and avoid punitive action.
- Invest in media literacy initiatives that help users critically evaluate the content available to them.
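As a rough illustration of the first measure, the sketch below shows one way automated scoring and human oversight could be combined: content the model scores as high-risk is removed automatically, uncertain cases are queued for human reviewers, and the rest is published. The scoring function, thresholds and flagged terms are placeholder assumptions, not any platform's actual policy.

```python
# Illustrative sketch of an "AI + human oversight" moderation flow.
# The scorer is a stand-in; a real platform would call a trained classifier
# and apply far richer policies. Thresholds and terms are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    text: str
    misinfo_score: float = 0.0  # 0.0 (benign) .. 1.0 (likely misinformation)

@dataclass
class ModerationQueues:
    removed: List[Post] = field(default_factory=list)
    human_review: List[Post] = field(default_factory=list)
    published: List[Post] = field(default_factory=list)

def score(post: Post) -> float:
    """Placeholder scorer: counts hits against a tiny list of flagged phrases."""
    flagged_terms = ("miracle cure", "guaranteed returns", "secret leak")
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)

def moderate(posts: List[Post], remove_at: float = 0.8, review_at: float = 0.4) -> ModerationQueues:
    """Auto-remove high-confidence cases; send uncertain ones to human reviewers."""
    queues = ModerationQueues()
    for post in posts:
        post.misinfo_score = score(post)
        if post.misinfo_score >= remove_at:
            queues.removed.append(post)
        elif post.misinfo_score >= review_at:
            queues.human_review.append(post)
        else:
            queues.published.append(post)
    return queues

if __name__ == "__main__":
    sample = [Post("1", "Miracle cure guarantees full recovery"), Post("2", "Weather update for today")]
    result = moderate(sample)
    print(len(result.human_review), "post(s) queued for human review")
```

The human-review queue is where transparency and appeal processes would attach, which is how a moderation pipeline like this connects to the censorship concerns raised above.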
Final Takeaways
The spread of misinformation on digital platforms presents significant legal, reputational, financial, and operational challenges for all stakeholders. As a result, there is a critical need to balance the interlinked but seemingly competing priorities of preventing misinformation and upholding freedom of expression. Platforms must invest in robust, transparent content moderation systems, collaborate with fact-checkers, and support media literacy efforts to mitigate the adverse effects of misinformation. In addition, adapting to diverse international standards is essential to maintaining their global presence and societal trust.
References
- https://pirg.org/edfund/articles/misinformation-on-social-media/
- https://www.mdpi.com/2076-0760/12/12/674
- https://scroll.in/article/1057626/israel-hamas-war-misinformation-is-being-spread-across-social-media-with-real-world-consequences
- https://www.who.int/europe/news/item/01-09-2022-infodemics-and-misinformation-negatively-affect-people-s-health-behaviours--new-who-review-finds

Introduction
The nation got its first consolidated data protection legislation in the form of the Digital Personal Data Protection Act, 2023, in August of that year, giving Indian netizens a new degree of control over their data protection and privacy. The act lays down heavy penalties for non-compliance with its provisions, enforced by a Data Protection Board set up by the Central Government, which enjoys powers equivalent to those of a civil court. The act upholds the right to data privacy as a fundamental right under Articles 19(1)(a) and 21 of the Constitution of India, as judicially recognised in the landmark judgement Justice K.S. Puttaswamy vs. Union of India (2017). Let us take a look at the impact the act will make on Indian netizens.
What is Personal Data?
Personal data refers to any data about an individual who is identifiable by or in relation to that data, held in digitised form. This includes email IDs, mobile numbers, health data, banking data, photos, etc. The individual to whom the personal data relates is called the Data Principal; in the case of children/minors, the term includes their parents or lawful guardians, and it is mandatory for the parents or guardians to provide verifiable consent for the processing of a child's personal data for all or any purposes. Any person who determines the purpose and means of processing personal data is known as a Data Fiduciary, and persons registered under the act may act as Consent Managers to make the giving, managing and withdrawal of consent transparent. When it comes to the rights of netizens, the act is designed with “Safety by Design” in mind to secure the rights and responsibilities of netizens.
Rights secured under the DPDP Act 2023
- Right to Grievance Redressal: The Data fiduciary and the consent manager are required to respond to the grievances of the Data Principal within a time period, which is soon to be prescribed, thus creating a blanket of responsibility for the data fiduciary and consent manager.
- Right to Nominate: Data Principals have the right to nominate any other individual who shall, in the event of death or incapacity of the data principal, exercise his/her rights.
- Right to Access Information: The Data Principal has the right to seek confirmation from Data Fiduciaries regarding the processing of their personal data, as well as a summary of the personal data processed.
- Right to Erasure and Correction: Data Principals can approach Data Fiduciaries to exercise their right to the correction, completion, updating and erasure of their personal data.
- Territorial Rights: These rights apply to personal data processed within India, and to processing outside India where it is in connection with offering goods or services to Data Principals in India.
- Material Rights: These rights apply to personal data collected in digital form, as well as to data collected in non-digital form that is subsequently digitised.
Obligations for Data Fiduciaries
Data Fiduciaries are mandated to comply with the following provisions in order to observe the law of the land and secure the digital rights of netizens.
These are the obligations of Data Fiduciaries (a simple illustrative sketch of consent tracking and erasure follows the list):
- Implement technical and organisational measures to safeguard Personal Data.
- Determine the legal grounds for processing and obtaining consent from Data principals where required.
- Provide a privacy notice while obtaining consent from Data principals.
- Implement a mechanism for data principals to exercise their rights.
- Implement a grievance redressal mechanism for handling the queries from Data principals.
- Irrecoverably delete personal data after the purpose for which it was collected has expired or when the consent has been withdrawn.
- Have a breach management policy to notify the data protection board and the data principals in accordance with prescribed timelines.
- Sign a valid contract with Data processors to ensure key obligations are abided by them, including timely deletion of data.
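Purely as an illustration of how a Data Fiduciary might operationalise the consent and erasure obligations above, here is a minimal sketch. The class names, fields and in-memory store are assumptions made for the example; the DPDP Act prescribes the obligations, not this data model.

```python
# Illustrative sketch of consent tracking with erasure on withdrawal.
# Field names, purposes, and the in-memory store are assumptions; a real
# Data Fiduciary would also handle notices, breach reporting and audit logs.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, Optional

@dataclass
class ConsentRecord:
    principal_id: str
    purpose: str                      # e.g. "service delivery"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

class PersonalDataStore:
    """Minimal store that ties personal data to an active consent record."""

    def __init__(self) -> None:
        self.consents: Dict[str, ConsentRecord] = {}
        self.data: Dict[str, dict] = {}

    def record_consent(self, principal_id: str, purpose: str) -> None:
        self.consents[principal_id] = ConsentRecord(
            principal_id, purpose, granted_at=datetime.now(timezone.utc))

    def store(self, principal_id: str, record: dict) -> None:
        consent = self.consents.get(principal_id)
        if consent is None or not consent.active:
            raise PermissionError("No active consent for this Data Principal")
        self.data[principal_id] = record

    def withdraw_consent(self, principal_id: str) -> None:
        """On withdrawal, mark the consent and erase the associated personal data."""
        consent = self.consents.get(principal_id)
        if consent is not None:
            consent.withdrawn_at = datetime.now(timezone.utc)
        self.data.pop(principal_id, None)  # deletion once consent is withdrawn

if __name__ == "__main__":
    store = PersonalDataStore()
    store.record_consent("dp-001", "service delivery")
    store.store("dp-001", {"email": "user@example.com"})
    store.withdraw_consent("dp-001")
    print("dp-001" in store.data)  # False: personal data erased on withdrawal
```

A production system would add the prescribed breach-notification timelines, grievance handling and contractual controls over Data Processors on top of a record-keeping core like this.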
Conclusion
As the world steps into the digital age, it is pertinent for the governments of the world to come up with efficient and effective legislation to protect cyber rights and responsibilities; but as cyberspace has no boundaries, nations need to work in synergy to protect their cyber interests and netizens. This can only begin once all nations have indigenous cyber laws to protect netizens, a need the Indian Government has addressed in the form of the Digital Personal Data Protection Act, 2023. The future is full of emerging technologies and evolving cyber laws; hence, consolidating a basic legal structure now is of utmost importance, and the same is expected to be strengthened in India by the soon-to-be-released draft Digital India Bill.