Using incognito mode and VPN may still not ensure total privacy, according to expert
SVIMS Director and Vice-Chancellor B. Vengamma lighting a lamp to formally launch the cybercrime awareness programme conducted by the police department for the medical students in Tirupati on Wednesday.
An awareness meet on safe Internet practices was held for the students of Sri Venkateswara University (SVU) and Sri Venkateswara Institute of Medical Sciences (SVIMS) here on Wednesday.
“Cyber criminals on the prowl can easily track our digital footprint, steal our identity and resort to impersonation,” cyber expert I.L. Narasimha Rao cautioned the college students.
Addressing the students in two sessions, Mr. Narasimha Rao, who is a Senior Manager with CyberPeace Foundation, said seemingly common acts like browsing a website and liking and commenting on posts on social media platforms could be used by impersonators to recreate an account in our name.
Turning to the youth, Mr. Narasimha Rao said the incognito mode and Virtual Private Networks (VPNs), used as protected network connections, do not ensure total privacy, as third parties could still snoop on the websites being visited by users. He also cautioned them about tactics like ‘phishing’, ‘vishing’ and ‘smishing’ being used by cybercriminals to steal our passwords and gain access to our accounts.
“After cracking the whip on websites and apps that could potentially compromise our security, the Government of India has recently banned 232 more apps,” he noted.
Additional Superintendent of Police (Crime) B.H. Vimala Kumari appealed to cyber victims to call 1930 or the Cyber Mitra’s helpline 9121211100. SVIMS Director B. Vengamma stressed the need for caution with smartphones becoming an indispensable tool for students, be it for online education, seeking information, entertainment or for conducting digital transactions.
Introduction
AI-generated fake videos are proliferating on the Internet and becoming more common by the day. Sophisticated AI algorithms are used to manipulate or generate multimedia content such as videos, audio, and images. As a result, it has become increasingly difficult to differentiate between genuine, altered, and fake content, as these AI-manipulated videos look realistic. A recent study has shown that 98% of deepfake videos contain adult content featuring young girls, women, and children, with India ranking 6th among the nations suffering from misuse of deepfake technology. This practice has dangerous consequences: it can harm an individual's reputation, and criminals could use the technology to create a false narrative about a candidate or a political party during elections.
Deepfake videos rely on algorithms that progressively refine the fake content: a generator is built and trained to produce the desired output, and the process is repeated several times, allowing the generator to improve the content until it looks realistic and nearly flawless. Deepfake videos are created using specific approaches, some of which are:
- Lip syncing: This is the most common technique used in deepfakes. A voice recording is matched to the video so that the person appearing in it seems to have said something they never actually said.
- Audio deepfake: For audio deepfakes, a generative adversarial network (GAN) is used to clone a person’s voice based on their vocal patterns, refining the output until the desired result is achieved.
- Deepfakes have become so serious a threat that the technology could be used by bad actors or cyber-terrorist squads to advance their geopolitical agendas. In the past few years the number of cases has roughly doubled, targeting children, women and popular faces.
- Greater risk: cases of deepfakes have risen in the last few years. By the end of 2022, according to a survey, 96% of deepfake cases were directed against women and children.
- Every 60 seconds, a deepfake pornographic video is created. Now quicker and more affordable than ever, producing one takes less than 25 minutes and requires just one clean face image.
- The connection to deepfakes is that people can become targets of "revenge porn" without the publisher having sexually explicit photographs or films of the victim. They may be made using any number of random pictures or images collected from the internet to obtain the same result. This means that almost everyone who has taken a selfie or shared a photograph of oneself online faces the possibility of a deepfake being constructed in their image.
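The iterative refinement described above, where a generator repeatedly improves its output until it passes for real, can be sketched with a deliberately simplified toy. In this hypothetical example (not a real deepfake pipeline), the "discriminator" is just a realism score measuring closeness to a reference vector, and the generator's output is nudged by gradient steps until the score plateaus; an actual GAN trains both networks jointly on large datasets.

```python
import numpy as np

# Toy illustration of adversarial refinement (all names are hypothetical):
# a fake "candidate" is repeatedly refined until a discriminator-style
# realism score stops improving. Real GANs learn the scoring function too.

rng = np.random.default_rng(0)
reference = rng.normal(size=16)   # stand-in for features of genuine footage
candidate = rng.normal(size=16)   # the generator's initial fake output

def realism_score(x):
    """Discriminator stand-in: higher means x resembles the reference more."""
    return -float(np.sum((x - reference) ** 2))

scores = [realism_score(candidate)]
learning_rate = 0.1
for _ in range(50):                           # repeated refinement rounds
    grad = -2.0 * (candidate - reference)     # gradient of the score w.r.t. candidate
    candidate = candidate + learning_rate * grad
    scores.append(realism_score(candidate))

# Each round moves the fake closer to "realistic" until improvement plateaus.
print(f"initial score: {scores[0]:.2f}, final score: {scores[-1]:.6f}")
```

The point of the sketch is the loop structure: score, compute a correction, apply it, and repeat, which is why deepfake quality improves with every round of refinement.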
Deepfake-related security concerns
As deepfakes proliferate, more people are realising that they can be used not only to create non-consensual porn but also as part of disinformation and fake news campaigns with the potential to sway elections and rekindle frozen or low-intensity conflicts.
Deepfakes have three security implications. At the international level, strategic deepfakes have the potential to destroy a precarious peace. At the national level, deepfakes may be used to unduly influence elections and the political process or to discredit the opposition, which is a national security concern. At the personal level, women suffer disproportionately: they are exposed to sexually explicit deepfake content, and threatened with it, far more frequently than men.
Policy Consideration
Given the rising number of deepfake cases targeting women and children, policymakers need to be aware that deepfakes are also utilised for a variety of valid objectives, including artistic and satirical works. Simply banning deepfakes is therefore not consistent with fundamental liberties. One conceivable legislative option is to require a content warning or disclaimer on synthetic media. Deepfake is an advanced technology, and misuse of it is a crime.
What are the existing rules to combat deepfakes?
It's worth noting that both the IT Act of 2000 and the IT Rules of 2021 require social media intermediaries to remove deepfake videos or images as soon as feasible. Failure to follow these guidelines can result in up to three years in jail and a Rs 1 lakh fine. Rule 3(1)(b)(vii) requires social media intermediaries to ensure that their users do not host content that impersonates another person, and Rule 3(2)(b) requires such content to be taken down within 24 hours of receiving a complaint. Furthermore, the government has stipulated that any such post must be removed within 36 hours of being published online. Recently, the government has also issued an advisory to social media intermediaries to identify misinformation and deepfakes.
Conclusion
It is important to foster ethical and responsible consumption of technology. This can only be achieved by creating standards for both the creators and users, educating individuals about content limits, and providing information. Internet-based platforms should also devise techniques to deter the uploading of inappropriate information. We can reduce the negative and misleading impacts of deepfakes by collaborating and ensuring technology can be used in a better manner.
References
- https://timesofindia.indiatimes.com/life-style/parenting/moments/how-social-media-scandals-like-deepfake-impact-minors-and-students-mental-health/articleshow/105168380.cms?from=mdr
- https://www.aa.com.tr/en/science-technology/deepfake-technology-putting-children-at-risk-say-experts/2980880
- https://wiisglobal.org/deepfakes-as-a-security-issue-why-gender-matters/

Digital vulnerabilities like cyber-attacks and data breaches proliferate rapidly in today's hyper-connected world. These vulnerabilities can compromise sensitive data like personal information, financial data, and intellectual property, and can potentially threaten businesses of all sizes and in all sectors. Hence, it has become important to inform all stakeholders about any breach or attack so they can be well-prepared for the consequences of such an incident.
Non-reporting can result in heavy fines in many parts of the world. Data breaches caused by malicious acts are crimes and need proper investigation, and organisations may face significant penalties for failing to report such events, including huge financial setbacks and legal complications. Understanding why transparency is vital, and the regulatory framework that governs data breaches, is the first step.
The Current Indian Regulatory Framework on Data Breach Disclosure
A data breach, essentially, is the unauthorised processing or accidental disclosure of personal data, which may occur through its acquisition, sharing, use, alteration, destruction, or loss of access. Such incidents can compromise the affected data’s confidentiality, integrity, or availability. In India, the Information Technology Act of 2000 and the Digital Personal Data Protection Act of 2023 are the primary legislation tackling cybercrimes like data breaches.
- Under the DPDP Act, neither materiality thresholds nor express timelines have been prescribed for the reporting requirement. Data Fiduciaries are required to report incidents of personal data breach, regardless of their sensitivity or impact on the Data Principal.
- The IT (Indian Computer Emergency Response Team and Manner of Performing Functions and Duties) Rules, 2013, and the IT (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, along with the Cyber Security Directions issued in 2022 under section 70B(6) of the IT Act, 2000 (relating to information security practices, procedure, prevention, response and reporting of cyber incidents for a Safe & Trusted Internet), impose mandatory notification requirements on service providers, intermediaries, data centres and corporate entities upon the occurrence of certain cybersecurity incidents.
- These laws and regulations obligate companies to report breaches and incidents to regulators such as CERT-In and the Data Protection Board.
The Consequences of Non-Disclosure
Non-disclosure of a data breach has manifold consequences. They are as follows:
- Legal and financial penalties are the immediate consequence of a data breach in India. The DPDP Act prescribes a penalty of up to Rs 250 crore, along with civil suits and regulatory scrutiny brought by affected parties. Non-compliance can also attract action from CERT-In, leading to further reputational damage.
- In the long term, failure to disclose data breaches can erode customer trust as they are less likely to engage with a brand that is deemed unreliable. Investor confidence may potentially waver due to concerns about governance and security, leading to stock price drops or reduced funding opportunities. Brand reputation can be significantly tarnished, and companies may struggle with retaining and attracting customers and employees. This can affect long-term profitability and growth.
- Companies such as BigBasket and Jio in 2020 and Haldiram in 2022 have suffered data breaches in recent years. Poor transparency and delayed disclosures led to significant reputational damage, legal scrutiny, and regulatory action for these companies.
Measures for Improvement: Building Corporate Reputation via Transparency
Transparency is critical when disclosing data breaches. Prioritising stakeholders' data privacy enhances trust in and loyalty towards a company, and ensuring transparency mitigates backlash and demonstrates a company’s willingness to cooperate with authorities. A farsighted approach instils confidence in all stakeholders by showcasing a company's resilience and commitment to governance. These measures can be further improved by:
- Offering actionable steps for companies to establish robust data breach policies, including regular audits, prompt notifications, and clear communication strategies.
- Highlighting the importance of cooperation with regulatory bodies and how to ensure compliance with the DPDP Act and other relevant laws.
- Sharing best public communications practices post-breach to manage reputational and legal risks.
Conclusion
Maintaining transparency when a data breach happens is more than a legal obligation. It is a good strategy to retain a corporate reputation. Companies can mitigate the potential risks (legal, financial and reputational) by informing stakeholders and cooperating with regulatory bodies proactively. In an era where digital vulnerabilities are ever-present, clear communication and compliance with data protection laws such as the DPDP Act build trust, enhance corporate governance, and secure long-term business success. Proactive measures, including audits, breach policies, and effective public communication, are critical in reinforcing resilience and fostering stakeholder confidence in the face of cyber threats.
References
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.cert-in.org.in/PDF/CERT-In_Directions_70B_28.04.2022.pdf
- https://chawdamrunal.medium.com/the-dark-side-of-covering-up-data-breaches-why-transparency-is-crucial-fe9ed10aac27
- https://www.dlapiperdataprotection.com/index.html?t=breach-notification&c=IN

Introduction
The United Nations General Assembly (UNGA) has unanimously adopted the first global resolution on Artificial Intelligence (AI), encouraging countries to safeguard human rights, keep personal data safe, and monitor AI for the threats associated with it. This non-binding resolution, proposed by the United States and co-sponsored by China and over 120 other nations, advocates the strengthening of privacy policies. This step is crucial for governments across the world seeking to shape how AI grows, given the dangers it carries that could undermine the protection and promotion of human dignity and fundamental freedoms. The resolution emphasizes the importance of respecting human rights and fundamental freedoms throughout the life cycle of AI systems, highlighting the benefits of digital transformation and safe AI systems.
Key highlights
● This is indeed a landmark move by the UNGA, which adopted the first global resolution on AI. This resolution encourages member countries to safeguard human rights, protect personal data, and monitor AI for risks.
● Global leaders have shown their consensus for safe, secure, trustworthy AI systems that advance sustainable development and respect fundamental freedom.
● The resolution is the latest in a series of initiatives by governments around the world to shape AI. AI will therefore have to be created and deployed through the lens of humanity and dignity, safety and security, and human rights and fundamental freedoms throughout the life cycle of AI systems.
● UN resolution encourages global cooperation, warns against improper AI use, and emphasizes the issues of human rights.
● The resolution aims to protect from potential harm and ensure that everyone can enjoy its benefits. The United States has worked with over 120 countries at the United Nations, including Russia, China, and Cuba, to negotiate the text of the resolution adopted.
Brief Analysis
AI has become increasingly prevalent in recent years, with chatbots such as ChatGPT taking the world by storm. AI has been steadily attempting to replicate human-like thinking and problem-solving. Furthermore, machine learning, a key aspect of AI, involves learning from experience and identifying patterns to solve problems autonomously. The contemporary emergence of AI has, however, raised questions about its ethical implications, its potential negative impact on society, and whether it is too late to control it.
While AI is capable of solving problems quickly and performing various tasks with ease, it also brings its own set of problems. As AI continues to grow, global leaders have called for regulations to prevent the significant harm an unregulated AI landscape could cause and to encourage the use of trustworthy AI. The European Union (EU) has come up with an AI law, the “European AI Act”. Recently, a Senate bill called “The AI Consent Bill” was introduced in the US. Similarly, India is proactively working towards setting the stage for a more regulated AI landscape by fostering dialogue and taking significant measures. Recently, the Ministry of Electronics and Information Technology (MeitY) issued an advisory on AI, which requires explicit permission before deploying under-testing or unreliable AI models on India's Internet. The advisory also outlines measures to combat deepfakes and misinformation.
AI has thus become a powerful tool that has raised concerns about its ethical implications and the potential negative influence on society. Governments worldwide are taking action to regulate AI and ensure that it remains safe and effective. Now, the groundbreaking move of the UNGA, which adopted the global resolution on AI, with the support of all 193 U.N. member nations, shows the true potential of efforts by countries to regulate AI and promote safe and responsible use globally.
New AI tools have emerged in the public sphere that may threaten humanity from an unexpected direction. AI is able to improve itself by learning through machine learning, and developers are often surprised by the emergent abilities and qualities of these tools. The ability to manipulate and generate language, whether with words, images, or sounds, is the most important aspect of the current phase of the ongoing AI revolution. AI can have several implications in the future; hence, it is high time to regulate it and promote its safe, secure and responsible use.
Conclusion
The UNGA has approved its global resolution on AI, marking significant progress towards creating global standards for the responsible development and deployment of AI. The resolution underscores the critical need to protect human rights, safeguard personal data, and closely monitor AI technologies for potential hazards. It calls for more robust privacy regulations and recognises the dangers associated with improper AI systems. This profound resolution reflects a unified stance among UN member countries on overseeing AI to prevent possible negative effects and promote safe, secure and trustworthy AI.
References