DPDP Bill 2023: A Comparative Analysis
Introduction
| THE DIGITAL PERSONAL DATA PROTECTION BILL, 2022 (released for public consultation on November 18, 2022) | THE DIGITAL PERSONAL DATA PROTECTION BILL, 2023 (tabled in the Lok Sabha on August 3, 2023) |
| --- | --- |
| Personal data may be processed only for a lawful purpose for which an individual has given consent; consent may be deemed in certain cases. | The 2023 bill imposes reasonable obligations on data fiduciaries and data processors to safeguard digital personal data. |
| The 2022 bill provides for a Data Protection Board to deal with non-compliance with the Act. | The 2023 bill establishes a new Data Protection Board to ensure compliance, remedies and penalties. The Board is entrusted with the powers of a civil court, such as taking cognisance of personal data breaches, investigating complaints and imposing penalties; it can also issue directions to ensure compliance with the Act. |
| The 2022 bill grants certain rights to individuals, such as the right to obtain information, seek correction and erasure, and grievance redressal. | The 2023 bill grants more rights to individuals and strikes a balance between user protection and growing innovation, creating a transparent and accountable data governance framework. It also incorporates business-friendly provisions by removing criminal penalties for non-compliance and facilitating international data transfers. |
| Personal data could be processed for a lawful purpose for which the individual had given consent, and the bill introduced the concept of deemed consent. | The 2023 bill balances fundamental privacy rights with reasonable limitations on those rights. The new Data Protection Board will carefully examine instances of non-compliance and impose penalties on non-compliers. The bill does not expressly clarify what compensation is to be granted to the Data Principal in case of a data breach. Deemed consent appears in a new form as "legitimate uses". |
| The 2022 bill allowed the transfer of personal data to locations notified by the government. | The 2023 bill introduces a negative list, which restricts cross-border data transfer. |

Introduction
The Telecom Regulatory Authority of India (TRAI), on March 13, 2023, published a new rule to regulate telemarketing firms. TRAI has demonstrated strictness when it comes to bombarding users with intrusive marketing pitches. In a report, TRAI stated that 10-digit mobile numbers cannot be used for advertising; in practice, separate phone numbers are allotted for regular calls and for telemarketing calls. It is therefore an appropriate and much-needed move to suppress and eradicate phishing scammers and secure the Indian cyber-ecosystem at large.
What are the new rules?
Under the new rules, unregistered 10-digit mobile numbers used for promotional purposes will be shut down within the following five days. The directive banning calls from unregistered mobile numbers was published on February 16; accordingly, promotional calling from such 10-digit numbers must end within five days. This step comes roughly 6-8 months after the release of the Telecommunication Bill, 2022, which focused on creating a stable Indian telecom market and reducing phoney calls and messages by bad actors so as to curb cyber crimes like phishing. The measure is intended to distinguish legitimate calls from promotional ones. According to certain reports, some telecom firms allegedly break the law by using 10-digit mobile numbers to make unwanted calls and send promotional messages. All telecom service providers must implement the requirements of the recent TRAI directive within five days.
How will the new rules help?
The promotional use of 10-digit cellphone numbers had been allowed from the start. However, the latest NCRB report on cyber crimes, and the rising instances of frauds aimed at monetary gain by bad actors, point to the problem of unregulated promotional messages. This move is a critical step towards eradicating scammers from the cyber-ecosystem. TRAI has been incisive in understanding the dynamics and shortcomings of telecom spectrum and network regulation in India and has shown keen interest in suppressing the modes of technology used by scammers. The invention of a technology does not define its use; the policy around it does. Hence, it is important to draft and enact policies that better regulate existing and emerging technologies.
What to avoid?
In pursuance of the rules enacted by TRAI, the business owners involved in promotional services through 10-digit numbers will have to follow these steps-
- It is now against the law to use a 10-digit cellphone number for promotional calls.
- Businesses doing so should stop immediately.
- If they do not, the mobile number will be blocked within the following five days.
- Employees of telemarketing firms are advised not to make promotional calls from their personal mobile numbers.
- Promotional calls should be made only from the company's registered number.
Conclusion
The Indian netizen was exposed to technology a little later than the Western world. This changed drastically during the Covid-19 pandemic, as internet and technology penetration rates increased exponentially within just a couple of months. Bad actors have used this to their advantage, so it was pertinent for the government and its institutions to take effective and efficient steps to safeguard people from financial fraud. Since these frauds occur in high numbers largely due to a lack of knowledge and awareness, we need to work on preventive solutions rather than merely precautionary steps, and the new TRAI rules point towards a safe, secure and sustainable future for cyberspace in India.

Introduction
A message has recently circulated on WhatsApp alleging that voice and video calls made through the app will be recorded, that devices will be linked to the Ministry of Electronics and Information Technology's system from now on, and that WhatsApp will henceforth record chat activity and forward the details to the government. This anti-government fake news has been widely shared on social media.
Message claims
- The fake WhatsApp message claims that an 11-point new communication guideline has been established and that voice and video calls will be recorded and saved. It goes on to say that WhatsApp devices will be linked to the Ministry’s system and that Facebook, Twitter, Instagram, and all other social media platforms will be monitored in the future.
- The fake WhatsApp message further advises individuals not to transmit ‘any nasty post or video against the government or the Prime Minister regarding politics or the current situation’. The bogus message goes on to say that it is a “crime” to write or transmit a negative message on any political or religious subject and that doing so could result in “arrest without a warrant.”
- The false message claims that any message in a WhatsApp group with three blue ticks indicates that the message has been noted by the government. It also tells group members that if a message has one blue tick and two red ticks, the government is checking their information, and that if a member has three red ticks, the government has begun proceedings against the user, who will receive a court summons shortly.
WhatsApp does not record voice and video calls
News has been spreading that WhatsApp records users' voice and video calls, via a message recently shared on social media. According to the government, the news is fake: WhatsApp cannot record voice and video calls. Only third-party apps can record calls, and users who want call recordings typically install such apps.
Third-party apps used for recording voice and video calls
- App Call recorder
- Call recorder- Cube ACR
- Video Call Screen recorder for WhatsApp FB
- AZ Screen Recorder
- Video Call Recorder for WhatsApp
Case Study
In 2022, a similar fake message spread on social media, suggesting that the government might monitor WhatsApp chats and act against users. According to that message, a new WhatsApp policy had been released under which every message regarded as suspicious would show three blue ticks, indicating that the government had taken note of it. The same fake news is spreading again now.
WhatsApp Privacy policies against recording voice and video chats
WhatsApp's privacy policy states that voice calls, video calls and even chats cannot be recorded through WhatsApp because of end-to-end encryption. End-to-end encryption ensures that communication between two people is kept private and safe.
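To make the end-to-end idea concrete, here is a deliberately simplified toy sketch (not WhatsApp's actual protocol, which uses the Signal protocol): only the two endpoints hold the session key, so any server relaying the message sees ciphertext it cannot decrypt. The keystream construction below is for illustration only and is not a secure cipher.

```python
import hashlib

def keystream(shared_key: bytes, length: int) -> bytes:
    # Toy keystream derived from the shared key (illustrative, NOT secure).
    stream = b""
    counter = 0
    while len(stream) < length:
        block = hashlib.sha256(shared_key + counter.to_bytes(4, "big")).digest()
        stream += block
        counter += 1
    return stream[:length]

def encrypt(shared_key: bytes, data: bytes) -> bytes:
    # XOR the message with the keystream; XOR-ing again reverses it.
    return bytes(b ^ k for b, k in zip(data, keystream(shared_key, len(data))))

decrypt = encrypt  # XOR is its own inverse

# Only the sender and recipient know the key; a relay server that
# forwards `ciphertext` has no way to recover the plaintext.
key = b"session-key-known-only-to-endpoints"
ciphertext = encrypt(key, b"hello")
recovered = decrypt(key, ciphertext)
```

The key point is that decryption requires the shared key, which never leaves the two devices; neither the platform nor any third party in the middle can read the content.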
WhatsApp Brand New Features
- Chat lock feature: WhatsApp Chat Lock allows you to store chats in a folder that can only be viewed using your device's password or biometrics, such as a fingerprint. When you lock a chat, the details of the conversation are automatically hidden from notifications. WhatsApp's motive behind the chat lock feature is to find new methods to keep your messages private and safe; it protects your most private conversations with an extra degree of security.
- Edit chats feature: Users can now edit their WhatsApp messages up to 15 minutes after they have been sent. With this feature, users can correct a message or add extra points they want to include.
Conclusion
The spread of misinformation and fake news is a significant problem in the age of the internet, and it can have serious consequences for individuals, communities and even nations. The news is fake, as per the government: neither WhatsApp nor the government can access WhatsApp chats, voice calls or video calls because of end-to-end encryption, which protects users' communications. Last year, the government blocked 60 social media platforms for spreading anti-India news, and a fact-check unit identifies misleading and false online content.

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion purposes, and there has been an alarming increase in the use of artificial intelligence to create fake explicit images or videos for sextortion.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. The accessibility of AI tools and resources has also increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion purposes.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals' privacy, safety and well-being.
Technological solutions are needed: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
Legal frameworks addressing AI sextortion also need strengthening, including laws that specifically criminalise the creation, distribution and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.
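The early-detection measure above amounts to a platform-side moderation pipeline: score uploaded media with a detector and route high-risk items to human review rather than publishing them. A purely illustrative sketch follows; the detector here is a stub standing in for a real trained deepfake classifier, and all names (`Upload`, `moderate`, `stub_detector`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Upload:
    media_id: str
    score: float = 0.0  # probability the media is AI-generated

def moderate(
    uploads: List[Upload],
    detector: Callable[[str], float],
    threshold: float = 0.8,
) -> Tuple[List[Upload], List[Upload]]:
    """Score each upload; hold likely deepfakes for human review
    instead of publishing them immediately."""
    flagged, published = [], []
    for item in uploads:
        item.score = detector(item.media_id)
        (flagged if item.score >= threshold else published).append(item)
    return flagged, published

# Stub detector for illustration only; a real system would run a model
# trained to spot artefacts of AI-generated imagery.
def stub_detector(media_id: str) -> float:
    return 0.95 if media_id.startswith("suspect") else 0.1

flagged, published = moderate(
    [Upload("suspect-001"), Upload("clip-002")], stub_detector
)
```

The design choice worth noting is that flagged items are escalated to a human reviewer rather than auto-deleted, since classifier scores alone are unreliable evidence against an individual uploader.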

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-manipulated content. This violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to victims' reputation and well-being.
Targeting Teenagers: Targeting teenagers with extortion demands in AI sextortion cases is a particularly alarming aspect of this issue. Teenagers are especially vulnerable to AI sextortion due to their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this exposure to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach: technological advancements in detection and prevention, legal frameworks to hold offenders accountable, raising awareness about the risks, and collaboration between technology companies, law enforcement agencies and advocacy groups to combat this emerging threat and protect the well-being of individuals online.