Alterations in Personal Data Protection Bill
Introduction
The Digital Personal Data Protection Bill, 2023 introduces several changes to the 2022 draft: the removal of deemed consent, a change in the appellate mechanism (appeals will now be heard by the Telecom Disputes Settlement and Appellate Tribunal, or TDSAT), the retention of broad delegated legislation powers, and changes concerning data breaches. Among the other changes, the 2023 bill will now provide a negative list of countries to which personal data cannot be transferred.
New Version of the DPDP Bill
New Version of the DPDP Bill
The Digital Personal Data Protection Bill has a new version, with three major changes from the 2022 draft. First, the new version proposes that there shall be no deemed consent under the bill and that personal data may be processed without explicit consent only for limited uses. Under deemed consent, consent was presumed for the processing of data for a wide range of purposes; removing it restricts non-consensual processing to the following grounds:
- In the interest of the sovereignty and integrity of India and national security
- For the issue of subsidies, benefits, services, certificates, licenses, permits, etc
- To comply with any judgment or order under the law
- To protect, assist, or provide service in a medical or health emergency, a disaster situation, or to maintain public order
- In relation to an employee and his/her rights
The 2023 version now includes an appeals mechanism
It states that the Board will have the authority to issue directives for data breach remediation or mitigation, investigate data breaches and complaints, and levy financial penalties. It would be authorised to refer complaints to alternative dispute resolution, accept voluntary undertakings from data fiduciaries, and advise the government to block a data fiduciary’s website, app, or other online presence if the provisions of the law are repeatedly violated. The Telecom Disputes Settlement and Appellate Tribunal will hear any appeals.
The second change concerns delegated legislation. One criticism of the 2022 draft was that it gave the government extensive rule-making powers, a concern the committee also raised with the ministry. The committee wants provisions that cannot be fully defined within the scope of the bill to be addressed through such delegated legislation.
The third major change in the new version concerns data breaches: there will be no compensation for a data breach. This raises a significant concern for victims. If a victim suffers a data breach and approaches the relevant court or authority, they will not be awarded compensation for the loss suffered due to the breach.
Need for Changes under the DPDP
There is a need for these changes in digital personal data protection. Taking deemed consent as an example: by ‘deeming’ consent for subsequent uses, your data may be used for purposes other than those for which it was provided, and since there is no provision for a mandatory notice informing you of this, you may never even come to know about it.
Conclusion
The 2022 draft of the Digital Personal Data Protection Bill required changes to meet the needs of an evolving digital landscape. The removal of deemed consent will ultimately protect the data of the data principal, whose data will be used or processed only for the purposes for which consent is given. The change in the appellate mechanism is also crucial, as it provides an effective forum for addressing appeals. However, the absence of compensation for a data breach is detrimental to the interests of victims who have suffered one.

Recently, Apple has pushed away the Advanced Data Protection feature for its customers in the UK. This was done due to a request by the UK’s Home Office, which demanded access to encrypted data stored in its cloud service, empowered by the Investigatory Powers Act (IPA). The Act compels firms to provide information to law enforcement. This move and its subsequent result, however, have raised concerns—bringing out different perspectives regarding the balance between privacy and security, along with the involvement of higher authorities and tech firms.
What is Advanced Data Protection?
Advanced Data Protection is an opt-in feature: it is not enabled by default and must be activated by the user. It is Apple’s strongest data-protection tool, providing end-to-end encryption for the data the user chooses to protect. This goes beyond the standard (default) encryption that Apple applies to photos, back-ups, and notes, among other things. The flip side of such a strong security feature, from a user’s perspective, is that if the Apple account holder were to lose access to the account, they would lose their data as well, since there are no recovery paths.
With the feature being withdrawn, sign-ups have currently been halted, and the company is working on removing existing users’ access at a later, yet-to-be-confirmed date. For UK users who had not availed of the feature, there is no change. However, those currently trying to enable it are met with a notification on the Advanced Data Protection settings page stating that the feature can no longer be enabled. Consequently, there is no clarity on what happens to the data of UK users who had already enabled the feature, as even Apple does not have access to it. It is important to note that withdrawing the feature does not in itself ensure compliance with the Investigatory Powers Act (IPA), as the Act applies to tech firms worldwide that have a UK market. Apple has previously pushed back on similar requests to access data in the US.
Apple’s Stand on Encryption and Government Requests
The tech giant has resisted court orders, rejecting requests (made in 2016 and 2020) to write software that would allow officials to access and identify iPhones operated by gunmen. The UK Home Office’s demand is reportedly motivated by the role of end-to-end encryption in hiding criminal activities such as child sexual abuse and terrorism, hampering the efforts of security officials in catching offenders. Over the years, Apple has emphasised time and again its reluctance to create a backdoor to its encrypted data, citing the consequence that the data becomes more vulnerable to attackers once a pathway is created. The Salt Typhoon attack on the US telecommunications system is a recent example that has alerted officials, who now encourage the use of end-to-end encryption. Beyond this, such requests could set a dangerous precedent for how tech firms and governments operate together. This comes against the backdrop of the Paris AI Action Summit, where US Vice President J.D. Vance raised concerns regarding regulation. As per reports, Apple has now filed a legal complaint with the Investigatory Powers Tribunal, the UK’s judicial body that handles complaints with respect to surveillance power usage by public authorities.
The Broader Debate on Privacy vs. Security
This standoff raises critical questions about how tech firms and governments should collaborate without compromising fundamental rights. Striking the right balance between privacy and regulation is imperative, ensuring security concerns are addressed without dismantling individual data protection. The outcome of Apple’s legal challenge against the IPA may set a significant precedent for how encryption policies evolve in the future.
References
- https://www.bbc.com/news/articles/c20g288yldko
- https://www.bbc.com/news/articles/cgj54eq4vejo
- https://www.bbc.com/news/articles/cn524lx9445o
- https://www.yahoo.com/tech/apple-advanced-data-protection-why-184822119.html
- https://indianexpress.com/article/technology/tech-news-technology/apple-advanced-data-protection-removal-uk-9851486/
- https://www.techtarget.com/searchsecurity/news/366619638/Apple-pulls-Advanced-Data-Protection-in-UK-sparking-concerns
- https://www.computerweekly.com/news/366619614/Apple-withdraws-encrypted-iCloud-storage-from-UK-after-government-demands-back-door-access?_gl=1*1p1xpm0*_ga*NTE3NDk1NzQxLjE3MzEzMDA2NTc.*_ga_TQKE4GS5P9*MTc0MDc0MTA4Mi4zMS4xLjE3NDA3NDEwODMuMC4wLjA.
- https://www.theguardian.com/technology/2025/feb/21/apple-removes-advanced-data-protection-tool-uk-government
- https://proton.me/blog/protect-data-apple-adp-uk#:~:text=Proton-,Apple%20revoked%20advanced%20data%20protection%20
- https://www.theregister.com/2025/03/05/apple_reportedly_ipt_complaint/
- https://www.computerweekly.com/news/366616972/Government-agencies-urged-to-use-encrypted-messaging-after-Chinese-Salt-Typhoon-hack

Introduction
In today's digital age, protecting your personal information is of utmost importance, as bad actors are constantly on the lookout for ways to misuse sensitive or personal data. The Aadhaar card is a crucial document: it serves as your official government-verified ID and is used for purposes such as identity verification, KYC, and even financial transactions. Your Aadhaar card is used in many contexts, such as flight tickets booked by travel agents, hotel check-ins, and verification at educational institutions. Bad actors can gain unauthorized access to your Aadhaar data and commit cyber frauds such as identity theft and financial fraud. Hence it is important to protect your personal information and Aadhaar card details and prevent their misuse.
What is fingerprint cloning?
Cybercrooks have been exploiting the Aadhaar Enabled Payment System (AePS). These scams entail cloning individuals' Aadhaar-linked biometrics using silicone fingerprints and unauthorized biometric devices, and subsequently siphoning money from their bank accounts. Fingerprint cloning, also known as fingerprint spoofing, is a technique in which an individual tries to replicate someone else's fingerprint for unauthorized use. This is done for various reasons, including gaining unauthorized access to devices or data, or committing identity theft. The process of fingerprint cloning involves two stages: collecting the target's print and creating a physical replica of it.
The recent case of Aadhaar Card fingerprint cloning in Nawada
The Nawada Cyber Police unit has arrested two perpetrators engaged in fingerprint cloning fraud, accused of duping consumers of money from their bank accounts by cloning their fingerprints. One of the two runs a Common Service Centre (CSC), whereas the other is a sweeper at a DBGB branch bank. According to the police, an organized gang of cyber criminals had been defrauding consumers for the last two years with the help of the CSC operator, embezzling money from consumers' accounts by cloning their fingerprints and collecting their Aadhaar numbers. The operator used to collect Aadhaar numbers from consumers by having them put their thumb impression in a register. One of the perpetrators is accused of withdrawing more money from a consumer's account than requested and paying out less, or sometimes nothing at all; the other stole consumer data from the DBGB branch bank and prepared fingerprint clones. During the investigation of a related fraud case, the Special Investigation Team (SIT) of the Cyber Police conducted raids in the Govindpur and Roh police station areas on the basis of technical surveillance and available evidence, and arrested them.
Safety measures for the security of your Aadhaar Card data
- Locking your biometrics: One way to secure your Aadhaar data and prevent unauthorized access is by locking your biometrics. To lock and unlock your Aadhaar biometrics, visit the official UIDAI website or portal and select “Lock/Unlock Biometrics” from the Aadhaar services section. Enter your 12-digit Aadhaar number and the security code, and click the OTP option; an OTP will be sent to the mobile number registered with your Aadhaar. Once received, enter the OTP and click the login button, which will allow you to lock your biometrics. Enter the 4-digit security code shown on the screen and click the “Enable” button. Your biometrics will then be locked, and you will have to unlock them if you want to use them again. The official website of UIDAI is “https://uidai.gov.in/” and there is a dedicated Aadhaar helpline, 1947.
- Use a masked Aadhaar card: A masked Aadhaar card is a variant of the Aadhaar card designed to enhance the privacy and security of an individual's Aadhaar number. In a masked Aadhaar card, the first eight digits of the twelve-digit Aadhaar number are replaced by XXXX-XXXX and only the last four digits are visible, adding a layer of protection to the number. To download a masked Aadhaar card, visit the UIDAI website and click the "Download Aadhaar" option on the homepage. Next, enter your 12-digit Aadhaar number along with the security code displayed on the screen, then click "Send OTP". You will receive an OTP on your registered phone number; enter it in the provided field and click the "Submit" button. You will then be asked to select the format of your Aadhaar card; choose the masked Aadhaar option, which replaces the first eight digits of your Aadhaar number with "XXXX-XXXX" on the downloaded card. Once the format is selected, click the "Download Aadhaar" button and your masked Aadhaar card will be downloaded. If any organisation requires your Aadhaar for verification, you can share your masked Aadhaar card, which shows only the last four digits of your Aadhaar number. Just as you keep your bank details safe, you should keep your Aadhaar number secure, as otherwise people can misuse your identity for fraud.
- Monitoring your bank account transactions: Regularly monitor your bank account statements for any suspicious activity and you can also configure transaction alerts with your bank account transactions.
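The masking rule described above (first eight digits hidden, last four visible) is simple to express in code. The sketch below is a hypothetical illustration of that rule, not UIDAI's actual implementation; the function name `mask_aadhaar` is assumed for the example:

```python
def mask_aadhaar(aadhaar: str) -> str:
    """Return the masked form of a 12-digit Aadhaar number:
    the first eight digits hidden as XXXX-XXXX, last four kept."""
    # Strip the spaces or hyphens commonly used when writing the number.
    digits = aadhaar.replace(" ", "").replace("-", "")
    if len(digits) != 12 or not digits.isdigit():
        raise ValueError("An Aadhaar number has exactly 12 digits")
    return "XXXX-XXXX-" + digits[-4:]

print(mask_aadhaar("1234 5678 9012"))  # XXXX-XXXX-9012
```

Only the last four digits survive masking, which is why a masked card is safe to share for routine verification.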
Conclusion:
It is important to secure your Aadhaar card data effectively. Locking your biometrics is a valuable security measure that provides an additional layer of protection, safeguarding your identity from potential scammers: it secures your biometric data and other personal information, preventing unauthorized access to and misuse of your Aadhaar data. In today's evolving digital landscape, protecting your personal information is of utmost importance, and cyber hygiene practices and safety measures must be adopted by all of us, thereby establishing cyber peace and harmonizing cyberspace.
References:
- https://www.livehindustan.com/bihar/story-cyber-crime-csc-operator-and-bank-sweeper-arrested-in-nawada-cheating-by-cloning-finger-prints-8913667.html
- https://www.indiatoday.in/news-analysis/story/cloning-fingerprints-fake-shell-entities-is-your-aadhaar-as-safe-as-you-may-think-2398596-2023-06-27

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) came into force 20 days after publication, setting harmonized rules across the EU, and amends key regulations and directives to ensure a robust framework for AI technologies. The AI Act, a set of EU rules governing AI that has been in development for two years, entered into force across all 27 EU Member States on 1 August 2024, with the enforcement of the majority of its provisions commencing on 2 August 2026. The law prohibits certain uses of AI tools, including those that threaten citizens' rights: biometric categorization, untargeted scraping of faces, and systems that try to read emotions in the workplace and schools are banned, as are social scoring systems, and the use of predictive policing tools is prohibited in some instances. The law takes a phased approach to implementing the EU's AI rulebook, so various deadlines apply between now and then as different legal provisions start to apply.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
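The "whichever is higher" rule above can be made concrete with a small calculation. The sketch below is illustrative only (the function name and turnover figures are invented for the example); it computes the maximum possible fine for a given worldwide annual turnover:

```python
def eu_ai_act_max_fine(turnover_eur: float) -> float:
    """Maximum fine for the most serious violations under the EU AI Act:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * turnover_eur)

# A company with EUR 1 billion turnover: 7% is EUR 70 million,
# which exceeds the EUR 35 million floor.
print(eu_ai_act_max_fine(1_000_000_000))  # 70000000.0

# A company with EUR 200 million turnover: 7% is only EUR 14 million,
# so the EUR 35 million figure applies instead.
print(eu_ai_act_max_fine(200_000_000))    # 35000000.0
```

The percentage-based ceiling means the exposure scales with company size, while the fixed floor keeps the penalty significant even for smaller firms.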
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk AI, such as social scoring systems and manipulative AI, while the regulation mostly addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: under the Act, developers and deployers must ensure that end-users are aware that they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which includes the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though with the advancement of generative AI this may change. The majority of obligations fall on providers (developers) that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country, and also on third-country providers where the high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General purpose AI or GPAI model providers must provide technical documentation, and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Free and open license GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk – open or closed – must also conduct model evaluations, and adversarial testing, and track and report serious incidents and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. It will cover but not necessarily be limited to the obligations, particularly the relevant information to include in technical documentation for authorities and downstream providers, identification of the type and nature of systemic risks and their sources, and the modalities of risk management accounting for specific challenges in addressing risks due to the way they may emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers, and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers and independent experts may support the process.
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice apply nine months after, and the rules on general-purpose AI systems that need to comply with transparency requirements apply 12 months after entry into force. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; prohibition of certain AI systems.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, and to providers outside the EU, if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models and adopts a risk-based approach to governance, creating a tiered risk categorization system with four tiers (unacceptable, high, limited, and low) backed by various regulations and stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: €35 million, or 7 percent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation can be seen as strengthening the EU's position as a global leader in AI innovation and developing regulatory frameworks for emerging technologies. It sets a global benchmark for regulating AI. The companies to which the act applies will need to make sure their practices align with the same. The act may inspire other nations to develop their own legislation contributing to global AI governance. The world of AI is complex and challenging, the implementation of regulatory checks, and compliance by the concerned companies, all pose a conundrum. However, in the end, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are therefore key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while also taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide