Delhi High Court Directs Centre to Nominate Members for Deepfake Committee
The Delhi High Court, vide order dated 21 November 2024, directed the Centre to nominate members for a committee constituted to examine the issue of deepfakes. The court was informed by the Union Ministry of Electronics and Information Technology (MeitY) that a committee on deepfake matters had been formed on 20 November 2024. The order was passed while the court was hearing two writ petitions concerning the non-regulation of deepfake technology in the country and the threat of its potential misuse. The Centre submitted that it was actively taking measures to address and mitigate the issues related to deepfake technology, and the court directed the central government to nominate the members within a week.
The court further stated that the committee shall examine and take into consideration the suggestions filed by the petitioners, and shall consider the regulations and statutory frameworks of foreign jurisdictions such as the European Union. The court directed the committee to invite the experiences and suggestions of stakeholders such as intermediary platforms, telecom service providers, victims of deepfakes, and websites that provide and deploy deepfakes. Counsel for the petitioners stated that delay in creating a mechanism for the detection and removal of deepfakes is causing immense hardship to the public at large. The court further directed the committee to submit its report as expeditiously as possible, preferably within three months. The matter is next listed on 24 March 2025.
CyberPeace Outlook
Owing to the misuse of deepfakes by bad actors, it has become increasingly difficult for users to differentiate between genuine content and altered content created with deepfake tools. This growing misuse has led to a rise in cyber crimes and poses dangers to users' privacy. Bad actors use random pictures and images collected from the internet to create such non-consensual deepfake content. Deepfake videos also pose risks of misinformation and fake-news campaigns, with the potential to sway elections, cause confusion and mistrust in authorities, and more.
Dedicated legislation governing deepfakes is the need of the hour. It is important to foster regulated, ethical, and responsible use of technology, and comprehensive legislation on the issue can help ensure that the technology is used in a better manner. Dedicated deepfake regulation, together with ethical practices deployed through a coordinated approach by the stakeholders concerned, can effectively manage the problems presented by the misuse of deepfake technology. Legal frameworks need to be equipped to handle the challenges posed by deepfakes and AI. Accountability in AI is also a complex issue that requires comprehensive legal reform. The government should draft policies and regulations that balance innovation with regulation. Through a multifaceted approach and a comprehensive regulatory landscape, we can mitigate the risks posed by deepfakes and safeguard privacy, trust, and security in the digital age.
References
- https://www.devdiscourse.com/article/law-order/3168452-delhi-high-court-calls-for-action-on-deepfake-regulation
- https://images.assettype.com/barandbench/2024-11-23/w63zribm/Chaitanya_Rohilla_vs_Union_of_India.pdf
Executive Summary:
In late 2024, an Indian healthcare provider experienced a severe cybersecurity attack that demonstrated how powerful AI-driven ransomware has become. This blog discusses the background to the attack, how it unfolded, the medical and financial effects it caused, how the organisation reacted, and the final outcome, stressing the dangers facing a healthcare industry that lacks adequate cybersecurity measures. The incident interrupted the normal functioning of the business and illustrated the potential economic and reputational losses that cyber threats can inflict. The technical findings of the study provide further evidence and analysis of advanced AI malware and of best practices for defending against it.
1. Introduction
The integration of artificial intelligence (AI) in cybersecurity has revolutionised both defence mechanisms and the strategies employed by cybercriminals. AI-powered attacks, particularly ransomware, have become increasingly sophisticated, posing significant threats to various sectors, including healthcare. This report delves into a case study of an AI-powered ransomware attack on a prominent Indian healthcare provider in 2024, analysing the attack's execution, impact, and the subsequent response, along with key technical findings.
2. Background
In late 2024, a leading healthcare organisation in India, one involved in the research and development of AI techniques, fell prey to a ransomware attack that used AI to maximise its impact. With so many businesses now reliant on data, and with healthcare in particular requiring real-time operations, the sector has become a favourite target of cybercriminals. AI-aided attackers were able to mount a far more precise and damaging attack that severely affected the provider's operations while jeopardising the safety of patient information.
3. Attack Execution
The attack began with a phishing email targeting a hospital administrator, who received a message with an infected attachment that, when opened, injected the AI-enabled ransomware into the hospital's network. Unlike traditional ransomware, which spreads copies indiscriminately, this AI-incorporated ransomware first studied the hospital's IT network. It then focused on the most important systems, such as the electronic health records and billing departments, and targeted them for encryption.
The malware's AI capability allowed it to learn and adjust its propagation through the network and to prioritise the encryption of the most valuable data. This precision not only increased the potential ransom demand but also reduced the risk of early discovery.
4. Impact
The consequences of the attack were immediate and severe:
- Operational Disruption: The encryption of critical systems brought hospital operations to a standstill. Surgeries, routine medical procedures, and patient admissions were slowed, and in some cases patients were referred to other hospitals.
- Data Security: Electronic patient records and associated billing data became inaccessible, placing patient confidentiality at risk. The data loss threatened to become permanent, much to the concern of both the healthcare provider and its patients.
- Financial Loss: The attackers demanded 100 crore Indian rupees (approximately USD 12 million) for the decryption key. Although the hospital did not pay the ransom, it still suffered significant losses: operational losses while servers were down, losses borne by affected patients, the cost of responding to the incident, and reputational damage.
5. Response
As soon as the hospital's management was informed of the ransomware, its IT department joined forces with cybersecurity professionals and local police. The team decided not to pay the ransom and instead to recover the systems from backups. Although this was an ethically and strategically sound decision, it was not without challenges: reconstruction was gradual, and certain elements of the patient records were permanently lost.
To avoid such attacks in the future, the healthcare provider put in place several organisational and technical measures, such as network isolation and strengthened cybersecurity controls. Even so, the attack revealed serious gaps in the provider's IT security measures and protocols.
6. Outcome
The attack had far-reaching consequences:
- Financial Impact: The healthcare provider suffered substantial financial losses from the service disruption, from bolstering its cybersecurity, and from compensating affected patients.
- Reputational Damage: The breach risked a serious loss of confidence among patients and the public, damaging the provider's reputation. This in turn affected patient care and had long-term effects on revenue and patient retention.
- Industry Awareness: The breach fed discussions across the country on how to improve cybersecurity provisions in the healthcare industry, prompting other care providers to review and strengthen their cyber defences.
7. Technical Findings
The AI-powered ransomware attack on the healthcare provider revealed several technical vulnerabilities and provided insights into the sophisticated mechanisms employed by the attackers. These findings highlight the evolving threat landscape and the importance of advanced cybersecurity measures.
7.1 Phishing Vector and Initial Penetration
- Sophisticated Phishing Tactics: The phishing email was crafted with precision, utilising AI to mimic the communication style of trusted contacts within the organisation. The email bypassed standard email filters, indicating a high level of customization and adaptation, likely due to AI-driven analysis of previous successful phishing attempts.
- Exploitation of Human Error: The phishing email targeted an administrative user with access to critical systems, exploiting the lack of stringent access controls and user awareness. The successful penetration into the network highlighted the need for multi-factor authentication (MFA) and continuous training on identifying phishing attempts.
7.2 AI-Driven Malware Behavior
- Dynamic Network Mapping: Once inside the network, the AI-powered malware executed a sophisticated mapping of the hospital's IT infrastructure. Using machine learning algorithms, the malware identified the most critical systems—such as Electronic Health Records (EHR) and the billing system—prioritising them for encryption. This dynamic mapping capability allowed the malware to maximise damage while minimising its footprint, delaying detection.
- Adaptive Encryption Techniques: The malware employed adaptive encryption techniques, adjusting its encryption strategy based on the system's response. For instance, if it detected attempts to isolate the network or initiate backup protocols, it accelerated the encryption process or targeted backup systems directly, demonstrating an ability to anticipate and counteract defensive measures.
- Evasive Tactics: The ransomware utilised advanced evasion tactics, such as polymorphic code and anti-forensic features, to avoid detection by traditional antivirus software and security monitoring tools. The AI component allowed the malware to alter its code and behaviour in real time, making signature-based detection methods ineffective.
7.3 Vulnerability Exploitation
- Weaknesses in Network Segmentation: The hospital’s network was insufficiently segmented, allowing the ransomware to spread rapidly across various departments. The malware exploited this lack of segmentation to access critical systems that should have been isolated from each other, indicating the need for stronger network architecture and micro-segmentation.
- Inadequate Patch Management: The attackers exploited unpatched vulnerabilities in the hospital’s IT infrastructure, particularly within outdated software used for managing patient records and billing. The failure to apply timely patches allowed the ransomware to penetrate and escalate privileges within the network, underlining the importance of rigorous patch management policies.
7.4 Data Recovery and Backup Failures
- Inaccessible Backups: The malware specifically targeted backup servers, encrypting them alongside primary systems. This revealed weaknesses in the backup strategy, including the lack of offline or immutable backups that could have been used for recovery. The healthcare provider’s reliance on connected backups left it vulnerable to such targeted attacks.
- Slow Recovery Process: The restoration of systems from backups was hindered by the sheer volume of encrypted data and the complexity of the hospital’s IT environment. The investigation found that the backups were not regularly tested for integrity and completeness, resulting in partial data loss and extended downtime during recovery.
7.5 Incident Response and Containment
- Delayed Detection and Response: The initial response was delayed due to the sophisticated nature of the attack, with traditional security measures failing to identify the ransomware until significant damage had occurred. The AI-powered malware’s ability to adapt and camouflage its activities contributed to this delay, highlighting the need for AI-enhanced detection and response tools.
- Forensic Analysis Challenges: The anti-forensic capabilities of the malware, including log wiping and data obfuscation, complicated the post-incident forensic analysis. Investigators had to rely on advanced techniques, such as memory forensics and machine learning-based anomaly detection, to trace the malware’s activities and identify the attack vector.
8. Recommendations Based on Technical Findings
To prevent similar incidents, the following measures are recommended:
- AI-Powered Threat Detection: Implement AI-driven threat detection systems capable of identifying and responding to AI-powered attacks in real time. These systems should include behavioural analysis, anomaly detection, and machine learning models trained on diverse datasets (see the first sketch following this list).
- Enhanced Backup Strategies: Develop a more resilient backup strategy that includes offline, air-gapped, or immutable backups. Regularly test backup systems to ensure they can be restored quickly and effectively in the event of a ransomware attack (see the backup-integrity sketch following this list).
- Strengthened Network Segmentation: Re-architect the network with robust segmentation and micro-segmentation to limit the spread of malware. Critical systems should be isolated, and access should be tightly controlled and monitored.
- Regular Vulnerability Assessments: Conduct frequent vulnerability assessments and patch management audits to ensure all systems are up to date. Implement automated patch management tools where possible to reduce the window of exposure to known vulnerabilities (see the patch-audit sketch following this list).
- Advanced Phishing Defences: Deploy AI-powered anti-phishing tools that can detect and block sophisticated phishing attempts. Train staff regularly on the latest phishing tactics, including how to recognize AI-generated phishing emails (see the phishing-triage sketch following this list).
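To make the anomaly-detection recommendation concrete, below is a minimal sketch using scikit-learn's Isolation Forest. The per-host features, baseline data, and values are illustrative assumptions rather than a production design; a real deployment would train continuously on live endpoint and network telemetry.

```python
# Minimal sketch: behaviour-based anomaly detection with an Isolation
# Forest. Features and baseline values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Example per-host features: [connections/min, MB sent out, files touched/min]
baseline = np.array([
    [12, 0.8, 40],
    [15, 1.1, 55],
    [10, 0.6, 35],
    [14, 0.9, 50],
])  # recorded during known-normal operation

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# A host suddenly touching thousands of files with heavy outbound traffic
# matches the bulk-encryption/exfiltration pattern described in Section 7.
suspect = np.array([[95, 40.0, 5200]])
if model.predict(suspect)[0] == -1:  # -1 means "anomalous"
    print("ALERT: host behaviour deviates from baseline -- investigate")
```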
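For the backup recommendation, the simplest control missing in this incident was routine integrity testing. The sketch below recomputes SHA-256 digests of backup files and compares them to a manifest written at backup time; the directory layout and manifest format are hypothetical.

```python
# Minimal sketch: verify backup integrity against a SHA-256 manifest.
# BACKUP_DIR and the manifest format are hypothetical assumptions.
import hashlib
import json
from pathlib import Path

BACKUP_DIR = Path("/mnt/backups/latest")
MANIFEST = BACKUP_DIR / "manifest.json"  # {"relative/path": "sha256-hex", ...}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

expected = json.loads(MANIFEST.read_text())
failures = [
    rel for rel, want in expected.items()
    if not (BACKUP_DIR / rel).is_file() or sha256_of(BACKUP_DIR / rel) != want
]
print("backup OK" if not failures else f"CORRUPT OR MISSING: {failures}")
```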
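For the vulnerability-assessment recommendation, part of a patch audit can be automated. The sketch below flags installed Python distributions older than a locally maintained minimum-patched version; the MIN_PATCHED table is a hypothetical example, and a full audit would also cover operating systems and third-party applications.

```python
# Minimal sketch: flag installed Python packages below a locally
# maintained "first patched version" table. MIN_PATCHED is hypothetical.
from importlib.metadata import distributions
from packaging.version import Version  # requires the 'packaging' package

MIN_PATCHED = {
    "requests": "2.31.0",      # illustrative entries only
    "cryptography": "42.0.4",
}

installed = {
    (dist.metadata["Name"] or "").lower(): dist.version
    for dist in distributions()
}
for name, minimum in MIN_PATCHED.items():
    have = installed.get(name)
    if have and Version(have) < Version(minimum):
        print(f"OUTDATED: {name} {have} < {minimum} -- schedule patching")
```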
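Finally, for the phishing-defence recommendation, the sketch below shows the scoring structure of rule-based triage using a few classic signals (display-name/address mismatch, pressure language, links to raw IP addresses). The keywords and threshold are illustrative; an AI-powered filter would replace these hand-written rules with a trained model while keeping a similar score-and-threshold shape.

```python
# Minimal sketch: heuristic phishing triage. Signals, keywords, and the
# alert threshold are illustrative assumptions, not a vetted rule set.
import re
from email import message_from_string
from email.utils import parseaddr

def phishing_score(raw_email: str) -> int:
    msg = message_from_string(raw_email)
    body = msg.get_payload() or ""
    display, addr = parseaddr(msg.get("From", ""))
    score = 0
    if display and addr and display.split()[0].lower() not in addr.lower():
        score += 1  # display name unrelated to the actual address
    if re.search(r"\b(urgent|immediately|verify your account)\b", body, re.I):
        score += 1  # pressure language
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2  # link pointing at a raw IP address
    return score

sample = (
    "From: IT Support <helpdesk@bad-domain.example>\n"
    "\n"
    "Urgent: verify your account at http://203.0.113.7/login"
)
print("flag for review" if phishing_score(sample) >= 2 else "pass")
```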
9. Conclusion
The AI-empowered ransomware attack on the Indian healthcare provider in 2024 makes clear that the threat of advanced cyber attacks on healthcare facilities has grown. The technical findings outline the steps taken by the attackers, underlining the importance of ongoing, active, and strong security. The event is a stark reminder of the need not only to remain alert and invest heavily in cybersecurity, but also to formulate measures for countering such incidents with limited harm. Cybercriminals now use AI to increase the effectiveness of their attacks, and it is high time all healthcare organisations ensured that their crucial systems and data are well protected.

Introduction
Misinformation in India has emerged as a significant societal challenge, wielding a potent influence on public perception, political discourse, and social dynamics. A substantial number of first-time voters across India identified fake news as a real problem in the nation. With the widespread adoption of digital platforms, false narratives, manipulated content, and fake news have found fertile ground to spread unchecked.
Against the backdrop of India being the largest market of WhatsApp users, who forward more content on chats than users anywhere else, the practice of fact-checking forwarded information remains low. Heavy reliance on print media, television, unreliable news channels and, above all, social media platforms acts as a catalyst, since studies reveal that most Indians trust any content forwarded by family and friends. Notably, out of all risks, misinformation and disinformation ranked the highest in India, ahead of infectious diseases, illicit economic activity, inequality, and labour shortages. World Economic Forum analysts, in their 2024 Global Risks Report, note that “misinformation and disinformation in electoral processes could seriously destabilise the real and perceived legitimacy of newly elected governments, risking political unrest, violence and terrorism and long-term erosion of democratic processes.”
The Supreme Court of India on Misinformation
The Supreme Court of India, through various judgements, has noted the impact of misinformation on democratic processes within the country, especially during elections and voting. In 1995, while adjudicating a matter pertaining to keeping the broadcasting media under public control, it noted that democracy becomes a farce when the medium of information is monopolized either by a partisan central authority or by private individuals or oligarchic organizations.
In 2003, the Court stated that “Right to participate by casting a vote at the time of election would be meaningless unless the voters are well informed about all sides of the issue in respect of which they are called upon to express their views by casting their votes. Disinformation, misinformation, non-information all equally create an uninformed citizenry which would finally make democracy a mobocracy and a farce.” It noted that elections would be a useless procedure if voters remained unaware of the antecedents of the candidates contesting elections. Thus, a necessary aspect of a voter’s duty to cast intelligent and rational votes is being well-informed. Such information forms one facet of the fundamental right under Article 19 (1)(a) pertaining to freedom of speech and expression. Quoting James Madison, it stated that a citizen’s right to know the true facts about their country’s administration is one of the pillars of a democratic State.
On a similar note, the Supreme Court, while discussing the disclosure of information by an election candidate, gave weightage to the High Court of Bombay‘s opinion on the matter, which opined that non-disclosure of information resulted in misinformation and disinformation, thereby influencing voters to take uninformed decisions. It stated that a voter had the elementary right to know the full particulars of a candidate who is to represent him in Parliament/Assemblies.
While misinformation has been discussed primarily in relation to elections, its effects in other sectors have also been considered from time to time. In particular, the court highlighted the World Health Organisation's observation in 2021, while discussing the spread of COVID-19, that the pandemic was not only an epidemic but also an “infodemic”, owing to the overabundance of information on the internet, much of it riddled with misinformation and disinformation. While condemning governments' direct or indirect threats of prosecution against citizens, it noted that citizens who relied on the internet to seek help in securing medical facilities and oxygen tanks were being targeted through allegations that the information they posted was false and intended to create panic, defame the administration, or damage the national image. It instructed authorities to cease such threats and to prevent any clampdown on information sharing.
More recently, in Facebook v. Delhi Legislative Assembly [(2022) 3 SCC 529], the apex court, while upholding the summons issued to Facebook by the Delhi Legislative Assembly in the aftermath of the 2020 Delhi Riots, noted that while social media enables equal and open dialogue between citizens and policymakers, it is also a tool by which extremist views are peddled into the mainstream, thereby spreading misinformation. It noted Facebook's role in Myanmar, where misinformation and posts that Facebook employees missed fuelled offline violence. Since Facebook is one of the most popular social media applications, the platform itself acts as a power centre by hosting various opinions and voices on its forum. This directly impacts the governance of States, and some form of liability must attach to the platform. The Supreme Court objected to Facebook taking contrary stands in different jurisdictions: in the US it projected itself as a publisher, enabling it to maintain control over the material disseminated from its platform, while in India “it has chosen to identify itself purely as a social media platform, despite its similar functions and services in the two countries.”
Conclusion
The pervasive issue of misinformation in India is a multifaceted challenge with profound implications for democratic processes, public awareness, and social harmony. The alarming statistics of fake news recognition among first-time voters, coupled with a lack of awareness regarding fact-checking organizations, underscore the urgency of addressing this issue. The Supreme Court of India has consistently recognized the detrimental impact of misinformation, particularly in elections. The judiciary has stressed the pivotal role of an informed citizenry in upholding the essence of democracy. It has emphasized the right to access accurate information as a fundamental aspect of freedom of speech and expression. As India grapples with the challenges of misinformation, the intersection of technology, media literacy and legal frameworks will be crucial in mitigating the adverse effects and fostering a more resilient and informed society.
References
- https://thewire.in/media/survey-finds-false-information-risk-highest-in-india
- https://www.statista.com/topics/5846/fake-news-in-india/#topicOverview
- https://www.weforum.org/publications/global-risks-report-2024/digest/
- https://main.sci.gov.in/supremecourt/2020/20428/20428_2020_37_1501_28386_Judgement_08-Jul-2021.pdf
- Secretary, Ministry of Information & Broadcasting, Govt, of India and Others v. Cricket Association of Bengal and Another [(1995) 2 SCC 161]
- People’s Union for Civil Liberties (PUCL) v. Union of India [(2003) 4 SCC 399]
- Kisan Shankar Kathore v. Arun Dattatray Sawant and Others [(2014) 14 SCC 162]
- Distribution of Essential Supplies & Services During Pandemic, In re [(2021) 18 SCC 201]
- Facebook v. Delhi Legislative Assembly [(2022) 3 SCC 529]

Introduction
Artificial Intelligence (AI) is fast transforming the digital world, reshaping healthcare, finance, education, and cybersecurity. But alongside these benefits, bad actors are weaponising the technology. Increasingly, state-sponsored cyber actors are misusing AI tools such as ChatGPT and other generative models to automate disinformation, enable cyberattacks, and speed up social engineering operations. This write-up explores why and how AI, in the form of large language models (LLMs), is being exploited in cyber operations associated with adversarial states, and the necessity for international vigilance, regulation, and AI safety guidelines.
The Shift: AI as a Cyber Weapon
State-sponsored threat actors are misusing tools such as ChatGPT to turbocharge their cyber arsenal.
- Phishing Campaigns Using AI: Generative AI allows for highly convincing and grammatically correct phishing emails. Unlike the shoddily written scams of yesteryear, these AI-based messages are tailored to the victim's location, language, and professional background, increasing the attack success rate considerably. Example: OpenAI and Microsoft have reported that Russian and North Korean APTs employed LLMs to create customised phishing lures and notes on malware obfuscation.
- Malware Obfuscation and Script Generation: Large Language Models (LLMs) such as ChatGPT may be used by cyber attackers to help write, debug, and camouflage malicious scripts. While most AI tools contain safety mechanisms to guard against abuse, threat actors often resort to "jailbreaking" to evade these protections. Once such constraints are lifted, a model can be used to develop polymorphic malware that alters its code composition to avoid detection, or to obfuscate PowerShell or Python scripts so that conventional antivirus software struggles to identify them. LLMs have also been employed to propose techniques for backdoor installation, further facilitating stealthy access to hijacked systems.
- Disinformation and Narrative Manipulation
State-sponsored cyber actors are increasingly employing AI to scale up and automate disinformation operations, especially around elections, protests, and geopolitical disputes. With the assistance of LLMs, these actors can create massive volumes of fabricated news stories, deepfake interview transcripts, imitation social media posts, and bogus public remarks on online forums and petitions. The localisation of content makes this strategy especially perilous: messages are written with cultural and linguistic specificity, making them credible and more difficult to detect. The ultimate aim is to seed societal unrest, manipulate public sentiment, and erode faith in democratic institutions.
Disrupting Malicious Uses of AI – OpenAI Report (June 2025)
OpenAI released a comprehensive threat intelligence report, “Disrupting Malicious Uses of AI” (June 2025), which, together with Microsoft's “Staying Ahead of Threat Actors in the Age of AI”, outlined how state-affiliated actors had been testing and misusing its language models for malicious intent. The report named several advanced persistent threat (APT) groups, each attributed to a particular nation-state. OpenAI highlighted that the threat actors used the models mostly for enhancing linguistic quality, generating social engineering content, and expanding operations. Significantly, the report noted that the tools were not used to produce malware outright, but rather to support the preparatory and communicative phases of larger cyber operations.
AI Jailbreaking: Dodging Safety Measures
One of the largest worries is how malicious users can "jailbreak" AI models, misleading them into generating banned content using adversarial input. Some methods employed are:
- Roleplay: Prompting the AI to act out a persona, such as a professional criminal advisor
- Obfuscation: Concealing requests with code or jargon
- Language Switching: Posing sensitive inquiries in less frequently moderated languages
- Prompt Injection: Lacing dangerous requests within innocent-appearing questions
These methods have enabled attackers to bypass moderation tools, transforming otherwise benign tools into instruments of cybercrime.
Conclusion
As AI systems evolve and become more accessible, their application by state-sponsored cyber actors poses an unprecedented threat to global cybersecurity. The distinction between nation-state intelligence collection and cybercrime is eroding, with AI serving as a force multiplier for adversarial campaigns. AI tools such as ChatGPT, created for benevolent purposes, can be repurposed to amplify phishing, propaganda, and social engineering attacks. Cross-border governance, ethical development practices, and sound cyber hygiene must be encouraged. AI needs to be shaped not only by innovation but by responsibility.
References
- https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
- https://www.bankinfosecurity.com/openais-chatgpt-hit-nation-state-hackers-a-28640
- https://oecd.ai/en/incidents/2025-06-13-b5e9
- https://www.microsoft.com/en-us/security/security-insider/meet-the-experts/emerging-AI-tactics-in-use-by-threat-actors
- https://www.wired.com/story/youre-not-ready-for-ai-hacker-agents/
- https://www.cert-in.org.in/PDF/Digital_Threat_Report_2024.pdf
- https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf