What’s Your New Year's Resolution?
2025 is knocking firmly at our door, and we have promises to make and resolutions to keep. Time to make your list for the New Year and check it twice.
- Lifestyle targets 🡪 Check
- Family targets 🡪 Check
- Social targets 🡪 Check
Umm, so far so good, but what about your cybersecurity targets for the year? Hey, you look confused and concerned. Wait a minute, you do not have one, do you?
I get it. Though the digital world still puzzles, and sometimes outright scares us, we are still not in 'Take-Charge-Of-Your-Digital-Safety' mode. We prefer to depend on whatever security software we are using and keep our fingers crossed that the bad guys (read: threat actors) do not find us.
Let me illustrate why cybersecurity should be one of your top priorities. You know that stress is a major threat to our continued good health, right? Well, if your devices, social media accounts, office e-mail or network, or God forbid, bank accounts were compromised, would that not cause stress? Think about the probable repercussions and you will see why I am harping on prioritising security.
Fret not. We will keep it brief as we well know you have 101 things to do in the next few days leading up to 01/01/2025. Just add cyber health to the list and put in motion the following:
- Install and activate comprehensive security software on ALL internet-enabled devices you have at home. Yes, including your smartphones.
- Set yourself a date to create separate, unique passwords for all your accounts. Or use the password manager that comes with all reputed security software to make life simpler.
- Keep home Wi-Fi turned off at night
- Do not set social media accounts to auto-download photos/documents
- Activate parental controls on all the devices used by your children to monitor and mentor them. But keep them apprised.
- Do not blindly trust anyone or anything online – this includes videos, speeches, emails, voice calls, and video calls. Be aware of fakes.
- Be aware of the latest threats and talk about unsafe cyber practices and behaviour often at home.
Short and sweet, as promised.
We will be back, with more tips, and answers to your queries. Drop us a line anytime, and we will be happy to resolve your doubts.
Ciao!
Introduction
India's National Commission for Protection of Child Rights (NCPCR) is set to approach the Ministry of Electronics and Information Technology (MeitY) to recommend mandating a KYC-based system for verifying children's age under the Digital Personal Data Protection (DPDP) Act. The decision to send recommendations to MeitY was taken by NCPCR in a closed-door meeting held on August 13 with social media entities, where NCPCR emphasised proposing a KYC-based age verification mechanism. In this background, Section 9 of the Digital Personal Data Protection Act, 2023 defines a child as someone below the age of 18 and mandates that a child's age be verified and parental consent obtained before their personal data is processed.
Requirement of Verifiable Consent Under Section 9 of DPDP Act
Regarding the processing of children's personal data, Section 9 of the DPDP Act, 2023, provides that for children below 18 years of age, consent from parents/legal guardians is required. The Data Fiduciary shall, before processing any personal data of a child or a person with a disability who has a lawful guardian, obtain verifiable consent from the parent or lawful guardian. Additionally, behavioural monitoring or targeted advertising directed at children is prohibited.
Ongoing debate on Method to obtain Verifiable Consent
Section 9 of the DPDP Act gives parents or lawful guardians more control over their children's data and privacy, and it empowers them to make decisions about how to manage their children's online activities/permissions. However, obtaining such verifiable consent from the parent or legal guardian presents a quandary. It was expected that the upcoming 'DPDP rules,' which have yet to be notified by the Central Government, would shed light on the procedure of obtaining such verifiable consent from a parent or lawful guardian.
However, in the meeting held on 18th July 2024 between MeitY and social media companies to discuss the upcoming Digital Personal Data Protection Rules (DPDP Rules), MeitY stated that it may not intend to prescribe a 'specific mechanism' for Data Fiduciaries to verify parental consent for minors using digital services. MeitY instead emphasised the obligation placed on the Data Fiduciary under Section 8(4) of the DPDP Act to implement "appropriate technical and organisational measures" to ensure effective observance of the provisions of the Act.
In a recent update, MeitY held a review meeting on DPDP rules, where they focused on a method for determining children's ages. It was reported that the ministry is making a few more revisions before releasing the guidelines for public input.
CyberPeace Policy Outlook
CyberPeace, in its policy recommendations paper published last month (available here), also advised obtaining verifiable parental consent through methods such as government-issued ID, integration of parental consent at 'entry points' like app stores, consent forms, drawing on foreign laws such as California's privacy law and COPPA, and developing child-friendly SIMs for enhanced child privacy.
CyberPeace in its policy paper also emphasised that, when deciding the method of obtaining verifiable consent, platforms must recognise that age verification has to be done without compromising user privacy. Balancing user privacy is a question of both technological capabilities and ethical considerations.
The DPDP Act is a new framework for protecting digital personal data that places certain obligations on Data Fiduciaries and grants certain rights to Data Principals. The upcoming 'DPDP Rules', expected to be notified soon, will define the detailed procedure for implementing the provisions of the Act; MeitY is refining the rules before releasing them for public consultation. NCPCR's approach is aimed at ensuring child safety in the digital era. We hope that MeitY arrives at a sound mechanism for obtaining verifiable consent from parents/lawful guardians after due consideration of the recommendations put forth by various stakeholders, expert organisations, and concerned authorities such as NCPCR.
References
- https://www.moneycontrol.com/technology/dpdp-rules-ncpcr-to-recommend-meity-to-bring-in-kyc-based-age-verification-for-children-article-12801563.html
- https://pune.news/government/ncpcr-pushes-for-kyc-based-age-verification-in-digital-data-protection-a-new-era-for-child-safety-215989/#:~:text=During%20this%20meeting%2C%20NCPCR%20issued,consent%20before%20processing%20their%20data
- https://www.hindustantimes.com/india-news/ncpcr-likely-to-seek-clause-for-parents-consent-under-data-protection-rules-101724180521788.html
- https://www.drishtiias.com/daily-updates/daily-news-analysis/dpdp-act-2023-and-the-isssue-of-parental-consent

The Rise of Tech Use Amongst Children
Technology today is an invaluable resource for children: a means to research issues, stay informed about events, gather data, and share views and experiences with others. It is no longer limited to certain age groups or professions; children use it for learning, entertainment, engaging with friends, online games and much more. With increased digital access, children are also exposed to online mis/disinformation and other forms of cybercrime, far more than their parents, caregivers, and educators were at the same age, or even are today. Children are particularly vulnerable to mis/disinformation because their maturity and cognitive capacities are still evolving: they often lack the discernment and caution required to navigate the Internet safely. They are active users of online resources, and their presence on social media is an important avenue of social, political, and civic engagement, but young people often lack the cognitive and emotional capacity to distinguish between reliable and unreliable information. As a result, they can become targets of mis/disinformation. A UNICEF survey in 10 countries [1] reveals that up to three-quarters of children reported feeling unable to judge the veracity of the information they encounter online.
Social media has become a crucial part of children's lives, with children spending significant time on digital platforms such as YouTube, Facebook, Instagram and more. All these platforms act as sources of news, educational content, entertainment, peer communication and more. They host a variety of content across a diverse range of subjects, and each platform's content and privacy policies are different. Despite age restrictions under the Children's Online Privacy Protection Act (COPPA) and other applicable laws, it is easy for children to falsify their birth dates or use their parents' accounts to access content that might not be age-appropriate.
The Impact of Misinformation on Children
In virtual settings, inaccurate information can come in the form of text, images, or videos shared through traditional and social media channels. Online misinformation is a significant cause for concern, especially for children, because it can cause anxiety, damage self-esteem, shape beliefs, and skew their worldview. It can distort children's understanding of reality, hinder their critical thinking skills, and cause confusion and cognitive dissonance. The growing infodemic can even lead to information overload. Misinformation can also influence children's social interactions, leading to misunderstandings, conflicts, and mistrust among peers. Children from low-literacy backgrounds are more susceptible to fabricated content. Mis/disinformation can exacerbate social divisions amongst peers and lead to unwanted behavioural patterns, and sometimes children themselves unwittingly spread it. It is therefore important to educate and empower children to build cognitive defences against online misinformation, promote media literacy skills, and equip them with the tools to critically evaluate online information.
CyberPeace Policy Wing Recommendations
- Role of Parents & Educators to Build Cognitive Defenses
One way parents shape their children's values, beliefs, and actions is through modelling. Children observe how their parents use technology, handle challenging situations, and make decisions. For example, parents who demonstrate honesty, encourage healthy use of social media, and show kindness and empathy are more likely to raise children who hold these qualities in high regard. Parents and educators therefore play an important role in shaping young minds and behaviours, whether in offline or online settings, and must pay close attention to how online content consumption affects their child's cognitive skills. They should teach children to rely on authentic sources of information: instructing them on the importance of using reliable, credible sources when researching any topic, and on using verification mechanisms to test suspected information. This may sound like a challenging ideal to meet, but the earlier we teach children prebunking and debunking strategies and the ability to differentiate between fact and misleading information, the sooner we can help them build the cognitive defences they need to use the Internet safely. It is therefore of paramount importance that parents and educators encourage children to question the validity of information, verify sources, and critically analyse content. Developing these skills is essential for navigating the digital world effectively and making informed decisions.
- The Role of Tech & Social Media Companies to Fortify their Steps in Countering Misinformation
It is worth noting that all major tech and social media companies have policies in place to discourage the spread of harmful content and misinformation. Platforms have already initiated efforts to counter misinformation, introducing features such as adding context to content, labelling content, AI watermarks, and collaborations with civil society organisations. In light of this, social media platforms must prioritise both the design and the practical implementation of policies to counter misinformation, and these strategies can be further improved through government support and regulatory controls. It is recommended that platforms further increase their efforts against the growing spread of online mis/disinformation and apply advanced techniques to counter it, including filtering, automated removal, detection and prevention, watermarking, stronger reporting mechanisms, providing context to suspected content, and promoting authenticated/reliable sources of information.
Social media platforms should consider developing children-specific help centres that host educational content in attractive, easy-to-understand formats so that children can learn about misinformation risks and tactics, how to spot red flags and how to increase their information literacy and protect themselves and their peers. Age-appropriate, attractive and simple content can go a long way towards fortifying young minds and making them aware and alert without creating fear.
- Laws and Regulations
It is important that the government and the social media platforms work in sync to counteract misinformation. The government must consult with the concerned platforms and enact rules and regulations which strengthen the platforms' age verification mechanisms at the sign-up/account creation stage whilst also respecting user privacy. Content moderation, removal of harmful content, and strengthening reporting mechanisms are all important factors which must be prioritised at both the regulatory level and the platform operational level. Additionally, in order to promote healthy and responsible use of technology by children, the government should collaborate with other institutions to design information literacy programs at the school level. The government must make it a key priority to work with civil society organisations and expert groups that run programs to fight misinformation and co-create a safe cyberspace for everyone, including children.
- Expert Organisations and Civil Societies
Cybersecurity experts and civil society organisations possess the unique blend of large scale impact potential and technical expertise. We have the ability to educate and empower huge numbers, along with the skills and policy acumen needed to be able to not just make people aware of the problem but also teach them how to solve it for themselves. True, sustainable solutions to any social concern only come about when capacity-building and empowerment are at the heart of the initiative. Programs that prioritise resilience, teach Prebunking and Debunking and are able to understand the unique concerns, needs and abilities of children and design solutions accordingly are the best suited to implement the administration’s mission to create a safe digital society.
Final Words
Online misinformation significantly impacts child development and can hinder their cognitive abilities, color their viewpoints, and cause confusion and mistrust. It is important that children are taught not just how to use technology but how to use it responsibly and positively. This education can begin at a very young age and parents, guardians and educators can connect with CyberPeace and other similar initiatives on how to define age-appropriate learning milestones. Together, we can not only empower children to be safe today, but also help them develop into netizens who make the world even safer for others tomorrow.
References:
- [1] Digital misinformation / disinformation and children
- [2] Children's Privacy | Federal Trade Commission
Executive Summary:
In late 2024, an Indian healthcare provider experienced a severe cybersecurity attack that demonstrated how powerful AI-driven ransomware has become. This blog discusses the background of the attack, how it took place, its medical and financial effects, how the organisation responded, and the final outcome, stressing the dangers facing a healthcare industry without sufficiently adequate cybersecurity measures. The incident interrupted normal business operations and illustrated the potential economic and reputational losses cyber threats can cause. The technical findings of the study provide further evidence and analysis of advanced AI malware and best practices for defending against it.
1. Introduction
The integration of artificial intelligence (AI) in cybersecurity has revolutionised both defence mechanisms and the strategies employed by cybercriminals. AI-powered attacks, particularly ransomware, have become increasingly sophisticated, posing significant threats to various sectors, including healthcare. This report delves into a case study of an AI-powered ransomware attack on a prominent Indian healthcare provider in 2024, analysing the attack's execution, impact, and the subsequent response, along with key technical findings.
2. Background
In late 2024, a leading healthcare organisation in India fell prey to an AI-driven ransomware attack engineered for maximum impact. Because healthcare depends on sensitive data and real-time operations, the sector has become a favourite target of cybercriminals. AI enabled the attackers to mount a far more precise and damaging attack, severely affecting the provider's operations while jeopardising the safety of patient information.
3. Attack Execution
The attack began with a phishing email targeting a hospital administrator. The email carried an infected attachment which, when opened, injected the AI-enabled ransomware into the hospital's network. Unlike traditional ransomware, which spreads copies indiscriminately, this malware first studied the hospital's IT network. It then focused on the most important systems, encrypting components such as the electronic health records and billing systems.
The malware's AI capabilities allowed it to learn and adjust how it propagated through the network and to prioritise encrypting the most valuable data. This precision not only increased the leverage behind the ransom demand but also reduced the risk of early discovery.
4. Impact
The consequences of the attack were immediate and severe:
- Operational Disruption: The encryption of critical systems brought hospital operations to a halt. Surgeries, routine medical procedures, and patient admissions were delayed or, in some cases, diverted to other hospitals.
- Data Security: Electronic patient records and associated billing data became inaccessible, putting patient confidentiality at risk. The danger of permanent data loss loomed, much to the concern of both the healthcare provider and its patients.
- Financial Loss: The attackers demanded 100 crore Indian rupees (approximately USD 12 million) for the decryption key. Although the hospital did not pay, it still incurred losses: operational losses from system downtime, losses suffered by affected patients, the cost of incident response, and reputational damage.
5. Response
As soon as the hospital's management was informed of the ransomware, its IT department joined forces with cybersecurity professionals and local police. The team decided not to pay the ransom and instead to recover the systems from backups. Although this was an ethically and strategically sound decision, it was not without challenges: reconstruction was gradual, and certain elements of the patients' records were permanently lost.
To prevent such attacks in the future, the healthcare provider implemented several organisational and technical measures, such as network isolation and strengthened cybersecurity controls. Even so, the attack revealed serious gaps in the provider's IT security measures and protocols.
6. Outcome
The attack had far-reaching consequences:
- Financial Impact: The provider took a substantial financial hit from the service disruption, the cost of bolstering cybersecurity, and compensation to affected patients.
- Reputational Damage: The breach risked a complete loss of confidence among patients and the public, damaging the provider's reputation. This in turn affected patient care and retention, with long-term effects on revenue.
- Industry Awareness: The incident fed discussions across the country on improving cybersecurity provisions in the healthcare industry, prompting other care providers to review and strengthen their cyber defences.
7. Technical Findings
The AI-powered ransomware attack on the healthcare provider revealed several technical vulnerabilities and provided insights into the sophisticated mechanisms employed by the attackers. These findings highlight the evolving threat landscape and the importance of advanced cybersecurity measures.
7.1 Phishing Vector and Initial Penetration
- Sophisticated Phishing Tactics: The phishing email was crafted with precision, utilising AI to mimic the communication style of trusted contacts within the organisation. The email bypassed standard email filters, indicating a high level of customization and adaptation, likely due to AI-driven analysis of previous successful phishing attempts.
- Exploitation of Human Error: The phishing email targeted an administrative user with access to critical systems, exploiting the lack of stringent access controls and user awareness. The successful penetration into the network highlighted the need for multi-factor authentication (MFA) and continuous training on identifying phishing attempts.
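To make the training point concrete, even a crude heuristic illustrates the red flags staff are taught to spot. The sketch below scores an email for a few common phishing markers; the phrase list, weights, and domain names are illustrative assumptions, not a real filter and not the mechanism used in this incident:

```python
SUSPICIOUS_PHRASES = ("urgent", "verify your account", "password expired", "click here")

def phishing_score(sender: str, trusted_domains: set[str], body: str) -> int:
    """Return a crude risk score: +2 if the sender's domain is not in the
    trusted set, +1 for each suspicious phrase found in the body."""
    score = 0
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in trusted_domains:
        score += 2
    lowered = body.lower()
    score += sum(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
    return score
```

A real AI-crafted phishing email would defeat a static list like this, which is exactly why the findings above call for MFA and continuous user training on top of filtering.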
7.2 AI-Driven Malware Behavior
- Dynamic Network Mapping: Once inside the network, the AI-powered malware executed a sophisticated mapping of the hospital's IT infrastructure. Using machine learning algorithms, the malware identified the most critical systems—such as Electronic Health Records (EHR) and the billing system—prioritising them for encryption. This dynamic mapping capability allowed the malware to maximise damage while minimising its footprint, delaying detection.
- Adaptive Encryption Techniques: The malware employed adaptive encryption techniques, adjusting its encryption strategy based on the system's response. For instance, if it detected attempts to isolate the network or initiate backup protocols, it accelerated the encryption process or targeted backup systems directly, demonstrating an ability to anticipate and counteract defensive measures.
- Evasive Tactics: The ransomware utilised advanced evasion tactics, such as polymorphic code and anti-forensic features, to avoid detection by traditional antivirus software and security monitoring tools. The AI component allowed the malware to alter its code and behaviour in real time, making signature-based detection methods ineffective.
7.3 Vulnerability Exploitation
- Weaknesses in Network Segmentation: The hospital’s network was insufficiently segmented, allowing the ransomware to spread rapidly across various departments. The malware exploited this lack of segmentation to access critical systems that should have been isolated from each other, indicating the need for stronger network architecture and micro-segmentation.
- Inadequate Patch Management: The attackers exploited unpatched vulnerabilities in the hospital’s IT infrastructure, particularly within outdated software used for managing patient records and billing. The failure to apply timely patches allowed the ransomware to penetrate and escalate privileges within the network, underlining the importance of rigorous patch management policies.
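The patch-management gap described above can be checked programmatically. Below is a minimal sketch, assuming the organisation maintains an inventory of installed package versions and a feed of known-vulnerable versions; both data structures and all package names are hypothetical:

```python
def find_unpatched(installed: dict[str, str], vulnerable: dict[str, set[str]]) -> list[str]:
    """Return the sorted names of packages whose installed version
    appears in the known-vulnerable set for that package."""
    return sorted(
        name for name, version in installed.items()
        if version in vulnerable.get(name, set())
    )
```

In practice this kind of check would feed a scheduled audit, so that outdated records-management or billing software is flagged before attackers find it.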
7.4 Data Recovery and Backup Failures
- Inaccessible Backups: The malware specifically targeted backup servers, encrypting them alongside primary systems. This revealed weaknesses in the backup strategy, including the lack of offline or immutable backups that could have been used for recovery. The healthcare provider’s reliance on connected backups left them vulnerable to such targeted attacks.
- Slow Recovery Process: The restoration of systems from backups was hindered by the sheer volume of encrypted data and the complexity of the hospital’s IT environment. The investigation found that the backups were not regularly tested for integrity and completeness, resulting in partial data loss and extended downtime during recovery.
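The backup-testing failure above is avoidable with routine automated verification. Below is a minimal sketch of a checksum-based integrity check, assuming backups are plain files with SHA-256 digests recorded in a manifest; the manifest format is an assumption for illustration:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backups(manifest: dict[str, str], backup_dir: Path) -> list[str]:
    """Return the names of backup files that are missing or whose
    checksum no longer matches the recorded manifest entry."""
    failures = []
    for name, expected in manifest.items():
        candidate = backup_dir / name
        if not candidate.exists() or sha256_of(candidate) != expected:
            failures.append(name)
    return failures
```

Run on a schedule against offline or immutable copies, a check like this would have surfaced the encrypted backups long before a restore was attempted.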
7.5 Incident Response and Containment
- Delayed Detection and Response: The initial response was delayed due to the sophisticated nature of the attack, with traditional security measures failing to identify the ransomware until significant damage had occurred. The AI-powered malware’s ability to adapt and camouflage its activities contributed to this delay, highlighting the need for AI-enhanced detection and response tools.
- Forensic Analysis Challenges: The anti-forensic capabilities of the malware, including log wiping and data obfuscation, complicated the post-incident forensic analysis. Investigators had to rely on advanced techniques, such as memory forensics and machine learning-based anomaly detection, to trace the malware’s activities and identify the attack vector.
8. Recommendations Based on Technical Findings
To prevent similar incidents, the following measures are recommended:
- AI-Powered Threat Detection: Implement AI-driven threat detection systems capable of identifying and responding to AI-powered attacks in real time. These systems should include behavioural analysis, anomaly detection, and machine learning models trained on diverse datasets.
- Enhanced Backup Strategies: Develop a more resilient backup strategy that includes offline, air-gapped, or immutable backups. Regularly test backup systems to ensure they can be restored quickly and effectively in the event of a ransomware attack.
- Strengthened Network Segmentation: Re-architect the network with robust segmentation and micro-segmentation to limit the spread of malware. Critical systems should be isolated, and access should be tightly controlled and monitored.
- Regular Vulnerability Assessments: Conduct frequent vulnerability assessments and patch management audits to ensure all systems are up to date. Implement automated patch management tools where possible to reduce the window of exposure to known vulnerabilities.
- Advanced Phishing Defences: Deploy AI-powered anti-phishing tools that can detect and block sophisticated phishing attempts. Train staff regularly on the latest phishing tactics, including how to recognize AI-generated phishing emails.
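As one illustration of the behavioural-analysis idea in the first recommendation, the sketch below flags a metric (for example, files modified per minute on a host) that deviates sharply from its historical baseline. This is a toy z-score model under assumed inputs, not a production detector:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag `current` as anomalous if it lies more than `threshold`
    standard deviations above the historical mean."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > threshold
```

Real AI-driven detection layers many such signals with learned models, but even this simple baseline shows why a host suddenly rewriting hundreds of files a minute stands out.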
9. Conclusion
The AI-powered ransomware attack on the Indian healthcare provider in 2024 makes clear that the threat of advanced cyber attacks on healthcare facilities has grown. The technical findings outline the steps the attackers used, underlining the importance of ongoing, active, and robust security. This event is a stark message about the importance not only of remaining alert and investing strongly in cybersecurity, but also of formulating measures to counter such incidents with limited harm. Cybercriminals are now using AI to increase the effectiveness of their attacks, and it is high time all healthcare organisations ensured that their crucial systems and data are well protected.