#FactCheck: Viral AI Video Shows the Finance Minister of India Endorsing an Investment Platform Offering High Returns
Executive Summary:
A video circulating on social media falsely claims that India’s Finance Minister, Smt. Nirmala Sitharaman, has endorsed an investment platform promising unusually high returns. Upon investigation, it was confirmed that the video is a deepfake—digitally manipulated using artificial intelligence. The Finance Minister has made no such endorsement through any official platform. This incident highlights a concerning trend of scammers using AI-generated videos to create misleading and seemingly legitimate advertisements to deceive the public.

Claim:
A viral video falsely claims that the Finance Minister of India, Smt. Nirmala Sitharaman, is endorsing an investment platform, promoting it as a secure and highly profitable scheme for Indian citizens. The video alleges that individuals can start with an investment of ₹22,000 and earn up to ₹25 lakh per month as guaranteed income.

Fact check:
A reverse image search using key frames from the viral video led us to the original YouTube clip of the Finance Minister of India delivering a speech at a webinar on 'Regulatory, Investment and EODB reforms'. Further research found nothing in the full video related to the viral investment scheme.
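For readers curious how matching key frames against an original clip works in practice, the sketch below implements a toy difference hash ("dHash"), one common perceptual-hashing technique behind reverse image search. The frames and pixel values here are synthetic stand-ins, not data from the actual video.

```python
# Illustrative sketch: comparing video key frames via perceptual hashing,
# the principle behind reverse image search. Frames are modelled as small
# grayscale matrices (lists of rows of 0-255 values); a real pipeline would
# first extract frames from the video and downscale them.

def dhash(frame):
    """Difference hash: one bit per adjacent pixel pair (brightness gradient)."""
    bits = []
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(h1, h2):
    """Count of differing bits; a small distance means a likely match."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

original = [
    [10, 20, 30, 40],
    [40, 30, 20, 10],
    [15, 15, 35, 35],
]
# A re-encoded copy: pixel values shift slightly, but the brightness
# gradients (and hence the hash) stay the same.
reencoded = [
    [12, 22, 31, 41],
    [41, 31, 22, 12],
    [16, 16, 36, 36],
]
unrelated = [
    [90, 10, 90, 10],
    [10, 90, 10, 90],
    [90, 10, 90, 10],
]

print(hamming(dhash(original), dhash(reencoded)))   # small distance: a match
print(hamming(dhash(original), dhash(unrelated)))   # larger distance: different scene
```

Real-world tools add downscaling and frame sampling on top of this idea, but the core comparison is the same bitwise distance.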
The manipulated video was overlaid with an AI-generated voice track and scripted text to make it appear as though she had approved an investment platform.

Deepfakes can appear fairly realistic in their facial movement; on closer inspection, however, mismatched lip-syncing and unnatural visual transitions give them away, and both are visible in this video.


Moreover, no legitimate government website or credible news outlet has acknowledged any such endorsement. The video is fabricated misinformation that attempts to scam viewers by leveraging the image of a trusted public figure.
Conclusion:
The viral video showing the Finance Minister of India, Smt. Nirmala Sitharaman promoting an investment platform is fake and AI-generated. This is a clear case of deepfake misuse aimed at misleading the public and luring individuals into fraudulent schemes. Citizens are advised to exercise caution, verify any such claims through official government channels, and refrain from clicking on unknown investment links circulating on social media.
- Claim: Nirmala Sitharaman promoted an investment app in a viral video.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
The integration of Artificial Intelligence into our daily workflows has compelled global policymakers to develop legislative frameworks to govern its impact effectively. The question we arrive at is: while AI is undoubtedly transforming global economies, who governs the transformation? The EU AI Act was first-of-its-kind legislation to govern Artificial Intelligence, making the EU a pioneer in the emerging technology regulation space. This blog analyses the EU's Draft AI Rules and Code of Practice, exploring their implications for ethics, innovation, and governance.
Background: The Need for AI Regulation
AI adoption is proceeding at a rapid pace and is projected to contribute $15.7 trillion to the global economy by 2030, with the AI market expected to grow by at least 120% year-over-year. These figures are often cited alongside concrete examples of AI risks, such as bias in recruitment tools and misinformation spread through deepfakes. Unlike the U.S., which relies on sector-specific regulations, the EU proposes a unified framework to address AI's challenges comprehensively, filling the vacuum that exists in the governance of emerging technologies such as AI. It is worth noting that the General Data Protection Regulation (GDPR) has been a success, influencing data privacy laws globally and setting off a domino effect of privacy regulation around the world. This precedent underscores the EU's proactive, citizen-centric approach to regulation.
Overview of the Draft EU AI Rules
This Draft General-Purpose AI Code of Practice details the rules of the AI Act for providers of general-purpose AI models, including those with systemic risks. The European AI Office facilitated the drafting of the Code, which was chaired by independent experts and involved nearly 1,000 stakeholders, EU member state representatives, and both European and international observers.
The first draft of the EU's General-Purpose AI Code of Practice, provided for by the EU AI Act, was published on 14 November 2024. As per Article 56 of the AI Act, the Code operationalises the requirements set out for General-Purpose AI (GPAI) models under Article 53 and for GPAI models with systemic risk under Article 55. The AI Act is legislation rooted in product safety and relies on harmonised standards to support compliance. These harmonised standards are sets of operational rules established by the European standardisation bodies: the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC), and the European Telecommunications Standards Institute (ETSI). Industry experts, civil society, and trade unions help translate the requirements of EU sectoral legislation into the specific mandates set by the European Commission. The AI Act obligates developers, deployers, and users of AI to meet mandates for transparency, risk management, and compliance.
The Code of Practice for General Purpose AI
The most popular GPAI applications include ChatGPT and other foundational models such as Microsoft's Copilot, Google's BERT, and Meta AI's Llama, all under constant development and upgrades. The 36-page draft Code of Practice for General-Purpose AI is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid penalties. It identifies transparency, copyright compliance, risk assessment, and technical/governance risk mitigation as the core areas for companies developing GPAI, and lays down guidelines to enable greater transparency on what goes into developing such models.
The Draft Code's provisions for risk assessment focus on preventing cyber attacks, large-scale discrimination, nuclear and misinformation risks, and the risk of the models acting autonomously without oversight.
Policy Implications
The EU’s Draft AI Rules and Code of Practice represent a bold step in shaping the governance of general-purpose AI, positioning the EU as a global pioneer in responsible AI regulation. By prioritising harmonised standards, ethical safeguards, and risk mitigation, these rules aim to ensure AI benefits society while addressing its inherent risks. While the Code is a welcome step, the compliance burden it places on MSMEs and startups could hinder innovation, and its voluntary nature raises concerns about accountability. Additionally, harmonising these ambitious standards with varying global frameworks, especially in regions like the U.S. and India, presents a significant challenge to achieving a cohesive global approach.
Conclusion
The EU’s initiative to regulate general-purpose AI aligns with its legacy of proactive governance, setting the stage for a transformative approach to balancing innovation with ethical accountability. However, challenges remain. Striking the right balance is crucial to avoid stifling innovation while ensuring robust enforcement and inclusivity for smaller players. Global collaboration is the next frontier. As the EU leads, the world must respond by building bridges between regional regulations and fostering a unified vision for AI governance. This demands active stakeholder engagement, adaptive frameworks, and a shared commitment to addressing emerging challenges in AI. The EU’s Draft AI Rules are not just about regulation; they are about leading a global conversation.
References
- https://indianexpress.com/article/technology/artificial-intelligence/new-eu-ai-code-of-practice-draft-rules-9671152/
- https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
- https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft#:~:text=Drafting%20of%20the%20Code%20of%20Practice%20is%20taking%20place%20under,the%20drafting%20of%20the%20code.
- https://copyrightblog.kluweriplaw.com/2024/12/16/first-draft-of-the-general-purpose-ai-code-of-practice-has-been-released/
Executive Summary:
In late 2024, an Indian healthcare provider suffered a severe cybersecurity attack that demonstrated how powerful AI-driven ransomware has become. This blog covers the background of the attack, how it unfolded, its medical and financial impact, how the organisation responded, and the final outcome, stressing the dangers facing a healthcare industry that lacks adequate cybersecurity measures. The incident disrupted normal business operations and illustrated the potential economic and reputational losses that cyber threats can inflict. The technical findings of the study provide further evidence and analysis of the advanced AI malware involved, along with best practices for defending against it.
1. Introduction
The integration of artificial intelligence (AI) in cybersecurity has revolutionised both defence mechanisms and the strategies employed by cybercriminals. AI-powered attacks, particularly ransomware, have become increasingly sophisticated, posing significant threats to various sectors, including healthcare. This report delves into a case study of an AI-powered ransomware attack on a prominent Indian healthcare provider in 2024, analysing the attack's execution, impact, and the subsequent response, along with key technical findings.
2. Background
In late 2024, a leading healthcare organisation in India, one also involved in the research and development of AI techniques, fell prey to an AI-driven ransomware attack engineered for maximum impact. Because healthcare depends on data and real-time operations, the sector has become a favourite target of cybercriminals. AI enabled the attackers to mount a far more precise and damaging attack, severely affecting the provider's operations and jeopardising the safety of patient information.
3. Attack Execution
The attack began with a phishing email targeting a hospital administrator. The email carried an infected attachment that, once opened, injected the AI-enabled ransomware into the hospital’s network. Unlike traditional ransomware, which spreads indiscriminately, this malware first studied the hospital’s IT network, then focused on encrypting the most important systems, such as the electronic health records and billing departments.
The malware's AI component allowed it to learn and adapt its propagation path through the network and to prioritise encrypting the most valuable data. This precision not only increased the potential ransom demand but also reduced the risk of early discovery.
4. Impact
The consequences of the attack were immediate and severe:
- Operational Disruption: Encryption of critical systems brought hospital operations to a halt. Surgeries, routine medical procedures, and patient admissions were delayed or, in some cases, diverted to other hospitals.
- Data Security: Electronic patient records and associated billing data became inaccessible, putting patient confidentiality at risk. Permanent data loss was a real possibility, to the alarm of both the healthcare provider and its patients.
- Financial Loss: The attackers demanded 100 crore Indian rupees (approximately USD 12 million) for the decryption key. Although the hospital refused to pay, it still incurred significant losses: operational losses from system downtime, harm to affected patients, the cost of incident response, and reputational damage.
5. Response
As soon as the hospital’s management was informed of the ransomware, its IT department joined forces with cybersecurity professionals and local police. The team decided not to pay the ransom and instead to restore systems from backups. Although this was the ethically and strategically correct decision, it was not without challenges: recovery was slow, and some elements of the patients’ records were permanently lost.
To prevent similar attacks in the future, the healthcare provider implemented several organisational and technical measures, such as network isolation and strengthened cybersecurity controls. Even so, the attack exposed serious gaps in the provider’s IT security measures and protocols.
6. Outcome
The attack had far-reaching consequences:
- Financial Impact: The provider absorbed heavy costs from the prolonged service disruption, from bolstering its cybersecurity, and from compensating affected patients.
- Reputational Damage: The potential data exposure risked a complete loss of confidence among patients and the public, damaging the provider's reputation. This affected patient care and had long-term effects on revenue as patient retention declined.
- Industry Awareness: The incident fuelled discussions across the country on improving cybersecurity provisions in the healthcare industry, and prompted other care providers to review and strengthen their cyber defences.
7. Technical Findings
The AI-powered ransomware attack on the healthcare provider revealed several technical vulnerabilities and provided insights into the sophisticated mechanisms employed by the attackers. These findings highlight the evolving threat landscape and the importance of advanced cybersecurity measures.
7.1 Phishing Vector and Initial Penetration
- Sophisticated Phishing Tactics: The phishing email was crafted with precision, utilising AI to mimic the communication style of trusted contacts within the organisation. The email bypassed standard email filters, indicating a high level of customization and adaptation, likely due to AI-driven analysis of previous successful phishing attempts.
- Exploitation of Human Error: The phishing email targeted an administrative user with access to critical systems, exploiting the lack of stringent access controls and user awareness. The successful penetration into the network highlighted the need for multi-factor authentication (MFA) and continuous training on identifying phishing attempts.
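To make the "bypassed standard email filters" point concrete, here is a hypothetical sketch of the kind of rule-based phishing scoring that a well-crafted, AI-generated email can evade. The domains, keywords, and threshold are illustrative assumptions, not a description of the hospital's actual mail filter.

```python
# Hypothetical rule-based phishing score. An AI-crafted email that mimics a
# trusted contact's domain and tone keeps this score low, which is exactly
# why signature- and rule-based filters struggled in this incident.

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "invoice"}

def phishing_score(sender_domain, claimed_org_domain, subject, has_attachment):
    score = 0
    if sender_domain != claimed_org_domain:
        score += 2  # display name claims one org, address belongs to another
    if any(w in subject.lower() for w in URGENCY_WORDS):
        score += 1  # pressure language is a classic phishing tell
    if has_attachment:
        score += 1  # unexpected attachments raise the risk
    return score

# A crude filter might flag mail scoring 3 or more.
suspicious = phishing_score("hosp1tal-billing.example", "hospital.example",
                            "URGENT: verify the attached invoice", True)
benign = phishing_score("hospital.example", "hospital.example",
                        "Minutes from Monday's meeting", False)
print(suspicious)  # 4 -> flagged
print(benign)      # 0 -> passes
```

The limitation is visible in the code itself: an attacker who spoofs the correct domain and avoids urgency keywords scores like the benign mail, which is why the report recommends MFA and user training as additional layers.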
7.2 AI-Driven Malware Behavior
- Dynamic Network Mapping: Once inside the network, the AI-powered malware executed a sophisticated mapping of the hospital's IT infrastructure. Using machine learning algorithms, the malware identified the most critical systems—such as Electronic Health Records (EHR) and the billing system—prioritising them for encryption. This dynamic mapping capability allowed the malware to maximise damage while minimising its footprint, delaying detection.
- Adaptive Encryption Techniques: The malware employed adaptive encryption techniques, adjusting its encryption strategy based on the system's response. For instance, if it detected attempts to isolate the network or initiate backup protocols, it accelerated the encryption process or targeted backup systems directly, demonstrating an ability to anticipate and counteract defensive measures.
- Evasive Tactics: The ransomware utilised advanced evasion tactics, such as polymorphic code and anti-forensic features, to avoid detection by traditional antivirus software and security monitoring tools. The AI component allowed the malware to alter its code and behaviour in real time, making signature-based detection methods ineffective.
7.3 Vulnerability Exploitation
- Weaknesses in Network Segmentation: The hospital’s network was insufficiently segmented, allowing the ransomware to spread rapidly across various departments. The malware exploited this lack of segmentation to access critical systems that should have been isolated from each other, indicating the need for stronger network architecture and micro-segmentation.
- Inadequate Patch Management: The attackers exploited unpatched vulnerabilities in the hospital’s IT infrastructure, particularly within outdated software used for managing patient records and billing. The failure to apply timely patches allowed the ransomware to penetrate and escalate privileges within the network, underlining the importance of rigorous patch management policies.
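The segmentation weakness above can be sketched as a default-deny flow policy. The zone names and allowed pairs below are illustrative assumptions, not the hospital's real architecture; the point is that lateral movement fails unless a flow is explicitly whitelisted.

```python
# Minimal sketch of a micro-segmentation policy check. Traffic is permitted
# only if the (source zone, destination zone) pair is explicitly allowed;
# everything else, including lateral movement from workstations to the EHR
# database, is denied by default.

ALLOWED_FLOWS = {
    ("workstations", "email"),
    ("ehr_app", "ehr_db"),
    ("billing_app", "billing_db"),
}

def is_allowed(src_zone, dst_zone):
    return (src_zone, dst_zone) in ALLOWED_FLOWS

print(is_allowed("ehr_app", "ehr_db"))       # True: sanctioned application path
print(is_allowed("workstations", "ehr_db"))  # False: lateral movement blocked
```

In the incident described above, the effective policy was closer to "everything may talk to everything", which is what let the ransomware reach systems that should have been isolated.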
7.4 Data Recovery and Backup Failures
- Inaccessible Backups: The malware specifically targeted backup servers, encrypting them alongside primary systems. This revealed weaknesses in the backup strategy, including the lack of offline or immutable backups that could have been used for recovery. The healthcare provider’s reliance on connected backups left them vulnerable to such targeted attacks.
- Slow Recovery Process: The restoration of systems from backups was hindered by the sheer volume of encrypted data and the complexity of the hospital’s IT environment. The investigation found that the backups were not regularly tested for integrity and completeness, resulting in partial data loss and extended downtime during recovery.
7.5 Incident Response and Containment
- Delayed Detection and Response: The initial response was delayed due to the sophisticated nature of the attack, with traditional security measures failing to identify the ransomware until significant damage had occurred. The AI-powered malware’s ability to adapt and camouflage its activities contributed to this delay, highlighting the need for AI-enhanced detection and response tools.
- Forensic Analysis Challenges: The anti-forensic capabilities of the malware, including log wiping and data obfuscation, complicated the post-incident forensic analysis. Investigators had to rely on advanced techniques, such as memory forensics and machine learning-based anomaly detection, to trace the malware’s activities and identify the attack vector.
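The "anomaly detection" the investigators fell back on can be illustrated with a minimal statistical sketch: flag an hour whose file-write count deviates sharply from a historical baseline. The numbers are synthetic, and real tooling uses far richer features than a single z-score.

```python
# Illustrative z-score anomaly detector. A mass-encryption burst produces a
# file-write count far outside the historical distribution, even when the
# malware's code itself evades signature-based detection.
import statistics

baseline = [12, 15, 11, 14, 13, 12, 16, 14]  # normal hourly file-write counts
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from the mean."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(13))   # False: within the normal range
print(is_anomalous(900))  # True: encryption burst stands out
```

This is also why behaviour-based detection survives polymorphic code: the malware can rewrite itself, but it cannot encrypt thousands of files without producing an anomalous activity profile.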
8. Recommendations Based on Technical Findings
To prevent similar incidents, the following measures are recommended:
- AI-Powered Threat Detection: Implement AI-driven threat detection systems capable of identifying and responding to AI-powered attacks in real time. These systems should include behavioural analysis, anomaly detection, and machine learning models trained on diverse datasets.
- Enhanced Backup Strategies: Develop a more resilient backup strategy that includes offline, air-gapped, or immutable backups. Regularly test backup systems to ensure they can be restored quickly and effectively in the event of a ransomware attack.
- Strengthened Network Segmentation: Re-architect the network with robust segmentation and micro-segmentation to limit the spread of malware. Critical systems should be isolated, and access should be tightly controlled and monitored.
- Regular Vulnerability Assessments: Conduct frequent vulnerability assessments and patch management audits to ensure all systems are up to date. Implement automated patch management tools where possible to reduce the window of exposure to known vulnerabilities.
- Advanced Phishing Defences: Deploy AI-powered anti-phishing tools that can detect and block sophisticated phishing attempts. Train staff regularly on the latest phishing tactics, including how to recognize AI-generated phishing emails.
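The backup-testing recommendation above can be reduced to a simple discipline: record a cryptographic checksum when a backup is written, and verify it before trusting a restore. The sketch below uses SHA-256 from Python's standard library; the file contents are synthetic stand-ins.

```python
# Sketch of backup integrity verification. The recorded digest should itself
# live on offline or immutable storage, so ransomware that encrypts the
# backup cannot also silently "fix" the checksum.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# At backup time: store the digest alongside the copy.
backup_blob = b"patient-records-2024-11-30"
recorded = checksum(backup_blob)

# At restore time: recompute and compare before relying on the copy.
def verify(blob: bytes, expected: str) -> bool:
    return checksum(blob) == expected

print(verify(backup_blob, recorded))               # True: backup intact
print(verify(b"tampered-or-encrypted", recorded))  # False: do not restore
```

Had a routine of this kind been in place, the partial data loss described in section 7.4 would have been caught during a scheduled restore test rather than during the incident itself.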
9. Conclusion
The AI-powered ransomware attack on the Indian healthcare provider in 2024 makes clear how much the threat of advanced cyberattacks has grown in healthcare. The technical findings outline the steps the attackers used, underlining the importance of continuous, proactive, and robust security. The incident is a stark reminder of the need not only to remain alert and invest strongly in cybersecurity, but also to prepare response measures that limit the harm of such incidents. Cybercriminals are now using AI to increase the effectiveness of their attacks, and it is high time every healthcare organisation ensured its critical systems and data are well protected.

Introduction
Cybercrime is one of the most pressing concerns of our era. As the digital world evolves rapidly, so do cyber threats, and so do the challenges of curbing them. The complexity of evolving cybercrime makes it difficult for law enforcement across the world to detect and investigate. India is among the countries actively engaged in creating awareness about cybercrime and security concerns across its states. At the national level, initiatives like the National Cybercrime Reporting Portal, CERT-In, and I4C have been established to assist law enforcement in dealing with cybercrime in India. According to a press release by the Ministry of Home Affairs, 1,25,153 cases of financial cyber fraud were reported in 2023, the second-highest figure in state-wise reporting after Uttar Pradesh. Maharashtra has been highlighted as one of the states with the highest number of cybercrime cases for the past few years.
In response to the increasing number of cases, the state of Maharashtra has launched the Maharashtra Cyber Security Project. The project aims to strengthen the state's defences by building cybersecurity infrastructure, leveraging technological advancements, and enhancing the skills of law enforcement agencies.
Maharashtra Cyber Department and the Cyber Security Project
The Maharashtra Cyber Department, also referred to as MahaCyber, was established in 2016 and employs a multi-faceted approach to addressing cyber threats. Its objectives are to provide a user-friendly way to report cybercrimes, safeguard Critical Information Infrastructure from cyber threats, empower investigating law agencies and improve their efficiency, and create awareness among the public.
The Maharashtra Cyber Security Project aims to strengthen the department, bringing all the aspects of the cyber security system under one facility. The key components of the Maharashtra Cyber Security Project are as follows:
- Command & Control Centre:
The Command & Control Centre will function as a 24/7 complaint registration hub and grievance-handling mechanism, accessible via a helpline number, mobile app, or online portal. The Centre continuously monitors cyber threats, reduces the impact of cyberattacks, and ensures that issues are resolved as quickly as possible.
- Technology Assisted Investigation (TAI):
Registered complaints are analysed and investigated by experts using cutting-edge technologies such as computer and mobile forensics, voice analysis systems, image enhancement tools, and deepfake detection solutions. These help the Maharashtra Cyber Department collect evidence, identify weak spots, and mitigate cyber threats effectively.
- Computer Emergency Response Team – Maharashtra (CERT-MH):
CERT-MH works on curbing cybercrimes that target Critical Infrastructure, such as banks, railway services, and the state's electricity supply, as well as threats to national security. It uses technologies including deep web and dark web analysis, darknet and threat intelligence feeds, vulnerability management, a cyber threat intelligence platform, malware analysis, and network capture analysis, and coordinates with other agencies.
- Security Operations Centre (SOC):
The SOC looks after the security of MahaCyber's own infrastructure, monitoring it 24/7 for any signs of breach or threat, and thus aids in early detection and prevention of further harm.
- Centre of Excellence (COE):
The Centre of Excellence focuses on training police officials, equipping them with the tools and technologies needed to deal with cyber threats. The Centre also works on creating awareness of various cyber threats among the citizens of the state.
- Nodal Cyber Police Station:
The Nodal Cyber Police Station works as a focal point for all cybercrime related law enforcement activities. It is responsible for coordinating the investigation procedure and prevention of cybercrimes within the state. Such Cyber Police Stations have been established in each district of Maharashtra.
Fund of Funds to Scale Up Startups
The government of Maharashtra through the Fund of Funds for Startups scheme has invested in more than 300 startups that align with the objective of cyber security and digital safety. The government is promoting ideas and cyber defence innovation which will help to push the boundaries of traditional cybersecurity tools and improve the State’s ability to tackle cybercrimes. Such partnerships can be a cost-effective solution that proactively promotes a culture of cybersecurity across industries.
Dynamic Cyber Platform
The government of Maharashtra has been working on a dynamic cyber platform that would assist in tackling cybercrime and save hundreds of crores of rupees in a short span of time. The platform will act as a link between stakeholders such as banks, Non-Banking Financial Companies (NBFCs), and social media providers, offering a technology-driven response to evolving cybercrime. As part of this process, the government has issued tenders and invited top IT companies from around the world to help set up the platform.
Why Maharashtra's Initiative Serves as a Model for Other States
The components of the Maharashtra Cyber Security Project and the dynamic cyber platform together form a comprehensive system for tackling the increasing complexity of cyber threats. The initiative's integration of cutting-edge technologies, specialised institutions, expert professionals from various industries, and real-time monitoring of cybercrime shows that Maharashtra is well equipped to prevent, detect, and respond to cybercrimes reported in the state. The project fosters collaboration between government and law enforcement agencies, provides proper training, and addresses public grievances. By working on four key areas, i.e. a centralised reporting platform, collaboration between government and the private sector, public awareness, and the use of advanced technologies, Maharashtra's cybersecurity system serves as a model for creating a secure digital space and tackling cybercrime effectively at scale.
Other states in India could adopt similar models and achieve success in curbing cybercrime. They would need to create dedicated response teams of trained personnel, invest in advanced software of the kind used by Maharashtra, and foster partnerships with companies and startups working in AI and technology to build resilient cybersecurity infrastructure. The government of Maharashtra, in turn, can assist other states in establishing models that address evolving cybercrime efficiently.
References
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2003158
- https://mhcyber.gov.in/about-us
- https://www.youtube.com/watch?v=jjPw-8afTTw
- https://www.ltts.com/press-release/maharashtra-inaugurates-india-first-integrated-cyber-command-control-center-ltts
- https://theprint.in/india/maharashtra-tackling-evolving-cyber-crimes-through-dynamic-platform-cm/2486772/
- https://www.freepressjournal.in/mumbai/maharashtra-dynamic-cyber-security-platform-in-the-offing-says-fadnavis