#FactCheck: Fake video falsely claims FM Sitharaman endorsed investment scheme
Executive Summary:
A video that has gone viral on Facebook claims that Union Finance Minister Nirmala Sitharaman endorsed a new government investment project. The video has been widely shared. However, our research indicates that the video has been altered using AI and is being used to spread misinformation.

Claim:
The video claims that Finance Minister Nirmala Sitharaman is endorsing an automated investment system that promises daily earnings of ₹15,00,000 on an initial investment of ₹21,000.

Fact Check:
To check the genuineness of the claim, we ran a keyword search for “Nirmala Sitharaman investment program” but found no such investment-related scheme. We also observed that the lip movements in the video appeared unnatural and did not align with the speech, leading us to suspect that the video had been AI-manipulated.
A reverse search of the video led us to a DD News live-stream of Sitharaman’s press conference after presenting the Union Budget on February 1, 2025. Sitharaman never mentioned any investment or trading platform during the press conference, confirming that the viral video was digitally altered. Technical analysis using Hive Moderation further indicated that the viral clip was manipulated through voice cloning.
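For readers who want to replicate the reverse-search step, the sketch below shows one way to extract keyframes from a video with OpenCV so that individual frames can be uploaded to a reverse image search engine. It is a minimal illustration, not the exact tooling we used; the file name and sampling interval are placeholders.

```python
# Minimal sketch: extract keyframes from a video for reverse image search.
# Assumes OpenCV (pip install opencv-python); "viral_clip.mp4" is a placeholder.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(f"Saved {extract_keyframes('viral_clip.mp4')} keyframes")
```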

Conclusion:
The viral video on social media showing Union Finance Minister Nirmala Sitharaman endorsing a new government investment project is voice-cloned, manipulated, and false. This highlights the risk of online manipulation, making it crucial to verify news with credible sources before sharing it. With the growing risk of AI-generated misinformation, promoting media literacy is essential in the fight against false information.
- Claim: Fake video falsely claims FM Nirmala Sitharaman endorsed an investment scheme.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

AI-generated content is taking up ever more space in today's fast-changing tech landscape. Generative AI has emerged as a powerful tool enabling the creation of hyper-realistic audio, video, and images. While advantageous, this capability has some downsides too, particularly for content authenticity and manipulation.
Over the past couple of years, the impact of such content has spanned ethical, psychological, and social harms. A major concern is the creation of non-consensual explicit content, including fake nudes, in which an individual’s face is superimposed onto explicit images or videos without their consent. This is not just a violation of the individual’s privacy; it can have enormous consequences for their professional and personal lives. This blog examines the existing laws and whether they are equipped to deal with the challenges that this content poses.
Understanding the Deepfake Technology
A deepfake is a media file (image, video, or speech), typically representing a human subject, that has been altered deceptively using deep neural networks (DNNs). It is used to alter a person’s identity, usually through a “face swap” in which the identity of a source subject is transferred onto a destination subject. The destination’s facial expressions and head movements remain the same, but the appearance in the video is that of the source. In the case of videos, identities can be substituted by way of replacement or reenactment.
This superimposition produces realistic fabricated content, such as fake nudes. Presently, creating a deepfake is not a costly endeavour: it requires a Graphics Processing Unit (GPU); software that is free, open source, and easy to download; and some graphics-editing and audio-dubbing skills. Common apps for creating deepfakes are DeepFaceLab and FaceSwap, both public and open source and supported by thousands of users who actively participate in the evolution and development of this software and its models.
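To make the pipeline concrete, here is a minimal sketch of its very first stage, detecting and cropping faces from a frame, using OpenCV's bundled Haar cascade. This is purely illustrative and assumes a placeholder input file ("frame.jpg"); tools like DeepFaceLab use far more sophisticated face alignment and DNN-based synthesis.

```python
# Illustrative sketch: face detection and cropping, the first stage of a
# typical face-swap pipeline. Uses OpenCV's bundled Haar cascade;
# "frame.jpg" is a placeholder input image.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("frame.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect candidate face regions; tune scaleFactor/minNeighbors for your data.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for i, (x, y, w, h) in enumerate(faces):
    cv2.imwrite(f"face_{i}.jpg", image[y:y + h, x:x + w])
print(f"Cropped {len(faces)} face(s)")
```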
Legal Gaps and Challenges
Multiple gaps and challenges exist in the legal space for deepfakes and their regulation. They are:
- Inadequate definitions of AI-generated explicit content often lead to enforcement challenges.
- Jurisdictional challenges arise from the cross-border nature of these crimes, and international cooperation mechanisms for AI-generated content are still in their early stages.
- Current consent-based and harassment laws leave a gap when applied to AI-generated nudes.
- Proving the intent of, and identifying, perpetrators of digital crimes remains a challenge that is yet to be overcome.
Policy Responses and Global Trends
Presently, the global response to deepfakes is still developing. The UK has the Online Safety Bill, the EU has the AI Act, the US has federal laws such as the National AI Initiative Act of 2020, and India is currently developing the India AI Act as specific legislation dealing with AI and its related issues.
The IT Rules, 2021, and the DPDP Act, 2023, regulate digital platforms by mandating content governance, privacy policies, grievance redressal, and compliance with removal orders. Emphasising intermediary liability and safe harbour protections, these laws play a crucial role in tackling harmful content like AI-generated nudes, while the DPDP Act focuses on safeguarding privacy and personal data rights.
Bridging the Gap: CyberPeace Recommendations
- Initiate legislative reforms by advocating clear and precise definitions within consent frameworks and instituting stringent penalties for AI-based offences, particularly those involving sexually explicit material.
- Advocate global cooperation by setting up international standards and bilateral and multilateral treaties that address the cross-border nature of these offences.
- Push for stricter platform accountability for the detection and removal of harmful AI-generated content, including strong screening mechanisms to counter the huge influx of such content.
- Run public awareness campaigns that educate users about their rights and the resources available to them if they are targeted by such an act.
Conclusion
The rapid advancement of AI-generated explicit content demands immediate and decisive action. As this technology evolves, the gaps in existing legal frameworks become increasingly apparent, leaving individuals vulnerable to profound privacy violations and societal harm. Addressing this challenge requires adaptive, forward-thinking legislation that prioritises individual safety while fostering technological progress. Collaborative policymaking is essential and requires uniting governments, tech platforms, and civil society to develop globally harmonised standards. By striking a balance between innovation and societal well-being, we can ensure that the digital age is not only transformative but also secure and respectful of human dignity. Let’s act now to create a safer future!
References
- https://etedge-insights.com/technology/artificial-intelligence/deepfakes-and-the-future-of-digital-security-are-we-ready/
- https://odsc.medium.com/the-rise-of-deepfakes-understanding-the-challenges-and-opportunities-7724efb0d981
- https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/

Executive Summary:
A striking trend in today’s threat landscape is the speed at which threat actors weaponize Proof-of-Concept (PoC) exploits: where attackers once took an average of one hour and seven minutes to leverage a PoC after it went public, the window has now shrunk to a record low of 22 minutes. This incredibly fast exploitation leaves organizations’ IT departments very little time to address issues and close gaps before they are exploited. Cloudflare’s Application Security report shows that attacks often outpace the speed at which defenders can create and deploy countermeasures such as WAF rules and software patches. In one case, Cloudflare observed an attacker launching a PoC-based attack a mere 22 minutes after the exploit was released, leaving almost no remediation window.
Despite the constant growth in the number of vulnerabilities across applications and systems, the share of exploited vulnerabilities that are accompanied by some level of public exploit or PoC code has remained relatively stable over the past several years, fluctuating around 50%. Of the vulnerabilities with publicly known exploit code, 41% were first attacked as zero-days, while of those with no known exploit code, 84% were first attacked as zero-days.
Modus Operandi:
The modus operandi of the attack involving the rapid weaponization of proof-of-concept (PoC) exploits is characterized by the following steps:
- Vulnerability Identification: Threat actors identify a vulnerability in a system’s software or hardware; this may be a coding error, a design flaw, or a configuration mistake. Identification is typically achieved using vulnerability scanners and manual testing procedures.
- Vulnerability Analysis: Once the vulnerability is identified, the attackers study how it operates to determine when and how it can be triggered and what consequences that will have. This means analyzing the details of the PoC code or the system itself to work out the sequence of interactions that leads to exploitation.
- Exploit Code Development: Armed with an understanding of the weakness, the attackers develop a small program or script, the PoC, that targets the identified vulnerability exclusively and triggers it in a controlled manner. This code is meant to demonstrate a specific impact, such as unauthorized access or alteration of data.
- Public Disclosure and Weaponization: The PoC exploit is released, frequently shortly after the vulnerability has been announced to the public. This lets attackers exploit the flaw while the software vendor is still working on a patch. To illustrate, Cloudflare spotted an attacker using a PoC-based exploit only 22 minutes after its publication.
- Attack Execution: The attackers then use the weaponized PoC exploit against systems known to be vulnerable to it, attempting actions such as remote code execution and unauthorized access. The pace at which this happens is often much faster than the pace at which defenders can put proper security mechanisms in place, such as WAF rules or software fixes.
- Targeted Operations: Sometimes the attack is a planned operation in which the attackers are selective about which system or organization to target. For example, exploitation of CVE-2022-47966 in ManageEngine software was used in an espionage operation, where the attackers leveraged the vulnerability to install espionage-related tools and malware.
Precautions: Mitigation
The following measures mitigate the risk posed by rapidly weaponized PoC exploits:
1. Fast Patching and New Vulnerability Handling
- Introduce proper patching procedures so that security updates and disclosed vulnerabilities are addressed quickly.
- Prioritize patching of vulnerabilities that have publicly available PoC exploits, as these risk being exploited almost immediately.
- Frequently check for new vulnerability disclosures and PoC releases, and keep an incident response plan prepared for this purpose; a minimal monitoring sketch is shown below.
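As a starting point for tracking new disclosures, the sketch below polls the public NVD REST API (v2.0) for CVEs published in the last 24 hours. It is a minimal illustration under stated assumptions: the endpoint and date parameters come from NVD's documented API, but error handling, API keys, and rate limiting are omitted for brevity.

```python
# Minimal sketch: poll the NVD REST API (v2.0) for CVEs published in the
# last 24 hours. Requires the "requests" package; error handling, API keys,
# and rate limiting are omitted for brevity.
from datetime import datetime, timedelta, timezone
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(hours: int = 24) -> list[str]:
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    params = {
        # NVD expects ISO-8601 timestamps for the publication-date window.
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    data = requests.get(NVD_URL, params=params, timeout=30).json()
    return [item["cve"]["id"] for item in data.get("vulnerabilities", [])]

if __name__ == "__main__":
    for cve_id in recent_cves():
        print(cve_id)
```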
2. Leverage AI-Powered Security Tools
- Employ intelligent security tools that can automatically generate protection rules and signatures as attackers ramp up the weaponization of PoC exploits.
- Step up the use of AI-powered endpoint detection and response (EDR) tools to quickly detect and mitigate exploitation attempts.
- Integrate AI-based SIEM tools to detect and analyze indicators of compromise and enable a faster reaction; a toy anomaly-detection sketch is shown below.
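As a toy illustration of the kind of anomaly detection such tools rely on, the sketch below trains scikit-learn's IsolationForest on simple per-host event features and flags outliers. The feature values are invented for the example; production EDR and SIEM systems use far richer telemetry and models.

```python
# Toy sketch: flag anomalous hosts with an Isolation Forest.
# Features per host: [requests/min, distinct ports contacted, failed logins].
# The numbers below are invented for illustration only.
from sklearn.ensemble import IsolationForest

baseline = [
    [20, 3, 0], [25, 4, 1], [18, 2, 0], [30, 5, 1],
    [22, 3, 0], [27, 4, 0], [19, 2, 1], [24, 3, 0],
]
model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# A burst of requests, port scanning, and failed logins should score as anomalous.
suspicious = [[400, 60, 25]]
print(model.predict(suspicious))  # -1 means "anomaly", 1 means "normal"
```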
3. Network Segmentation and Hardening
- Use strong network segmentation to limit an attacker’s lateral movement across the network and to restrict the impact of successful attacks.
- Secure any hosts, services, or protocols that are accessible from the internet, such as RDP, CIFS, or Active Directory.
- Limit the use of native scripting tools as much as possible, because attackers may exploit them.
4. Vulnerability Disclosure and PoC Management
- Report bugs and PoC exploits to the affected vendors, and establish a common understanding of disclosure timelines to ensure fast response and mitigation.
- Incorporate mechanisms such as digital signing and encryption when managing and distributing PoC exploits, to prevent access by unauthorized persons.
- PoC exploits should be simple and self-contained, with clear and meaningful variable and function names, which helps reduce the time spent on triage and remediation.
5. Risk Assessment and Response to Incidents
- Maintain constant supervision of the environment with the intention of identifying signs of compromise as well as exploitation attempts.
- Support frequent detection, analysis, and mitigation of threats that use PoC exploits against the system and its components.
- Regularly communicate with security researchers and vendors to stay informed about current threats and how to prevent them; a simple log-scanning sketch is shown below.
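To make the monitoring advice concrete, the sketch below scans a web-server access log for a few common exploitation-attempt patterns (path traversal, shell invocation, SQL-injection keywords). The patterns and the log file name are illustrative assumptions, not a complete detection rule set.

```python
# Illustrative sketch: scan an access log for common exploitation-attempt
# patterns. "access.log" and the regexes are simplified examples only.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"\.\./\.\."),          # path traversal
    re.compile(r"/bin/(ba)?sh"),       # shell invocation attempts
    re.compile(r"(?i)union\s+select"), # basic SQL injection
]

def scan_log(path: str) -> list[tuple[int, str]]:
    hits = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
                hits.append((lineno, line.rstrip()))
    return hits

if __name__ == "__main__":
    for lineno, line in scan_log("access.log"):
        print(f"line {lineno}: {line}")
```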
Conclusion:
The rapid weaponization of Proof-of-Concept (PoC) exploits is one of the fastest-growing threats in cybersecurity today. Security teams must patch quickly, incorporate AI into their security tooling, segment their networks effectively, and always heed vulnerability announcements. A stronger incident response plan would further aid in handling these kinds of threats. By applying the measures above, organizations can slow the weaponization of PoC exploits against them and reduce the probability of a successful cyber attack.
References:
- https://www.mayrhofer.eu.org/post/vulnerability-disclosure-is-positive/
- https://www.uptycs.com/blog/new-poc-exploit-backdoor-malware
- https://www.balbix.com/insights/attack-vectors-and-breach-methods/
- https://blog.cloudflare.com/application-security-report-2024-update

Introduction
There is a rising demand for artificial intelligence (AI) laws that limit threats to public safety and protect human rights while allowing a flexible and innovative environment. Most AI policies prioritize the use of AI for the public good. The most compelling reason to treat AI innovation as a valid goal of public policy is its promise to enhance people's lives by helping to resolve some of the world's most difficult challenges and inefficiencies, and to emerge as a transformational technology, much like mobile computing. This blog explores the complex interplay between AI and internet governance from an Indian standpoint, examining the challenges, the opportunities, and the necessity of a well-balanced approach.
Understanding Internet Governance
Before delving into an examination of their connection, let's establish a comprehensive grasp of Internet Governance. The term covers the regulations, guidelines, and standards that shape how the Internet is operated and managed globally. With the internet being a shared resource, governance becomes crucial to ensure its accessibility, its security, and the equitable distribution of its benefits.
The Indian Digital Revolution
India has witnessed an unprecedented digital revolution, with a massive surge in internet users and a burgeoning tech ecosystem. The government's Digital India initiative has played a crucial role in fostering a technology-driven environment, making technology accessible to even the remotest corners of the country. As AI applications become increasingly integrated into various sectors, the need for a comprehensive framework to govern these technologies becomes apparent.
AI and Internet Governance Nexus
The intersection of AI and Internet governance raises several critical questions. How should data, the lifeblood of AI, be governed? What role does privacy play in the era of AI-driven applications? How can India strike a balance between fostering innovation and safeguarding against potential risks associated with AI?
- AI's Role in Internet Governance:
Artificial Intelligence has emerged as a powerful force shaping the dynamics of the internet. From content moderation and cybersecurity to data analysis and personalized user experiences, AI plays a pivotal role in enhancing the efficiency and effectiveness of Internet governance mechanisms. Automated systems powered by AI algorithms are deployed to detect and respond to emerging threats, ensuring a safer online environment.
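As a simplified illustration of the AI-assisted content moderation mentioned above, the sketch below trains a tiny text classifier with scikit-learn to flag abusive comments. The training examples are invented and far too small to be useful in practice; real moderation systems rely on large labeled datasets and much more capable models.

```python
# Simplified sketch: a tiny text classifier for content moderation.
# The training data is invented and purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "have a great day", "thanks for sharing this",   # benign
    "you are an idiot", "I will hurt you",           # abusive
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = abusive

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["you absolute idiot"]))  # expected: [1]
```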
A comprehensive strategy for managing the interaction between AI and the internet is required to stimulate innovation while limiting hazards. Multistakeholder models, which draw input from governments, industry, academia, and civil society, are gaining appeal as viable tools for developing comprehensive governance frameworks.
The usefulness of multistakeholder governance stems from its adaptability and its flexibility in bringing together every player with a possible stake in an issue. Though imperfect, the approach allows its flaws to be remedied as the participants' collective knowledge builds. As AI advances, this trait will become increasingly important in ensuring that all conceivable aspects are covered.
The Need for Adaptive Regulations
While AI's potential for good is essentially endless, so is its potential for damage, whether intentional or unintentional. The technology's highly disruptive nature demands a strong, human-led governance framework and rules that ensure it is used in a positive and responsible manner. The rapid emergence of GenAI, in particular, underlines the critical need for such frameworks. Concerns about the use of GenAI may strengthen efforts to resolve issues around digital governance and hasten the development of risk management measures.
Several AI governance frameworks have been published throughout the world in recent years, with the goal of offering high-level guidelines for safe and trustworthy AI development. The OECD's "Principles on Artificial Intelligence" (OECD, 2019), the EU's "Ethics Guidelines for Trustworthy AI" (EU, 2019), and UNESCO's "Recommendations on the Ethics of Artificial Intelligence" (UNESCO, 2021) are among the multinational organizations that have released their own principles. However, the advancement of GenAI has resulted in additional recommendations, such as the OECD's newly released "G7 Hiroshima Process on Generative Artificial Intelligence" (OECD, 2023).
Several guidance documents and voluntary frameworks have emerged at the national level in recent years, including the "AI Risk Management Framework" from the United States National Institute of Standards and Technology (NIST), a voluntary guidance published in January 2023, and the White House's "Blueprint for an AI Bill of Rights," a set of high-level principles published in October 2022 (The White House, 2022). These voluntary policies and frameworks are frequently used as guidelines by regulators and policymakers all around the world. More than 60 nations in the Americas, Africa, Asia, and Europe had issued national AI strategies as of 2023 (Stanford University).
Conclusion
Governing AI will be one of the most daunting tasks confronting the international community in the decades ahead. As vital as the need to govern AI is the need to govern it appropriately. Current AI policy debates too often fall into a false dichotomy of progress versus doom (or geopolitical and economic benefits versus risk mitigation). Instead of thinking creatively, proposed solutions all too often resemble paradigms for yesterday's problems. It is imperative that we foster a relationship that prioritizes innovation, ethical considerations, and inclusivity. Striking the right balance will empower us to harness the full potential of AI within the boundaries of responsible and transparent Internet Governance, ensuring a digital future that is secure, equitable, and beneficial for all.
References
- The Key Policy Frameworks Governing AI in India - Access Partnership
- AI in e-governance: A potential opportunity for India (indiaai.gov.in)
- India and the Artificial Intelligence Revolution - Carnegie India - Carnegie Endowment for International Peace
- Rise of AI in the Indian Economy (indiaai.gov.in)
- The OECD Artificial Intelligence Policy Observatory - OECD.AI
- Artificial Intelligence | UNESCO
- Artificial intelligence | NIST