#FactCheck: Viral AI Video Shows India's Finance Minister Endorsing an Investment Platform Offering High Returns
Executive Summary:
A video circulating on social media falsely claims that India’s Finance Minister, Smt. Nirmala Sitharaman, has endorsed an investment platform promising unusually high returns. Upon investigation, it was confirmed that the video is a deepfake—digitally manipulated using artificial intelligence. The Finance Minister has made no such endorsement through any official platform. This incident highlights a concerning trend of scammers using AI-generated videos to create misleading and seemingly legitimate advertisements to deceive the public.

Claim:
A viral video falsely claims that the Finance Minister of India, Smt. Nirmala Sitharaman, is endorsing an investment platform, promoting it as a secure and highly profitable scheme for Indian citizens. The video alleges that individuals can start with an investment of ₹22,000 and earn up to ₹25 lakh per month as guaranteed income.

Fact check:
A reverse image search on key frames from the viral video led us to the original YouTube clip of the Finance Minister delivering a speech at a webinar on 'Regulatory, Investment and EODB reforms'. Nothing in the full speech relates to the viral investment scheme.
AI-generated voice audio and scripted text have been injected into the original footage to make it appear as though she has approved an investment platform.
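The first step above, sampling key frames so they can be run through a reverse image search, can be sketched in a few lines. This is a minimal illustration rather than the exact tooling used in this fact check: it assumes a hypothetical local file `viral_clip.mp4` and a system `ffmpeg` install, and it only builds the command string rather than running it.

```python
import shlex

def keyframe_command(video_path: str, out_dir: str, fps: float = 0.5) -> str:
    """Build an ffmpeg command that samples one frame every 1/fps seconds.

    The extracted PNGs can then be uploaded to a reverse image search
    engine (e.g. Google Images or TinEye) to locate the original footage.
    """
    args = [
        "ffmpeg",
        "-i", video_path,             # input video (hypothetical file name)
        "-vf", f"fps={fps}",          # fps=0.5 => one frame every 2 seconds
        f"{out_dir}/frame_%04d.png",  # numbered output frames
    ]
    return " ".join(shlex.quote(a) for a in args)

cmd = keyframe_command("viral_clip.mp4", "frames")
print(cmd)
```

Running the printed command against the clip yields a folder of still frames; a handful of clear, front-facing frames is usually enough for the search engines to surface the source video.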

Deepfakes can look convincing at a glance because facial movement is reproduced fairly realistically; on closer inspection, however, mismatched lip-syncing and unnatural visual transitions give this video away.


Also, there is no acknowledgment of any such endorsement on any legitimate government website or from a credible news outlet. The video is fabricated misinformation that attempts to scam viewers by leveraging the image of a trusted public figure.
Conclusion:
The viral video showing the Finance Minister of India, Smt. Nirmala Sitharaman, promoting an investment platform is fake and AI-generated. This is a clear case of deepfake misuse aimed at misleading the public and luring individuals into fraudulent schemes. Citizens are advised to exercise caution, verify any such claims through official government channels, and refrain from clicking on unknown investment links circulating on social media.
- Claim: Nirmala Sitharaman promoted an investment app in a viral video.
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
Misinformation poses a significant challenge to public health policymaking: it undermines efforts to promote effective health interventions and protect public well-being. The spread of inaccurate information, particularly through social media and other online platforms, further complicates decision-making by perpetuating public confusion and distrust. Such misinformation can fuel resistance to health initiatives, such as vaccination programmes, and scepticism towards scientifically backed health guidelines.
Before the COVID-19 pandemic, healthcare misinformation largely concerned the effects of alcohol and tobacco consumption, marijuana use, eating habits, and physical exercise. There has been a marked shift in the years since. One example is the outcry against palm oil in 2024: an ingredient prevalent in numerous food and cosmetic products, it came under the scanner after claims that palmitic acid, a component of palm oil, is detrimental to health. However, scientific research by reputable institutions globally established that there is no cause for concern regarding the health risks posed by palmitic acid. Such trends and commentaries create a parallel, unscientific discourse that can shape not only individual choices but also public opinion and, in turn, market developments and policy conversations.
A prevailing narrative during the worst of the COVID-19 pandemic was that the virus had been engineered to control society and boost hospital profits. The extensive misinformation surrounding COVID-19 and its management increased vaccine hesitancy worldwide. Vaccine hesitancy is a consistent historical trend: the World Health Organisation has flagged it as one of the main threats to global health, and there have been other instances where large sections of the population refused vaccination, anticipating unverified, long-lasting side effects. For example, research from 2016 observed significant public scepticism regarding the development and approval process of a Zika vaccine in Africa. Further studies emphasised the urgent need to disseminate accurate information about the Zika virus on online platforms to help curb the outbreak.
In India, despite multiple official advisories, notifications and guidelines issued by the government and the ICMR during the COVID-19 pandemic, many people remained opposed to vaccination, which contributed to inflated mortality rates in the country. Vaccine hesitancy was compounded by anti-vaccination celebrities who claimed that vaccines were dangerous, fuelling the conspiracy theories doing the rounds. Similar hesitancy had been noted earlier, when misinformation about the MMR vaccine and its alleged role in causing autism circulated widely. During the crisis, the Indian government also had to tackle disinformation-driven fraud surrounding the supply of oxygen to hospitals: many critically ill patients relied on fake news and unverified sources that falsely advertised the availability of beds, oxygen cylinders and even home set-ups, only to be cheated out of money.
The above examples highlight the difficulty health officials face in administering adequate healthcare. The COVID-19 pandemic also showed how existing legal frameworks fail to address misinformation and disinformation, which impedes effective policymaking. Corrective measures against health-related misinformation are themselves difficult: a correction creates an uncomfortable cognitive gap in an individual's mind, and people often ignore accurate information that would bridge that gap. Misinformation, coupled with the infodemic trend, also feeds false memory syndrome, whereby people fail to differentiate authentic information from fake narratives. Simple attempts to correct misperceptions frequently backfire and even strengthen initial beliefs, especially on complex issues like healthcare. Policymakers thus struggle to balance policymaking with making people receptive to those policies, against a backdrop of public tendencies to reject or suspect authoritative action. Examples can be observed both domestically and internationally. In the US, for instance, the traditional healthcare system rations access to healthcare through a combination of insurance costs and options versus out-of-pocket essential expenses. Though long debated, this had not created a large-scale public healthcare crisis, because the incentives offered to medical professionals and public trust in the delivery of essential services helped balance the conversation. More recently, however, a narrative shift has sensationalised the system as one of deliberate "denial of care," raising concerns about harm to patients.
Policy Recommendations
The hindrances posed by misinformation in policymaking are exacerbated by policymakers' reliance on social media to gauge public sentiment, consensus and opinion. If misinformation about an outbreak is not effectively addressed, it can keep individuals from adopting necessary protective measures and worsen the spread of the epidemic. To improve healthcare policymaking amidst the challenges posed by health misinformation, policymakers must take a multifaceted approach. This includes convening a broad coalition of central, state, local, territorial, tribal, private, nonprofit, and research partners to assess the impact of misinformation and develop effective preventive measures. Collaboration between government bodies such as the Ministry of Health and the Ministry of Electronics and Information Technology should be encouraged, whereby doctors debunk online medical misinformation, given the increased reliance on online forums for medical advice. Furthermore, increasing investment in research dedicated to understanding misinformation, along with the ongoing modernisation of public health communications, is essential. Enhancing the resources and technical support available to state and local public health agencies will also enable them to better address public queries and concerns and counteract misinformation. Additionally, expanding efforts to build long-term resilience against misinformation through comprehensive educational programmes is crucial for fostering a well-informed public capable of critically evaluating health information.
From an individual perspective, since almost half a billion people use WhatsApp, it has become a platform where false health claims can spread rapidly. Viral WhatsApp messages containing fake health warnings can be dangerous, so it is always advisable to verify such messages before acting on or forwarding them. This underscores the growing concern about the dangers of misinformation and the need for accurate information on medical matters.
Conclusion
The proliferation of misinformation in healthcare poses significant challenges to effective policymaking and public health management. The COVID-19 pandemic has underscored the role of misinformation in vaccine hesitancy, fraud, and increased mortality rates. There is an urgent need for robust strategies to counteract false information and build public trust in health interventions; this includes policymakers engaging in comprehensive efforts, including intergovernmental collaboration, enhanced research, and public health communication modernization, to combat misinformation. By fostering a well-informed public through education and vigilance, we can mitigate the impact of misinformation and promote healthier communities.
References
- van der Meer, T. G. L. A., & Jin, Y. (2019), “Seeking Formula for Misinformation Treatment in Public Health Crises: The Effects of Corrective Information Type and Source” Health Communication, 35(5), 560–575. https://doi.org/10.1080/10410236.2019.1573295
- “Health Misinformation”, U.S. Department of Health and Human Services. https://www.hhs.gov/surgeongeneral/priorities/health-misinformation/index.html
- Mechanic, David, “The Managed Care Backlash: Perceptions and Rhetoric in Health Care Policy and the Potential for Health Care Reform”, Rutgers University. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2751184/pdf/milq_195.pdf
- “Bad actors are weaponising health misinformation in India”, Financial Express, April 2024.
- “Role of doctors in eradicating misinformation in the medical sector.”, Times of India, 1 July 2024. https://timesofindia.indiatimes.com/life-style/health-fitness/health-news/national-doctors-day-role-of-doctors-in-eradicating-misinformation-in-the-healthcare-sector/articleshow/111399098.cms

AI-generated content has come to occupy a growing space in today's fast-changing tech landscape. Generative AI has emerged as a powerful tool enabling the creation of hyper-realistic audio, video, and images. While advantageous, this ability has downsides, too, particularly for content authenticity and manipulation.
The impact of this content spans ethical, psychological and social harms seen over the past couple of years. A major concern is the creation of non-consensual explicit content, including fake nudes, in which an individual's face is superimposed onto explicit images or videos without their consent. This is not just a violation of privacy; it can have enormous consequences for victims' professional and personal lives. This blog examines the existing laws and whether they are equipped to deal with the challenges this content poses.
Understanding the Deepfake Technology
A deepfake is a media file (image, video, or speech), typically representing a human subject, that has been altered deceptively using deep neural networks (DNNs). The technology is used to alter a person's identity, usually in the form of a "face swap," where the identity of a source subject is transferred onto a destination subject. The destination's facial expressions and head movements remain the same, but the appearance in the video is that of the source. In videos, identities can be substituted by way of replacement or reenactment.
This superimposition produces realistic fabrications, such as fake nudes. Creating a deepfake is no longer a costly endeavour: it requires a Graphics Processing Unit (GPU); software that is free, open source, and easy to download; and some graphics-editing and audio-dubbing skills. Common tools include DeepFaceLab and FaceSwap, both public, open-source projects supported by thousands of users who actively participate in the evolution and development of these software tools and models.
Legal Gaps and Challenges
Multiple gaps and challenges exist in the legal space for deepfakes and their regulation. They are:
- Inadequate definitions governing AI-generated explicit content often lead to enforcement challenges.
- Jurisdictional challenges arise from the cross-border nature of these crimes, and international cooperation mechanisms for AI-generated content are still in their early stages.
- Existing consent-based and harassment laws leave gaps when applied to AI-generated nudes.
- Proving intent and identifying perpetrators in digital crimes remains a challenge yet to be overcome.
Policy Responses and Global Trends
Presently, the global response to deepfakes is still developing. The UK has enacted the Online Safety Act, the EU has the AI Act, the US has federal laws such as the National AI Initiative Act of 2020, and India is currently developing the India AI Act as specific legislation dealing with AI and its related issues.
The IT Rules, 2021, and the DPDP Act, 2023, regulate digital platforms by mandating content governance, privacy policies, grievance redressal, and compliance with removal orders. Emphasising intermediary liability and safe harbour protections, these laws play a crucial role in tackling harmful content like AI-generated nudes, while the DPDP Act focuses on safeguarding privacy and personal data rights.
Bridging the Gap: CyberPeace Recommendations
- Initiate legislative reforms that advocate clear and precise definitions for consent frameworks and institute high penalties for AI-based offences, particularly those involving sexually explicit material.
- Advocate global cooperation by establishing international standards and bilateral and multilateral treaties that address the cross-border nature of these offences.
- Push for platform accountability through stricter responsibility for detecting and removing harmful AI-generated content; platforms should introduce strong screening mechanisms to counter the influx of such material.
- Run public awareness campaigns that educate users about their rights and the resources available to them if they are targeted.
Conclusion
The rapid advancement of AI-generated explicit content demands immediate and decisive action. As this technology evolves, the gaps in existing legal frameworks become increasingly apparent, leaving individuals vulnerable to profound privacy violations and societal harm. Addressing this challenge requires adaptive, forward-thinking legislation that prioritises individual safety while fostering technological progress. Collaborative policymaking is essential and requires uniting governments, tech platforms, and civil society to develop globally harmonised standards. By striking a balance between innovation and societal well-being, we can ensure that the digital age is not only transformative but also secure and respectful of human dignity. Let’s act now to create a safer future!
References
- https://etedge-insights.com/technology/artificial-intelligence/deepfakes-and-the-future-of-digital-security-are-we-ready/
- https://odsc.medium.com/the-rise-of-deepfakes-understanding-the-challenges-and-opportunities-7724efb0d981
- https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/

Introduction
Summer vacations have always been one of the most anticipated times in a child's life. Not long ago, they looked entirely different: the season was filled with outdoor games, muddy hands, mango-stained mouths, and stories shared with cousins under the stars. Children lived in the moment, playing in parks, riding bicycles, and inventing new adventures without a screen in sight. Today, those same summer days are shaped by glowing devices, virtual games, and hours spent online. While technology brings learning and entertainment, it also invites risks that parents cannot ignore. The Cyber Mom Toolkit is here to help you navigate this shift, offering simple and thoughtful ways to keep your children safe, balanced, and joyful during these screen-filled holidays.
The Hidden Cyber Risks of Summer Break
With increased leisure time and less supervision, children are likely to venture into unknown reaches of the internet. I4C reports indicate that child-related cases, such as cyberbullying, sextortion, and exposure to offensive content, surge during school vacations. Gaming applications, social networking applications, and YouTube can serve as entry points for cyber predators and spammers. That's why it is important that parents, particularly mothers, know what digital spaces their children inhabit and how to intervene appropriately.
Your Action Plan for Being a Cyber Smart Mom
Moms Need to Get Digitally Engaged
You do not need to be a tech expert to become a cyber smart mom. With just a few simple digital skills, you can start protecting your child online with confidence and ease.
1. Know the Platforms Your Children Use
Spend some time investigating apps such as Instagram, Snapchat, Discord, YouTube, or computer games like Roblox and Minecraft. Familiarise yourself with the type of content, chat options, and privacy loopholes they may have.
2. Install Parental Controls
Make use of native features on devices (Android, iOS, Windows) to limit screen time, block mature content, and track downloads. Applications such as Google Family Link and Apple Screen Time enable parents to control apps and web browsing.
3. Develop a Family Cyber Agreement
Establish common rules such as:
- No devices in bedrooms past 9 p.m.
- Add only safe connections on social media.
- Don't open suspicious messages or click on mysterious links.
- Always tell your mom if something makes you feel uncomfortable online.
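The "don't click on mysterious links" rule can even become a teaching moment. Below is a minimal, illustrative Python sketch (standard library only; the heuristics are our own simplifications, not a real scam detector) that a parent and child could use together to spot a few common red flags in a link before opening it.

```python
import ipaddress
from urllib.parse import urlparse

def link_red_flags(url: str) -> list[str]:
    """Return a list of simple red flags for a URL (illustrative heuristics only)."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        flags.append("not using HTTPS")
    try:
        # A bare IP address instead of a named domain is a classic phishing sign.
        ipaddress.ip_address(host)
        flags.append("raw IP address instead of a domain name")
    except ValueError:
        # Many nested subdomains can hide the real site, e.g. bank.com.evil.example
        if host.count(".") >= 3:
            flags.append("many subdomains (possible impersonation)")
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode domain (possible lookalike characters)")
    return flags

print(link_red_flags("http://192.168.1.10/login"))
```

A clean link such as `https://example.com` returns an empty list, while the example above is flagged twice. The point is less the code itself than the habit it teaches: pause and inspect a link before tapping it.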
Talk Openly and Often
Kids tend to hide things online because they fear punishment or embarrassment. Trust works better than surveillance. Here's how to build it:
- Have non-judgmental chats about what they do online.
- Use news reports or real-life cases as conversation starters: "Did you hear about that YouTuber's hacked account?"
- Encourage them to question things if they're confused or frightened.
- Treat their online life as a legitimate part of their lives.
Look for the Signs of Online Trouble
Stay alert to subtle changes in your child’s behavior, as they can be early signs of trouble in their online world.
- Sudden secrecy or aggression when questioned about online activity.
- Overuse of screens, particularly in the evening.
- Deterioration in school work or interest in leisure activities.
- Mood swings, anxiety, or withdrawn behaviour.
If you notice these signs, speak to your child calmly. You can also report serious matters such as cyberbullying or blackmail to the Cybercrime Helpline 1930 or at https://cybercrime.gov.in.
Support Healthy Digital Behaviours
Teach your kids to be good netizens by leading them to:
- Reflect Before Posting: No address, school name, or family information should ever appear in public posts.
- Set Strong Passwords: Passwords must be long, complicated, and not disclosed to friends, even best friends.
- Enable Privacy Settings: Keep social media accounts private. Disable location sharing. Restrict comments and messages from strangers.
- Vigilance: Encourage them to spot fake news, scams, and manipulative ads. Critical thinking is the ultimate defence.
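The "strong passwords" tip above can be put into practice with Python's standard `secrets` module. A minimal sketch follows; the 16-character length and the character pool are our own illustrative choices, not a prescribed standard.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, digits and punctuation."""
    pool = string.ascii_letters + string.digits + string.punctuation
    while True:
        # secrets (unlike random) draws from a cryptographically secure source
        pw = "".join(secrets.choice(pool) for _ in range(length))
        # keep only passwords that actually mix character classes
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw

print(generate_password())
```

Generating a password this way, and storing it in a password manager rather than sharing it, is an easy habit even young users can adopt.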
Where to Learn More and Get Support as a Cyber Mom
Cyber moms looking to deepen their understanding of online safety can explore a range of helpful resources offered by CyberPeace. Our blog features easy-to-understand articles on current cyber threats, safety tips, and parenting guidance for the digital age. You can also follow our social media pages for regular updates, quick tips, and awareness campaigns designed especially for families. If you ever feel concerned or need help, the CyberPeace Helpline is available to offer support and guidance. (+91 9570000066 or write to us at helpline@cyberpeace.net). For those who want to get more involved, joining the CyberPeace Corps allows you to become part of a larger community working to promote digital safety and cyber awareness across the country.
Empowering Mothers Empowers Society
We at CyberPeace believe that every mother, irrespective of her background or technological expertise, has the potential to be a Cyber Mom. The intention is not to control the child but to mentor them towards safer decisions, identify issues early, and prepare them for a lifetime of online responsibility. Mothers are empowered when they know. And children are safe when they are protected.
Conclusion
The web isn't disappearing, and neither are its dangers. But when mothers are digital role models, they can make summer screen time a season of wise decisions. This summer, become a Cyber Mom: someone who learns, leads, and listens. Whether it's installing a parental control app, talking openly about cyberbullying, or just asking your child, "What did you discover online today?", that engagement can make a difference. This summer break, help your child become digitally equipped with the skills and knowledge they need to navigate the online world safely and confidently.
Cyber safety starts at home, and there's no better point of departure than being alongside your child, rather than behind them.
References
- https://cybercrime.gov.in
- https://support.apple.com/en-in/HT208982
- https://beinternetawesome.withgoogle.com
- https://www.cyberpeace.org
- https://ncpcr.gov.in