#FactCheck: Fake video falsely claims FM Sitharaman endorsed investment scheme
Executive Summary:
A video that has gone viral on Facebook claims that Union Finance Minister Nirmala Sitharaman endorsed a new government investment project. The video has been widely shared. However, our research indicates that the video was altered using AI and is being used to spread misinformation.

Claim:
The video claims that Finance Minister Nirmala Sitharaman is endorsing an automated trading system that promises daily earnings of ₹15,00,000 on an initial investment of ₹21,000.

Fact Check:
To check the genuineness of the claim, we ran a keyword search for “Nirmala Sitharaman investment program” but found no such investment scheme. We also observed that the lip movements appeared unnatural and did not align with the speech, leading us to suspect that the video had been AI-manipulated.
A reverse search of the video led us to a DD News live-stream of Sitharaman’s press conference after presenting the Union Budget on February 1, 2025. Sitharaman never mentioned any investment or trading platform during the press conference, confirming that the viral video was digitally altered. Technical analysis using Hive Moderation further found that the viral clip was manipulated through voice cloning.
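Reverse search engines match video keyframes by perceptual fingerprints rather than exact bytes. As a rough illustration of that idea (not the fact-checkers' actual tooling), the sketch below implements a minimal "difference hash" (dHash) over plain 2D grayscale arrays; a real pipeline would first decode frames with a library such as OpenCV or ffmpeg.

```python
# Minimal sketch of the perceptual "difference hash" (dHash) idea that
# reverse-image-search engines use to fingerprint video keyframes.
# Frames here are plain 2D lists of grayscale values (0-255); decoding
# real video frames is out of scope for this toy example.

def dhash(pixels, hash_size=8):
    """Downscale to (hash_size+1) x hash_size by nearest-neighbour
    sampling, then record whether each pixel is brighter than its
    right-hand neighbour. Returns a hash_size*hash_size bit string."""
    h, w = len(pixels), len(pixels[0])
    small = [
        [pixels[r * h // hash_size][c * w // (hash_size + 1)]
         for c in range(hash_size + 1)]
        for r in range(hash_size)
    ]
    bits = []
    for row in small:
        for c in range(hash_size):
            bits.append('1' if row[c] > row[c + 1] else '0')
    return ''.join(bits)

def hamming(a, b):
    """Number of differing bits; a small distance means near-duplicate frames."""
    return sum(x != y for x, y in zip(a, b))

# Two synthetic 64x64 "frames": a gradient and a slightly brightened copy,
# standing in for a frame from the viral clip and the original broadcast.
frame_a = [[(r + c) % 256 for c in range(64)] for r in range(64)]
frame_b = [[min(255, v + 5) for v in row] for row in frame_a]

if __name__ == "__main__":
    print(hamming(dhash(frame_a), dhash(frame_b)))  # near-duplicates -> 0
```

Because dHash encodes only relative brightness between neighbouring pixels, uniform brightness shifts, recompression, and resizing leave the fingerprint nearly unchanged, which is why the altered clip can still be traced back to the original DD News broadcast.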

Conclusion:
The viral video showing Union Finance Minister Nirmala Sitharaman endorsing a new government investment project is voice-cloned, manipulated, and false. This highlights the risk of online manipulation, making it crucial to verify news with credible sources before sharing it. With the growing risk of AI-generated misinformation, promoting media literacy is essential in the fight against false information.
- Claim: Fake video falsely claims FM Nirmala Sitharaman endorsed an investment scheme.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
With the rapid development of technology, voice-cloning scams are one issue that has recently come to light. Scammers have embraced AI, and their methods for deceiving people have evolved with it. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or impersonate them for illegal purposes. This post looks at the dangers and risks of AI voice-cloning fraud, how scammers operate, and how one can protect oneself.
What is Deepfake?
A “deepfake” is fake or altered audio, video, or imagery produced with artificial intelligence (AI) that can pass for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data using machine-learning algorithms. Con artists employ the technology to portray someone saying or doing something that never happened; deepfaked audio of public figures, including the American President, is a well-known example. Voice-impersonation technology can be used maliciously, such as in voice fraud or the dissemination of false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. With this technology, con artists can impersonate someone over the phone and pressure victims into providing personal information or paying money. A scammer may pose as a bank employee, a government official, or a friend or relative using a cloned voice, conveying a false sense of familiarity and urgency to earn the victim’s trust and raise the likelihood that they will fall for the hoax. Deepfake voice frauds are increasing in frequency as the underlying technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice-cloning technology to pose as people or entities and mislead users into providing private information, money, or system access. With it, they can create audio recordings that mimic real people such as CEOs, government officials, or bank employees, and use these to trick victims into actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information.

Deepfake voice technology is also employed in phishing attacks, where fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust. These recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, false audio evidence can be fabricated to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals.

In short, AI voice cloning gives con artists a potent tool for deceiving and manipulating victims. Organisations and the general public alike must be informed of this technology’s risks and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a cloned voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot such scams and avoid them:
- Steer clear of unsolicited calls: One of the most common tactics used by deepfake voice con artists, who often pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice: If someone phones you claiming to be a person you know, pay special attention to their voice. Are there peculiar pauses or inflexions in their speech? Anything that doesn’t seem right could be voice fraud.
- Verify the caller’s identity: To avoid falling for a deepfake voice scam, it is crucial to verify the caller’s identity. When in doubt, ask for their name, job title, and employer, and then do some research to confirm they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign of a scam.
- Report any suspicious activity: If you think you have fallen victim to a voice-cloning fraud, inform the appropriate authorities, which may include your bank, credit card company, local police station, or the nearest cyber cell. Reporting the fraud could prevent others from becoming victims.
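The “listen closely” advice above has a crude signal-processing analogue: spliced or synthesised audio can leave abrupt discontinuities that simple frame statistics sometimes surface. Real detection tools (such as the Hive analysis mentioned earlier) rely on trained models; the sketch below is only a toy illustration of the idea, flagging sudden energy jumps in a synthetic signal.

```python
# Toy illustration (NOT a real deepfake detector): flag abrupt
# frame-to-frame energy jumps, one crude cue for cut-and-paste audio edits.
import math

def frame_rms(samples, frame_len=200):
    """Root-mean-square energy per non-overlapping frame."""
    return [
        math.sqrt(sum(s * s for s in samples[i:i + frame_len]) / frame_len)
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def splice_suspects(samples, frame_len=200, jump_ratio=3.0):
    """Indices of frames whose energy jumps sharply versus the previous
    frame -- a possible sign of an edit point."""
    rms = frame_rms(samples, frame_len)
    return [i for i in range(1, len(rms))
            if rms[i] > jump_ratio * max(rms[i - 1], 1e-9)]

# Synthetic "speech": a 440 Hz tone whose loudness quadruples abruptly
# halfway through, mimicking a splice point in an edited recording.
rate = 8000
natural = [0.2 * math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]
spliced = natural[:rate // 2] + [4 * s for s in natural[rate // 2:]]

if __name__ == "__main__":
    print(splice_suspects(natural))   # smooth signal: no flagged frames
    print(splice_suspects(spliced))   # one flag at the splice frame
```

In practice, genuine speech also varies in loudness, so a threshold like this alone produces false alarms; it merely shows why trained listeners and ML models look for statistical oddities rather than trusting the voice at face value.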
Conclusion
In conclusion, AI voice-cloning technology is expanding fast and has huge potential for both beneficial and detrimental effects. While it can be used for good, such as improving speech-recognition systems or making voice assistants sound more natural, it can also be used for harm, such as voice-cloning frauds and impersonation to fabricate stories. As the technology develops and deepfakes become harder to detect, users must be aware of the hazard and take the necessary precautions to protect themselves. Ongoing research into efficient techniques to identify and control the associated risks is also necessary. We must deploy AI responsibly and ethically to ensure that voice-cloning technology benefits society rather than harming or deceiving it.
Introduction
As the calendar pages turn inexorably towards 2024, a question looms large on the horizon of our collective consciousness: Are we cyber-resilient? This is not a rhetorical flourish but a pragmatic inquiry, as the digital landscape we navigate is fraught with cyberattacks and disruptions that threaten to capsize our virtual vessels.
What, then, is Cyber Resilience? It is the capacity to prepare for, respond to, and recover from these cyber squalls. Picture, if you will, a venerable oak amid a howling gale. The roots, those unseen sinews, delve deep into the earth, anchoring the tree – this is preparation. The robust trunk and flexible branches, swaying yet unbroken, embody response. And the new growth that follows the storm's rage is recovery. Cyber resilience is the digital echo of this natural strength and flexibility.
The Need for Resilience
Why, you might ask, is Cyber Resilience of such paramount importance as we approach 2024? The answer lies in the stark reality of our times:
- A staggering half of businesses have been breached by cyberattacks in the past three years.
- The financial haemorrhage from these incursions is projected to exceed a mind-numbing $10 trillion by the end of 2024.
- The relentless march of technology has not only brought innovation but also escalated the arms race against cyber threats.
- Cyber resilience transcends mere cybersecurity; it is a holistic approach that weaves recovery and continuity into the fabric of digital defenses.
- The adaptability of organisations, often through measures such as remote working protocols, is a testament to the evolving strategies of cyber resilience.
- The advent of AI and Machine Learning heralds a new era of automated cyber defense, necessitating an integrated framework that marries security with continuity protocols.
- Societal awareness, particularly of social engineering tactics, and maintaining public relations during crises are now recognised as critical elements of resilience strategies.
- Cyber threats have evolved in sophistication, paralleling the intense competition to develop new AI-driven solutions.
- As we gaze towards the future, cyber resilience is expected to be a prominent trend in both business and consumer technology sectors throughout 2024.
The Virtues
The benefits of cyber resilience for organisations are manifold, offering a bulwark against the digital onslaught:
- A reduction in the risk of data breaches, safeguarding sensitive information and customer data.
- Business continuity, ensuring operations persist with minimal disruption.
- Protection of reputation, as companies that demonstrate effective cyber resilience engender trust.
- Compliance with data protection and privacy regulations, thus avoiding fines and legal entanglements.
- Financial stability, as the costs associated with breaches can be mitigated or even prevented.
- Enhanced customer trust, as clients feel more secure with companies that take cybersecurity seriously.
- A competitive advantage in a market rife with cyber threats.
- Innovation and agility, as cyber-resilient companies can pivot and adapt without fear of digital disruptions.
- Employee confidence, leading to improved morale and productivity.
- Long-term savings by sidestepping the expenses of frequent or major cyber incidents.
As the year wanes, it is a propitious moment to evaluate your organisation's cyber resilience. In this edition, we will guide you through the labyrinth of cyber investment buy-in, tailored discussions with stakeholders, and the quintessential security tools for your 2024 cybersecurity strategy.
How to be more Resilient
Cyber resilience is more than a shield; it is the preparedness to withstand and recover from a cyber onslaught. Let us explore the key steps to fortify your digital defenses:
- Know your risks: Map the terrain where you are most vulnerable, identify the treasures that could be plundered, and fortify accordingly.
- Get the technology right: Invest in solutions that not only detect threats with alacrity but also facilitate rapid recovery, all the while staying one step ahead of the cyber brigands.
- Involve your people: Embed cybersecurity awareness into the fabric of every role. Train your crew in the art of recognising and repelling digital dangers.
- Test your strategies: Regularly simulate incidents to stress-test your policies and procedures, honing your ability to contain and neutralise threats.
- Plan for the worst: Develop a playbook so that everyone knows their part in the grand scheme of damage control and communication in the event of a breach.
- Continually review: The digital seas are ever-changing; adjust your sails accordingly. Cyber resilience is not a one-time endeavour but a perpetual commitment.
Conclusion
As we stand on the precipice of 2024, let us not be daunted by the digital storms that rage on the horizon. Instead, let us embrace the imperative of cyber resilience, for it is our steadfast companion in navigating the treacherous waters of the cyber world. Civil society organisations such as the CyberPeace Foundation play a crucial role in promoting cyber resilience by bridging the gap between the public and cybersecurity complexities, conducting awareness campaigns, and advocating for robust policies to safeguard collective digital interests. Their active role is imperative in fostering a culture of cyber hygiene and vigilance.
References
- https://www.loginradius.com/blog/identity/cybersecurity-trends-2024/
- https://ciso.economictimes.indiatimes.com/news/ciso-strategies/cisos-guide-to-2024-top-10-cybersecurity-trends/106293196
Introduction
Cyber slavery has emerged as a serious menace. Offenders target innocent individuals, luring them with false promises of employment, only to hold them captive and subject them to horrific torture and forced labour. According to reports, hundreds of Indians have been imprisoned in 'cyber slavery' in certain Southeast Asian countries. Indians who travelled to Southeast Asian nations such as Cambodia hoping to find work and establish themselves have fallen into this trap. Reportedly, 30,000 Indians who travelled to the region on tourist visas between 2022 and 2024 did not return. India Today’s coverage showed how survivors who escaped and returned to India have described the terrifying experiences of being coerced into cyber slavery.
Tricked by a Job Offer, Trapped in Cyber Slavery
India Today aired testimonials of cyber slavery victims who described how they were trapped. One individual shared that he had applied for a well-paying job as an electrician in Cambodia through an agent in Delhi. However, upon arriving in Cambodia, he was offered a job with a Chinese company where he was forced to participate in cyber scam operations and online fraudulent activities.
He revealed that a personal system and mobile phone were provided, and they were compelled to cheat Indian individuals using these devices and commit cyber fraud. They were forced to work 12-hour shifts. After working there for several months, he repeatedly requested his agent to help him escape. In response, the Chinese group violently loaded him into a truck, assaulted him, and left him for dead on the side of the road. Despite this, he managed to survive. He contacted locals and eventually got in touch with his brother in India, and somehow, he managed to return home.
This case highlights how cyber-criminal groups deceive innocent individuals with the false promise of employment and then coerce them into committing cyber fraud against their own country. According to the Ministry of Home Affairs' Indian Cyber Crime Coordination Center (I4C), there has been a significant rise in cybercrimes targeting Indians, with approximately 45% of these cases originating from Southeast Asia.
CyberPeace Recommendations
Cyber slavery has emerged as a serious problem, beginning with digital deception and progressing to physical torture and violence to force victims to commit fraudulent online acts. It is a grave issue that also violates human rights. The government has taken note of the situation, and the Indian Cyber Crime Coordination Centre (I4C) is taking proactive steps to address it. It is important for netizens to exercise due care and caution, as awareness is the first line of defence. By remaining vigilant, they can detect the digital deceit of phoney job opportunities in foreign nations and the manipulative techniques of scammers. Staying watchful and double-checking information against reliable sources can protect netizens from threats that could endanger their lives.
References
- CyberPeace Highlights Cyber Slavery: A Serious Concern https://www.cyberpeace.org/resources/blogs/cyber-slavery-a-serious-concern
- https://www.indiatoday.in/india/story/india-today-operation-cyber-slaves-stories-of-golden-triangle-network-of-fake-job-offers-2642498-2024-11-29
- https://www.indiatoday.in/india/video/cyber-slavery-survivors-narrate-harrowing-accounts-of-torture-2642540-2024-11-29?utm_source=washare