#FactCheck: Viral AI Video Showing Finance Minister of India endorsing an investment platform offering high returns.
Executive Summary:
A video circulating on social media falsely claims that India’s Finance Minister, Smt. Nirmala Sitharaman, has endorsed an investment platform promising unusually high returns. Upon investigation, it was confirmed that the video is a deepfake—digitally manipulated using artificial intelligence. The Finance Minister has made no such endorsement through any official platform. This incident highlights a concerning trend of scammers using AI-generated videos to create misleading and seemingly legitimate advertisements to deceive the public.

Claim:
A viral video falsely claims that the Finance Minister of India, Smt. Nirmala Sitharaman, is endorsing an investment platform, promoting it as a secure and highly profitable scheme for Indian citizens. The video alleges that individuals can start with an investment of ₹22,000 and earn up to ₹25 lakh per month in guaranteed daily income.

Fact check:
A reverse image search on key frames from the viral video led us to the original YouTube clip of the Finance Minister of India delivering a speech at a webinar on 'Regulatory, Investment and EODB reforms'. On reviewing the full video, we found nothing related to the viral investment scheme.
The manipulated video had AI-generated audio and scripted text injected into it to make it appear as though she had endorsed an investment platform.
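For readers who want to try the first step of this kind of verification themselves, the sketch below pulls sample frames from a downloaded clip so they can be submitted to a reverse image search engine. It is a minimal illustration, not the tooling used for this fact check; the file name "viral_clip.mp4" and the two-second sampling interval are assumptions made for the example.

```python
# Minimal sketch: sample frames from a local copy of a clip for reverse
# image search. "viral_clip.mp4" and the 2-second interval are assumptions.
import cv2

VIDEO_PATH = "viral_clip.mp4"       # hypothetical local copy of the viral video
FRAME_INTERVAL_SECONDS = 2          # sample one frame every 2 seconds

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS metadata is missing
step = int(fps * FRAME_INTERVAL_SECONDS)

frame_index = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break                        # end of video or read error
    if frame_index % step == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

cap.release()
print(f"Saved {saved} frames for reverse image search.")
```

The saved frames can then be uploaded to services such as Google Lens or TinEye to locate the original footage.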

Deepfakes can appear fairly realistic in their facial movement, but a closer look at this clip reveals mismatched lip-syncing and unnatural visual transitions, both telltale signs of AI manipulation.


Moreover, no legitimate government website or credible news outlet has acknowledged any such endorsement. The video is fabricated misinformation that attempts to scam viewers by exploiting the image of a trusted public figure.
Conclusion:
The viral video showing the Finance Minister of India, Smt. Nirmala Sitharaman, promoting an investment platform is fake and AI-generated. This is a clear case of deepfake misuse aimed at misleading the public and luring individuals into fraudulent schemes. Citizens are advised to exercise caution, verify any such claims through official government channels, and refrain from clicking on unknown investment links circulating on social media.
- Claim: Nirmala Sitharaman promoted an investment app in a viral video.
- Claimed On: Social Media
- Fact Check: False and Misleading

Executive Summary:
A video circulating on social media falsely claims to show Indian Air Chief Marshal AP Singh admitting that India lost six jets and a Heron drone during Operation Sindoor in May 2025. Investigation revealed that the footage was digitally manipulated by inserting an AI-generated voice clone of Air Chief Marshal Singh into his recent speech, which was streamed live on August 9, 2025.
Claim:
A viral video (archived video) (another link) shared by an X user carries the caption “Breaking: Finally Indian Airforce Chief admits India did lose 6 Jets and one Heron UAV during May 7th Air engagements.” and purports to show the Air Chief Marshal admitting to these losses during Operation Sindoor.

Fact Check:
A reverse image search on key frames from the video led us to a clip posted by ANI's official X handle. After watching the full clip, we found no mention of the alleged claim.

Further research led us to an extended version of the video on ANI's official YouTube channel, published on 9 August 2025. In his address at the 16th Air Chief Marshal L.M. Katre Memorial Lecture in Marathahalli, Bengaluru, Air Chief Marshal AP Singh made no mention of losing six jets or a drone in relation to the conflict with Pakistan. The discrepancies observed in the viral clip suggest that portions of the audio were digitally manipulated.

The audio in the viral video, particularly the segment around the 29:05 mark alleging the loss of six Indian jets, appears manipulated and shows noticeable inconsistencies in tone and clarity.
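For those curious how such tonal breaks can be examined, the sketch below plots a spectrogram of the audio around the 29:05 mark; splices where a cloned voice has been inserted often show up as abrupt discontinuities. It is a simplified illustration under stated assumptions, not the method used for this fact check: it assumes the clip's audio has already been extracted to a file named "viral_speech.wav" (for example with ffmpeg) and that librosa, NumPy and Matplotlib are installed.

```python
# Illustrative sketch: visualise the audio around the 29:05 mark.
# "viral_speech.wav" is an assumed, pre-extracted audio track.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

AUDIO_PATH = "viral_speech.wav"   # hypothetical extracted audio file
START_SECONDS = 29 * 60           # start shortly before the 29:05 mark
DURATION_SECONDS = 20             # length of the segment to inspect

# Load only the segment of interest.
y, sr = librosa.load(AUDIO_PATH, sr=None, offset=START_SECONDS, duration=DURATION_SECONDS)

# Convert to a log-amplitude spectrogram for visual inspection.
spec_db = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)

plt.figure(figsize=(10, 4))
librosa.display.specshow(spec_db, sr=sr, x_axis="time", y_axis="log")
plt.colorbar(format="%+2.0f dB")
plt.title("Audio around the 29:05 mark")
plt.tight_layout()
plt.savefig("spectrogram_2905.png")
```

A visual check like this is only a starting point; conclusions should rest on comparison with the original source footage, as done above.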
Conclusion:
The viral video claiming that Air Chief Marshal AP Singh admitted to the loss of six jets and a Heron UAV during Operation Sindoor is misleading. A reverse image search traced the footage to a clip posted by ANI's official X handle, in which no such remarks were made. Further, the extended version on ANI's official YouTube channel confirmed that no reference was made to the alleged losses during the 16th Air Chief Marshal L.M. Katre Memorial Lecture. Additionally, the viral video's audio, particularly around the 29:05 mark, showed signs of manipulation, with noticeable inconsistencies in tone and clarity.
- Claim: Viral Video Claiming IAF Chief Acknowledged Loss of Jets Found Manipulated
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Children today are growing up amidst technology, and the internet has become an important part of their lives. It offers them a wealth of recreational and educational opportunities and learning environments, but it also presents largely unseen risks, particularly deepfakes and misinformation. AI can perform complex tasks at remarkable speed, yet the misuse of AI technologies has led to rising cybercrime. These growing cyber threats can harm children's wellbeing and safety on the internet.
India's Digital Environment
India has one of the world's fastest-growing internet user bases, and more young netizens are coming online with every passing day. The internet has become an inseparable part of their everyday lives, whether for social media or online courses. But the speed at which the digital world is evolving has raised many privacy and safety concerns, increasing the chance of exposure to potentially dangerous content.
Misinformation: The Rising Concern
Today, the internet is awash with misinformation, and youngsters are especially vulnerable to its adverse effects. Given India's linguistic and cultural diversity, the spread of misinformation can have a widespread negative impact on society. In particular, misinformation in education can mislead young minds and hinder their cognitive development.
To address this issue, it is important that parents, academia, government, industry and civil society work together to promote digital literacy initiatives that teach children to critically analyse online material and navigate the digital realm more safely.
Deepfakes: The Deceptive Mirage
Deepfakes, digitally altered videos or images created with artificial intelligence, pose a serious online threat. Their possible ramifications are especially concerning in India, where dependence on media is high. Deepfakes can have far-reaching repercussions, from distorting political narratives to spreading misleading information.
Addressing the deepfake problem demands a multifaceted strategy. Media literacy programs should be integrated into the educational curriculum to assist youngsters in distinguishing between legitimate and distorted content. Furthermore, strict laws as well as technology developments are required to detect and limit the negative impact of deepfakes.
Safeguarding Children in Cyberspace
● Parental Guidance and Open Communication: Open communication and parental guidance are essential for protecting children online. Families should discuss appropriate internet use and its possible consequences openly, and parents should actively participate in their children's online activities and understand the platforms and material they consume.
● Educational Initiatives: Comprehensive programs for digital literacy must be implemented in educational settings. Critical thinking abilities, internet etiquette, and knowledge of the risks associated with deepfakes and misinformation should all be included in these programs. Fostering a secure online environment requires giving young netizens the tools they need to question and examine digital content.
● Policies and Rules: Acknowledging the risks posed by the misuse of advanced technologies such as AI and deepfakes, the Indian government is working towards dedicated legislation to tackle the issues arising from the misuse of deepfake technology by bad actors. The government has recently issued an advisory directing social media intermediaries to identify misinformation and deepfakes and to ensure compliance with the Information Technology (IT) Rules, 2021. Online platforms are under a legal obligation to prevent the spread of misinformation and to exercise due diligence, making reasonable efforts to identify misinformation and deepfakes. Legal frameworks need to be equipped to handle the challenges posed by AI, and accountability in AI is a complex issue that requires comprehensive legal reform. In light of the many reported cases of deepfake misuse and the spread of such content on social media, strong laws must be adopted and enforced to address the challenges posed by misinformation and deepfakes. Working with technology companies to implement advanced content-detection tools, and ensuring that law enforcement takes swift action against those who misuse the technology, will act as a deterrent to cyber crooks.
● Digital Parenting: Parents need to keep up with the latest trends and digital technologies. Digital parenting includes understanding privacy settings, monitoring online activity, and using parental-control tools to create a safe online environment for children.
Conclusion
As India continues its digital journey, protecting children in cyberspace has become a shared responsibility. By promoting digital literacy, encouraging open communication and enforcing strong laws, we can create a safer online environment for younger generations. Knowledge, understanding, and active efforts to combat misinformation and deeply entrenched myths are the keys to staying safe in the online age. Social media intermediaries and platforms must ensure compliance with the IT Rules, 2021, the IT Act, 2000, and the newly enacted Digital Personal Data Protection Act, 2023. Establishing a safe online space for children is the shared responsibility of the government, parents and teachers, users, and organisations.

Introduction
The discussions focused on cybersecurity measures, specifically addressing cybercrime in the context of emerging technologies such as Non-Fungible Tokens (NFTs), Artificial Intelligence (AI), and the Metaverse. Session 5 of the conference examined the interconnectedness between the darknet and cryptocurrency and the challenges it poses for law enforcement agencies and regulators. Panellists noted that understanding AI is necessary for enterprises, that current AI models have shortcomings but trustworthy AI remains the goal, and that AI technology must be transparent.
Darknet and Cryptocurrency
The darknet refers to the hidden part of the internet where illicit activities have proliferated in recent years. It was initially developed to provide anonymity, privacy, and protection to specific individuals such as journalists, activists, and whistleblowers. However, it has now become a playground for criminal activity. Cryptocurrency, particularly Bitcoin, has been widely adopted on the darknet because of its anonymous nature, which enables money laundering and other unlawful transactions.
Three major points emerge from this relationship: the integrated nature of the darknet and cryptocurrency, the need for regulations to prevent darknet-based crimes, and the importance of striking a balance between privacy and security.
Key Challenges:
- Integrated Relations: The darknet and cryptocurrency have evolved independently, with different motives and purposes. It is crucial to understand the integrated relationship between them and how criminals exploit this connection.
- Regulatory Frameworks: There is a need for effective regulations to prevent crimes facilitated through the darknet and cryptocurrency while striking a balance between privacy and security.
- Privacy and Security: Privacy is a fundamental right, and any measures taken to enhance security should not infringe upon individual privacy. A multistakeholder approach involving tech companies and regulators is necessary to find this delicate balance.
Challenges Associated with Cryptocurrency Use:
The use of cryptocurrency on the darknet poses several challenges. The risks associated with darknet-based cryptocurrency crimes are a significant concern. Additionally, regulatory challenges arise due to the decentralised and borderless nature of cryptocurrencies. Mitigating these challenges requires innovative approaches utilising emerging technologies.
Preventing Misuse of Technologies:
The discussion emphasised the need to stay a step ahead of those who would repurpose these technologies, which were developed for legitimate ends, and to prevent their use for crime.
Monitoring the Darknet:
The darknet, as explained, is an elusive part of the internet that necessitates the use of a special browser for access. Initially designed for secure communication by the US government, its purpose has drastically changed over time. The darknet’s evolution has given rise to significant challenges for law enforcement agencies striving to monitor its activities.
Around 95% of the activities carried out on the darknet are associated with criminal acts, and estimates suggest that over 50% of global cybercrime revenue originates from the darknet. In other words, roughly half of cybercrime, at least as measured by revenue, is facilitated through the darknet.
The exploitation of the darknet has raised concerns regarding the need for effective regulation. Monitoring the darknet is crucial for law enforcement, national agencies, and cybersecurity companies. The challenges associated with the darknet’s exploitation and the criminal activities facilitated by cryptocurrency emphasise the pressing need for regulations to ensure a secure digital landscape.
Use of Cryptocurrency on the Darknet
Cryptocurrency plays a central role in the activities taking place on the darknet. The discussion highlighted its involvement in various illicit practices, including ransomware attacks, terrorist financing, extortion, theft, and the operation of darknet marketplaces. These applications leverage cryptocurrency’s anonymous features to enable illegal transactions and maintain anonymity.
AI's Role in De-Anonymizing the Darknet and Monitoring Challenges:
- 1. AI's Potential in De-Anonymizing the Darknet
During the discussion, it was highlighted how AI could help de-anonymize the darknet. AI's pattern-recognition capabilities can aid in identifying and analysing patterns of behaviour within the darknet, enabling law enforcement agencies and cybersecurity experts to gain insights into its operations (a simplified illustration of this kind of pattern analysis is sketched after this list). However, there are limits to what AI can accomplish in this context: it cannot break encryption or directly associate patterns with specific users, but it can assist in identifying illegal marketplaces and facilitating their takedown. The dynamic nature of the darknet, with new marketplaces quickly emerging, adds further complexity to monitoring efforts.
- 2. Challenges in Darknet Monitoring
Monitoring the darknet poses various challenges due to its vast amount of data, anonymous and encrypted nature, dynamically evolving landscape, and the need for specialised access. These challenges make it difficult for law enforcement agencies and cybersecurity professionals to effectively track and prevent illicit activities.
- 3. Possible Ways Forward
To address the challenges, several potential avenues were discussed. Ethical considerations, striking a balance between privacy and security, must be taken into account. Cross-border collaboration, involving the development of relevant laws and policies, can enhance efforts to combat darknet-related crimes. Additionally, education and awareness initiatives, driven by collaboration among law enforcement, government entities, and academia, can play a crucial role in combating darknet activities.
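As a purely illustrative example of the kind of pattern recognition the panel referred to, the sketch below clusters a handful of hypothetical, hand-written marketplace listing descriptions using TF-IDF features and k-means. It is a toy under stated assumptions (invented listing strings, scikit-learn installed), not an operational monitoring tool; real darknet analysis involves lawful data collection, specialised crawlers and far richer features.

```python
# Toy illustration: cluster hypothetical listing descriptions to surface
# recurring patterns. The listing strings below are invented for the example.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

listings = [
    "listing offering stolen payment card data",
    "listing offering stolen card data with verification",
    "listing offering counterfeit identity documents",
    "listing offering forged document templates",
    "listing offering ransomware as a service",
    "listing offering ransomware tooling and support",
]

# Turn each listing into a TF-IDF vector, then group similar listings.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(listings)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for label, text in sorted(zip(kmeans.labels_, listings)):
    print(label, text)
```

Grouping similar listings is only a small first step; attributing activity to individuals or taking marketplaces down requires evidence and legal process well beyond such analysis.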
The panel also addressed questions from the audience:
- How can law enforcement agencies and regulators use AI to detect and prevent crimes involving the darknet and cryptocurrency? The panel answered that law enforcement officers should be AI- and technology-ready, and that appropriate upskilling programmes should be put in place.
- How should lawyers and the judiciary understand the problem and regulate it? The panel answered that AI should be applied only with careful attention to its outcomes, and that the law has to be clear about what is acceptable and what is not.
- Is it possible to align AI with human intention, and can we create ethical AI rather than merely talking about using AI ethically? The panel answered that we first have to understand how to behave ethically ourselves: step one is to focus on our own ethical behaviour, and step two is to bring that ethical dimension into software and technologies. AI can outperform humans at many tasks, so we must learn about it. Aligning AI with human intention and creating ethical AI remains a challenge, and the focus should be on ethical behaviour both in humans and in the development of AI technologies.
Conclusion
The G20 Conference on Crime and Security shed light on the intertwined relationship between the darknet and cryptocurrency and the challenges it presents to cybersecurity. The discussions emphasised the need for effective regulations, privacy-security balance, AI integration, and cross-border collaboration to tackle the rising cybercrime activities associated with the darknet and cryptocurrency. Addressing these challenges will require the combined efforts of governments, law enforcement agencies, technology companies, and individuals committed to building a safer digital landscape.