#FactCheck: Viral AI Video Showing the Finance Minister of India Endorsing an Investment Platform Offering High Returns
Executive Summary:
A video circulating on social media falsely claims that India’s Finance Minister, Smt. Nirmala Sitharaman, has endorsed an investment platform promising unusually high returns. Upon investigation, it was confirmed that the video is a deepfake—digitally manipulated using artificial intelligence. The Finance Minister has made no such endorsement through any official platform. This incident highlights a concerning trend of scammers using AI-generated videos to create misleading and seemingly legitimate advertisements to deceive the public.

Claim:
A viral video falsely claims that the Finance Minister of India, Smt. Nirmala Sitharaman, is endorsing an investment platform, promoting it as a secure and highly profitable scheme for Indian citizens. The video alleges that individuals can start with an investment of ₹22,000 and earn up to ₹25 lakh per month as guaranteed daily income.

Fact check:
A reverse image search on key frames of the viral video led us to the original YouTube clip, in which the Finance Minister of India delivers a speech at a webinar on 'Regulatory, Investment and EODB reforms'. Further research found nothing related to the viral investment scheme anywhere in the original video.
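The first step of such a check, pulling representative key frames out of a clip before running a reverse image search, can be sketched as follows. This is a minimal illustration on synthetic frames: the `average_hash` perceptual hash and the scene-change threshold are simplifications of our own, and a real pipeline would decode the actual video with a library such as OpenCV or ffmpeg.

```python
import numpy as np

def average_hash(frame, size=8):
    """Downscale a grayscale frame to size x size by block-averaging,
    then threshold against the mean to get a 64-bit perceptual hash."""
    h, w = frame.shape
    blocks = frame[: h - h % size, : w - w % size]
    blocks = blocks.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(a, b):
    """Number of differing hash bits between two frames."""
    return int(np.count_nonzero(a != b))

def key_frames(frames, threshold=5):
    """Keep only frames whose hash differs markedly from the last kept frame."""
    kept = [0]
    last = average_hash(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        h = average_hash(f)
        if hamming(h, last) > threshold:
            kept.append(i)
            last = h
    return kept

# Synthetic 64x64 "video": a bright square that jumps position mid-clip,
# simulating a scene change at frame 3.
rng = np.random.default_rng(0)
frames = []
for i in range(6):
    f = rng.normal(100, 5, (64, 64))
    x0 = 8 if i < 3 else 40
    f[x0 : x0 + 16, x0 : x0 + 16] += 100
    frames.append(f)

print(key_frames(frames))  # [0, 3] — one key frame per distinct scene
```

Each extracted key frame can then be fed to a reverse image search engine to locate the source footage.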
In the manipulated video, AI-generated audio and scripted text have been injected into the original footage to make it appear as if she has approved an investment platform.

Deepfakes tend to look relatively realistic in their facial movement; however, a close look reveals mismatched lip-syncing and unnatural visual transitions, both of which are present in this video.


Also, there is no acknowledgment of any such endorsement on any legitimate government website or from any credible news outlet. The video is a fabricated piece of misinformation created to scam viewers by leveraging the image of a trusted public figure.
Conclusion:
The viral video showing the Finance Minister of India, Smt. Nirmala Sitharaman, promoting an investment platform is fake and AI-generated. This is a clear case of deepfake misuse aimed at misleading the public and luring individuals into fraudulent schemes. Citizens are advised to exercise caution, verify any such claims through official government channels, and refrain from clicking on unknown investment links circulating on social media.
- Claim: Nirmala Sitharaman promoted an investment app in a viral video.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Social media is the new platform for free speech and the expression of opinions. News often breaks on social media first, and political parties use these platforms to campaign during elections. The hashtag (#) is the new weapon: a powerful hashtag goes a long way in making an impact on society, even at a global level. Various hashtags have gained popularity in recent years, such as #blacklivesmatter, #metoo, #pride, and #cybersecurity, and have been influential in spreading awareness of social issues and taboos, helping to dislodge the latter from multiple cultures. Social media is strengthened by influencers, famous personalities with massive followings who create regular content that users consume and share with their friends. Social media is all about the message and its speed, and issues like misinformation and disinformation are widespread on nearly all platforms, so influencers play a key role in ensuring that the content they post complies with community and privacy guidelines.
The Know-How
The Department of Consumer Affairs, under the Ministry of Consumer Affairs, Food and Public Distribution, has released a guide, ‘Endorsements Know-hows!’, for celebrities, influencers, and virtual influencers on social media platforms. The guide aims to ensure that individuals do not mislead their audiences when endorsing products or services and that they comply with the Consumer Protection Act and any associated rules or guidelines. Advertisements are no longer limited to traditional media like print, television, or radio. With the increasing reach of digital platforms and social media such as Facebook, Twitter, and Instagram, there has been a rise in the influence of celebrities, social media influencers, and virtual influencers, and with it an increased risk of consumers being misled by advertisements and unfair trade practices on these platforms. Endorsements must be made in simple, clear language, using terms such as “advertisement,” “sponsored,” or “paid promotion.” Influencers should not endorse any product or service on which they have not done due diligence, or which they have not personally used or experienced. The Act establishes protections for consumers against unfair trade practices and misleading advertisements. On 9th June 2022, the Department of Consumer Affairs published the Guidelines for Prevention of Misleading Advertisements and Endorsements for Misleading Advertisements, 2022. These guidelines outline the criteria for valid advertisements and the responsibilities of manufacturers, service providers, advertisers, and advertising agencies; they also cover celebrities and endorsers, stating that misleading advertisements in any form, format, or medium are prohibited by law.
The guidelines apply to social media influencers as well as virtual avatars promoting products and services online. Disclosures should be easy to notice in post descriptions, where hashtags or links usually appear, and prominent enough to be noticeable within the content itself.
Changes Expected
The new guidelines will bring uniformity to social media content with respect to privacy and the opinions of different people. The primary issue being addressed is misinformation, which peaked during the Covid-19 pandemic and affected millions of people worldwide. Digital literacy and digital etiquette are fundamental parts of social media ethics, and influencers and celebrities can go a long way in spreading awareness of them among ordinary users. The growing threats of cybercrime and exploitation in cyberspace can be mitigated through effective awareness and education among the youth and vulnerable populations, which influencers are well placed to provide; it is time influencers understood their responsibility in leading the masses online and helped create a healthy, secure cyber ecosystem. Influencers who fail to follow the guidelines will be liable for a fine of up to Rs 10 lakh; for repeat offenders, the penalty can go up to Rs 50 lakh.
Conclusion
The Indian social media influencer market was worth $157 million in 2022 and could reach as much as $345 million by 2025. The Advertising Standards Council of India (ASCI), the Indian advertising industry's self-regulatory body, has shared that influencer violations comprise almost 30% of the ads it takes up, so this legal backing for disclosure requirements is a welcome step. The Ministry of Consumer Affairs has been in touch with ASCI to review the various global guidelines on influencers. The social media guidelines of California and San Francisco share the same basis, and guidelines inspired by different jurisdictions will allow users and influencers to understand the global perspective and work towards securing the bigger picture. Cyberspace has no geographical boundaries or limitations; now is the time to think beyond conventional borders and contribute towards securing and safeguarding global cyberspace.

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration have their risks. Consider this response from Google’s AI chatbot, Gemini, to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they’re dangerous. They underscore the urgent need to address the risks of unrestrained AI reliance.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fueled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread misinformation or cause real harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
Nonsensical AI responses range from harmless errors, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
AI models sometimes produce outputs that are not grounded in their training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon as an “AI hallucination”. Hallucinations can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses.
A significant source of hallucination is bias in the input a model receives. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect those biases. These models are also vulnerable to adversarial attacks, in which bad actors manipulate a model's output by tweaking the input data in a subtle manner.
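To make the adversarial-attack idea concrete, here is a minimal, self-contained sketch (a hypothetical toy example, not drawn from any incident above): a one-step FGSM-style perturbation against a tiny logistic-regression "classifier" with hand-picked weights, showing how a small, signed tweak to the input lowers the model's confidence in the correct label.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability the toy model assigns to the positive class."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """Fast Gradient Sign Method: nudge each input feature in the
    direction that increases the model's log-loss for true label y."""
    p = predict(w, b, x)
    grad_x = (p - y) * w          # gradient of the log-loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# Hand-picked toy weights and a clean input with true label 1.
w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([0.2, -0.4, 0.1])
y = 1.0

clean_score = predict(w, b, x)
x_adv = fgsm_perturb(w, b, x, y, eps=0.3)
adv_score = predict(w, b, x_adv)

# The subtle tweak lowers the model's confidence in the true label.
print(adv_score < clean_score)  # True
```

Real attacks target deep networks and compute the input gradient via automatic differentiation, but the mechanism is the same: follow the gradient of the loss with respect to the input rather than the weights.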
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards with a risk-based and transparent approach. The U.S. emphasises ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards. This reflects the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin of error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability, creating an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensuring users are informed about AI's capabilities and limitations; transparent communication is key.
- Implementing regular audits and rigorous quality checks to maintain high standards and prevent lapses.
- Establishing robust liability mechanisms to address harms caused by AI-generated misinformation, fostering trust and accountability.
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth of AI offers immense opportunities, but development must proceed responsibly. Overregulation can stifle innovation; lax oversight, on the other hand, could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential, and collaboration between stakeholders such as governments, academia, and the private sector is important: together they can establish guidelines, promote transparency, and create liability mechanisms. Regular audits and user education can build trust in AI systems. Furthermore, policymakers need to prioritise user safety and trust without hindering creativity when making regulatory policy.
By fostering ethical AI development while enabling innovation, we can create a future that benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21

Introduction
The Department of Telecommunications on 28th October 2024 notified an amendment to the Flight and Maritime Connectivity Rules, 2018 (FMCR 2018).
Rule 9 of the principal rules in FMCR 2018 stated:
“Restrictions–(1) The IFMC service provider shall provide the operation of mobile communication services in aircraft at minimum height of 3000 meters in Indian airspace to avoid interference with terrestrial mobile networks. (2) Internet services through Wi-Fi in aircraft shall be made available when electronic devices are permitted to be used only in airplane mode.”
In 2022, an amendment was made to the attached form in the Rules for obtaining authorisation to provide IFMC services.
Subsequently, the 2024 amendment substitutes sub-rule (2), namely:
“(2) Notwithstanding the minimum height in Indian airspace referred to in sub-rule (1), internet services through Wi-Fi in aircraft shall be made available when electronic devices are permitted to be used in the aircraft.”
Highlights of the Amendment
These rules govern the use of Wi-Fi in airplanes and ships within or above India or Indian territorial waters through In Flight and Maritime Connectivity (IFMC) services provided by IFMC service providers responsible for establishing and maintaining them.
Airplanes are equipped with antennas, onboard servers, and routers to connect to signals received from ground towers via Direct Air-to-Ground Communications (DA2GC) or through satellites. The DA2GC system offers connectivity through various communication methods, supporting services like in-flight Internet access and mobile multimedia. Licensed In-Flight Mobile Connectivity (IFMC) providers must adhere to standards set by international organizations such as the International Telecommunications Union (ITU), the European Telecommunications Standards Institute (ETSI), and the Institute of Electrical and Electronics Engineers (IEEE), or by international forums like the 3rd Generation Partnership Project (3GPP) to offer In-Flight Connectivity. Providers using Indian or foreign satellite systems must obtain approval from the Department of Space.
The IFMC service provider must operate mobile communication services in aircraft at a minimum altitude of 3,000 meters within Indian airspace to prevent interference with terrestrial mobile networks. However, under the amendment, Wi-Fi access can be enabled at any point during the flight when device use is permitted, not just after the aircraft reaches 3,000 meters. This flexibility is intended to allow passengers to connect to Wi-Fi earlier in the flight. The amendment aims to ensure that passengers can access the internet while maintaining the safety standards critical to in-flight communication systems.
Implications
- Increased Data Security Needs: There will be a need for robust cybersecurity measures against potential threats and data breaches.
- Increased Costs: Airlines will have to bear the initial cost of installing antennae and onboard equipment. Since airfare pricing in India is market-driven and largely unregulated, these costs may find their way into ticket prices, making flights more expensive.
- Interference Management: Stakeholders can determine, and communicate to passengers, a framework for the conditions under which Wi-Fi must be switched off to avoid interference with terrestrial communication systems.
- Enhanced Connectivity Infrastructure: Airlines may need to invest in better in-flight connectivity infrastructure to handle increased network traffic as more passengers access Wi-Fi at lower altitudes and for longer durations.
Conclusion
The Flight and Maritime Connectivity (Amendment) Rules, 2024, enhance passenger convenience and align India with global standards for in-flight connectivity while complying with international safety protocols. Access to the internet during flights and at sea provides valuable real-time information, enhances safety, and offers access to health support during aviation and maritime operations. However, new challenges including the need for robust cybersecurity measures, cost implications for airlines and passengers, and management of interference with terrestrial networks will have to be addressed through a collaborative approach between airlines, IFMC providers, and regulatory authorities.
Sources
- https://dot.gov.in/sites/default/files/2018_12_17%20AS%20IFMC_2.pdf?download=1
- https://dot.gov.in/sites/default/files/Amendment%20dated%2004112024%20in%20flight%20and%20maritime%20connectivity%20rules%202018%20to%20IFMC%20Service%20Provider.pdf
- https://www.t-mobile.com/dialed-in/wireless/how-does-airplane-wifi-work
- https://tec.gov.in/public/pdf/Studypaper/DA2GC_Paper%2008-10-2020%20v2.pdf
- https://www.indiatoday.in/india/story/wifi-use-flights-no-longer-linked-altitude-now-subject-permission-2628118-2024-11-05
- https://pib.gov.in/Pressreleaseshare.aspx?PRID=1843408#:~:text=With%20the%20repeal%20of%20Air,issue%20directions%20to%20such%20airline.