#FactCheck: Fake video falsely claims FM Sitharaman endorsed investment scheme
Executive Summary:
A video circulating widely on Facebook claims that Union Finance Minister Nirmala Sitharaman endorsed a new government investment project. However, our research indicates that the video has been manipulated using AI and is being used to spread misinformation.

Claim:
The video suggests that Finance Minister Nirmala Sitharaman is endorsing an automated trading system that promises daily earnings of ₹15,00,000 on an initial investment of ₹21,000.

Fact Check:
To check the genuineness of the claim, we ran a keyword search for “Nirmala Sitharaman investment program” but found no such investment-related scheme. We also observed that the lip movements in the video appeared unnatural and did not align with the speech, leading us to suspect AI manipulation.
A reverse search of the video led us to a DD News live-stream of Sitharaman’s press conference after presenting the Union Budget on February 1, 2025. Sitharaman never mentioned any investment or trading platform during the press conference, confirming that the viral video was digitally altered. Technical analysis using Hive Moderation further found that the viral clip was manipulated using voice cloning.

Conclusion:
The viral video showing Union Finance Minister Nirmala Sitharaman endorsing a new government investment project is voice-cloned, manipulated, and false. This highlights the risk of online manipulation, making it crucial to verify news with credible sources before sharing it. With the growing risk of AI-generated misinformation, promoting media literacy is essential in the fight against false information.
- Claim: Fake video falsely claims FM Nirmala Sitharaman endorsed an investment scheme.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
China is on the verge of unveiling a new policy addressing how Artificial Intelligence (AI) influences employment. On January 27, 2026, the Ministry of Human Resources and Social Security (MOHRSS) announced it would publish a paper on AI's contribution to the labour and employment markets. The policy will include provisions to help impacted industries, expand assistance to young workers and graduates, and establish interdisciplinary training programmes to equip individuals with the skills needed in an AI-enabled economy. The authorities have stressed that AI does not eliminate jobs so much as change them, and that education will be needed to help employees adjust to the changes.
This announcement reflects a more proactive approach to AI-driven changes in labour, showing that China intends to pursue economic modernisation through AI while maintaining social stability. It also mirrors broader international concerns about the pace of automation and the need to rethink labour and training policy.
AI and the Changing Nature of Work
AI is transforming the content and nature of work across industries. AI systems enhance productivity in functions such as data processing, logistics, and customer service, while altering the tasks carried out by humans. Existing studies indicate that although AI can automate routine activities, it may also generate new occupations requiring complex thinking, AI oversight, and distinctly human skills such as empathy, creativity, and problem-solving.
This is the key nuance in China's policy framing. Authorities point out that AI does not necessarily result in mass unemployment; rather, it transforms jobs and requires workers to adapt to new task profiles. This perspective aligns with recent reports from global research organisations, which characterise AI's effects as transformational rather than simply destructive. For example, the World Economic Forum's Future of Jobs Report 2023 observes that technological change will create jobs that did not exist a decade ago, and that retraining and upskilling will be instrumental in accessing those opportunities.
Key Components of China’s Policy Response
China’s forthcoming policy is expected to focus on three main areas that address both current workforce needs and future readiness.
Support for Key Industries
The policy will offer targeted assistance to sectors where AI adoption is accelerating. Industries such as advanced manufacturing, high-tech services, and online logistics will receive specialised support to help companies use AI to complement human labour rather than simply replace it. By channelling resources to these growth areas, the Chinese government aims to balance industrial upgrading with employment stability.
Assistance for Youth and Graduates
Young people and recent graduates are entering a labour market that is changing rapidly. The policy aims to expand support services for this group through career counselling, internships, and training programmes aligned with evolving employer demands. A McKinsey Global Institute study suggests that young workers worldwide may face disproportionate disruption if training opportunities are scarce, making early-career support imperative.
Interdisciplinary Talent Development
The Chinese strategy focuses on interdisciplinary training that blends domain knowledge with AI and digital literacy. This reflects the recognition that hybrid skills will be required in the future. The Organisation for Economic Co-operation and Development suggests that workers who can navigate both the technical and non-technical elements of work will be better positioned in the AI age.
These components show that China’s strategy is not simply to protect existing jobs but to help workers transition to roles that leverage AI’s strengths.
Economy, Stability and Strategic Modernisation
The policy is an attempt to manage technological transition as part of broader economic planning. It indicates that the government regards AI as a structural change that can be anticipated and shaped by policy, rather than as an unpredictable external shock.
This contrasts with labour-market responses in some other countries, which have often been reactive, addressing job losses only after they materialise. China's initiative suggests a shift towards anticipating change rather than merely reacting to it.
Global Comparisons and Shared Challenges
Governments worldwide are testing options for adapting to AI's effects on work. The European Union is considering individual learning accounts and portable training benefits, which would help workers access reskilling opportunities throughout their careers. In the US, public-private partnerships are working to align workforce development with technological adoption.
China's strategy shares some of these components but stands out for its integration with national planning processes. By linking workforce policy to broader innovation and economic goals, China intends for AI adoption to serve the common good rather than deepen division.
Meanwhile, balancing labour supply with technological demand is a distinct challenge for countries with ageing populations and shrinking workforces. Policy timing and design are particularly significant in China, given its large labour force and ongoing demographic shifts.
Practical Challenges and Risks
The success of China’s emerging policy will depend on effective implementation. Several practical issues will require careful attention:
Ensuring Equitable Access to Training
China's labour force is diverse, spanning urban technology hubs and rural areas. Ensuring that upskilling opportunities reach workers across this spectrum will be paramount to prevent regional inequalities from widening. Global research on reskilling shows that rural and low-income groups often lack access to training even where programmes exist.
Aligning Training with Labour Demand
Upskilling programmes must be aligned with market requirements. Disconnected training risks producing skills that are obsolete or inapplicable in real work settings. Experience in emerging economies indicates that involving employers in training design improves learners' placement outcomes.
Private Sector Participation
Private companies will be essential in translating the policy into employment outcomes. Incentives for firms to invest in worker training, internships, and apprenticeships will help workers transition smoothly into AI-augmented roles.
A Model for AI Workforce Policy
China's policy can serve as an example for other countries seeking to balance technological advancement with labour market security. It acknowledges that AI's effect on employment is not only a technical or economic problem but also a social challenge. By foregrounding training, support, and coordinated action, China aims to create a future in which people are prepared for change rather than displaced by it.
This strategy aligns with the recommendations of international organisations such as the World Bank and the OECD, which emphasise lifelong learning, flexible labour markets, and proactive investment in human capital as central elements of future labour policy.
Conclusion
Artificial intelligence will continue to reshape work around the world. China’s forthcoming policy, which emphasises support, training and strategic integration of AI into labour markets, reflects a proactive and holistic view of technological transition. Other countries could benefit from studying this approach, especially in terms of linking workforce development with innovation goals.
By anticipating disruption and investing in people as well as technology, policymakers can help ensure that AI becomes a driver of shared economic opportunity rather than a source of exclusion. The balance between innovation and employment will shape not only economic outcomes but also social cohesion in the years ahead.

In the vast, interconnected cosmos of the internet, where knowledge and connectivity are celebrated as the twin suns of enlightenment, there lurk shadows of a more sinister nature. Here, in these darker corners, the innocence of childhood is not only exploited but also scarred, indelibly and forever. The production, distribution, and consumption of Child Sexual Abuse Material (CSAM) have surged to alarming levels globally, casting a long, ominous shadow over the digital landscape.
In response to this pressing issue, the National Human Rights Commission (NHRC) has unfurled a comprehensive four-part advisory, a beacon of hope aimed at combating CSAM and safeguarding the rights of children in this digital age. This advisory, dated 27 October 2023, is not merely a reaction to the rising tide of CSAM but a testament to the imperative need for constant vigilance in the realm of cyber peace.
The statistics paint a sobering picture. In 2021, more than 1,500 instances of publishing, storing, and transmitting CSAM were reported, shedding a harsh light on the scale of the problem. Even more alarming is the upward trend in cases reported in subsequent years. By 2023, a staggering 450,207 cases of CSAM had already been reported, marking a significant increase from the 204,056 and 163,633 cases reported in 2022 and 2021, respectively.
The Key Aspects of Advisory
The NHRC's advisory commences with a fundamental recommendation: replacing the term 'Child Pornography' with 'Child Sexual Abuse Material' (CSAM). This shift in language is not merely semantic; it underscores the gravity of the issue, emphasizing that this is not about pornography but about child abuse.
Moreover, the advisory calls for the term 'sexually explicit' to be clearly defined under Section 67B of the IT Act, 2000. This step is crucial for ensuring the prompt identification and removal of online CSAM: with a clear definition, law enforcement can act swiftly to remove such content from the internet.
The digital world knows no borders, and CSAM can easily cross jurisdictional lines. NHRC recognizes this challenge and proposes that laws be harmonized across jurisdictions through bilateral agreements. Moreover, it recommends pushing for the adoption of a UN draft Convention on 'Countering the Use of Information and Communications Technologies for Criminal Purposes' at the General Assembly.
One of the critical aspects of the advisory is the strengthening of law enforcement. NHRC advocates for the creation of Specialized State Police Units in every state and union territory to handle CSAM-related cases. The central government is expected to provide support, including grants, to set up and equip these units.
The NHRC further recommends establishing a Specialized Central Police Unit under the government of India's jurisdiction. This unit will focus on identifying and apprehending CSAM offenders and maintaining a repository of such content. Its role is not limited to law enforcement; it is expected to cooperate with investigative agencies, analyze patterns, and initiate the process for content takedown. This coordinated approach is designed to combat the problem effectively, both on the dark web and open web.
The role of internet intermediaries and social media platforms in controlling CSAM is undeniable. The NHRC advisory emphasizes that intermediaries must deploy technology, such as content moderation algorithms, to proactively detect and remove CSAM from their platforms. This places the onus on the platforms to be proactive in policing their content and ensuring the safety of their users.
New Developments
Platforms using end-to-end encryption services may be required to create additional protocols for monitoring the circulation of CSAM. Failure to do so may invite the withdrawal of the 'safe harbor' clause under Section 79 of the IT Act, 2000. This measure ensures that platforms using encryption technology are not inadvertently providing safe havens for those engaged in illegal activities.
NHRC's advisory extends beyond legal and law enforcement measures; it emphasizes the importance of awareness and sensitization at various levels. Schools, colleges, and institutions are called upon to educate students, parents, and teachers about the modus operandi of online child sexual abusers, the vulnerabilities of children on the internet, and the early signs of online child abuse.
To further enhance awareness, a cyber curriculum is proposed to be integrated into the education system. This curriculum will not only boost digital literacy but also educate students about relevant child care legislation, policies, and the legal consequences of violating them.
NHRC recognizes that survivors of CSAM need more than legal measures and prevention strategies. Survivors are recommended to receive support services and opportunities for rehabilitation through various means. Partnerships with civil society and other stakeholders play a vital role in this aspect. Moreover, psycho-social care centers are proposed to be established in every district to facilitate need-based support services and organization of stigma eradication programs.
NHRC's advisory is a resounding call to action, acknowledging the critical importance of protecting children from the perils of CSAM. By addressing legal gaps, strengthening law enforcement, regulating online platforms, and promoting awareness and support, the NHRC aims to create a safer digital environment for children.
Conclusion
In a world where the internet plays an increasingly central role in our lives, these recommendations are not just proactive but imperative. They underscore the collective responsibility of governments, law enforcement agencies, intermediaries, and society as a whole in safeguarding the rights and well-being of children in the digital age.
NHRC's advisory is a pivotal guide to a more secure and child-friendly digital world. By addressing the rising tide of CSAM and emphasizing the need for constant vigilance, NHRC reaffirms the critical role of organizations, governments, and individuals in ensuring cyber peace and child protection in the digital age. The active contribution of premier cyber-resilience organisations like the CyberPeace Foundation amplifies this collective action, highlighting the pivotal role think tanks play in forging a secure digital space.
References:
- https://www.hindustantimes.com/india-news/nhrc-issues-advisory-regarding-child-sexual-abuse-material-on-internet-101698473197792.html
- https://ssrana.in/articles/nhrcs-advisory-proliferation-of-child-sexual-abuse-material-csam/
- https://theprint.in/india/specialised-central-police-unit-use-of-technology-to-proactively-detect-csam-nhrc-advisory/1822223/

Introduction
Misinformation has been a significant concern in recent times, especially in the online information landscape. This past month, misinformation has been linked to the communal tensions that flared up in the North Tripura district. While the law enforcement agencies were quick to respond, misinformation about the law and order situation spread rapidly. Shri Amitabh Ranjan, Tripura’s Director General of Police, issued a public statement on Monday, 21st October 2024, clarifying that “The state's law and order situation has improved, and misinformation is being spread about it”. This instance is a classic example of how misinformation can hamper the delivery of good governance and the relationship between citizens and state mechanisms. Such misinformation undermines the efforts of law enforcement agencies striving to maintain peace, and distorted narratives can colour public opinion about the authorities and create cycles of misplaced distrust.
DGP's Statement
DGP Amitabh Ranjan, speaking at an event to commemorate Police Commemoration Day, stated that the state has recorded fewer crimes this year than in the previous 10 years. He emphasized that senior police officials respond promptly to any law and order issues and that additional forces have been deployed as necessary. Ranjan highlighted the peaceful celebration of Durga Puja as a testament to the effective law enforcement measures in place and the communal harmony they support.
Impact of Misinformation in communal settings
Misinformation in communal settings can cause anxiety, fear, and distrust among community members, leading to conflict. It also erodes public confidence in law enforcement and government institutions, hindering their ability to address and resolve tensions. Accurate, verified information is therefore essential to limit the harm misinformation causes.
Preventive Measures Against Misinformation
- Look for authenticated sources
In a digital landscape filled with information from many sources, it is essential to differentiate between credible and unreliable content. Authenticated sources are typically reputable organizations and officials. Users should rely on such sources, verify the origin of a claim, compare it against other credible sources for accuracy, and follow fact-checking practices.
- Exercise caution on social media information
Social media platforms can rapidly disseminate information, but they can also serve as breeding grounds for misinformation. The ease of sharing content can spread unverified claims, rumours, or outright falsehoods. Therefore, exercising caution when engaging with information on these platforms is crucial. Users must scrutinize headlines and images, especially since AI has made misleading images easier to produce. One should always read beyond the headline, check the context of the images used, and avoid forming split-second impressions. Users must engage in critical thinking and share informed opinions responsibly to promote discussion about the validity of shared content.
- Role of Awareness
Awareness about misinformation is essential for navigating the complexities of modern communication. People can make better decisions and help create a more informed society by understanding the strategies used to disseminate false information. Users need to become knowledgeable about typical misinformation tactics, hone their ability to critically assess online content, and verify the reliability of sources before forming opinions, making decisions, or sharing further.
Final words
By integrating these simple best practices into our daily lives, we can cultivate a more informed public, reduce the spread of online misinformation, and enhance critical thinking skills among peers and the larger digital community.
References
- https://www.theweek.in/wire-updates/national/2024/10/21/cal8-tr-dgp.html
- https://www.newindianexpress.com/nation/2024/Oct/21/tripura-dgp-says-misinformation-being-spread-about-states-law-and-order-situation
- https://indianexpress.com/article/north-east-india/tripura/police-inaction-tripura-dgp-amitabh-ranjan-sharp-decline-crime-rate-9632509/