#FactCheck - Viral video of Defence Minister Rajnath Singh supporting Israeli attacks on Iran is a deepfake
Executive Summary:
A video of India’s Defence Minister Rajnath Singh is going viral on social media. The post claims that Rajnath Singh is openly supporting Israeli-American attacks against Iran. In the video, he can allegedly be heard saying that Prime Minister Narendra Modi had visited Israel before the war began and warned Tehran that disturbing peace would have serious consequences.
Research by CyberPeace found that the viral video is a deepfake created using Artificial Intelligence (AI). Rajnath Singh has not made any such statement about Iran or the Israel-US conflict.
Claim
A Facebook user “Sheikh Sadeque Ali” shared the video on March 2, 2026. The caption of the post reads, “Indian Defence Minister Rajnath Singh is supporting Israel’s attack on Iran. This clearly shows that India supports the killing of Muslims.”
In the viral video, Rajnath Singh appears to say in English: “Prime Minister Modi’s visit to Israel before the attack on Iran reflects India’s solidarity with its strategic partner… He warned Tehran that hostile actions would have serious consequences for regional peace.”

Fact Check:
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search. During the research, we found the original video on Rajnath Singh’s official YouTube channel, uploaded on November 23, 2025. In the original video, Rajnath Singh was addressing a Sindhi community conference in Delhi, speaking about Sindhi culture and the history of Partition. He did not mention Israel, Iran, or any Middle East conflict during the entire program.
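The keyframe step described above can be sketched in code. This is a minimal illustration, not the tool actually used in the investigation: the file name, the one-second sampling interval, and the use of the `ffmpeg` command-line tool are all assumptions made for the example.

```python
import subprocess

def keyframe_timestamps(duration_s: float, every_s: float = 1.0) -> list[float]:
    """Evenly spaced timestamps (in seconds) at which to grab frames."""
    if every_s <= 0:
        raise ValueError("sampling interval must be positive")
    n = int(duration_s // every_s) + 1
    return [round(i * every_s, 3) for i in range(n)]

def extract_frame(video_path: str, ts: float, out_png: str) -> None:
    """Save a single frame at timestamp `ts` using ffmpeg (must be on PATH)."""
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(ts), "-i", video_path,
         "-frames:v", "1", out_png],
        check=True, capture_output=True,
    )

# Hypothetical usage: each saved frame would then be submitted to a
# reverse image search engine, manually or through its API.
# for t in keyframe_timestamps(30.0):
#     extract_frame("viral_clip.mp4", t, f"frame_{t:.1f}.png")
```

Sampling at fixed intervals is the simplest approach; real investigations may instead pick frames at scene changes so that each search query shows a distinct shot.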

Upon closely examining the viral video, technical inconsistencies between the lip movements and the audio (lip-sync discrepancies) can be observed, strongly indicating that the video was generated using AI. To verify this, we analysed the clip using several AI-detection tools; the tool Hive Moderation indicated a 99% probability that the video is AI-generated.

Conclusion:
Our research found that the viral video of Rajnath Singh is a deepfake. He has not made any statement supporting Israel or opposing Iran. The original video is from a Sindhi community event in Delhi, which has been digitally altered using AI to spread a misleading claim.

Introduction
India is undergoing a major transformation as Artificial Intelligence (AI) is introduced across government, business, and the digital economy, in areas such as governance, healthcare, finance, and infrastructure. The scale and pace of AI adoption are expected to deliver efficiency gains, innovation in products and services, and economic growth; however, the growth of AI also raises serious ethical, legal, and societal concerns. Issues such as algorithmic bias, a lack of transparency in automated decision-making, data protection risks, and unclear frameworks for AI accountability bring the question of how to govern AI responsibly to the forefront of public policy discourse.
India aspires to become an AI superpower and a technology leader on the world stage. As such, it has a dual responsibility: to fuel innovation without discounting democratic ideals, human rights, and public trust. UNESCO’s AI Readiness Assessment Methodology (RAM) is a global tool for AI governance, created to provide concrete policy guidance on how to make ethical AI a reality. The India AI RAM Report is set to be formally released by UNESCO during the India AI Impact Summit 2026 in New Delhi, a major milestone in India’s developing AI governance journey.
What is UNESCO’s AI Readiness Assessment Methodology (RAM)?
UNESCO has created a simple yet effective tool, the AI Readiness Assessment Methodology (RAM), to help governments determine how well they are prepared to develop, deploy, and manage Artificial Intelligence ethically and responsibly. RAM provides a framework for diagnosing and self-assessing the state of a country’s ability to govern AI on the basis of evidence-based decision-making, rather than serving as a regulatory framework or ranking system.
The central goal of RAM is to assess a country’s overall readiness to govern AI across five dimensions: legal, social and cultural, economic, scientific and educational, and technological. In doing so, RAM examines how institutions function, their maturity level, and the extent to which various policies align with one another, thereby giving governments an overview of strengths, weaknesses, and priorities for reform.
Unlike other frameworks, RAM does not prescribe any one-size-fits-all solutions; instead, it uses a context sensitive approach when implementing the concepts of AI governance due to differing national realities, developmental priorities and social/economic conditions. Using the ethical principles established by UNESCO, RAM converts these principles into practical actions that can guide countries in their transition from abstract commitments to concrete strategies for governing AI.
Key Dimensions Assessed Under RAM
UNESCO's AI Readiness Assessment Methodology (RAM) assesses a country's readiness for ethical Artificial Intelligence through five interconnected dimensions. These include the legal and regulatory dimension (the laws, rules, and safeguards currently in place for AI), the social and cultural dimension (public awareness of AI, trust in AI, the inclusiveness of AI for all who use it, and AI's broader effects on society), and the economic dimension (innovation, industry participation, and market readiness for AI).
The framework also covers the scientific and educational dimension (a country’s capacity for serious scientific research and for preparing people for AI-related employment) and the technological and infrastructure dimension (the availability of data, digital infrastructure, and computing capacity for AI projects).
Together, these five dimensions cover the full scope of an AI readiness evaluation, ensuring that AI governance is treated not merely as a technical issue but as a measure of a country’s capacity to make laws, craft policy, and maintain social equity in relation to all forms of Artificial Intelligence.
Methodology and Nationwide Consultative Process
RAM combines qualitative and quantitative measures to build an overall understanding of how ready a nation is for AI. It is designed with flexibility so that nations can shape their assessments around their own institutional capabilities and development agendas.
Normally, RAM is implemented by an independent expert assisted by a national team of diverse stakeholders. In India, the RAM process was conducted as a national consultation in which representatives from across all sectors of society (government, the private sector, academia, civil society, and young people) participated in shaping the assessment. This ensured that many different viewpoints were represented, increasing the legitimacy of the results and their relevance to the country. The consultation also yielded policy recommendations grounded in real governance challenges specific to different sectors.
Institutional Partnerships Behind India’s RAM
The India RAM initiative was developed by the UNESCO South Asia Regional Office in partnership with the IndiaAI Mission and the Ministry of Electronics and Information Technology, and implemented by Ikigai Law with the support of The Patrick J. McGovern Foundation. This arrangement demonstrates the importance of partnership in developing governance frameworks for Artificial Intelligence (AI). The RAM process brings together evidence-based international norm-setting, government policy under national political leadership, independent legal-technical implementation, and civil society input, all with the goal of strengthening India’s ability to establish and implement a coherent and inclusive AI governance framework.
Significance of the India AI RAM Report and Its Launch
The India AI RAM Report provides a complete initial assessment of India’s AI ecosystem and includes key insights into AI readiness, governance strengths/weaknesses, and potential opportunities across multiple sectors. It identifies priority areas to promote a responsible and trustworthy AI ecosystem in India.
The report will be officially released during the India AI Impact Summit (February 16, 2026, at Bharat Mandapam, New Delhi). Mr. Abhishek Venkateswaran (National Project Officer, Social and Human Sciences, UNESCO South Asia) has offered additional insight into the consultative process and the significance of this launch for India’s future AI policy path.
Policy Relevance and the Road Ahead
The RAM framework gives the government a structure and roadmap for developing and implementing AI governance, reinforcing its alignment with the IndiaAI Mission, which counts safety and trust in AI among its pillars. However, the findings of the assessment will not automatically translate into institutional reform, sector-specific guidelines, or a mechanism for continued evaluation. Implementation will require strong, sustained commitment from political leaders and from the institutions involved in the reforms that RAM makes possible.
Conclusion
UNESCO’s AI Readiness Assessment Methodology (RAM) can greatly advance India’s approach to the governance of artificial intelligence (AI). By focusing on readiness, responsibility, and inclusivity, RAM will enable India to participate actively in global discussions on the ethical use of AI. By adopting the methodology, India is positioned to take on a leadership role in establishing global standards for AI development. The real benefit of RAM, however, will come from policy measures that ensure future AI development in India is human-centred, trustworthy, and aligned with democratic values.
References
- https://icaire.org/files/UNESCORam-en.pdf
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2134492&reg=3&lang=2
- https://www.facebook.com/unesconewdelhi/videos/unesco-is-set-to-launch-the-india-ai-readiness-assessment-methodology-ram-report/25955631820699516/
- https://www.unesco.org/ethics-ai/en/ram
- https://www.hindustantimes.com/india-news/unesco-meity-launch-exercise-to-assess-india-s-ai-readiness-101749188341803.html#
- https://www.manoramayearbook.in/current-affairs/india/2025/06/09/unesco-ai-readiness-assessment-methodology-ram.html
Introduction
The fast-paced development of technology and the wider use of social media platforms have led to the rapid dissemination of misinformation, characterised by wide diffusion, fast propagation, broad influence, and deep impact. Social media algorithms and their decisions are often perceived as a black box, making it impossible for users to understand and recognise how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, further reinforcing the misinformation cycle and making its spread harder to control within vast, interconnected networks. Algorithms judge content primarily by one metric: user engagement. Based on it, algorithms and search engines surface the items a user is most likely to enjoy. This process was originally created to cut through clutter and deliver the most relevant information; however, because of the viral nature of information and user interactions, it sometimes results in misinformation spreading widely and unchecked.
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximise user engagement, can inadvertently promote misinformation because emotionally charged content tends to trigger the strongest reactions, creating echo chambers and filter bubbles. These algorithms prioritise content based on user behaviour, which favours emotionally charged misinformation, and they privilege content with viral potential, allowing false or misleading material to spread faster than corrections or factual content.
Additionally, popular content is amplified by platforms, which spreads it faster by presenting it to more users. Fact-checking efforts are particularly hampered because, by the time erroneous claims are reported or corrected, they may already have gained widespread acceptance. Social media algorithms also find it difficult to distinguish real people from organised networks of troll farms or bots that propagate false information. This creates a vicious loop in which users are constantly exposed to inaccurate or misleading material, which strengthens their convictions and disseminates erroneous information through their networks.
Algorithms primarily aim to enhance user engagement by curating content that aligns with a user's previous behaviour and preferences. Sometimes this process creates "echo chambers," where individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. This curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it arduous for users to discern credible information from misinformation. Algorithms also feed a feedback loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences and make platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it also poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users are less likely to encounter fact-checks or corrective information.
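This dynamic can be illustrated with a toy ranking function. It is not any platform's actual algorithm; the weights and example posts are invented purely to show how a feed scored only on predicted engagement can surface outrage-bait above a sober correction.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int

def engagement_score(p: Post) -> int:
    # Illustrative weights: shares count most, since resharing is what
    # actually propagates content through the network.
    return p.likes + 3 * p.comments + 5 * p.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely by predicted engagement: there is no credibility signal.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured fact-check of a viral claim", likes=120, shares=10, comments=15),
    Post("Outrage-bait rumour", likes=90, shares=80, comments=60),
])
# The rumour outranks the fact-check despite having fewer likes,
# because it provokes far more shares and comments.
```

Because the scoring function never asks whether a post is true, any content that reliably provokes reactions is structurally favoured, which is the feedback loop described above.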
Moreover, the sheer size and complexity of today's social networks exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it, such as by inspecting messages or URLs for false information, can be computationally challenging and inefficient. The extensive amount of content shared daily means misinformation can propagate far quicker than it can be fact-checked or debunked.
Understanding how algorithms influence user behaviour is important to tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders work together to create a challenging environment in which misinformation thrives, underscoring the need for robust countermeasures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can establish mechanisms for users to appeal algorithmic decisions and ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps towards regulating misinformation. By fostering a more transparent and accountable ecosystem, regulation helps mitigate the negative effects of algorithmic amplification, thereby protecting the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023 explicitly prohibits the dissemination of misinformation on digital platforms. Intermediaries are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or prohibited content. This rule aims to ensure platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content will enable users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defences can empower individuals to question the sources of information and report misleading content effectively. Looking ahead, platforms should evolve toward more transparent, user-driven systems in which algorithms are optimised not just for engagement but also for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.
References:
- https://www.advancedsciencenews.com/misformation-spreads-like-a-nuclear-reaction-on-the-internet/
- https://www.niemanlab.org/2024/09/want-to-fight-misinformation-teach-people-how-algorithms-work/
- Press Release: Press Information Bureau (pib.gov.in)

Executive Summary:
A video clip circulating on social media allegedly shows the Hon’ble President of India, Smt. Droupadi Murmu, TV anchor Anjana Om Kashyap, and the Hon’ble Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, promoting a medicine for diabetes. A thorough investigation by the CyberPeace Research Team found the claim to be false. The video was digitally edited: original footage of these prominent figures was altered to falsely suggest their endorsement of the medication. Specific discrepancies in the lip movements and in the context of the clips indicated AI manipulation, and AI voice-detection tools also flagged the audio. Moreover, in their original footage the individuals featured were discussing unrelated topics. The claim that the video shows them endorsing a diabetes drug is therefore debunked; the video is an AI creation and does not reflect any genuine promotion.

Claims:
A video making the rounds on social media purports to show the Hon'ble President of India, Smt. Droupadi Murmu, TV anchor Anjana Om Kashyap, and the Hon'ble Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, endorsing a diabetes medicine.

Fact Check:
Upon receiving the post, we watched the video carefully and found clear discrepancies between the lip movements and the words being heard. The voice attributed to the Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, also sounded suspicious, indicating signs of fabrication. In the video, the Hon'ble President of India, Smt. Droupadi Murmu, can purportedly be heard endorsing a medicine that cured her diabetes. We then divided the video into keyframes and reverse-searched one of the frames, which led us to a video uploaded by Aaj Tak on its official YouTube channel.

In that video, which closely matched the viral clip, the on-screen courtesy line read "Sansad TV." Taking a cue from this, we ran keyword searches and found another video uploaded by the Sansad TV YouTube channel. That video contained no mention of any diabetes medicine; it was, in fact, the swearing-in ceremony of the Hon’ble President of India, Smt. Droupadi Murmu.

In the second part of the viral video, a man introduced as Dr. Abhinash Mishra was said to have invented the medicine that cures diabetes. We reverse-searched his image and landed on a CNBC news page where the same face was identified as Dr. Atul Gawande, a professor at the Harvard School of Public Health. We watched that video and found no sign of him endorsing or discussing any diabetes medicine he had invented.

We also extracted the audio from the viral video and analysed it using the AI audio-detection tool Eleven Labs, which found the audio very likely to have been created with an AI voice-generation tool, with a probability of 98%.

Hence, the claim made in the viral video is false and misleading. The video was digitally edited from different clips, and the audio was generated using an AI voice-creation tool to mislead netizens. It is worth noting that we have previously debunked similar voice-altered videos making bogus claims.
Conclusion:
In conclusion, the viral video claiming that the Hon'ble President of India, Smt. Droupadi Murmu, and the Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, promoted a diabetes medicine that cured their diabetes is false. Thorough investigation found that the video was digitally edited from different clips: the clip of the Hon'ble President was taken from the oath-taking ceremony of the 15th President of India, and the supposed doctor "Abhinash Mishra," whose video appeared on CNBC, is actually Dr. Atul Gawande, a professor at the Harvard School of Public Health. Online users must be careful with such posts and should verify them before sharing.
Claim: A video is being circulated on social media claiming to show distinguished individuals promoting a particular medicine for diabetes treatment.
Claimed on: Facebook
Fact Check: Fake & Misleading