#FactCheck - Deepfake Video Falsely Links Shah Rukh Khan to Rajpal Yadav Case
Executive Summary
A video featuring popular comedian Rajpal Yadav has recently gone viral on social media, claiming that he is currently lodged in Tihar Jail in connection with a loan default and cheque bounce case. In connection with this, another video showing Bollywood superstar Shah Rukh Khan is being widely shared online. In the viral clip, Khan is purportedly seen saying that he would help Rajpal Yadav get out of jail and also offer him a role in his upcoming film. However, research by CyberPeace found the viral video to be fake. The clip is a deepfake in which the audio has been manipulated using artificial intelligence. In the original video, Shah Rukh Khan is speaking about his life and personal experiences. Although several prominent Bollywood personalities have expressed support for Rajpal Yadav, the claims made in the viral video are misleading.
Claim
An Instagram user named “ayubeditz” shared the viral video on February 11, 2026, with the caption: “Rajpal Yadav bhai, stay strong, we are all with you — Shah Rukh Khan.” The link to the post and its archived version are provided below.

Fact Check
To verify the claim, we extracted key frames from the viral video and conducted a Google reverse image search. This led us to the original video uploaded on a YouTube channel titled “Locarno Film Festival” on August 11, 2024. According to the available information, Shah Rukh Khan was sharing insights about his life and career during a conversation with the festival’s Artistic Director, Giona A. Nazzaro. This raised strong suspicion that the viral video had been edited using AI.
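Key-frame matching of this kind typically relies on perceptual hashing, the family of techniques reverse image search tools commonly use to find near-duplicate images. The snippet below is a minimal illustrative sketch of one such technique, average hashing (aHash); the function names and the 8×8 grid are our own assumptions for illustration, not part of any particular search engine.

```python
# Illustrative sketch of perceptual "average hashing" (aHash), one technique
# used to match near-duplicate video frames. The 8x8 grid and function names
# are assumptions for illustration; a real pipeline would first downscale
# each extracted frame to an 8x8 grayscale image.

def average_hash(pixels):
    """pixels: an 8x8 list of grayscale values (0-255). Returns a 64-bit
    integer: each bit is 1 if the corresponding pixel is at or above the
    mean brightness of the frame."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Count of differing bits between two hashes; a small distance
    suggests the two frames show (nearly) the same image."""
    return bin(h1 ^ h2).count("1")
```

Frames whose hashes differ in only a few of the 64 bits are treated as near-duplicates, which is how a manipulated clip can be traced back to its original footage even after re-encoding or brightness changes.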

To further examine the authenticity of the audio, we analysed it using AI detection tools. The audio was first checked using Aurigin.ai, which indicated an 83 percent probability that the voice in the viral clip was AI-generated.

Conclusion
CyberPeace’s research confirmed that the claim associated with Shah Rukh Khan’s viral video is false. The video is a deepfake in which the audio has been altered using artificial intelligence. In the original footage, Khan was discussing his life and experiences, and he did not make any statement about helping Rajpal Yadav.

Introduction
Monopolies in any sector can significantly affect economic efficiency and, by extension, the market and the larger economy. Data monopolies hurt both small startups and large, established companies, and it is typically the biggest corporate players who hold the biggest data advantage. Google recently lost a major antitrust case filed by the U.S. Department of Justice, which focused on the company's search-engine dominance and its expensive partnerships to promote its products. The lawsuit accused Google of using its dominant position in the search engine market to maintain a monopoly, and the ruling has had a significant impact on consumers and the tech industry as a whole. This dominance allowed Google to raise prices on advertisers without consequence and to delay innovations and privacy features that consumers want when they search online.
Antitrust Allegations Against Google in the US and EU
In the case filed by the US Department of Justice, US District Judge Amit Mehta ruled that Google had acted as a monopolist. After a 10-week trial, Google lost the antitrust lawsuit, with the court finding that the tech giant held a monopoly in the web search and advertising markets. The lawsuit accused Google of using its dominant position in the search-engine market to elbow out rivals and maintain its monopoly, and the company's exclusive deals with handset makers were presented to the court as evidence. In the EU, the European Commission fined Google €1.49 billion in 2019 for breaching the bloc's antitrust rules.
The Impact of Big Tech Monopolies on the Digital Ecosystem and Beyond
- Big-tech companies collect vast amounts of personal data, raising concerns about how this data is used and protected. The concentration of data in the hands of a few companies can lead to privacy breaches and misuse of personal information.
- The dominance of a few tech giants in digital advertising markets can stifle competition, leading to higher prices for advertisers and fewer choices for consumers. This concentration also allows these companies to exert major control over what ads are shown and to whom.
- Big-tech platforms have substantial power over the dissemination of information. Their algorithms and content-moderation policies can influence public discourse and may spread misinformation, and the lack of competition leaves users with few alternatives offering different moderation policies.
- Default-placement deals entrench this dominance: in 2021, Google paid $26.3 billion to ensure its search engine remained the default on smartphones and browsers and to protect its dominant market share.
Regulatory Mechanisms in the Indian Context
In India, antitrust issues are governed by the Competition Act, 2002, with the Competition Commission of India (CCI) acting as the watchdog against monopolistic practices. In 2022, the CCI imposed a penalty of Rs 1,337.76 crore on Google for abusing its dominant position in multiple markets within the Android mobile device ecosystem. The Draft Digital Competition Bill, 2024, has been proposed as a legislative reform to regulate a wide range of digital services, including online search engines, social networking platforms, video-sharing sites, interpersonal communication services, operating systems, web browsers, cloud services, advertising services, and online intermediation services. The bill aims to promote competition and fairness in the digital market by addressing anti-competitive practices and abuses of dominant position in the digital business space.
Conclusion
Big-tech companies are increasingly under scrutiny from regulators due to concerns over their monopolistic practices, data privacy issues, and the immense influence on markets and public discourse. The U.S. Department of Justice's victory against Google and the European Commission's hefty fines are indicators of a global paradigm shift towards more aggressive regulation to foster competition and protect consumer interests. The combined efforts of regulators across different jurisdictions underscore the recognition that monopolistic practices by such big tech giants can stifle innovation, harm consumers’ interests, and create barriers for new entrants, thus necessitating strong legal frameworks to ensure fair and contestable markets. Overall, the increasing regulatory pressure signifies a pivotal moment for big-tech companies, as they face the challenge of adapting to a more tightly controlled environment where their market dominance and business practices are under intense examination.
References
- https://www.livemint.com/technology/tech-news/googles-future-siege-u-s-court-explores-breaking-up-company-after-landmark-ruling-11723648047735.html
- https://www.thehindu.com/sci-tech/technology/what-is-the-google-monopoly-antitrust-case-and-how-does-it-affect-consumers/article68495551.ece
- https://indianexpress.com/article/business/google-has-an-illegal-monopoly-on-search-us-judge-finds-9497318/

Introduction
India is undergoing a major change as Artificial Intelligence (AI) is introduced across government, business, and the digital economy, in areas such as governance, healthcare, finance, and infrastructure. The scale and pace of AI implementation are expected to deliver efficiency gains, innovation in products and services, and economic growth; however, the growth of AI also raises serious ethical, legal, and societal concerns. Issues such as algorithmic bias, a lack of transparency in automated decision-making, data protection risks, and unclear frameworks for determining accountability for AI-related actions have pushed the question of responsible AI governance to the forefront of public policy discourse.
India aspires to become an AI superpower and a global technology leader. As such, it carries a dual responsibility: to fuel innovation without discounting democratic ideals, human rights, and public trust. UNESCO's AI Readiness Assessment Methodology (RAM) is a global tool for AI governance, created to provide concrete policy guidance on making ethical AI a reality. The India AI RAM Report is set to be formally released by UNESCO during the India AI Impact Summit 2026 in New Delhi, a major milestone in India's developing AI governance journey.
What is UNESCO’s AI Readiness Assessment Methodology (RAM)?
UNESCO has created a simple yet effective tool, the AI Readiness Assessment Methodology (RAM), to help governments determine how well prepared they are to develop, deploy and manage Artificial Intelligence in an ethical, responsible and trustworthy manner. RAM provides a framework for diagnosing and self-assessing a country's capacity to govern AI on the basis of evidence-based decision-making; it is not a regulatory framework or a ranking system.
RAM's central goal is to assess a country's overall readiness to govern AI across five dimensions: legal and regulatory, social and cultural, economic, scientific and educational, and technological and infrastructural. In doing so, RAM examines how institutions function, their level of maturity, and the extent to which various policies align with one another, giving governments an overview of strengths, weaknesses and priorities for reform.
Unlike other frameworks, RAM does not prescribe one-size-fits-all solutions; instead, it takes a context-sensitive approach to AI governance, recognising differing national realities, developmental priorities and socio-economic conditions. RAM converts the ethical principles established by UNESCO into practical actions that can guide countries from abstract commitments to concrete strategies for governing AI.
Key Dimensions Assessed Under RAM
UNESCO's AI Readiness Assessment Methodology (RAM) assesses a country's readiness to implement ethical Artificial Intelligence across five interconnected dimensions:
- Legal and regulatory: the laws, rules, and safeguards currently in place for AI.
- Social and cultural: public awareness of and trust in AI, how inclusive AI is for the people who use it, and AI's broader effects on society.
- Economic: innovation, industry participation, and the market's readiness for AI.
- Scientific and educational: a country's capacity to conduct serious scientific research, including activities that prepare people for employment in AI-related jobs.
- Technological and infrastructure: the availability of data, digital infrastructure, and computing capacity for AI projects.
Together, these five dimensions cover the full scope of an AI readiness evaluation, ensuring that AI governance is treated as more than a technical issue: it is a function of a country's capacity to make laws, shape policy, and maintain social equity in relation to all forms of Artificial Intelligence.
Methodology and Nationwide Consultative Process
RAM combines qualitative and quantitative indicators to build an overall picture of a nation's readiness for AI. It is designed flexibly, so that nations can tailor their assessments to their own institutional capabilities and development agendas.
Normally, RAM is implemented by an independent expert assisted by a national team of various stakeholders. In India, the RAM process was conducted as a national consultation in which representatives from across society (government, the private sector, academia, civil society, and young people) participated in shaping the assessment. This ensured that many different viewpoints were represented, increasing the legitimacy of the assessment results and their relevance to the country. The consultation process also yields policy recommendations grounded in real governing situations and sector-specific challenges.
Institutional Partnerships Behind India’s RAM
The India RAM initiative was developed by the UNESCO South Asia Regional Office in partnership with the IndiaAI Mission and the Indian Ministry of Electronics and Information Technology, and implemented by Ikigai Law with support from The Patrick J. McGovern Foundation. This collaboration underscores the importance of partnership in developing governance frameworks for Artificial Intelligence (AI). The RAM process draws together UNESCO's evidence-based international norm-setting, government policy under national political leadership, independent legal-technical implementation, and input from civil society, all aimed at strengthening India's ability to establish and implement a coherent and inclusive AI governance framework.
Significance of the India AI RAM Report and Its Launch
The India AI RAM Report provides a complete initial assessment of India’s AI ecosystem and includes key insights into AI readiness, governance strengths/weaknesses, and potential opportunities across multiple sectors. It identifies priority areas to promote a responsible and trustworthy AI ecosystem in India.
The report is slated for official release at the India AI Impact Summit (February 16, 2026, at Bharat Mandapam, New Delhi). Mr. Abhishek Venkateswaran, National Project Officer for Social and Human Sciences at UNESCO South Asia, has offered additional insight into the consultative process and the significance of this launch for India's future AI policy path.
Policy Relevance and the Road Ahead
The RAM framework gives the government a structure and roadmap for developing and implementing AI governance, reinforcing its alignment with the IndiaAI Mission, which counts safety and trust in AI among its pillars. However, the assessment's findings will not automatically translate into institutional reform, sector-specific guidelines, or a mechanism for continued evaluation. Implementation will require strong, sustained commitment from political leaders and from the institutions involved in the reforms that RAM makes possible.
Conclusion
UNESCO's AI Readiness Assessment Methodology (AI-RAM) can greatly advance the way India approaches the governance of artificial intelligence. By focusing on readiness, responsibility and inclusivity, the AI-RAM will enable India to participate actively in global discussions on the ethical use of AI. By adopting this methodology, which provides a platform for establishing global standards for AI development, India is positioned to take on a leadership role in the world. The real benefit of the AI-RAM will come from policy measures that ensure future AI development in India is human-centred, trustworthy and aligned with democratic values.
References
- https://icaire.org/files/UNESCORam-en.pdf
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2134492&reg=3&lang=2
- https://www.facebook.com/unesconewdelhi/videos/unesco-is-set-to-launch-the-india-ai-readiness-assessment-methodology-ram-report/25955631820699516/
- https://www.unesco.org/ethics-ai/en/ram
- https://www.hindustantimes.com/india-news/unesco-meity-launch-exercise-to-assess-india-s-ai-readiness-101749188341803.html#
- https://www.manoramayearbook.in/current-affairs/india/2025/06/09/unesco-ai-readiness-assessment-methodology-ram.html
Introduction
Social media platforms have begun to shape the public understanding of history in today's digital landscape. You may have encountered videos, images, and posts that claim to reveal an untold story about our past. For example, you might have seen a post featuring a painted or black-and-white image of a princess, labelled "the most beautiful princess of Rajasthan, who fought countless wars but has been erased from history." Such emotionally charged narratives spread quickly, without any academic scrutiny or citation, and many who encounter them believe them to be true.
Such unverified content may look harmless, but it contributes profoundly to the systematic distortion of historical information. This misinformation recurs on feeds and becomes embedded in popular memory. It misguides public discourse and undermines scholarly research on the relevant topics; at times, it also fuels communal outrage and social tension. It is time to recognise that protecting the integrity of our cultural and historical narratives is not only an academic concern but a legal and institutional responsibility. This is where the role of the Ministry of Culture becomes critical.
Pseudohistorical News Information in India
Fake news and misinformation are frequently disseminated via images, pictures, and videos on messaging applications, a phenomenon derisively referred to as "WhatsApp University". WhatsApp has become India's favourite method of communication, but users must stay conscious of what they consume through forwarded messages. Academic historians strive to understand the past in its own context, distinct from the present, whereas pseudo-historians manipulate history to serve political agendas. Unfortunately, this wave of pseudo-history is expanding rapidly, with "WhatsApp University" playing a significant role in amplifying its spread, leading to an increase in fake historical news and paid journalism. Unlike pseudo-history, academic history is produced by professional historians in academic contexts, adhering to strict disciplinary standards, including peer review and expert examination of claims, justifications, and publications.
How to Identify Pseudo-Historic Misinformation
1. Lack of Credible Sources: There is a lack of reliable primary and secondary sources. Instead, pseudohistorical works depend on hearsay and unreliable eyewitness accounts.
2. Selective Use of Evidence: Misinformative posts present only the facts that support their argument and minimise those that contradict their assertions.
3. Incorporation of Conspiracy Theories: They often include conspiracy theories postulating secret groups, suppressed knowledge, or sinister powers influencing historical events. Such hypotheses frequently lack any supporting data.
4. Extravagant Claims: Pseudo-historic tales sometimes present unbelievable assertions about historic persons or events.
5. Lack of Peer Review: Such work is generally never published on authentic academic platforms; it circulates instead on social media platforms like Instagram and Facebook, since its authors do not submit it for academic publication. Authentic historical research, by contrast, is examined by subject-matter authorities.
6. Neglect of Established Historiographical Methods: Such posts lack knowledge of a recognised methodology and procedures, like the critical study of sources.
7. Ideologically Driven Narratives: Political, communal, ideological, and personal opinions are prioritised in such posts. The author has a prior goal, instead of finding the truth.
8. Exploitation of Gaps in the Historical Record: Pseudo-historians often use missing or unclear parts of history to suggest that regular historians are hiding important secrets. They make the story sound more mysterious than it is.
9. Rejection of Scholarly Consensus: Pseudo-historians often reject the views of experts and historians, choosing instead to believe and promote fringe ideas of their own.
10. Emphasis on Sensationalism: Pseudo-historical works may put more emphasis on sensationalism than academic rigour to pique public interest rather than offer a fair and thorough account of the history.
Legal and Institutional Responsibility
Public opinion is the heart of democracy, and it should not be distorted by misinformation or disinformation; vested interests cannot be allowed to sabotage it. When it concerns academic matters in particular, content should not be shared unverified and without fact-checking. Such unverified claims can be called out, and action taken, only if the authorities step in. In India, the Indian Council of Historical Research (ICHR) oversees historical scholarship; as per its official website, its stated aim is to "take all such measures as may be found necessary from time to time to promote historical research and its utilisation in the country". However, it is now essential to modernise the functioning of the ICHR to meet the demands of the digital era. The concerned authorities can run campaigns and awareness programmes questioning the validity and research behind such misinformative posts. Just as fact-checking mechanisms exist for news, there must also be an institutional push to fact-check and regulate historical content online. Authorities could take the following measures to counter such misinformation online:
- Launch a nationwide awareness campaign about historical misinformation.
- Work with scholars, historians, and digital platforms to promote verified content.
- Encourage social media platforms to introduce fact-check labels for historical posts.
- Consider legal frameworks that penalise the deliberate spread of false historical narratives.
History is part of our national heritage, and preserving its accuracy is a matter of public interest. Misinformation and pseudo-history together mislead the public and weaken the foundation of shared cultural identity. In this digital era, where false narratives spread rapidly, it is important to promote critical thinking, encourage responsible academic work, and ensure that the public has access to accurate, well-researched historical information. Protecting the integrity of history is not just the work of historians; it is a collective responsibility that serves the future of our democracy.
References:
- https://kuey.net/index.php/kuey/article/view/4091
- https://www.drishtiias.com/daily-news-editorials/social-media-and-the-menace-of-false-information