#FactCheck - Deepfake Video Falsely Links Shah Rukh Khan to Rajpal Yadav Case
Executive Summary
A video featuring popular comedian Rajpal Yadav has recently gone viral on social media, claiming that he is currently lodged in Tihar Jail in connection with a loan default and cheque bounce case. In connection with this, another video showing Bollywood superstar Shah Rukh Khan is being widely shared online. In the viral clip, Khan is purportedly seen saying that he would help Rajpal Yadav get out of jail and also offer him a role in his upcoming film. However, research by CyberPeace found the viral video to be fake. The clip is a deepfake, in which the audio has been manipulated using artificial intelligence. In the original video, Shah Rukh Khan is speaking about his life and personal experiences. Although several prominent Bollywood personalities have expressed support for Rajpal Yadav, the claims made in the viral video are misleading.
Claim
An Instagram user named “ayubeditz” shared the viral video on February 11, 2026, with the caption: “Rajpal Yadav bhai, stay strong, we are all with you — Shah Rukh Khan.” The link to the post and its archived version are provided below.

Fact Check
To verify the claim, we extracted key frames from the viral video and conducted a Google reverse image search. This led us to the original video uploaded on a YouTube channel titled “Locarno Film Festival” on August 11, 2024. According to the available information, Shah Rukh Khan was sharing insights about his life and career during a conversation with the festival’s Artistic Director, Giona A. Nazzaro. This raised strong suspicion that the viral video had been edited using AI.
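As a rough illustration of the first step of this workflow, the sketch below pulls periodic frames out of a clip with OpenCV so that each saved image can be uploaded to a reverse image search. This is a minimal sketch, not the tooling actually used in the fact-check; the output file names and the one-second sampling interval are assumptions.

```python
# Minimal sketch: save roughly one frame per second from a video so the
# frames can be run through a reverse image search. Illustrative only;
# the article does not specify the exact tooling used.

def sampling_step(fps: float, every_n_seconds: float = 1.0) -> int:
    """How many frames to skip between saved keyframes (at least 1)."""
    return max(1, int(round((fps or 30.0) * every_n_seconds)))

def extract_keyframes(video_path: str, out_prefix: str = "frame") -> list[str]:
    import cv2  # pip install opencv-python (imported lazily here)

    cap = cv2.VideoCapture(video_path)
    step = sampling_step(cap.get(cv2.CAP_PROP_FPS))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream or unreadable file
            break
        if index % step == 0:
            name = f"{out_prefix}_{index:06d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved
```

Each saved JPEG can then be uploaded to Google reverse image search (or a similar service) to locate the original footage.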

To further examine the authenticity of the audio, we analysed it using AI detection tools. The audio was first checked using Aurigin.ai, which indicated an 83 percent probability that the voice in the viral clip was AI-generated.

Conclusion
CyberPeace’s research confirmed that the claim associated with Shah Rukh Khan’s viral video is false. The video is a deepfake in which the audio has been altered using artificial intelligence. In the original footage, Khan was discussing his life and experiences, and he did not make any statement about helping Rajpal Yadav.
Introduction
We live in a time where technological change is no longer slow or subtle. Robotics, automation, artificial intelligence, and digital systems are transforming the way we work, think, and even imagine the future. This is often celebrated as great progress. But a deeper question quietly waits behind the noise. Is every advancement truly an uplift when seen through the lens of scriptures, culture, and Indian philosophical thought? Are we creating for the good of humanity, or are we only chasing convenience and speed? And what kind of future are we actually preparing, not just for ourselves, but for those who will be born into this world shaped by these tools from the very beginning?
India has long been seen as a land that values balance, purity, and harmony with nature. Its rivers, mountains, forests, and traditions are not just geography or history, they are part of a civilizational way of thinking that connects life, duty, and responsibility. In this context, it becomes important to ask what the long-term cost of our technological appetite might be. Every invention has a footprint. Industries change landscapes. Energy demands reshape ecosystems. Convenience today often hides consequences that only appear years later. Progress, when measured only in speed and output, forgets to ask what it takes away in silence.
There is also a quieter change happening inside the human mind. As tools become smarter, humans begin to feel more powerful. The thought slowly shifts from “I can use this” to “I control this.” With artificial intelligence, the language becomes even bolder. We start hearing phrases like “we can create worlds, faces, voices, even minds.” But history has always warned us about ‘overreach’. Not because power is evil, but because pride blinds judgment. When ability grows faster than wisdom, imbalance follows. We can already see early signs of this in concerns about shrinking attention spans, weakening cognitive habits, and a growing dependence on systems that think for us before we learn to think deeply for ourselves.
None of this is an argument to reject innovation. The idea is not to blacklist technology or romanticise the past. The real question is about direction and responsibility. Advancements are not only for the comfort of the present generation. They shape the mental, moral, and emotional world of future generations who will grow up surrounded by these systems as something normal and unavoidable. What values will guide that world? What habits will it encourage? What will it quietly take away?
This is where the richness of Indian thought becomes relevant, not as nostalgia, but as guidance. Ideas of dharma, restraint, balance, and ethical action were never anti-progress. They were reminders that power without responsibility becomes dangerous, and that ability without humility leads to decline. In modern terms, we talk about safety by design, ethical innovation, and human-centred technology. In older language, we talked about duty, limits, and the consequences of unchecked desire. The words have changed, but the concern is the same.
Perhaps the real question is not whether we are becoming creators, but whether we remember that we are also caretakers. We do not bring existence out of nothing. We reshape what already exists. And in that reshaping, the line between wisdom and arrogance, between progress and pride, becomes the most important line of all.
The futuristic impact of AI, robotics, and technologies
In every yuga, humans have extended the limits of what they can do. What changes is not the desire, but the form it takes. Our ancient texts speak of extraordinary abilities, not as fantasies, but as reminders of how power tests character. Naradmuni, a prominent divine sage (Rishiraja) in Hinduism, is described as moving from one place to another in moments. Others gained immense strength, knowledge, or influence through years of discipline and tapasya. Ravana, a figure from the Ramayana, was himself learned and powerful, far beyond ordinary human measure. Sanjaya, the charioteer and advisor of King Dhritarashtra in the Mahabharata, received the gift of divya drishti and narrated the events of the battlefield without being physically present, seeing and speaking across distance in a way that still feels remarkable today.
In this yuga, that ancient search for power and reach has not disappeared; it has only changed its language. Today it speaks through robotics, artificial intelligence, and advanced technologies, making us ask whether we are truly creators or only very advanced arrangers of what already exists.
In this age, science and technology are attempting something similar in a different language. We may not travel like Naradmuni, but we send our voices, images, and thoughts across the world in seconds. We build machines that can see, listen, respond, and even imitate human thinking. Artificial intelligence and robotics promise comfort, speed, and efficiency, and in many ways, they truly improve human life. Yet the old question remains. Not just what can we do, but how far should we go, and at what cost.
When we primarily build for human convenience, we often fail to thoroughly examine the long-term consequences. The environmental impact of large-scale technology is already visible in the pressure on resources, the growth of waste, and the slow damage to air, water, and soil. Nature does not recover at the pace of human ambition. What feels like small compromises today can become heavy burdens for tomorrow.
There is also the impact on the human mind. As systems become more capable, humans risk becoming more dependent. When answers arrive instantly, patience weakens. When machines start deciding for us, the habit of deep thinking slowly fades. Over time, this can affect attention, memory, and judgment. Knowledge becomes easier to reach, but wisdom becomes harder to build. Just as in old stories, the danger is not in having power, but in losing clarity while using it.
Future generations will not encounter these technologies as new inventions. They will be born into them. What we treat today as tools, they will experience as the normal environment of life. This makes responsibility unavoidable. The real question is not only whether these systems work, but what kind of humans they will shape.
The purpose of this reflection is not to reject progress. It is to ask for balance. Building for human comfort is important, but building without studying long-term impact is risky. If this age has the power to create intelligent systems, it must also have the wisdom to protect the environment, care for future generations, and preserve the depth of the human mind. Otherwise, advancement becomes speed without direction, and power without responsibility.
The Acceleration of the Technological Age
Technological progress now occurs through near-instantaneous changes that transform how we work, make decisions, and plan for the future. Robotics, automation, and artificial intelligence are often viewed as signs of progress, yet a quieter question persists: does every technological advancement enhance human existence, or do we merely pursue efficiency and convenience without considering the implications? Indian philosophical thought offers a useful lens here, one that does not reject progress but asks whether it aligns with balance, responsibility, and long-term harmony. From this perspective, intelligence extends beyond computational skill and pattern imitation; it requires awareness, intent, and genuine understanding. Current machines can mimic human reasoning, produce language, and replicate decision-making processes, but they lack both consciousness and lived experience.
Power, Responsibility, and Ethical Imbalance
New technological capabilities bring ethical responsibilities that every society must handle. History shows that human beings must take on ethical duties that match their growing capabilities. Today, however, we create new things faster than we can think through their consequences. Systems built to enhance efficiency increasingly shape human actions and extend human power, yet their full impact is rarely evaluated. Indian traditions emphasize dharma, the principle of balance and rightful action, which warns that power without ethical grounding becomes a destructive force. Imbalance is not always visible; it develops through environmental degradation, social inequality, and the gradual erosion of human agency.
This transformation is already visible in everyday life. Algorithms now shape our consumption choices and how we understand the world around us. Personalization offers comfort, but it also creates hidden patterns that steer preferences. What begins as decision assistance progresses to decision influence and, eventually, decision conditioning. The concept of swatantrata, inner freedom, becomes more complicated in such an environment. When it is easier to select from what the system presents than to choose freely, the capacity for choice itself atrophies. People begin to measure their work and identity through optimization metrics and digital validation, leaving less space for independent thought and reflection.
Technology, Ecology, and Civilizational Values
The environmental impact of technological demand accompanies these social transformations. Every system needs power, every piece of infrastructure has an ecological footprint, and every product we use carries hidden costs that become apparent only years later. India's civilizational values have long held nature in reverence, treating rivers, forests, and ecosystems as essential parts of existence. Modern society, by contrast, measures success mainly in output, and much of what is genuinely valuable disappears in that evaluation. The future requires us not only to create new things but also to decide what must be kept intact.
Progress therefore needs to be defined differently: measured by careful stewardship rather than continuous acceleration. The question extends beyond technological advancement to whether technologies are guided intelligently. As machines grow more capable, people must work harder to preserve what is essentially human: the capacity to make ethical decisions and to understand life's meaning and purpose. The future depends on two factors: the innovations that will emerge and the values that will guide their development.
Conclusion
It is high time we pause and honestly examine the path we are taking. The question is not whether technology should grow, but whether its overreach should be allowed to shape the future without restraint. We are building faster than ever, developing systems that touch every part of life. That makes it even more important to study their long-term impact, not only on markets or productivity, but on nature, on the human mind, and on the generations who will inherit this tech-driven world.
Progress should benefit those who come after us, not quietly weaken them. A future where people are born into pure convenience, surrounded by tools that think, decide, and act for them, may look comfortable, but comfort alone does not build strong, aware, or responsible human beings. Growth without effort and ease without discipline slowly take away depth, resilience, and clarity. Technology should support human potential, not replace it.
This is why morality, ethics, and balance cannot be treated as optional ideas. They must guide innovation, not follow it. We do not need to overcreate. We need to create ‘wisely’. We need to build systems that remain under human control, not systems that slowly train humans to surrender their judgment, attention, and responsibility. Tools should remain tools. They should serve life, not define it.
Indian thought has always placed intention at the centre of action. Karma is not judged only by outcome, but by the spirit in which an act is performed. A tool in itself is neither pure nor impure. It becomes one or the other through the hand that uses it. This is a lens through which modern technology can also be examined. Artificial intelligence can help doctors read scans faster, help farmers predict weather patterns, and help students in remote areas access knowledge. At the same time, it can be used to watch, to sort, to exclude, and to reduce human beings to data points that fit neatly into a system. The difference lies not in the machine, but in the values of those who design and deploy it.
The purpose of this reflection is simple. We should build, but we should build with responsibility. We should innovate, but with awareness of consequences. True progress is not just about what is possible today. It is about what remains healthy, meaningful, and sustainable tomorrow. If this age can combine intelligence with humility, and power with restraint, then technology will not become a symbol of overreach. It will become a sign of maturity.

Executive Summary
Amid the ongoing conflict between the United States, Israel, and Iran, a video circulating widely on social media claims to show American soldiers kneeling and surrendering to Iranian forces. In the clip, several soldiers appear to be sitting on their knees in front of armed personnel, creating the impression that they have been captured on the battlefield.
The video is being shared with the claim that the Iranian military has taken US soldiers prisoner during the war.
However, research by CyberPeace found that the claim is false. The viral clip is not authentic and has been generated using artificial intelligence. There is no credible evidence to support the claim that American soldiers have been captured by Iranian forces.
Claim
A Facebook user named “News Tick” shared the video on March 12, 2026, claiming that Iran had released footage of captured US soldiers. In the clip, the soldiers can be seen kneeling while armed personnel stand around them, giving the scene a highly dramatic appearance.

Fact Check
To verify the claim, we first searched the internet using relevant keywords. We found no credible reports from reputable news organizations confirming that US soldiers had been captured by Iran during the conflict. A closer examination of the video revealed several visual inconsistencies. The weapons carried by the soldiers appear unclear and oddly shaped. Additionally, the background looks unusually blurred and overly dramatic. The lighting and textures in the footage also appear artificial, which are common indicators of AI-generated visuals.
To confirm this suspicion, we analyzed the clip using multiple AI detection tools. The tool Hive Moderation indicated a 99% probability that the video was created using artificial intelligence.

Further analysis using Sightengine also suggested that the video was likely AI-generated, estimating an 80% probability of AI creation.
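The general shape of this step, posting a media file to a detection service and interpreting the returned probability, can be sketched as below. The endpoint URL, payload fields, and response keys are hypothetical placeholders, not the real Hive Moderation or Sightengine APIs; actual integrations should follow each provider's own documentation.

```python
# Sketch of the general pattern: send a media file to an AI-content
# detection service and turn the returned probability into a verdict.
# NOTE: the URL, headers, and response fields are hypothetical
# placeholders, not any real provider's API.
import json
import urllib.request

DETECTOR_URL = "https://api.example-detector.com/v1/analyze"  # placeholder

def classify_probability(ai_probability: float, threshold: float = 0.5) -> str:
    """Map a 0..1 'AI-generated' probability onto a readable verdict."""
    if ai_probability >= 0.9:
        return "very likely AI-generated"
    if ai_probability >= threshold:
        return "likely AI-generated"
    return "no strong indication of AI generation"

def check_media(path: str, api_token: str) -> str:
    # Placeholder request shape; real services differ in auth and payload.
    with open(path, "rb") as f:
        req = urllib.request.Request(
            DETECTOR_URL,
            data=f.read(),
            headers={"Authorization": f"Bearer {api_token}"},
        )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return classify_probability(result.get("ai_probability", 0.0))
```

Under this mapping, scores like the 99% and 80% probabilities reported above would both be flagged as AI-generated, at different confidence levels.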

Conclusion
Our research shows that the viral video claiming to depict American soldiers surrendering and being captured by Iranian forces is fake. The footage has been generated using AI and does not represent a real incident.

The World Economic Forum's Global Risks Report identified AI-generated misinformation and disinformation as the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents surveyed in September 2023. Artificial intelligence is automating the creation of fake news far faster than it can be fact-checked, spurring an explosion of web content that mimics factual articles while disseminating false information about grave themes such as elections, wars, and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of Generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI's ChatGPT (versions 3.5 and 4.0), and Microsoft's AI interface 'CoPilot' were inaccurate one-third of the time when queried about election data. An innovative regulatory approach such as regulatory sandboxes, which can address these challenges while encouraging responsible AI innovation, is therefore needed.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge lies in detecting and managing AI-driven misinformation: distinguishing AI-generated content from authentic content is difficult, and becomes harder as these technologies advance.
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it has yet to achieve true acceptance or fulfil its positive potential, because there is widespread cynicism about the technology, and rightly so. Public sentiment about AI is laced with concern and doubt regarding its trustworthiness, mainly because no regulatory framework has matured on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries; a prominent recent example is the fintech sector, such as the UK's Financial Conduct Authority sandbox. These models are known to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that regulatory sandboxes facilitate firm financing and market entry and increase speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in the fintech industry, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies with the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism for testing solutions that can help regulate AI-generated misinformation. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes. This would encourage innovation in developing anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation, and explaining the role of regulatory sandboxes can help manage public expectations.
- Sandbox frameworks should be reviewed and updated regularly to keep pace with advances in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions