#FactCheck - Viral Video of Sachin Tendulkar Commenting on Joe Root Is AI-Generated
A video circulating on social media claims to show former Indian cricketer Sachin Tendulkar commenting on England batter Joe Root’s batting feats. In the clip, Tendulkar is allegedly heard saying that if Joe Root continues scoring centuries, even his (Tendulkar’s) record would be broken. The video further claims that Tendulkar says that if Root scores another century, he would give up his grip on the bat, after which the clip abruptly ends.
Users sharing the video are claiming that Sachin Tendulkar has taken a dig at Joe Root through this remark.
CyberPeace Foundation’s research found the claim to be misleading: the viral video is not authentic but has been created using Artificial Intelligence (AI) tools and is being shared online with a false narrative.
CLAIM
On January 5, 2025, several users shared the viral video on Instagram, claiming it shows Sachin Tendulkar making remarks about Joe Root’s century-scoring spree.
(Post link and archive link available.)

FACT CHECK
To verify the claim, we extracted keyframes from the viral video and conducted a Google Reverse Image Search. This led us to an interview of Sachin Tendulkar published on the official BBC News YouTube channel on November 18, 2013. The visuals from that interview match exactly with those seen in the viral clip.
This establishes that the visuals used in the viral video are old and have been repurposed with manipulated audio to create a misleading narrative.
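For readers who want to replicate the keyframe-extraction step described above, the sketch below shows one way frames can be pulled from a downloaded clip with OpenCV so they can be submitted to a reverse image search. This is a minimal illustration, not the Desk’s actual tooling; the file name "viral_clip.mp4" and the one-frame-per-second sampling rate are assumptions.

```python
# Minimal sketch: sample one frame per second from a clip with OpenCV so the
# frames can be uploaded to a reverse image search. The file name and sampling
# rate are placeholders, not the Desk's actual workflow.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 1.0) -> int:
    """Save one frame per `every_n_seconds` of video as JPEG files."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved = frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_index % step == 0:
            cv2.imwrite(f"frame_{saved:03d}.jpg", frame)
            saved += 1
        frame_index += 1
    cap.release()
    return saved

print(extract_keyframes("viral_clip.mp4"), "frames saved for reverse image search")
```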
Further, Joe Root made his Test debut only in December 2012. At the time of the 2013 interview, he was at the very start of his Test career and nowhere close to Sachin Tendulkar’s record tally of hundreds. This timeline alone makes the viral claim factually implausible.
(Link to the original BBC interview available.)
https://www.youtube.com/watch?v=v6Rz4pgR9UQ

Upon closely examining the viral clip, we noticed that Sachin Tendulkar’s voice sounded unnatural and inconsistent. This raised suspicion of audio manipulation.
We then ran the viral video through an AI detection tool, Aurigin AI. According to the results, the audio in the video was found to be 100 percent AI-generated, confirming that Tendulkar never made the statements attributed to him in the clip.

Conclusion
Our research confirms that the viral video claiming Sachin Tendulkar commented on Joe Root’s centuries is fake. The video has been created using AI-generated audio and misleadingly combined with visuals from a 2013 interview. Users are sharing this manipulated clip on social media with a false claim.
Related Blogs

Introduction
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in Inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content, so that they build resilience against it. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the aim is to help the mind develop resistance now to influence it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people to distinguish fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure they reach a large enough audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time: continuous reinforcement and reminders may be required so that individuals retain the skills and information gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions remain flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation, but they come with their own challenges. Debunking entails critically verifying facts and promoting corrected information, which is difficult owing to the rising complexity of modern tools used to generate narratives that blend truth and untruth, opinion and fact. These advanced approaches, which include emotionally manipulative content, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated response at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which means it is impossible to protect everyone at all times; at least some netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation only after it has spread extensively, and from the perspective of total harm done this reactive method may be less successful than proactive strategies such as Prebunking. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of false or misleading information. Repeated exposure to fact-checks may be needed to prevent erroneous beliefs from taking hold, implying that a single Debunking may not be enough to rectify a piece of misinformation. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation circulating at any particular moment. This constraint may cause certain misinformation to go unchecked, potentially leading to unintended effects. Finally, misinformation on social media can spread and go viral faster than Debunking articles, creating a situation in which falsehoods spread like a virus while the corrective struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across their platforms, empowering users to recognise manipulative messaging through Prebunking and to see corrections of specific false claims through Debunking interventions.
- Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Such gamified interventions can help immunise users against subsequent exposure to misinformation and empower them to build the competencies needed to detect it.
- Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that their algorithms prioritise the distribution of Prebunking materials, boosting educational content that strengthens resistance to misinformation. Platforms should likewise prioritise the visibility of Debunking content so that proper corrections reach users alongside the erroneous information they address; this helps both Prebunking and Debunking efforts reach a larger or better-targeted audience.
- User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that allow people to access Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
- Partnership with Fact-Checking/Expert Organisations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives/campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint initiatives.
Conclusion
The threat of online misinformation is only growing with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to build resilience to misinformation, proactively lowering susceptibility to false or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, along with joint initiatives by tech/social media platforms and expert organisations, can ultimately help in fighting the rising tide of online misinformation and establishing a resilient online information landscape.
References
- https://mark-hurlstone.github.io/THKE.22.BJP.pdf
- https://futurefreespeech.org/wp-content/uploads/2024/01/Empowering-Audiences-Through-%E2%80%98Prebunking-Michael-Bang-Petersen-Background-Report_formatted.pdf
- https://newsreel.pte.hu/news/unprecedented_challenges_Debunking_disinformation
- https://misinforeview.hks.harvard.edu/article/global-vaccination-badnews/

Introduction
The Central Board of Secondary Education (CBSE) has issued a warning to students about fake social media accounts that spread false information about the CBSE. The board has warned students not to trust information coming from these accounts and has released a list of 30 fake accounts. It has expressed concern that these handles are misleading students and parents by spreading fake information under the name and logo of the CBSE. The board has also clarified that it is not responsible for the information being spread from these fake accounts.
The Central Board of Secondary Education (CBSE), a venerable institution in the realm of Indian education, has found itself ensnared in the web of cyber duplicity. Impersonation attacks, a sinister facet of cybercrime, have burgeoned, prompting the Board to adopt a vigilant stance against the proliferation of counterfeit social media handles that masquerade under its esteemed name and emblem.
The CBSE has revealed a list of approximately 30 spurious handles that have been sowing seeds of disinformation across the social media landscape. These digital doppelgängers, cloaked in the Board's identity, have been identified and exposed. The Board's official beacon in this murky sea of falsehoods is the verified handle '@cbseindia29', a lighthouse guiding the public to the shores of authentic information.
This unfolding narrative signifies the Board's unwavering commitment to tackle the scourge of misinformation and to fortify the bulwarks safeguarding the sanctity of its official communications. By spotlighting the rampant growth of fake social media personas, the CBSE endeavors to shield the public from the detrimental effects of misleading information and to preserve the trust vested in its official channels.
CBSE Impersonator Accounts
The list of identified malefactors, parading under the CBSE banner, serves as a stark admonition to the public to exercise discernment while navigating the treacherous waters of social media platforms. The CBSE has initiated appropriate legal manoeuvres against these unauthorised entities to stymie their dissemination of fallacious narratives.
The Board has previously unfurled comprehensive details concerning the impending board examinations for both Class 10 and Class 12 in the year 2024. These academic assessments are slated to run from February 15 to April 2, 2024, with a uniform start time of 10:30 AM (IST) across all designated dates.
The CBSE has made it unequivocally clear that there are nefarious entities lurking in the shadows of social media, masquerading in the guise of the CBSE. It has implored students and the general public not to be ensnared by the siren songs emanating from these fraudulent accounts and has also unfurled a list of these imposters. The Board's warning is a beacon of caution, illuminating the path for students as they navigate the digital expanse with the impending commencement of the CBSE Class X and XII exams.
Sounding The Alarm
The Central Board of Secondary Education (CBSE) has sounded the alarm, issuing an advisory to schools, students, and their guardians about the existence of fake social media platform handles that brandish the board’s logo and mislead the academic community. The board has identified about 30 such accounts on the microblogging site 'X' (formerly known as Twitter) that misuse the CBSE logo and acronym, sowing confusion and disarray.
The board is in the process of taking appropriate action against these deceptive entities. CBSE has also stated that it bears no responsibility for any information disseminated by any other source that unlawfully appropriates its name and logo on social media platforms.
Sources reveal that these impostors post false information about various updates, including admissions and exam schedules. After receiving complaints about such accounts on 'X', the CBSE issued the advisory and has initiated action against those operating these accounts, sources said.
The Brute Nature of Impersonation
In the contemporary digital epoch, cybersecurity has ascended to a position of critical importance. It is the bulwark that ensures the sanctity of computer networks is maintained and that computer systems are not marked as prey by cyber predators. Cyberattacks are insidious stratagems executed with the intent of expropriating, manipulating, or annihilating authenticated user or organizational data. It is imperative that cyberattacks be mitigated at their roots so that users and organizations utilizing internet services can navigate the digital domain with a sense of safety and security. Knowledge about cyberattacks thus plays a pivotal role in educating cyber users about the diverse types of cyber threats and the preventive measures to counteract them.
Impersonation Attacks are a vicious form of cyberattack, characterised by the malicious intent to extract confidential information. These attacks revolve around a process where cyber attackers eschew the use of malware or bots to perpetrate their crimes, instead wielding the potent tactic of social engineering. The attacker meticulously researches and harvests information about the legitimate user through platforms such as social media and then exploits this information to impersonate or masquerade as the original, legitimate user.
The threats posed by Impersonation Attacks are particularly insidious because they demand immediate action, pressuring the victim to act without distinguishing between the authenticated user and the impersonator. An Impersonation Attack is perilous by its very nature, since the original user being impersonated holds rights to private information. These attacks are often executed by exploiting a close resemblance to the original user's identity, such as an email ID. Email IDs that differ only minutely from the legitimate user's are employed in this form of attack, which sets it apart from the typical phishing mechanism. The addresses are so similar that, without close attention, the differences can be easily overlooked; moreover, they appear correct at a glance, as they generally contain no obvious spelling errors.
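To illustrate how small these differences can be, the sketch below flags sender domains that closely resemble, but do not exactly match, a trusted domain. It is a simplified, hypothetical check rather than a description of any particular organisation's mail filter; the trusted domain "example.org" and the 0.8 similarity threshold are assumptions.

```python
# Simplified, hypothetical lookalike-domain check: flag sender addresses whose
# domain is nearly, but not exactly, a trusted domain. The trusted domain and
# the 0.8 similarity threshold are illustrative assumptions.
from difflib import SequenceMatcher

TRUSTED_DOMAIN = "example.org"

def looks_like_impersonation(sender: str, trusted: str = TRUSTED_DOMAIN,
                             threshold: float = 0.8) -> bool:
    """Return True when the sender's domain is suspiciously similar to,
    but not identical to, the trusted domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain == trusted:
        return False  # exact match: treated as legitimate here
    similarity = SequenceMatcher(None, domain, trusted).ratio()
    return similarity >= threshold

print(looks_like_impersonation("admin@examp1e.org"))    # True  - one character swapped
print(looks_like_impersonation("admin@example.org"))    # False - exact match
print(looks_like_impersonation("admin@unrelated.com"))  # False - clearly different
```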
Strategies to Prevent
To prevent Impersonation Attacks, the following strategies can be employed:
- Proper security mechanisms help identify malicious emails and thereby filter spamming email addresses on a regular basis.
- Double-checking sensitive information is crucial, especially when important data or funds need to be transferred. It is vital to ensure that the data is transferred to a legitimate user by cross-verifying the email address.
- Ensuring organizational-level security is paramount. Organizations should have specific domain names assigned to them, which can help employees and users distinguish their identity from that of cyber attackers.
- Protection of User Identity is essential. Employees must not publicly share their private identities, which can be exploited by attackers to impersonate their presence within the organization.
Conclusion
The CBSE's struggle against the masquerade of misinformation is a reminder of the vigilance required to safeguard the legitimacy of our digital interactions. As we navigate the complex and uncharted terrain of the internet, let us arm ourselves with the knowledge and discernment necessary to unmask these digital charlatans and uphold the sanctity of truth.
References
- https://timesofindia.indiatimes.com/city/ahmedabad/cbse-warns-against-misuse-of-its-name-by-fake-social-media-handles/articleshow/107644422.cms
- https://www.timesnownews.com/education/cbse-releases-list-of-fake-social-media-handles-asks-not-to-follow-article-107632266
- https://www.etvbharat.com/en/!bharat/cbse-public-advisory-enn24021205856

A video purportedly showing Prime Minister Narendra Modi delivering a politically charged warning during a public address has been widely circulated on social media. The clip is being shared with claims that the Prime Minister spoke about the “saffronisation” of the Indian Army and issued a stern message to Bangladesh by invoking India’s role in the 1971 war.
A detailed verification by the CyberPeace Foundation found these claims to be misleading. The investigation revealed that the viral video has been digitally altered, and the statements attributed to the Prime Minister do not appear in the original speech. The misleading narrative appears to have been created by inserting manipulated audio into authentic video footage.
Claim
An X user, “@Pakpulse247,” shared a video on December 26 claiming that Prime Minister Narendra Modi, while addressing a public gathering, made remarks about the “saffronisation” of the Indian Army and issued a warning to Bangladesh by invoking India’s role in the 1971 war.
The post’s caption alleged that the Prime Minister made these statements during the inauguration of the Rashtra Prerna Sthal in Lucknow, suggesting that Bangladesh should not expect support from Pakistan in the event of an Indian offensive and should remember India’s contribution during the 1971 conflict.
The link to the post is provided below, along with a screenshot of the viral claim.

Fact Check:
During the verification process, the Desk carried out a targeted keyword search and located the complete, original version of the video on the official YouTube channel of the Bharatiya Janata Party, uploaded on December 26, 2025. The video description confirmed that Prime Minister Narendra Modi was addressing the gathering at the inauguration of the Rashtra Prerna Sthal in Lucknow.
A comparison of the visuals showed that the venue, backdrop, and the Prime Minister’s attire were identical to those seen in the viral clip circulating on social media. However, after carefully reviewing the full speech, the Desk found no reference to Pakistan, Bangladesh, or the claims being attributed to him in the viral post.
The link to the original video is provided below, along with a relevant screenshot.
https://www.youtube.com/watch?v=L9rbzU0m30o

Upon further examination of the search results, the Desk located the official English transcript of Prime Minister Narendra Modi’s speech on the PM India website. A thorough review of the complete transcript revealed no mention of the statements attributed to him in the viral social media post.
The link to the official transcription is provided below.
Building on these findings, the Desk extracted the audio track from the viral video and analysed it using Resemble AI, an audio-detection tool. The analysis flagged the audio as fake, indicating that it had been digitally manipulated.
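As a general illustration of the audio-extraction step (not the Desk’s exact workflow), the sketch below uses ffmpeg to pull the audio track out of a downloaded clip as a WAV file that can then be submitted to a detection service. The file names are placeholders, and the upload to Resemble AI itself is not shown.

```python
# Minimal sketch: extract the audio track from a downloaded clip with ffmpeg
# (which must be installed and on PATH) so it can be submitted to an
# audio-detection service. File names are placeholders; the detection-service
# upload step is omitted.
import subprocess

def extract_audio(video_path: str, audio_path: str = "extracted_audio.wav") -> str:
    """Re-encode the clip's audio as 16 kHz mono PCM WAV."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path,
         "-vn",            # drop the video stream
         "-ac", "1",       # down-mix to mono
         "-ar", "16000",   # 16 kHz sample rate, common for speech analysis
         audio_path],
        check=True,
    )
    return audio_path

if __name__ == "__main__":
    extract_audio("viral_clip.mp4")
```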

Conclusion
The CyberPeace Foundation’s research clearly establishes that the viral video claiming Prime Minister Narendra Modi made remarks about the saffronisation of the Indian Army and issued warnings to Bangladesh is false and misleading. The full original video and official transcript of the speech contain no such references to Pakistan, Bangladesh, or the 1971 war. Furthermore, audio analysis using AI-detection tools confirms that the voice in the viral clip has been digitally manipulated.