#FactCheck - Viral Video Falsely Claims RSS Chief Mohan Bhagwat Called for ‘Saffronisation’ of Indian Army
A video purportedly showing Rashtriya Swayamsevak Sangh (RSS) chief Mohan Bhagwat making remarks about the “saffronisation” of the Indian Army has been widely circulated on social media. The clip claims that Bhagwat called for the removal of non-Hindus from the armed forces and linked the issue to future political leadership changes in the country.
Claim
However, a verification by the Cyber Peace Foundation has established that the video is misleading and has been digitally manipulated.
In the video, Bhagwat is allegedly heard saying that unless more than 50 percent of non-Hindus are removed from the Indian Army by 2028, Prime Minister Narendra Modi would be replaced by Uttar Pradesh Chief Minister Yogi Adityanath. The clip further attributes another statement to him, suggesting that he would resign if the Prime Minister were to demand Nitish Kumar’s resignation.
By the time of publication, the video had been viewed over 7,000 times.

Fact Check:
A reverse image search directed the team to a video uploaded on CNN-News18’s official YouTube channel on December 21, 2025. The footage was found to be a longer version of the viral clip and was recorded at the RSS centenary event held in Kolkata on the same date. A comparison of both videos confirmed that the background visuals, stage setup and camera angles were identical.
However, a careful review of the original CNN-News18 video revealed that Mohan Bhagwat did not make any of the statements attributed to him in the viral clip.
In his original address, Bhagwat spoke about unity and referred to concerns over increasing atrocities against Hindus in Bangladesh. He made no reference to the Indian Army, nor did he comment on its composition or alleged saffronisation. Here is the link to the original video, along with a screenshot: https://www.youtube.com/watch?v=KnsAUGfBQBk&t=1s

In the next phase of the investigation, the audio track from the viral video was extracted and analysed using the AI audio detection tool Aurigin. The tool’s assessment indicated that the voice heard in the clip was artificially generated, confirming that the audio did not originate from the original speech.
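The extraction step described above is commonly performed with a tool such as ffmpeg before the audio is submitted to a detection service. The snippet below is only an illustrative sketch of that step; the file names, and the use of ffmpeg itself, are assumptions, not details disclosed by the investigation:

```python
import subprocess


def build_audio_extract_cmd(video_path: str, audio_path: str) -> list[str]:
    """Build an ffmpeg command that strips the audio track from a video.

    -vn       drop the video stream
    -ac 1     downmix to mono
    -ar 16000 resample to 16 kHz, a common input format for audio analysis
    """
    return [
        "ffmpeg", "-i", video_path,
        "-vn", "-ac", "1", "-ar", "16000",
        audio_path,
    ]


cmd = build_audio_extract_cmd("viral_clip.mp4", "viral_clip.wav")
# To actually run it (requires ffmpeg installed):
# subprocess.run(cmd, check=True)
```

The resulting mono WAV file can then be uploaded to an AI audio detection tool for analysis.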

Conclusion
The claim that RSS chief Mohan Bhagwat called for the saffronisation of the Indian Army is false. The investigation found that the viral video was digitally manipulated: genuine footage from an RSS centenary event was paired with an AI-generated audio track. The altered video was shared online to mislead viewers by falsely attributing statements Bhagwat never made.
Related Blogs

Introduction
In a world where social media shapes public perception and AI-generated content blurs the line between fact and fiction, mis/disinformation has become a national cybersecurity threat. Today's disinformation campaigns are engineered for effect: political manipulation, interference in public health, financial fraud, and even communal violence. India, with its 900+ million internet users, is especially susceptible to this online distortion. The advent of deepfakes, AI-generated text, and hyper-personalised propaganda has made disinformation more plausible and harder to identify than ever.
What is Misinformation?
Misinformation is false or inaccurate information provided without intent to deceive. Disinformation, on the other hand, is content intentionally designed to mislead and created and disseminated to harm or manipulate. Both are responsible for what experts have termed an "infodemic", overwhelming people with a deluge of false information that hinders their ability to make decisions.
Examples of impactful mis/disinformation are:
- COVID-19 vaccine conspiracy theories (e.g., infertility or microchips)
- Election-related false news (e.g., so-called EVM hacking claims)
- Social disinformation (e.g., manipulated videos of riots)
- Financial scams (e.g., bogus UPI cashbacks or RBI refund plans)
How Misinformation Spreads
Misinformation goes viral because of both technology design and human psychology. Social media platforms such as Facebook, X (formerly Twitter), Instagram, and WhatsApp are designed to amplify messages that elicit strong emotional reactions, which are usually polarising, sensationalist, or fear-mongering posts. As a result, falsehoods attract far more attention and engagement than authentic facts, and the system ends up prioritising virality over truth.
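The engagement-first ranking dynamic described above can be illustrated with a toy scoring function. This is a deliberately simplified sketch under assumed weights; real platform ranking algorithms are proprietary and far more complex:

```python
def engagement_score(post: dict) -> float:
    """Toy feed-ranking score that weights high-arousal signals
    (angry reactions, shares) more heavily than quiet ones (likes)."""
    return (
        3.0 * post["angry_reactions"]
        + 2.5 * post["shares"]
        + 1.0 * post["likes"]
    )


posts = [
    {"id": "sober_fact",   "likes": 500, "shares": 20,  "angry_reactions": 5},
    {"id": "outrage_fake", "likes": 100, "shares": 300, "angry_reactions": 400},
]

# Sorting by engagement pushes the sensational post to the top
# despite the factual post having five times as many likes.
ranked = sorted(posts, key=engagement_score, reverse=True)
```

Under any weighting that rewards outrage-driven interactions, a false but sensational post outranks a sober factual one, which is the mechanism the paragraph above describes.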
Another major consideration is the misuse of generative AI and deep fakes. Applications like ChatGPT, Midjourney, and ElevenLabs can be used to generate highly convincing fake news stories, audio recordings, or videos imitating public figures. These synthetic media assets are increasingly being misused by bad actors for political impersonation, propagating fabricated news reports, and even carrying out voice-based scams.
Added to this danger are coordinated disinformation campaigns, commonly operated by foreign or domestic players with specific political or ideological objectives. These campaigns employ networks of social media bots, deceptive hashtags, and fabricated images to sway public opinion, especially during politically sensitive events such as elections, protests, or foreign wars. They are usually automated with the help of bots and meme-driven propaganda, which makes them scalable and difficult to trace.
Why Misinformation is Dangerous
Mis/disinformation is a significant threat to democratic stability, public health, and personal security. Perhaps one of the most pernicious threats is that it undermines public trust. If it goes unchecked, then it destroys trust in core institutions like the media, judiciary, and electoral system. This erosion of public trust has the potential to destabilise democracies and heighten political polarisation.
In India, false information has had terrible real-world outcomes, especially in inciting violence. Misleading WhatsApp messages about child kidnappers have resulted in rural mob lynchings. Similarly, communal riots have been sparked by manipulated religious videos, and false terrorist warnings have created public panic.
The COVID-19 pandemic also showed how misinformation can be lethal. False claims about vaccine safety, miracle cures, and the origins of the virus resulted in mass vaccine hesitancy, use of dangerous treatments, and even avoidable deaths.
Aside from health and safety, mis/disinformation has also been used in financial scams. Cybercriminals take advantage of the fear and curiosity of the people by promoting false investment opportunities, phishing URLs, and impersonation cons. Victims get tricked into sharing confidential information or remitting money using seemingly official government or bank websites, leading to losses in crypto Ponzi schemes, UPI scams, and others.
India’s Response to Misinformation
- PIB Fact Check Unit
The Press Information Bureau (PIB) operates a fact-checking unit to debunk viral false information, particularly about government policies. In three years, the unit identified more than 1,500 misinformation posts across media platforms.
- Indian Cybercrime Coordination Centre (I4C)
Working under the MHA, the I4C has collaborated with social media platforms to identify the sources of viral misinformation. Citizens can report misleading content through the 1930 cybercrime helpline or the national reporting portal, cybercrime.gov.in.
- IT Rules, 2021 (The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, updated as on 6.4.2023)
The Rules were updated to strengthen the following aspects:
- Removal of unlawful content
- Platform accountability
- Detection Tools
Certain detection tools act as shields, assisting fact-checkers and enforcement bodies to:
- Identify synthetic voice and video scams through technical measures.
- Track misinformation networks.
- Label manipulated media in real time.
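One simple building block behind such labelling tools is matching newly seen media against a database of already-debunked files. The sketch below uses exact cryptographic hashes purely for illustration; production systems typically use perceptual hashing, which survives re-encoding and cropping, and the sample byte strings are stand-ins for real media files:

```python
import hashlib


def sha256_of(data: bytes) -> str:
    """Hex digest of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()


# Hypothetical database of hashes of previously debunked media.
known_manipulated = {sha256_of(b"fake-clip-bytes")}


def label_media(data: bytes) -> str:
    """Label media 'manipulated' if it matches a known debunked file."""
    if sha256_of(data) in known_manipulated:
        return "manipulated"
    return "unverified"
```

Exact hashing only catches byte-identical reuploads, which is why real pipelines layer perceptual hashes and model-based detectors on top of this idea.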
CyberPeace View: Solutions for a Misinformation-Resilient Bharat
- Scale Digital Literacy
Run "Think Before You Share" programmes in rural schools to teach students to check sources, identify clickbait, and avoid resharing fake news.
- Platform Accountability
Technology platforms need to:
- Flag manipulated media.
- Offer algorithmic transparency.
- Mark AI-created media.
- Provide localised fact-checking across diverse Indian languages.
- Community-Led Verification
Establish WhatsApp and Telegram "Fact Check Hubs" headed by expert organisations, industry experts, journalists, and digital volunteers who can flag fake content at the grassroots level.
- Legal Framework for Deepfakes
Formulate targeted legislation under the Bharatiya Nyaya Sanhita (BNS) and other relevant laws to make malicious use of deepfakes and synthetic media a criminal offence in cases of:
- Electoral manipulation.
- Defamation.
- Financial scams.
- AI Counter-Misinformation Infrastructure
Invest in public sector AI models trained specifically to identify:
- Coordinated disinformation patterns.
- Botnet-driven hashtag campaigns.
- Real-time viral fake news bursts.
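As a simplified illustration of the botnet-driven hashtag detection mentioned above, the sketch below flags hashtags whose volume in the latest time window is both large and a big multiple of their baseline volume. The thresholds, window contents, and hashtag names here are arbitrary assumptions for demonstration:

```python
from collections import Counter


def flag_hashtag_bursts(baseline: list[str], latest: list[str],
                        min_count: int = 50,
                        spike_ratio: float = 5.0) -> set[str]:
    """Flag hashtags that are high-volume in the latest window AND
    many times above their baseline volume (a crude burst signal)."""
    base = Counter(baseline)
    now = Counter(latest)
    return {
        tag for tag, n in now.items()
        if n >= min_count and n / max(base[tag], 1) >= spike_ratio
    }


baseline = ["#news"] * 40 + ["#cricket"] * 60       # normal chatter
latest = ["#news"] * 45 + ["#FakeClaim"] * 300      # sudden coordinated burst
suspicious = flag_hashtag_bursts(baseline, latest)
```

Real systems would add account-level signals (creation dates, posting cadence, retweet graphs) on top of simple volume spikes, but the burst heuristic is a common first filter.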
Conclusion
Mis/disinformation is more than just a content issue: it is a public health, cybersecurity, and democratic stability challenge. As India becomes a digitally empowered society, building a secure, informed, and resilient information ecosystem is no longer a choice; it is imperative. Fighting misinformation demands a whole-of-society effort combining AI innovation, public education, regulatory reform, and tech-platform responsibility. The danger is real, but so is the opportunity to guide the world toward a fact-first, trust-based digital age. It is time to act.
References
- https://www.pib.gov.in/factcheck.aspx
- https://www.meity.gov.in/static/uploads/2024/02/Information-Technology-Intermediary-Guidelines-and-Digital-Media-Ethics-Code-Rules-2021-updated-06.04.2023-.pdf
- https://www.cyberpeace.org
- https://www.bbc.com/news/topics/cezwr3d2085t
- https://www.logically.ai
- https://www.altnews.in

Introduction
There is a rising desire for artificial intelligence (AI) laws that limit threats to public safety and protect human rights while allowing for a flexible and innovative environment. Most AI policies prioritize the use of AI for the public good. The most compelling reason for AI innovation as a valid goal of public policy is its promise to enhance people's lives by assisting in the resolution of some of the world's most pressing challenges and inefficiencies, and to emerge as a transformational technology, similar to mobile computing. This blog explores the complex interplay between AI and internet governance from an Indian standpoint, examining the challenges, opportunities, and the necessity for a well-balanced approach.
Understanding Internet Governance
Before delving into an examination of their connection, let's establish a comprehensive grasp of Internet Governance. This entails the regulations, guidelines, and criteria that influence the global operation and management of the Internet. With the internet being a shared resource, governance becomes crucial to ensure its accessibility, security, and equitable distribution of benefits.
The Indian Digital Revolution
India has witnessed an unprecedented digital revolution, with a massive surge in internet users and a burgeoning tech ecosystem. The government's Digital India initiative has played a crucial role in fostering a technology-driven environment, making technology accessible to even the remotest corners of the country. As AI applications become increasingly integrated into various sectors, the need for a comprehensive framework to govern these technologies becomes apparent.
AI and Internet Governance Nexus
The intersection of AI and Internet governance raises several critical questions. How should data, the lifeblood of AI, be governed? What role does privacy play in the era of AI-driven applications? How can India strike a balance between fostering innovation and safeguarding against potential risks associated with AI?
- AI's Role in Internet Governance:
Artificial Intelligence has emerged as a powerful force shaping the dynamics of the internet. From content moderation and cybersecurity to data analysis and personalized user experiences, AI plays a pivotal role in enhancing the efficiency and effectiveness of Internet governance mechanisms. Automated systems powered by AI algorithms are deployed to detect and respond to emerging threats, ensuring a safer online environment.
A comprehensive strategy for managing the interaction between AI and the internet is required to stimulate innovation while limiting hazards. Multistakeholder models including input from governments, industry, academia, and civil society are gaining appeal as viable tools for developing comprehensive and extensive governance frameworks.
The usefulness of multistakeholder governance stems from its adaptability and flexibility, requiring collaboration from every player with a possible stake in an issue. Though imperfect, the approach allows its flaws to be surfaced and remedied through incremental knowledge-building. As AI advances, this trait will become increasingly important in ensuring that all conceivable aspects are covered.
The Need for Adaptive Regulations
While AI's potential for good is essentially endless, so is its potential for damage - whether intentional or unintentional. The technology's highly disruptive nature needs a strong, human-led governance framework and rules that ensure it may be used in a positive and responsible manner. The fast emergence of GenAI, in particular, emphasizes the critical need for strong frameworks. Concerns about the usage of GenAI may enhance efforts to solve issues around digital governance and hasten the formation of risk management measures.
Several AI governance frameworks have been published throughout the world in recent years, with the goal of offering high-level guidelines for safe and trustworthy AI development. The OECD's "Principles on Artificial Intelligence" (OECD, 2019), the EU's "Ethics Guidelines for Trustworthy AI" (EU, 2019), and UNESCO's "Recommendations on the Ethics of Artificial Intelligence" (UNESCO, 2021) are among the multinational organizations that have released their own principles. However, the advancement of GenAI has resulted in additional recommendations, such as the OECD's newly released "G7 Hiroshima Process on Generative Artificial Intelligence" (OECD, 2023).
Several guidance documents and voluntary frameworks have emerged at the national level in recent years, including the "AI Risk Management Framework" from the United States National Institute of Standards and Technology (NIST), a voluntary guidance published in January 2023, and the White House's "Blueprint for an AI Bill of Rights," a set of high-level principles published in October 2022 (The White House, 2022). These voluntary policies and frameworks are frequently used as guidelines by regulators and policymakers all around the world. More than 60 nations in the Americas, Africa, Asia, and Europe had issued national AI strategies as of 2023 (Stanford University).
Conclusion
Monitoring AI will be one of the most daunting tasks confronting the international community in the coming decades. As vital as the need to govern AI is the need to regulate it appropriately. Current AI policy debates too often fall into a false dichotomy of progress versus doom (or geopolitical and economic benefits versus risk mitigation). Instead of thinking creatively, solutions all too often resemble paradigms for yesterday's problems. It is imperative that we foster a relationship that prioritizes innovation, ethical considerations, and inclusivity. Striking the right balance will empower us to harness the full potential of AI within the boundaries of responsible and transparent Internet Governance, ensuring a digital future that is secure, equitable, and beneficial for all.
References
- The Key Policy Frameworks Governing AI in India - Access Partnership
- AI in e-governance: A potential opportunity for India (indiaai.gov.in)
- India and the Artificial Intelligence Revolution - Carnegie India - Carnegie Endowment for International Peace
- Rise of AI in the Indian Economy (indiaai.gov.in)
- The OECD Artificial Intelligence Policy Observatory - OECD.AI
- Artificial Intelligence | UNESCO
- Artificial intelligence | NIST

Introduction
AI has penetrated most industries, and telecom is no exception. According to a survey by Nvidia, enhancing customer experiences is the biggest AI opportunity for the telecom industry, with 35% of respondents identifying customer experiences as their key AI success story. Further, the study found nearly 90% of telecom companies use AI, with 48% in the piloting phase and 41% actively deploying it. Most telecom service providers (53%) agree or strongly agree that adopting AI would provide a competitive advantage. AI in telecom is primed to be the next big thing, and Google has not ignored this opportunity: it is reported that Google will soon add “AI Replies” to the Phone app’s call screening feature.
How Does The ‘AI Call Screener’ Work?
With the busy lives people lead nowadays, Google has created a helpful tool to answer the challenge of responding to calls amidst busy schedules. Google Pixel smartphones are now fitted with a new feature that deploys AI-powered calling tools that can help with call screening, note-making during an important call, filtering and declining spam, and most importantly ending the frustration of being on hold.
In the official Google Phone app, users can respond to a caller through “new AI-powered smart replies”. While “contextual call screen replies” are already part of the app, this new feature allows users to not have to pick up the call themselves.
- With this new feature, Google Assistant will be able to respond to the call with a customised audio response.
- The Google Assistant, responding to the call, will ask the caller’s name and the purpose of the call. If they are calling about an appointment, for instance, Google will show the user suggested responses specific to that call, such as ‘Confirm’ or ‘Cancel appointment’.
Google will build on the call-screening feature by using a “multi-step, multi-turn conversational AI” to suggest replies more appropriate to the nature of the call. Google’s Gemini Nano AI model is set to power this new feature and enable it to handle phone calls and messages even if the phone is locked and respond even when the caller is silent.
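The contextual-reply behaviour described above can be sketched as a simple mapping from the caller's stated purpose to suggested responses. This is a toy illustration only: Google's actual Gemini-powered implementation is not public, and the keyword table and reply strings below are hypothetical:

```python
# Hypothetical purpose -> suggested-reply table. A real system would use
# a multi-turn conversational model, not keyword matching.
REPLY_SUGGESTIONS = {
    "appointment": ["Confirm", "Cancel appointment", "Ask to reschedule"],
    "delivery": ["Leave at door", "Call back later"],
}


def suggest_replies(transcript: str) -> list[str]:
    """Return context-specific reply suggestions for a screened call,
    falling back to generic options when no purpose is recognised."""
    text = transcript.lower()
    for keyword, replies in REPLY_SUGGESTIONS.items():
        if keyword in text:
            return replies
    return ["Tell me more", "Report as spam"]


options = suggest_replies("Hi, I'm calling about your dentist appointment")
```

For an appointment-related call this sketch surfaces "Confirm" / "Cancel appointment" style options, mirroring the behaviour the article describes.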
Benefits of AI-Powered Call Screening
This AI-powered call screening feature offers multiple benefits:
- The AI feature will enhance user convenience by reducing the disruptions caused by spam calls. This will, in turn, increase productivity.
- It will increase call privacy and security by filtering high-risk calls, thereby protecting users from attempts of fraud and cyber crimes such as phishing.
- The new feature can potentially increase efficiency in business communications by screening for important calls, delegating routine inquiries and optimising customer service.
Key Policy Considerations
Adhering to transparent, ethical, and inclusive policies while anticipating regulatory changes can establish Google as a responsible innovator in AI call management. Some key considerations for AI Call Screener from a policy perspective are:
- The AI call screener will process and transcribe sensitive voice data, so its data-handling policies need to be transparent in order to reassure users of compliance with applicable laws.
- Ethical use and bias mitigation remain a crossroads for AI. The underlying algorithms must be designed to avoid bias and to reflect inclusivity in their understanding of language.
- The screener's data use is further complicated by global and regional privacy regulations such as the GDPR, the DPDP Act, and the CCPA, which require consent to record or transcribe calls and safeguard user rights.
Conclusion: A Balanced Approach to AI in Telecommunications
Google’s AI Call Screener offers a glimpse into the future of automated call management, reshaping customer service and telemarketing by streamlining interactions and reducing spam. As this technology evolves, businesses may adopt similar tools, balancing customer engagement with fewer unwanted calls. AI-driven screening will also affect call centres, shifting human roles toward complex, people-centred interactions while automation handles routine calls, and could reshape support and managerial roles along the way. Ultimately, as AI call management grows, responsible design and transparency will be essential to ensure a seamless, beneficial experience for all users.
References
- https://resources.nvidia.com/en-us-ai-in-telco/state-of-ai-in-telco-2024-report
- https://store.google.com/intl/en/ideas/articles/pixel-call-assist-phone-screen/
- https://www.thehindu.com/sci-tech/technology/google-working-on-ai-replies-for-call-screening-feature/article68844973.ece
- https://indianexpress.com/article/technology/artificial-intelligence/google-ai-replies-call-screening-9659612/