#FactCheck - Viral Video of Anti-RSS Slogans Is From 2022 Telangana, Not Uttar Pradesh
Executive Summary
A video showing a group of people wearing Muslim caps raising provocative slogans against the Rashtriya Swayamsevak Sangh (RSS) is being widely shared on social media. Users sharing the clip claim that the incident took place recently in Uttar Pradesh. However, CyberPeace research found the claim to be false. The probe established that the video is neither recent nor related to Uttar Pradesh. In fact, the footage dates back to 2022 and is from Telangana. The slogans heard in the video were raised during a protest against Goshamahal MLA T. Raja Singh, and the clip is now being circulated with a misleading claim.
Claim
On January 21, 2026, a user on social media platform X (formerly Twitter) shared the video claiming it showed people in Uttar Pradesh chanting slogans such as, “Kaat daalo saalon ko, RSS walon ko” and “Gustakh-e-Nabi ka sar chahiye.” The post suggested that such slogans were being raised openly in Uttar Pradesh despite strict law enforcement. Links to the post and its archive are provided below.

Fact Check
To verify the claim, CyberPeace research conducted a reverse image search using keyframes from the viral video. The same footage was found on a Facebook account where it had been uploaded on August 26, 2022, indicating that the video is not recent.
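For readers curious about the mechanics, extracting still frames is the usual first step before running a reverse image search. Below is a minimal Python sketch using OpenCV; the file name and the two-second sampling interval are illustrative assumptions, not details from the investigation.

```python
# Minimal keyframe-extraction sketch using OpenCV (pip install opencv-python).
# "viral_clip.mp4" and the 2-second interval are illustrative assumptions.
import cv2

video = cv2.VideoCapture("viral_clip.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 25  # fall back to 25 fps if metadata is missing
step = int(fps * 2)                      # grab roughly one frame every 2 seconds

frame_idx, saved = 0, 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_idx % step == 0:
        # Each saved frame can then be uploaded to a reverse image search engine.
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_idx += 1

video.release()
print(f"Extracted {saved} keyframes")
```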

Further verification led the team to a report published by news portal OpIndia on August 25, 2022, which featured identical visuals from the viral clip. According to the report, the video showed a protest march organised against BJP MLA T. Raja Singh following his alleged controversial remarks about Prophet Muhammad. The report identified one of the individuals in the video as Kaleem Uddin, who was allegedly heard raising the slogan “Kaat daalo saalon ko,” to which the crowd responded “RSS walon ko.” The slogan was linked to incitement against RSS members.

To confirm the location, the video was examined closely. A shop sign reading “Royal Time House” was visible in the footage. Using Google Street View, the same shop was located in Nalgonda, Telangana, conclusively establishing that the video was filmed there and not in Uttar Pradesh.

Conclusion
CyberPeace research confirmed that the viral video is from 2022 and was recorded in Telangana, not Uttar Pradesh. The clip is being falsely circulated with a misleading claim to give it a communal and political angle.
Related Blogs

As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical, or entirely fabricated, yet present them with complete confidence. OpenAI recently released two new models, o3 and o4-mini, which differ from earlier versions in that they focus more on step-by-step reasoning than on simple text prediction. With the growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy, and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work: hallucinations can be made less probable, but not eliminated altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming, and seemingly counterintuitive, is that newer and more advanced models are producing more hallucinations, not fewer. This has been especially prevalent in reasoning-based models, which generate answers step by step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for error at each step, especially when no factual retrieval or grounding is involved.
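To make the "statistical word predictor" point concrete, here is a toy Python sketch of next-token sampling; the miniature vocabulary and scores are invented for illustration and come from no real model.

```python
# Toy next-token sampler; the tiny vocabulary and scores are invented for
# illustration and do not come from any real model.
import math
import random

def sample_next_token(scores: dict, temperature: float = 1.0) -> str:
    """Softmax the raw scores, then sample one token from the distribution."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    tokens = list(exps)
    weights = [exps[tok] / total for tok in tokens]
    return random.choices(tokens, weights=weights)[0]

# Made-up scores for continuations of "The capital of France is ...".
scores = {"Paris": 5.0, "Lyon": 2.0, "London": 1.0, "pizza": 0.1}
print(sample_next_token(scores))
# Usually "Paris", but a low-probability token can still be drawn: the model
# always outputs *something* plausible-looking, which is one seed of hallucination.
```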
According to a study reported by TechCrunch, asking AI models for short answers increased hallucinations by up to 30%. A study covered by eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions; the problem was not limited to that particular Large Language Model, but extended to similar ones like DeepSeek. Even more concerning are hallucinations in multimodal models, such as those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only look real but can also feed fabricated narratives, raising the stakes for the spread of misinformation during elections, crises, and other high-stakes events.
It is also notable that AI models continue to improve with each version, with developers focusing on reducing hallucinations and enhancing accuracy. New features, such as providing source links and citations, are being implemented to increase the transparency and reliability of responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. Developers are aware of the problem, however, and are actively charting out ways to reduce its probability. Some of these approaches are:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model’s internal knowledge, RAG allows the model to “look up” information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data (see the sketch after this list).
- Use of smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is limited and their training data is better curated.
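To illustrate the RAG idea, here is a minimal Python sketch that retrieves supporting text with TF-IDF similarity (via scikit-learn) before generation; the tiny document corpus is invented, and the final `generate()` call is a placeholder for whatever LLM is actually used.

```python
# Minimal RAG sketch: retrieve supporting text first, then generate from it.
# The tiny corpus and the generate() placeholder are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "CERT-In is India's national agency for responding to cybersecurity incidents.",
    "Retrieval-Augmented Generation grounds model answers in retrieved documents.",
    "Deepfake detection tools analyse artefacts left by generative models.",
]

def retrieve(query: str, k: int = 2) -> list:
    """Return the k documents most similar to the query (TF-IDF + cosine)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [documents[i] for i in ranked[:k]]

def answer(query: str) -> str:
    # Anchor the prompt in retrieved text so the model has something to ground on.
    context = "\n".join(retrieve(query))
    prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)  # placeholder, not called here: swap in a real LLM call

print(retrieve("How does RAG reduce hallucinations?"))
```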
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue to occur until foundational changes are made in how models are trained and deployed. For the time being, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations

Introduction
Cybercrime is one of the most pressing concerns of today’s era. As the digital world evolves rapidly, so do the threats, and the complexity of evolving cybercrime makes it difficult for law enforcement across the world to detect and investigate. India is among the countries actively engaged in creating awareness about cybercrime and security concerns across the country. At the national level, initiatives like the National Cybercrime Reporting Portal, CERT-In, and I4C have been established to assist law enforcement in dealing with cybercrime in India. According to a press release by the Ministry of Home Affairs, 1,25,153 cases of Financial Cyber Fraud were reported in the year 2023, the second-highest state-wise figure after Uttar Pradesh. Maharashtra has been highlighted as one of the states with the highest number of cybercrime cases for the past few years.
In response to the increasing number of cases, the state of Maharashtra has launched the ‘Maharashtra Cyber Security Project’. The purpose of this project is to strengthen the state’s defence mechanisms by establishing cybersecurity infrastructure, leveraging technological advancements, and enhancing the skills of law enforcement agencies.
Maharashtra Cyber Department and the Cyber Security Project
The Maharashtra Cyber Department, also referred to as MahaCyber, was established in 2016 and employs a multi-faceted approach to addressing cyber threats. Its objectives are to provide a user-friendly space for reporting cybercrime, safeguard Critical Information Infrastructure from cyber threats, empower investigating agencies to improve their efficiency, and create awareness among the public.
The Maharashtra Cyber Security Project aims to strengthen the department, bringing all the aspects of the cyber security system under one facility. The key components of the Maharashtra Cyber Security Project are as follows:
- Command & Control Centre:
The Command & Control Centre will function as a 24/7 complaint registration and grievance handling hub, accessible via a helpline number, a mobile app, or the online portal. The Centre continuously monitors cyber threats, reduces the impact of cyber attacks, and ensures that issues are resolved as soon as possible.
- Technology Assisted Investigation (TAI):
Registered complaints are analysed and investigated by experts using cutting-edge technologies such as computer and mobile forensics, voice analysis systems, image enhancement tools, and deepfake detection solutions, to name a few. These help the Maharashtra Cyber Department collect evidence, identify weak spots, and mitigate cyber threats effectively.
- Computer Emergency Response Team – Maharashtra (CERT-MH):
CERT-MH works on curbing cybercrime aimed at the state’s Critical Infrastructure, such as banks, railway services, and electricity, as well as threats to national security. It uses technologies such as deep and dark web analysis, darknet and threat intelligence feeds, vulnerability management, a cyber threat intelligence platform, malware analysis, and network capture analysis, and coordinates with other agencies.
- Security Operations Centre (SOC):
The SOC safeguards MahaCyber’s own infrastructure from cyber threats, monitoring it 24/7 for any signs of breach and thus aiding early detection and prevention of further harm.
- Centre of Excellence (COE):
The Centre of Excellence focuses on training police officials, equipping them with the tools and technologies needed to deal with cyber threats. The Centre also works on creating awareness of various cyber threats among the citizens of the state.
- Nodal Cyber Police Station:
The Nodal Cyber Police Station works as a focal point for all cybercrime-related law enforcement activities. It is responsible for coordinating investigations and preventing cybercrime within the state. Such cyber police stations have been established in every district of Maharashtra.
Fund of Funds to Scale Up Startups
The government of Maharashtra, through the Fund of Funds for Startups scheme, has invested in more than 300 startups aligned with the objectives of cybersecurity and digital safety. The government is promoting cyber defence innovation that will help push the boundaries of traditional cybersecurity tools and improve the state’s ability to tackle cybercrime. Such partnerships can be a cost-effective way to proactively promote a culture of cybersecurity across industries.
Dynamic Cyber Platform
The government of Maharashtra has been working on a dynamic cyber platform to help it tackle cybercrime and save hundreds of crores of rupees in a short span of time. The platform will act as a link between various stakeholders, such as banks, Non-Banking Financial Companies (NBFCs), and social media providers, to provide a technology-driven solution to evolving cybercrime. As part of this process, the government has invited tenders, calling on top IT companies from around the world to participate and help set up the platform.
Why Maharashtra’s Initiative Is a Model for Other States
The components of the Maharashtra Cyber Security Project and the dynamic cyber platform create a comprehensive system aimed at tackling the increasing complexity of cyber threats. With its integration of cutting-edge technologies, specialised institutions, expert professionals from various industries, and real-time monitoring of cybercrime, the initiative shows that Maharashtra is well-equipped to prevent, detect, and respond to cybercrime reported in the state. The project fosters collaboration between government and law enforcement agencies, provides them with proper training, and addresses public grievances. By working on four key areas, i.e. a centralised platform for reporting, collaboration between government and the private sector, public awareness, and the use of advanced technologies, the cybersecurity system in Maharashtra serves as a model for creating a secure digital space and tackling cybercrime effectively on a large scale.
Other states in India could certainly adopt similar models and achieve success in curbing cybercrime. They would need to create dedicated response teams of trained personnel, invest in advanced software such as that used by Maharashtra, and foster partnerships with companies and startups working in AI and technology to build resilient cybersecurity infrastructure. The government of Maharashtra can, in turn, assist other states in establishing a model that addresses evolving cybercrime efficiently.
References
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2003158
- https://mhcyber.gov.in/about-us
- https://www.youtube.com/watch?v=jjPw-8afTTw
- https://www.ltts.com/press-release/maharashtra-inaugurates-india-first-integrated-cyber-command-control-center-ltts
- https://theprint.in/india/maharashtra-tackling-evolving-cyber-crimes-through-dynamic-platform-cm/2486772/
- https://www.freepressjournal.in/mumbai/maharashtra-dynamic-cyber-security-platform-in-the-offing-says-fadnavis

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of counterfeit explicit images and videos, and their use for sextortion has increased alarmingly.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advances in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing more seamless and realistic manipulations. The accessibility of AI tools and resources has also increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time for strong efforts to raise awareness about the risks of AI sextortion, to develop detection and prevention tools, and to strengthen legal frameworks against these emerging threats to individuals’ privacy, safety, and well-being.
Technological solutions are needed: advanced AI-based detection tools, developed and deployed in collaboration with technology companies, that identify and flag AI-generated deepfake content on platforms and services.
Collaboration with social media platforms is also needed. Platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms; a sketch of one such building block follows below.
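As one concrete example of a moderation building block, perceptual hashing can flag re-uploads of already-identified abusive images even after resizing or recompression. The sketch below assumes the `Pillow` and `imagehash` Python packages and an illustrative blocklist; it shows a generic technique, not the method of any particular platform.

```python
# Minimal perceptual-hash matching sketch (pip install Pillow imagehash).
# The file names and the distance threshold of 8 are illustrative assumptions;
# this is a generic moderation building block, not any platform's actual method.
from PIL import Image
import imagehash

# Hashes of images already confirmed as abusive (illustrative blocklist).
blocked_hashes = [imagehash.phash(Image.open("known_abusive.jpg"))]

def is_flagged(path: str, max_distance: int = 8) -> bool:
    """Flag an upload if its perceptual hash is near any blocked hash."""
    upload_hash = imagehash.phash(Image.open(path))
    # Subtracting two hashes gives their Hamming distance: a small distance
    # means a visually similar image, even after resizing or recompression.
    return any(upload_hash - blocked < max_distance for blocked in blocked_hashes)

print(is_flagged("new_upload.jpg"))
```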
Legal frameworks must also be strengthened to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated manipulated content. This violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. Wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to victims’ reputations and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing AI sextortion requires a multi-faceted approach: technological advances in detection and prevention, legal frameworks to hold offenders accountable, raising awareness of the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.