#FactCheck - Digitally Manipulated Video Misrepresents Surinder Choudhary’s Remarks on PM Modi
Executive Summary
A video circulating on social media claims that Jammu and Kashmir Deputy Chief Minister Surinder Choudhary described Prime Minister Narendra Modi as an agent of Pakistan’s Inter-Services Intelligence (ISI). In the viral clip, Choudhary is allegedly heard accusing the Prime Minister of pushing Kashmir towards Pakistan and claiming that even pro-India Kashmiris are disillusioned with Modi’s policies.
However, research by the CyberPeace research wing has found that the video is digitally manipulated. While the visuals are genuine and taken from a real media interaction, the audio has been fabricated and falsely overlaid to misattribute inflammatory remarks to the Deputy Chief Minister.
Claim
An Instagram account named Conflict Watch shared the video on January 20, claiming that J&K Deputy Chief Minister Surinder Choudhary had called Prime Minister Modi an ISI agent. The video purportedly quoted Choudhary as saying that Modi was elected with Pakistan’s support and that Kashmir would soon become part of Pakistan due to his policies.
Here is the link and archive link to the post, along with a screenshot.

Fact Check:
To verify the claim, the Desk conducted a Google Lens search, which led to a video uploaded on January 20, 2026, on the official YouTube channel of Jammu and Kashmir–based news outlet JKUpdate. The footage was an extended version of the viral clip and featured identical visuals. The original video showed Surinder Choudhary addressing the media on the sidelines of the inaugural two-day JKNC Convention of Block Presidents and Secretaries in the Jammu province. A review of the full media interaction revealed that Choudhary did not make any statements calling Prime Minister Modi an ISI agent or suggesting that Kashmir should join Pakistan.
Instead, in the original footage, Choudhary was seen criticising former Jammu and Kashmir Chief Minister and PDP leader Mehbooba Mufti for supporting the BJP during the bifurcation of Jammu and Kashmir and Ladakh into two Union Territories. He also spoke about the challenges faced by the region after the abrogation of Article 370 and demanded the restoration of full statehood for Jammu and Kashmir. During the interaction, Choudhary said that anyone attempting to divide Jammu and Kashmir at the state or regional level was effectively following Pakistan’s agenda and Jinnah’s two-nation theory. He added that such individuals could not be considered patriots.
Here is the link to the video, along with a screenshot.

In the next phase of the research, the Desk extracted the audio from the viral clip and analysed it using the AI-based audio detection tool Aurigin. The analysis indicated that the voice in the viral video was partially AI-generated, further confirming that the clip had been tampered with.
Below is a screenshot of the result.

Conclusion
Multiple social media users shared a video claiming it showed Jammu and Kashmir Deputy Chief Minister Surinder Choudhary calling Prime Minister Narendra Modi an agent of the ISI. However, CyberPeace Research found that the viral video was digitally manipulated. While the visuals were taken from a genuine media interaction with the leader, a fabricated audio track was overlaid to falsely attribute the statements to him.
Related Blogs

The advent of artificial intelligence (AI) has woven a new, delicate thread into the rich history of humanity. This promising technological advancement has the potential to either enrich our society or destroy it entirely. The latest development on this frontier is generative AI, a realm teeming with both potential and peril, where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that revealed the extent to which personal data could be harvested and used to influence electoral outcomes. However, despite the public indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in the re-election of Viktor Orbán in Hungary. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The stakes are considerable: the market size of AI in India alone is projected to reach US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
In the context of the 2024 general elections, it is crucial to establish rules and safeguards to manage the potential threats posed by manipulative chatbots.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to disclose to users what they are for and that they are automated. Such transparency ensures that people know they are interacting with an automated process.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
There are also important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collection. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political purposes.
Fake identities should be avoided at all costs. Chatbots should not impersonate people or political figures, as doing so can result in manipulation and misinformation.
Impartiality is also essential. Chatbots should not advocate for or take part in political activities that favour one political party over another; impartiality and equity are crucial in every interaction.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
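The consent and transparency safeguards above can be sketched in code. The following is a minimal, hypothetical illustration (the class and field names are invented for this example, not drawn from any real chatbot platform) of gating data collection behind explicit opt-in:

```python
from dataclasses import dataclass, field

@dataclass
class UserConsent:
    """What the user has explicitly opted into (hypothetical model)."""
    advertising: bool = False
    political_profiling: bool = False

@dataclass
class ChatSession:
    consent: UserConsent
    log: list = field(default_factory=list)

    def greeting(self) -> str:
        # Transparency: disclose automation and purpose up front.
        return ("Hi! I am an automated assistant. Conversations may be "
                "used for advertising ONLY if you opt in.")

    def record(self, message: str) -> bool:
        # Data minimisation: store the message only with explicit consent.
        if not self.consent.advertising:
            return False
        self.log.append(message)
        return True

session = ChatSession(consent=UserConsent())          # no opt-in yet
stored = session.record("I love hiking gear")         # rejected
session.consent.advertising = True                    # explicit opt-in
stored_after = session.record("I love hiking gear")   # now stored
```

The design choice worth noting is that consent defaults to off: the chatbot must ask, and the user must act, before any data is retained.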
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India does not have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategies and is working towards a policy framework for AI. The NITI Aayog has presented seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india

Disclaimer:
This report is based on extensive research conducted by CyberPeace Research using publicly available information and advanced analytical techniques. The findings, interpretations, and conclusions presented are based on the data available at the time of the study and aim to provide insights into global ransomware trends.
The statistics mentioned in this report are specific to the scope of this research and may vary based on the scope and resources of other third-party studies. Additionally, all data referenced is based on claims made by threat actors and does not imply confirmation of the breach by CyberPeace. CyberPeace includes this detail solely to provide factual transparency and does not condone any unlawful activities. This information is shared only for research purposes and to spread awareness. CyberPeace encourages individuals and organizations to adopt proactive cybersecurity measures to protect against potential threats.
CyberPeace Research does not claim to have identified or attributed specific cyber incidents to any individual, organization, or nation-state beyond the scope of publicly observable activities and available information. All analyses and references are intended for informational and awareness purposes only, without any intention to defame, accuse, or harm any entity.
While every effort has been made to ensure accuracy, CyberPeace Research is not liable for any errors, omissions, subsequent interpretations and any unlawful activities of the findings by third parties. The report is intended to inform and support cybersecurity efforts globally and should be used as a guide to foster proactive measures against cyber threats.
Executive Summary:
The 2024 ransomware landscape reveals alarming global trends, with 166 Threat Actor Groups leveraging 658 servers/underground resources and mirrors to execute 5,233 claims across 153 countries. Monthly fluctuations in activity indicate strategic, cyclical targeting, with peak periods aligned with vulnerabilities in specific sectors and regions. The United States was the most targeted nation, followed by Canada, the UK, Germany, and other developed countries, with the northwestern hemisphere experiencing the highest concentration of attacks. Business Services and Healthcare bore the brunt of these operations due to their high-value data, alongside targeted industries such as Pharmaceuticals, Mechanical, Metal, Electronics, and Government-related professional firms. Retail, Financial, Technology, and Energy sectors were also significantly impacted.
This research was conducted by CyberPeace Research using a systematic modus operandi, which included advanced OSINT (Open-Source Intelligence) techniques, continuous monitoring of Ransomware Group activities, and data collection from 658 servers and mirrors globally. The team utilized data scraping, pattern analysis, and incident mapping to track trends and identify hotspots of ransomware activity. By integrating real-time data and geographic claims, the research provided a comprehensive view of sectoral and regional impacts, forming the basis for actionable insights.
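The aggregation step of this methodology, counting scraped claims by country and sector to surface hotspots, can be sketched as follows. The sample records below are illustrative placeholders, not real claim data; the actual dataset covered 5,233 claims across 153 countries:

```python
from collections import Counter

# Hypothetical sample of scraped ransomware claims: (group, country, sector)
claims = [
    ("ransomhub", "United States", "Business Services"),
    ("lockbit3", "United States", "Healthcare"),
    ("play", "Canada", "Manufacturing"),
    ("ransomhub", "Germany", "Healthcare"),
    ("lockbit3", "United Kingdom", "Business Services"),
]

by_country = Counter(country for _, country, _ in claims)
by_sector = Counter(sector for _, _, sector in claims)

# Most-targeted country in this toy sample
top_country, top_country_hits = by_country.most_common(1)[0]
```

Run over the full claim set, the same counters drive the country and sector rankings reported below.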
The findings emphasize the urgent need for proactive Cybersecurity strategies, robust defenses, and global collaboration to counteract the evolving and persistent threats posed by ransomware.
Overview:
This report provides insights into ransomware activities monitored throughout 2024. Data was collected by observing 166 Threat Actor Groups using ransomware technologies across 658 servers/underground resources and mirrors, resulting in 5,233 claims worldwide. The analysis offers a detailed examination of global trends, targeted sectors, and geographical impact.
Top 10 Threat Actor Groups:
The ransomware group ‘ransomhub’ emerged as the leading threat actor, responsible for 527 incidents worldwide. Following closely are ‘lockbit3’ with 522 incidents and ‘play’ with 351. Other prominent groups include ‘akira’, ‘hunters’, ‘medusa’, ‘blackbasta’, ‘qilin’, ‘bianlian’, and ‘incransom’. These groups typically employ advanced tactics against critical sectors, highlighting the urgent need for robust cybersecurity measures to mitigate their impact and protect organizations from such threats.

Monthly Ransomware Incidents:
In January 2024, the count began at 284, the lowest point on the chart. The trend rose steadily in the subsequent months, reaching a first peak of 557 in May 2024. After this peak, the count dropped sharply to 339 in June. A gradual recovery followed, with the count increasing to 446 by August. September saw another decline to 389, followed by a sharp rise that culminated in the year’s highest point of 645 in November. The year concluded with a slight decline, ending at 498 in December 2024 (up to 28 December).
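The monthly figures quoted above can be sanity-checked with a few lines of Python. Only the months explicitly stated in the text are included; the remaining months are omitted rather than guessed:

```python
# Monthly global ransomware incident counts stated in the report
reported = {
    "Jan": 284, "May": 557, "Jun": 339, "Aug": 446,
    "Sep": 389, "Nov": 645, "Dec": 498,
}

peak_month = max(reported, key=reported.get)          # November peak
low_month = min(reported, key=reported.get)           # January low
may_to_june_drop = reported["May"] - reported["Jun"]  # size of the mid-year dip
```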

Top 10 Targeted Countries:
- The United States consistently topped the list as the primary target, probably owing to its advanced economic and technological infrastructure.
- Other heavily targeted nations include Canada, the UK, Germany, Italy, France, Brazil, Spain, and India.
- A total of 153 countries reported ransomware attacks, reflecting the global scale of these cyber threats.

Top Affected Sectors:
- Business Services and Healthcare bore the brunt of ransomware attacks due to the sensitive nature of their operations.
- Specific industries under threat:
- Pharmaceutical, Mechanical, Metal, and Electronics industries.
- Professional firms within the Government sector.
- Other sectors:
- Retail, Financial, Technology, and Energy sectors were also significant targets.

Geographical Impact:
Continuous and precise OSINT (Open-Source Intelligence) work on the platform, performed as a follow-up to data scraping, allows a complete view of the geography of cyber attacks based on threat actors’ claims. The northwestern region of the world appears to be the most severely affected by Threat Actor groups. The figure below illustrates this geographic distribution on the map.

Ransomware Threat Trends in India:
In 2024, the research identified 98 ransomware incidents impacting various sectors in India, marking a 55% increase compared to the 63 incidents reported in 2023. This surge highlights a concerning trend, as ransomware groups continue to target India's critical sectors due to its growing digital infrastructure and economic prominence.
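The year-on-year growth stated above is straightforward to verify:

```python
incidents_2023 = 63
incidents_2024 = 98

# Percentage increase from 2023 to 2024:
# (98 - 63) / 63 * 100 = 35 / 63 * 100 ≈ 55.6%,
# consistent with the ~55% increase stated in the report.
growth_pct = (incidents_2024 - incidents_2023) / incidents_2023 * 100
```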

Top Threat Actor Groups Targeting India:
Among these threat actors, ‘killsec’ is the most frequent. ‘lockbit3’ follows as the second most prominent threat, with significant but lower activity than killsec. Other groups, such as ‘ransomhub’, ‘darkvault’, and ‘clop’, show moderate activity levels. Entities like ‘bianlian’, ‘apt73/bashe’, and ‘raworld’ have low frequencies, indicating limited activity, while groups such as ‘aps’ and ‘akira’ have the lowest representation, indicating minimal activity. The chart highlights a clear disparity in activity levels among these threats, emphasising the need for targeted cybersecurity strategies.

Top Impacted Sectors in India:
The pie chart illustrates the distribution of incidents across various sectors, highlighting that the industrial sector is the most frequently targeted, accounting for 75% of the total incidents. This is followed by the healthcare sector, which represents 12% of the incidents, making it the second most affected. The finance sector accounts for 10% of the incidents, reflecting a moderate level of targeting. In contrast, the government sector experiences the least impact, with only 3% of the incidents, indicating minimal targeting compared to the other sectors. This distribution underscores the critical need for enhanced cybersecurity measures, particularly in the industrial sector, while also addressing vulnerabilities in healthcare, finance, and government domains.

Month Wise Incident Trends in India:
The chart indicates a fluctuating trend with notable peaks in May and October, suggesting potential periods of heightened activity or incidents during these months. The data starts at 5 in January and drops to its lowest point, 2, in February. It then gradually increases to 6 in March and April, followed by a sharp rise to 14 in May. After peaking in May, the metric significantly declines to 4 in June but starts to rise again, reaching 7 in July and 8 in August. September sees a slight dip to 5 before the metric spikes dramatically to its highest value, 24, in October. Following this peak, the count decreases to 10 in November and then drops further to 7 in December.
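The monthly breakdown above can be cross-checked against the annual total: the twelve counts sum to exactly the 98 incidents reported for India in 2024.

```python
# Monthly ransomware incident counts for India in 2024, as stated above
india_monthly = {
    "Jan": 5, "Feb": 2, "Mar": 6, "Apr": 6, "May": 14, "Jun": 4,
    "Jul": 7, "Aug": 8, "Sep": 5, "Oct": 24, "Nov": 10, "Dec": 7,
}

annual_total = sum(india_monthly.values())               # matches the yearly figure of 98
peak_month = max(india_monthly, key=india_monthly.get)   # October spike
low_month = min(india_monthly, key=india_monthly.get)    # February trough
```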

CyberPeace Advisory:
- Implement Data Backup and Recovery Plans: Backups are your safety net. Regularly saving copies of your important data ensures you can bounce back quickly if ransomware strikes. Make sure these backups are stored securely—either offline or in a trusted cloud service—to avoid losing valuable information or facing extended downtime.
- Enhance Employee Awareness and Training: People often unintentionally open the door to ransomware. By training your team to spot phishing emails, social engineering tricks, and other scams, you empower them to be your first line of defense against attacks.
- Adopt Multi-Factor Authentication (MFA): Think of MFA as locking your door and adding a deadbolt. Even if attackers get hold of your password, they’ll still need that second layer of verification to break in. It’s an easy and powerful way to block unauthorized access.
- Utilize Advanced Threat Detection Tools: Smart tools can make a world of difference. AI-powered systems and behavior-based monitoring can catch ransomware activity early, giving you a chance to stop it in its tracks before it causes real damage.
- Conduct Regular Vulnerability Assessments: You can’t fix what you don’t know is broken. Regularly checking for vulnerabilities in your systems helps you identify weak spots. By addressing these issues proactively, you can stay one step ahead of attackers.
Conclusion:
The 2024 ransomware landscape reveals the critical need for proactive cybersecurity strategies. High-value sectors and technologically advanced regions remain the primary targets, emphasizing the importance of robust defenses. As we move into 2025, it is crucial to anticipate the evolution of ransomware tactics and adopt forward-looking measures to address emerging threats.
Global collaboration, continuous innovation in cybersecurity technologies, and adaptive strategies will be imperative to counteract the persistent and evolving threats posed by ransomware activities. Organizations and governments must prioritize preparedness and resilience, ensuring that lessons learned in 2024 are applied to strengthen defenses and minimize vulnerabilities in the year ahead.

Introduction
In the era of digitalisation, social media has become an essential part of our lives, with people spending significant time documenting every moment on these platforms. Social media networks such as WhatsApp, Facebook, and YouTube have emerged as significant sources of information. However, the proliferation of misinformation is alarming, since misinformation can have grave consequences for individuals, organisations, and society as a whole. Misinformation can spread rapidly via social media, reaching and affecting large audiences. Bad actors can exploit this for their own agendas, using tactics such as clickbait headlines, emotionally charged language, and manipulated algorithms to amplify false information.
Impact
The impact of misinformation on our lives is devastating, affecting individuals, communities, and society as a whole. False or misleading health information can have serious consequences: belief in unproven remedies or misinformation about vaccines can lead to serious illness, disability, or even death. Misinformation about financial schemes or investments can drive poor financial decisions, potentially leading to bankruptcy and the loss of long-term savings.
In a democratic nation, misinformation plays a vital role in forming a political opinion, and the misinformation spread on social media during elections can affect voter behaviour, damage trust, and may cause political instability.
Mitigating strategies
Minimising or stopping the spread of misinformation requires a multi-faceted approach. These strategies include promoting media literacy with critical thinking, verifying information before sharing, holding social media platforms accountable, regulating misinformation, supporting critical research, and fostering healthy means of communication to build a resilient society.
The widespread proliferation of false information on social media platforms presents serious problems for people, groups, and society as a whole. As we go deeper into understanding the nuances of this problem, it becomes clear that battling false information necessitates a thorough and multifaceted strategy.
Encouraging consumers to develop media literacy and critical-thinking skills is essential to preventing the spread of false information. Education is key to equipping people to distinguish reliable sources from false information, and giving individuals the skills to assess information critically enables them to make informed choices about the content they share and consume. Initiatives that aim to improve media literacy should be included in school curricula and promoted through public awareness campaigns.
Ways to Stop Misinformation
As we have seen, misinformation can have serious implications. Minimising or stopping its spread requires a multifaceted approach; here are some strategies to combat misinformation.
- Promote Media Literacy with Critical Thinking: Educate individuals on how to critically evaluate information, fact-check, and recognise common tactics used to spread misinformation. Users must apply critical thinking before forming any opinion or perspective and before sharing content.
- Verify Information: Encourage people to verify information before sharing it, especially if it seems sensational or controversial, and encourage the consumption of news from reputable sources that follow ethical journalistic standards.
- Accountability: Advocate for social media networks' openness and responsibility in the fight against misinformation. Encourage platforms to put in place procedures to detect and delete fraudulent content while boosting credible sources.
- Regulate Misinformation: Given the current situation, it is important to advocate for policies and regulations that address the spread of misinformation while safeguarding freedom of expression. Transparency in online communication should be promoted by identifying the source of information and disclosing any conflicts of interest.
- Support Critical Research: Invest in research on the sources, impacts, and remedies of misinformation. Support collaborative initiatives by social scientists, psychologists, journalists, and technologists to create evidence-based techniques for countering misinformation.
Conclusion
To prevent the cycle of misinformation and move towards responsible use of the Internet, we must create strategies to combat the spread of false information. This will require coordinated actions from individuals, communities, tech companies, and institutions to promote a culture of information accuracy and responsible behaviour.