#FactCheck - Edited Video of ‘India-India’ Chants at Republican National Convention
Executive Summary:
A video circulating online alleges that people are chanting "India India" as Ohio Senator J.D. Vance meets them at the Republican National Convention (RNC). This claim is false. The CyberPeace Research team’s investigation found that the video was digitally altered to add the chanting. The unaltered video, shared by “The Wall Street Journal” and confirmed via the YouTube channel of “Forbes Breaking News”, features different music playing while J.D. Vance and his wife, Usha Vance, greeted those present at the gathering. Therefore, the claim that participants chanted "India India" is untrue.

Claims:
A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance, greet them at the Republican National Convention (RNC).


Fact Check:
Upon receiving the posts, we conducted a keyword search related to the context of the viral video. We found a video uploaded by The Wall Street Journal on July 16, titled "Watch: J.D. Vance Is Nominated as Vice Presidential Nominee at the RNC." At timestamp 0:49, no "India-India" chants can be heard, whereas they are clearly audible in the viral video.
We also found the video on the YouTube channel of Forbes Breaking News. At timestamp 3:00:58, the same clip seen in the viral video appears, but no “India-India” chant can be heard.

Hence, the claim made in the viral video is false and misleading.
Conclusion:
The viral video claiming to show "India-India" chants during Ohio Senator J.D. Vance's greeting at the Republican National Convention is altered. The original video, confirmed by sources including “The Wall Street Journal” and “Forbes Breaking News”, features different music and contains no such chants. Therefore, the claim is false and misleading.
Claim: A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance, greet them at the Republican National Convention (RNC).
Claimed on: X
Fact Check: Fake & Misleading

Introduction
On May 21st, 2025, the Department of Telecommunications (DoT) launched the Financial Fraud Risk Indicator (FRI), marking an important step towards safeguarding mobile phone users from the risks of financial fraud. It was developed as part of the Digital Intelligence Platform (DIP), which facilitates coordination between stakeholders to curb the misuse of telecom services for committing cybercrimes.
What is the Financial Fraud Risk Indicator (FRI)?
The FRI is a risk-based metric that categorises phone numbers as low risk, medium risk, or high risk based on their past association with financial fraud. The data pool enabling this intelligence sharing is anchored by the Digital Intelligence Unit (DIU) of the DoT, which also shares the Mobile Number Revocation List (MNRL), a list of disconnected mobile numbers, with stakeholders, creating a network of checks and balances. The inputs feeding this data pool include:
- Intelligence from Non-Banking Financial Companies (NBFCs) and UPI (Unified Payments Interface) gateways.
- The Chakshu facility, a feature on the Sanchar Saathi portal that enables users to report suspected fraudulent communication (calls, SMS, WhatsApp messages).
- Complaints received on the National Cybercrime Reporting Portal (NCRP) through the I4C (Indian Cyber Crime Coordination Centre).
Other initiatives aimed at securing users against digital financial fraud include the Citizen Financial Cyber Fraud Reporting and Management System and the International Incoming Spoofed Calls Prevention System, among others.
A United Stance
The ease of payment and increasing digitisation have enabled the growing usage of UPI platforms. However, with adoption comes the responsibility of securing the digital payments infrastructure. As per a report by CNBC TV18, UPI fraud cases surged by 85% in FY24, rising from 7.25 lakh incidents in FY23 to 13.42 lakh in FY24. These cases involved a total value of ₹1,087 crore, compared to ₹573 crore in the previous year, and the number continues to rise.
Nevertheless, UPI platforms are taking their own initiative to combat such crimes. PhonePe, one of the most widely used digital payment interfaces as of January 2025 (Statista), has already incorporated the FRI into its PhonePe Protect feature; this blocks transactions with high-risk numbers and issues a warning before users engage with numbers categorised as medium risk.
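To make this category-based gating concrete, here is a minimal sketch of how a payment application might act on FRI categories shared through the DIP. The category names, the lookup table, and the decision rules below are assumptions for illustration only; they do not represent PhonePe's or the DoT's actual interfaces or policies.

```python
# Hypothetical sketch of acting on FRI risk categories before a transaction.
# Category names and thresholds are illustrative assumptions, not a real API.

from enum import Enum


class FriRisk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


def decide_transaction(payee_number: str, risk_lookup: dict[str, FriRisk]) -> str:
    """Return an action for a transaction based on the payee's FRI category."""
    risk = risk_lookup.get(payee_number, FriRisk.LOW)  # unknown numbers treated as low risk here
    if risk is FriRisk.HIGH:
        return "block"   # decline transactions to high-risk numbers
    if risk is FriRisk.MEDIUM:
        return "warn"    # show a warning before the user proceeds
    return "allow"       # proceed normally for low-risk numbers


if __name__ == "__main__":
    # Toy lookup table standing in for FRI data shared via the DIP.
    fri_data = {"9999900001": FriRisk.HIGH, "9999900002": FriRisk.MEDIUM}
    for number in ["9999900001", "9999900002", "9999900003"]:
        print(number, "->", decide_transaction(number, fri_data))
```

In practice, the intelligence sharing happens between the DoT, banks, and payment platforms through the DIP; the sketch only shows the general shape of a category-based decision rule.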
CyberPeace Insights
The launch of a feature addressing the growing threat of financial fraud is crucial for creating a network of stakeholders to coordinate with law enforcement to better track and prevent crimes. Publicity of these measures will raise public awareness and keep end-users informed. A secure infrastructure for digital payments is necessary in this age, with a robust base mechanism that can adapt to both current and future threats.
References
- https://www.thehawk.in/news/economy-and-business/centre-launches-financial-fraud-risk-indicator-to-safeguard-mobile-users
- https://telanganatoday.com/government-launches-financial-fraud-risk-indicator-to-safeguard-mobile-users
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2130249#:~:text=What%20is%20the%20%E2%80%9CFinancial%20Fraud,High%20risk%20of%20financial%20fraud
- https://www.business-standard.com/industry/news/dot-launches-financial-fraud-risk-indicator-to-aid-cybercrime-detection-125052101912_1.html
- https://www.cnbctv18.com/business/finance/upi-fraud-cases-rise-85-pc-in-fy24-increase-parliament-reply-data-19514295.htm
- https://www.statista.com/statistics/1034443/india-upi-usage-by-platform/#:~:text=In%20January%202025%2C%20PhonePe%20held%20the%20highest,key%20drivers%20of%20UPI%20adoption%20in%20India
- https://telecom.economictimes.indiatimes.com/amp/news/policy/centre-notifies-draft-rules-for-delicensing-lower-6-ghz-band/121260887?nt
Introduction
AI-generated fake videos are proliferating on the Internet and becoming more common by the day. Sophisticated AI algorithms are used to manipulate or generate multimedia content such as videos, audio, and images. As a result, it has become increasingly difficult to differentiate between genuine, altered, and fake content, as AI-manipulated videos look realistic. A recent study has shown that 98% of deepfake videos contain adult content featuring young girls, women, and children, with India ranking 6th among the nations most affected by the misuse of deepfake technology. This practice has dangerous consequences: it can harm an individual's reputation, and criminals could use the technology to create a false narrative about a candidate or a political party during elections.
Deepfake videos rely on algorithms that iteratively refine the fake content: a generator is built and trained to produce the desired output, and the process is repeated several times, allowing the generator to improve the content until it appears realistic. A minimal sketch of this adversarial refinement loop follows the list below. Deepfake videos are created using several specific approaches, some of which are:
- Lip syncing: This is the most common technique used in deepfakes. A voice recording is synchronised with the footage so that the person appearing in the video seems to say something they never originally said.
- Audio deepfake: For audio deepfakes, a generative adversarial network (GAN) is used to clone a person's voice based on their vocal patterns, refining the output until the desired result is generated.
- Deepfakes have become so serious that the technology could be used by bad actors or cyber-terrorist groups to push their geopolitical agendas. In the past few years, the number of cases has roughly doubled, targeting children, women, and public figures.
- Greater risk: Deepfake cases have risen sharply in recent years; according to a survey, by the end of 2022, 96% of such cases targeted women and children.
- Deepfake pornographic videos are now quicker and more affordable to produce than ever; a 60-second video can be created in less than 25 minutes using just one clear face image.
- The connection to deepfakes is that people can become targets of "revenge porn" without the publisher ever possessing sexually explicit photographs or films of the victim. Such material can be made from any number of random pictures collected from the internet to achieve the same result, which means that almost anyone who has taken a selfie or shared a photograph of themselves online faces the possibility of a deepfake being constructed in their image.
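The adversarial refinement loop mentioned above can be illustrated with a toy example. The sketch below trains a tiny generator and discriminator on synthetic one-dimensional data using PyTorch; the network sizes, learning rates, and data distribution are assumptions chosen purely for illustration and bear no relation to any real deepfake pipeline.

```python
# Toy GAN loop: a generator learns to mimic a simple "real" data distribution
# while a discriminator learns to tell real from fake. All settings are illustrative.

import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# Generator: maps random noise to a fake "sample" (here, a single number).
generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (raw logit output).
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # "Real" data: samples from a normal distribution centred at 3.
    real = torch.randn(64, 1) + 3.0
    fake = generator(torch.randn(64, latent_dim))

    # 1) Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator; repeating this cycle
    #    is what gradually makes the fakes look realistic.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print("mean of generated samples:", generator(torch.randn(1000, latent_dim)).mean().item())
```

The same adversarial pattern, scaled up to images or audio, is what allows deepfake generators to keep improving until their output is hard to distinguish from genuine recordings.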
Deepfake-related security concerns
As deepfakes proliferate, more people are realising that they can be used not only to create non-consensual porn but also as part of disinformation and fake news campaigns with the potential to sway elections and rekindle frozen or low-intensity conflicts.
Deepfakes have three levels of security implications. At the international level, strategic deepfakes have the potential to destroy a precarious peace. At the national level, deepfakes may be used to unduly influence elections and the political process, or to discredit the opposition, which is a national security concern. At the personal level, deepfakes are used to target individuals: women suffer disproportionately from exposure to sexually explicit deepfake content compared to men and are more frequently threatened with it.
Policy Consideration
With deepfake cases against women and children on the rise, policymakers need to be aware that deepfakes are also utilised for a variety of legitimate purposes, including artistic and satirical works. Simply banning deepfakes is therefore not consistent with fundamental liberties. One conceivable legislative option is to require a content warning or disclaimer on synthetic media, while treating the misuse of this advanced technology as a crime.
What are the existing rules to combat deepfakes?
It is worth noting that both the IT Act, 2000 and the IT Rules, 2021 require social media intermediaries to remove deepfake videos or images as soon as feasible. Failure to follow these guidelines can result in up to three years in jail and a fine of Rs 1 lakh. Rule 3(1)(b)(vii) requires social media intermediaries to ensure that their users do not host content that impersonates another person, and Rule 3(2)(b) requires such content to be taken down within 24 hours of receiving a complaint. Furthermore, the government has stipulated that any such post must be removed within 36 hours of being reported. Recently, the government has also issued an advisory directing social media intermediaries to identify misinformation and deepfakes.
Conclusion
It is important to foster the ethical and responsible consumption of technology. This can only be achieved by creating standards for both creators and users, educating individuals about content limits, and providing clear information. Internet-based platforms should also devise techniques to deter the uploading of inappropriate content. By collaborating and ensuring technology is used responsibly, we can reduce the negative and misleading impacts of deepfakes.
References
- https://timesofindia.indiatimes.com/life-style/parenting/moments/how-social-media-scandals-like-deepfake-impact-minors-and-students-mental-health/articleshow/105168380.cms?from=mdr
- https://www.aa.com.tr/en/science-technology/deepfake-technology-putting-children-at-risk-say-experts/2980880
- https://wiisglobal.org/deepfakes-as-a-security-issue-why-gender-matters/

Introduction
In today’s digital era, warfare is being redefined. Defence Minister Rajnath Singh recently stated that “we are in the age of Grey Zone and hybrid warfare where cyber-attacks, disinformation campaigns and economic warfare have become tools to achieve politico-military aims without a single shot being fired.” The crippling cyberattacks on Estonia in 2007, Russia’s interference in the 2016 US elections, and the ransomware strike on the Colonial Pipeline in the United States in 2021 all demonstrate how states are now using cyberspace to achieve strategic goals while carefully circumventing the threshold of open war.
Legal Complexities: Attribution, Response, and Accountability
Grey zone warfare challenges traditional notions of security and international conventions on peace due to inherent challenges such as:
- Attribution
The first challenge in cyber warfare is determining who is responsible. Threat actors hide behind rented botnets, fake IP addresses, and servers scattered across the globe. Investigators can follow digital trails, but those trails often point to machines, not people. That makes attribution more of an educated guess than a certainty. A wrong guess could lead to misattribution of blame, which could beget a diplomatic crisis, or worse, a military one.
- Proportional Response
Even if attribution is clear, designing a response can be a challenge. International law does give room for countermeasures if they are both ‘necessary’ and ‘proportionate’. But defining these qualifiers can be a long-drawn, contested process. Effectively, governments employ softer measures such as protests or sanctions, tighten their cyber defences or, in extreme cases, strike back digitally.
- Accountability
States can be held responsible for waging cyber attacks under the UN’s Draft Articles on State Responsibility. But these are non-binding and enforcement depends on collective pressure, which can be slow and inconsistent. In cyberspace, accountability often ends up being more symbolic than real, leaving plenty of room for repeat offences.
International and Indian Legal Frameworks
Cyber law is a step behind cyber warfare, since existing international frameworks are often inadequate. For example, the Tallinn Manual 2.0, the closest thing we have to a rulebook for cyber conflict, is just a set of guidelines. It says that if a cyber operation can be tied to a state, even through hired hackers or proxies, then that state can be held responsible. But attribution is a major challenge. Similarly, the United Nations has tried to build order through its Group of Governmental Experts (GGE), which promotes voluntary norms of responsible state behaviour in cyberspace. However, these norms are not binding, effectively leaving practice to diplomacy and trust.
India is susceptible to routine attacks from hostile actors but does not yet have a dedicated cyber warfare law. While Section 66F of the IT Act, 2000 addresses cyber terrorism, and Section 75 allows Indian courts to try offences committed abroad if they impact India, grey-zone tactics like fake news campaigns, election meddling, and influence operations fall into a legal vacuum.
Way Forward
- Strengthen International Cooperation
Frameworks like the Tallinn Manual 2.0 can form the basis for future treaties. Bilateral and multilateral agreements between countries are essential to ensure accountability and cooperation in tackling grey zone activities.
- Develop Grey Zone Legislation
India currently relies on the IT Act, 2000, but this law needs expansion to specifically cover grey zone tactics such as election interference, propaganda, and large-scale disinformation campaigns.
- Establish Active Monitoring Systems
India must create robust early detection systems to identify grey zone operations in cyberspace. Agencies can coordinate with social media platforms like Instagram, Facebook, X (Twitter), and YouTube, which are often exploited for propaganda and disinformation, to improve monitoring frameworks.
- Dedicated Theatre Commands for Cyber Operations
Along with the existing Defence Cyber Agency, India should consider specialised theatre commands for grey zone and cyber warfare. This would optimise resources, enhance coordination, and ensure unified command in dealing with hybrid threats.
Conclusion
Grey zone warfare in cyberspace is no longer an occasional tactic used by threat actors but a routine activity. India lacks the early detection systems, robust infrastructure, and strong cyber laws needed to counter grey-zone warfare. To counter this, India needs sharper attribution tools for early detection and must actively push for stronger international rules in this global landscape. More importantly, instead of merely assigning blame without clear plans, India should focus on preparing credible response strategies. By doing so, India can also learn to use cyberspace strategically to achieve politico-military aims without firing a single shot.
References
- Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (Michael N. Schmitt)
- UN Document on International Law in Cyberspace (UN Digital Library)
- NATO Cyber Defence Policy
- Texas Law Review: State Responsibility and Attribution of Cyber Intrusions
- Deccan Herald: Defence Minister on Grey Zone Warfare
- VisionIAS: Grey Zone Warfare
- Sachin Tiwari, The Reality of Cyber Operations in the Grey Zone