#FactCheck - Edited Video of ‘India-India’ Chants at Republican National Convention
Executive Summary:
A video circulating online alleges that attendees chanted "India India" as Ohio Senator J.D. Vance greeted them at the Republican National Convention (RNC). This claim is incorrect. The CyberPeace Research team’s investigation found that the video was digitally altered to add the chanting. The unaltered footage, published by The Wall Street Journal and confirmed via the YouTube channel of Forbes Breaking News, features different background music as J.D. Vance and his wife, Usha Vance, greeted those present at the gathering. The claim that participants chanted "India India" is therefore false.

Claims:
A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance greet them at the Republican National Convention (RNC).


Fact Check:
Upon receiving the posts, we performed a keyword search related to the context of the viral video. We found a video uploaded by The Wall Street Journal on July 16, titled "Watch: J.D. Vance Is Nominated as Vice Presidential Nominee at the RNC." At timestamp 0:49, no "India-India" chants can be heard, whereas in the viral video they are clearly audible.
We also found the footage on the YouTube channel of Forbes Breaking News. At timestamp 3:00:58, the same clip as the viral video appears, but no "India-India" chant can be heard.

Hence, the claim made in the viral video is false and misleading.
Conclusion:
The viral video claiming to show "India-India" chants during Ohio Senator J.D. Vance's greeting at the Republican National Convention is altered. The original video, confirmed by sources including The Wall Street Journal and Forbes Breaking News, features different music and no such chants. The claim is therefore false and misleading.
Claim: A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance greet them at the Republican National Convention (RNC).
Claimed on: X
Fact Check: Fake & Misleading

CAPTCHA, short for Completely Automated Public Turing test to tell Computers and Humans Apart, presents an image or distorted text that users must identify or interpret to prove they are human. In 2007, reCAPTCHA was launched as a free service and, after being acquired by Google, became one of the most widely used technologies for telling computers and humans apart. CAPTCHA protects websites from spam and abuse by using tests designed to be easy for humans but difficult for bots to solve.
But this has now changed. As AI grows more sophisticated, it can solve CAPTCHA tests more accurately than humans, rendering them increasingly ineffective. This raises the question of whether CAPTCHA remains an effective detection tool in the face of AI advancements.
CAPTCHA Evolution: From 2007 Till Now
CAPTCHA has evolved through various versions to keep bots at bay. reCAPTCHA v1 relied on distorted text recognition, v2 introduced image-based tasks and behavioural analysis, and v3 operated invisibly, assigning risk scores based on user interactions. While these advancements improved user experience and security, AI now solves CAPTCHA with 96% accuracy, surpassing humans (50-86%). Bots can mimic human behaviour, undermining CAPTCHA’s effectiveness and raising the question: is it still a reliable tool for distinguishing real people from bots?
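The v3 score-based flow described above can be sketched on the server side: the backend forwards the client-supplied token to Google's siteverify endpoint and gates the request on the returned risk score. This is a minimal illustration, not an official SDK; the helper names and the 0.5 threshold are assumptions an application would tune for itself.

```python
import json
import urllib.parse
import urllib.request

SITEVERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"


def verify_token(secret: str, token: str) -> dict:
    """POST the client-side token to Google's siteverify endpoint.

    Returns the parsed JSON payload, which for reCAPTCHA v3 includes
    fields such as "success", "score" and "action".
    """
    data = urllib.parse.urlencode({"secret": secret, "response": token}).encode()
    with urllib.request.urlopen(SITEVERIFY_URL, data=data) as resp:
        return json.load(resp)


def is_probably_human(payload: dict, threshold: float = 0.5) -> bool:
    """Gate a request on the v3 risk score (1.0 = likely human, 0.0 = likely bot).

    The 0.5 threshold is an illustrative default, not a calibrated value.
    """
    return bool(payload.get("success")) and payload.get("score", 0.0) >= threshold
```

In practice each sensitive endpoint (login, checkout, comment submission) would call `verify_token` with its own token and apply a threshold appropriate to its risk profile.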
Smarter Bots and Their Rise
AI techniques such as machine learning, deep learning and neural networks have advanced rapidly over the past decade, making it easier for bots to bypass CAPTCHA. They allow bots to process and interpret CAPTCHA challenges such as text and images with almost human-like ability. One such development is Optical Character Recognition (OCR): earlier versions of CAPTCHA relied on distorted text, which OCR models can now recognise and decipher, rendering those challenges useless. AI models trained on huge datasets also enable image recognition, identifying the specific objects a challenge asks for. Finally, bots can mimic human habits and interaction patterns through behavioural analysis and thereby fool the CAPTCHA.
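To make the behavioural-analysis point concrete, here is a deliberately simplified sketch (the function name and threshold are illustrative assumptions, not a production detector): naive bots tend to fire events at near-constant intervals, while human timings are irregular, so the regularity of inter-event gaps is one crude signal.

```python
import statistics


def looks_scripted(event_times_ms: list, cv_threshold: float = 0.05) -> bool:
    """Flag an event stream whose inter-event gaps are suspiciously regular.

    Computes the coefficient of variation (stdev / mean) of the gaps between
    consecutive events; near-zero variation suggests a scripted client.
    The threshold is an illustrative choice, not a calibrated value.
    """
    gaps = [b - a for a, b in zip(event_times_ms, event_times_ms[1:])]
    if len(gaps) < 2:
        return False  # too little data to judge
    mean = statistics.mean(gaps)
    if mean <= 0:
        return False
    cv = statistics.stdev(gaps) / mean
    return cv < cv_threshold


# A bot clicking every 100 ms exactly vs. a human's jittery timings.
bot_clicks = [0, 100, 200, 300, 400, 500]
human_clicks = [0, 130, 310, 380, 620, 900]
```

Real behavioural analysis combines many such signals (mouse trajectories, scroll patterns, device fingerprints), which is precisely why sophisticated bots that replay human-like traces can defeat it.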
To defeat CAPTCHA, attackers have also been known to use adversarial machine learning: AI models trained specifically to beat CAPTCHA. They collect datasets of CAPTCHA challenges and answers and train a model to predict the correct responses. The consequences of CAPTCHA failure for platforms range from fraud and spam to cybersecurity breaches and attacks.
CAPTCHA vs Privacy: GDPR and DPDP
GDPR and the DPDP Act emphasise protecting personal data, including online identifiers like IP addresses and cookies. Both frameworks mandate transparency when data is transferred internationally, raising compliance concerns for reCAPTCHA, which processes data on Google’s US servers. Additionally, reCAPTCHA's use of cookies and tracking technologies for risk scoring may conflict with the DPDP Act's broad definition of data. The lack of standardisation in CAPTCHA systems highlights the urgent need for policymakers to reevaluate regulatory approaches.
CyberPeace Analysis: The Future of Human Verification
CAPTCHA, once a cornerstone of online security, is losing ground as AI outperforms humans in solving these challenges with near-perfect accuracy. Innovations like invisible CAPTCHA and behavioural analysis provided temporary relief, but bots have adapted, exploiting vulnerabilities and undermining their effectiveness. This decline demands a shift in focus.
Emerging alternatives like AI-based anomaly detection, biometric authentication, and blockchain verification hold promise but raise ethical concerns around privacy, inclusivity, and surveillance. The battle against bots isn't just about tools; it's about reimagining trust and security in a rapidly evolving digital world.
AI is clearly winning the CAPTCHA war, but the real victory will be designing solutions that balance security, user experience and ethical responsibility. It’s time to embrace smarter, collaborative innovations to secure a human-centric internet.
References
- https://www.business-standard.com/technology/tech-news/bot-detection-no-longer-working-just-wait-until-ai-agents-come-along-124122300456_1.html
- https://www.milesrote.com/blog/ai-defeating-recaptcha-the-evolving-battle-between-bots-and-web-security
- https://www.technologyreview.com/2023/10/24/1081139/captchas-ai-websites-computing/
- https://datadome.co/guides/captcha/recaptcha-gdpr/

Introduction
The use of AI in content production, especially images and videos, is changing the foundations of evidence. AI-generated videos and images can mirror a person’s facial features, voice, or actions with a level of fidelity that the average individual may be unable to distinguish from the real thing. The technology’s creative potential is genuinely beneficial, but its misuse has escalated rapidly in recent years, threatening privacy and dignity and facilitating dis- and misinformation. Its real-world consequences include manipulated elections, national security threats, and the erosion of trust in society.
Why India Needs Deepfake Regulation
Deepfake regulation is urgently needed in India, as evidenced by the recent Rashmika Mandanna incident, in which a deepfake video of the actress caused a scandal across the country. In one of the first such high-profile cases in India, her face was superimposed on another woman's body in a viral video that fooled many viewers and sparked outrage among those deceived by it. The incident even led law enforcement agencies to issue public warnings about the dangers of manipulated media.
This was not an isolated incident; many influencers, actors, leaders and ordinary people have fallen victim to deepfake pornography, deepfake speech scams, financial fraud, and other malicious uses of the technology. The rapid proliferation of deepfakes is outpacing lawmakers' efforts to regulate them. In this context, an individual MP introduced a Private Member's Bill in the Lok Sabha during its Winter Session. Although Private Member's Bills have historically had a low rate of passage into law, they give the government an opportunity to take notice of and respond to emerging issues; they have catalysed government action on many important matters and provided an avenue for parliamentary discussion and future policy-making. The introduction of this Bill signals that Parliament acknowledges digital deepfakes as a significant concern in need of a legislative framework.
Key Features Proposed by the New Deepfake Regulation Bill
The proposed legislation aims to create a strong legal structure around the creation, distribution and use of deepfake content in India. Its core proposals are:
1. Prior Consent Requirement: Individuals must give written approval before deepfake media of them is produced or distributed, covering digital representations of their faces, images, likenesses and voices. This aims to protect women, celebrities, minors and everyday citizens from having their identities used to harm, defame or harass them through deepfakes.
2. Penalties for Malicious Deepfakes: The Bill prescribes serious criminal consequences for creating or sharing deepfake media intended to cause harm (to defame, harass, impersonate, deceive or manipulate another person). It also addresses financially fraudulent use of deepfakes, political misinformation, election interference and explicit AI-generated media.
3. Establishment of a Deepfake Task Force: A body to assess the potential impact of deepfakes on national security, elections, public order, public safety and privacy. It would work with academic institutions, AI research labs and technology companies to build advanced deepfake-detection tools and establish best practices for the safe and responsible use of generative AI.
4. Creation of a Deepfake Detection and Awareness Fund: To assist with the development of tools for detecting deepfakes, increasing the capacity of law enforcement agencies to investigate cybercrime, promoting public awareness of deepfakes through national campaigns, and funding research on artificial intelligence safety and misinformation.
How Other Countries Are Handling Deepfakes
1. United States
Many states in the United States, including California and Texas, have enacted laws prohibiting politically deceptive deepfakes during elections. The federal government is also developing regulations requiring that AI-generated content be clearly labelled, and social media platforms are being encouraged to require users to disclose deepfakes.
2. United Kingdom
In the United Kingdom, it is illegal to create or distribute intimate deepfake images without consent; violators face jail time. The Online Safety Act emphasises the accountability of digital media providers by requiring them to identify, eliminate, and avert harmful synthetic content, which makes their role in curating safe environments all the more important.
3. European Union:
The EU has enacted the EU AI Act, which governs the use of deepfakes by requiring an explicit label to be affixed to any AI-generated content. The absence of a label would subject an offending party to potentially severe regulatory consequences; therefore, any platform wishing to do business in the EU should evaluate the risks associated with deepfakes and adhere strictly to the EU's guidelines for transparency regarding manipulated media.
4. China:
China has among the most rigorous regulations regarding deepfakes anywhere on the planet. All AI-manipulated media will have to be marked with a visible watermark, users will have to authenticate their identities prior to being allowed to use advanced AI tools, and online platforms have a legal requirement to take proactive measures to identify and remove synthetic materials from circulation.
Conclusion
Deepfake technology has the potential to be one of the most powerful, and most dangerous, innovations of AI. Incidents such as the one involving Rashmika Mandanna, and the global proliferation of deepfake abuse, demonstrate how easily truth can be altered in the digital realm. The new Private Member's Bill seeks to provide a comprehensive framework for addressing these abuses, built on prior consent, penalties with real teeth, technical preparedness, and public education and awareness. As other nations move towards stronger regulation of AI, proposals such as this point the way for India to become a leader in responsible digital governance.
References
- https://www.ndtv.com/india-news/lok-sabha-introduces-bill-to-regulate-deepfake-content-with-consent-rules-9761943
- https://m.economictimes.com/news/india/shiv-sena-mp-introduces-private-members-bill-to-regulate-deepfakes/articleshow/125802794.cms
- https://www.bbc.com/news/world-asia-india-67305557
- https://www.akingump.com/en/insights/blogs/ag-data-dive/california-deepfake-laws-first-in-country-to-take-effect
- https://codes.findlaw.com/tx/penal-code/penal-sect-21-165/
- https://www.mishcon.com/news/when-ai-impersonates-taking-action-against-deepfakes-in-the-uk#:~:text=As%20of%2031%20January%202024,of%20intimate%20deepfakes%20without%20consent.
- https://www.politico.eu/article/eu-tech-ai-deepfakes-labeling-rules-images-elections-iti-c2pa/
- https://www.reuters.com/article/technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VT/
Introduction
The ongoing armed conflict between Israel and Hamas is in the news all across the world. The latest conflict was triggered by unprecedented attacks against Israel by Hamas militants on October 7, which killed thousands of people; Israel has launched a massive counter-offensive against the militant group. Amid the war, false information and propaganda have spread across social media platforms: tech researchers have detected a network of 67 accounts that posted false content about the war and received millions of views. The European Commission has sent a letter to Elon Musk directing X to remove illegal content and disinformation or face penalties, and has formally requested information from several social media giants on their handling of content related to the Israel-Hamas war. This widespread disinformation shapes perceptions of the war and erodes public goodwill. Bad actors weaponise information in this way, fuelling online hate, terrorism and extremism and deepening political polarisation with hateful content. Online misinformation about the war is inciting extremism, violence and hate, and the information environment surrounding the conflict is flooded with fake narratives and videos that amplify its impact.
Response of social media platforms
As online misinformation and violent content surrounding the war proliferate, social media companies face questions about content moderation and other policy shifts. Notably, Instagram, Facebook and X (formerly Twitter) all have features that let users decide what content they want to view and limit potentially sensitive content from appearing in search results.
Experts say it is of paramount importance to establish control in this regard and define what is permissible online and what is not. This requires expertise to assess each situation and, most importantly, robust content moderation policies.
During wartime, people who are aggrieved or provoked are often targeted by internet disinformation that blends ideological beliefs with conspiracy theories and hatred. This is not a new phenomenon: disinformation-spreading groups often emerge and become active during wars and emergencies, pushing propaganda and influencing society at large through misrepresented facts and planted stories. Social media has made it easier to post user-generated content without proper moderation. Tech companies, users and governments therefore share a responsibility to define and follow mechanisms for fighting disinformation and misinformation.
Digital Services Act (DSA)
The newly enacted Digital Services Act (DSA) is a landmark EU law moderating online platforms. It pushes large online platforms to prevent posts containing illegal content, puts limits on targeted advertising, enables users to challenge illegal online content, requires platforms to curb misinformation and disinformation, and ensures more transparency over what users see. Rules under the DSA cover everything from content moderation and user privacy to transparency in operations, making large tech platforms subject to content-related regulation and responsible for a safer online environment.
Indian Scenario
The Indian government introduced the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, amended in 2023, which provide for the establishment of a "fact check unit" to identify false or misleading online content. The Digital Personal Data Protection Act, 2023 has also been enacted to protect personal data. The proposed Digital India Bill, expected to be tabled in Parliament, would replace the Information Technology Act, 2000. It is envisioned as future-ready legislation to strengthen India's cybersecurity posture, comprehensively addressing privacy, data protection and the fight against growing cybercrime in the evolving digital landscape. Other entities, including civil society organisations, are also actively engaged in fighting misinformation and promoting safe, responsible use of the Internet.
Conclusion:
The widespread disinformation and misinformation amid the Israel-Hamas war shows how user-generated content on social media can create an illusion of reality. Misleading posts are widespread, and the misuse of advanced AI technologies makes it even easier for bad actors to create synthetic media. At the same time, social media has connected us like never before; with billions of active users around the globe, it offers real conveniences and opportunities to individuals and businesses. What is needed is vigilance against its misuse. Platforms and regulatory authorities must clearly define and continually improve content-regulation policies to curtail bad actors, and users must exercise their own responsibility to promote responsible use of social media. With the increasing penetration of social media and the internet, misinformation is rampant worldwide and remains a global issue that must be addressed through strict policies and best practices. Users are encouraged to flag and report misinformative or misleading content and to verify claims against authentic sources, helping create a safer Internet environment for everyone.
References:
- https://abcnews.go.com/US/experts-fear-hate-extremism-social-media-israel-hamas-war/story?id=104221215
- https://edition.cnn.com/2023/10/14/tech/social-media-misinformation-israel-hamas/index.html
- https://www.nytimes.com/2023/10/13/business/israel-hamas-misinformation-social-media-x.html
- https://www.africanews.com/2023/10/24/fact-check-misinformation-about-the-israel-hamas-war-is-flooding-social-media-here-are-the//
- https://www.theverge.com/23845672/eu-digital-services-act-explained