#FactCheck - Edited Video of ‘India-India’ Chants at Republican National Convention
Executive Summary:
A video circulating online claims that people are chanting "India India" as Ohio Senator J.D. Vance greets them at the Republican National Convention (RNC). This claim is false. The CyberPeace Research Team's investigation found that the video was digitally altered to add the chanting. The original footage, shared by The Wall Street Journal and corroborated on the "Forbes Breaking News" YouTube channel, features different background music playing while Senator Vance and his wife, Usha Vance, greeted those present at the gathering. The claim that attendees chanted "India India" is therefore false.

Claims:
A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance, greet them at the Republican National Convention (RNC).


Fact Check:
Upon receiving the posts, we performed a keyword search related to the context of the viral video. We found a video uploaded by The Wall Street Journal on July 16, titled "Watch: J.D. Vance Is Nominated as Vice Presidential Nominee at the RNC." At the 0:49 timestamp, no "India-India" chant can be heard, whereas it is clearly audible in the viral video.
We also found the video on the YouTube channel of Forbes Breaking News. At the 3:00:58 timestamp, the same clip as in the viral video is visible, but no "India-India" chant can be heard.
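For readers interested in how such an audio comparison can be supported programmatically, below is a minimal illustrative sketch that compares the soundtracks of two locally saved clips using averaged MFCC features. The file names are placeholders, and a check like this supplements rather than replaces manual review of the footage.

```python
# Illustrative sketch: compare the soundtrack of a viral clip against
# the original footage. File names are placeholders for saved clips.
import librosa
import numpy as np

def audio_profile(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # averaged timbre profile of the clip

viral = audio_profile("viral_clip.wav")
original = audio_profile("original_clip.wav")

# Cosine similarity near 1.0 suggests matching audio; a markedly lower
# score suggests the soundtrack was replaced or overlaid.
similarity = np.dot(viral, original) / (np.linalg.norm(viral) * np.linalg.norm(original))
print(f"Audio similarity: {similarity:.3f}")
```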

Hence, the claim made in the viral video is false and misleading.
Conclusion:
The viral video claiming to show "India-India" chants during Ohio Senator J.D. Vance's greeting at the Republican National Convention is altered. The original video, confirmed by sources including The Wall Street Journal and Forbes Breaking News, features different music and no such chants. Therefore, the claim is false and misleading.
Claim: A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance, greet them at the Republican National Convention (RNC).
Claimed on: X
Fact Check: Fake & Misleading

The World Economic Forum reported that AI-generated misinformation and disinformation rank as the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents surveyed in September 2023. Artificial intelligence is automating the creation of fake news at a rate that far outpaces fact-checking. It is spurring an explosion of web content that mimics factual articles but instead disseminates false information about grave themes such as elections, wars, and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI's ChatGPT 3.5 and 4.0, and Microsoft's AI interface Copilot were inaccurate roughly one-third of the time when queried about election data. An innovative regulatory approach, such as regulatory sandboxes, is therefore needed to address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge lies in detecting and managing AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
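To make the detection problem concrete, below is a minimal illustrative sketch of the kind of supervised text classifier that detection tools build on. The tiny inline dataset and the feature choice (TF-IDF with logistic regression) are our own illustrative assumptions, not a production detector; real systems train on large labelled corpora and still make frequent mistakes.

```python
# Toy sketch of an "AI-generated vs. human-written" text classifier.
# The inline dataset and labels are fabricated for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Breaking: officials confirm the schedule for the vote recount.",
    "In conclusion, it is important to note that elections matter.",
    "Reporters on the ground describe long queues at polling sites.",
    "As an overview, this article delves into the electoral landscape.",
]
labels = [0, 1, 0, 1]  # 0 = human-written, 1 = AI-generated (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new snippet; the output is a probability, not a verdict.
snippet = "It is worth noting that this article delves into the results."
print(model.predict_proba([snippet])[0][1])  # P(text is AI-generated)
```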
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, AI has yet to achieve true acceptance or fulfil its potential in a positive manner, because there is widespread cynicism about the technology, and rightly so. Public sentiment about AI is laced with concern and doubt regarding the technology's trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to trial new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries, most recently in sectors like fintech, as with the UK Financial Conduct Authority's sandbox. These models have been shown to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that the benefits of regulatory sandboxes include facilitating firm financing and market entry and increasing speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in the fintech industry, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies with the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders such as AI developers, social media platforms, and regulators collaborate within the sandbox to refine detection algorithms and evaluate their effectiveness as content moderation tools.
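In practice, a sandbox trial comes down to measuring how a candidate detector behaves on labelled test content before it is deployed widely. Below is a minimal sketch of such an evaluation, assuming the regulator supplies ground-truth labels and the developer supplies the model's predictions; both arrays here are illustrative toy data.

```python
# Illustrative sandbox evaluation: compare a detector's flags against
# regulator-supplied ground truth before wide-scale deployment.
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# 1 = misinformation, 0 = legitimate content (toy ground truth and predictions)
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]

precision = precision_score(y_true, y_pred)  # of flagged items, share truly misinformation
recall = recall_score(y_true, y_pred)        # of misinformation items, share actually flagged
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

# False positives matter for free expression; false negatives for public harm.
print(f"precision={precision:.2f} recall={recall:.2f} false_positives={fp} false_negatives={fn}")
```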
These sandboxes can help balance the need for innovation in AI against the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible, adaptive framework capable of evolving with technological advancements while fostering transparency between AI developers and regulators, leading to more informed policymaking and greater public trust in AI applications.
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to trial solutions for regulating the misinformation that AI technology creates. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation, and explaining the role of regulatory sandboxes can help manage public expectations.
- Sandbox frameworks should be reviewed and updated regularly to keep pace with advances in AI technology and with emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions

Introduction
There is a famous saying that "half knowledge is always dangerous", but too much knowledge of anything can also lead to destruction. Recently, the infamous spyware strains WyrmSpy and DragonEgg were linked to the Chinese hacking group APT41. APT41 is a state-sponsored clandestine group based in the People's Republic of China that has been active since 2012. In contrast to many government-backed groups, APT41 has a track record of jeopardising government organisations for espionage as well as private organisations and enterprises for financial gain. The group targets Android devices through WyrmSpy and DragonEgg, which masquerade as legitimate applications. According to U.S. grand jury indictments from 2019 and 2020, the group was implicated in targeting more than 100 public and private individuals and organisations in the United States and around the world. A detailed analysis report was also shared by Lookout threat researchers, who have been actively monitoring and tracking both spyware strains.
How spyware attacks on Android devices take place
To begin with, the malware imitates a legitimate Android application, displaying notifications the way a real app would. Once it is successfully installed on the user's device, it requests multiple device permissions to enable data exfiltration.
WyrmSpy collects log files, photos, device locations, SMS messages (read and write), and audio recordings. It has also been reported that no malicious activity from these apps was detected on Google Play even after multiple levels of security screening. The malware is designed to obtain rooting privileges on the device and to monitor activity according to commands received from its command-and-control (C2) servers.
Similarly, DragonEgg can collect data files, contacts, locations, and audio recordings, and it also accesses camera photos once it has successfully compromised the device. DragonEgg receives a secondary payload known as "smallmload.jar", delivered as an APK (Android Package Kit).
WyrmSpy initially masquerades as a default operating system application, while DragonEgg simulates a third-party keyboard or messaging application.
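Because both strains depend on broad permission grants, one practical triage step is to enumerate which permissions the third-party apps on a device actually hold. Below is a minimal sketch using Python and the Android Debug Bridge (adb); it assumes adb is installed and a device is connected with USB debugging enabled, and the watchlist of permissions is our own illustrative selection, not an official indicator list.

```python
# Triage sketch: list third-party apps and flag those granted
# permissions commonly abused by spyware. Assumes `adb` is on PATH
# and a device is connected with USB debugging enabled.
import subprocess

WATCHLIST = [  # illustrative selection, not an official indicator list
    "android.permission.READ_SMS",
    "android.permission.RECORD_AUDIO",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_EXTERNAL_STORAGE",
]

def adb(*args: str) -> str:
    return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

# "pm list packages -3" lists only user-installed (third-party) packages.
packages = [line.removeprefix("package:").strip()
            for line in adb("shell", "pm", "list", "packages", "-3").splitlines()
            if line.startswith("package:")]

for pkg in packages:
    dump = adb("shell", "dumpsys", "package", pkg)
    flagged = [p for p in WATCHLIST if f"{p}: granted=true" in dump]
    if flagged:
        print(pkg, "->", ", ".join(flagged))
```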
Overview of the APT41 group's background
APT41 is a China-based group carrying out stealth activity that is said to have been active since mid-2006. It is rumoured that APT41 was also part of the 2nd Bureau of the People's Liberation Army (PLA) General Staff Department's (GSD) 3rd Department. Since 2006, the group has compromised the security of more than 140 organisations across some 20 strategically important industries, and it is recognised for plundering hundreds of terabytes of data from at least 141 organisations between 2006 and 2013. An attack typically begins with spear-phishing emails sent to targeted victims. These emails use official-looking templates and language pretending to be from a legitimate source, and carry a malicious attachment. When the victim opens the attached file, a backdoor hands control of the targeted machine to the APT group. Once unauthorised access is gained, the attackers visit and revisit the victim's machine, and the group can remain dormant for lengthy periods, sometimes months or even years.
Advisory points to adhere to while using Android devices
- Apply security patch updates at least once a week.
- Clear out unwanted junk files.
- Clear the cache of frequently used applications regularly.
- Install only the applications you need, and only from the Google Play Store.
- Download APK files only when necessary, and only from trusted sources.
- Before granting device permissions, it is advisable to scan your files or URLs on VirusTotal.com; the site gives a good indication of malicious intent (see the sketch after this list).
- Install reputable antivirus software.
- Check the source of an email before opening any attachment in it.
- Never connect a randomly found device to your system.
- Moreover, users should keep track of their device activity. Rather than using devices purely for entertainment, it is important to look after the protection of the data on them.
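As a complement to the manual VirusTotal check mentioned above, here is a minimal sketch that looks up a local file's SHA-256 hash against VirusTotal's public v3 API. It assumes you have a (free) API key; the key and file path below are placeholders.

```python
# Minimal sketch: check a local file's hash against VirusTotal (API v3).
# The API key and file path are placeholders, not real values.
import hashlib
import requests

API_KEY = "YOUR_VIRUSTOTAL_API_KEY"  # placeholder: use your own key
FILE_PATH = "suspicious.apk"         # illustrative file name

with open(FILE_PATH, "rb") as f:
    sha256 = hashlib.sha256(f.read()).hexdigest()

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{sha256}",
    headers={"x-apikey": API_KEY},
)
if resp.status_code == 404:
    print("Hash unknown to VirusTotal; consider uploading the file for analysis.")
else:
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"malicious={stats['malicious']} suspicious={stats['suspicious']}")
```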
Conclusion
The Network Crack Program Hacker Group (NCPH), which grew into the APT41 group, earlier played the role of a grey-hat hacker collective, but it became greedy and turned to making money by hacking networks, devices, and more. The group conducts supply-chain attacks to gain unauthorised access to networks throughout the world, targeting hundreds of companies across an extensive selection of industries, including social media, telecommunications, government, defence, education, and manufacturing. Last but not least, more fraudulent groups with malicious intent will form and operate in the future. It is up to individuals and organisations to secure themselves by practising basic security hygiene to safeguard against such threats and attacks.

Introduction
Twitter is a popular social media platform with millions of users all around the world. Twitter's blue tick system, which verifies the identity of high-profile accounts, has come under intense scrutiny in recent years. The platform has faced backlash from users and brands who have accused it of bias, inaccuracy, and inconsistency in its verification process. This blog post explores the questions raised about the verification process and its impact on users and big brands.
What is Twitter's blue tick system?
The blue tick system was introduced in 2009 to help users identify the authenticity of accounts belonging to well-known public figures, politicians, celebrities, sportspeople, and big brands. It verifies the identity of high-profile accounts and displays a blue badge next to the username.
According to a survey, there are roughly 294,000 verified Twitter accounts, meaning their holders carry the blue tick badge and have also paid the subscription fee of nearly $7.99 per month. Think of the subscribers who paid that amount and still lost their blue badge: won't they feel cheated?
The Controversy
Despite its initial aim, the blue tick system has received much criticism from consumers and brands. Twitter's irregular and non-transparent verification procedure has sparked accusations of prejudice and inaccuracy. Many Twitter users have complained that the network's verification process is arbitrary and favours accounts with huge followings or celebrity status, while others have criticised the platform for certifying accounts that promote harmful or controversial content.
Furthermore, the verification mechanism has generated user confusion, as many do not understand the significance of the blue tick badge. Some users have concluded that the blue tick symbol represents a Twitter endorsement or that the account is trustworthy. This confusion has resulted in users following and engaging with verified accounts that promote misleading or inaccurate information, undermining the platform's credibility.
How did the Blue Tick Row start in India?
The row started on 21 May 2021, when the government asked Twitter to remove the blue badge from several profiles of high-profile Indian politicians, including Indian National Congress Party Vice-President Mr Rahul Gandhi.
The blue badge gives users an authenticated identity. Many celebrities, including Amitabh Bachchan (popularly known as Big B), Vir Das, Prakash Raj, Virat Kohli, and Rohit Sharma, have lost their blue ticks despite being verified handles.
What is Twitter's policy on the blue tick?
According to Twitter's policy, blue verification badges may be removed from accounts if the account holder violates the company's verification policy or terms of service. In such circumstances, Twitter typically notifies the account holder of the removal of the verification badge and the reason for it. In the instance of the "Twitter blue badge row" in India, however, it appears that Twitter did not notify the affected politicians or their representatives before revoking their verification badges. This lack of communication has exacerbated the controversy around the episode, with some critics accusing the company of acting arbitrarily and not following due process.
Is there a solution?
The "Twitter blue badge row" has no simple answer, since it involves a complex convergence of concerns about free expression, social media policies, and government regulation. However, here are some alternatives:
- Establish clear guidelines: Twitter should develop and consistently apply clear guidelines and policies for the verification process. All users, including politicians and government officials, would benefit from greater transparency and clarity.
- Increase transparency: Twitter's decision-making process for removing or restoring verification badges should be more open. This could include providing explicit reasons for badge removal, notifying affected users promptly, and offering an appeals mechanism for those who believe their badges were removed unfairly.
- Engage in constructive dialogue: Twitter should engage in constructive dialogue with government authorities and other stakeholders to address concerns about the platform's content moderation procedures. This could contribute to a more collaborative approach to managing online content, leading to more effective and widely accepted policies.
- Follow local rules and regulations: Twitter should collaborate with the Indian government to ensure it conforms to local laws and regulations while maintaining freedom of expression. This could involve adopting more precise standards for handling requests for content removal or other actions from governments and other organisations.
Conclusion
To sum up, the "Twitter blue tick row" in India has highlighted the complex challenges that social media platforms face daily in balancing the conflicting interests of free expression, government rules, and their own content moderation procedures. While Twitter's decision to withdraw the blue verification badges of several prominent Indian politicians drew anger from the government and some members of the public, it also raised questions about the transparency and uniformity of Twitter's verification procedure. To deal with this issue, Twitter must establish clear verification procedures and norms, promote transparency in its decision-making process, engage in constructive communication with stakeholders, and adhere to local laws and regulations. Furthermore, the Indian government should collaborate with social media platforms to create more effective and acceptable rules that balance the need for free expression with the protection of citizens' rights. The "Twitter blue tick row" is just one example of the complex challenges social media platforms face in managing online content, and it emphasises the need for greater collaboration among platforms, governments, and civil society organisations to develop effective solutions that protect both free expression and citizens' rights.