#FactCheck - Edited Video of ‘India-India’ Chants at Republican National Convention
Executive Summary:
A video circulating online alleges that attendees chanted "India India" as Ohio Senator J.D. Vance greeted them at the Republican National Convention (RNC). This claim is incorrect. The CyberPeace Research Team's investigation found that the video was digitally altered to add the chanting. The original footage, shared by The Wall Street Journal and corroborated on the YouTube channel of Forbes Breaking News, features different music playing while J.D. Vance and his wife, Usha Vance, greeted those present. The claim that participants chanted "India India" is therefore false.

Claims:
A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance, greet them at the Republican National Convention (RNC).


Fact Check:
Upon receiving the posts, we performed a keyword search related to the context of the viral video. We found a video uploaded by The Wall Street Journal on July 16, titled "Watch: J.D. Vance Is Nominated as Vice Presidential Nominee at the RNC." At the 0:49 timestamp, no "India-India" chants can be heard, whereas they are clearly audible in the viral video.
We also found the footage on the YouTube channel of Forbes Breaking News. At the 3:00:58 timestamp, the same clip as the viral video appears, but no "India-India" chant can be heard.

Hence, the claim made in the viral video is false and misleading.
Conclusion:
The viral video claiming to show "India-India" chants as Ohio Senator J.D. Vance greeted attendees at the Republican National Convention is altered. The original video, confirmed by sources including The Wall Street Journal and Forbes Breaking News, features different music and no such chants. The claim is therefore false and misleading.
Claim: A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance, greet them at the Republican National Convention (RNC).
Claimed on: X
Fact Check: Fake & Misleading

The Ghibli trend has been in the news for the past couple of weeks, for reasons both good and bad. The nostalgia that many feel for the art form has led people to overlook what the trend means for the artists who painstakingly create it. Generative AI platforms may be trained on artistic material without the artists' explicit permission, effectively downgrading the rights of those artists. The artistic community has reached a point where it is questioning the value of its own craft, which this software can imitate in a couple of seconds without any thought for what it is reproducing. OpenAI's update to ChatGPT makes it simple for users to render anything from personal pictures to movie scenes as illustrations in a style reminiscent of Hayao Miyazaki's work. These advances in AI-generated art, including Ghibli-style imagery, raise critical questions about artistic integrity, intellectual property, and data privacy.
AI and the Democratization of Creativity
AI-powered tools have lowered barriers and enabled more people to engage in artistic expression. AI allows people to create appealing visual content regardless of their artistic ability. The ChatGPT update has, in effect, democratized art: a user's skill level no longer determines what they can produce. It makes art accessible, efficient, and open to creative experimentation for many.
Unfortunately, these developments also pose challenges for original artistry and the labour of human creators. The concern is not only that AI may replace artists, but also the potential misuse it enables, including unauthorized replication of distinctive styles and deepfake applications. Used ethically, AI can enhance artistic processes: it can assist with repetitive tasks, improve efficiency, and enable creative experimentation.
However, its ability to mimic existing styles raises concerns. AI-generated content could devalue the work of human artists, create copyright disputes, and introduce data privacy risks. AI models trained on art without authorization can also be exploited for misinformation and deepfakes, making human oversight essential. Some artists believe AI artworks are disrupting the accepted norms of the art world. Additionally, AI can misinterpret prompts and produce distorted or unethical imagery that contradicts artistic intent and cultural values, underscoring the need for human oversight.
The Ethical and Legal Dilemmas
The main dilemma surrounding trends such as the Ghibli trend is whether they compromise human effort by blurring the line between inspiration and infringement. A further issue, overlooked by most users, is whether the personal content (personal pictures, in this case) uploaded to AI models puts their privacy at risk. Finally, AI-generated content can be misused to spread misinformation through misleading or inappropriate visuals.
These negative effects can only be balanced by a policy framework that ensures the fair use of AI in art. Such a framework should also ensure that AI models are trained in a manner that is fair to the artists who originally created a style. Human oversight is needed to moderate AI-generated content, and it can be institutionalised through ethical AI usage guidelines for platforms that host such content.
Conclusion: What Can Potentially Be Done?
AI is not a replacement for human effort; it exists to ease it. We need to promote a balanced approach to AI that protects the integrity of artists while continuing to foster innovation, and we need to strengthen copyright laws to address AI-generated content. Labelling AI content and ensuring it is disclosed as AI-generated is the first step. Human artists whose work an AI model is trained on should also be fairly compensated. There is a growing need for global AI ethics guidelines to ensure transparency, ethical use, and human oversight in AI-driven art. The need of the hour is for industry to work collaboratively with regulators to ensure the responsible use of AI.
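To make the labelling suggestion concrete, here is a minimal sketch, assuming the generated image is a PNG and that the Pillow library is available, of how a platform might embed an AI-generation disclosure in the file's metadata. The key names used here are illustrative and not part of any established labelling standard.

```python
# Minimal sketch: attach an AI-generation disclosure to a PNG's metadata.
# Assumes Pillow is installed; the metadata keys are illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str, model_name: str) -> None:
    """Re-save an image with a plain-text AI-generation disclosure attached."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("AI-Generated", "true")        # hypothetical key
    metadata.add_text("Generator", model_name)       # which model produced it
    metadata.add_text("Disclosure", "This image was produced by a generative AI system.")
    image.save(dst_path, pnginfo=metadata)

# Usage (illustrative file names):
# label_ai_image("ghibli_style.png", "ghibli_style_labelled.png", "image-model-x")
```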
References
- https://medium.com/@haileyq/my-experience-with-studio-ghibli-style-ai-art-ethical-debates-in-the-gpt-4o-era-b84e5a24cb60
- https://www.bbc.com/future/article/20241018-ai-art-the-end-of-creativity-or-a-new-movement
Introduction
Bumble's launch of its 'Opening Move' feature has sparked a new conversation about safety and privacy in digital dating and has drawn mixed reactions from users. It was launched against the backdrop of women reporting that Bumble's 'message first' policy had become tedious. In response, Bumble introduced 'Opening Move', which lets users craft their own questions or select from pre-set ones that potential matches may answer to start the conversation at first glance. These questions are intended to spark meaningful, insightful conversation from the get-go and sidestep the traditional effort of starting an engaging chat between matched users. The feature is optional, so it does not take away the autonomy users previously had.
Innovative Approach to Conversation Starters
Many users consider the feature innovative: it not only acts as a catalyst for fluid conversation but also cultivates insightful dialogue, fostering meaningful interactions free of the constraints of superficial small talk. The 'Opening Move' feature also aligns with research suggesting that individuals form their initial attraction within roughly three seconds of interaction, making it a useful catalyst in that narrow decision-making window.
Organizational Benefits and Data Insights
From an organisational standpoint, the feature is a neat answer to the localisation challenges apps face: a personalised 'Opening Move' lets users set prompts that are culturally relevant and appropriate for a specific region. Bumble is also likely to improve the user experience through data analysis. Responses to an 'Opening Move' can yield valuable insight into user preferences and patterns, for example by showing which pre-set prompts draw more replies and how often a user-written 'Opening Move' succeeds compared with Bumble's pre-set prompts. A quick glance at Bumble's privacy policy[1] shows that stored and transferred chats between users are not shared with third parties, which safeguards personal privacy. However, Bumble does use chat data for its own internal purposes after removing personally identifiable information. The manner of this review and removal has not been specified, which may raise questions depending on whether the reviewer is a human or an algorithm.
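As an illustration of the kind of analysis described above, the sketch below computes a per-prompt response rate from a simple log of (prompt, was_answered) events. The data model and names are assumptions made for the example, not Bumble's actual pipeline.

```python
# Minimal sketch: which Opening Move prompts draw the most replies?
# The event log format is an assumption made for this illustration.
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def response_rates(events: Iterable[Tuple[str, bool]]) -> Dict[str, float]:
    """Return the fraction of times each prompt received a reply."""
    shown = defaultdict(int)
    answered = defaultdict(int)
    for prompt, was_answered in events:
        shown[prompt] += 1
        if was_answered:
            answered[prompt] += 1
    return {prompt: answered[prompt] / shown[prompt] for prompt in shown}

events = [
    ("Two truths and a lie - go!", True),
    ("Two truths and a lie - go!", False),
    ("What's your ideal Sunday?", True),
]
print(response_rates(events))
# e.g. {'Two truths and a lie - go!': 0.5, "What's your ideal Sunday?": 1.0}
```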
However, some users perceive the feature as counterproductive to the company's founding principle of 'women make the first move'. While Bumble markets the feature as neutral ground for matched users grounded in choice, critics see it as a step back into the heteronormative gender expectations most dating apps conform to, putting the onus of the 'first move' back on men. Many male users have complained that the feature pushes men to opt out of the app and that they would likely avoid interacting with profiles that enable 'Opening Move', since the pressure to answer creatively is disproportionate to the likelihood of their response actually being entertained.[2] Coupled with female users terming the original protocol 'too much effort', the pre-set questions may actively invite users to categorise potential matches according to arbitrary questions that flatten the real-life experiences, perspectives and backgrounds of each individual.[3]
Additionally, complications are likely to arise when a malicious user sets a question that indirectly gleans personal or sensitive, identifiable information. The individual responding to such a carefully crafted prompt may then be bullied or subjected to hateful slurs.
Safety and Privacy Concerns
As a corollary, the appearance of choice may translate into more challenges for women on the platform. The feature may increase the number of unsolicited, undesirable messages and images from a potential match; the most vulnerable groups remain individuals who identify as female and other sexual and gender minorities.[4] At present, there appears to be no mechanism that proactively monitors the content of responses; the platform relies instead on user reporting. This is likely to prove impractical given the potential volume of objectionable messages, and a more efficient solution is needed. Even when users do report, the redressal systems of online platforms remain lax, largely inadequate and ineffective at addressing user concerns or grievances. This lack of proactiveness risks violating the right to grievance redressal provided under the Digital Personal Data Protection Act, 2023. The feature may, in fact, take away the very autonomy Bumble aimed to grant: individuals who identify as introverted, shy, soft-spoken or non-assertive may refrain from reporting harassing messages altogether, out of discomfort or reluctance to engage in confrontation. As a result, a sharp uptick can be anticipated in cases of cyberbullying, harassment and hate speech (especially vulgar communications) directed at both the user and the potential match.
From an Indian legal perspective, dating apps must adhere to the Information Technology Act, 2000 [5], the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [6] and the Digital Personal Data Protection Act, 2023, which regulate a person's digital privacy and set standards for the kind of content an intermediary may host. An obligation is cast upon an intermediary to apprise its users of what content is not allowed on its platform, in addition to intimating users of their digital rights. The lack of automated checks described above is likely to leave Bumble non-compliant with these guidelines.
The optional nature of 'Opening Move' grants users some autonomy, but technical updates could enhance the feature further. Technologies like AI are effective aids for behavioural and predictive analysis. An upgraded matching algorithm could count the number of un-matches a profile receives and flag profiles with multiple lapsed matches (a minimal sketch of this idea follows below). The application's interface could also offer a filter that hides flagged profiles, letting users navigate their matches more cautiously. Another way to weed out malicious profiles is a peer-review system in which a user has a single check-box for flagging a profile, indicating whether it is most or least likely to bully or harass, with no free-text field for personal comments. This would ensure that a binary, precise response is recorded and coloured remarks are avoided.[7]
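The sketch below illustrates the un-match flagging idea under stated assumptions: un-match events arrive as a simple list of profile IDs, and the threshold of five lapsed matches is arbitrary. It is not Bumble's implementation.

```python
# Minimal sketch: flag profiles that accumulate many un-matches.
# The threshold and the event format are assumptions for illustration.
from collections import Counter
from typing import Iterable, Set

def flag_profiles(unmatched_profile_ids: Iterable[str], threshold: int = 5) -> Set[str]:
    """Return the profile IDs that were un-matched at least `threshold` times."""
    counts = Counter(unmatched_profile_ids)
    return {profile_id for profile_id, n in counts.items() if n >= threshold}

# Usage: each entry records one un-match against that profile.
events = ["user_17", "user_42", "user_17", "user_17", "user_17", "user_17"]
print(flag_profiles(events))  # {'user_17'}
```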
Governance and Monitoring Mechanisms
From a governance point of view, a mechanism for monitoring how questions are crafted is critical. Systems should be designed to detect particular words, sentences and framings so that questions contrary to the national legal framework can be disallowed (a simple sketch of such screening follows below). An onscreen notification with guidance on generally acceptable conversation, reminding users to maintain cyber hygiene while chatting, should also be mandated for platforms; the notice may include guidelines on what information is safe to share in order to safeguard user privacy. Lastly, a revised privacy policy should establish the legal basis for processing responses to 'Opening Moves', bringing the feature into compliance with national legislation such as the Digital Personal Data Protection Act, 2023.
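Below is a minimal sketch of the screening idea, assuming a small blocklist of regular expressions aimed at prompts that try to elicit personal or sensitive information. The patterns are illustrative; a production system would rely on proper content-moderation models and human review, not a handful of regexes.

```python
# Minimal sketch: screen user-written Opening Move prompts against a
# blocklist of patterns that fish for personal or sensitive information.
# The patterns are illustrative, not an exhaustive or production-grade list.
import re

BLOCKED_PATTERNS = [
    r"\bwhere do you live\b",
    r"\b(home|office)\s+address\b",
    r"\bphone\s*number\b",
    r"\bwhich (school|college|company) do you\b",
]

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts matching any blocked pattern (case-insensitive)."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

print(is_prompt_allowed("What's the best meal you've ever cooked?"))  # True
print(is_prompt_allowed("So, where do you live these days?"))         # False
```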
Conclusion
Bumble's 'Opening Move' feature marks the company's 'statement' step towards addressing user concerns about initiating conversations on the platform. While it has been praised for fostering more meaningful interactions, it raises ethical concerns as well as concerns over user safety. The feature can enhance the user experience, but its success depends largely on Bumble's ability to navigate the complex issues that come with it. A more robust monitoring mechanism that uses newer technology is critical to addressing user concerns and ensuring compliance with national data privacy laws.
Endnotes:
- [1] Bumble’s privacy policy https://bumble.com/en-us/privacy
- [2] Discussion thread, r/bumble, Reddit https://www.reddit.com/r/Bumble/comments/1cgrs0d/women_on_bumble_no_longer_have_to_make_the_first/?share_id=idm6DK7e0lgkD7ZQ2TiTq&utm_content=2&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1&rdt=65068
- [3] Mcrea-Hedley, Olivia, “Love on the Apps: When did Dating Become so Political?”, 8 February 2024 https://www.service95.com/the-politics-of-dating-apps/
- [4] Gewirtz-Meydan, A., Volman-Pampanel, D., Opuda, E., & Tarshish, N. (2024). Dating Apps: A New Emerging Platform for Sexual Harassment? A Scoping Review. Trauma, Violence, & Abuse, 25(1), 752-763. https://doi.org/10.1177/15248380231162969
- [5] Information Technology Act, 2000 https://www.indiacode.nic.in/bitstream/123456789/13116/1/it_act_2000_updated.pdf
- [6] Information Technology (Intermediary Guidelines and Digital Media Ethics) Rules 2021 https://www.meity.gov.in/writereaddata/files/Information%20Technology%20%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20%28updated%2006.04.2023%29-.pdf
- [7] Date Confidently: Engaging Features in a Dating App (Use Cases), Consaguous, 10 July 2023 https://www.consagous.co/blog/date-confidently-engaging-features-in-a-dating-app-use-cases

Introduction
In the dynamic realm of online gaming, where virtual worlds and competitive play converge, ensuring the safety of players has become imperative. As the digital gaming community expands, so do the risks and threats players must navigate, and with them the need for strategies and measures that safeguard players and foster a secure environment where gamers can immerse themselves in their passion without compromising their well-being.
Online gaming is a thriving industry, which makes gamers attractive targets for cyber theft, including account takeover (ATO). ATO involves stealing characters, inventory, in-game currencies, achievements and skins, with high-level accounts as prime targets. Fraud within games can also have real-life consequences, compromising personal information such as location, credentials and credit card details. Protecting oneself involves being careful about what information is shared, enabling two-factor authentication, and using strong, unique passwords alongside security solutions that provide additional safeguards for an uninterrupted gaming experience.
Online Gaming Carries The Following Major Risks
Viruses and malware: Searching for cheaper or free downloads of your preferred games puts you at risk of accidentally downloading malware and viruses.
Identity theft: Hackers gather personally identifiable information to impersonate their victims or commit identity fraud. The chat feature, especially when playing with strangers, is one of the riskier aspects of online gaming.
Account takeover: Using the same username and password across all of your preferred gaming platforms is inadvisable; if hackers obtain your login credentials, they may compromise all of your player accounts and potentially take control of them.
Doxing and swatting: Doxing is the practice of obtaining someone's private data and publishing their residential address or telephone number online. Swatting is a dangerous harassment tactic that originated in online gaming, in which false emergency reports are made to provoke an overwhelming police response at the unsuspecting victim's location.
How Hacking Poses Serious Risks to Online Gaming Security
The video game industry has grown rapidly in recent years, serving millions of players around the globe with an extensive array of engaging experiences. That very popularity makes it an increasingly attractive target for hackers looking to exploit its weaknesses.
Hackers are drawn to the gaming business for a number of reasons.
Its enormous revenue makes the sector an appealing target. Its large player base offers a treasure trove of private data that can be used for fraudulent transactions and other nefarious activities. Its high visibility makes it tempting for attackers seeking recognition or wanting to make an impression. Players' desire to add modifications, cheats or other third-party software to their games widens the attack surface. And competition in the sector is fierce, with winners taking home large cash prizes, which encourages some players to use DDoS attacks against rivals to gain an advantage.
Importance of Secure Servers
Keep Server Applications and Modifications Updated
Keeping all server software on its most recent version is a basic step in gaming server security. Developers regularly release updates and patches to address security flaws, so it is imperative to install them promptly. Ignoring updates leaves your server exposed to known vulnerabilities and makes it a prime target for cybercriminals.
Put Strict Access Controls in Place
Managing who has permission to access your gaming system is essential to avoid breaches and unwanted access. Enforce strong password rules and mandate complex passwords for administrators on the system.
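As an illustration, here is a minimal sketch of a password policy check for administrator accounts. The specific thresholds are assumptions; real deployments should also screen against breached-password lists and pair passwords with two-factor authentication, as discussed next.

```python
# Minimal sketch: enforce a basic password policy for admin accounts.
# The length and character-class rules are illustrative assumptions.
import string

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Require minimum length plus upper-case, lower-case, digit and symbol."""
    return (
        len(password) >= min_length
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_policy("hunter2"))                 # False: too short, too simple
print(meets_policy("Tr1cky-Goblin-Vault!"))    # True
```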
Put Two-Factor Authentication (2FA) in Place
Requiring a second factor for administrator logins protects an account even if its password is stolen. In addition, restrict access rights to those who need them for administrative tasks to reduce the chance of unauthorised individuals taking over a server.
Safety Measures
Players should be urged to adhere to best practices, which include:
Using strong, unique passwords.
Avoiding clicking on dubious links.
Updating software and apps regularly.
Upgrading antivirus software regularly.
Improving cybersecurity practices and raising awareness of possible risks can greatly improve the general population's safety in gaming.
Conclusion
The rapid expansion of the online gaming industry has brought heightened security threats alongside the recreation it offers. Players confront growing hacking attempts, leaks of sensitive information, malware, identity theft, and doxing. Secure servers are essential to reducing these dangers, which means frequent updates, restricted access, and user education. Enabling security measures to stay ahead of emerging threats is critical. Enhanced safety measures make for a more secure gaming environment, safeguarding the large population that participates in this quickly changing digital space.
References
- https://www.kaspersky.com/resource-center/threats/top-10-online-gaming-risks
- https://www.imperva.com/blog/cyber-attacks-gaming-industry/
- https://www.techslang.com/securing-gaming-servers-cybersecurity-best-practices-for-online-gaming-communities/
- https://www.vox.com/policy-and-politics/2018/1/13/16888710/barris-swatting-death-charges