#FactCheck - Edited Video of ‘India-India’ Chants at Republican National Convention
Executive Summary:
A video circulating online alleges that attendees chanted "India India" as Ohio Senator J.D. Vance greeted them at the Republican National Convention (RNC). This claim is incorrect. The CyberPeace Research team’s investigation found that the video was digitally altered to add the chanting. The unaltered video, shared by The Wall Street Journal and confirmed via the YouTube channel of Forbes Breaking News, features different music playing as Senator Vance and his wife, Usha Vance, greeted those present. The claim that participants chanted "India India" is therefore false.
Claims:
A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance greet them at the Republican National Convention (RNC).
Fact Check:
Upon receiving the posts, we conducted a keyword search related to the context of the viral video. We found a video uploaded by The Wall Street Journal on July 16, titled "Watch: J.D. Vance Is Nominated as Vice Presidential Nominee at the RNC." At the 0:49 timestamp, no "India-India" chants can be heard, whereas they are clearly audible in the viral video.
We also found the video on the YouTube channel of Forbes Breaking News. At the 3:00:58 timestamp, the same clip as the viral video appears, but no "India-India" chant can be heard.
Hence, the claim made in the viral video is false and misleading.
Conclusion:
The viral video claiming to show "India-India" chants during Ohio Senator J.D. Vance's greeting at the Republican National Convention is altered. The original video, confirmed by sources including The Wall Street Journal and Forbes Breaking News, features different music without any such chants. Therefore, the claim is false and misleading.
Claim: A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance greet them at the Republican National Convention (RNC).
Claimed on: X
Fact Check: Fake & Misleading
Executive Summary:
This report discloses a new cyber threat targeting internet users in the name of "Aarong Ramadan Gifts". The fraudsters imitate the popular Bangladeshi brand Aarong, known for its Bengali ethnic wear and handicrafts, and lure victims with the offer of exclusive Ramadan gifts. When users click on the link, they are led through a staged sequence of quizzes, gift boxes, and fake social proof that can compromise their personal information and devices. Understanding how the scam operates helps users exercise caution and avoid falling victim to such threats.
False Claim:
A message circulating on social media, accompanied by a link, claims that Aarong, one of the most respected brands in Bangladesh for its exquisite ethnic wear and handicrafts, is offering exclusive Ramadan gifts through an online promotion. That is merely the facade of the scam; its real aim is to get users to click on harmful links that can compromise their personal data and devices.
The Deceptive Journey:
- The landing page opens with a greeting and an attractive photo of an Aarong store, then encourages visitors to take a short quiz to claim the gift. This is designed to create a false impression of authenticity and trustworthiness.
- A section at the bottom of the page mimics a social media comment section, with fake users posting about the benefits they supposedly received. This technique fabricates the appearance of a solid base of support and many satisfied participants.
- The quiz begins with a few easy questions about the user's familiarity with Aarong and their demographics. This data is valuable for crafting more sophisticated attacks and can be used to target specific victims in the future.
- After the user clicks OK, the screen displays a grid of gift boxes, and the user must make at least three attempts to win the reward. This commonly used approach keeps users engaged longer and increases the chances of their complying with the fraudulent scheme.
- At this point the user is instructed to share the campaign on WhatsApp, clicking the WhatsApp button repeatedly until a progress bar completes. This both spreads and perpetuates the scam, drawing in many more users.
- After completing the steps, the user is shown instructions on how to claim the prize.
The Analysis:
- The landing page and quiz are structured to maintain a false impression of genuineness and professionalism, drawing victims into the fraudulent scheme. The requirement to forward the message on WhatsApp is how the scammers pull more and more users into the scam.
- The final purpose of the scam could be to obtain personal data from the user and eventually enter their devices, which could lead to a higher risk of cyber threats, such as identity theft, financial theft, or malware installation.
- We cross-checked and, as of now, found no established, credible source or official notification confirming such an offer from Aarong.
- The campaign is hosted on a third-party domain instead of the official website, which raises suspicion. The domain was also registered recently.
- The intercepted request revealed a backend connection to Baidu, a China-linked analytics service.
- Domain Name: apronicon.top
- Registry Domain ID: D20231130G10001G_13716168-top
- Registrar WHOIS Server: whois.west263[.]com
- Registrar URL: www.west263[.]com
- Updated Date: 2024-02-28T07:21:18Z
- Creation Date: 2023-11-30T03:27:17Z (Recently created)
- Registry Expiry Date: 2024-11-30T03:27:17Z
- Registrar: Chengdu west dimension digital
- Registrant State/Province: Hei Long Jiang
- Registrant Country: CN (China)
- Name Server: amos.ns.cloudflare[.]com
- Name Server: zara.ns.cloudflare[.]com
Note: The cybercriminals used Cloudflare to mask the actual IP address of the fraudulent website.
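The WHOIS indicators listed above (a recently created domain, a registry expiry only a year out, a registrant country unrelated to the brand) can be checked programmatically. The following Python sketch is illustrative and was not part of the original investigation; the function names are our own, and the raw record is inlined from the fields above so the example is self-contained. It parses key/value fields from a WHOIS response and flags domains registered within the past year, a common scam signal:

```python
from datetime import datetime, timezone

def parse_whois(raw: str) -> dict:
    """Parse 'Key: value' lines from a raw WHOIS response into a dict."""
    record = {}
    for line in raw.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            record[key.strip()] = value.strip()
    return record

def is_recently_registered(record: dict, now: datetime, days: int = 365) -> bool:
    """Flag domains whose Creation Date is within `days` of `now`."""
    created = datetime.strptime(record["Creation Date"], "%Y-%m-%dT%H:%M:%SZ")
    created = created.replace(tzinfo=timezone.utc)
    return (now - created).days < days

# Fields taken from the WHOIS record shown above
raw = """Domain Name: apronicon.top
Creation Date: 2023-11-30T03:27:17Z
Registry Expiry Date: 2024-11-30T03:27:17Z
Registrant Country: CN"""

record = parse_whois(raw)
print(is_recently_registered(record, datetime(2024, 3, 1, tzinfo=timezone.utc)))  # prints True
```

In practice the raw record would come from a WHOIS client or API rather than a hard-coded string; a domain flagged this way is not proof of fraud on its own, but combined with the other signals above it strengthens the case.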
CyberPeace Advisory:
- Do not open suspicious or unsolicited messages received on social platforms. Your own discretion is your first line of defence.
- Falling prey to such scams could compromise your entire system, potentially granting unauthorized access to your microphone, camera, text messages, contacts, pictures, videos, banking applications, and more. Keep your cyber world safe against any attacks.
- Never reveal sensitive data such as login credentials or banking details to entities you have not verified as trustworthy.
- Before sharing any content or clicking on links within messages, always verify the legitimacy of the source. Protect not only yourself but also those in your digital circle.
- To verify the authenticity of offers and messages, consult official sources or contact the company directly. Confirm that an alluring offer is genuine before taking any action.
Conclusion:
The Aarong Ramadan Gift scam is a fraudulent act that exploits victims' trust in a reputable brand. Understanding the mechanisms used to make the campaign look genuine helps us stay alert and protect our communities from such cyber threats. Be aware, verify credibility, and spread awareness wherever you can to help build a security-conscious digital space.
Introduction
In recent times, the evolution of cyber law has picked up momentum, primarily because of new and emerging technologies. However, as with any other law, cyber law is strengthened and substantiated by judicial precedents and judgements. Recently, the Delhi High Court heard a matter between Tata Sky and LinkedIn, in which the court asked LinkedIn to present its Chief Grievance Officer details and SoP as required under the Intermediary Guidelines, 2021.
Furthermore, officials from the RBI and MeitY have been summoned by the Parliamentary Standing Committee to address the rising issues of cybersecurity and cybercrime in India. This came on the very first day of this year's monsoon session of Parliament. As we move further into digital India, addressing these concerns is of utmost importance to safeguard Indian netizens.
The Issue
Tata Sky changed its name to Tata Play last year and has since entered the OTT sector as well. As the rebranding took place, the company was cautious about anyone using the Tata Sky name improperly. Tata Play found that many people on LinkedIn had listed multiple years of work experience at Tata Sky, claims that a new recruiter cannot easily verify. This amounts to a misappropriation of the brand's name. Officials of Tata Play reported the issue to LinkedIn multiple times, but no significant action followed, and the resulting dispute was brought before the Hon'ble Delhi High Court. The court took due cognisance of the issue and, in accordance with the Intermediary Guidelines, 2021, directed LinkedIn to publish the details of its Chief Grievance Officer in the public domain and to share its SoP for the redressal of issues and grievances. The guidelines make it mandatory for all intermediaries to set up a dedicated office in India and appoint a Chief Grievance Officer responsible for the effective and efficient redressal of platform-related offences and grievances within the stipulated period.
The job platform has also been ordered to share its SoPs and the various requirements and safety checks users must satisfy to create profiles on LinkedIn. LinkedIn's policy is focused on both the users and the companies on the platform, in order to create a synergy between the two.
RBI and MeitY Officials at Parliament
As we go deeper into cyberspace, especially after the pandemic, we have seen an exponential rise in cybercrime. Based on statistics, 4 out of 10 people were victims of cybercrime in 2022-23, and an estimated 70% of the population has been subjected to direct or indirect cybercrime. As per the latest statistics, 85% of Indian children have been subjected to cyberbullying in some form.
The government has taken note of the rising number of such crimes and threats, and the Parliamentary Committee has accordingly summoned officials from the RBI and the Ministry of Electronics and Information Technology to Parliament on July 20, 2023, the first day of the monsoon session. This comes at a crucial time, as the Digital Personal Data Protection Bill is to be tabled in Parliament this session, marking a revamping of the legislation and regulations governing Indian cyberspace. As emerging technologies increasingly surround us, it is pertinent to create legal safeguards and practices to protect Indian netizens at large.
Conclusion
The legal crossroads between Tata Sky and LinkedIn will go a long way towards establishing the mandates under the Intermediary Guidelines in the form of legal precedents. Compliance with the rule of law is the most crucial aspect of any democracy; hence the separation of powers between the Legislature, Judiciary and Executive has been fundamental in safeguarding basic and fundamental rights. Similarly, the summoning of RBI and MeitY officials to Parliament shows the transparency in the system and reflects the true spirit of democracy, which will contribute towards creating a safe and secure Indian cyberspace.
Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) comes into force 20 days after publication, setting harmonized rules across the EU, and amends key regulations and directives to ensure a robust framework for AI technologies. The AI Act has been in development for two years; it enters into force across all 27 EU Member States on 1 August 2024, with the majority of its provisions becoming enforceable on 2 August 2026. The law prohibits certain uses of AI that threaten citizens' rights, such as biometric categorization and untargeted scraping of faces; systems that attempt to read emotions are banned in the workplace and schools, as are social scoring systems. It also prohibits the use of predictive policing tools in some instances. The law takes a phased approach to implementing the EU's AI rulebook, with various deadlines as different legal provisions start to apply.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
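The penalty ceiling described above reduces to a one-line formula: the higher of a fixed floor and a percentage of turnover. The Python sketch below is illustrative only; the function name is ours, and it models just the headline ceiling for the most serious violations, not the Act's full graduated fine schedule:

```python
def max_penalty_eur(annual_turnover_eur: float) -> float:
    """Headline fine ceiling under the EU AI Act: EUR 35 million or
    7% of total worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70 million, above the floor
print(max_penalty_eur(1_000_000_000))  # 70000000.0
# A smaller company with EUR 100 million turnover: the EUR 35 million floor applies
print(max_penalty_eur(100_000_000))    # 35000000.0
```

The "whichever is higher" structure means the fixed floor binds for firms with turnover below EUR 500 million, and the percentage binds above that.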
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits Unacceptable risks such as social scoring systems and manipulative AI. The regulation mostly addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users know they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The majority of obligations fall on providers (developers) that intend to place on the market or put into service high-risk AI systems in the EU, regardless of whether they are based in the EU or a third country; obligations also extend to third-country providers whose high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General purpose AI or GPAI model providers must provide technical documentation, and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Free and open license GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk – open or closed – must also conduct model evaluations, and adversarial testing, and track and report serious incidents and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but are not necessarily limited to, the relevant obligations: the information to include in technical documentation for authorities and downstream providers, the identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the transparency rules for general-purpose AI systems 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply, prohibiting certain AI systems.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general purpose AI models), Chapter VII (governance), Chapter XII (confidentiality and penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for General Purpose AI providers); Requirements for new GPAI models.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
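The phased deadlines above can be derived mechanically by adding month offsets to the entry-into-force date. The Python sketch below is a simple illustration under the assumption of plain calendar-month arithmetic (safe here because the anchor date is the 1st of a month); the helper and milestone labels are ours, not the Act's:

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # EU AI Act enters into force

def add_months(d: date, months: int) -> date:
    """Shift a date by whole calendar months, preserving the day of month."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Phased application deadlines as month offsets from entry into force
milestones = {
    "Prohibitions on unacceptable-risk AI": add_months(ENTRY_INTO_FORCE, 6),
    "Codes of Practice": add_months(ENTRY_INTO_FORCE, 9),
    "GPAI transparency rules": add_months(ENTRY_INTO_FORCE, 12),
    "Act applies in full (except Art. 6(1))": add_months(ENTRY_INTO_FORCE, 24),
    "Article 6(1) high-risk obligations": add_months(ENTRY_INTO_FORCE, 36),
}
for name, deadline in milestones.items():
    print(f"{deadline:%B %Y}: {name}")
```

Running this reproduces the February 2025, August 2025, August 2026, and August 2027 milestones listed above.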
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The EU AI Act has extraterritorial application: it can apply to companies not established in the EU if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is a landmark piece of legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models and adopts a risk-based approach to AI governance, categorizing potential risks into four tiers: unacceptable, high, limited, and low, with stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue. The Act establishes transparency requirements for general-purpose AI systems, provides specific rules for general-purpose AI (GPAI) models, and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation can be seen as strengthening the EU's position as a global leader in AI innovation and developing regulatory frameworks for emerging technologies. It sets a global benchmark for regulating AI. The companies to which the act applies will need to make sure their practices align with the same. The act may inspire other nations to develop their own legislation contributing to global AI governance. The world of AI is complex and challenging, the implementation of regulatory checks, and compliance by the concerned companies, all pose a conundrum. However, in the end, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to strike the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide