#FactCheck - Viral Video of Argentina Football Team Dancing to Bhojpuri Song is Misleading
Executive Summary:
A viral video of the Argentina football team dancing in the dressing room to a Bhojpuri song has been circulating on social media. After analyzing its origin, the CyberPeace Research Team found that the video had been altered and its music edited. The original footage was posted by former Argentine footballer Sergio Leonel Aguero on his official Instagram page on 19 December 2022 and shows Lionel Messi and his teammates celebrating their win at the 2022 FIFA World Cup. Contrary to the viral claim, the song in the original video is not in Bhojpuri. The viral video is cropped from a part of Aguero's upload, with the audio replaced by a Bhojpuri song. The claim that the Argentina team danced to a Bhojpuri song is therefore misleading.

Claims:
A video of the Argentina football team dancing to a Bhojpuri song after victory.


Fact Check:
On receiving these posts, we split the video into frames, performed a reverse image search on one of the frames, and found a video uploaded to the Sky Sports website on 19 December 2022.

We found that it shows the same celebration as the viral video, but the audio differs. On further analysis, we also found a live video uploaded by Argentine footballer Sergio Leonel Aguero on his Instagram account on 19 December 2022. The viral video is a clip from that live video, and the song playing in it is not a Bhojpuri song.
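For readers who want to replicate the first step of this fact-check, the sketch below shows how a clip can be split into still frames that are then submitted by hand to a reverse image search engine. It is a minimal illustration in Python using the OpenCV library; the filename viral_clip.mp4 and the one-frame-per-second sampling rate are assumptions for the example, not part of our actual workflow.

```python
# Minimal frame-extraction sketch. Assumes the clip has been saved locally as
# "viral_clip.mp4" (hypothetical name) and that opencv-python is installed.
import cv2

video = cv2.VideoCapture("viral_clip.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
frame_index = 0
saved = 0

while True:
    ok, frame = video.read()
    if not ok:  # end of the clip
        break
    # Keep roughly one frame per second to limit the number of search queries.
    if frame_index % int(fps) == 0:
        cv2.imwrite(f"frame_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Exported {saved} frames for reverse image search")
```

Each exported frame can then be uploaded to a reverse image search service (such as Google Images, Yandex, or TinEye) to locate earlier postings of the footage.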

This proves that the claim circulating on social media that the Argentina football team danced to a Bhojpuri song is false and misleading. People should always verify the authenticity of such content before sharing it.
Conclusion:
In conclusion, the video that appears to show Argentina's football team dancing to a Bhojpuri song is fake. It is a manipulated version of an original clip of the team celebrating its 2022 FIFA World Cup victory, with the audio replaced by a Bhojpuri song. This confirms that the claim circulating on social media is false and misleading.
- Claim: A viral video of the Argentina football team dancing to a Bhojpuri song after victory.
- Claimed on: Instagram, YouTube
- Fact Check: Fake & Misleading

Introduction
Meta is a leader among social media platforms and has built a widespread network of users and services across global cyberspace, revolutionising messaging and connectivity since 2004. The platform has brought people closer together, but that very popularity is also a liability: popular platforms are routinely exploited by cybercriminals to obtain unauthorised data or to create chatrooms that preserve their anonymity and defeat tracking. These bad actors often operate under fake names or accounts to avoid being caught, and platforms like Facebook and Instagram have frequently been in the headlines as portals where cybercriminals operate and commit crimes.
To keep netizens' data safe and secure, Paytm, in a first-of-its-kind service, is offering customers protection against cyber fraud through an insurance policy for fraudulent mobile transactions, described in detail later in this post.
Meta’s Cybersecurity
Meta has some of the best cybersecurity in the world, but that doesn't mean it cannot be breached. The social media giant is especially exposed to data breaches because various third parties are also involved: as seen in the Cambridge Analytica case, a huge chunk of user data became available and was used to influence voters during elections. To stay ahead of the curve and keep its platform safe and secure, Meta has deployed various AI- and ML-driven crawlers and tools that work on keeping the platform safe for its users, identify accounts that may be operated by bad actors, and remove those criminal accounts. This is supported by the keen participation of users through the reporting mechanism. Meta-Cyber provides visibility of all OT activities, continuously observes PLCs and SCADA systems for changes and configuration, and checks authorisation and its levels. Meta also runs various penetration-testing and bug bounty programmes to reduce vulnerabilities in its systems and applications; testers are paid well, depending on the scope and severity of the vulnerabilities they find.
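To make the idea of AI/ML-assisted enforcement concrete, here is a deliberately simplified sketch of how automated signals and user reports could be combined into a risk score that queues an account for human review. This is a toy illustration under our own assumptions, not Meta's actual system; every field name, weight, and threshold is invented.

```python
# Toy account-flagging heuristic (NOT Meta's real pipeline; all names and
# weights are invented for illustration).
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int
    user_reports: int          # reports filed by other users
    flagged_links_shared: int  # links flagged by a hypothetical URL scanner

def risk_score(s: AccountSignals) -> float:
    score = 0.0
    if s.account_age_days < 30:
        score += 1.0                       # brand-new accounts score higher
    score += 0.5 * s.user_reports          # community reporting signal
    score += 2.0 * s.flagged_links_shared  # strongest signal in this toy model
    return score

def needs_review(s: AccountSignals, threshold: float = 3.0) -> bool:
    """Queue the account for human review once the score crosses a threshold."""
    return risk_score(s) >= threshold

# A young account with two user reports and one flagged link gets queued.
print(needs_review(AccountSignals(account_age_days=5, user_reports=2,
                                  flagged_links_shared=1)))  # True
```

Real systems use learned models over far richer signals, but the pattern of scoring, thresholding, and routing to human review is the same.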
CyberRoot Risk Investigation
Social media giant Meta has taken down over 40 accounts operated by an Indian firm, CyberRoot Risk Analysis, allegedly involved in hack-for-hire services. Alongside this, Meta has taken down 900 fraudulently run accounts said to be operated from China by an unknown entity. CyberRoot Risk Analysis shared malware over the platform and impersonated its targets: lawyers, doctors, entrepreneurs, and people in industries such as cosmetic surgery, real estate, investment, pharmaceuticals, and private equity, as well as environmental and anti-corruption activists. The operators would get in touch with such people and then share malware hidden in files, which would often lead to data breaches and, subsequently, to various types of cybercrime.
Meta and its teams are working tirelessly to eradicate the influence of such bad actors from their platforms, and their use of AI- and ML-based tools has increased exponentially.
Paytm CyberFraud Cover
Paytm is offering customers protection against cyber fraud through an insurance policy covering fraudulent mobile transactions of up to Rs 10,000 for a premium of Rs 30. The cover, 'Paytm Payment Protect', is provided through a group insurance policy issued by HDFC Ergo. The company said the plan is being offered to increase trust in digital payments, which will push up adoption. The cover protects transactions made through UPI across all apps and wallets, and the insurance coverage has been obtained by One97 Communications, which operates under the Paytm brand.
The exponential increase in the use of digital payments during the pandemic has made more people susceptible to cyber fraud. While UPI has digital safeguards in place, most UPI-related frauds are carried out by confidence tricksters who get their victims to authorise a transaction by passing off collect requests as incoming payments; many fraudsters also collect payments by pretending to be merchants. Such frauds resulted in losses of more than Rs 63 crore in the previous financial year. Data insurance is new to India but is the need of the hour: the majority of netizens are unaware of the value of their data and so remain indifferent to data protection. Steps like this will lead to safer data management and protection mechanisms, thereby safeguarding Indian cyberspace.
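To see why the collect-request trick described above works, consider this toy sketch. It is not any real UPI API; the class and field names are invented purely to illustrate the difference between making a payment and approving a collect request.

```python
# Toy model of a UPI prompt (not a real API; all names are invented).
from dataclasses import dataclass

@dataclass
class UpiRequest:
    counterparty: str
    amount_inr: float
    kind: str  # "pay" = you initiate a payment; "collect" = the other side pulls funds

def describe(req: UpiRequest) -> str:
    if req.kind == "collect":
        # The point victims miss: approving a collect request moves money OUT.
        return (f"WARNING: approving this sends Rs {req.amount_inr:.2f} to "
                f"{req.counterparty}. You never need to approve anything to "
                f"RECEIVE money.")
    return f"You are paying Rs {req.amount_inr:.2f} to {req.counterparty}."

# A scammer's pitch: "approve this request to receive your cashback".
print(describe(UpiRequest("unknown@upi", 9999.0, "collect")))
```

The safeguard here is behavioural rather than technical: receiving money over UPI never requires approving a request or entering a PIN.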
Conclusion
Cyberspace is at a critical juncture in terms of data protection and privacy. With new legislation being drafted on the subject, we can expect stronger policies to prevent cybercrimes and cyber-attacks. Efforts by tech giants like Meta need to gain speed in securing both the platform and the user so that the future of these platforms remains strongly protected. The concept of data insurance also needs to be shared with netizens to raise awareness of the subject. Paytm's initiative is a monumental one, as it will encourage more platforms and banks to commit to coverage for cybercrimes. With cybercrime cases on the rise, such financial coverage comes as a ray of hope and security for netizens.

Introduction
In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate strand: a promising technological advancement with the potential to either enrich the nest of our society or unravel it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and peril. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that revealed the extent to which personal data could be harvested and used to influence electoral outcomes. Yet despite the indignation, the scandal produced only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in the re-election of Hungary's Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. Even the limited use of generative AI in disinformation campaigns so far has raised concerns about the enforcement of policies against generating targeted political material, such as content designed to sway specific demographic groups towards a particular candidate.
Yet while the threat of bad actors using AI to generate and disseminate disinformation is real and present, another dimension has largely remained unexplored: intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, can manipulate individuals without any intermediaries, and the more data they hold about a person, the better they can tailor their manipulations.
Root of the Problem
To fully grasp the potential risks, we must journey back 30 years to the birth of the online banner ad. The success of the first-ever banner ad, for AT&T, which boasted an astounding 44% click-through rate, ushered in a new era of digital advertising, followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative ends.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. Such nudging is not clearly illegal in the U.S. or the EU, even after the EU's AI Act comes into effect. The stakes are considerable: the market size of AI in India alone is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards to manage the threats posed by manipulative chatbots in the context of the 2024 elections.
First and foremost, candour and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users what they are for and that they are automated. Transparency ensures that people know they are interacting with an automated process.
Second, obtaining user consent is crucial. Before collecting user data for any purpose, including advertising or political profiling, services should seek users' informed consent. Simple opt-in and opt-out mechanisms give users control over their data; a minimal sketch of such a consent gate follows these guidelines.
Furthermore, ethical use is essential. A code of ethics for chatbot interactions should forbid manipulation, the dissemination of false information, and attempts to sway users' political opinions, guaranteeing that chatbots follow moral guidelines.
To preserve transparency and accountability, independent audits should be carried out. Users can feel more confident knowing that chatbot behaviour and data collection practices are regularly audited by impartial third parties for compliance with legal and ethical norms.
There are equally important don'ts to take into account. Coercion and manipulation ought to be outlawed completely: chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Unlawful data collection is another hazard to watch out for. Businesses must obtain users' express agreement before collecting personal information, and they must not sell or share this information for political purposes.
Fake identities should be avoided at all costs. Chatbots should not impersonate people or political figures, as this can result in manipulation and misinformation.
Impartiality is essential. Bots should not advocate for or take part in political activities that favour one political party over another; impartiality and fairness are crucial in every interaction.
Finally, invasive advertising techniques should be avoided. Chatbots should comply with legal norms by refraining from displaying political advertisements or messaging without explicit user consent.
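To ground the consent principle above, here is a minimal sketch of a consent gate in a chatbot pipeline. It is a hypothetical illustration under our own assumptions: the User, log_for_ads, and handle_message names are invented and do not correspond to any vendor's actual API.

```python
# Hypothetical consent gate for a chatbot (all names invented for illustration).
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    ads_consent: bool = False  # opt-in: defaults to NOT using data for ads

def log_for_ads(user: User, message: str) -> None:
    """Store a message for ad profiling only if the user has explicitly opted in."""
    if not user.ads_consent:
        return  # no consent, no collection
    print(f"[ads-log] {user.user_id}: {message}")

def handle_message(user: User, message: str) -> str:
    log_for_ads(user, message)         # gated by explicit consent
    return "Thanks for your message!"  # the normal chatbot reply goes here

# Usage: the opted-out user's text is never logged for advertising.
alice = User("alice")                # ads_consent defaults to False
bob = User("bob", ads_consent=True)  # explicit opt-in
handle_message(alice, "Who should I vote for?")
handle_message(bob, "Recommend a gift for my sister")
```

The design point is that consent acts as a default-off gate through which every data-collection path must pass, rather than an after-the-fact setting.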
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn't have any specific law regulating AI. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategy and is working towards a policy framework for AI. NITI Aayog has presented seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india

Introduction
Your iPhone isn’t just a device: it’s a central hub for almost everything in your life. From personal photos and videos to sensitive data, it holds it all. You rely on it for essential services, from personal to official communications, sharing of information, banking and financial transactions, and more. With so much critical information stored on your device, protecting it from cyber threats becomes essential. This is where the iOS Lockdown Mode feature comes in as a digital bouncer to keep cyber crooks at bay.
Apple introduced Lockdown Mode in 2022. It is an optional security feature available on iPhones, iPads, and Mac devices, working as an extreme, opt-in protection mechanism for the segment of users at higher risk of being targeted by serious cyber threats and intrusions into their digital security. Journalists, activists, government officials, celebrities, cybersecurity professionals, law enforcement professionals, and lawyers are among the intended beneficiaries of the feature, since the data on their devices can be highly confidential and its leak or compromise can cause a great deal of disruption. Given how prevalent cyber attacks are in this day and age, the need for such a feature cannot be overstated. It provides an additional layer of defence by limiting certain functions of the device, thereby reducing the chances of the user being targeted in a digital attack.
How to Enable Lockdown Mode in Your iPhone
On an iPhone running iOS 16 or later, go to Settings > Privacy & Security > Lockdown Mode. Tap Turn On Lockdown Mode and read the information about the features that will be unavailable on your device; if you are satisfied, scroll down and tap Turn On Lockdown Mode again. Your iPhone will restart with Lockdown Mode enabled.
Easy steps to enable lockdown mode are as follows:
- Open the Settings app.
- Tap Privacy & Security.
- Scroll down, tap Lockdown Mode, then tap Turn On Lockdown Mode.
How Lockdown Mode Protects You
Lockdown Mode is a security feature that deliberately restricts certain apps and features when enabled. For example, your device will not automatically join unsecured Wi-Fi networks and will disconnect from a non-secure network when Lockdown Mode is activated. Many other features may be affected, because the system prioritises security over typical operational convenience. Since Lockdown Mode restricts certain features and activities, you can exclude a particular app, or a website in Safari, from its restrictions; exclude only trusted apps or websites, and only when necessary.
References:
- https://support.apple.com/en-in/105120#:~:text=Tap%20Privacy%20%26%20Security.,then%20enter%20your%20device%20passcode
- https://www.business-standard.com/technology/tech-news/apple-lockdown-mode-what-is-it-and-how-it-prevents-spyware-attacks-124041200667_1.html