#FactCheck - Debunked: Viral Video Falsely Claims Allu Arjun Joins Congress Campaign
Executive Summary
The viral video, in which South Indian actor Allu Arjun appears to support the Congress Party's campaign for the upcoming Lok Sabha election, suggests that he has joined the Congress Party. Over the course of an investigation, the CyberPeace Research Team uncovered that the video is a close-up of Allu Arjun marching as the Grand Marshal of the 2022 India Day Parade in New York, held to celebrate India’s 75th Independence Day. Reverse image searches, Allu Arjun's official YouTube channel, news coverage, and stock image websites all corroborate this. Thus, it is firmly established that the claim that Allu Arjun is campaigning for the Congress Party is fabricated and misleading.
Claims:
The viral video alleges that South Indian actor Allu Arjun is lending his popularity and star status to the Congress Party's campaign for the upcoming 2024 Lok Sabha elections.
Fact Check:
Initially, after hearing the news, we conducted a quick keyword search for reports of actor Allu Arjun joining the Congress Party but found nothing to support the claim. We did, however, find a video posted by SoSouth on Feb 20, 2022, of Allu Arjun’s father-in-law, Kancharla Chandrasekhar Reddy, joining the Congress after quitting former chief minister K Chandrasekhar Rao's party.
Next, we segmented the video into keyframes and reverse-searched one of the images, which led us to the Federation of Indian Association website. It states that the picture is from the 2022 India Day Parade. The picture closely resembles the viral video, and comparing the two helps determine whether they are from the same event.
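Frame comparison like this can be semi-automated with perceptual hashing, a common fact-checking aid. Below is a minimal pure-Python sketch of a difference hash (dHash) over toy grayscale grids; in a real workflow the keyframe and the reference photo would first be downscaled to a small grid with an image library such as Pillow (an assumption of this sketch, not a tool the investigation states it used).

```python
# Sketch: near-duplicate frame comparison via a difference hash (dHash).
# Toy input: an 8-row x 9-column grid of grayscale values standing in
# for a downscaled video frame; real use would resize actual images.

def dhash(pixels, hash_size=8):
    """Each bit records whether a pixel is brighter than its
    right-hand neighbour, giving a 64-bit fingerprint."""
    bits = []
    for row in pixels:                      # hash_size rows
        for x in range(hash_size):          # hash_size comparisons per row
            bits.append("1" if row[x] > row[x + 1] else "0")
    return int("".join(bits), 2)

def hamming(h1, h2):
    """Count of differing bits between two hashes; a small distance
    suggests the two frames show the same scene."""
    return bin(h1 ^ h2).count("1")
```

Identical frames hash to the same value (distance 0), while a minor change flips only a few bits, which is why near-duplicate frames of the same parade footage would score close together.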
Taking a cue from this, we performed another keyword search, this time for “India Day Parade 2022”. We found a video uploaded to the official Allu Arjun YouTube channel; it is the same video that has recently been shared on social media in a different context. The caption of the original video reads, “Icon Star Allu Arjun as Grand Marshal @ 40th India Day Parade in New York | Highlights | #IndiaAt75”
The reverse image search surfaced further evidence: we found the image on Shutterstock, where the photo's description reads, “NYC India Day Parade, New York, NY, United States - 21 Aug 2022 Parade Grand Marshall Actor Allu Arjun is seen on a float during the annual Indian Day Parade on Madison Avenue in New York City on August 21, 2022.”
With this, we concluded that the claim made in the viral video, that Allu Arjun is supporting a 2024 Lok Sabha election campaign, is baseless and false.
Conclusion:
The viral video circulating on social media has been taken out of context. The clip, which depicts Allu Arjun's participation in the 2022 India Day Parade, is not related to the ongoing election campaign of any political party.
Hence, the assertion that Allu Arjun is campaigning for the Congress party is false and misleading.
- Claim: A video, which has gone viral, says that actor Allu Arjun is rallying for the Congress party.
- Claimed on: X (Formerly known as Twitter) and YouTube
- Fact Check: Fake & Misleading
Introduction
The Ministry of Electronics and Information Technology (MeitY) issued an advisory on March 1, 2024, urging intermediaries and platforms to ensure that their use of AI models, generative AI, large language models (LLMs), or other algorithms does not permit bias, discrimination, or threats to electoral integrity. The advisory requires that AI models deemed unreliable or under-tested in India obtain explicit government permission before deployment. Intermediaries are already required to follow the due diligence obligations outlined under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, updated as of 06.04.2023; this advisory urges them to abide by those rules and ensure compliance.
Key Highlights of the Advisories
- Intermediaries and platforms must ensure that their Artificial Intelligence models/LLMs/Generative AI, software, or algorithms do not allow users to host, display, upload, modify, publish, transmit, store, update, or share unlawful content, as per Rule 3(1)(b) of the IT Rules.
- The government urges intermediaries and platforms to prevent bias or discrimination in their use of Artificial Intelligence models, LLMs, Generative AI, software, or algorithms, and to ensure these do not threaten the integrity of the electoral process.
- The government requires explicit permission before any AI model, LLM, or algorithm deemed under-testing or unreliable is deployed on the Indian internet. Further, such models must be deployed with proper labelling of their potential fallibility or unreliability, and users can be informed through a consent popup mechanism.
- The advisory specifies that all users should be clearly informed, through terms of service and user agreements, about the consequences of dealing with unlawful information on platforms, including disabling of access, removal of non-compliant information, suspension or termination of the user's account, and punishment under applicable law.
- The advisory also outlines measures to combat deepfakes and misinformation. It necessitates identifying synthetically created content across various formats, advising platforms to employ labels, unique identifiers, or metadata to ensure transparency. Furthermore, it mandates the disclosure of software details and the tracing of the first originator of such synthetically created content.
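To illustrate the labelling-and-identifier idea, here is a minimal sketch of how a platform might tag synthetic content with metadata. The field names and hashing scheme are illustrative assumptions, not a format the advisory prescribes.

```python
# Sketch: attaching a label and unique identifier to synthetic media.
# The record layout is a hypothetical example, not mandated by MeitY.

import hashlib
from datetime import datetime, timezone

def label_synthetic_content(content_bytes, generator_name):
    """Return a metadata record marking content as synthetic. The
    identifier is a SHA-256 digest of the content bytes, so identical
    content always maps to the same ID, which aids traceability."""
    return {
        "synthetic": True,
        "generator": generator_name,
        "content_id": hashlib.sha256(content_bytes).hexdigest(),
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
```

A content-derived identifier like this lets a platform recognise re-uploads of the same synthetic clip even after the surrounding context changes.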
Rajeev Chandrasekhar, Union Minister of State for IT, specified that
“Advisory is aimed at the Significant platforms, and permission seeking from MeitY is only for large platforms and will not apply to startups. Advisory is aimed at stopping untested AI platforms from deploying on the Indian Internet. Process of seeking permission, labelling & consent-based disclosure to user about untested platforms is insurance policy to platforms who can otherwise be sued by consumers. Safety & Trust of India's Internet is a shared and common goal for Govt, users and Platforms.”
Conclusion
MeitY's advisory sets the stage for a more regulated AI landscape. The Indian government requires explicit permission for the deployment of under-testing or unreliable Artificial Intelligence models on the Indian internet. Alongside intermediaries, the advisory also applies to digital platforms that incorporate AI elements. It is aimed at significant platforms and will not apply to startups. This move safeguards users and fosters innovation by promoting responsible AI practices, paving the way for a more secure and inclusive digital environment.
References
- https://regmedia.co.uk/2024/03/04/meity_ai_advisory_1_march.pdf
- https://economictimes.indiatimes.com/tech/technology/govts-ai-advisory-will-not-apply-to-startups-mos-it-rajeev-chandrasekhar/articleshow/108197797.cms?from=mdr
- https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf
Introduction
Snapchat's Snap Map redefined location sharing with an ultra-personalised feature that allows users to see where they and their friends are, discover hotspots, and even explore events worldwide. In November 2024, Snapchat introduced an addition to its Family Center aimed at bolstering teen safety. The update enables parents to request and share live locations with their teens, set alerts for specific locations, and monitor who their child shares their location with.
While designed with safety in mind, such tracking tools raise significant privacy concerns. Misuse of these features could expose teens to potential harm, amplifying the debate around safeguarding children’s online privacy. This blog delves into the privacy and safety challenges Snap Map poses under existing data protection laws, highlighting critical gaps and potential risks.
Understanding Snap Map: How It Works and Why It’s Controversial
Snap Map, built on technology from Snap's acquisition of social mapping startup Zenly, revolutionises real-time location sharing by letting users track friends, send messages, and explore the world through an interactive map. With over 350 million active users by Q4 2023, and India leading with 202.51 million Snapchat users, Snap Map has become a global phenomenon.
This opt-in feature allows users to customise their location-sharing settings, offering modes like "Ghost Mode" for privacy, sharing with all friends, or selectively with specific contacts. However, location updates occur only when the app is in use, adding a layer of complexity to privacy management.
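The sharing modes described above can be sketched as a small permission check. This is an illustrative reconstruction of the behaviour the text describes, not Snapchat's actual implementation.

```python
# Sketch: Snap Map-style sharing modes. Mode names mirror the text;
# the logic is an assumption for illustration only.

from enum import Enum

class ShareMode(Enum):
    GHOST = "ghost"          # location hidden from everyone
    ALL_FRIENDS = "all"      # visible to every friend
    SELECTED = "selected"    # visible only to chosen contacts

def can_see_location(mode, viewer, selected_contacts, app_in_use):
    """Per the text, location updates only while the app is in use,
    and Ghost Mode hides the user entirely."""
    if not app_in_use or mode is ShareMode.GHOST:
        return False
    if mode is ShareMode.ALL_FRIENDS:
        return True
    return viewer in selected_contacts
```

Even this toy model shows why the defaults matter: a teen who never revisits these settings keeps whatever visibility was chosen at setup.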
While empowering users to connect and share, Snap Map’s location-sharing capabilities raise serious concerns. Unintentional sharing or misuse of this tool could expose users—especially teens—to risks like stalking or predatory behaviour. As Snap Map becomes increasingly popular, ensuring its safe use and addressing its potential for harm remains a critical challenge for users and regulators.
The Policy Vacuum: Protecting Children’s Data Privacy
Given the potential misuse of location-sharing features, it is important to evaluate the existing regulatory frameworks for protecting children's geolocation privacy. Geolocation features remain under-regulated in many jurisdictions, creating opportunities for misuse such as stalking or unauthorised surveillance. Multiple national and international jurisdictions are presently creating and implementing privacy laws. The most notable examples are the Children's Online Privacy Protection Act (COPPA) in the US, the General Data Protection Regulation (GDPR) in the EU, and the Digital Personal Data Protection (DPDP) Act in India, which have made considerable progress on children's privacy and online safety. COPPA and the GDPR prioritise children’s online safety through strict data protections, consent requirements, and limits on profiling. India’s DPDP Act, 2023 prohibits behavioural tracking and targeted ads aimed at children, enhancing privacy. However, it lacks safeguards against geolocation tracking, leaving a critical gap in protecting children from risks posed by location-based features.
Balancing Innovation and Privacy: The Role of Social Media Platforms
Privacy is an essential element that needs to be safeguarded, and this is especially important for children, who are vulnerable to harms they cannot always foresee. Social media companies must uphold their responsibility to ensure their platforms do not become a breeding ground for offences against children. Among the measures platforms must implement are robust parental control and consent mechanisms, so that parents are informed about their children’s online presence and can opt out of services they feel are not safe for their children. Platforms also need to maintain transparency so that users know what data is collected and understand the platform's data sharing and retention policies.
Policy Recommendations: Addressing the Gaps
Some of the recommendations for addressing the gaps in the safety of minors are as follows:
- Enhancing privacy and safety for minors by taking measures such as mandatory geolocation restrictions for underage users.
- Integrating clear consent mechanisms and data protection guidelines for users.
- Collaboration between stakeholders such as government, social media platforms, and civil society is necessary to create awareness about location-sharing risks among parents and children.
Conclusion
Safeguarding privacy, especially of children, with the introduction of real-time geolocation tools like Snap Map, is critical. While these features offer safety benefits, they also present the danger of misuse, potentially harming vulnerable teens. Policymakers must urgently update data protection laws and incorporate child-specific safeguards, particularly around geolocation tracking. Strengthening regulations and enhancing parental controls are essential to protect young users. However, this must be done without stifling technological innovation. A balanced approach is needed, where safety is prioritised, but innovation can still thrive. Through collaboration between governments, social media platforms, and civil society, we can create a digital environment that ensures safety and progress.
References
- https://indianexpress.com/article/technology/tech-news-technology/snapchat-family-center-real-time-location-sharing-travel-notifications-9669270/
- https://economictimes.indiatimes.com/tech/technology/snapchat-unveils-location-sharing-features-to-safeguard-teen-users/articleshow/115297065.cms?from=mdr
- https://www.thehindu.com/sci-tech/technology/snapchat-adds-more-location-safety-features-for-teens/article68871301.ece
- https://www.moneycontrol.com/technology/snapchat-expands-parental-control-with-location-tracking-to-make-it-easier-for-parents-to-track-their-kids-article-12868336.html
- https://www.statista.com/statistics/545967/snapchat-app-dau/
What are Deepfakes?
A deepfake is essentially a video in which a person's face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology manipulates videos, images, and audio using powerful computers and deep learning. It is used to generate fake news and commit financial fraud, among other wrongdoings: cybercriminals use Artificial Intelligence to overlay a digital composite onto an existing video, picture, or audio clip. The term deepfake was first coined in 2017 by an anonymous Reddit user who called himself "deepfakes".
Deepfakes work through a combination of AI and ML, which makes them hard to detect with Web 2.0 applications; it is almost impossible for a layperson to tell whether an image or video is genuine or has been created using deepfake technology. In recent times, a wave of AI-driven tools has impacted industries and professions across the globe. There is a key difference between image morphing and deepfakes: image morphing is primarily used to evade facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as their impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues are seen to originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, wherein a deepfake of Ukraine’s president, Mr Zelensky, surfaced on the internet and caused mass confusion and propaganda-based misappropriation among Ukrainians.
- Instigation against the Union of India: Deepfake poses a massive threat to the integrity of the Union of India, as this is one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of anti-national attacks against the nation.
- Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, which is often circulated on online gaming platforms, where children engage the most.
- Threat to Digital Privacy: Deepfakes are created from existing videos. Hence, bad actors often use photos and videos from social media accounts to create deepfakes, which directly threatens the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: In the contemporary world, the majority of nations lack a concrete policy to address the aspects of deepfake. Hence, it is of paramount importance to establish legal and industry-based grievance redressal mechanisms for the victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern, as it keeps netizens from understanding the technology and results in the under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes look like the original video at first glance, but as we progress into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, in order to stay protected and to address this issue before it is too late. The following aspects can be kept in mind when differentiating between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions: unnatural eye movement or a temporary twitch on the face can be signs that a video is a deepfake.
- Listen to the audio: The audio in a deepfake often has variations, as it is imposed on an existing video, so check whether the sound effects are in congruence with the actions or gestures in the video.
- Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background. In most deepfakes you can spot irregularities there because, in most cases, the background is created using virtual effects, leaving an element of artificiality.
- Context and Content: Most instances of deepfakes have been focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber-safety and digital-hygiene protocol, always fact-check every piece of information you come across on social media before sharing it with others.
- AI Tools: When in doubt, check it out: use deepfake-detection tools such as Sentinel, Intel’s real-time deepfake detector FakeCatcher, WeVerify, and Microsoft’s Video Authenticator to analyse videos and combat technology with technology.
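The checks above can be combined into a rough triage score, useful when deciding whether a clip warrants a closer look with dedicated detection tools. The weights and check names below are illustrative assumptions, not derived from any published detector.

```python
# Sketch: a toy triage score over the manual checks listed above.
# Weights are arbitrary illustrative values, not calibrated figures.

CHECK_WEIGHTS = {
    "irregular_facial_motion": 3,   # unnatural eye movement, twitches
    "audio_video_mismatch": 3,      # sound out of sync with gestures
    "background_artifacts": 2,      # virtual-effect irregularities
    "suspicious_context": 2,        # content pushing a dubious narrative
    "fails_fact_check": 4,          # claim contradicted by reliable sources
}

def triage_score(observations):
    """Sum the weights of the flagged checks; a higher score means the
    clip deserves scrutiny with dedicated deepfake-detection tools."""
    return sum(w for name, w in CHECK_WEIGHTS.items()
               if observations.get(name, False))
```

A scoring pass like this does not prove anything on its own; it only prioritises which clips to escalate to tools such as those named above.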
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not explicitly recognised by law in India, they are indirectly addressed by Sec. 66E of the IT Act, which makes it illegal to capture, publish, or transmit images of a person in the media without their consent, thus violating their privacy; the maximum penalty for this violation is a ₹2 lakh fine or three years in prison. With the enactment of the DPDP Act in 2023, the creation of deepfakes directly affects an individual's right to digital privacy and also implicates the Intermediary Guidelines under the IT Rules, as platforms are required to exercise caution over misinformation disseminated and published through deepfakes. Beyond this, the only remedies are indirect provisions of the Indian Penal Code covering the sale and dissemination of derogatory publications, songs and actions, deception in the delivery of property, cheating and dishonestly inducing the delivery of property, and forgery with intent to defame. Given the growing power of misinformation, deepfakes must be recognised legally. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an aftermath of the advancements of Web 3.0 and, hence, are just the tip of the iceberg in terms of the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so they can stay safe in the future. At the same time, developing and developed nations need to create policies and laws to efficiently regulate deepfakes and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.