#FactCheck - Debunked: Viral Video Falsely Claims Allu Arjun Joins Congress Campaign
Research Wing
Innovation and Research
PUBLISHED ON
Apr 22, 2024
Executive Summary
The viral video, in which South Indian actor Allu Arjun is seen supporting the Congress Party's campaign for the upcoming Lok Sabha election, suggests that he has joined the Congress Party. Over the course of an investigation, the CyberPeace Research Team uncovered that the video is a close-up of Allu Arjun marching as the Grand Marshal of the 2022 India Day Parade in New York, held to celebrate India’s 75th Independence Day. Reverse image searches, Allu Arjun's official YouTube channel, news coverage, and stock image websites all corroborate this fact. Thus, it has been firmly established that the claim that Allu Arjun is part of a Congress Party campaign is fabricated and misleading.
Claims:
The viral video alleges that South Indian actor Allu Arjun is using his popularity and star status to campaign for the Congress Party in the upcoming 2024 Lok Sabha elections.
Initially, after hearing the news, we conducted a quick keyword search for reports of actor Allu Arjun joining the Congress Party but found nothing related to this claim. However, we did find a video posted by SoSouth on Feb 20, 2022, about Allu Arjun’s father-in-law, Kancharla Chandrasekhar Reddy, joining the Congress after quitting former chief minister K Chandrasekhar Rao's party.
Next, we segmented the video into keyframes and reverse-searched one of the images, which led us to the Federation of Indian Association website. It states that the picture is from the 2022 India Day Parade. The picture closely resembles the viral video, and comparing the two helps determine whether they are from the same event.
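The keyframe step described above can be sketched in code. Below is a minimal, illustrative keyframe selector that keeps a frame whenever it differs noticeably from the last kept frame; the function name and threshold are our own choices for illustration, and in practice tools such as InVID or OpenCV operate on the actual video file.

```python
import numpy as np

def select_keyframes(frames, threshold=25.0):
    """Return indices of frames that differ noticeably from the previously
    selected keyframe, measured by mean absolute pixel difference."""
    if not frames:
        return []
    keyframes = [0]  # always keep the first frame
    last = frames[0].astype(np.float64)
    for i, frame in enumerate(frames[1:], start=1):
        current = frame.astype(np.float64)
        if np.abs(current - last).mean() > threshold:
            keyframes.append(i)
            last = current
    return keyframes

# Synthetic "video": three near-identical dark frames, then a scene change.
dark = [np.zeros((4, 4)) + n for n in (0, 1, 2)]
bright = [np.full((4, 4), 200.0)]
print(select_keyframes(dark + bright))  # → [0, 3]
```

Each selected keyframe would then be saved as an image and submitted to a reverse image search engine, exactly as in the investigation above.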
Taking a cue from this, we performed another keyword search using “India Day Parade 2022”. We found a video uploaded on the official Allu Arjun YouTube channel; it is the same video that has recently been shared on social media with a different context. The caption of the original video reads, “Icon Star Allu Arjun as Grand Marshal @ 40th India Day Parade in New York | Highlights | #IndiaAt75”
The reverse image search yielded further evidence: we found the image on Shutterstock, where the description of the photo reads, “NYC India Day Parade, New York, NY, United States - 21 Aug 2022 Parade Grand Marshall Actor Allu Arjun is seen on a float during the annual Indian Day Parade on Madison Avenue in New York City on August 21, 2022.”
With this, we concluded that the claim made in the viral video, that Allu Arjun is supporting the 2024 Lok Sabha election campaign, is baseless and false.
Conclusion:
The viral video circulating on social media has been taken out of context. The clip, which depicts Allu Arjun's participation in the 2022 India Day Parade, is not related to the ongoing election campaigns of any political party.
Hence, the assertion that Allu Arjun is campaigning for the Congress party is false and misleading.
Claim: A viral video purports to show actor Allu Arjun rallying for the Congress party.
Claimed on: X (Formerly known as Twitter) and YouTube
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion purposes, and there has been an alarming increase in the use of artificial intelligence to create fake explicit images and videos for this form of extortion.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. Moreover, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.
The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time for strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions that develop and deploy advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, together with collaboration with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen the legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content. Ensure adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.
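One concrete building block of the detection tooling mentioned above is content fingerprinting: platforms can hash known abusive images so that re-uploads are flagged automatically. The sketch below implements a simple average hash in plain NumPy; it is an illustrative example under our own assumptions, not a production detector, and detecting newly generated deepfakes requires far more sophisticated models.

```python
import numpy as np

def average_hash(image, hash_size=8):
    """Downscale a grayscale image to hash_size x hash_size by block
    averaging, then threshold each cell against the overall mean to
    produce a compact binary fingerprint."""
    h, w = image.shape
    bh, bw = h // hash_size, w // hash_size
    cropped = image[:bh * hash_size, :bw * hash_size]
    small = cropped.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming_distance(hash_a, hash_b):
    """Number of differing bits; a small distance suggests the same image."""
    return int(np.count_nonzero(hash_a != hash_b))

# A flagged image and a mildly re-encoded copy hash almost identically,
# while an unrelated image sits far away in Hamming distance.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64)).astype(float)
recompressed = original + rng.normal(0, 2, original.shape)  # mild noise
unrelated = rng.integers(0, 256, (64, 64)).astype(float)
print(hamming_distance(average_hash(original), average_hash(recompressed)))
print(hamming_distance(average_hash(original), average_hash(unrelated)))
```

Because the hash survives recompression and small edits, a platform can compare the hash of every upload against a database of fingerprints of known non-consensual content and flag near matches for review.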
Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated manipulated content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers and the extortion demands in AI sextortion cases are particularly alarming aspects of this issue. Teenagers are especially vulnerable to AI sextortion because of their heavy use of social media platforms for sharing personal information and images. Perpetrators exploit this vulnerability to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.
A short video clip of Prime Minister Narendra Modi is going viral on social media. In the clip, he can be heard saying, “What sins did we commit in our previous life that we were born in India?” Users are sharing this video claiming that the Prime Minister insulted India and its people during a foreign visit. However, research by CyberPeace found that the claim is misleading. The viral clip is taken out of context from a longer speech delivered by Modi during his visit to Shanghai, China, in 2015.
Claim:
A Facebook user named “Bittu Yadav” shared the reel, portraying the statement as anti-India. The caption reads: “Look at this, and you supporters—see how your ‘leader’ is praising the country.”
To verify the claim, we searched relevant keywords on Google and found the full video uploaded on May 16, 2015, on the official YouTube channel of the Bharatiya Janata Party. The video shows Prime Minister Narendra Modi addressing the Indian community in Shanghai, China.
In the 57-minute speech, at around 51 minutes 25 seconds, Modi was referring to the pessimistic atmosphere in India before 2014. He said: “Within a year… people used to say, ‘Leave it, nothing will happen now. Who knows what sins we committed in our previous life that we were born in India’… From that mindset, today the world says that if there is a country growing at the fastest pace, it is India.”
This clearly shows that Modi was citing a past sentiment to highlight how perceptions about India have changed over time, not expressing his personal view. Media reports from his May 2015 China visit also noted that he addressed around 5,000 members of the Indian community in Shanghai, where he spoke about India’s economic growth and initiatives like “Make in India.”
The viral claim is false. The video has been edited and shared out of context. In reality, Prime Minister Narendra Modi was referring to a past mindset before 2014 while highlighting the change in India’s global perception.
Smartphones have revolutionised human connectivity. In 2023, it was estimated that almost 96% of the global digital population accessed the internet via mobile phones, with India alone accounting for 1.05 billion users. Information consumption has grown exponentially due to the enhanced accessibility these devices provide. Mobile phones make information available wherever one is and have completely transformed how we engage with the world around us, whether skimming work emails while commuting, streaming video during breaks, reading an ebook at our convenience, or catching up on news at any time or place. They grant us instant access to the web and are always within reach.
But this instant connection has its downsides too, and one of the most worrying is the rampant rise of misinformation. These tiny screens, and our constant on-the-go dependence on them, can be directly linked to the spread of “fake news,” as people consume more and more content in rapid bursts without taking the time to really process it or think deeply about its authenticity. There is an underlying cultural shift in how we approach information and learning: the onslaught of vast amounts of “bite-sized information” discourages people from researching what they are being told or shown. The focus has shifted from learning deeply to consuming more and sharing faster. This change in audience behaviour is making us vulnerable to misinformation, disinformation, and unchecked foreign influence.
The Growth of Mobile Internet Access
More than 5 billion people are connected to the internet, and web traffic is increasing rapidly. Developed countries in North America and Europe are experiencing near-universal mobile internet penetration, while developing countries in Africa, Asia, and Latin America are seeing rapid growth in penetration. The introduction of affordable smartphones and low-cost mobile data plans has expanded access to internet connectivity, and the development of 4G and 5G infrastructure has further bridged connectivity gaps. This widespread access to the mobile internet has democratised information, allowing millions of users to participate in the digital economy, access educational resources, and engage in global conversations. It reduces the digital divide between diverse groups and empowers communities with unprecedented access to knowledge and opportunities.
The Nature of Misinformation in the Mobile Era
Misinformation spread has become more prominent in recent times and one of the contributing factors is the rise of mobile internet. This instantaneous connection has made social media platforms like Facebook, WhatsApp, and X (formerly Twitter) available on a single compact and portable device. These social media platforms enable users to share content instantly and to a wide user base, many times without verifying its accuracy. The virality of social media sharing, where posts can reach thousands of users in seconds, accelerates the spread of false information. This ease of sharing, combined with algorithms that prioritise engagement, creates a fertile ground for misinformation to flourish, misleading vast numbers of people before corrections or factual information can be disseminated.
Several factors amplify misinformation sharing through the mobile internet: algorithmic amplification that prioritises engagement, the ease of sharing enabled by instant access and user-generated content, users' limited media literacy, and echo chambers that reinforce existing biases and spread false information.
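The algorithmic-amplification factor can be illustrated with a toy example. The post data and scoring weights below are entirely hypothetical, and real platform ranking systems are far more complex, but the core dynamic is the same: a feed ordered purely by engagement surfaces the most shared post first, regardless of its accuracy.

```python
# Toy illustration: an engagement-first feed surfaces sensational content.
posts = [
    {"id": "verified-report", "accurate": True, "likes": 120, "shares": 15},
    {"id": "sensational-rumour", "accurate": False, "likes": 900, "shares": 400},
    {"id": "official-correction", "accurate": True, "likes": 60, "shares": 5},
]

def engagement_score(post):
    # Shares weighted higher than likes: resharing drives further reach.
    return post["likes"] + 5 * post["shares"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])
# → ['sensational-rumour', 'verified-report', 'official-correction']
```

Because the ranking function never consults the `accurate` field, the false but highly engaging post reaches the top of the feed, which is exactly the amplification dynamic described above.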
Gaps and Challenges due to the increased accessibility of Mobile Internet
Despite growing concerns about misinformation spread via the mobile internet, policy responses remain inadequate, particularly in developing countries. These gaps include the lack of algorithm regulation: social media platforms prioritise engaging content, often fuelling misinformation. Inadequate international cooperation further complicates enforcement, as national regulations have struggled to address the cross-border nature of misinformation.
Additionally, balancing content moderation with free speech remains challenging, with efforts to curb misinformation sometimes leading to concerns over censorship.
Finally, a deficit in media literacy leaves many vulnerable to false information. Governments and international organisations must prioritise public education to equip users with the required skills to evaluate online content, especially in low-literacy regions.
CyberPeace Recommendations
Addressing misinformation via mobile internet requires a collaborative, multi-stakeholder approach.
Governments should mandate algorithm transparency, ensuring social media platforms disclose how content is prioritised and give users more control.
Collaborative fact-checking initiatives between governments, platforms, and civil society could help flag or correct false information before it spreads, especially during crises like elections or public health emergencies.
International organisations should lead efforts to create standardised global regulations to hold platforms accountable across borders.
Additionally, large-scale digital literacy campaigns are crucial, teaching the public how to assess online content and avoid misinformation traps.
Conclusion
Mobile internet access has transformed information consumption and bridged the digital divide. At the same time, it has also accelerated the spread of misinformation. The global reach and instant nature of mobile platforms, combined with algorithmic amplification, have created significant challenges in controlling the flow of false information. Addressing this issue requires a collective effort from governments, tech companies, and civil society to implement transparent algorithms, promote fact-checking, and establish international regulatory standards. Digital literacy should be enhanced to empower users to assess online content and counter any risks that it poses.