# FactCheck: Allu Arjun visits Shiva temple after success of Pushpa 2? No, the image is from 2017
Executive Summary:
A viral post on social media claims that actor Allu Arjun visited a Shiva temple to offer prayers in celebration of the success of his film, Pushpa 2. The post features an image of him at the temple. However, our investigation determined that the photo is from 2017 and is unrelated to the film's release.

Claims:
The claim states that Allu Arjun recently visited a Shiva temple to give thanks for the success of Pushpa 2, and that the accompanying photograph captures this moment.

Fact Check:
The image circulating on social media with the claim that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is misleading.
After conducting a reverse image search, we confirmed that this photograph is from 2017, taken during the actor's visit to the Tirumala Temple for a personal event, well before Pushpa 2 was ever announced. The context has been altered to falsely connect it to the film's success. Additionally, there is no credible evidence or recent reports to support the claim that Allu Arjun visited a temple for this specific reason, making the assertion entirely baseless.
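For readers curious how such an image match is confirmed, the sketch below shows one common technique: comparing perceptual hashes of the viral image against the archived photo. This is a minimal illustration of the idea behind reverse image matching, not CyberPeace's actual tooling, and the file names are hypothetical placeholders.

```python
# Minimal sketch: check whether two images are the same underlying
# photograph using a perceptual hash, the same idea reverse image
# search engines rely on. Requires: pip install Pillow imagehash
# File names are hypothetical placeholders, not real files.
from PIL import Image
import imagehash

viral_hash = imagehash.phash(Image.open("viral_post_image.jpg"))
archive_hash = imagehash.phash(Image.open("tirumala_2017_photo.jpg"))

# Hamming distance between the 64-bit hashes: small values mean the
# images are almost certainly the same photo, even after resizing or
# re-compression.
distance = viral_hash - archive_hash
print(f"Hash distance: {distance}")
print("Same photograph" if distance <= 10 else "Different images")
```

A distance threshold near 10 is a common rule of thumb for 64-bit perceptual hashes; the exact cutoff varies with how heavily an image has been edited.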

Conclusion:
The claim that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is false. The image being circulated actually dates to 2017, long before the film's release. This situation illustrates how misinformation can spread when an old photo is used to construct a misleading story. Before sharing viral posts, take a moment to verify the facts. Misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
- Claim: An image shows Allu Arjun visiting a Shiva temple after Pushpa 2’s success.
- Claimed On: Facebook
- Fact Check: False and Misleading

Introduction
As technology develops, new forms of fraud develop with it, and AI voice cloning schemes are one such issue that has recently come to light. Scammers have moved on to AI, and their methods and plans for deceiving people have changed accordingly. Deepfake technology can create realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or impersonate someone for illegal purposes. In this post, we look at the dangers and risks associated with AI voice cloning fraud, how scammers operate, and how one might protect oneself.
What is a Deepfake?
A “deepfake” is fake or altered audio, video, or imagery produced with artificial intelligence (AI) that can pass for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone saying or doing something they never said or did in audio or visual form; widely circulated voice impersonations of the American President are a well-known example. Deep voice impersonation technology can be used maliciously, such as in voice fraud or the spread of false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Con artists can impersonate someone over the phone and pressure their victims into providing personal information or paying money. A scammer may pose as a bank employee, a government official, or a friend or relative by using a deepfaked voice. The aim is to win the victim’s trust and raise the likelihood that they will fall for the hoax by creating a false sense of familiarity and urgency. Deepfake voice fraud is increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deepfakes?
Cybercriminals use AI voice deepfake technology to pose as trusted people or entities and mislead users into providing private information, money, or system access. With it, they can create audio recordings that mimic real people, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information.

Deepfake voice technology is also employed in phishing attacks, where fraudsters craft audio recordings that impersonate genuine messages from organisations or people the victim trusts. Such recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Fabricated audio can even be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for deceiving and manipulating victims, and organisations and the general public alike must be aware of the risk and adopt appropriate safety measures.
How to spot voice deepfakes and avoid them
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a faked voice to trick the victim into handing over money or private information. How can one protect oneself? Here are some guidelines to help you spot these scams and stay clear of them:
- Steer clear of unsolicited calls: One of the most common tactics of deepfake voice con artists, who often pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice: If someone phones you claiming to be a person you know, pay special attention to how they sound. Are there peculiar pauses or inflexions in their speech? Anything that does not seem right can be a sign of deepfake voice fraud (a programmatic voice-comparison check is sketched after this list).
- Verify the caller’s identity: To avoid falling for a deepfake voice scam, confirm who is calling. When in doubt, ask for the caller’s name, job title, and employer, and then independently verify that they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal details such as your Aadhaar number, bank account information, or passwords over the phone. Legitimate companies and organisations never request personal or financial information this way; if a caller does, it is a warning sign of a scam.
- Report any suspicious activity: If you think you have fallen victim to a deepfake voice fraud, inform the appropriate authorities, such as your bank, credit card company, local police station, or the nearest cyber cell. Reporting the fraud may help prevent others from becoming victims.
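As a technical complement to the manual checks above, a recording can also be compared against a known-genuine sample of the claimed speaker using speaker embeddings. The sketch below uses the open-source Resemblyzer library; the file names and the 0.75 threshold are illustrative assumptions, and a high similarity score alone cannot rule out a sophisticated clone.

```python
# Minimal sketch: compare a suspicious recording against a trusted,
# known-genuine sample of the claimed speaker using speaker embeddings.
# Requires: pip install resemblyzer numpy
# File paths and the 0.75 threshold are illustrative assumptions.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

known = encoder.embed_utterance(preprocess_wav("known_genuine_sample.wav"))
suspect = encoder.embed_utterance(preprocess_wav("suspicious_call.wav"))

# Resemblyzer embeddings are L2-normalised, so the dot product is the
# cosine similarity between the two voices.
similarity = float(np.dot(known, suspect))
print(f"Voice similarity: {similarity:.2f}")
if similarity < 0.75:
    print("Voices differ noticeably - treat the call as suspect.")
else:
    print("Voices match closely - but a good clone can also score high.")
```

Note that a check like this catches crude impersonation by a different human voice; detecting a high-quality AI clone requires dedicated anti-spoofing models.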
Conclusion
In conclusion, the field of AI voice deepfake technology is expanding fast and has huge potential for both beneficial and harmful effects. While deepfake voice technology can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, it can also be used for harm, such as voice-cloning fraud and impersonation to fabricate stories. As the technology develops and becomes harder to detect, users must be aware of the hazard and take the necessary precautions to protect themselves. Ongoing research into efficient techniques for identifying and controlling the associated risks is also needed. AI must be deployed responsibly and ethically so that voice deepfake technology benefits society rather than harming or deceiving it.

Executive Summary:
A misleading video has been widely shared online, falsely portraying Pandit Jawaharlal Nehru as stating that he was not involved in the Indian independence struggle and even opposed it. The video is a manipulated excerpt from Pandit Nehru’s final major interview, given in 1964 to American TV host Arnold Mich. The original footage, available on the YouTube channel of India’s state broadcaster Prasar Bharati, shows Pandit Nehru discussing Muhammad Ali Jinnah, stating that it was Jinnah who did not participate in the independence movement and opposed it. The viral video edits Pandit Nehru’s comments to create a false narrative, which is debunked by reviewing the full, unedited interview.

Claims:
In the viral video, Pandit Jawaharlal Nehru states that he was not involved in the fight for Indian independence and even opposed it.

Fact check:
Upon receiving the posts, we examined the video, divided it into keyframes using the InVID tool, and reverse-searched one of the frames. We found a video uploaded on the official Prasar Bharati Archives YouTube channel on 14 May 2019.
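For readers unfamiliar with the keyframe step, the sketch below shows the basic idea that tools like InVID automate: sampling individual frames from a video so each one can be submitted to a reverse image search. It is a minimal illustration, not our actual pipeline, and the input file name is a hypothetical placeholder.

```python
# Minimal sketch: sample roughly one frame per second from a video so
# each frame can be fed to a reverse image search.
# Requires: pip install opencv-python
import cv2

cap = cv2.VideoCapture("viral_clip.mp4")  # hypothetical file name
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 25  # fall back if metadata is missing

frame_idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    if frame_idx % fps == 0:  # keep one frame per second of footage
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} keyframes for reverse image search.")
```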

The description of the video identifies the footage as “perhaps Pandit Jawaharlal Nehru's last significant interview”, given to American TV host Arnold Mich in May 1964, shortly before his death. It adds that a book by Chandrika Prasad dates the interview’s airing in New York to 18 May 1964, barely a few days before Pandit Nehru’s death on 27 May 1964.
On reviewing the full video, we found that the viral clip of Pandit Nehru runs from 14:50 to 15:45. In this portion, Pandit Nehru is speaking about Muhammad Ali Jinnah, a key leader of the Muslim League.
At the timestamp 14:34, the American TV interviewer Arnold Mich says, “You and Mr. Gandhi and Mr. Jinnah, you were all involved at that point of Independence and then partition in the fight for Independence of India from the British domination.” Pandit Nehru replied, “Mr. Jinnah was not involved in the fight for independence at all. In fact, he opposed it. Muslim League was started in about 1911 I think. It was started really by the British encouraged by them so as to create factions, they did succeed to some extent. And ultimately there came the partition.”
Upon thorough analysis, we found that the viral video is an edited version of the real video, cut to misrepresent its actual context.
We also found the same interview uploaded on a Facebook page named Nehru Centre for Social Research on 1 December 2021.

Hence, the viral video is fake and misleading, and netizens should be careful before believing such edited videos.
Conclusion:
In conclusion, the viral video claiming that Pandit Jawaharlal Nehru said he was not involved in the Indian independence struggle has been found to be falsely edited. The original footage reveals that Pandit Nehru was referring to Muhammad Ali Jinnah’s role in the struggle, not his own. This debunks the false story conveyed by the manipulated video.
- Claim: Pandit Jawaharlal Nehru stated that he was not involved in the struggle for Indian independence and even opposed it.
- Claimed on: YouTube, LinkedIn, Facebook, X (Formerly known as Twitter)
- Fact Check: Fake & Misleading
Executive Summary:
A viral video of the Argentina football team dancing in the dressing room to a Bhojpuri song is being circulated on social media. After analyzing its origin, the CyberPeace Research Team discovered that the video had been altered and the music edited in. The original footage was posted by former Argentine footballer Sergio Leonel Aguero on his official Instagram page on 19 December 2022, showing Lionel Messi and his teammates celebrating their win at the 2022 FIFA World Cup. Contrary to the viral video, the song in the original is not in Bhojpuri. The viral clip is cropped from a part of Aguero’s upload, and its audio has been replaced with the Bhojpuri song. Therefore, the claim that the Argentinian team danced to a Bhojpuri song is misleading.

Claims:
A video of the Argentina football team dancing to a Bhojpuri song after victory.

Fact Check:
On receiving these posts, we split the video into keyframes, performed a reverse image search on one of them, and found a video uploaded to the SKY SPORTS website on 19 December 2022.

We found that this is the same clip as in the viral video, but the audio differs. Upon further analysis, we also found a live video uploaded by Argentinian footballer Sergio Leonel Aguero on his Instagram account on 19 December 2022. The viral video is a clip from this live video, and the song playing in it is not a Bhojpuri song.
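An audio swap like this can also be checked programmatically by comparing the soundtracks of the two clips. The sketch below compares average chroma (musical pitch-class) profiles using the librosa library; the file names are hypothetical placeholders, chroma similarity is only a rough indicator, and this is an illustration rather than our actual method.

```python
# Minimal sketch: test whether two clips share the same soundtrack by
# comparing their average chroma (musical pitch-class) profiles.
# Requires: pip install librosa numpy
# Audio tracks are assumed to be extracted first (e.g., with ffmpeg);
# file names are hypothetical placeholders.
import librosa
import numpy as np

def chroma_profile(path: str) -> np.ndarray:
    """Return the average 12-bin chroma vector for the first 30 seconds."""
    y, sr = librosa.load(path, duration=30)
    return librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1)

original = chroma_profile("aguero_live_audio.wav")
viral = chroma_profile("viral_clip_audio.wav")

# Cosine similarity between the two pitch-class profiles: values near
# 1.0 suggest the same music; clearly lower values are consistent with
# the soundtrack having been replaced.
similarity = float(np.dot(original, viral) /
                   (np.linalg.norm(original) * np.linalg.norm(viral)))
print(f"Soundtrack similarity: {similarity:.2f}")
```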

Thus, the claim circulating on social media that the Argentina football team danced to a Bhojpuri song is false and misleading. People should always check the authenticity of such content before sharing it.
Conclusion:
In conclusion, the video that appears to show Argentina’s football team dancing to a Bhojpuri song is fake. It is a manipulated version of an original clip celebrating their 2022 FIFA World Cup victory, with the audio altered to include a Bhojpuri song. This confirms that the claim circulating on social media is false and misleading.
- Claim: A viral video of the Argentina football team dancing to a Bhojpuri song after victory.
- Claimed on: Instagram, YouTube
- Fact Check: Fake & Misleading