#Factcheck: Allu Arjun visits Shiva temple after success of Pushpa 2? No, image is from 2017
Executive Summary:
Recently, a post went viral on social media claiming that actor Allu Arjun visited a Shiva temple to offer prayers in celebration of the success of his film Pushpa 2. The post features an image of him visiting the temple. However, our investigation determined that this photo is from 2017 and is unrelated to the film's release.

Claims:
The claim states that Allu Arjun recently visited a Shiva temple to express his thanks for the success of Pushpa 2, featuring a photograph that allegedly captures this moment.

Fact Check:
The image circulating on social media, said to show Allu Arjun visiting a Shiva temple to celebrate the success of Pushpa 2, is misleading.
After conducting a reverse image search, we confirmed that this photograph is from 2017, taken during the actor's visit to the Tirumala Temple for a personal event, well before Pushpa 2 was ever announced. The context has been altered to falsely connect it to the film's success. Additionally, there is no credible evidence or recent reports to support the claim that Allu Arjun visited a temple for this specific reason, making the assertion entirely baseless.
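Reverse image searches essentially match a compact visual fingerprint of a photo against previously indexed copies. The sketch below is purely illustrative of that idea and is not our actual workflow; it assumes the open-source Pillow and imagehash Python libraries, and the file names are hypothetical.

```python
# Minimal sketch (illustrative only): confirming that a viral image and an
# archived photo are the same underlying picture using a perceptual hash.
# Assumes the Pillow and imagehash libraries; file names are hypothetical.
from PIL import Image
import imagehash

viral = imagehash.phash(Image.open("viral_post_image.jpg"))       # image from the viral post
archive = imagehash.phash(Image.open("tirumala_2017_photo.jpg"))  # archived 2017 photo

# Perceptual hashes change very little under resizing or recompression,
# so a small Hamming distance indicates the same underlying photograph.
distance = viral - archive
print(f"Hamming distance: {distance}")
if distance <= 5:  # threshold is a judgment call; 0-5 usually means "same image"
    print("The viral image matches the 2017 photograph.")
else:
    print("The images differ; further checks are needed.")
```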

Conclusion:
The claim that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is false. The image circulating is actually from an earlier time. This situation illustrates how misinformation can spread when an old photo is used to construct a misleading story. Before sharing viral posts, take a moment to verify the facts. Misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
- Claim: A viral image shows Allu Arjun visiting a Shiva temple after Pushpa 2’s success.
- Claimed On: Facebook
- Fact Check: False and Misleading
Related Blogs

Executive Summary:
A video of Pakistani Olympic gold medalist and javelin thrower Arshad Nadeem wishing the people of Pakistan a happy Independence Day is going viral, with claims that snoring audio can be heard in the background. The CyberPeace Research Team found that the viral video was digitally edited to add the snoring sound. The original video published on Arshad's Instagram account has no snoring sound, so we are certain that the viral claim is false and misleading.

Claims:
A video of Pakistani Olympic gold medalist Arshad Nadeem wishing the people of Pakistan a happy Independence Day features snoring audio in the background.

Fact Check:
Upon receiving the posts, we thoroughly checked the video and then analyzed it with TrueMedia, an AI video detection tool, which found little evidence of manipulation in the voice or the face.


We then checked Arshad Nadeem's social media accounts and found the video uploaded on his Instagram account on 14 August 2024. In that video, no snoring sound can be heard.
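For readers curious how such a check can be reproduced, the sketch below shows one simple way to compare the audio tracks of a viral clip and the original upload. It is illustrative only and is not the tool used in this fact check; it assumes ffmpeg, NumPy, and SciPy are installed, the file names are hypothetical, and a real comparison would also need to time-align the two tracks.

```python
# Minimal, illustrative sketch: extract the audio from two copies of a video
# and look for extra energy in one of them (e.g. an added background sound).
# Assumes ffmpeg is on PATH plus numpy and scipy; file names are hypothetical.
import subprocess
import numpy as np
from scipy.io import wavfile

def extract_audio(video_path: str, wav_path: str) -> None:
    """Extract a mono 16 kHz WAV track from a video using ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-ac", "1", "-ar", "16000", wav_path],
        check=True,
    )

extract_audio("viral_clip.mp4", "viral.wav")
extract_audio("instagram_original.mp4", "original.wav")

_, viral = wavfile.read("viral.wav")
_, original = wavfile.read("original.wav")

# Trim to a common length and measure the residual between the two tracks.
# An added background sound shows up as a residual well above the noise floor
# (real comparisons would first align the clips in time).
n = min(len(viral), len(original))
residual = viral[:n].astype(np.float64) - original[:n].astype(np.float64)
print("Residual RMS:", np.sqrt(np.mean(residual ** 2)))
```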

Hence, we are certain that the claims in the viral video are fake and misleading.
Conclusion:
The viral video of Arshad Nadeem with a snoring sound in the background has been digitally edited. The CyberPeace Research Team confirms the sound was added, as the original video on his Instagram account has no snoring sound, making the viral claim false and misleading.
- Claim: A snoring sound can be heard in the background of Arshad Nadeem's video wishing Independence Day to the people of Pakistan.
- Claimed On: X
- Fact Check: Fake & Misleading
Introduction to Grooming
The term grooming is believed to have been first used by a group of investigators in the 1970s to describe the patterns by which an offender seduces a child. The term eventually evolved, came into common use among law enforcement agencies, and has now replaced ‘seduction’ for this behavioural pattern. At its core, grooming refers to an adult offender conditioning a child to further illicit motives. In its most common sense, it refers to the sexual victimisation of children, whereby an adult befriends a minor and builds an emotional connection in order to sexually abuse, exploit, and even traffic the victim. The advent of technology has shifted perpetrators from offline physical proximity to the internet, enabling groomers to integrate themselves completely into the victim’s life by maintaining consistent contact. While grooming can occur both online and offline, groomers often establish online contact before moving the ‘relationship’ offline to commit sexual offences.
Underreporting and Vulnerability of Teenagers
Given the elusive nature of the crime, cyber grooming remains one of the most underreported crimes, as victims are often unaware of what has happened or embarrassed to share their experiences. Teenagers are particularly susceptible to cyber grooming since they not only have more access to the internet but also engage in more online risk-taking behaviours, such as posting sensitive and personal pictures. Studies indicate that individuals aged 18 to 23 often lack awareness of the grooming process; they frequently engage in relationships with groomers without recognising the deceptive and manipulative tactics employed, mistakenly perceiving these relationships as consensual rather than abusive.
Rise of Cyber Grooming incidents after COVID-19 pandemic
There has been an uptick in cyber grooming after the COVID-19 pandemic, whereby an adult poses as a teenager or a child and befriends a minor on child-friendly websites or social media outlets and builds an emotional connection with the victim. The main goal is to obtain intimate and personal data of the minor, often in the form of sexual chats, pictures or videos, to threaten and coerce them into continuing such acts. The grooming process usually begins with seemingly harmless inquiries about the minor's age, interests, and family background. Over time, these questions gradually shift to topics concerning sexual experiences and desires. Research and data indicate that online grooming is primarily carried out by males, who frequently choose their victims based on attractiveness, ease of access, and the ability to exploit the minor's vulnerabilities.
Beyond Sexual Exploitation: Ideological and Commercial Grooming
Grooming is not confined to sexual exploitation. The rise of technology has expanded the influence of extremist ideological groups, granting them access to children who can be coerced into adopting their beliefs. This phenomenon, known as ideological grooming, presents significant personal, social, national security, and law enforcement challenges. Additionally, a new trend, termed digital commercial grooming, involves malicious actors manipulating minors into procuring and using drugs. Violent extremists are improving their online recruitment strategies, learning from each other to target and recruit supporters more effectively and are constantly leveraging children’s vulnerabilities to reinforce anti-government ideologies.
Policy Recommendations to Combat Cyber Grooming
To address the pervasive issue of cyber grooming and child recruitment by extremist groups, several policy recommendations can be implemented. Social media and online platforms should enhance their monitoring and reporting systems to swiftly detect and remove grooming behaviours. This includes investing in AI technologies for content moderation and employing dedicated teams to respond to reports promptly. Additionally, collaborative efforts with cybersecurity experts and child psychologists to develop educational campaigns and tools that teach children about online safety and identify grooming tactics should be mandated. Legislation should also be strengthened to include provisions specifically addressing cyber grooming, ensuring strict penalties for offenders and protections for victims. In this regard, international cooperation among law enforcement agencies and tech companies is essential to create a unified approach to tackling cross-border online threats to children's safety and security.
References:
- Lanning, Kenneth “The Evolution of Grooming: Concept and Term”, Journal of Interpersonal Violence, 2018, Vol. 33 (1) 5-16. https://www.nationalcac.org/wp-content/uploads/2019/05/The-evolution-of-grooming-Concept-and-term.pdf
- Jonie Chiu, Ethel Quayle, “Understanding online grooming: An interpretative phenomenological analysis of adolescents' offline meetings with adult perpetrators”, Child Abuse & Neglect, Volume 128, 2022, 105600, ISSN 0145-2134. https://doi.org/10.1016/j.chiabu.2022.105600. https://www.sciencedirect.com/science/article/pii/S014521342200120X
- “Online child sexual exploitation and abuse”, Sharing Electronic Resources and Laws on Crime (SHERLOC), United Nations Office on Drugs and Crime. https://sherloc.unodc.org/cld/en/education/tertiary/cybercrime/module-12/key-issues/online-child-sexual-exploitation-and-abuse.html
- Mehrotra, Karishma, “In the pandemic, more Indian children are falling victim to online grooming for sexual exploitation” The Scroll.in, 18 September 2021. https://scroll.in/magazine/1005389/in-the-pandemic-more-indian-children-are-falling-victim-to-online-grooming-for-sexual-exploitation
- Lorenzo-Dus, Nuria, “Digital Grooming: Discourses of Manipulation and Cyber-Crime”, 18 December 2022 https://academic.oup.com/book/45362
- Strategic orientations on a coordinated EU approach to prevention of radicalisation in 2022-2023 https://home-affairs.ec.europa.eu/system/files/2022-03/2022-2023%20Strategic%20orientations%20on%20a%20coordinated%20EU%20approach%20to%20prevention%20of%20radicalisation_en.pdf
- “Handbook on Children Recruited and Exploited by Terrorist and Violent Extremist Groups: The Role of the Justice System”, United Nations Office on Drugs and Crime, 2017. https://www.unodc.org/documents/justice-and-prison-reform/Child-Victims/Handbook_on_Children_Recruited_and_Exploited_by_Terrorist_and_Violent_Extremist_Groups_the_Role_of_the_Justice_System.E.pdf

Introduction
Artificial Intelligence (AI) has transcended its role as a futuristic tool; it is already an integral part of decision-making in sectors including governance, medicine, education, security, and the economy worldwide. There are concerns about the nature of AI, its advantages and disadvantages, and the risks it may pose to the world, as well as doubts about the technology’s capacity to provide effective solutions, especially as threats such as misinformation, cybercrime, and deepfakes become more common.
Recently, global leaders have reiterated that the use of AI should remain human-centric, transparent, and responsibly governed. Offering unbridled access to innovators while also preventing harm is a dilemma that must be resolved.
AI as a Global Public Good
In earlier times, only the most influential states and large corporations controlled the supply and use of advanced technologies, and they guarded them as national strategic assets. In contrast, AI has emerged as a digital innovation that exists and evolves within a deeply interconnected environment, which makes access far more distributed than before. The use of AI in one country brings its pros and cons not only to that place but to the rest of the world as well. For instance, deepfake scams and biased algorithms affect not only the people in the country where they are created but also those in every other country where the victims do business or communicate.
The Growing Threat of AI Misuse
- Deepfakes, Crime, and Digital Terrorism
The misuse of artificial intelligence is quickly becoming one of the main security problems. Deepfake technology is being used to spread electoral misinformation, communicate lies, and create false narratives. Cybercriminals are using AI to make phishing attacks faster and more efficient, break into security systems, and devise elaborate social engineering tactics. For extremist groups, AI can sharpen propaganda, recruitment, and coordination.
- Solution - Human Oversight and Safety-by-Design
To overcome these dangers, a global AI framework must be developed on the principle of safety-by-design. This means incorporating ethical safeguards from the development phase onwards rather than reacting after the damage is done. Human control is just as vital: AI systems that influence public confidence, security, or human rights should always remain under the control of human decision-makers. Automated decision-making without transparency or the possibility of auditing leads to black-box systems in which the assignment of responsibility is unclear.
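As a purely illustrative example of what human control and auditability can look like in practice, the sketch below routes high-impact or low-confidence AI outputs to a human reviewer and writes every decision to an append-only audit log. All names, thresholds, and the log format are assumptions, not any specific system’s design.

```python
# Minimal sketch of a human-oversight gate: auto-apply only high-confidence,
# low-impact decisions; route everything else to a human reviewer; log all
# decisions for later audit. Names and thresholds are illustrative only.
import json
import time
from dataclasses import dataclass

@dataclass
class ModelDecision:
    subject_id: str
    recommendation: str   # e.g. "approve" / "flag"
    confidence: float     # 0.0 - 1.0
    high_impact: bool     # touches rights, security, or public trust

AUDIT_LOG = "ai_decisions.log"

def record(entry: dict) -> None:
    """Append a timestamped, machine-readable audit record."""
    entry["timestamp"] = time.time()
    with open(AUDIT_LOG, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

def route(decision: ModelDecision) -> str:
    """Return 'auto' or 'human_review' and log the outcome either way."""
    needs_human = decision.high_impact or decision.confidence < 0.90
    outcome = "human_review" if needs_human else "auto"
    record({"subject": decision.subject_id,
            "recommendation": decision.recommendation,
            "confidence": decision.confidence,
            "routed_to": outcome})
    return outcome

print(route(ModelDecision("case-001", "flag", 0.72, high_impact=True)))  # -> human_review
```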
Three Pillars of a Responsible AI Framework
- Equitable Access to AI Technologies
One of the major hindrances to global AI development is the non-uniformity of access. High-end computing capability, data infrastructure, and AI research resources remain highly concentrated in a few regions. A sustainable framework is needed so that smaller countries, rural areas, and speakers of different languages can also share in the benefits of AI. Distributing access fairly will be a gradual process, but it will also spark new ideas and improvements suited to local markets. This would prevent a digital divide and ensure that the AI future is not determined exclusively by wealthy economies.
- Population-Level Skilling and Talent Readiness
AI will have an impact on workplaces worldwide, so societies must equip their people not only with existing job skills but also with future technology-based skills. Massive AI literacy programmes, stronger digital competencies, and cross-disciplinary education are all essential. Preparing human resources for roles in AI governance, data ethics, cybersecurity, and emerging technologies will help prevent large-scale displacement while promoting growth that is genuinely inclusive.
- Responsible and Human-Centric Deployment
Adopting responsible AI ensures that technology is used for social good and not just for profit. Human-centred AI directs its applications to sectors such as healthcare, agriculture, education, disaster management, and public services, especially in underserved regions of the world that most need these innovations. This strategy helps ensure that technological progress improves human life rather than worsening conditions for the poor or removing responsibility from humans.
Need for a Global AI Governance Framework
- Why International Cooperation Matters
AI governance cannot be fragmented. Different national regulations lead to the creation of loopholes that allow bad actors to operate in different countries. Hence, global coordination and harmonisation of safety frameworks is of utmost importance. A single AI governance framework should stipulate:
- Clear prohibitions on the misuse of AI for terrorism, deepfakes, and cybercrime.
- Transparency and algorithm audits as compulsory requirements.
- Independent global oversight bodies.
- Ethical codes of conduct in harmony with humanitarian laws.
A framework like this makes it clear that AI will be shaped by common values rather than by the influence of particular interest groups.
- Talent Mobility and Open Innovation
If AI is to be universally accepted, then global mobility of talent must be made easier. The flow of innovation takes place when the interaction between researchers, engineers, and policymakers is not limited by borders.
- AI, Equity, and Global Development
The rapid concentration of technology in a few hands risks widening inequality among countries. Most developing countries face poor infrastructure and a lack of education and digital resources. Treating them only as technology markets, and not as partners in innovation, isolates them even further from the mainstream of development. A human-centred, technology-driven approach to AI must recognise that global progress depends on the participation of the whole world. The COVID-19 pandemic, for example, demonstrated how technology can be a major factor in building healthcare and crisis resilience. When used fairly, AI has a significant role to play in realising the Sustainable Development Goals.
Conclusion
AI stands at a crucial juncture. It can either enhance human progress or amplify digital risks. Making AI a global good goes beyond sophisticated technology; it requires moral leadership, inclusive governance, and collaboration between countries. Preventing misuse through openness, human supervision, and responsible policies will be vital to keeping public trust. Properly guided, AI can make society more resilient, speed up development, and empower future generations. The future we choose is determined by how responsibly we act today.
As PM Modi stated, ‘AI should serve as a global good, and at the same time nations must stay vigilant against its misuse’. CyberPeace reinforces this vision by advocating responsible innovation and a secure digital future for all.
References
- https://www.hindustantimes.com/india-news/ai-a-global-good-but-must-guard-against-misuse-pm-101763922179359.html
- https://www.deccanherald.com/india/g20-summit-pm-modi-goes-against-donald-trumps-stand-seeks-global-governance-for-ai-3807928
- https://timesofindia.indiatimes.com/india/need-global-compact-to-prevent-ai-misuse-pm-modi/articleshow/125525379.cms