# Factcheck: Allu Arjun visits Shiva temple after success of Pushpa 2? No, the image is from 2017
Executive Summary:
Recently, a viral post on social media claimed that actor Allu Arjun visited a Shiva temple to offer prayers in celebration of the success of his film, Pushpa 2. The post features an image of him at the temple. However, our investigation has determined that the photo is from 2017 and has no connection to the film's release.

Claim:
The claim states that Allu Arjun recently visited a Shiva temple to give thanks for the success of Pushpa 2, supported by a photograph that allegedly captures the moment.

Fact Check:
The image circulating on social media with the claim that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is misleading.
After conducting a reverse image search, we confirmed that this photograph is from 2017, taken during the actor's visit to the Tirumala Temple for a personal event, well before Pushpa 2 was ever announced. The context has been altered to falsely connect it to the film's success. Additionally, there is no credible evidence or recent reports to support the claim that Allu Arjun visited a temple for this specific reason, making the assertion entirely baseless.
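Reverse image search tools typically work by comparing compact perceptual fingerprints of images rather than raw pixels, which is how a re-circulated photo can be matched against an older original. The sketch below is an illustrative average-hash (aHash) implementation, one common fingerprinting scheme; the pixel grids are hypothetical stand-ins for real downscaled 8x8 grayscale images, not the actual viral photo.

```python
# Illustrative average-hash (aHash) sketch: the kind of perceptual
# fingerprint reverse-image-search tools use to match a viral photo
# against an older original. The pixel grids below are hypothetical
# stand-ins for real downscaled 8x8 grayscale images.

def average_hash(pixels):
    """Build a 64-bit perceptual hash from an 8x8 grayscale grid."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the grid's mean.
    return sum(1 << i for i, p in enumerate(flat) if p > avg)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

# A hypothetical "2017 original", a re-circulated copy (slightly
# brightened, as re-uploads often are), and an unrelated image.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
reupload = [[min(255, p + 10) for p in row] for row in original]
unrelated = [[(r * c * 7) % 256 for c in range(8)] for r in range(8)]

d_same = hamming_distance(average_hash(original), average_hash(reupload))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_same, d_diff)  # near-duplicates hash close; unrelated images hash far apart
```

Production systems use more robust fingerprints and large indexes of crawled images, but the principle is the same: a re-uploaded copy of an old photo lands very close to the original in hash space, which is how its true date can be traced.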

Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
Conclusion:
The claim that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is false. The circulating image actually dates from 2017. This case illustrates how misinformation can spread when an old photo is used to construct a misleading story.
- Claim: The image claims Allu Arjun visited Shiva temple after Pushpa 2’s success.
- Claimed On: Facebook
- Fact Check: False and Misleading

Introduction
Cybercrime transcends borders and is growing at a fast pace. According to Kaspersky, cybercrime is criminal activity that either targets or operates through a computer, a computer network or a networked device. In an era of globalisation and a digitally connected world, international cybercrime has increased. It may be carried out by individuals or organisations, for personal or political objectives, and sometimes aims to sabotage networks for motives other than financial gain. Many cybercriminals operate across national boundaries and are considered a global threat; they are also highly technically adept and employ cutting-edge strategies.
The 2023 Global Risks Report points to worsening geopolitical tensions that have increased advanced persistent threats (APTs), which are evolving and becoming ubiquitous worldwide. Christine Lagarde, the president of the European Central Bank and former head of the International Monetary Fund (IMF), cautioned in 2020 that a cyber attack could trigger a severe economic crisis. New technologies and hostile actors have grown at an exceptional pace over the last few decades, and according to the World Economic Forum (WEF), cybercrime has risen up the agenda of nation-states, businesses and global organisations.
The Role of the United Nations Ad Hoc Committee
The United Nations (UN) has a major initiative underway to develop a new and more inclusive approach to addressing cybercrime and is presently negotiating a new convention on the subject. The convention seeks to enhance global collaboration in the fight against cybercrime. In December 2019, the UN passed resolution 74/247, which established an open-ended ad hoc committee (AHC) entrusted with drafting a comprehensive global convention on countering the use of information and communication technologies (ICTs) for criminal purposes.
If adopted by the UN General Assembly (UNGA), the cybercrime treaty would be the first legally binding UN instrument on a cyber issue. It could become a crucial international legal framework for global collaboration on prosecuting cybercriminals and on preventing and investigating cybercrime. There have been numerous other national and international measures to counter the criminal use of ICTs, but the UN treaty is intended both to tackle cybercrime and to enhance partnership and coordination between states. The Ad Hoc Committee's negotiations with member states are due to be completed by early 2024, with the treaty to be adopted at the UNGA in September 2024.
The negotiations are complex, however. Some countries endorse a treaty that criminalises both cyber-dependent offences and a comprehensive spectrum of cyber-enabled crimes. Proposals from Russia, Belarus, China, Nicaragua and Cuba have included highly controversial recommendations. India has backed criminalising offences associated with ‘cyber terrorism’, and its suggestions to the UN Ad Hoc Committee are in line with its domestic regulatory strategy. The US, Japan, the UK, European Union (EU) member states and Australia, by contrast, want the treaty to focus on core cyber-dependent crimes.
Nonetheless, while a new treaty could become a practical instrument in the international fight against cybercrime, it must align with existing international bodies and frameworks operating in similar areas. The convention will supplement the Budapest Convention on Cybercrime, which was drafted in the late 1990s and opened for signature in Budapest in 2001.
Conclusion
According to Cybersecurity Ventures, global cybercrime costs are expected to grow by 15 per cent per year over the next five years, reaching USD 10.5 trillion annually by 2025, up from USD 3 trillion in 2015. The UN cybercrime convention aims to be more global in scope. That being the case, next-generation tools with state-of-the-art technology are needed to deal with new cybercrimes and cyber warfare. The damage cybercrime inflicts across nation-states is beyond calculation: it could trigger a major shock to the global economy and threaten countries' political interests. Addressing it is therefore crucial for governments and international organisations, and collaboration between public and private establishments and law enforcement mechanisms must be strengthened. An appropriately designed policy is hence the need of the hour.
References
- https://www.kaspersky.co.in/resource-center/threats/what-is-cybercrime
- https://www.cyberpeace.org/
- https://www.interpol.int/en/Crimes/Cybercrime
- https://www.bizzbuzz.news/bizz-talk/ransomware-attacks-on-startups-msmes-on-the-rise-in-india-cyberpeace-foundation-1261320
- https://www.financialexpress.com/business/digital-transformation-cyberpeace-foundation-receives-4-million-google-org-grant-3282515/
- https://www.chathamhouse.org/2023/08/what-un-cybercrime-treaty-and-why-does-it-matter
- https://www.weforum.org/agenda/2023/01/global-rules-crack-down-cybercrime/
- https://www.weforum.org/publications/global-risks-report-2023/
- https://www.imf.org/external/pubs/ft/fandd/2021/03/global-cyber-threat-to-financial-systems-maurer.htm
- https://www.eff.org/issues/un-cybercrime-treaty#:~:text=The%20United%20Nations%20is%20currently,of%20billions%20of%20people%20worldwide.
- https://cybersecurityventures.com/hackerpocalypse-cybercrime-report-2016/
- https://www.coe.int/en/web/cybercrime/the-budapest-convention
- https://economictimes.indiatimes.com/tech/technology/counter-use-of-technology-for-cybercrime-india-tells-un-ad-hoc-group/articleshow/92237908.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst
- https://consultation.dpmc.govt.nz/un-cybercrime-convention/principlesandobjectives/supporting_documents/Background.pdf
- https://unric.org/en/a-un-treaty-on-cybercrime-en-route/

Introduction
Generative AI models are significant consumers of the computational resources and energy required to train and run them. While AI is being hailed as a game-changer, cracks beneath the shiny exterior raise significant concerns about its environmental impact. The development, maintenance, and disposal of AI technology all carry a large carbon footprint. Large-scale language and image-generation models in particular rely on data centres powered by electricity, often from non-renewable sources, which exacerbates environmental concerns and contributes to substantial carbon emissions.
As AI adoption grows, improving energy efficiency becomes essential. Optimising algorithms, reducing model complexity, and using more efficient hardware can lower the energy footprint of AI systems. Additionally, transitioning to renewable energy sources for data centers can help mitigate their environmental impact. There is a growing need for sustainable AI development, where environmental considerations are integral to model design and deployment.
A breakdown of how generative AI contributes to environmental risks and the pressing need for energy efficiency:
- During the training phase, generative AI consumes a substantial amount of electricity: vast amounts of computational power, often extensive GPU clusters, run for weeks or even months. The inference phase that follows, when these models are deployed to serve real-time requests, can be just as energy-intensive once the millions of users of generative AI are taken into account.
- The energy used to train and deploy AI models often comes from non-renewable sources, which adds to the carbon footprint. The data centres where generative AI computations take place are a significant source of carbon emissions if they rely on fossil fuels. According to a study by MIT, training a single AI model can produce emissions equivalent to around 300 round-trip flights between New York and San Francisco. According to a report by Goldman Sachs, data centres will use 8% of US power by 2030, up from 3% in 2022, as their energy demand grows by 160%.
- The production and disposal of hardware (GPUs, servers) necessary for AI contribute to environmental degradation. Mining for raw materials and disposing of electronic waste (e-waste) are additional environmental concerns. E-waste contains hazardous chemicals, including lead, mercury, and cadmium, that can contaminate soil and water supplies and endanger both human health and the environment.
Efforts by the Industry to reduce the environmental risk posed by Gen AI
There are several examples of companies making efforts to reduce their carbon footprint and energy consumption and to become more environmentally friendly in the long run. Some of these efforts are outlined below:
- Google's Tensor Processing Units (TPUs) are designed specifically for machine learning tasks and offer a higher performance-per-watt ratio than traditional GPUs, leading to more efficient AI computations.
- Researchers at Microsoft have developed a so-called “1-bit” architecture that can make LLMs up to 10 times more energy efficient than current leading systems. It simplifies the models’ calculations by reducing weight values to 0 or 1, slashing power consumption without sacrificing performance.
- OpenAI has been working on optimising the efficiency of its models, exploring ways to reduce the environmental impact of AI, using renewable energy where possible, and researching more efficient training methods and model architectures.
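To make the low-bit idea above concrete, the sketch below quantises a weight vector to the ternary set {-1, 0, +1} with a per-tensor scale, one common scheme in the low-bit LLM literature. This is an assumption-laden simplification for exposition, not Microsoft's actual architecture; the weight values are invented for illustration.

```python
# Simplified sketch of low-bit weight quantisation in the spirit of
# "1-bit" LLM research. NOT Microsoft's implementation: this maps each
# float weight to {-1, 0, +1} plus one shared scale factor, so each
# weight needs ~1.6 bits of storage instead of 16 or 32.

def quantize_ternary(weights):
    """Quantise float weights to ternary values with an abs-mean scale."""
    scale = sum(abs(w) for w in weights) / len(weights) or 1.0
    quant = [max(-1, min(1, round(w / scale))) for w in weights]
    return quant, scale

def dequantize(quant, scale):
    """Recover approximate float weights for computation."""
    return [q * scale for q in quant]

# Hypothetical weights from some layer of a model.
weights = [0.42, -1.3, 0.05, 0.9, -0.02, -0.7]
quant, scale = quantize_ternary(weights)
print(quant)   # every entry is -1, 0 or 1
print(scale)   # one float shared by the whole tensor
```

Because multiplication by -1, 0 or 1 reduces to sign flips and skips, matrix multiplies over such weights need far fewer energy-hungry floating-point operations, which is where the efficiency gain comes from.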
Policy Recommendations
We advocate a sustainable product development process and stress the need for energy efficiency in AI models to counter their environmental impact. These improvements would not only benefit the environment but also contribute to the sustainable long-term development of generative AI. Some suggestions are as follows:
- AI development needs to adopt a climate-justice framework informed by diverse contexts and perspectives, working in tandem with the UN’s Sustainable Development Goals (SDGs).
- Developing more efficient algorithms that require less computational power for both training and inference can reduce energy consumption, and designing more energy-efficient hardware, such as specialised AI accelerators and next-generation GPUs, can help mitigate the environmental impact.
- Transitioning to renewable energy sources (solar, wind, hydro) can significantly reduce the carbon footprint associated with AI. Responsible hardware lifecycles matter too: the World Economic Forum (WEF) projects that by 2050, the total amount of e-waste generated will have surpassed 120 million metric tonnes.
- Employing techniques like model compression, which reduces the size of AI models without sacrificing performance, can lead to less energy-intensive computations. Optimized models are faster and require less hardware, thus consuming less energy.
- Implementing federated learning approaches, in which models are trained across decentralised devices rather than in centralised data centres, can distribute the energy load more evenly and reduce the overall environmental impact.
- Enhancing the energy efficiency of data centers through better cooling systems, improved energy management practices, and the use of AI for optimizing data center operations can contribute to reduced energy consumption.
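The decentralised-training suggestion above is commonly realised as federated averaging (FedAvg): clients train locally on their own data and share only model weights, which the server averages. The following is a minimal sketch under toy assumptions (a one-parameter linear model and synthetic client data); real deployments train full neural networks and add secure aggregation.

```python
# Minimal federated-averaging (FedAvg) sketch for decentralised training.
# Clients train locally on private data and share only weights; the
# server averages them. The model here is a toy one-parameter linear
# model y = w * x, and the client datasets are synthetic.

def local_step(w, data, lr=0.1):
    """One epoch of gradient descent on a client's private data."""
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)^2
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    """Each client trains from the global weight; the server averages."""
    local_weights = [local_step(global_w, d) for d in client_datasets]
    return sum(local_weights) / len(local_weights)

# Three synthetic clients whose data all follow y = 2x; no raw data
# ever leaves a client, only the trained weight.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(0.5, 1.0), (3.0, 6.0)],
    [(1.5, 3.0)],
]

w = 0.0
for _ in range(20):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward the true slope 2.0
```

The energy argument is that training load is spread over many edge devices instead of concentrating in one data centre, and communication is limited to compact weight updates rather than raw datasets.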
Final Words
The UN Sustainable Development Goals (SDGs) are as crucial for the AI industry as for any other, because they guide responsible innovation. Aligning AI development with the SDGs will ensure ethical practices, promoting sustainability, equity, and inclusivity. This alignment fosters global trust in AI technologies, encourages investment, and drives solutions to pressing global challenges, such as poverty, education, and climate change, ultimately creating a positive impact on society and the environment. Today, however, AI consumes enormous amounts of power without using it efficiently. If AI and its derivatives continue to stress the environment in this manner, they will strain clean water resources and deepen the industry's reliance on non-renewable power generation, adding to its already large carbon footprint.
References
- https://cio.economictimes.indiatimes.com/news/artificial-intelligence/ais-hunger-for-power-can-be-tamed/111302991
- https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/
- https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
- https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/
- https://insights.grcglobalgroup.com/the-environmental-impact-of-ai/

Introduction
The integration of Artificial Intelligence into our daily workflows has compelled global policymakers to develop legislative frameworks to govern its impact efficiently. The question that we arrive at here is: While AI is undoubtedly transforming global economies, who governs the transformation? The EU AI Act was the first of its kind legislation to govern Artificial Intelligence, making the EU a pioneer in the emerging technology regulation space. This blog analyses the EU's Draft AI Rules and Code of Practice, exploring their implications for ethics, innovation, and governance.
Background: The Need for AI Regulation
AI adoption is happening at a rapid pace: AI is projected to contribute $15.7 trillion to the global economy by 2030, and the AI market size is expected to grow by at least 120% year-over-year. These statistics are frequently cited alongside concrete examples of AI risks (e.g., bias in recruitment tools, misinformation spread through deepfakes). Unlike the U.S., which relies on sector-specific regulations, the EU proposes a unified framework to address AI's challenges comprehensively, filling the vacuum that exists in the governance of emerging technologies such as AI. It should be noted that the General Data Protection Regulation (GDPR) has been a success, and its global influence on data privacy laws started a domino effect for the creation of privacy regulations all over the world. This precedent underlines the EU's proactive, population-centric approach to regulation.
Overview of the Draft EU AI Rules
The Draft General-Purpose AI Code of Practice details how the AI Act's rules will apply to providers of general-purpose AI models, including those with systemic risks. The European AI Office facilitated the drawing up of the code, which was chaired by independent experts and involved nearly 1,000 stakeholders, EU member state representatives, and both European and international observers.
The first draft of the EU’s General-Purpose AI Code of Practice, established under the EU AI Act, was published on 14 November 2024. As per Article 56 of the AI Act, the code outlines the rules that operationalise the requirements set out for general-purpose AI (GPAI) models under Article 53 and for GPAI models with systemic risks under Article 55. The AI Act is legislation grounded in product safety and relies on harmonised standards to support compliance. These harmonised standards are essentially sets of operational rules established by the European standardisation bodies: the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC) and the European Telecommunications Standards Institute (ETSI). Industry experts, civil society and trade unions help translate the requirements set out by EU sectoral legislation into the specific mandates issued by the European Commission. The AI Act obliges developers, deployers and users of AI to meet mandates on transparency, risk management and compliance mechanisms.
The Code of Practice for General Purpose AI
The most popular applications of GPAI include ChatGPT and other foundational models such as Microsoft's Copilot, Google's BERT and Meta AI's Llama, all of which are under continual development and upgrades. The 36-page draft Code of Practice for General-Purpose AI is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid penalties. It focuses on transparency, copyright compliance, risk assessment, and technical and governance risk mitigation as the core areas for companies developing GPAI models, and it lays down guidelines intended to enable greater transparency about what goes into developing them.
The Draft Code's provisions for risk assessment focus on preventing cyber attacks, large-scale discrimination, nuclear and misinformation risks, and the risk of the models acting autonomously without oversight.
Policy Implications
The EU’s Draft AI Rules and Code of Practice represent a bold step in shaping the governance of general-purpose AI, positioning the EU as a global pioneer in responsible AI regulation. By prioritising harmonised standards, ethical safeguards, and risk mitigation, these rules aim to ensure AI benefits society while addressing its inherent risks. While the code is a welcome step, the compliance burden on MSMEs and startups could hinder innovation, and the voluntary nature of the code raises concerns about accountability. Additionally, harmonising these ambitious standards with varying global frameworks, especially in regions like the U.S. and India, presents a significant challenge to achieving a cohesive global approach.
Conclusion
The EU’s initiative to regulate general-purpose AI aligns with its legacy of proactive governance, setting the stage for a transformative approach to balancing innovation with ethical accountability. However, challenges remain. Striking the right balance is crucial to avoid stifling innovation while ensuring robust enforcement and inclusivity for smaller players. Global collaboration is the next frontier. As the EU leads, the world must respond by building bridges between regional regulations and fostering a unified vision for AI governance. This demands active stakeholder engagement, adaptive frameworks, and a shared commitment to addressing emerging challenges in AI. The EU’s Draft AI Rules are not just about regulation, they are about leading a global conversation.
References
- https://indianexpress.com/article/technology/artificial-intelligence/new-eu-ai-code-of-practice-draft-rules-9671152/
- https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
- https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft#:~:text=Drafting%20of%20the%20Code%20of%20Practice%20is%20taking%20place%20under,the%20drafting%20of%20the%20code.
- https://copyrightblog.kluweriplaw.com/2024/12/16/first-draft-of-the-general-purpose-ai-code-of-practice-has-been-released/