# Factcheck: Allu Arjun visits Shiva temple after success of Pushpa 2? No, image is from 2017
Executive Summary:
Recently, a viral post on social media claimed that actor Allu Arjun visited a Shiva temple to offer prayers in celebration of the success of his film, Pushpa 2. The post features an image of him visiting the temple. However, an investigation has determined that this photo is from 2017 and does not relate to the film's release.

Claims:
The claim states that Allu Arjun recently visited a Shiva temple to express his thanks for the success of Pushpa 2; the post features a photograph that allegedly captures this moment.

Fact Check:
The image circulating on social media, presented as evidence that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2, is misleading.
After conducting a reverse image search, we confirmed that this photograph is from 2017, taken during the actor's visit to the Tirumala Temple for a personal event, well before Pushpa 2 was ever announced. The context has been altered to falsely connect it to the film's success. Additionally, there is no credible evidence or recent reports to support the claim that Allu Arjun visited a temple for this specific reason, making the assertion entirely baseless.
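For readers curious how such a comparison can be automated, perceptual hashing is one simple technique fact-checkers use to test whether a viral image is a resized or recompressed copy of an older photo. The sketch below is a minimal Python example assuming the Pillow and imagehash libraries; the file names are hypothetical placeholders for locally saved copies of the two images.

```python
# Minimal sketch: test whether a viral image matches an archived photo
# using perceptual hashing. File names are hypothetical placeholders.
from PIL import Image
import imagehash

viral = imagehash.phash(Image.open("viral_post_image.jpg"))
archived = imagehash.phash(Image.open("temple_visit_2017.jpg"))

# A small Hamming distance means the two files almost certainly show
# the same underlying photograph, even after resizing or recompression.
distance = viral - archived
print(f"Hamming distance: {distance}")
if distance <= 5:
    print("Match: the 'new' photo appears to be the old one.")
else:
    print("No clear match; further verification needed.")
```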

Conclusion:
The claim that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is false. The image circulating is actually from 2017. This situation illustrates how misinformation can spread when an old photo is used to construct a misleading story. Before sharing viral posts, take a moment to verify the facts. Misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
- Claim: Allu Arjun visited a Shiva temple after Pushpa 2’s success.
- Claimed On: Facebook
- Fact Check: False and Misleading

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration carry risks. Consider this response from Google’s AI chatbot, Gemini, to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they’re dangerous. They underscore the urgent need to address the risks of unrestrained AI reliance.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fuelled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4 and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes and improve through trial and error. However, despite their efficiency, these AI models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
AI algorithms sometimes produce outputs that are not grounded in their training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. Hallucinations can take the form of factual inaccuracies, irrelevant information or even contextually inappropriate responses.
The resulting errors range from the harmless, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
A significant source of hallucination in machine learning algorithms is bias in the input they receive. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect those biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate the output of an AI model by tweaking the input data in a subtle manner; an illustrative sketch follows.
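To make the adversarial-attack risk concrete, the sketch below implements the classic fast gradient sign method (FGSM), in which an imperceptibly small, gradient-aligned perturbation is added to an input to push a classifier toward a wrong prediction. This is a generic PyTorch illustration of the technique, not a reconstruction of any specific real-world attack; `model`, `x` and `y` stand in for any differentiable classifier and a labelled input.

```python
# Illustrative FGSM adversarial perturbation (PyTorch).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return x plus a small gradient-sign perturbation that nudges
    the model toward misclassifying it."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that *increases* the loss.
    return (x + epsilon * x.grad.sign()).detach()

# Usage, assuming a trained classifier and a correctly classified input:
#   x_adv = fgsm_perturb(model, x, y)
#   model(x_adv).argmax(dim=1) may now differ from y, even though
#   x_adv is visually indistinguishable from x.
```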
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards through a risk-based, transparency-focused approach. The U.S. emphasises ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards, reflecting the diversity of national approaches to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin of error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability, creating an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensuring that users are informed about AI’s capabilities and limitations; transparent communication is key to this.
- Implementing regular audits and rigorous quality checks to maintain high standards and prevent lapses (a minimal sketch of such a check follows this list).
- Establishing robust liability mechanisms to address harms caused by AI-generated misinformation, which fosters trust and accountability.
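As a toy illustration of the audit idea above, a regression-style evaluation harness can flag when a model’s answers drift from a vetted reference set. The sketch below is a minimal Python example; the `ask_model` function is a hypothetical wrapper around whatever AI system is under audit, and the reference questions are placeholders.

```python
# Toy audit harness: compare model answers against a vetted reference set
# and report the failure rate. `ask_model` is a hypothetical wrapper
# around the AI system under audit.

REFERENCE_SET = [
    ("What is the boiling point of water at sea level in Celsius?", "100"),
    ("How many days are there in a leap year?", "366"),
]

def audit(ask_model) -> float:
    """Return the fraction of reference questions answered incorrectly."""
    failures = 0
    for question, expected in REFERENCE_SET:
        answer = ask_model(question)
        if expected not in answer:
            failures += 1
            print(f"FAIL: {question!r} -> {answer!r} (expected {expected!r})")
    return failures / len(REFERENCE_SET)

# A failure rate that rises between scheduled audits is a signal to
# investigate before nonsensical outputs reach end users.
```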
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth in AI development offers immense opportunities, but it must be managed responsibly. Overregulation can stifle innovation; on the other hand, a lax approach could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration between stakeholders such as governments, academia, and the private sector is important: together they can establish guidelines, promote transparency, and create liability mechanisms. Regular audits and user education can build trust in AI systems. Furthermore, when making regulatory policy, policymakers need to prioritise user safety and trust without hindering creativity.
By fostering ethical AI development and enabling innovation, we can create a future that benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21

Introduction
In today’s interconnected, online world, data breaches and cyberattacks have become a reality for organisations across industries. In a recent case, Scandinavian Airlines (SAS) experienced a cyberattack that resulted in the exposure of customer details, highlighting the critical importance of protecting customer privacy. The incident is a wake-up call for airlines and other businesses to evaluate their cybersecurity measures and learn valuable lessons to safeguard customers’ data. In this blog, we will explore the incident and discuss strategies for protecting customer privacy in the age of digitalisation.
Analysing the backdrop
The incident has been a shock for the aviation industry: SAS Scandinavian Airlines was the victim of a cyberattack that compromised consumer data. Let’s understand the motive of the cybercriminals and the techniques they used:
Motive Behind the Attack: Understanding the reasons that may have driven the criminals is critical to comprehending the context of the Scandinavian Airlines cyberattack. Financial gain, geopolitical conflict, activism, and personal vendettas are common motivators for cybercriminals. Identifying the purpose of the attack can provide insight into the attacker’s aims and the possible impact on both the targeted organisation and its consumers.
Attack Vector and Techniques: Understanding the attack vector and strategies used by cyber attackers reveals the level of sophistication involved and the possible weaknesses in an organisation’s cybersecurity defences. The Scandinavian Airlines cyberattack might have involved phishing, spyware, ransomware, or the exploitation of software vulnerabilities. Analysing these tactics allows organisations to strengthen their security against similar attacks.
Impact on Victims: The victims of the Scandinavian Airlines (SAS) cyberattack, including customers and individuals related to the company, have suffered substantial consequences, as data breaches of this kind leak personal information.
1) Financial Losses and Fraudulent Activities: One of the most immediate and upsetting consequences of a cyberattack is the possibility of financial loss. Exposed personal information, such as credit card numbers, can be used by hackers to carry out illegal activities such as unauthorised transactions and identity theft. Victims may experience financial difficulties and need to spend time and money resolving these concerns.
2) Concerns About Privacy and Personal Security: A breach of personal data can significantly impact victims’ privacy and personal security. The disclosed information, including names, addresses, and contact details, might be exploited for nefarious purposes such as targeted phishing or physical harassment. Victims may feel increased anxiety about their safety and privacy, which can disrupt their everyday lives and cause mental distress.
3) Reputational Damage and Trust Issues: The cyberattack may cause reputational harm to persons linked with Scandinavian Airlines, such as employees or partners. The breach may diminish consumers’ and stakeholders’ faith in the organisation, leading to a negative perception of its capacity to protect personal information. This loss of trust might have long-term consequences for the affected people’s professional and personal relationships.
4) Emotional Stress and Psychological Impact: The psychological impact of a cyberattack can be severe. The fear, worry, and sense of violation induced by having personal information exposed can create emotional stress and psychological suffering. Victims may experience feelings of vulnerability, loss of control, and distrust toward digital platforms, potentially harming their overall quality of life.
5) Time and Effort Required for Remediation: Addressing the repercussions of a cyberattack demands significant time and effort from the victims. They may need to contact financial institutions, reset passwords, monitor accounts for unusual activity, and use credit monitoring services. Resolving the consequences of a data breach can be a difficult and time-consuming process, adding stress and inconvenience to victims’ lives.
6) Secondary Impacts: The impacts of a cyberattack can continue beyond the immediate consequences. Victims may face trouble acquiring credit or insurance, difficulties finding future work, and continuous worry about the exploitation of their personal information. These secondary effects can seriously affect victims’ finances and general well-being.
Beyond all this, the lost trust will take time to rebuild.

Takeaways from this attack
The cyberattack on Scandinavian Airlines (SAS) is a sharp reminder of the ever-present and growing menace of cybercrime. The event offers crucial insights that businesses and individuals can use to strengthen their cybersecurity defences. Below, we examine the lessons learned from the attack and the steps that may be taken to improve cybersecurity and reduce future risks. Some of the key points to consider are as follows:
Proactive Risk Assessment and Vulnerability Management: The cyberattack on Scandinavian Airlines emphasises the significance of regular risk assessments and vulnerability management. Organisations must proactively identify and fix possible system and network vulnerabilities. Regular security audits, penetration testing, and vulnerability assessments can help identify flaws before bad actors exploit them.
Strong Security Measures and Best Practices: Guarding against cyberattacks requires effective security measures and adherence to cybersecurity best practices. The lessons from the Scandinavian Airlines cyberattack emphasise the importance of effective firewalls, up-to-date antivirus software, secure configurations, frequent software patching, and strong password rules. Using multi-factor authentication and encryption technologies for sensitive data can also considerably improve security; a small illustration of encryption at rest follows.
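As one small, concrete instance of the “encrypt sensitive data” advice, the sketch below encrypts a customer record at rest using Fernet, the symmetric authenticated encryption scheme from the widely used Python `cryptography` library. It is a minimal illustration with assumed sample data, not a description of SAS’s actual systems.

```python
# Minimal sketch: encrypting a customer record at rest with Fernet
# (symmetric authenticated encryption from the `cryptography` library).
from cryptography.fernet import Fernet

# In production the key would live in a key-management service,
# never alongside the data it protects.
key = Fernet.generate_key()
f = Fernet(key)

record = b'{"name": "Jane Doe", "frequent_flyer_id": "SK123456"}'
token = f.encrypt(record)    # ciphertext that is safe to store
original = f.decrypt(token)  # recoverable only with the key
assert original == record
```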
Employee Training and Awareness: Human error is frequently a significant factor in cyberattacks. Organisations should prioritise employee training and awareness programmes that educate employees about phishing schemes, social engineering methods, and safe internet practices. By cultivating a culture of cybersecurity awareness, employees can become the first line of defence against possible attacks.
Data Protection and Privacy Measures: Protecting consumer data should be a key priority for businesses. The lessons from the Scandinavian Airlines cyberattack emphasise the significance of effective data protection measures, such as encryption and access controls. Adhering to data privacy standards and maintaining safe data storage and transfer can reduce the risks connected with data breaches.
Collaboration and Information Sharing: The Scandinavian Airlines cyberattack emphasises the need for collaboration and information sharing within the cybersecurity community. Organisations should actively share threat intelligence, cooperate with industry partners, and stay current on developing cyber threats. Sharing information and experiences helps build a collective defence against cybercrime.
Conclusion
The Scandinavian Airlines cyberattack is a reminder that cybersecurity must be a key concern for organisations and individuals alike. By learning from this incident and adopting the lessons above, organisations can improve their cybersecurity safeguards, proactively discover vulnerabilities, and respond effectively to prospective attacks. Building a strong cybersecurity culture, frequently upgrading security practices, and encouraging cooperation within the cybersecurity community are all critical steps toward a more robust digital world. Through constant monitoring and proactive action, we can aim to stay one step ahead of criminals and preserve our important information assets.
Introduction
The Indian Cabinet has approved a comprehensive national-level IndiaAI Mission with a budget outlay of Rs. 10,371.92 crore. The mission aims to strengthen the Indian AI innovation ecosystem by democratising computing access, improving data quality, developing indigenous AI capabilities, attracting top AI talent, enabling industry collaboration, providing startup risk capital, ensuring socially impactful AI projects, and bolstering ethical AI. The mission will be implemented by the 'IndiaAI' Independent Business Division (IBD) under the Digital India Corporation (DIC) and consists of several components, such as IndiaAI Compute Capacity, the IndiaAI Innovation Centre (IAIC), the IndiaAI Datasets Platform, the IndiaAI Application Development Initiative, IndiaAI Future Skills, IndiaAI Startup Financing, and Safe & Trusted AI, over the next 5 years.
This financial outlay is intended to be fulfilled through a public-private partnership model to ensure a structured implementation of the IndiaAI Mission. The main objective is to create and nurture an ecosystem for India’s AI innovation, and the mission is intended to act as a catalyst for shaping the future of AI for India and the world. AI has the potential to become an active enabler of the digital economy, and the Indian government aims to harness its full potential to benefit its citizens and drive the growth of its economy.
Key Objectives of India's AI Mission
● With the advancements in data collection, processing and computational power, intelligent systems can be deployed in varied tasks and decision-making to enable better connectivity and enhance productivity.
● India’s AI Mission will concentrate on benefiting India and addressing societal needs in primary areas of healthcare, education, agriculture, smart cities and infrastructure, including smart mobility and transportation.
● This mission will work with extensive academia-industry interactions to ensure the development of core research capability at the national level. This initiative will involve international collaborations and efforts to advance technological frontiers by generating new knowledge and developing and implementing innovative applications.
The strategies for implementing the IndiaAI Mission are public-private partnerships, skilling initiatives, and AI policy and regulation. An example of the work towards public-private partnership is the pre-bid meeting the IT Ministry hosted on 29th August 2024, which saw industrial participation from Nvidia, Intel, AMD, Qualcomm, Microsoft Azure, AWS, Google Cloud and Palo Alto Networks.
Components of IndiaAI Mission
The IndiaAI Compute Capacity: The IndiaAI Compute pillar will build a high-end scalable AI computing ecosystem to cater to India's rapidly expanding AI start-ups and research ecosystem. The ecosystem will comprise AI compute infrastructure of 10,000 or more GPUs, built through public-private partnerships. An AI marketplace will offer AI as a service and pre-trained models to AI innovators.
The IndiaAI Innovation Centre will undertake the development and deployment of indigenous Large Multimodal Models (LMMs) and domain-specific foundational models in critical sectors. The IndiaAI Datasets Platform will streamline access to quality non-personal datasets for AI innovation.
The IndiaAI Future Skills pillar will mitigate barriers to entry into AI programs and increase AI courses in undergraduate, master-level, and Ph.D. programs. Data and AI Labs will be set up in Tier 2 and Tier 3 cities across India to impart foundational-level courses.
The IndiaAI Startup Financing pillar will support and accelerate deep-tech AI startups, providing streamlined access to funding for futuristic AI projects.
Recognising the need for adequate guardrails to advance the responsible development, deployment, and adoption of AI, the Safe & Trusted AI pillar will enable the implementation of responsible AI projects and the development of indigenous tools and frameworks, self-assessment checklists for innovators, and other guidelines and governance frameworks.
CyberPeace Considerations for the IndiaAI Mission
● Data privacy and security are paramount as emerging privacy instruments aim to ensure ethical AI use. Addressing bias and fairness in AI remains a significant challenge, especially with poor-quality or tampered datasets that can lead to flawed decision-making, posing risks to fairness, privacy, and security.
● Geopolitical tensions and export control regulations restrict access to cutting-edge AI technologies and critical hardware, delaying progress and impacting data security. In India, where multilingualism and regional diversity are key characteristics, the unavailability of large, clean, and labeled datasets in Indic languages hampers the development of fair and robust AI models suited to the local context.
● Infrastructure and accessibility pose additional hurdles in India’s AI development. The country faces challenges in building computing capacity, with delays in procuring essential hardware, such as GPUs like Nvidia’s A100 chip, hindering businesses, particularly smaller firms. AI development relies heavily on robust cloud computing infrastructure, which remains in its infancy in India. While initiatives like AIRAWAT signal progress, significant gaps persist in scaling AI infrastructure. Furthermore, the scarcity of skilled AI professionals is a pressing concern, alongside the high costs of implementing AI in industries like manufacturing. Finally, the growing computational demands of AI lead to increased energy consumption and environmental impact, raising concerns about balancing AI growth with sustainable practices.
Conclusion
We advocate for ethical and responsible AI development and adoption to ensure ethical usage, safeguard privacy, and promote transparency. By setting clear guidelines and standards, the nation can harness AI's potential while mitigating risks and fostering trust. The IndiaAI Mission will propel innovation, build domestic capacity, create highly skilled employment opportunities, and demonstrate how transformative technology can be used for social good and to enhance global competitiveness.
References
● https://pib.gov.in/PressReleasePage.aspx?PRID=2012375