# Factcheck: Allu Arjun visits Shiva temple after success of Pushpa 2? No, image is from 2017
Executive Summary:
Recently, a viral post on social media claimed that actor Allu Arjun visited a Shiva temple to offer prayers in celebration of the success of his film Pushpa 2. The post features an image of him visiting the temple. However, an investigation has determined that the photo is from 2017 and is unrelated to the film's release.

Claims:
The claim states that Allu Arjun recently visited a Shiva temple to express his thanks for the success of Pushpa 2, featuring a photograph that allegedly captures this moment.

Fact Check:
The claim circulating on social media that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is misleading.
After conducting a reverse image search, we confirmed that this photograph is from 2017, taken during the actor's visit to the Tirumala Temple for a personal event, well before Pushpa 2 was ever announced. The context has been altered to falsely connect it to the film's success. Additionally, there is no credible evidence or recent reports to support the claim that Allu Arjun visited a temple for this specific reason, making the assertion entirely baseless.
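Reverse image search engines typically recognise a recirculated photo by comparing compact perceptual fingerprints rather than raw pixels. The sketch below is a minimal, illustrative average-hash in pure Python (not any real search engine's method; the tiny pixel grids are invented for the example): it reduces a grayscale image to a short bit string, so the same photo reposted with minor recompression still produces a matching hash.

```python
# Minimal average-hash (aHash) sketch: how a reverse image search can
# match a recirculated photo. Illustrative only; real engines use far
# more robust features. An "image" here is a grayscale pixel grid.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is >= the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p >= mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# A toy 4x4 "photo" and a slightly recompressed copy of it.
original = [[200, 190, 30, 25],
            [210, 180, 20, 15],
            [40, 35, 220, 230],
            [30, 25, 215, 225]]
recompressed = [[198, 192, 32, 24],
                [211, 178, 21, 16],
                [41, 33, 219, 231],
                [29, 26, 214, 226]]

h1, h2 = average_hash(original), average_hash(recompressed)
print(hamming(h1, h2))  # 0: a tiny distance flags the same source photo
```

A small Hamming distance between hashes indicates the two images almost certainly come from the same original, which is how an old 2017 photo can be traced even after recompression and resharing.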

Conclusion:
The claim that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is false. The image circulating actually dates from 2017. This situation illustrates how misinformation can spread when an old photo is used to construct a misleading story. Before sharing viral posts, take a moment to verify the facts. Misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
- Claim: The image claims Allu Arjun visited Shiva temple after Pushpa 2’s success.
- Claimed On: Facebook
- Fact Check: False and Misleading

Introduction
The recent cyber-attack on Jaguar Land Rover (JLR), one of the world's best-known car makers, has exposed deep vulnerabilities in the interconnected nature of global supply chains. The incident highlights the growing cybersecurity challenges facing industries undergoing digital transformation. With production stopped at several UK factories, supply chains disrupted, and service delayed for customers worldwide, this cyber-attack shows how a single cyber event can ripple into operational, financial, and reputational risks for large businesses.
The Anatomy of a Breakdown
Jaguar Land Rover, a Tata Motors subsidiary, was forced to disable its IT infrastructure because of a cyber-attack over the weekend. The shutdown was an emergency measure to contain the damage, and the disruption to business was serious.
- Production Halted: The car plants at Halewood (Merseyside) and Solihull (West Midlands), along with the engine plant in Wolverhampton, were completely shut down.
- Sales and Distribution: Car sales were significantly impaired during a high-volume registration period in September, although certain transactions still passed through manual procedures.
- Global Effect: The breakdown was not confined to the UK; dealers and repair specialists across the world, including in Australia, were left with inaccessible parts databases.
JLR described the recovery process as "extremely complex", involving a controlled restoration of systems and alternative workarounds for offline services. The immediate impact on suppliers and customers has been massive, and the incident has raised larger questions about the resilience of digital ecosystems in the automotive value chain.
The Human Impact: Beyond JLR's Factories
The implications of the cyber-attack have extended beyond the production lines of JLR:
- Independent Garages: Repair centres such as Nyewood Express in West Sussex reported that they could not access vital parts databases, bringing repair activities to a standstill and leaving clients waiting indefinitely.
- Global Dealers: Land Rover specialists as far away as Tasmania reported total system outages, highlighting global dependency on centralised IT systems.
- Customer Frustration: Regular customers in need of urgent repairs were stranded by the inability to order replacement parts from original manufacturers.
This attack exemplifies the cascading effect of cyber disruptions across interconnected industries, where a single point of failure can paralyse entire ecosystems.
The Culprit: The Hacker Collective
Responsibility for the hack has been claimed by a hacker collective calling itself "Scattered Lapsus$ Hunters." The group says it consists of young English-speaking hackers and has previously targeted blue-chip brands such as Marks & Spencer. While the attackers have not publicly confirmed whether they exfiltrated sensitive information or deployed ransomware, they posted screenshots of internal JLR documents, including troubleshooting guides and system logs, indicating unauthorised access to some of Jaguar Land Rover's core IT systems.
Jaguar Land Rover has stated that it has found no evidence that customer data was compromised; however, the very occurrence of the attack raises serious questions about insider threats, social engineering, and the real effectiveness of cybersecurity governance frameworks.
Cybersecurity Weaknesses and Lessons Learned
The JLR attack exposes several weaknesses common to large-scale manufacturing organisations:
- Centralised IT Dependencies: Today's auto firms depend on worldwide IT systems for operations, logistics, and customer care; a single compromise can lead to broad outages.
- Supply Chain Vulnerabilities: Tier-1 and Tier-2 suppliers rely on OEM systems for ordering and tracking components; a disruption at the OEM level automatically stops their processes.
- Inadequate Incident Visibility: Several suppliers complained of receiving no clear information from JLR, which increased uncertainty and financial loss.
- Rise of Youth Hacking Groups: The involvement of young hacker groups highlights the necessity of active monitoring and community-level cybersecurity awareness initiatives.
Broader Industry Context
This incident falls within a broader pattern of escalating cyber-attacks on the automotive industry, a sector being rapidly digitalised through connected cars, IoT-enabled factories, and cloud-based operations. In 2023, JLR awarded an £800 million contract to Tata Consultancy Services (TCS) for services supporting the company's digital transformation and cybersecurity enhancement. This attack shows that, no matter how much is spent, a poorly conceptualised security programme cannot withstand ever-evolving cyber threats.
What Can Organizations Do? – Cyberpeace Recommendations
To contain risk and build resilience against such events, organisations need a multi-layered approach to cybersecurity:
- Adopt Zero Trust Architecture - Assume breach by default. Verify every user, device, and application before granting access, even inside the internal network.
- Enhance Supply Chain Security - Perform targeted assessments on a routine basis to identify risks among suppliers. Include rigorous cybersecurity provisions in supplier agreements, covering vulnerability disclosure and agreed incident-response timeframes.
- Durable Backups and Restoration - Keep backups isolated and encrypted so operations can continue in the event of ransomware incidents or other system compromises, and test restoration procedures regularly.
- Periodic Red Team Exercises - Simulate cyber-attacks on IT and OT systems to examine if vulnerabilities exist and evaluate current incident response measures.
- Employee Training and Insider Threat Monitoring - With social engineering at the forefront of attack vectors, continuous training and behavioural monitoring are needed to prevent credential compromise.
- Public-Private Partnership - Engage with government agencies and cybersecurity groups to share threat intelligence and adopt best practices aligned with ISO/IEC 27001 and the NIST Cybersecurity Framework.
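The Zero Trust recommendation above amounts to a policy check evaluated on every request, with no implicit trust for traffic that happens to originate "inside" the network. The sketch below is a minimal, hypothetical illustration in Python; the names (`Request`, `POLICY`, `ROLES`, the sample users) are invented for the example and do not describe any real JLR or vendor system.

```python
# Hypothetical Zero Trust gate: every request is checked against
# identity (MFA), device posture, and a per-resource policy.
# All names and data here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool          # identity verified with a second factor
    device_compliant: bool    # patched, managed endpoint
    resource: str

# Per-resource policy: which roles may access which system.
POLICY = {"parts_db": {"dealer", "engineer"},
          "hr_records": {"hr"}}
ROLES = {"alice": "dealer", "bob": "intern"}

def authorize(req: Request) -> bool:
    """Grant access only if identity, device, and policy all check out."""
    if not (req.mfa_passed and req.device_compliant):
        return False  # assume breach: never trust network location alone
    allowed = POLICY.get(req.resource, set())
    return ROLES.get(req.user) in allowed

print(authorize(Request("alice", True, True, "parts_db")))   # True
print(authorize(Request("bob", True, True, "parts_db")))     # False: wrong role
print(authorize(Request("alice", False, True, "parts_db")))  # False: no MFA
```

The design point is that each condition is evaluated on every request: a stolen credential without a compliant device, or a valid device without the right role, is still denied.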
Conclusion
The attack on Jaguar Land Rover is yet another reminder that cybersecurity can no longer be treated as a back-office function; it is a core issue of business continuity. As digital transformation proceeds, the attack surface grows, making organisations ever more attractive targets for cybercriminals. Operational security demands proactive cybersecurity, resilient supply chains, and stakeholders working together. The JLR attack is not an isolated event; it is a warning for the entire automotive sector to maintain security at every level of digitalisation.
References
- https://www.bbc.com/news/articles/c1jzl1lw4y1o
- https://www.theguardian.com/business/2025/sep/07/disruption-to-jaguar-land-rover-after-cyber-attack-may-last-until-october
- https://uk.finance.yahoo.com/news/jaguar-factory-workers-told-stay-073458122.html
Introduction
Union Minister of State for Electronics and IT, Rajeev Chandrasekhar, announced that rules under the Digital Personal Data Protection (DPDP) Act are expected to be released by the end of January. The rules will be subject to a month-long consultation process, but their notification may be delayed until after the general elections in April-May 2024. Chandrasekhar also said that changes to the current IT regulations would be made in the coming days to address the problem of deepfakes on social networking sites.
The government has observed a varied response from platforms to its advisory measures on deepfakes, leading to the decision to enforce more specific rules. During the Digital India Dialogues, platforms were made aware of existing provisions and the consequences of non-compliance. An advisory was issued, and newly amended IT rules will be released if compliance proves unsatisfactory.
When Sachin Tendulkar reported a deepfake in which he appeared to endorse a gaming application, it raised concerns about the exploitation of deepfakes. Tendulkar urged the reporting of such incidents and underlined the need for social media companies to be watchful, receptive to grievances, and quick to address disinformation and deepfakes.
The DPDP Act, 2023
The Digital Personal Data Protection (DPDP) Act, 2023 establishes a new framework for protecting individuals' digital personal data and ensures compliance by platforms that collect it. The Act mandates consent-based data collection and is an important step toward protecting individual privacy. By requiring express consent for the acquisition, administration, and processing of personal data, it seeks to guarantee that organisations adhere to the stated purpose for which user consent was granted. This proactive approach aligns with global data protection trends and demonstrates India's commitment to safeguarding user information in the digital era.
Amendments to IT rules
Minister Chandrasekhar declared that existing IT regulations would be amended to combat the rising problem of deepfakes and disinformation on social media platforms. These adjustments, to be published over the next few days, are primarily aimed at countering the spread of false information and deepfakes. The decision follows the varied responses from platforms to deepfake recommendations made during the Digital India Dialogues.
The government's stance: blocking non-compliant platforms
Minister Chandrasekhar reaffirmed the government's commitment to enforcing the updated guidelines. If platforms fail to comply, the government may consider blocking them. This firm position demonstrates the government's commitment to safeguarding Indian residents from the potential harm caused by false information.
Empowering Users with Education and Awareness
In addition to the upcoming DPDP Act Rules/recommendations and IT regulation changes, the government recognises the critical role that user education plays in establishing a robust digital environment. Minister Rajeev Chandrasekhar emphasised the necessity for comprehensive awareness programs to educate individuals about their digital rights and the need to protect personal information.
These instructional programs seek to equip users to make informed decisions about giving consent to their data. By developing a culture of digital literacy, the government hopes to guarantee that citizens have the information to safeguard themselves in an increasingly linked digital environment.
Balancing Innovation with User Protection
As India continues to explore its digital frontier, the intersection of technological innovation and user safety remains a delicate balance. The upcoming Rules under the DPDP Act and modifications to existing IT rules represent the government's proactive efforts to build a strong framework that supports innovation while protecting user privacy and combating disinformation. Recognising the changing nature of the digital world, the government is actively engaged in continuing discussions with stakeholders such as industry professionals, academia, and civil society. These conversations promote a collaborative approach to policy creation, ensuring that legislation is adaptable to evolving cyber risks and technological breakthroughs. Such inclusive talks demonstrate the government's dedication to transparent and participatory governance, in which many viewpoints contribute to the creation of effective and nuanced policy. These advances mark an important milestone in India's digital journey, as the country prepares to set a good example by creating responsible and safe digital ecosystems for its residents.
References
- https://economictimes.indiatimes.com/tech/technology/govt-may-release-personal-data-bill-rules-in-a-fortnight/articleshow/106162669.cms?from=mdr
- https://www.business-standard.com/india-news/dpdp-rules-expected-to-be-released-by-end-of-the-month-mos-chandrasekhar-124011600679_1.html

Introduction
In September 2025, social media feeds were flooded with striking vintage saree-style portraits. These images were not taken by professional photographers but generated by AI. More than a million people turned to the "Nano Banana" AI tool in Google Gemini, uploading ordinary selfies and watching them transform into cinematic, 1990s Bollywood-style posters. The popularity of the trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts about privacy infringement, unauthorised data sharing, and deepfake misuse.
What is the Trend?
The AI saree trend is created with Google Gemini's Nano Banana image-editing tool, which morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend marking one of its most popular uses. Photographs are uploaded to an AI system, which uses machine learning to alter them according to the specified description. Users then share the transformed AI portraits on Instagram, WhatsApp, and other social media platforms, fuelling the viral nature of the trend.
Law Enforcement Agency Warnings
- Several Indian police agencies have issued strong advisories against participating in such trends. IPS Officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to scams and fraud.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put the user at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police also put out warnings on social media platforms regarding how photo applications appear entertaining but can pose serious risks to user privacy. They specifically warned that uploaded selfies can lead to data misuse, deepfake creation, and the generation of fake profiles, which are punishable under Sections 66C and 66D of the IT Act, 2000.
Consequences of Such Trends
The proliferation of AI photo trends has several severe consequences for individual users and society as a whole. Identity theft and fraud are the main concerns, as uploaded biometric information can be used by hackers to generate synthetic identities, evade security measures, or commit financial fraud. The facial recognition data shared through these trends remains a digital asset that could be abused years after the trend has passed. Deepfake production is another major threat, because personal images shared on AI platforms can be used to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. Uploaded images can be leveraged to produce embarrassing or harmful content that damages personal reputation, relationships, and career prospects.
Financial exploitation also occurs when fake applications posing as genuine AI tools strip users of their personal data and financial details. Such malicious platforms tend to imitate well-known services to trick users into divulging sensitive information. Long-term privacy infringement also arises from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends may expose users to several layers of privacy threats that go far beyond the instant gratification of a pleasing image. Harvesting of biometric data is the most critical issue, since facial recognition information posted on these platforms becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images may be stored temporarily for processing and kept longer if used for feedback purposes or feature development.
Illegal data sharing happens when AI platforms provide user-uploaded content to third parties without consent. A 2023 Mozilla Foundation study found that 80% of popular AI apps either had non-transparent data policies or obscured users' ability to opt out of data gathering. This opens opportunities for personal photographs to be shared with unknown entities for commercial use. Exploitation of training data involves using uploaded personal photos to improve AI models without notifying or compensating users. Although Google offers options to turn off data sharing in its privacy settings, most users are unaware of these capabilities. Cross-platform data integration further increases privacy threats when AI applications draw on data from linked social media profiles, building detailed user profiles that can be exploited for targeted manipulation or fraud. Lack of informed consent remains a major problem: users join trends without understanding the full scope of the information they are sharing. Studies show that 68% of individuals are concerned about the misuse of AI app data, yet 42% use these apps without reading the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:
1. Immediate Protection Measures for Users
In a nutshell, protection of personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those built on facial recognition. A person can instead experiment with stock images or non-identifiable pictures that exercise the program's creative features without compromising biometric security. Strong privacy settings should be configured on every social media platform and AI app to limit access to personal data and content.
2. Organisational Safeguards
AI governance frameworks within organisations should set out policies for employees' use of AI tools, particularly regarding the upload of personal data. Companies should carry out due diligence before adopting a commercially available AI product, to ensure it meets the company's privacy and security requirements. Training should also instruct employees about deepfake technology.
3. Technical Protection Strategies
Deepfake detection software should be used. Tools such as Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI allow real-time detection with accuracy above 95%. Blockchain-based content verification can create tamper-proof records of original digital assets, making it far harder to pass off deepfake content as original.
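The tamper-proof-record idea above boils down to anchoring a cryptographic fingerprint of the original asset so that later copies can be checked against it. The sketch below is a simplified stand-in for a real blockchain (the `ContentRegistry` class and sample byte strings are invented for illustration): it chains SHA-256 hashes of registered assets, so any altered copy fails verification.

```python
# Simplified tamper-evident registry: a hash chain standing in for a
# real blockchain. Registering an asset records its SHA-256 fingerprint;
# verification rehashes the presented bytes and compares.
import hashlib

class ContentRegistry:
    def __init__(self):
        self.chain = []  # list of (asset_hash, link_hash) records

    def register(self, asset: bytes) -> str:
        asset_hash = hashlib.sha256(asset).hexdigest()
        prev = self.chain[-1][1] if self.chain else "0" * 64
        # Each link commits to the previous one, so past records
        # cannot be silently rewritten.
        link = hashlib.sha256((prev + asset_hash).encode()).hexdigest()
        self.chain.append((asset_hash, link))
        return asset_hash

    def is_original(self, asset: bytes) -> bool:
        """Does this exact byte sequence match a registered original?"""
        h = hashlib.sha256(asset).hexdigest()
        return any(h == rec[0] for rec in self.chain)

registry = ContentRegistry()
registry.register(b"original portrait bytes")
print(registry.is_original(b"original portrait bytes"))   # True
print(registry.is_original(b"deepfaked portrait bytes"))  # False
```

Because even a one-byte change to the asset produces a completely different SHA-256 digest, a manipulated image can never match the registered fingerprint of the original.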
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banking and identity verification systems, authentication should include voice and face liveness checks to ensure the person is real and not using fake or manipulated media. Digital literacy programmes should be implemented to equip users with knowledge about AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting suspected AI-enabled crimes and assisting in the fight against malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulators should require transparency in the data policies of AI applications, along with giving users rights and choices over biometric and other data. Indigenous AI development addressing India-specific privacy concerns should be promoted, ensuring that AI models are built in a secure, transparent, and accountable manner. On cross-border AI security, international cooperation is needed to set common standards for the ethical design, production, and use of AI. The viral spread of AI phenomena such as the saree-editing trend illustrates both the potential and the hazards of the current generation of artificial intelligence. While such tools offer new creative opportunities, they also pose grave privacy and security concerns that users, organisations, and policymakers must address. By setting up comprehensive protection mechanisms and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/