#FactCheck – False Claim of Lord Ram's Hologram in Srinagar - Video Actually from Dehradun
Executive Summary:
A video purporting to show Lord Ram's hologram on the clock tower at Lal Chowk in Srinagar has gone viral on the internet. The CyberPeace Research Team discovered that the footage is actually from Dehradun, Uttarakhand, not Jammu and Kashmir.
Claims:
A viral 48-second clip is being shared across the internet, mostly on X and Facebook. The video shows a car passing by a clock tower bearing a picture of Lord Ram. As the car moves forward, a screen by the side of the road playing songs about Lord Ram comes into view.

The claim is that the video is from Srinagar, Jammu and Kashmir.

Similar Post:

Fact Check:
The CyberPeace Research Team found that the claim is false. First, we ran a keyword search based on the caption and found that the clock tower in Srinagar does not resemble the one in the video.

We found an article by NDTV about Srinagar Lal Chowk's clock tower, the only clock tower there, which stands in the middle of the road. This gave us reasonable confidence that the video is not from Srinagar. We then broke the video down into frames and ran a reverse image search.
We found another video showing a similar tower in Dehradun.

Taking a cue from this, we searched for the tower in Dehradun and compared it with the video. It is confirmed: the tower is the clock tower in Paltan Bazar, Dehradun, and the video is from Dehradun, not Srinagar.
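A common way to break a video into frames for reverse image search is to extract frames and then deduplicate near-identical ones with a perceptual hash, so only distinct keyframes are searched. The sketch below is a minimal illustration of that deduplication step, not the exact tooling we used; the "frames" here are synthetic gradient images standing in for real video frames.

```python
from PIL import Image

def dhash(img, size=8):
    """Difference hash: downscale to (size+1) x size grayscale,
    then compare each pixel with its right-hand neighbour."""
    g = img.convert("L").resize((size + 1, size), Image.BILINEAR)
    px = list(g.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            i = row * (size + 1) + col
            bits = (bits << 1) | (1 if px[i] > px[i + 1] else 0)
    return bits

def gradient(reverse=False, offset=0):
    """Synthetic stand-in for a video frame: a horizontal gradient."""
    img = Image.new("L", (64, 64))
    img.putdata([min(255, ((63 - x) if reverse else x) * 4 + offset)
                 for _ in range(64) for x in range(64)])
    return img

frame_a = gradient()                  # a frame...
frame_b = gradient(offset=3)          # ...a slightly brighter re-encode of it
frame_c = gradient(reverse=True)      # ...and a visually different frame

assert dhash(frame_a) == dhash(frame_b)  # near-duplicates collapse to one hash
assert dhash(frame_a) != dhash(frame_c)  # distinct frames keep distinct hashes
```

Frames sharing a hash can be searched once, which keeps the number of reverse image lookups manageable for even a long clip.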
Conclusion:
After a thorough fact-check investigation into the video and its origin, we found that the visual of Lord Ram on the clock tower is from Dehradun, not Srinagar. The claim that it shows Srinagar is baseless misinformation.
- Claim: The Hologram of Lord Ram on the Clock Tower of Lal Chowk, Srinagar
- Claimed on: Facebook, X
- Fact Check: Fake
Related Blogs

Introduction
Recent advances in space exploration and technology have increased the need for space laws to control the actions of governments and corporate organisations. India has been attempting to create a robust legal framework to oversee its space activities because it is a prominent player in the international space business. In this article, we’ll examine India’s current space regulations and compare them to the situation elsewhere in the world.
Space Laws in India
India began space exploration with Aryabhata, its first satellite, and Rakesh Sharma, the first Indian astronaut, and now has a prominent presence in space, with many international satellites launched by India. NASA and ISRO also work closely on various projects.

India currently lacks dedicated space legislation. Only a few laws and regulations, such as the Indian Space Research Organisation (ISRO) Act of 1969 and the National Remote Sensing Centre (NRSC) Guidelines of 2011, govern space-related operations. However, these rules and regulations alone are not sufficient to control India’s expanding space sector. India is starting to gain traction as a prospective player in the global commercial space sector. Authorisation, contracts, dispute resolution, licensing, data processing and distribution related to earth observation services, certification of space technology, insurance, legal issues related to launch services, and stamp duty are just a few of the topics that need to be addressed. The relevant statutes and laws need to be updated to incorporate space law-related matters into domestic law.
India’s Space Presence
Space research activities were initiated in India during the early 1960s when satellite applications were in experimental stages, even in the United States. With the live transmission of the Tokyo Olympic Games across the Pacific by the American Satellite ‘Syncom-3’ demonstrating the power of communication satellites, Dr Vikram Sarabhai, the founding father of the Indian space programme, quickly recognised the benefits of space technologies for India.
As a first step, the Department of Atomic Energy formed the INCOSPAR (Indian National Committee for Space Research) under the leadership of Dr Sarabhai and Dr Ramanathan in 1962. The Indian Space Research Organisation (ISRO) was formed on August 15, 1969. The prime objective of ISRO is to develop space technology and its application to various national needs. It is one of the six largest space agencies in the world. The Department of Space (DOS) and the Space Commission were set up in 1972, and ISRO was brought under DOS on June 1, 1972.

Since its inception, the Indian space programme has been orchestrated well. It has three distinct elements: satellites for communication and remote sensing, the space transportation system and application programmes. Two major operational systems have been established – the Indian National Satellite (INSAT) for telecommunication, television broadcasting, and meteorological services and the Indian Remote Sensing Satellite (IRS) for monitoring and managing natural resources and Disaster Management Support.
Global Scenario
The global space race has been on ever since the Moon landing in 1969 and has now transformed into a new cold war among developed and developing nations. A nation’s interests and assets in space need to be safeguarded through effective, efficient policies and internationally ratified laws. Not all nations with a presence in space subscribe to a good-for-all policy; thus, preventive measures need to be incorporated into the legal system. A thorough legal framework for space activities is being developed by the United Nations Office for Outer Space Affairs (UNOOSA). The foundation of international space law is a collection of five international agreements, chief among them the “Outer Space Treaty.” The agreements address topics such as the peaceful use of space, preventing its militarisation, and liability for damage caused by space objects. Both the United States and the United Kingdom have well-established space laws: the US National Aeronautics and Space Act of 1958 established the National Aeronautics and Space Administration (NASA) to oversee national space programmes, while the UK’s Outer Space Act of 1986 governs how UK citizens and businesses can engage in space activity.

Conclusion
India must create a thorough legal system to govern its space endeavours. The space sector needs a legal framework to avoid ambiguity and confusion, which may have detrimental effects. Domestic space legislation in India should cover the peaceful use of space for the benefit of humanity. The overall scenario demonstrates the need for a clearly defined legal framework for the international acknowledgement of a nation’s space activities. India ranks fifth in the world in space technology, an impressive accomplishment, and a strong legal system will help India maintain its place in the space business.

Introduction
In September 2025, social media feeds were flooded with strikingly vintage saree-style portraits. These images were not taken by professional photographers but were AI-generated. More than a million people turned to the "Nano Banana" AI tool in Google Gemini, uploading their ordinary selfies and watching them transform into cinematic, Bollywood-style 1990s posters. The popularity of this trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts about privacy infringement, unauthorised data sharing, and threats related to deepfake misuse.
What is the Trend?
The AI saree trend is created using Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate images with cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture, reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend marking one of its most popular uses. Photographs are uploaded to an AI system, which uses machine learning to alter the pictures according to the prompt. Users then share the transformed AI portraits on Instagram, WhatsApp, and other social media platforms, contributing to the viral nature of the trend.
Law Enforcement Agency Warnings
- A few Indian police agencies have issued strong advisories against participating in such trends. IPS Officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to many scams and frauds.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put the user at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police also put out warnings on social media platforms regarding how photo applications appear entertaining but can pose serious risks to user privacy. They specifically warned that uploaded selfies can lead to data misuse, deepfake creation, and the generation of fake profiles, which are punishable under Sections 66C and 66D of the IT Act, 2000.
Consequences of Such Trends
The mass adoption of AI photo trends has several severe consequences for individual users and society as a whole. Identity fraud and theft are the main issues, as uploaded biometric information can be used by hackers to generate imitated identities, evading security measures or committing financial fraud. The facial recognition information shared by means of these trends remains a digital asset that could be abused years after the trend has passed. Deepfake production is another tremendous threat, because personal images shared on AI platforms can be utilised to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. The images uploaded can be leveraged to produce embarrassing or harmful content that damages personal reputation, relationships, and career prospects.
Financial exploitation also occurs when fake applications in the guise of genuine AI tools strip users of their personal data and financial details. Such malicious platforms tend to imitate well-known services to trick users into divulging sensitive information. Long-term privacy infringement also arises from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends may subject users to several layers of privacy threats that go far beyond the instant gratification of taking pleasing images. Harvesting of biometric data is the most critical issue since facial recognition information posted on these sites becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images might be stored temporarily for processing and may be kept for longer periods if used for feedback purposes or feature development.
Illegal data sharing happens when AI platforms provide user-uploaded content to third parties without user consent. A Mozilla Foundation study in 2023 discovered that 80% of popular AI apps had either non-transparent data policies or obscured the ability of users to opt out of data gathering. This opens up opportunities for personal photographs to be shared with anonymous entities for commercial use. Exploitation of training data involves using uploaded personal photos to enhance AI models without notifying or compensating users. Although Google provides users with options to turn off data sharing within privacy settings, most users are unaware of these options. Cross-platform data integration increases privacy threats when AI applications use data from interlinked social media profiles, building detailed user profiles that can be exploited for targeted manipulation or fraud. Inadequate informed consent continues to be a major problem, with users joining trends unaware of the full extent of the information they are sharing. Studies show that 68% of individuals express concern about the misuse of AI app data, yet 42% use these apps without going through the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:
1. Immediate Protection Measures for Users
In a nutshell, protecting personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those trained for facial recognition. Instead, a person can experiment with stock images or non-identifiable pictures to the extent that this satisfies the program's creative features without compromising biometric security. Strong privacy settings should be configured on every social media platform and AI app so that access to one's data and content is limited.
2. Organisational Safeguards
AI governance frameworks within organisations should set out policies on employees' use of AI tools, particularly concerning the upload of personal data. Companies should carry out due diligence before adopting a commercially available AI product, to ensure its privacy and security levels meet the company's requirements. Training should educate employees about deepfake technology.
3. Technical Protection Strategies
Deepfake detection software should be used. These tools, which include Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI, allow real-time detection with accuracy higher than 95%. Blockchain-based content verification can create tamper-proof records of original digital assets, making it very difficult to pass off deepfake content as original.
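The core idea behind such tamper-proof records is cryptographic fingerprinting: register a hash of the original asset, and any later edit becomes detectable. The sketch below uses only the Python standard library; the blockchain layer that would chain and timestamp these records is omitted, and the file contents are placeholder bytes.

```python
import hashlib
import time

def fingerprint(data: bytes) -> str:
    """SHA-256 digest acts as a tamper-evident fingerprint of the content."""
    return hashlib.sha256(data).hexdigest()

# At publication time, the creator registers the original's fingerprint.
original = b"original video bytes"
record = {"sha256": fingerprint(original), "registered_at": time.time()}

# Later, anyone can re-hash a file and compare it against the registered record.
assert fingerprint(b"original video bytes") == record["sha256"]
assert fingerprint(b"original video bytes, subtly edited") != record["sha256"]
```

Even a one-byte change to the asset yields a completely different digest, which is why a public, append-only registry of such fingerprints makes passing off altered media as the original difficult.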
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banks and identity verification systems, authentication should include voice and face liveness checks to ensure the person is real and not using fake or manipulated media. Implement digital literacy programs to empower users with knowledge about AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting purported AI crimes, thus offering assistance in combating malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulatory systems should require transparency in AI applications' data policies and give users rights and choices over biometric and other data. Indigenous AI development addressing India-centric privacy concerns must be promoted, ensuring AI models are built in a secure, transparent, and accountable manner. On cross-border AI security concerns, international cooperation is needed to set common standards for the ethical design, production, and use of AI. Viral AI phenomena such as the saree editing trend illustrate both the potential and the hazards of the current generation of artificial intelligence. While such tools offer new opportunities, they also pose grave privacy and security concerns that users, organisations, and policymakers should have taken seriously long ago. By setting up comprehensive protection mechanisms and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/

Amid the popularity of OpenAI’s ChatGPT and Google’s announcement of introducing its own Artificial Intelligence chatbot called Bard, there has been much discussion over how such tools can impact India at a time when the country is aiming for an AI revolution.
During the Budget Session, Finance Minister Nirmala Sitharaman talked about AI, while her colleague, Minister of State (MoS) for Electronics and Information Technology Rajeev Chandrasekhar discussed it at the India Stack Developer Conference.
While Sitharaman stated that the government will establish three centres of excellence in AI in the country, Chandrasekhar mentioned at the event that India Stack, which includes digital solutions like Aadhaar, Digilocker and others, will become more sophisticated over time with the inclusion of AI.
As AI chatbots become the buzzword, News18 discusses with experts how such tech tools will impact India.
AI IN INDIA
Many experts believe that in a country like India, which is extremely diverse in nature and has a sizeable population, the introduction of technologies and their right adoption can bring a massive digital revolution.
For example, Manoj Gupta, Cofounder of Plotch.ai, a full-stack AI-enabled SaaS product, told News18 that Bard is still experimental and not open to everyone to use while ChatGPT is available and can be used to build applications on top of it.
He said: “Conversational chatbots are interesting since they have the potential to automate customer support and assisted buying in e-commerce. Even simple banking applications can be built that can use ChatGPT AI models to answer queries like bank balance, service requests etc.”
According to him, such tools could be extremely useful for people who are currently excluded from the digital economy due to language barriers.
Ashwini Vaishnaw, Union Minister for Communications, Electronics & IT, has also talked about using such tools to reduce communication issues. At the World Economic Forum in Davos, he said: “We integrated our Bhashini language AI tool, which translates from one Indian language to another Indian language in real-time, spoken and text everything. We integrated that with ChatGPT and are seeing very good results.”
‘DOUBLE-EDGED SWORD’
Sundar Balasubramanian, Managing Director, India & SAARC, at Check Point Software, told News18 that generative AI like ChatGPT is a “double-edged sword”.
According to him, used in the right way, it can help developers write and fix code quicker, enable better chat services for companies, or even be a replacement for search engines, revolutionising the way people search for information.
“On the flip side, hackers are also leveraging ChatGPT to accelerate their bad acts and we have already seen examples of such exploitations. ChatGPT has lowered the bar for novice hackers to enter the field as they are able to learn quicker and hack better through asking the AI tool for answers,” he added.
Balasubramanian also stated that CPR has seen the quality of phishing emails improve tremendously over the past 3 months, making it increasingly difficult to discern between legitimate sources and a targeted phishing scam.
“Despite the emergence of the use of generative AI impacting cybercrime, Check Point is continually reminding organisations and individuals of the significance of being vigilant as ChatGPT and Codex become more mature, it can affect the threat landscape, for both good and bad,” he added.
While the real-life applications of ChatGPT include several things ranging from language translation to explaining tricky math problems, Balasubramanian said it can also be used for making the work of cyber researchers and developers more efficient.
“Generative AI or tools like ChatGPT can be used to detect potential threats by analysing large amounts of data and identifying patterns that may indicate malicious activity. This can help enterprises quickly identify and respond to a potential threat before it escalates to something more,” he added.
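The pattern-spotting idea described above can be illustrated with a toy baseline-deviation check; the hourly failed-login counts and the threshold below are invented for illustration and stand in for the far richer features a real detection system would analyse.

```python
import statistics

# Hypothetical failed-login counts per hour; the last value is a sudden spike.
counts = [3, 5, 4, 6, 2, 4, 5, 3, 4, 48]

# Baseline statistics from the earlier, "normal" hours.
mean = statistics.mean(counts[:-1])
stdev = statistics.stdev(counts[:-1])

def is_anomalous(value, threshold=3.0):
    """Flag values more than `threshold` standard deviations above baseline."""
    return (value - mean) / stdev > threshold

assert not is_anomalous(counts[0])   # a typical hour passes
assert is_anomalous(counts[-1])      # the spike is flagged for investigation
```

Production systems replace this single-feature z-score with learned models over many signals, but the principle is the same: establish what normal looks like, then surface deviations early.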
POSITIVE FACTORS
Major Vineet Kumar, Founder and Global President of CyberPeace Foundation, believes that the deployment of AI chatbots has proven to be highly beneficial in India, where a booming economy and increasing demand for efficient customer service have led to a surge in their use. According to him, both ChatGPT and Bard have the potential to bring significant positive change to various industries and individuals in India.
“ChatGPT has already made an impact by revolutionising customer service, providing instant and accurate support, and reducing wait time. It has automated tedious and complicated tasks for businesses and educational institutions, freeing up valuable time for more significant activities. In the education sector, ChatGPT has also improved learning experiences by providing quick and reliable information to students and educators,” he added.
He also said there are several possible positive impacts that the AI chatbots, ChatGPT and Bard, could have in India and these include improved customer experience, increased productivity, better access to information, improved healthcare, improved access to education and better financial services.
Reference Link: https://www.news18.com/news/explainers/confused-about-chatgpt-bard-experts-tell-news18-how-openai-googles-ai-chatbots-may-impact-india-7026277.html