#FactCheck - Debunking Viral Photo: Tears of Photographer Not Linked to Ram Mandir Opening
Executive Summary:
A photographer breaking down in tears in a viral photo is not connected to the Ram Mandir opening. Social media users are sharing a collage that pairs images of the recently consecrated Lord Ram idol at the Ayodhya Ram Mandir with a shot claimed to show the photographer crying at the sight of the deity. A Facebook post sharing the collage captions it, "Even the cameraman couldn't stop his emotions." The CyberPeace Research team found that the image actually dates to the 2019 AFC Asian Cup: during the Round of 16 match between Iraq and Qatar, an Iraqi photographer broke down in tears after Iraq lost and was knocked out of the competition.
Claims:
The photographer in the widely shared images broke down in tears on seeing the idol of Lord Ram during the Ayodhya Ram Mandir's consecration. The collage was also shared by many users on other social media platforms such as X, Reddit, and Facebook. One Facebook user shared it with a caption reading, "Even the cameraman couldn't stop his emotions."




Fact Check:
The CyberPeace Research team ran a reverse image search on the photograph, which led to several memes using the same picture. From there we reached a Pinterest post captioned, “An Iraqi photographer as his team is knocked out of the Asian Cup of Nations.”

Taking a cue from this, we did a keyword search to find the actual news behind the image. We landed on the official Asian Cup X (formerly Twitter) handle, where the image had been shared five years earlier, on 24 January 2019. The post reads, “Passionate. Emotional moment for an Iraqi photographer during the Round of 16 clash against ! #AsianCup2019”

This confirms the origin of the image. Notably, while investigating this fact check, we also found several other posts using the same photographer's image with different captions, all of which were similarly misleading.
Conclusion:
The viral image claiming to show a photographer overcome with emotion at the Ram Mandir opening is misleading. The photograph is five years old and shows an Iraqi photographer crying during the 2019 AFC Asian Cup football competition, not the recent Ram Mandir consecration. Netizens are advised not to believe or share such misinformation on social media.
- Claim: A person in the widely shared images broke down in tears at seeing the idol of Lord Ram during the Ayodhya Ram Mandir's consecration.
- Claimed on: Facebook, X, Reddit
- Fact Check: Fake

Introduction
Deepfake technology, a term combining "deep learning" and "fake," uses advanced artificial intelligence, specifically generative adversarial networks (GANs), to produce remarkably lifelike computer-generated content, including audio and video recordings. Because it can lend credibility to false information, there are concerns about its misuse, including identity theft and the spread of disinformation. Cybercriminals leverage AI tools such as deepfakes and voice clones for malicious activities and various cyber frauds, and new cyber threats have emerged from this misuse.
India: A Top Target for Deepfake Attacks
According to the 2023 Identity Fraud Report by Sumsub, a well-known digital identity verification company headquartered in the UK, India, Bangladesh, and Pakistan have become significant participants in the Asia-Pacific identity fraud landscape, with India’s fraud rate growing by 2.99% from 2022 to 2023. They are among the top ten nations most affected by the misuse of deepfake technology. The report notes that deepfakes are being used in a significant number of cybercrimes and that this trend is expected to continue in the coming year, highlighting the need for greater cybersecurity awareness and safeguards as identity fraud becomes a growing concern in the region.
How Deepfakes Work
Deepfakes are a fascinating and worrisome phenomenon of the modern digital landscape. These realistic-looking but wholly artificial videos have gained significant traction in recent months and have become ingrained in the fabric of our digital lives. The consequences are enormous and the attraction is hard to resist.
Deep Learning Algorithms
Deepfakes analyse large datasets, frequently pictures or videos of a target person, using deep learning techniques, especially generative adversarial networks. These algorithms learn the target's gestures, speech patterns, and facial expressions from the data, and generative models then produce material that blends seamlessly with the target context. Misuse of this technology, including the dissemination of false information, is a concern, and sophisticated detection techniques are becoming increasingly necessary to separate genuine content from manipulated content as deepfake capabilities improve.
Generative Adversarial Networks
Deepfake technology is based on GANs, which use a dual-network design. Composed of a generator and a discriminator, the two networks engage in a continuous cycle of competition: the generator tries to create fake material, such as realistic voice patterns or facial expressions, while the discriminator assesses how authentic the generated content is. This loop of generation and evaluation steadily improves the output; the discriminator becomes more perceptive, and the generator adapts to produce increasingly convincing content.
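The adversarial loop described above can be illustrated with a short, hypothetical sketch. The example below is a minimal toy GAN in PyTorch, assuming a simple two-dimensional data distribution rather than real images or audio; the network sizes and training parameters are illustrative only, but the generator-versus-discriminator training cycle is the same principle that deepfake systems scale up to media content.

```python
# A minimal toy GAN training loop in PyTorch (illustrative sketch, not a
# production deepfake pipeline): the generator learns to mimic a simple
# 2-D "real" data distribution while the discriminator learns to tell
# real samples from generated ones.
import torch
import torch.nn as nn

latent_dim = 8   # size of the random noise vector fed to the generator
data_dim = 2     # dimensionality of the toy "real" data

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),   # outputs probability of "real"
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" samples: a Gaussian blob standing in for genuine media.
    real = torch.randn(64, data_dim) * 0.5 + 2.0
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Each pass through the loop sharpens both networks, which is why the quality of generated content keeps improving the longer such systems are trained.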
Effects on the Community
The extensive use of deepfake technology has serious ramifications for several industries. As the technology develops, immediate action is required to manage its effects appropriately and to promote ethical use, including strict laws and technological safeguards. Deepfakes can mimic the statements or videos of prominent politicians, which is a serious issue because it can spread instability and make it difficult for the public to understand the true state of politics. In the entertainment industry, deepfake technology can generate entirely new characters or bring back stars for posthumous roles. Meanwhile, it becomes ever harder to tell fake content from authentic content, making it simpler for attackers to trick people and businesses.
Ongoing Deepfake Attacks in India
Deepfake videos continue to target popular celebrities; Priyanka Chopra is the most recent victim of this unsettling trend. Her deepfake takes a different approach from earlier examples involving actresses such as Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than placing her face in contentious situations, the misleading video keeps her appearance unchanged but modifies her voice, replacing real interview quotes with made-up promotional lines. The deceptive video shows Priyanka promoting a product and talking about her annual income, highlighting the worrying evolution of deepfake technology and its possible effects on prominent personalities.
Actions Considered by Authorities
A Public Interest Litigation (PIL) was filed in the Delhi High Court requesting that access to websites that produce deepfakes be blocked. The petitioner's counsel argued that the government should at the very least establish guidelines to hold individuals accountable for misuse of deepfake and AI technology. He also proposed that websites be required to label AI-generated content as such and be prevented from producing unlawful material. A division bench highlighted how complicated the problem is and suggested that the Centre arrive at a balanced solution that does not infringe the right to freedom of speech and expression on the internet.
Information Technology Minister Ashwini Vaishnaw stated that the government would introduce new laws and guidelines to curb the dissemination of deepfake content. He chaired a meeting with social media companies to discuss the problem of deepfakes. "We will begin drafting regulation immediately, and soon we are going to have a fresh set of regulations for deepfakes. This might come in the way of amending the current framework or ushering in new rules, or a new law," he stated.
Prevention and Detection Techniques
To effectively combat the growing threat posed by the misuse of deepfake technology, people and institutions should prioritise critical thinking, carefully examine visual and auditory cues for discrepancies, make use of tools such as reverse image searches, keep up with the latest deepfake trends, and rigorously fact-check against reputable media sources. Important steps to improve resilience against deepfake threats include putting strong security policies in place, integrating advanced deepfake detection technologies, supporting the development of ethical AI, and encouraging open communication and cooperation. By combining these tactics and adapting to a constantly changing landscape, we can manage the problems presented by deepfake technology effectively and mindfully.
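One simple building block behind reverse image searching and basic manipulation checks is perceptual hashing: visually similar images produce similar hashes, so a suspect image can be compared against a known original. The sketch below is a minimal illustration using the Pillow and imagehash Python libraries; the file names are hypothetical, and a real fact-checking workflow would compare against a large index of reference images rather than a single file.

```python
# Minimal perceptual-hash comparison (illustrative sketch).
# Requires:  pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical file names: a suspect image pulled from social media
# and a reference image found via a reverse image search.
suspect = imagehash.phash(Image.open("viral_photo.jpg"))
reference = imagehash.phash(Image.open("asian_cup_2019_original.jpg"))

# Hamming distance between the two 64-bit perceptual hashes:
# small distances (roughly <= 10) suggest the images are the same
# picture, even if resized, recompressed, or lightly edited.
distance = suspect - reference
print(f"Hash distance: {distance}")
if distance <= 10:
    print("Images appear to be the same photograph.")
else:
    print("Images look different; further verification needed.")
```

Passing such a check is only one signal; provenance, metadata, and original reporting still need to be verified by hand.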
Conclusion
Advanced AI-powered deepfake technology produces extraordinarily lifelike computer-generated content, raising both creative and ethical questions. Its misuse presents major difficulties such as identity theft and the propagation of misleading information, as demonstrated by examples in India such as the recent deepfake video involving Priyanka Chopra. Countering this threat requires developing critical thinking skills, using detection strategies such as analysing audio quality and facial expressions, and keeping up with current trends. A thorough strategy that combines fact-checking, preventive tactics, and awareness-raising is necessary to protect against the negative effects of deepfake technology, alongside strong security policies, advanced detection tools, ethical AI development, and open cooperation. By combining these measures and adapting to a constantly changing landscape, we can manage the challenges posed by deepfakes and create a truly cyber-safe environment for netizens.
References:
- https://yourstory.com/2023/11/unveiling-deepfake-technology-impact
- https://www.indiatoday.in/movies/celebrities/story/deepfake-alert-priyanka-chopra-falls-prey-after-rashmika-mandanna-katrina-kaif-and-alia-bhatt-2472293-2023-12-05
- https://www.csoonline.com/article/1251094/deepfakes-emerge-as-a-top-security-threat-ahead-of-the-2024-us-election.html
- https://timesofindia.indiatimes.com/city/delhi/hc-unwilling-to-step-in-to-curb-deepfakes-delhi-high-court/articleshow/105739942.cms
- https://www.indiatoday.in/india/story/india-among-top-targets-of-deepfake-identity-fraud-2472241-2023-12-05
- https://sumsub.com/fraud-report-2023/

A report by MarketsandMarkets in 2024 showed that the global AI market size is estimated to grow from USD 214.6 billion in 2024 to USD 1,339.1 billion in 2030, at a CAGR of 35.7%. AI has become an enabler of productivity and innovation. A Forbes Advisor survey conducted in 2023 reported that 56% of businesses use AI to optimise their operations and drive efficiency. Further, 51% use AI for cybersecurity and fraud management, 47% employ AI-powered digital assistants to enhance productivity and 46% use AI to manage customer relationships.
AI has revolutionised business functions. According to a Forbes survey, 40% of businesses rely on AI for inventory management, 35% harness AI for content production and optimisation, and 33% deploy AI-driven product recommendation systems for enhanced customer engagement. This blog addresses the opportunities and challenges of integrating AI to drive operational efficiency.
Artificial Intelligence and its resultant Operational Efficiency
AI has strong optimisation and efficiency capabilities and is widely used to automate repetitive tasks. These tasks include payroll processing, data entry, inventory management, patient registration, invoicing, claims processing, and more. AI has been incorporated into such tasks because it can uncover complex patterns, using NLP, machine learning, and deep learning, beyond human capability. It has also shown promise in improving decision-making for businesses in time-critical, high-pressure situations.
AI-driven efficiency is visible in industries such as manufacturing for predictive maintenance, healthcare for streamlining diagnostics, and logistics for route optimisation. Some of the most common real-world examples of AI increasing operational efficiency are self-driving cars (Tesla), facial recognition (Apple Face ID), language translation (Google Translate), and medical diagnosis (IBM Watson Health).
Harnessing AI has advantages as it helps optimise the supply chain, extend product life cycles, and ultimately conserve resources and cut operational costs.
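As a concrete illustration of the kind of pattern-finding mentioned above, the sketch below uses a simple anomaly-detection model (scikit-learn's IsolationForest) to flag unusual invoices for human review, a common efficiency pattern in invoicing and claims processing. The data and thresholds are entirely hypothetical; a real deployment would use domain-specific features and proper data governance.

```python
# Hypothetical sketch: flag anomalous invoices for manual review.
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy invoice data: [amount_in_inr, days_to_payment_due]
invoices = np.array([
    [12_000, 30], [11_500, 30], [12_300, 45], [11_900, 30],
    [12_100, 30], [95_000, 5],   # the last entry is deliberately unusual
])

# Train an unsupervised anomaly detector on the invoice features.
model = IsolationForest(contamination=0.2, random_state=42)
labels = model.fit_predict(invoices)   # -1 = anomaly, 1 = normal

for invoice, label in zip(invoices, labels):
    status = "REVIEW" if label == -1 else "auto-approve"
    print(f"Invoice {invoice.tolist()} -> {status}")
```

The point of such a pipeline is not to replace human judgement but to route routine items automatically and reserve staff time for the exceptions.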
Policy Implications for AI Deployment
Some of the policy implications for development for AI deployment are as follows:
- Develop clear and adaptable regulatory frameworks for the ongoing and future developments in AI. The frameworks need to ensure that innovation is not hindered while managing the potential risks.
- AI systems rely on high-quality, accessible, and interoperable data to function effectively; without proper data governance, they may produce results that are biased, inaccurate, and unreliable. It is therefore necessary to ensure data privacy, which is essential to maintain trust and prevent harm to individuals and organisations.
- Policymakers need to focus on creating policies that upskill the workforce so that it complements AI development, thereby mitigating job displacement.
- To ensure cross-border applicability and effective standardisation of AI policies, policymakers need to pursue international cooperation when developing them.
Addressing Challenges and Risks
Some of the main challenges that emerge with the development of AI are algorithmic bias, cybersecurity threats, and dependence on proprietary AI solutions in which the vendor retains exclusive control over the source code. Some policy approaches that can help mitigate these challenges are:
- Having a robust accountability mechanism.
- Establishing identity and access management policies with technical controls such as authentication and authorisation mechanisms (a minimal sketch of such a check follows this list).
- Ensuring that the training data AI systems use respects ethical considerations such as data privacy, fairness in decision-making, transparency, and the interpretability of AI models.
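To make the authorisation point above concrete, here is a minimal, hypothetical sketch of a role-based access check that an organisation might place in front of a sensitive AI operation such as retraining a model or exporting its data. The role names and actions are illustrative only, not a prescribed standard.

```python
# Hypothetical role-based authorisation check for sensitive AI operations.

# Map each role to the actions it is allowed to perform (illustrative only).
ROLE_PERMISSIONS = {
    "data_scientist": {"view_metrics", "run_inference"},
    "ml_admin": {"view_metrics", "run_inference", "retrain_model", "export_data"},
    "auditor": {"view_metrics"},
}

def is_authorised(role: str, action: str) -> bool:
    """Return True only if the given role is permitted to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Example usage: an authenticated user's role is checked before the action runs.
if is_authorised("data_scientist", "export_data"):
    print("Export allowed.")
else:
    print("Access denied: action not permitted for this role.")  # this branch runs
```

In practice such checks sit alongside authentication, logging, and audit trails so that every sensitive operation is both gated and accountable.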
Conclusion
AI provides opportunities to drive operational efficiency in businesses. It can optimise productivity and costs and foster innovation across industries. But this power comes with its own considerations, and it must therefore be balanced with proactive policies that address emerging challenges such as the need for data governance, algorithmic bias, and cybersecurity risks. Overcoming these challenges requires establishing adaptable regulatory frameworks, fostering workforce upskilling, and promoting international collaboration. As businesses integrate AI into core functions, it becomes necessary to leverage its potential while safeguarding fairness, transparency, and trust. AI is not just an efficiency tool; it has become a stimulant for organisations operating in a rapidly evolving digital world.
References
- https://indianexpress.com/article/technology/artificial-intelligence/ai-indian-businesses-long-term-gain-operational-efficiency-9717072/
- https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-market-74851580.html
- https://www.forbes.com/councils/forbestechcouncil/2024/08/06/smart-automation-ais-impact-on-operational-efficiency/
- https://www.processexcellencenetwork.com/ai/articles/ai-operational-excellence
- https://www.leewayhertz.com/ai-for-operational-efficiency/
- https://www.forbes.com/councils/forbestechcouncil/2024/11/04/bringing-ai-to-the-enterprise-challenges-and-considerations/

Introduction
Holi 2025 is just around the corner. In fact, in the Braj region, Mathura and Vrindavan, the celebrations have already begun, starting from Basant Panchami on 2nd February 2025. Temples in Vrindavan are showering flowers on devotees, creating mesmerising scenes filled with the spirit of devotion. Meanwhile, cities like Delhi, Bangalore, and Mumbai are all set, with pre-bookings open for Holi events, parties, and music festivals.
However, in the current digital era, cybercriminals run manipulative campaigns to deceive innocent people, pushing fake cashback offers, freebies, lucrative deals, giveaways, and phishing scams under the guise of Holi promotions. Ahead of the festival of colors, it pays to know the warning signs so you can remain alert and safeguard yourself against digital scams.
How Scammers Might Target You
Holi is a time for joy, colors, and celebrations, but cybercriminals see it as the perfect opportunity to trick people into falling for scams. With increased online shopping, event bookings, and digital transactions, scammers exploit the festive mood to steal money and personal information. Here are some common Holi-related cyber scams and how they operate:
- Exclusive Fake Holi Offers
Scammers send out promotional messages via WhatsApp, SMS, or email claiming to offer exclusive Holi discounts. For example, you might receive a message like:
"Get 70% off on Holi color packs! Limited-time deal! Click here to order now."
However, clicking the link leads to a fraudulent website designed to steal your card details or make unauthorized transactions.
- Fake Holi Cashback Offers
You may get an SMS that reads:
"Congratulations! You’ve won ₹500 cashback for your Holi purchases. Claim now by clicking this link."
The link may take you to a phishing page that asks for your UPI PIN or bank login credentials, allowing scammers to siphon off your money.
- Fake Quizzes to Win Freebies
Scammers circulate links to Holi-themed quizzes or surveys promising free gifts like branded clothing, sweets, or smart gadgets. These often ask users to enter personal details such as phone numbers, email addresses, or even Aadhaar numbers. Once entered, the scammers misuse this information for identity theft or further phishing attempts.
- Fake Social Media Giveaways
Many fraudsters create fake Instagram and Facebook pages mimicking well-known brands, announcing contests with tempting prizes. For example:
"Holi Giveaway! Win a free Bluetooth speaker or chance to win smartphone by following us and sending a small registration fee!"
Once you pay, the page disappears, leaving you with nothing but regret.
- Targeted Phishing Scams
During Holi, phishing attempts surge as scammers disguise themselves as banks, e-wallet services, or e-commerce platforms. You might receive an email with a subject like:
"Urgent: Your Holi order needs confirmation, update your details now!"
The email contains a fake link that, when clicked, prompts you to enter sensitive login information, which the scammers then use to access your account.
- Clickbait Links on Social Media
Cybercriminals circulate enticing headlines such as:
"This New Holi Color Is Banned – Find Out Why!"
These links often lead to malware-infected pages that compromise your device security or steal browsing data.
- Bogus Online Booking Platforms
With many people looking for Holi event tickets or holiday stays, scammers set up fake booking websites. Imagine you come across a site advertising "Holi Pool Party – Entry Just INR 299!" You eagerly make the payment, only to find out later that the event never existed.
How to Stay Safe This Festive Season
- Verify offers directly from official websites instead of clicking on random links.
- Avoid sharing personal or banking details on unfamiliar platforms.
- Look for HTTPS in website URLs before making any payments (see the sketch after this list for a simple link check).
- Be cautious of unsolicited messages, even if they appear to be from known contacts.
- If an offer seems too good to be true, it is likely a scam.
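As a small illustration of the HTTPS and link-hygiene advice above, the sketch below performs a few basic heuristic checks on a URL before you click or pay. The example domain and keyword list are hypothetical, and passing these checks does not prove a site is safe; they only catch some obvious red flags.

```python
# Hypothetical heuristic link check: flags obvious red flags in a URL.
from urllib.parse import urlparse

SUSPICIOUS_KEYWORDS = ["cashback", "free-gift", "holi-offer", "claim-now"]  # illustrative

def check_link(url: str) -> list[str]:
    """Return a list of warnings for the given URL (empty list = no obvious red flags)."""
    warnings = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        warnings.append("Not using HTTPS")
    if any(keyword in url.lower() for keyword in SUSPICIOUS_KEYWORDS):
        warnings.append("Contains bait keywords often used in scam links")
    if parsed.hostname and parsed.hostname.count(".") >= 3:
        warnings.append("Long subdomain chain, often used to mimic real brands")
    return warnings

# Example usage with a made-up link:
for warning in check_link("http://holi-offer.example-deals.win-cashback.shop/claim-now"):
    print("Warning:", warning)
```

A clean result from a checker like this is never a guarantee; always confirm the brand's official website or app before paying.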
Conclusion:
As Holi 2025 approaches, make sure online security remains a priority. Keep an eye out for frauds that attempt to take advantage of festive seasons like Holi, and protect yourself against cyber threats by verifying sources before engaging with any internet content. Let us safeguard our celebrations with sensible cyber security precautions. Wishing you all a cyber-safe and Happy Holi 2025!