#FactCheck - Debunking Viral Photo: Tears of Photographer Not Linked to Ram Mandir Opening
Executive Summary:
A photographer breaking down in tears in a viral photo is not connected to the Ram Mandir opening. Social media users are sharing a collage that pairs images of the recently consecrated Lord Ram idol at the Ayodhya Ram Mandir with a shot claimed to show a photographer crying at the sight of the deity. A Facebook post sharing the collage says, "Even the cameraman couldn't stop his emotions." The CyberPeace Research team found that the moment was actually captured at the AFC Asian Cup in 2019: during a Round of 16 match between Iraq and Qatar, an Iraqi photographer broke down in tears because Iraq had lost and was knocked out of the competition.
Claims:
The photographer in the widely shared images is claimed to have broken down in tears on seeing the idol of Lord Ram during the Ayodhya Ram Mandir's consecration. The collage was also shared by many users on other social media platforms such as X, Reddit, and Facebook. One Facebook post carried the caption, "Even the cameraman couldn't stop his emotions."
Fact Check:
The CyberPeace Research team ran a reverse image search on the photographer's picture. It led to several memes built from the same photo, and from there to a Pinterest post captioned, "An Iraqi photographer as his team is knocked out of the Asian Cup of Nations."

Taking a cue from this, we ran keyword searches to trace the actual news behind the image. We landed on the official Asian Cup X (formerly Twitter) handle, where the image had been shared five years earlier, on 24 January 2019. The post reads, "Passionate. Emotional moment for an Iraqi photographer during the Round of 16 clash against [Qatar]! #AsianCup2019"
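For readers who would like to try this kind of verification themselves, below is a minimal sketch of how a reverse image lookup can be done programmatically. It assumes the Google Cloud Vision client library and its web detection feature; the research team's check described above was done with ordinary reverse image search tools and keyword queries, not with this code, and the file name is a placeholder.

```python
# Minimal, illustrative sketch of a programmatic reverse image lookup.
# Assumes the google-cloud-vision client library is installed and that
# application credentials are configured; the file name is a placeholder.
from google.cloud import vision


def reverse_image_search(path: str) -> None:
    """Print pages that carry the same/similar image, plus best-guess labels."""
    client = vision.ImageAnnotatorClient()

    with open(path, "rb") as image_file:
        image = vision.Image(content=image_file.read())

    response = client.web_detection(image=image)
    detection = response.web_detection

    # Pages embedding matching images often point to the earliest,
    # original context of a viral picture (news reports, archives, etc.).
    print("Pages with matching images:")
    for page in detection.pages_with_matching_images:
        print("  ", page.url)

    # Best-guess labels hint at what the image actually depicts.
    print("Best-guess labels:")
    for label in detection.best_guess_labels:
        print("  ", label.label)


if __name__ == "__main__":
    reverse_image_search("viral_photographer_collage.jpg")  # placeholder file
```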

This confirmed the news event and the origin of the image. Notably, while investigating this fact check we also found several other posts using the same photographer's image with different captions, all of them misleading in the same way.
Conclusion:
The recent viral image of the photographer claimed to be associated with the Ram Mandir opening is misleading. The photograph is five years old and shows an Iraqi photographer crying during the 2019 AFC Asian Cup football competition, not the recent Ram Mandir opening. Netizens are advised not to believe or share such misinformation on social media.
- Claim: A person in the widely shared images broke down in tears on seeing the idol of Lord Ram during the Ayodhya Ram Mandir's consecration.
- Claimed on: Facebook, X, Reddit
- Fact Check: Fake
Related Blogs

Introduction
Twitter is a popular social media platform with millions of users around the world. Twitter’s blue tick system, which verifies the identity of high-profile accounts, has come under intense scrutiny in recent years. The platform has faced backlash from users and brands who have accused it of bias, inaccuracy, and inconsistency in its verification process. This blog post explores the questions raised about the verification process and its impact on users and big brands.
What is Twitter’s blue tick system?
The blue tick system was introduced in 2009 to help users confirm the authenticity of well-known public figures, politicians, celebrities, sportspeople, and big brands. Twitter verifies the identity of high-profile accounts and displays a blue badge next to the verified username.
According to one survey, there are roughly 294,000 verified Twitter accounts, meaning accounts that carry a blue tick badge and also pay the subscription fee of nearly $7.99 a month. Subscribers who have paid that amount and have still lost their blue badge could understandably feel cheated.
The Controversy
Despite its initial aim, the blue tick system has received much criticism from consumers and brands. Twitter’s irregular and non-transparent verification procedure has sparked accusations of prejudice and inaccuracy. Many Twitter users have complained that the network’s verification process is arbitrary and favours accounts with huge followings or celebrity status, while others have criticised the platform for certifying accounts that promote harmful or controversial content.
Furthermore, the verification mechanism has generated confusion, as many users do not understand the significance of the blue tick badge. Some users have concluded that the blue tick symbol represents a Twitter endorsement or that the account is trustworthy. This confusion has resulted in users following and engaging with verified accounts that spread misleading or inaccurate information, undermining the platform’s credibility.
How did the Blue Tick Row start in India?
The row began on 21 May 2021, when the government asked Twitter to remove the blue badge from the profiles of several high-profile Indian politicians, including the Indian National Congress Vice-President, Mr Rahul Gandhi.
The blue badge gives users an authenticated identity. Many celebrities, including Amitabh Bachchan (popularly known as Big B), Vir Das, Prakash Raj, Virat Kohli, and Rohit Sharma, have lost their blue ticks despite holding previously verified handles.
What is Twitter’s policy on the blue tick?
According to Twitter’s policy, blue verification badges may be removed from accounts if the account holder violates the company’s verification policy or terms of service. In such circumstances, Twitter typically notifies the account holder of the removal of the verification badge and the reason for it. In the instance of the “Twitter blue badge row” in India, however, it appears that Twitter did not notify the affected politicians or their representatives before revoking their verification badges. This lack of communication has exacerbated the controversy around the episode, with some critics accusing the company of acting arbitrarily and not following due process.
Is there a solution?
The “Twitter blue badge row” has no simple answer, since it involves a complex intersection of concerns about free expression, social media policies, and government regulation. However, here are some possible ways forward:
- Establish clear guidelines: Twitter should develop and consistently apply clear guidelines and policies for the verification process. All users, including politicians and government officials, would benefit from greater transparency and clarity.
- Increase transparency: Twitter’s decision-making process for removing or restoring verification badges should be more open. This could include providing explicit reasons for badge removal, notifying affected users promptly, and offering an appeals mechanism for those who believe their badges were removed unfairly.
- Engage in constructive dialogue: Twitter should engage in constructive dialogue with government authorities and other stakeholders to address concerns about the platform’s content moderation procedures. This could contribute to a more collaborative approach to managing online content, leading to more effective and accepted policies.
- Follow local rules and regulations: Twitter should collaborate with the Indian government to ensure it conforms to local laws and regulations while maintaining freedom of expression. This could involve adopting more precise standards for handling requests for content removal or other actions from governments and other organisations.
Conclusion
To sum up, the “Twitter blue tick row” in India has highlighted the complex challenges that social media platforms face daily in balancing the conflicting interests of free expression, government rules, and their own content moderation procedures. While Twitter’s decision to withdraw the blue verification badges of several prominent Indian politicians drew anger from the government and some members of the public, it also raised questions about the transparency and consistency of Twitter’s verification procedure. To deal with this issue, Twitter must establish clear verification procedures and norms, promote transparency in its decision-making process, participate in constructive communication with stakeholders, and adhere to local laws and regulations. Furthermore, the Indian government should collaborate with social media platforms to create more effective and acceptable rules that balance the need for free expression with the protection of citizens’ rights. The “Twitter blue tick row” is just one example of the complex challenges that social media platforms face in managing online content, and it emphasises the need for greater collaboration among platforms, governments, and civil society organisations to develop effective solutions that protect both free expression and citizens’ rights.

Introduction
The spread of information in the quickly changing digital age presents both advantages and difficulties. The terms "misinformation" and "disinformation" are commonly used in conversations about information inaccuracy, and it is important to counter such prevalent threats, especially in light of how they affect countries like India. It becomes essential to examine the practical ramifications of misinformation and disinformation alongside other prevalent digital threats. Like many other nations, India had to deal with the fallout of fraudulent online activity in 2023, which highlighted the critical need for strong cybersecurity safeguards.
The Emergence of AI Chatbots: OpenAI's ChatGPT and Google's Bard
The launch of OpenAI's ChatGPT in November 2022 was a major turning point in the AI space, inspiring the creation of a rival chatbot, Google's Bard (launched in 2023). These chatbots represent a significant breakthrough in artificial intelligence (AI): they are driven by Large Language Models (LLMs) that generate replies based on patterns learned from vast training datasets. Similarly, AI image generators that use diffusion models trained on existing datasets attracted a lot of interest in 2023.
Deepfake Proliferation in 2023
Deepfake technology's proliferation in 2023 contributed to misinformation and disinformation in India, affecting politicians, corporate leaders, and celebrities. Some of these fakes were used for political purposes, while others were created as pornographic or entertainment content. Social turmoil, political instability, and financial harm were among the outcomes. The lack of technical countermeasures made detection and prevention difficult, allowing synthetic content to spread widely.
Challenges of Synthetic Media
Problems arising from synthetic media, especially AI-generated audio and video content, proliferated widely in India during 2023. These included political manipulation, identity theft, disinformation, legal and ethical concerns, security risks, difficulties with detection, and threats to media integrity. The consequences ranged from financial deception and the dissemination of false information to attempts to sway elections and intensify intercultural conflicts.
Biometric Fraud Surge in 2023
Biometric fraud in India, especially through the Aadhaar-enabled Payment System (AePS), became a major threat in 2023. With cybercriminals exploiting weaknesses in the AePS, many depositors had their hard-earned savings stolen through fraudulent activity. This demonstrates the real-world impact of biometric fraud on people whose Aadhaar-linked data has been manipulated to grant unauthorized access. Beyond endangering individual financial stability, the use of biometric data in financial systems raises broader questions about the security and integrity of the nation's digital payment systems.
Government strategies to counter digital threats
- The Indian Union Government has sent a warning to the country's largest social media platforms, highlighting the importance of exercising caution when spotting and responding to deepfake and false material. The advice directs intermediaries to delete reported information within 36 hours, disable access in compliance with IT Rules 2021, and act quickly against content that violates laws and regulations. The government's dedication to ensuring the safety of digital citizens was underscored by Union Minister Rajeev Chandrasekhar, who also stressed the gravity of deepfake crimes, which disproportionately impact women.
- The government has also recently issued an advisory to social media intermediaries directing them to identify misinformation and deepfakes and to ensure compliance with the Information Technology (IT) Rules, 2021. Online platforms have a legal obligation to prevent the spread of misinformation and to exercise due diligence or reasonable efforts to identify misinformation and deepfakes.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 were amended in 2023, and the online gaming industry is now required to abide by a set of rules. These include not hosting harmful or unverified online games, not promoting games without approval from a self-regulatory body (SRB), labelling real-money games with a verification mark, educating users about deposit and winning policies, setting up a quick and effective grievance redressal process, requesting user information, and not offering credit or financing for real-money gaming. These steps are intended to ensure ethical and transparent behaviour across the online gaming industry.
- With an emphasis on personal data protection, the government enacted the Digital Personal Data Protection Act, 2023, a new framework that aims to protect individuals' digital personal data.
- The "Cyber Swachhta Kendra" (Botnet Cleaning and Malware Analysis Centre) is part of the Government of India's Digital India initiative under the Ministry of Electronics and Information Technology (MeitY), aimed at creating a secure cyberspace. It tackles cybersecurity threats through malware analysis and botnet detection, and works with antivirus software providers and internet service providers to establish a safer digital environment.
Strategies by Social Media Platforms
Various social media platforms, such as YouTube and Meta, have reformed their policies on misinformation and disinformation, reflecting a comprehensive strategy for combating deepfakes and misleading content on their networks. YouTube prioritizes removing content that violates its policies, reducing recommendations of questionable information, promoting reliable news sources, and supporting reputable creators. It draws on well-established facts and expert consensus to counter misrepresentation. Throughout the enforcement process, a mix of content reviewers and machine learning is used to quickly remove information that violates policies, and the policies themselves are designed in partnership with external experts and creators. To improve the overall quality of information that users have access to, the platform also gives users the ability to flag material, places a strong emphasis on media literacy, and prioritizes providing context.
Meta’s policies address different categories of misinformation, aiming for a balance between expression, safety, and authenticity. Content that directly contributes to imminent harm or political interference is removed, with partnerships with experts for assessment. To counter misinformation more broadly, its efforts include fact-checking partnerships, directing users to authoritative sources, and promoting media literacy.
Promoting ‘Tech for Good’
In 2024, the vision for "Tech for Good" has expanded to include programmes that enable people to understand an ever more complex digital world and promote a more secure and reliable online community. The emphasis is on using technology to strengthen cybersecurity defenses and combat dishonest practices. This entails encouraging digital literacy and equipping users with the knowledge and skills to recognize and stop false information, online dangers, and cybercrime. Furthermore, the focus is on promoting and showcasing effective strategies for preventing cybercrime through cooperation between citizens, government agencies, and technology businesses. The intention is to employ technology's positive aspects to build a digital environment that values security, honesty, and ethical behaviour while also promoting innovation and connectedness.
Conclusion
In the evolving digital landscape, difficulties are presented by false information powered by artificial intelligence and by the misuse of advanced technology by bad actors. Notably, there are ongoing collaborative efforts and progress in creating a secure digital environment. Governments, social media corporations, civil society organisations and tech companies have shown a united commitment to tackling the intricacies of the digital world in 2024 through their own projects. It is evident that everyone shares an obligation to establish a safe online environment through the adoption of ethical norms, protective laws, and cybersecurity measures. The "Tech for Good" goal for 2024, which emphasizes digital literacy, collaboration, and the ethical use of technology, seems promising. The cooperative efforts of people, governments, civil society organisations and tech firms will play a crucial role as we continue to improve our policies, practices, and technical solutions.
References:
- https://news.abplive.com/fact-check/deepfakes-ai-driven-misinformation-year-2023-brought-new-era-of-digital-deception-abpp-1651243
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445

Introduction
Words come easily, but not necessarily the consequences that follow. Imagine a 15-year-old on the internet hoping that the world will be kind to him and help him gain confidence; instead, someone chooses to be mean, and the child becomes the victim of cyberbullying in the form of online trolling. The consequences of trolling can be serious, including eating disorders, substance abuse, conduct issues, body dysmorphia, negative self-esteem, and, in tragic cases, self-harm and suicide attempts among vulnerable individuals. The effects of online trolling can also include anxiety, depression, and social isolation. This is just one example; hate speech and online abuse can touch anyone, regardless of age, background, or status. The damage may take different forms, but its impact is far-reaching. In today’s digital age, hate speech spreads rapidly through online platforms, often amplified by AI algorithms.
As we observe the International Day for Countering Hate Speech today, 18th June, let us pledge that if we have ever been mean to someone on the internet, we will never repeat that behaviour, and that if we have been victims, we will stand up against the perpetrators and report them.
This year, the theme for the International Day for Countering Hate Speech is “Hate Speech and Artificial Intelligence Nexus: Building coalitions to reclaim inclusive and secure environments free of hatred.” UN Secretary-General Antonio Guterres, in his statement, said, “Today, as this year’s theme reminds us, hate speech travels faster and farther than ever, amplified by Artificial Intelligence. Biased algorithms and digital platforms are spreading toxic content and creating new spaces for harassment and abuse."
Coded Convictions: How AI Reflects and Reinforces Ideologies
Algorithms have swiftly taken the place of human judgement; they tamper with your tastes quietly and invisibly. They have become a central component of social media user engagement and content distribution. While these tools are designed to improve the user experience, they frequently, if inadvertently, spread divisive ideologies and push extremist propaganda. This amplification can strengthen extremist organisations, spread misinformation, and deepen societal tensions. The phenomenon, known as “algorithmic radicalisation,” describes how a platform's content selection approach can entice people down ideological rabbit holes and shape their ideas. AI-driven algorithms often prioritise engagement over ethics, enabling divisive and toxic content to trend and placing vulnerable groups, especially youth and minorities, at risk. The UN’s Strategy and Plan of Action on Hate Speech, launched on June 18, 2019, recognises that while AI holds promise for early detection and prevention of harmful speech, it also demands stringent human rights safeguards. Without regulation, these tools can themselves become purveyors of bias and exclusion.
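To make the mechanism concrete, here is a small, purely illustrative Python sketch, not a description of any real platform's ranking system; the posts, engagement estimates and toxicity scores are invented. It shows how ranking purely by predicted engagement pushes the most provocative items to the top, and how even a simple toxicity penalty changes the ordering.

```python
# Toy illustration of "engagement-first" ranking versus a safety-aware variant.
# All posts and scores below are invented for demonstration purposes only.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    predicted_engagement: float  # e.g. expected clicks/shares from a model (0-1)
    toxicity_score: float        # e.g. output of a toxicity classifier (0-1)


posts = [
    Post("Calm, factual explainer", predicted_engagement=0.20, toxicity_score=0.02),
    Post("Outrage-bait conspiracy thread", predicted_engagement=0.85, toxicity_score=0.90),
    Post("Divisive us-vs-them rant", predicted_engagement=0.70, toxicity_score=0.75),
]

# Engagement-only ranking: the most provocative items float to the top.
engagement_ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)


def safety_aware_score(post: Post, penalty: float = 1.0) -> float:
    """Down-rank content in proportion to its predicted toxicity."""
    return post.predicted_engagement - penalty * post.toxicity_score


# Safety-aware ranking: the calm explainer now ranks first.
safety_ranked = sorted(posts, key=safety_aware_score, reverse=True)

print("Engagement-only order:", [p.text for p in engagement_ranked])
print("Safety-aware order:   ", [p.text for p in safety_ranked])
```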
India’s Constitutional Resolve and Civilizational Ethos against Hate
India has always taken pride in being inclusive and united rather than divided, and its stand on hate speech is no different. Like the United Nations, India believes in these same values. Although India has won many battles against hate speech, the war is not over, and the challenge is more prominent than ever due to advances in communication technologies. In India, the right to freedom of speech and expression is protected under Article 19(1)(a), but its exercise is subject to reasonable restrictions under Article 19(2). Landmark rulings such as Ramji Lal Modi v. State of U.P. and Amish Devgan v. UOI have clarified that speech can be curbed if it incites violence or undermines public order. Section 69A of the IT Act, 2000 empowers the government to block content, and similar principles are reflected in Section 196 of the BNS, 2023 (corresponding to Section 153A IPC) and Section 299 of the BNS, 2023 (corresponding to Section 295A IPC). Platforms are also required to track down the creators of harmful content, remove it within a reasonable time, and fulfil their due diligence requirements under the IT Rules.
While there is no denying that India needs to be well-equipped and normatively prepared to tackle hate propaganda and divisive forces, its rich culture and history, rooted in philosophies of Vasudhaiva Kutumbakam (the world is one family) and pluralistic traditions, have long stood as a beacon of tolerance and coexistence. By revisiting these civilizational values, we can resist divisive forces and renew our collective journey toward harmony and peaceful living.
CyberPeace Message
The ultimate goal is to create internet and social media platforms that are better, safer and more harmonious for every individual, irrespective of his/her/their social and cultural background. CyberPeace stands resolute in promoting digital media literacy and cyber resilience, and in consistently pushing for greater accountability from social media platforms.
References
- https://www.un.org/en/observances/countering-hate-speech
- https://www.artemishospitals.com/blog/the-impact-of-trolling-on-teen-mental-health
- https://www.orfonline.org/expert-speak/from-clicks-to-chaos-how-social-media-algorithms-amplify-extremism
- https://www.techpolicy.press/indias-courts-must-hold-social-media-platforms-accountable-for-hate-speech/