#FactCheck - Manipulated Image Alleging Disrespect Towards PM Circulates Online
Executive Summary:
A manipulated image showing someone making an offensive gesture towards Prime Minister Narendra Modi is circulating on social media. The original photo, however, shows no such behaviour towards the Prime Minister. The CyberPeace Research Team analysed the image and found that the genuine photograph was published in a Hindustan Times article in May 2019, where no rude gesture is visible. A comparison of the viral and authentic images clearly shows the manipulation. Further investigation revealed that The Hitavada and ABP Live also published the same image in 2019.

Claims:
A picture showing an individual making a derogatory gesture towards Prime Minister Narendra Modi is being widely shared across social media platforms.



Fact Check:
Upon receiving the claim, we immediately ran a reverse image search and found an article by Hindustan Times carrying a similar photo with no sign of any obscene gesture towards PM Modi.

ABP Live and The Hitavada also published the same image on their websites in May 2019.


Comparing the viral photo with the versions published on official news websites, we found the two images identical in every respect except for the derogatory gesture added to the viral one.
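As an illustration of how such a comparison can be automated, here is a minimal sketch using the Python Pillow and imagehash libraries; the file names and the distance threshold are hypothetical, and manual visual inspection remains the primary method. Perceptual hashes change very little under small edits, so a tiny distance between two differently captioned images is a strong signal that one is a doctored copy of the other.

```python
# A minimal sketch of comparing two versions of an image with
# perceptual hashing, assuming Pillow and imagehash are installed.
from PIL import Image
import imagehash

# Hypothetical file names for the viral and original photos.
viral = imagehash.phash(Image.open("viral_image.jpg"))
original = imagehash.phash(Image.open("original_2019.jpg"))

# Subtracting two hashes gives the Hamming distance (0 = identical).
distance = viral - original
print(f"Hamming distance: {distance}")
if distance <= 10:  # the threshold is a judgment call, not a standard
    print("Near-duplicate images; likely an edited copy.")
```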

From this, we conclude that someone took the original image, published in May 2019, edited in a disrespectful hand gesture, and circulated the doctored version, which has recently gone viral across social media and has no connection with reality.
Conclusion:
In conclusion, the manipulated picture circulating online showing someone making a rude gesture towards Prime Minister Narendra Modi has been debunked by the CyberPeace Research Team. The viral image is merely an edited version of the original image published in 2019. This demonstrates the need for all social media users to verify information and facts before sharing, to prevent the spread of fake content. Hence, the viral image is fake and misleading.
- Claim: A picture shows someone making a rude gesture towards Prime Minister Narendra Modi
- Claimed on: X, Instagram
- Fact Check: Fake & Misleading
Introduction
The judiciary as an institution has always been kept on a pedestal and is often seen as the embodiment of justice. From dictatorship to democracy, the judiciary plays a central role; even where the judiciary is controlled, the legitimacy of policies is, in one sense or another, derived from it. In democracies around the world, the independence and well-being of the judiciary are seen as the barometer of democracy’s strength. In this global age, where technology is omnipresent, the judiciary is no exception. Now more than ever, when the judiciary is at the centre of evaluative focus, it becomes imperative to make it transparent. Digitisation of the judiciary is not just an administrative reform; it is an extension of constitutionalism into the technological realm, an effort to ensure that justice is accessible, transparent, and efficient. July 25, the International Day for Judicial Well-being, is commemorated every year with a clear message: judicial well-being supports “anti-corruption, access to justice, and sustainable peace.”
Digitisation by Design: Justice in the Age of Transformation
The Prime Minister of India envisioned the future of the Indian legal system in alignment with a digitised world when he said, “Technology will integrate police, forensics, jails, and courts, and will speed up their work as well. We are moving towards a justice system that will be fully future-ready.” Although this future faces many challenges, various initiatives are easing the transition. India is streamlining operations, reducing delays, and enhancing access to justice for all by integrating AI into legal research, case management, judicial procedures, and law enforcement. Machine Learning (ML), Natural Language Processing (NLP), Optical Character Recognition (OCR), and predictive analytics are just a few of the AI-powered technologies currently being used to improve crime prevention, automate administrative duties, and strengthen case monitoring.
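To make one of these building blocks concrete, here is a minimal sketch of the OCR step, assuming the Tesseract engine and its pytesseract Python wrapper are installed; the file name is hypothetical, and real court-digitisation pipelines layer pre-processing, layout analysis, and quality checks on top of this.

```python
# A minimal OCR sketch, assuming Tesseract and pytesseract are installed.
from PIL import Image
import pytesseract

# Hypothetical scanned page from a paper case file.
page = Image.open("case_record_page1.png")

# Extract raw text; Tesseract ships with English, and Indic language
# packs (e.g. lang="hin" for Hindi) can be installed separately.
text = pytesseract.image_to_string(page, lang="eng")
print(text[:500])  # preview the first 500 characters
```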
The digitisation of Indian courts is a structural necessity rather than just a question of contemporary convenience. Miscarriages of justice have frequently resulted from the growing backlog of cases, challenges with maintaining records, and the loss of physical files. In the seminal case of State of U.P. v. Abhay Raj Singh, the courts acknowledged that a conviction could be overturned by missing records alone. With millions of legal documents at risk, digitisation becomes a shield against such a collapse and a tool for preserving judicial memory.
Judicial Digitalisation in India: Institutional Initiatives and Infrastructural Advancements
For centuries, towering bundles of courtroom files stood as dusty monuments to knowledge: sacred, chaotic, and accessible to a select few. But as we stand in 2025, the physical boundaries of a traditional courtroom have blurred, and the Indian government is actively working to transform the legal system. The e-Courts Mission Mode Project is a flagship initiative that aims to utilise Information and Communication Technology (ICT) to modernise and advance the Indian judiciary. This groundbreaking effort, led by the Department of Justice, Government of India, is being carried out in close coordination with the Supreme Court of India’s e-Committee. According to the Ministry of Law and Justice, responding to a query in the Rajya Sabha, the Supreme Court (SC) held 7.5 lakh hearings through video conferencing between 2020 and 2024. Technological tools such as the Supreme Court Vidhik Anuvaad Software (SUVAS), the Case Information Software (CIS), and the Supreme Court Portal for Assistance in Court’s Efficiency (SUPACE) were established to make all pertinent case facts easily available. In another move, the SC Registry, in close coordination with IIT Madras, has created and implemented AI- and ML-based technologies integrated with the Registry’s electronic filing software. This underscores that cybersecurity and digital infrastructure are no longer administrative add-ons but essential building blocks for ensuring judicial transparency, efficiency, and resilience.
E-Governance and Integrity: The Judiciary in Transition
The United Nations recognises the fundamentals of the judiciary’s well-being and how corruption, like water to rust, corrodes not merely the integrity of a single judge but the perception of the whole institution. This threat is recognised by the United Nations Convention against Corruption (UNCAC), particularly Article 11, which urges the protection of the judiciary’s independence and integrity. Digitisation, while it cannot operate in a vacuum, acts as a structural antidote to corruption by embedding transparency into the fabric of justice delivery: automated registry systems, e-filing, and real-time access to case data drastically reduce discretionary power and the potential for behind-the-scenes manipulation. However, digital systems are only as ethical as the people who design, maintain, and oversee them, and they bring their own limitations.
Conclusion: CyberPeace and the Future of Ethical Digital Justice
The potential of digitalisation resides not just in efficiency but also in equity, as India’s judiciary balances tradition and change. A robust democracy, where justice is lit by code rather than hidden under files, is built on a foundation of an open, accessible, and technologically advanced court. This change is not risk-free, though. Secure justice must also be a component of digital justice. The very values that digitisation seeks to preserve are at risk from algorithmic opacity, data breaches, and insecure technologies.
Our vision is not just of a digitalised court system but of a digitally just society, one where judicial data is protected, legal processes are democratised, and innovation upholds constitutionalism. Therefore, as a step forward, CyberPeace resolves to support AI upskilling for legal professionals, advocate for secure-by-design court infrastructure, and facilitate dialogue between technologists and judicial actors to build trust in the digital justice ecosystem. CyberPeace is dedicated to cyber transparency, privacy protection, and ethical AI.
References
- https://www.un.org/en/observances/judicial-well-being
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2106239
- https://www.barandbench.com/view-point/facilitating-legal-access-digitalization-of-supreme-court-high-court-records
- https://www.pib.gov.in/PressReleaseIframePage.aspx?PRID=2085127
- https://www.medianama.com/2024/12/223-supreme-court-seven-lakh-video-conferences-four-year-rajya-sabha/

Introduction
In 2022, Oxfam’s India Inequality Report revealed a worsening digital divide, highlighting that only 38% of households in the country are digitally literate. Further, only 31% of the rural population uses the internet, compared to 67% of the urban population. Over time, with increasing global awareness of the importance of digital privacy, the digital divide has extended into a digital privacy divide, whereby different levels of privacy are afforded to different sections of society. This further entrenches social inequalities and impedes access to fundamental rights.
Digital Privacy Divide: A by-product of the digital divide
The digital divide has evolved from its earlier interpretations into a multi-level issue: level I refers to the lack of physical access to technologies; level II to the lack of digital literacy and skills; and, more recently, level III to the differential impacts of digital access. The Digital Privacy Divide (DPD) refers to the gaps in digital privacy protection afforded to users based on their socio-demographic patterns. It forms a subset of the digital divide, which involves the uneven distribution, access, and usage of information and communication technologies (ICTs). Typically, DPD exists when ICT users receive distinct levels of digital privacy protection. As such, it forms part of the conversation on digital inequality.
Contrary to popular perception, DPD, being grounded in notions of privacy, does not always map onto ideas of individualism and collectivism, and may be shaped by both internal and external factors at the national level. A study on the impacts of DPD conducted in the U.S., India, Bangladesh, and Germany highlighted that respondents in Germany and Bangladesh expressed more concern about their privacy than respondents in the U.S. and India. This suggests that despite the U.S. having a strong tradition of individualistic rights, reflected in domestic legal frameworks such as the Fourth Amendment, the topic of data privacy has not garnered sufficient interest from the population. Most individuals consider forgoing the right to privacy a necessary evil in order to access services and schemes and to stay abreast of technological advances. Research shows that 62-63% of Americans believe that data collection by companies and the government has become an inescapable part of modern life, 81% believe they have very little control over what data companies collect, and about 81% believe that the risks of data collection outweigh the benefits. Similarly, in Japan, data privacy is thought to be an adopted concept emerging from international pressure to regulate, rather than an ascribed right: since collectivism and collective decision-making are more valued in Japan, privacy is positioned as subjective, timeserving, and an idea imported from the West.
Regardless, inequality in privacy preservation often reinforces social inequality. Surveillance practices geared towards specific groups show that marginalised communities are likely to have less data privacy. For example, migrants, labourers, persons with a conviction history, and marginalised racial groups are often subjected to extremely invasive surveillance on suspicion of posing threats, and are thus forced to flee their place of birth or residence. This also highlights that the focus of DPD is not limited to those who lack data privacy but extends to those who have (whether by design or by force) excess privacy. While at one end of the spectrum excessive surveillance, carried out by both governments and private entities, forces immigrants to wait in deportation centres during the pendency of their cases, the other end hosts a vast number of undocumented individuals who avoid government contact for fear of deportation, despite experiencing high rates of crime victimisation.
DPD is also noted among groups with differential knowledge and skills in cybersecurity. For example, in India, data privacy laws mandate that information be provided on the order of a court or an enforcement agency. However, individuals with knowledge of advanced encryption are adopting communication channels with encryption protocols that the provider cannot control (and are thus able to exercise their right to privacy more effectively), in contrast with individuals who have little knowledge of encryption, implying a security as well as an intellectual divide. While several options for secure communication exist, such as Pretty Good Privacy (PGP) for encrypted email, they are complex and not easy to use, and some, like the Tor Browser, carry negative reputations. Cost considerations are also a major factor propelling DPD, since users who cannot afford devices such as Apple’s, which offer privacy by default, are forced to opt for devices with relatively poor built-in encryption.
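To illustrate what this encryption know-how involves in practice, below is a minimal sketch of public-key encryption using the PyNaCl library; it demonstrates the general concept behind tools like PGP rather than PGP itself, and the key names and message are purely illustrative.

```python
# A minimal public-key encryption sketch using PyNaCl (libsodium).
# It illustrates the concept behind tools like PGP, not PGP itself.
from nacl.public import PrivateKey, Box

# Each party generates a key pair; public keys can be shared openly.
sender_key = PrivateKey.generate()
receiver_key = PrivateKey.generate()

# The sender encrypts with their private key and the receiver's public key.
sender_box = Box(sender_key, receiver_key.public_key)
ciphertext = sender_box.encrypt(b"a private message")

# Only the receiver can decrypt it; a service provider relaying the
# ciphertext cannot read it.
receiver_box = Box(receiver_key, sender_key.public_key)
print(receiver_box.decrypt(ciphertext).decode())
```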
Children remain the most vulnerable group. During the pandemic, it was noted that only 24% of Indian households had internet facilities to access e-education, and several reported needing to access free internet outside their homes. These public networks are known for their lack of security and privacy, as traffic can be monitored by the hotspot operator or others on the network if proper encryption measures are not in place. Elsewhere, students without access to devices for remote learning have limited alternatives and are often forced to rely on Chromebooks and associated Google services. In response, Google provided free Chromebooks and mobile hotspots to students in need during the pandemic, aiming to address the digital divide. However, in 2024, New Mexico was reported to be suing Google for allegedly collecting children’s data through its educational products provided to the state’s schools, claiming that it tracks students’ activities on their personal devices outside the classroom. This signals the difficulty of ensuring the privacy of lower-income students while they access basic education.
Policy Recommendations
Digital literacy is one of the critical components in bridging the DPD. It equips individuals with the skills needed to recognise and respond to privacy violations. Studies show that low-income users remain less confident in their ability to manage their privacy settings than high-income individuals. Thus, emphasis should be placed not only on educating people in technology usage but also in privacy practices, so that they improve their internet skills and take informed control of their digital identities.
In the U.S., scholars have noted the role of libraries and librarians in safeguarding intellectual privacy. The Library Freedom Project, for example, has sought to ensure that the skills and knowledge required to exercise internet freedoms are available to all. The Project channelled the core values of the library profession, i.e., intellectual freedom, literacy, equity of access to recorded knowledge and information, privacy, and democracy. As a result, the Project successfully conducted workshops on internet privacy for the public and openly objected to the Department of Homeland Security’s attempts to shut down the use of encryption technologies in libraries. The International Federation of Library Associations adopted a Statement on Privacy in the Library Environment in 2015, which specified that “when libraries and information services provide access to resources, services or technologies that may compromise users’ privacy, libraries should encourage users to be aware of the implications and provide guidance in data protection and privacy.” This should serve as an indicative case study for setting up similar protocols in inclusive public institutions in India, such as Anganwadis, local libraries, skill development centres, and non-government/non-profit organisations where free education is disseminated. The workshops conducted must address two critical aspects: first, enhancing the know-how of using public digital infrastructure and popular technologies (thereby de-alienating technology); and second, reframing privacy as a right an individual has rather than something that they own.
However, digital literacy should not be wholly relied upon, since it shifts the responsibility of privacy protection onto the individual, who may be neither aware of the risks nor able to act on them consistently. Data literacy also does not address the larger issues of data brokers, consumer profiling, surveillance, etc. Consequently, companies should be obligated to provide simplified privacy summaries, in addition to creating accessible, easy-to-use technical products and privacy tools. Most notable legislation addresses this problem by mandating notice and consent for collecting users’ personal data, despite slow enforcement. The Digital Personal Data Protection Act, 2023 in India goes further in addressing DPD by not only mandating valid consent but also ensuring that privacy notices remain accessible in local languages, given the diversity of the population.
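As a rough sketch of how such an obligation might translate into an engineering artefact, the hypothetical data model below pairs a multilingual notice with a revocable consent record; the field names and structure are illustrative and are not drawn from the Act’s text or any real system.

```python
# A hypothetical consent-record sketch inspired by the notice-and-consent
# obligations discussed above; everything here is illustrative, not a
# representation of the DPDP Act or any real implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PrivacyNotice:
    purpose: str
    # Notice text keyed by language code, e.g. {"en": "...", "hi": "..."}.
    translations: dict[str, str] = field(default_factory=dict)

@dataclass
class ConsentRecord:
    user_id: str
    notice: PrivacyNotice
    language_shown: str  # the language in which the user actually saw the notice
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    withdrawn_at: datetime | None = None  # consent must remain revocable

    def withdraw(self) -> None:
        self.withdrawn_at = datetime.now(timezone.utc)

notice = PrivacyNotice(
    purpose="account creation",
    translations={
        "en": "We collect your email address to create your account.",
        "hi": "हम आपका खाता बनाने के लिए आपका ईमेल पता एकत्र करते हैं।",
    },
)
consent = ConsentRecord(user_id="user-123", notice=notice, language_shown="hi")
```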
References
- https://idronline.org/article/inequality/indias-digital-divide-from-bad-to-worse/
- https://arxiv.org/pdf/2110.02669
- https://arxiv.org/pdf/2201.07936
- https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/
- https://eprints.lse.ac.uk/67203/1/Internet%20freedom%20for%20all%20Public%20libraries%20have%20to%20get%20serious%20about%20tackling%20the%20digital%20privacy%20divi.pdf
- https://openscholarship.wustl.edu/cgi/viewcontent.cgi?article=6265&context=law_lawreview
- https://bosniaca.nub.ba/index.php/bosniaca/article/view/488/pdf
- https://www.hindustantimes.com/education/just-24-of-indian-households-have-internet-facility-to-access-e-education-unicef/story-a1g7DqjP6lJRSh6D6yLJjL.html
- https://www.forbes.com/councils/forbestechcouncil/2021/05/05/the-pandemic-has-unmasked-the-digital-privacy-divide/
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.isc.meiji.ac.jp/~ethicj/Privacy%20protection%20in%20Japan.pdf
- https://socialchangenyu.com/review/the-surveillance-gap-the-harms-of-extreme-privacy-and-data-marginalization/

In its Global Risks Report, the World Economic Forum reported that AI-generated misinformation and disinformation are the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents surveyed in September 2023. Artificial intelligence is automating the creation of fake news at a pace that far outstrips fact-checking, spurring an explosion of web content that mimics factual articles while disseminating false information about grave themes such as elections, wars, and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google’s Gemini, OpenAI’s ChatGPT 3.5 and 4.0, and Microsoft’s Copilot were inaccurate one-third of the time when queried about election data. An innovative regulatory approach, such as regulatory sandboxes, is therefore needed to address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge lies in detecting and managing AI-driven misinformation: it is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
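As a toy illustration of what automated detection involves, the sketch below trains a simple text classifier with scikit-learn; the training samples are invented, and production systems rely on far richer signals such as provenance, network behaviour, and multimodal analysis.

```python
# A toy misinformation classifier using scikit-learn; the sample data
# below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Official results confirm the vote count after a full audit.",
    "SHOCKING: secret document proves the election was stolen!!!",
    "Health ministry publishes updated vaccination schedule.",
    "Doctors don't want you to know this one miracle cure!",
]
labels = [0, 1, 0, 1]  # 0 = credible, 1 = likely misinformation

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Miracle cure the government is hiding from you!"]))
```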
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it has yet to achieve true acceptance or fulfil its positive potential, because there is widespread, and justified, cynicism about it. General public sentiment about AI is laced with concern and doubt regarding the technology’s trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries, most recently in sectors like fintech, as with the UK’s Financial Conduct Authority sandbox. These models have been shown to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that regulatory sandboxes facilitate firm financing and market entry and increase speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in fintech, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies with the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
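To make the evaluation step concrete, here is a minimal sketch of the kind of metrics a regulator might examine during a sandbox trial; the flag data and the threshold are invented for illustration.

```python
# A minimal sketch of scoring a detection tool against human-reviewed
# ground truth in a sandbox trial; the data and threshold are invented.

# 1 = misinformation, 0 = legitimate (per human fact-checkers)
ground_truth = [1, 0, 1, 1, 0, 0, 1, 0]
model_flags  = [1, 0, 1, 0, 0, 1, 1, 0]  # the tool's output on the same items

tp = sum(1 for t, f in zip(ground_truth, model_flags) if t == 1 and f == 1)
fp = sum(1 for t, f in zip(ground_truth, model_flags) if t == 0 and f == 1)
fn = sum(1 for t, f in zip(ground_truth, model_flags) if t == 1 and f == 0)

precision = tp / (tp + fp)  # how often a flag is correct
recall = tp / (tp + fn)     # how much misinformation is caught
print(f"precision={precision:.2f}, recall={recall:.2f}")

# A regulator might require, say, precision above 0.8 before wider
# deployment, to limit wrongly suppressed legitimate speech.
```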
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to test solutions for regulating AI-generated misinformation before they are deployed at scale. Some policy recommendations are as follows:
- Create guidelines for a global sandbox standard that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in the development of anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation and about the role of regulatory sandboxes, helping to manage public expectations.
- Periodic reviews and updates of sandbox frameworks should be conducted to keep pace with advances in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions