#FactCheck - Viral image of Jharkhand Police seizing a truckload of cash and gold coins is AI-generated
Executive Summary:
An image circulating on social media claims to show a truck carrying money and gold coins impounded by the Jharkhand Police during the 2024 Lok Sabha elections. The Research Wing, CyberPeace has verified the image and found it to be generated using artificial intelligence. No credible news reports support the claim that the police made such a seizure in Jharkhand, and AI image detection tools confirmed that the image is AI-made. Users are advised to verify the authenticity of any image or content before sharing it.

Claims:
The viral social media post depicts a truck intercepted by the Jharkhand Police during the 2024 Lok Sabha elections. It was claimed that the truck was filled with large amounts of cash and gold coins.



Fact Check:
On receiving the posts, we started with a keyword search for any relevant news articles related to this claim. An incident of this scale would have been covered by most media houses, yet we found no such reports. We then closely analysed the image for anomalies commonly found in AI-generated images, and identified several.

The texture of the tree in the image appears blended. The shadows of the people also look unnatural, which is a common flaw in AI-generated images. A close look at the right hand of the elderly man in white attire shows his thumb blending into his clothing.
We then analysed the image with an AI image detection tool named ‘Hive Detector’, which found the image to be AI-generated.

To further validate the finding, we checked the image with another AI image detection tool named ‘ContentAtScale AI detection’, which assessed it as 82% AI-generated.
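The verification workflow described above amounts to a simple decision rule: if independent detectors each report a high probability of AI generation, the image is flagged. A minimal sketch in Python; the tool names and scores below are illustrative placeholders taken from this fact-check, not an actual API of either tool.

```python
def flag_as_ai_generated(scores, threshold=0.8):
    """Flag an image as likely AI-generated when every independent
    detector reports a probability at or above the threshold.
    (Hypothetical aggregation rule, not either vendor's real API.)"""
    return all(score >= threshold for score in scores.values())

# Illustrative scores mirroring the article's workflow: a positive
# verdict from 'Hive Detector' and an 82% score from
# 'ContentAtScale AI detection'.
detector_scores = {
    "hive_detector": 1.0,      # reported the image as AI-generated
    "content_at_scale": 0.82,  # reported 82% AI-generated
}

print(flag_as_ai_generated(detector_scores))  # True
```

Requiring agreement from multiple detectors, as the fact-check does, reduces the chance of acting on a single tool's false positive.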

After validation of the viral post using AI detection tools, it is apparent that the claim is misleading and fake.
Conclusion:
The viral image of a truck impounded by the Jharkhand Police is fake and misleading: it is AI-generated, and no credible source supports the claim made. The Research Wing, CyberPeace has previously debunked similar AI-generated images carrying misleading claims. Netizens must verify news circulating on social media before sharing it further.
- Claim: The photograph shows a truck intercepted by Jharkhand Police during the 2024 Lok Sabha elections, which was allegedly loaded with huge amounts of cash and gold coins.
- Claimed on: Facebook, Instagram, X (Formerly known as Twitter)
- Fact Check: Fake & Misleading
Related Blogs
Introduction
The rise of misinformation, disinformation, and synthetic media content on the internet and social media platforms has raised serious concerns, emphasizing the need for responsible use of social media to maintain information accuracy and combat misinformation incidents. With online misinformation rampant all over the world, the World Economic Forum's 2024 Global Risk Report, notably ranks India amongst the highest in terms of risk of mis/disinformation.
The widespread online misinformation on social media platforms necessitates a joint effort between tech/social media platforms and the government to counter such incidents. The Indian government is actively seeking to collaborate with tech/social media platforms to foster a safe and trustworthy digital environment and to also ensure compliance with intermediary rules and regulations. The Ministry of Information and Broadcasting has used ‘extraordinary powers’ to block certain YouTube channels, X (Twitter) & Facebook accounts, allegedly used to spread harmful misinformation. The government has issued advisories regulating deepfake and misinformation, and social media platforms initiated efforts to implement algorithmic and technical improvements to counter misinformation and secure the information landscape.
Efforts by the Government and Social Media Platforms to Combat Misinformation
- Advisory regulating AI, deepfake and misinformation
The Ministry of Electronics and Information Technology (MeitY) issued a modified advisory on 15th March 2024, in supersession of the advisory issued on 1st March 2024. The latest advisory specifies that platforms should inform all users about the consequences of dealing with unlawful information on platforms, including disabling access, removing non-compliant information, suspension or termination of the user's access or usage rights to their account, and punishment under applicable law. The advisory necessitates identifying synthetically created content across various formats, and instructs platforms to employ labels, unique identifiers, or metadata to ensure transparency.
- Rules related to content regulation
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Updated as on 6.4.2023) have been enacted under the IT Act, 2000. These rules assign specific obligations on intermediaries as to what kind of information is to be hosted, displayed, uploaded, published, transmitted, stored or shared. The rules also specify provisions to establish a grievance redressal mechanism by platforms and remove unlawful content within stipulated time frames.
- Counteracting misinformation during Indian elections 2024
To counter misinformation during the 2024 Indian elections, the government and social media platforms worked to protect electoral integrity from the threat of mis/disinformation. The Election Commission of India (ECI) launched the 'Myth vs Reality Register' to combat misinformation and ensure the integrity of the electoral process during the general elections. The ECI also collaborated with Google to empower citizens by making critical voting information easy to find on Google Search and YouTube. In this way, Google supported the 2024 Indian General Election by providing high-quality information to voters and helping people navigate AI-generated content. Google connected voters to helpful information through product features that surface data from trusted institutions across its portfolio, and YouTube showcased election information panels featuring content from authoritative sources.
- YouTube and X (Twitter) new ‘Notes Feature’
- Notes Feature on YouTube: YouTube is testing an experimental feature that allows users to add notes to provide relevant, timely, and easy-to-understand context for videos. This initiative builds on previous products that display helpful information alongside videos, such as information panels and disclosure requirements when content is altered or synthetic. YouTube clarified that the pilot will be available on mobiles in the U.S. and in the English language, to start with. During this test phase, viewers, participants, and creators are invited to give feedback on the quality of the notes.
- Community Notes feature on X: Community Notes on X aims to enhance the understanding of potentially misleading posts by allowing users to add context to them. Contributors can leave notes on any post, and if enough people rate the note as helpful, it will be publicly displayed. The algorithm is open source and publicly available on GitHub, allowing anyone to audit, analyze, or suggest improvements. However, Community Notes do not represent X's viewpoint and cannot be edited or modified by their teams. A post with a Community Note will not be labelled, removed, or addressed by X unless it violates the X Rules, Terms of Service, or Privacy Policy. Failure to abide by these rules can result in removal from accessing Community Notes and/or other remediations. Users can report notes that do not comply with the rules by selecting the menu on a note and selecting ‘Report’ or using the provided form.
CyberPeace Policy Recommendations
Countering widespread online misinformation on social media platforms requires a multipronged approach involving joint efforts from different stakeholders. Platforms should invest in state-of-the-art algorithms and technology to detect and flag suspected misleading information. They should also establish trustworthy fact-checking protocols and collaborate with expert fact-checking groups. Governments must encourage campaigns, seminars, and other educational materials to increase public awareness and digital literacy about the risks and impacts of mis/disinformation. Netizens should be equipped with the skills to distinguish factual from misleading information so they can navigate the digital information landscape successfully. Joint efforts by government authorities, tech companies, and expert cybersecurity organisations are vital to promoting a secure and honest online information landscape and countering the spread of mis/disinformation. Platforms must encourage users to maintain appropriate online conduct and abide by their terms & conditions and community guidelines. Encouraging a culture of truth and integrity on the internet, honouring differing points of view, and confirming facts all help to create a more reliable and information-resilient environment.
References:
- https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://blog.youtube/news-and-events/new-ways-to-offer-viewers-more-context/
- https://help.x.com/en/using-x/community-notes

Modern international trade heavily relies on data transfers for the exchange of digital goods and services. User data travels across multiple jurisdictions and legal regimes, each with different rules for processing it. Since international treaties and standards for data protection are inadequate, states, in an effort to protect their citizens' data, have begun extending their domestic privacy laws beyond their borders. However, this opens a Pandora's box of legal and administrative complexities for both data protection authorities and data processors. The former must balance the harmonization of domestic data protection laws with their extraterritorial enforcement, without overreaching into the sovereignty of other states. The latter must comply with the data privacy laws of every state where they collect, store, and process data. While the international legal community continues to grapple with these challenges, India can draw valuable lessons to refine the Digital Personal Data Protection Act, 2023 (DPDP) in a way that effectively addresses these complexities.
Why Extraterritorial Application?
Since data moves freely across borders and entities collecting such data from users in multiple states can misuse it or use it to gain an unfair competitive advantage in local markets, data privacy laws carry a clause on their extraterritorial application. Thus, this principle is utilized by states to frame laws that can ensure comprehensive data protection for their citizens, irrespective of the data’s location. The foremost example of this is the European Union’s (EU) General Data Protection Regulation (GDPR), 2016, which applies to any entity that processes the personal data of its citizens, regardless of its location. Recently, India has enacted the DPDP Act of 2023, which includes a clause on extraterritorial application.
The Extraterritorial Approach: GDPR and DPDP Act
The GDPR is considered the toughest data privacy law in the world and sets a global standard in data protection. According to Article 3, its provisions apply not only to data processors within the EU but also to those established outside its territory, if they offer goods or services to, or monitor the behaviour of, data subjects within the EU. The enforcement of this regulation relies on heavy penalties for non-compliance, with fines of up to €20 million or 4% of the company's global turnover, whichever is higher, for severe violations. As a result, corporations based in the USA, like Meta and Clearview AI, have been fined over €1.5 billion and €5.5 million respectively under the GDPR.
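The GDPR fine ceiling described above ("€20 million or 4% of global turnover, whichever is higher") reduces to a one-line calculation. A minimal illustration; the turnover figures are made up for the example:

```python
def gdpr_max_fine(global_turnover_eur: float) -> float:
    """Upper limit for severe GDPR violations (Art. 83(5)):
    EUR 20 million or 4% of worldwide annual turnover,
    whichever is higher."""
    return max(20_000_000.0, 0.04 * global_turnover_eur)

# A company with EUR 1 billion turnover: 4% (EUR 40M) exceeds the floor.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
# A smaller company: the EUR 20M floor applies instead.
print(gdpr_max_fine(100_000_000))    # 20000000.0
```

The "whichever is higher" rule is what makes the ceiling bite for large multinationals: for any turnover above €500 million, the 4% figure exceeds the €20 million floor.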
Like the GDPR, the DPDP Act extends its jurisdiction under section 3(b) to foreign companies dealing with the personal data of data principals within Indian territory. It has a similar extraterritorial reach and prescribes a penalty of up to Rs 250 crore in case of breaches. However, neither the Act nor the DPDP Rules, 2025, which are currently under deliberation, elaborates an enforcement mechanism through which foreign companies can be held accountable.
Lessons for India’s DPDP on Managing Extraterritorial Application
- Clarity in Definitions: The GDPR clearly defines ‘personal data’, covering direct information such as name and identification number, indirect identifiers like location data, and online identifiers that can be used to establish the physical, physiological, genetic, mental, economic, cultural, or social identity of a natural person. It also prohibits revealing special categories of personal data, like religious beliefs and biometric data, to protect the fundamental rights and freedoms of data subjects. By contrast, the DPDP Act and Rules define ‘personal data’ vaguely, leaving broad scope for Big Tech and ad-tech firms to bypass obligations.
- International Cooperation: Compliance is complex for companies due to varying data protection laws in different countries. The success of regulatory measures in such a scenario depends on international cooperation for governing cross-border data flows and enforcement. For DPDP to be effective, India will have to foster cooperation frameworks with other nations.
- Adequate Safeguards for Data Transfers: The GDPR regulates data transfers outside the EU via pre-approved legal mechanisms such as standard contractual clauses or binding corporate rules to ensure that the same level of protection applies to EU citizens’ data even when it is processed outside the EU. The DPDP should adopt similar safeguards to ensure that Indian citizens’ data is protected when processed abroad.
- Revised Penalty Structure: The GDPR mandates a penalty structure that must be effective, proportionate, and dissuasive. The supervisory authority in each member state has the power to impose administrative fines as per these principles, up to an upper limit set by the GDPR. On the other hand, the DPDP’s penalty structure is simplistic and will disproportionately impact smaller businesses. It must take into regard factors such as nature, gravity, and duration of the infringement, its consequences, compliance measures taken, etc.
- Governance Structure: The GDPR envisages a multi-tiered governance structure comprising:
- National-level Data Protection Authorities (DPAs) for enforcing national data protection laws and the GDPR,
- European Data Protection Supervisor (EDPS) for monitoring the processing of personal data by EU institutions and bodies,
- European Commission (EC) for developing GDPR legislation, and
- European Data Protection Board (EDPB) for enabling coordination between the EC, EDPS, and DPAs
In contrast, the Data Protection Board (DPB) under DPDP will be a single, centralized body overseeing compliance and enforcement. Since its members are to be appointed by the Central Government, it raises questions about the Board’s autonomy and ability to apply regulations consistently. Further, its investigative and enforcement capabilities are not well defined.
Conclusion
The protection of the human right to privacy (under the International Covenant on Civil and Political Rights and the Universal Declaration of Human Rights) in today's increasingly interconnected digital economy warrants international standard-setting on cross-border data protection. In the meantime, states' reliance on the extraterritorial application of domestic laws is unavoidable. While India's DPDP Act takes steps in this direction, it must be refined to clarify its implementation mechanisms, pushed toward alignment with the data protection laws of other states, and made to account for the complexity of enforcement in cases involving extraterritorial jurisdiction. As India sets out to position itself as a global digital leader, a well-crafted extraterritorial framework under the DPDP Act will be essential to promote international trust in India's data governance regime.
Sources
- https://gdpr-info.eu/art-83-gdpr/
- https://gdpr-info.eu/recitals/no-150/
- https://gdpr-info.eu/recitals/no-51/
- https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://www.eqs.com/compliance-blog/biggest-gdpr-fines/#:~:text=ease%20the%20burden.-,At%20a%20glance,In%20summary
- https://gdpr-info.eu/art-3-gdpr/
- https://www.legal500.com/developments/thought-leadership/gdpr-v-indias-dpdpa-key-differences-and-compliance-implications/#:~:text=Both%20laws%20cover%20'personal%20data,of%20personal%20data%20as%20sensitive.
The spread of misinformation has become a cause for concern for all stakeholders, be it governments, policymakers, business organisations or citizens. The current push to combat misinformation is rooted in the growing awareness that it exploits sentiment and can result in economic instability, personal risks, and a rise in political, regional, and religious tensions. The circulation of misinformation poses significant challenges for organisations, brands and administrators of all types. Its spread online puts at risk not only the everyday content consumer but also the sharer and the platforms themselves. Sharing misinformation in the digital realm, intentionally or not, can have real consequences.
Consequences for Platforms
Platforms have been scrutinised for the content they allow to be published and what they don't. It is important to understand not only how this misinformation affects platform users, but also its impact and consequences for the platforms themselves. These consequences highlight the complex environment that social media platforms operate in, where the stakes are high from the perspective of both business and societal impact. They are:
- Legal Consequences: Platforms can be fined by regulators if they fail to comply with content moderation or misinformation-related laws. A prime example is the EU's Digital Services Act, which regulates digital services that act as intermediaries between consumers and goods, services, and content. Platforms can also face lawsuits from individuals, organisations or governments for damages caused by misinformation; defamation suits are standard practice when dealing with misinformation-causing vectors. In India, the Prohibition of Fake News on Social Media Bill, 2023 is in the pipeline and would establish a regulatory body for fake news on social media platforms.
- Reputational Consequences: Platforms employ a trust model where the user trusts it and its content. If a user loses trust in the platform because of misinformation, it can reduce engagement. This might even lead to negative coverage that affects the public opinion of the brand, its value and viability in the long run.
- Financial Consequences: Businesses that engage with the platform may end their engagement with platforms accused of misinformation, which can lead to a revenue drop. This can also have major consequences affecting the long-term financial health of the platform, such as a decline in stock prices.
- Operational Consequences: To counter the scrutiny from regulators, the platform might need to engage in stricter content moderation policies or other resource-intensive tasks, increasing operational costs for the platforms.
- Market Position Loss: If the reliability of a platform is under question, users can migrate to other platforms, leading to a loss of market share in favour of those that manage misinformation more effectively.
- Freedom of Expression vs. Censorship Debate: A balance must be struck between freedom of expression and the prevention of misinformation. Stricter content moderation can expose a platform to accusations of censorship if users feel their opinions are being unfairly suppressed.
- Ethical and Moral Responsibilities: Accountability for platforms extends to moral accountability, as the content they host affects different spheres of users' lives, such as public health and democracy. Misinformation can cause real-world harm, for example through health misinformation or incitement to violence, meaning platforms bear a social responsibility too.
Misinformation has turned into a global issue and because of this, digital platforms need to be vigilant while they navigate the varying legal, cultural and social expectations across different jurisdictions. Efforts to create standardised practices and policies have been complicated by the diversity of approaches, leading platforms to adopt flexible strategies for managing misinformation that align with global and local standards.
Addressing the Consequences
These consequences can be addressed by undertaking the following measures:
- The implementation of a more robust content moderation system by the platforms using a combination of AI and human oversight for the identification and removal of misinformation in an effective manner.
- Enhancing the transparency in platform policies for content moderation and decision-making would build user trust and reduce the backlash associated with perceived censorship.
- Collaborations with fact checkers in the form of partnerships to help verify the accuracy of content and reduce the spread of misinformation.
- Engage with regulators proactively to stay ahead of legal and regulatory requirements and avoid punitive actions.
- Platforms should invest in media literacy initiatives and help users critically evaluate the content available to them.
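The first two measures above, AI detection combined with human oversight and transparent decision-making, are commonly implemented as a triage pipeline: content scored by a model is auto-actioned only at high confidence, while mid-range scores are routed to human reviewers. A minimal sketch under assumed thresholds; the scores and cutoffs are illustrative, not any platform's actual policy:

```python
def triage(misinfo_score: float,
           auto_remove_at: float = 0.95,
           review_at: float = 0.60) -> str:
    """Route content based on a model's misinformation score (0.0-1.0).
    Thresholds are hypothetical values chosen for illustration."""
    if misinfo_score >= auto_remove_at:
        return "auto-remove"   # high confidence: act automatically
    if misinfo_score >= review_at:
        return "human-review"  # uncertain: escalate to moderators
    return "allow"             # low risk: leave the content up

print(triage(0.97))  # auto-remove
print(triage(0.75))  # human-review
print(triage(0.10))  # allow
```

Keeping the uncertain middle band for human review is what balances moderation accuracy against the censorship concerns discussed earlier: automation handles only the clearest cases.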
Final Takeaways
The proliferation of misinformation on digital platforms presents significant challenges across legal, reputational, financial, and operational functions for all stakeholders. This creates a critical need to balance the interlinked but seemingly competing priorities of preventing misinformation and upholding freedom of expression. Platforms must invest in robust content moderation systems with built-in transparency, collaborate with fact-checkers, and support media literacy efforts to mitigate the adverse effects of misinformation. In addition, adapting to diverse international standards is essential to maintaining their global presence and societal trust.
References
- https://pirg.org/edfund/articles/misinformation-on-social-media/
- https://www.mdpi.com/2076-0760/12/12/674
- https://scroll.in/article/1057626/israel-hamas-war-misinformation-is-being-spread-across-social-media-with-real-world-consequences
- https://www.who.int/europe/news/item/01-09-2022-infodemics-and-misinformation-negatively-affect-people-s-health-behaviours--new-who-review-finds