#FactCheck - AI-Generated Image Falsely Claims Mukesh and Nita Ambani Gifted Luxury Car to Suryakumar Yadav
Executive Summary
A picture circulating on social media allegedly shows Reliance Industries chairman Mukesh Ambani and Nita Ambani presenting a luxury car to India’s T20 team captain Suryakumar Yadav. The image is being widely shared with the claim that the Ambani family gifted the cricketer a luxury car in recognition of his outstanding performance. However, research conducted by CyberPeace found the viral claim to be false: the image being circulated online is not authentic but was generated using artificial intelligence (AI).
Claim
On February 8, 2025, a Facebook user shared the viral image claiming that Mukesh Ambani and Nita Ambani gifted a luxury car to Suryakumar Yadav following his brilliant innings. The post has been widely circulated across social media platforms. In another instance, a user shared a collage in which one image shows Suryakumar Yadav receiving an award, while another depicts him with Nita Ambani, further amplifying the claim.
- https://www.facebook.com/61559815349585/posts/122207061746327178/?rdid=0MukeT6c7WK1uB8m#
- https://archive.ph/wip/UH9Xh

Fact Check:
Upon closely examining the viral image, certain visual inconsistencies raised suspicion that it might be AI-generated. To verify its authenticity, the image was analysed using the AI detection tool Hive Moderation, which indicated a 99 percent probability that the image was AI-generated.

In the next step of the research, the image was also analysed using another AI detection tool, Sightengine, which found a 98 percent likelihood that the image was created using artificial intelligence.
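The two-tool check described above can be sketched as a simple decision rule. This is an illustrative sketch only: the score values mirror the figures reported here, but the aggregation rule (flag an image when every detector's score clears a threshold) is a hypothetical choice, not a method prescribed by either tool.

```python
# Illustrative sketch: combining AI-detection scores from multiple tools.
# The scores mirror the figures in this report; the aggregation rule
# (flag only when all detectors exceed the threshold) is hypothetical.

def is_likely_ai_generated(scores: dict[str, float], threshold: float = 0.9) -> bool:
    """Flag an image as likely AI-generated when all detectors agree."""
    return all(score >= threshold for score in scores.values())

detector_scores = {
    "Hive Moderation": 0.99,  # 99 percent probability reported by the tool
    "Sightengine": 0.98,      # 98 percent likelihood reported by the tool
}

print(is_likely_ai_generated(detector_scores))  # True: both detectors exceed 0.9
```

Requiring agreement between independent detectors, rather than relying on a single score, reduces the chance of a false positive from any one tool.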

Conclusion
The research clearly establishes that the viral image claiming Mukesh Ambani and Nita Ambani gifted a luxury car to Suryakumar Yadav is misleading. The picture is not real and has been generated using AI.

Introduction
The Digital Personal Data Protection (DPDP) Act, 2023 of India marks a significant transition in privacy legislation for the age of digital data. A key element of this new law is a requirement for organisations to have appropriate, user-friendly consent mechanisms in place for their customers so that the collection, use or removal of an individual's personal data occurs in a clear and compliant manner. To put this requirement into practice, the Ministry of Electronics and Information Technology (MeitY) issued a comprehensive Business Requirements Document (BRD) in June 2025 to guide organisations, as well as Consent Managers, on how to create a Consent Management System (CMS). This document establishes the technical and functional framework through which organisations and individuals (Data Principals) will exercise control over the way their data is gathered, used and removed.
Understanding the BRD and Its Purpose
The BRD is an advisory guide created as part of the "Code for Consent" programme run by MeitY in India. Its purpose is to guide startups, digital platforms and other enterprises in building a technology system that supports the management of user consent as required by the DPDP Act. Although the BRD does not carry legal weight, it lays out a clear path for organisations to create their own consent mechanisms using best practices that align with the DPDP Act's principles of transparency, accountability and purpose limitation.
The goal is threefold:
- Enable complete consent lifecycle management from collection to withdrawal.
- Empower individuals to manage their consents actively and transparently.
- Support data fiduciaries and processors with an interoperable system that ensures compliance.
Key Components of the Consent Management System
The BRD proposes the development of a modular Consent Management System (CMS) that provides users with secure APIs and user-friendly interfaces. This system will allow for a variety of features and modules, including:
- Consent Lifecycle Management – consent should be specific, informed and tied to an explicit purpose. The CMS will manage the collection, validation, renewal, updates and withdrawal of consent. Each transaction of consent will create a tamper-proof “consent artifact,” which will include the timestamp of creation as well as an ID identifying the purpose for which it was given.
- User Dashboard – A user will be able to view the status of their active, expired or withdrawn consents and revoke access at any time via a multilingual, user-friendly interface. This would make the system accessible to people from different regions and cultures.
- Notification Engine – The CMS will automatically notify users, fiduciaries and processors of any action taken with respect to consent, in order to ensure real-time updates and accountability.
- Grievance Redress Mechanism – The CMS will include a complaints mechanism that allows users to submit complaints related to the misuse of consent or the denial of their rights. This will enable tracking of the complaint resolution status, and will allow for escalation if necessary.
- Audit and Logging – As part of the CMS's internal controls for compliance and regulatory purposes, the CMS must maintain an immutable record of every instance of consent for auditing and regulatory review. The records must be encrypted, time-stamped, and linked permanently to a user and purpose ID.
- Cookie Consent Management – A separate module will enable users to manage cookie consent for websites separately from any other consents.
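The "consent artifact" described above can be illustrated with a minimal sketch. The field names, statuses and the use of a SHA-256 digest for tamper evidence are assumptions for illustration; the BRD does not prescribe this exact structure.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentArtifact:
    """Minimal consent artifact: user, purpose, timestamp, integrity digest.

    Field names and statuses are illustrative assumptions, not BRD-specified.
    """
    user_id: str
    purpose_id: str
    status: str  # e.g. "granted" or "withdrawn"
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def digest(self) -> str:
        # Hashing the canonical JSON form gives a tamper-evidence check:
        # any change to the stored record changes the digest.
        payload = json.dumps(
            {"user_id": self.user_id, "purpose_id": self.purpose_id,
             "status": self.status, "created_at": self.created_at},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

artifact = ConsentArtifact("user-123", "purpose-marketing", "granted")
print(artifact.digest())  # 64-character hex digest; recompute later to verify
```

Storing the digest alongside the record lets an auditor recompute it later and detect any alteration to the consent entry, which is the property the BRD's "tamper-proof" language points at.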
Roles and Responsibilities
The BRD identifies the various stakeholders involved and their associated responsibilities.
- Data Principals (Users): The user has full authority to give, withhold, amend, or revoke their consent for the use of their personal data, at any time.
- Data Fiduciaries (Companies): Companies (the fiduciaries) must collect the data principals' consent for each specific purpose and must only begin processing a data principal's personal data after validating that consent through the CMS. Companies must also provide data principals with any required information or notifications, as well as a means to resolve their complaints.
- Data Processors: Data Processors must strictly adhere to the consent stated in the CMS, and Data Processors may only process personal data on behalf of the Data Fiduciary.
- Consent Managers: The Consent Managers are independent entities that are registered with the Data Protection Board. They are responsible for administering the CMS, allowing users to manage their consent across different platforms.
This layered structure ensures transparency and shared responsibility for the consent ecosystem.
Technical Specifications and Security
The BRD sets out technical requirements that a CMS must meet to remain compliant with the DPDP Act:
- End-to-End Encryption: All exchanges of data with users must be encrypted in transit using TLS 1.3 at a minimum.
- API-First Approach: APIs will be used to validate, update and withdraw consent in a secure manner, enabling integration with external systems.
- Interoperability/Accessibility: The CMS must support multiple languages (e.g. Hindi, Tamil) and be usable across different mobile devices and by users of varying abilities.
- Data Retention Policy: The CMS should automatically delete consent data once consent has expired or been withdrawn, in order to maintain compliance with data retention limits.
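The TLS 1.3 minimum can be enforced directly in code. A minimal client-side sketch using Python's standard `ssl` module is shown below; server-side setup is analogous, and certificate handling would be deployment-specific.

```python
import ssl

# Client-side context that refuses any protocol version below TLS 1.3,
# matching the BRD's minimum transport-security requirement.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

print(context.minimum_version)  # TLSVersion.TLSv1_3
```

A connection made with this context will fail the handshake against a peer that only offers TLS 1.2 or lower, rather than silently downgrading.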
Legal Relevance and Timelines
While the BRD itself is not enforceable, it is directly aligned with the upcoming enforcement of the DPDP Act, 2023. The Act was passed in August 2023 but is expected to come into effect in stages, once officially notified by the central government. Draft implementation rules, including those defining the role of Consent Managers, were released for public consultation in early 2025.
For businesses, the BRD serves as an early compliance tool—offering both a conceptual roadmap and technical framework to prepare before the law is enforced. Legal experts have described it as a critical resource for aligning data governance systems with emerging regulatory expectations.
Implications for Businesses
Organizations that collect and process user data will be required to overhaul their consent workflows:
- No blanket consents: Every data processing activity must have explicit, separate consent.
- Granular audit logs: Companies must maintain tamper-proof logs for every consent action.
- Integration readiness: Enterprises need to integrate their platforms with third-party or in-house CMS platforms via the specified APIs.
- Grievance redress and user support: Systems must be in place to handle complaints and withdrawal requests in a timely, verifiable manner.
Failing to comply once the DPDP Act is in force may expose companies to penalties, reputational damage, and potential regulatory action.
Conclusion
India's BRD on Consent Management is a forward-looking initiative that lays out the technological framework for an essential component of the DPDP Act: user consent. Although not a legally binding document, it gives companies the detail and discipline they need to prepare. As data protection grows in importance, building consent mechanisms around security, transparency and the needs of the user is no longer just a regulatory requirement but a prerequisite for earning trust. Now is the time for businesses to establish or implement CMS solutions that support this objective and be better equipped for the future of data governance in India.
References
- https://d38ibwa0xdgwxx.cloudfront.net/whatsnew-docs/8d5409f5-d26c-4697-b10e-5f6fb2d583ef.pdf
- https://ssrana.in/articles/ministry-releases-business-requirement-document-for-consent-management-under-the-dpdp-act-2023/
- https://dpo-india.com/Blogs/consent-dpdpa/
- https://corporate.cyrilamarchandblogs.com/2025/06/the-ghost-in-the-machine-the-recent-business-requirement-document-on-consent/
- https://www.mondaq.com/india/privacy-protection/1660964/analysis-of-the-business-requirement-document-for-consent-management-system

Executive Summary:
Recently, our team encountered a post on X (formerly Twitter) claiming to show Chandra Arya, a Member of Parliament in Canada, speaking in Kannada, with the video surfacing after he filed his nomination for the much-coveted position of Prime Minister of Canada. The video has been widely circulated and discussed online. In this report, we examine the legitimacy of the claim by analysing the video's content and timing and verifying information from reliable sources.

Claim:
The viral video claims Chandra Arya spoke Kannada after filing his nomination for the Canadian Prime Minister position in 2025, after the resignation of Justin Trudeau.

Fact Check:
Upon receiving the video, we performed a reverse image search on key frames extracted from it and found that the video has no connection to any nomination for the Canadian Prime Minister position. Instead, it is an old video of his speech in the Canadian Parliament in 2022. Additionally, a post from Mr. Arya's own X (Twitter) handle, published at 12:19 AM on May 20, 2022, clarifies that the speech has no link with any PM candidacy.
Further, our research led us to a YouTube video posted on the verified channel of Hindustan Times, dated 20th May 2022, with the caption -
“India-born Canadian MP Chandra Arya is winning hearts online after a video of his speech at the Canadian Parliament in Kannada went viral. Arya delivered a speech in his mother tongue - Kannada. Arya, who represents the electoral district of Nepean, Ontario, in the House of Commons, the lower house of Canada, tweeted a video of his address, saying Kannada is a beautiful language spoken by about five crore people. He said that this is the first time when Kannada is spoken in any Parliament outside India. Netizens including politicians have lauded Arya for the video.”

Conclusion:
The viral video claiming that Chandra Arya spoke in Kannada after filing his nomination for the Canadian Prime Minister position in 2025 is completely false. The video, dated May 2022, shows Chandra Arya delivering an address in Kannada in the Canadian Parliament, unrelated to any political nominations or events concerning the Prime Minister's post. This incident highlights the need for thorough fact-checking and verifying information from credible sources before sharing.
- Claim: Misleading Claim About Chandra Arya’s PM Candidacy
- Claimed on: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in Inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against the same. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the method is to help the mind develop resistance to influence that it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harm agent, inoculation theory seeks to teach people to tell fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure their reach to a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time. Continuous reinforcement and reminders may be required to ensure that individuals retain the skills and information they gained from the Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions are also flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which means that at least some netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation after it has already spread extensively. This reactionary method may be less successful than proactive strategies such as Prebunking from the perspective of total harm done. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of false or misleading information. Debunking may require repeated exposure to fact-checks to prevent erroneous beliefs from taking hold, implying that a single Debunking effort may not be enough to rectify misinformation. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment. This constraint may cause certain misinformation to go unchecked, potentially leading to unexpected effects. Misinformation on social media can spread and go viral faster than Debunking articles can circulate, creating a situation in which falsehoods spread like a virus while the corrective antidote struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across platforms, empowering users to recognise manipulative messaging through Prebunking and to verify the accuracy of circulating claims through Debunking interventions.
- Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, which is a competence-oriented approach to Prebunking misinformation. This can be effective in helping people immunise the receiver against subsequent exposures. It can empower people to build competencies to detect misinformation through gamified interventions.
- Promotion of Prebunking and Debunking Campaigns through Algorithm Mechanisms: Tech/social media platforms may promote and guarantee that algorithms prioritise the distribution of Prebunking materials to users, boosting educational content that strengthens resistance to misinformation. Platform operators should incorporate algorithms that prioritise the visibility of Debunking content in order to combat the spread of erroneous information and deliver proper corrections; this can eventually address and aid in Prebunking and Debunking methods to reach a bigger or targeted audience.
- User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that allow people to access Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
- Partnership with Fact-Checking/Expert Organizations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives/campaigns by collaborating with fact-checking/expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint initiatives.
Conclusion
The threat of online misinformation grows with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to false or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, alongside joint initiatives by tech/social media platforms and expert organisations, can ultimately help in fighting the rising tide of online misinformation and establishing a resilient online information landscape.
References
- https://mark-hurlstone.github.io/THKE.22.BJP.pdf
- https://futurefreespeech.org/wp-content/uploads/2024/01/Empowering-Audiences-Through-%E2%80%98Prebunking-Michael-Bang-Petersen-Background-Report_formatted.pdf
- https://newsreel.pte.hu/news/unprecedented_challenges_Debunking_disinformation
- https://misinforeview.hks.harvard.edu/article/global-vaccination-badnews/