#FactCheck - AI Artwork Misattributed: Mahendra Singh Dhoni Sand Sculptures Exposed as AI-Generated
Executive Summary:
A recent claim circulating on social media, that a child created sand sculptures of cricket legend Mahendra Singh Dhoni, has been proven false by the CyberPeace Research Team. The team discovered that the images were actually produced using an AI tool. Unusual details, such as extra fingers and other unnatural characteristics in the sculptures, led the Research Team to suspect artificial creation, and this suspicion was substantiated by AI detection tools. The incident underscores the need to fact-check information before posting, as misinformation can quickly go viral on social media. Everyone is advised to assess content carefully to stop the spread of false information.

Claims:
The claim is that the photographs published on social media show sand sculptures of cricketer Mahendra Singh Dhoni made by a child.

Fact Check:
Upon receiving the posts, we carefully examined the images. The collage of four pictures contains many anomalies that are clear signs of AI-generated imagery.

In the first image, the left hand of the sand sculpture has six fingers, and in the word INDIA the letter ‘A’ is misaligned, i.e. not on the same line as the other letters. In the second image, one of the boy’s fingers is missing, while the sand sculpture has four fingers on its front foot and three legs. In the third image, only part of the boy’s slipper is visible where the whole slipper should be, and in the fourth image the boy’s hand does not look like a hand at all. These are some of the major discrepancies clearly visible in the images.
We then checked the images using an AI image detection tool named ‘Hive’, which detected the image as 100.0% AI-generated.

We then ran the image through another detector, ContentAtScale AI image detection, which found it to be 98% AI-generated.
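For readers who want to automate a similar first-pass check, the sketch below shows how an image might be submitted to an AI-image-detection service over REST from Python. The endpoint URL, request fields and response keys are hypothetical placeholders rather than the actual Hive or ContentAtScale APIs; each vendor's documentation should be consulted for the real interface.

```python
# Minimal sketch: submit an image to a hypothetical AI-image-detection
# REST API and read back an "AI-generated" likelihood score.
# The endpoint, field names and response keys are illustrative
# placeholders, NOT the real Hive or ContentAtScale APIs.
import requests

API_URL = "https://api.example-detector.com/v1/detect"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                 # hypothetical credential

def check_image(path: str) -> float:
    """Upload an image file and return the reported AI-generated score (0-100)."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Assumed response shape: {"ai_generated_score": 98.0}
    return resp.json()["ai_generated_score"]

if __name__ == "__main__":
    score = check_image("collage_frame_1.jpg")  # hypothetical file name
    print(f"AI-generated likelihood: {score:.1f}%")
```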

From this, we concluded that the images are AI-generated and have no connection with the claim made in the viral social media posts. We have previously debunked AI-generated sand-sculpture artwork of Indian cricketer Virat Kohli, which showed the same types of anomalies seen in this case.
Conclusion:
Taking into consideration the distortions spotted in the images and the results of the AI detection tools, it can be concluded that the claim that the pictures show a child's sand sculptures of cricketer Mahendra Singh Dhoni is false. The pictures were created with Artificial Intelligence. It is important to check and authenticate content before posting it to social media.
- Claim: The set of pictures shared on social media shows a child's sand sculptures of cricket player Mahendra Singh Dhoni.
- Claimed on: X (formerly known as Twitter), Instagram, Facebook, YouTube
- Fact Check: Fake & Misleading
Related Blogs
Introduction
The spread of misinformation online has become a significant concern, with far-reaching social, political, economic and personal implications. The degree of vulnerability to misinformation differs from person to person, dependent on psychological elements such as personality traits, familial background and digital literacy combined with contextual factors like information source, repetition, emotional content and topic. How to reduce misinformation susceptibility in real-world environments where misinformation is regularly consumed on social media remains an open question. Inoculation theory has been proposed as a way to reduce susceptibility to misinformation by informing people about how they might be misinformed. Psychological inoculation campaigns on social media are effective at improving misinformation resilience at scale.
Prebunking has gained prominence as a means to preemptively build resilience against anticipated exposure to misinformation. This approach, grounded in Inoculation Theory, allows people to analyse and avoid manipulation without prior knowledge of specific misleading content by helping them build generalised resilience. We may draw a parallel here with broad-spectrum antibiotics that can be used to fight infections and protect the body against symptoms before one is able to identify the particular pathogen at play.
Inoculation Theory and Prebunking
Inoculation theory is a promising approach to combat misinformation in the digital age. It involves exposing individuals to weakened forms of misinformation before encountering the actual false information. This helps develop resistance and critical thinking skills to identify and counter deceptive content.
Inoculation theory has been established as a robust framework for countering unwanted persuasion and can be applied within the modern context of online misinformation:
- Preemptive Inoculation: Preemptive inoculation entails exposing people to weakened forms of misinformation before they encounter genuine false information. By being exposed to typical misinformation methods and strategies, individuals can build resistance and critical thinking abilities.
- Technique/Logic-based Inoculation: Individuals can educate themselves about the manipulative strategies typically used in online misinformation, such as emotionally manipulative language, conspiratorial reasoning, trolling and logical fallacies. Learning to recognise these tactics as indicators of misinformation is an important first step towards rejecting it. Through logical reasoning, individuals can see such tactics for what they are: attempts to distort the facts or spread misleading information. People equipped to discern weak arguments and misleading methods can properly evaluate the reliability and validity of the information they encounter on the Internet.
- Educational Campaigns: Educational initiatives that increase awareness about misinformation, its consequences, and the tactics used to manipulate information can be useful inoculation tools. These programmes equip individuals with the knowledge and resources they need to distinguish between reputable and fraudulent sources, allowing them to navigate the online information landscape more successfully.
- Interactive Games and Simulations: Online games and simulations, such as ‘Bad News,’ have been created as interactive aids to protect people from misinformation methods. These games immerse users in a virtual world where they may learn about the creation and spread of misinformation, increasing their awareness and critical thinking abilities.
- Joint Efforts: Combining inoculation tactics with other anti-misinformation initiatives, such as accuracy primes, resilience-building on social media platforms, and media literacy programmes, can improve the overall efficacy of efforts to combat misinformation. Expert organisations and individuals can build a stronger defence against the spread of misleading information by deploying multiple interventions simultaneously.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
Implementation of the Inoculation Theory on social media platforms can be seen as an effective strategy point for building resilience among users and combating misinformation. Tech/social media platforms can develop interactive and engaging content in the form of educational prebunking videos, short animations, infographics, tip sheets, and misinformation simulations. These techniques can be deployed through online games, collaborations with influencers and trusted sources that help design and deploy targeted campaigns whilst also educating netizens about the usefulness of Inoculation Theory so that they can practice critical thinking.
The approach will inspire self-monitoring amongst netizens so that people consume information mindfully. It is a powerful tool in the battle against misinformation because it not only seeks to prevent harm before it occurs, but also actively empowers the target audience. In other words, Inoculation Theory helps build people up, and takes them on a journey of transformation from ‘potential victim’ to ‘warrior’ in the battle against misinformation. Through awareness-building, this approach makes people more aware of their own vulnerabilities and attempts to exploit them so that they can be on the lookout while they read, watch, share and believe the content they receive online.
Widespread adoption of Inoculation Theory may well inspire systemic and technological change that goes beyond individual empowerment: these interventions on social media platforms can be utilized to advance digital tools and algorithms so that such interventions and their impact are amplified. Additionally, social media platforms can explore personalized inoculation strategies, and customized inoculation approaches for different audiences so as to be able to better serve more people. One such elegant solution for social media platforms can be to develop a dedicated prebunking strategy that identifies and targets specific themes and topics that could be potential vectors for misinformation and disinformation. This will come in handy, especially during sensitive and special times such as the ongoing elections where tools and strategies for ‘Election Prebunks’ could be transformational.
Conclusion
Applying Inoculation Theory in the modern context of misinformation can be an effective method of establishing resilience against misinformation, developing critical thinking and empowering individuals to discern fact from fiction in the digital information landscape. The need of the hour is to prioritize extensive awareness campaigns that encourage critical thinking, educate people about manipulation tactics, and pre-emptively counter false narratives. Inoculation strategies can help people build mental armour and mental defences against the malicious content and malintent they may encounter in the future, by learning about them in advance. As they say, forewarned is forearmed.
References
- https://www.science.org/doi/10.1126/sciadv.abo6254
- https://stratcomcoe.org/publications/download/Inoculation-theory-and-Misinformation-FINAL-digital-ISBN-ebbe8.pdf

Introduction
AI has revolutionized the way we look at emerging technologies. AI is capable of performing complex tasks in far less time. However, the potential misuse of AI has led to an increase in cybercrime. The rapid expansion of generative AI tools has also fuelled cyber scams such as deepfakes, voice cloning, cyberattacks targeting critical infrastructure and other organisations, and threats to data protection and privacy. AI can produce highly realistic videos, images and voices, which cyber attackers misuse to commit cybercrimes.
Technologies such as generative AI (Artificial Intelligence), deepfakes and machine learning are advancing rapidly. They offer convenience in performing several tasks and can assist individuals and business entities alike. On the other hand, since these technologies are easily accessible, cybercriminals leverage AI tools and technologies for malicious activities and for committing various cyber frauds. Through such misuse of advanced technologies as AI, deepfakes and voice clones, new cyber threats have emerged.
What is Deepfake?
Deepfake is an AI-based technology capable of creating images or videos that look realistic but are in fact generated by machine algorithms. Since deepfake technology is easily accessible, fraudsters misuse it to commit various cybercrimes, deceiving and scamming people with fake images or videos that appear genuine. Cybercriminals use the technology to manipulate audio and video content that looks very realistic but is, in actuality, fabricated. Voice cloning is also part of the deepfake family: audio can be deepfaked to produce a voice that closely resembles a real person's but is entirely synthetic.
How Deepfake Can Harm Organizations or Enterprises?
- Reputation: Deepfakes can seriously damage an organisation's reputation. Fake representations or interactions between an employee and a user, for example a video misrepresenting the CEO online, could undermine an enterprise's credibility, resulting in user attrition and financial losses. To protect against such incidents, organisations must thoroughly monitor online mentions and keep tabs on what is being said or posted about the brand. Deepfake-created content can also be misused to impersonate leaders, financial officers and other officials of the organisation.
- Misinformation: Deepfake technology can be misused to spread misinformation or misrepresentations about the organisation.
- Deepfake fraud calls misrepresenting the organisation: There have been incidents where bad actors pretend to be from legitimate organisations and seek personal information, such as helpline fraudsters, fake representatives from hotel booking departments, and fake loan providers. In such cases, bad actors use voice clones or deepfake video calls to pass themselves off as belonging to legitimate organisations while, in actuality, deceiving people.
How can organizations combat AI-driven cybercrimes such as deepfakes?
- Cybersecurity strategy: Organisations need to keep in place a wide range of cybersecurity strategies or use advanced tools to combat the evolving disinformation and misrepresentation caused by deepfake technology. Cybersecurity tools can be utilised to detect deepfakes.
- Social media monitoring: Social media monitoring can be performed to detect any unusual activity. Organisations can select and implement relevant tools and technologies to detect deepfakes and demonstrate media provenance, along with real-time verification capabilities and procedures. Reverse image searches, such as TinEye, Google Image Search and Bing Visual Search, can be extremely useful if the media is a composition of images (see the sketch after this list).
- Employee Training: Employee education on cybersecurity will also play a significant role in strengthening the overall cybersecurity posture of the organisation.
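To complement manual reverse-image searches, a monitoring team can pre-compute perceptual hashes of known brand assets and compare them against suspect media; near-duplicates keep a small hash distance even after resizing or re-encoding. Below is a minimal sketch using the open-source Python libraries Pillow and ImageHash; the file names and the distance threshold are illustrative assumptions.

```python
# Minimal sketch: flag suspect media that is visually close to a known
# original using perceptual hashing (pHash). File names and the
# threshold are illustrative assumptions; tune them on your own data.
from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

THRESHOLD = 10  # assumed cut-off for "perceptually similar" (0 = identical)

def looks_like(original_path: str, suspect_path: str) -> bool:
    """Return True if the suspect image is perceptually close to the original."""
    original = imagehash.phash(Image.open(original_path))
    suspect = imagehash.phash(Image.open(suspect_path))
    distance = original - suspect  # Hamming distance between 64-bit hashes
    print(f"Hamming distance: {distance}")
    return distance <= THRESHOLD

if __name__ == "__main__":
    # Hypothetical files: a known brand asset vs. media pulled from a post.
    if looks_like("official_logo.png", "downloaded_post.jpg"):
        print("Suspect media closely matches a known brand asset.")
```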
Conclusion
There have been incidents where AI-driven tools and technologies were misused by cybercriminals and bad actors, including synthetic videos developed with AI. Generative AI has gained significant popularity for capabilities that produce synthetic media, but there are concerns about its misuse in disinformation operations designed to influence the public and spread false information. In particular, the synthetic media threats that organisations most often face include undermining of the brand, financially motivated fraud, threats to the security or integrity of the organisation itself, and impersonation of the brand's leaders for financial gain.
Synthetic media attacks target organisations with the intention of defrauding them for financial gain; examples include fake personal profiles on social networking sites and deceptive deepfake calls. Organisations need a proper cybersecurity strategy in place to combat such evolving threats. Monitoring and detection should be performed by the organisation, and employee training on cybersecurity will also play a crucial role in dealing effectively with the evolving threats posed by the misuse of AI-driven technologies.
References:
- https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
- https://www.securitymagazine.com/articles/98419-how-to-mitigate-the-threat-of-deepfakes-to-enterprise-organizations
Introduction
In the sprawling and ever-evolving landscape of cybercrime, phishing links, phoney emails and dubious investment offers are no longer the only tools used by scammers. Cybercriminals are becoming skilled at taking advantage of commonplace digital behaviours, undermining confidence, and turning popular features of our most essential apps into weapons. A fast-expanding international threat has been revealed by the most recent advisory on “WhatsApp account renting” from the National Cybercrime Threat Analytics Unit (NCTAU) of the Indian Cybercrime Coordination Centre (I4C). This scam uses QR codes to trick users into connecting their WhatsApp accounts to fraudulent sites under the guise of a “quick income” opportunity. What initially appears innocuous turns into a tool for thieves to take control of accounts and use them for illicit purposes.
The Global Rise of Cyber Mule Networks
Initially, the word “mule” in cybercrime networks referred to a bank account used, knowingly or often unknowingly, to transfer or “launder” money obtained from fraud and illegal activities. Given the evolving nature of this cybercrime, cyber mules today can be understood as individuals who knowingly or unknowingly allow their digital identities, devices, or bank accounts to be used for illegal activity.
Various cybersecurity companies, as well as Europol and Interpol, have frequently cautioned that criminals are increasingly recruiting digital mules, frequently under the guise of:
- Work-from-home Offers
- Streams of passive income
- Monetisation of social media
- Roles for verification assistants
- Apps that earn commissions
Earlier versions of the scam involved money transfers through personal bank accounts. The trend is reported to be changing: criminals now want your digital identity rather than just your money.
Scammers frequently “rent” victims’ Facebook, LINE, Telegram and WeChat accounts in parts of Southeast Asia and Africa in order to conduct impersonation frauds or assist with criminal operations. The WhatsApp variant now making its way to India is a logical progression, built on the widely used WhatsApp Web linked-device capability.
How the WhatsApp Account Renting Scam Works
I4C’s advisory dated 15th October 2025 highlights a sophisticated yet psychologically simple scheme that exploits trust, curiosity, and the illusion of easy income. The scam’s lifecycle is as follows:
1. The Hook: “Automatically Earn Passive Income”
In polished, professional-looking Instagram and Facebook ads, threat actors claim users can earn daily rewards by connecting their WhatsApp accounts to a new “partner platform”.
This strategy imitates international scam factories in Cambodia and Myanmar, where victims are lured into investment schemes or bogus tasks by social media advertisements.
2. The Redirect: Rogue APKs & Fake Websites
When victims click on the advertisement, they are sent to:
- Fake earnings dashboards
- Untrustworthy websites that imitate authentic financial interfaces
- Instructions for installing Android APKs from sources other than the Play Store
These APKs often carry spyware or remote-access malware.
3. The Trap: Scanning a QR Code
The user is asked to scan a QR code through WhatsApp’s “Linked Devices” feature, which is normally used for WhatsApp Web.
Without ever touching the victim’s phone, the con artist obtains complete session access to their WhatsApp account as soon as the QR is scanned.
Threat actors are able to:
- Transmit and receive messages
- Get access to contact lists
- Participate in or start groups
- Assume the victim’s identity
- Conduct frauds using their identities
4. The Illusion: A Multi-Level Commission Structure
A pyramid-style earnings model is displayed to maintain credibility:
- 10% of direct invites’ earnings
- 5% of secondary invites’ earnings
- 2% of tertiary invites’ earnings
These figures are designed to encourage victims to recruit more users, increasing the number of compromised WhatsApp accounts.
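To see how quickly this structure scales in the scammers' favour, consider a small worked calculation using the advertised rates. The recruit counts and the per-account payout below are hypothetical figures chosen purely for illustration.

```python
# Worked example of the advertised three-tier commission structure:
# 10% of direct invites' earnings, 5% of secondary, 2% of tertiary.
# Recruit counts and per-account payout are hypothetical illustrations.
RATES = {"direct": 0.10, "secondary": 0.05, "tertiary": 0.02}
PAYOUT_PER_ACCOUNT = 100.0  # assumed "earnings" credited per linked account

# Assume each victim recruits five more: 5 direct, 25 secondary, 125 tertiary.
recruits = {"direct": 5, "secondary": 25, "tertiary": 125}

commission = sum(
    RATES[tier] * recruits[tier] * PAYOUT_PER_ACCOUNT for tier in RATES
)
total_accounts = sum(recruits.values())

print(f"Compromised accounts downstream: {total_accounts}")  # 155
print(f"Promised commission: {commission:.2f}")              # 425.00
```

Under these assumed numbers, the promised payout is trivial compared with the value of 155 hijacked WhatsApp sessions to the fraud network, which is precisely why the pyramid is structured this way.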
5. The Misuse: “Mule” WhatsApp Accounts
The victim’s account becomes a digital mule once it is connected, allowing fraudsters to:
- Start UPI fraud and phishing
- Distribute harmful links
- Impersonate the victim to scam their contacts
- Participate in bulk messaging campaigns
- Recruit additional mule accounts
Precautions Issued by I4C
I4C has advised citizens to take the following precautions:
- You could face criminal charges or similar consequences if you carelessly rent out or link your WhatsApp account for money
- Installing APKs from unofficial app stores should be avoided
- Advertisements that promise automatic revenue, referral bonuses, or passive income should be avoided
- Regularly check linked devices on WhatsApp: Settings → Linked Devices
- Use WhatsApp’s Official support page to report hacked accounts or impersonation: https://www.whatsapp.com/contact/forms/1534459096974129
- Report financial fraud immediately by calling 1930 or visiting cybercrime.gov.in
CyberPeace Outlook
The WhatsApp account rental fraud is not an isolated phenomenon; rather, it is the latest mutation of a global cybercrime apparatus that feeds on social engineering, digital identity theft, and international mule networks. Its simplicity (all it takes to take over your digital life is a single QR code scan) makes it especially hazardous. I4C’s timely warning serves as an important reminder that easy money is nearly always a trap in the digital world and that, if we let our guard down, our most trusted platforms can become attack surfaces. Stay informed, and stay safe. Cyber hygiene is now a must if we are to protect our identities, data, and communities.
References
- https://www.cnbctv18.com/personal-finance/mule-account-fraud-on-the-rise-what-it-is-and-how-to-stay-safe-19662507.htm
- https://i4c.mha.gov.in/theme/resources/advisories/Mule%20Whatsapp%20V1.4.pdf