#FactCheck - AI Artwork Misattributed: Mahendra Singh Dhoni Sand Sculptures Exposed as AI-Generated
Executive Summary:
A recent claim circulating on social media, that a child created sand sculptures of cricket legend Mahendra Singh Dhoni, has been proven false by the CyberPeace Research Team. The team discovered that the images were actually produced using an AI tool. Unusual details, such as extra fingers and unnatural characteristics in the sculptures, led the Research Team to suspect artificial creation, and this suspicion was further substantiated by AI detection tools. This incident underscores the need to fact-check information before posting, as misinformation can spread rapidly on social media. Everyone is advised to carefully assess content to help stop the spread of false information.

Claims:
The claim is that the photographs published on social media show sand sculptures of cricketer Mahendra Singh Dhoni made by a child.




Fact Check:
Upon receiving the posts, we carefully examined the images. The collage of four pictures has many anomalies that are clear signs of AI-generated images.

In the first image, the left hand of the sand sculpture has six fingers, and the letter 'A' in the word INDIA is not aligned with the other letters. In the second image, one of the boy's fingers is missing, and the sculpture has four fingers on its front foot and three legs. In the third image, the boy's slipper is only partially visible, and in the fourth image the boy's hand does not look like a hand. These are some of the major discrepancies clearly visible in the images.
We then ran the images through an AI image detection tool named Hive, which detected the image as 100.0% AI-generated.

We then checked it with another detector, Content at Scale's AI image detection, which found the image to be 98% AI-generated.

From this we concluded that the image is AI-generated and has no connection with the claim made in the viral social media posts. We have previously debunked a similar AI-generated sand-sculpture artwork of Indian cricketer Virat Kohli, which showed the same types of anomalies seen in this case.
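For readers who want to reproduce this kind of check programmatically, the sketch below shows how an image could be submitted to an AI-detection service over HTTP. It is a minimal illustration only: the endpoint URL, header name, environment variable, and response field are placeholders, not the actual API schema of Hive or Content at Scale.

```python
import os
import requests

# NOTE: illustrative sketch only. The endpoint, header, and response fields
# below are hypothetical placeholders, not any vendor's real API schema.
DETECTION_ENDPOINT = "https://api.example-detector.com/v1/classify"  # hypothetical
API_KEY = os.environ.get("DETECTOR_API_KEY", "")                     # hypothetical

def check_image(path: str) -> float:
    """Send an image to a generic AI-detection endpoint and return the
    reported probability (0-100) that it is AI-generated."""
    with open(path, "rb") as fh:
        response = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": fh},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: {"ai_generated_probability": 98.0}
    return float(response.json()["ai_generated_probability"])

if __name__ == "__main__":
    score = check_image("dhoni_sand_sculpture.jpg")  # hypothetical file name
    print(f"Reported AI-generation probability: {score:.1f}%")
```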
Conclusion:
Taking into consideration the distortions spotted in the images and the results of the AI detection tools, it can be concluded that the claim that the pictures show a child's sand sculptures of cricketer Mahendra Singh Dhoni is false. The pictures were created with artificial intelligence. It is important to verify and authenticate content before posting it to social media.
- Claim: The collage of pictures shared on social media shows a child's sand sculptures of cricketer Mahendra Singh Dhoni.
- Claimed on: X (formerly known as Twitter), Instagram, Facebook, YouTube
- Fact Check: Fake & Misleading

Introduction
In the labyrinthine corridors of the digital age, where information zips across the globe with the ferocity of a tempest, the truth often finds itself ensnared in a web of deception. It is within this intricate tapestry of reality and falsehood that we find ourselves examining two distinct yet equally compelling cases of misinformation, each a testament to the pervasive challenges that beset our interconnected world.
Case 1: The Deceptive Video: Malaysian Footage Misattributed to Indian Railway Development
A misleading video claiming to showcase Indian railway construction has been debunked as footage from Malaysia's East Coast Rail Link (ECRL). Fact-checking by India TV traced the video's origin to Malaysia and revealed deceptive captions in Tamil and Hindi. The video was initially posted on Twitter on January 9, 2024, announcing the commencement of track-laying for Malaysia's East Coast Rail Link. Further investigation revealed the ECRL to be a joint venture between Malaysia and China involving the laying of tracks along Malaysia's east coast, contradicting assertions of Indian railway development. The ECRL's track-laying work, initiated in December 2023, is part of China's Belt and Road Initiative, covering 665 kilometres across states such as Kelantan, Terengganu, Pahang, and Selangor, with a completion target of 2025.
The video in question, a digital chameleon, had its origins not in the bustling landscapes of India but within the verdant bounds of Malaysia. Specifically, it was a scene captured from the East Coast Rail Link (ECRL) project, a monumental joint venture between Malaysia and China, unfurling across 665 kilometers of Malaysian terrain. This ambitious endeavor, part of the grand Belt and Road initiative, is a testament to the collaborative spirit that defines our era, with tracks stretching from Kelantan to Selangor, and a completion horizon set for the year 2025.
The unveiling of this grand project was graced by none other than Malaysia’s King Sultan Abdullah Sultan Ahmad Shah, in Pahang, underscoring the strategic alliance with China and the infrastructural significance of the ECRL. Yet, despite the clarity of its origins, the video found itself cloaked in a narrative of Indian development, a falsehood that spread like wildfire across the digital savannah.
Through the meticulous application of keyframe analysis and reverse image searches, the truth was laid bare. Reports from reputable sources such as the Associated Press and the Global Times, featuring the very same machinery, corroborated the video's true lineage. This revelation not only highlighted the ECRL's geopolitical import but also served as a clarion call for the critical role of fact-checking in an era where misinformation proliferates with reckless abandon.
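Keyframe analysis of the kind described above can be reproduced with common open-source tooling. The sketch below is a minimal illustration using OpenCV, with a hypothetical video file name and sampling interval: it saves one frame every few seconds so the stills can be uploaded to a reverse image search such as Google Lens or TinEye for provenance checks.

```python
import cv2  # pip install opencv-python

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> list[str]:
    """Save one frame every `every_n_seconds` seconds so the stills can be
    fed into a reverse image search for provenance checking."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0      # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            out_path = f"frame_{index:06d}.jpg"
            cv2.imwrite(out_path, frame)
            saved.append(out_path)
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    # 'suspect_railway_clip.mp4' is a hypothetical file name for the viral video.
    print(extract_keyframes("suspect_railway_clip.mp4"))
```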
Case 2: The Kerala Incident: Investigating Fake Narratives
Kerala Chief Minister Pinarayi Vijayan has said that 53 cases have been registered over the spreading of fake narratives on social media to incite communal sentiments following the blasts at a Christian religious gathering in October 2023. Vijayan said cases have been registered against online news portals, editors, and Malayalam television channels. The state police chief has issued directions to monitor social media to stop the spread of fake news and take appropriate action.
In a different corner of the world, the serene backdrop of Kerala was shattered by an event that would ripple through the fabric of its society. The Kalamassery blast, a tragic occurrence at a Christian religious gathering, claimed the lives of eight individuals and left over fifty wounded. In the wake of this calamity, a man named Dominic Martin surrendered, claiming responsibility for the heinous act.
Yet, as the investigation unfolded, a different kind of violence emerged—one that was waged not with explosives but with words. A barrage of fake narratives began to circulate through social media, igniting communal tensions and distorting the narrative of the incident. The Kerala Chief Minister, Pinarayi Vijayan, informed the Assembly that 53 cases had been registered across the state, targeting individuals and entities that had fanned the flames of discord through their digital utterances.
The Kerala police, vigilant guardians of truth, embarked on a digital crusade to quell the spread of these communally instigative messages. With a particular concentration of cases in Malappuram district, the authorities worked tirelessly to dismantle the network of fake profiles that propagated religious hatred. Social media platforms were directed to assist in this endeavor, revealing the IP addresses of the culprits and enabling the cyber cell divisions to take decisive action.
In the aftermath of the blasts, the Chief Minister and the state police chief issued special instructions to monitor social media platforms for content that could spark communal uproar. Cyber patrolling became the order of the day, and a 20-member probe team was constituted to investigate the incident in depth.
Conclusion
These two cases, disparate in their nature and geography, converge on a singular point: the fragility of truth in the digital age. They highlight the imperative for vigilance and the pursuit of accuracy in a world where misinformation can spread like wildfire. As we navigate this intricate cyberscape, it is imperative to be mindful of the power of fact-checking and the importance of media literacy, for they are the light that guides us through the fog of falsehoods to the shores of veracity.
These narratives are not merely stories of deception thwarted; they are a call to action, a reminder of our collective responsibility to safeguard the integrity of our shared reality. Let us, therefore, remain steadfast in our quest for the truth, for it is only through such diligence that we can hope to preserve the sanctity of our discourse and the cohesion of our societies.
References:
- https://www.indiatvnews.com/fact-check/fact-check-misleading-video-claims-malaysian-rail-project-indian-truth-ecrl-india-railway-development-pm-modi-2024-01-29-914282
- https://sahilonline.org/kalamasserry-blast-53-cases-registered-across-kerala-for-spreading-fake-news

Introduction
In the digital landscape, technologies such as generative AI (Artificial Intelligence), deepfakes, and machine learning are advancing rapidly. These technologies offer users convenience in performing many tasks and are capable of assisting individuals and business entities. Certain regulatory mechanisms have also been established for the ethical and reasonable use of such advanced technologies. However, because these technologies are easily accessible, cybercriminals leverage AI tools for malicious activities and for committing various cyber frauds, and new cyber threats have emerged from this misuse.
Deepfake Scams
Deepfake is an AI-based technology capable of producing images, videos, and audio that look and sound realistic but are actually generated by machine-learning algorithms. Because the technology is easily accessible, fraudsters misuse it to deceive and scam people: they manipulate audio and video content that appears genuine but is, in reality, fabricated.
Voice cloning
Audio can be deepfaked as well: a voice clone closely resembles a person's real voice but is in fact synthetic, created through deepfake technology. Recently, a man in Kerala fell victim to an AI-based video call on WhatsApp. He received a video call from a person claiming to be his former colleague. The scammer, using AI deepfake technology, impersonated the face of the former colleague and asked for financial help of Rs 40,000.
Uttarakhand Police issues warning on the rising trend of AI-based scams
Recently, the Uttarakhand Police's Special Task Force (STF) issued a warning acknowledging the spread of AI-based scams, such as deepfake and voice-cloning scams, targeting innocent people. The police expressed concern that several incidents have been reported in which innocent people were lured by cybercriminals. Cybercriminals exploit advanced technologies to make victims believe they are talking to close relatives or friends when, in reality, they are hearing a fake voice clone or watching a deepfake video call. In this way, cybercriminals ask for immediate financial help, which ultimately leads to financial losses for the victims of such scams.
Tamil Nadu Police Issues advisory on deepfake scams
Cybercriminals misuse deepfake technologies to deceive people and target them for financial gain. Recently, the Tamil Nadu Police Cyber Wing issued an advisory on rising deepfake scams. Fraudsters are creating highly convincing images, videos, and voice clones to defraud innocent people and make them victims of financial fraud. The advisory recommends limiting the personal data you share online and adjusting your privacy settings. It also asks people to promptly report any suspicious activity or cybercrime to the 1930 helpline or the National Cyber Crime Reporting Portal.
Best practices
- Pay attention to video quality: deepfake videos often have poor or compromised quality and unusual blurring, and they may loop or freeze unnaturally, all of which call their genuineness into question.
- Whenever you receive a request for immediate financial help, pause and verify the situation by contacting the person directly on their primary contact number.
- Be vigilant and cautious: scammers often create a sense of urgency, posing sudden emergencies and demanding financial support immediately so that the victim has no time to think before making a decision.
- Be aware of the recent scams and follow the best practices to stay protected from rising cyber frauds.
- Verify the identity of unknown callers.
- Utilise privacy settings on your social media.
- Avoid sharing voice notes with unknown users; scammers can use them as voice samples to create a clone of your voice.
- If you fall victim to such fraud, report it through the National Cyber Crime Reporting Portal (www.cybercrime.gov.in) or the 1930 toll-free helpline, which accept reports of cyber fraud, including financial crimes.
Conclusion
Cybercriminals are leveraging AI-powered technologies to commit crimes such as deepfake and voice-clone scams that lure innocent people. There is therefore a need for awareness and caution: we should stay vigilant about the growing incidence of AI-based cyber scams and follow best practices to stay protected.
References:
- https://www.the420.in/ai-voice-cloning-cyber-crime-alert-uttarakhand-police/
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/exploiting-ai-how-cybercriminals-misuse-abuse-ai-and-ml#:~:text=AI%20and%20ML%20Misuses%20and%20Abuses%20in%20the%20Future&text=Through%20the%20use%20of%20AI,and%20business%20processes%20are%20compromised.
- https://www.ndtv.com/india-news/kerala-man-loses-rs-40-000-to-ai-based-deepfake-scam-heres-what-it-is-4217841
- https://news.bharattimes.co.in/t-n-cybercrime-police-issue-advisory-on-deepfake-scams/
Introduction
According to Statista, the global artificial intelligence software market is forecast to grow to around 126 billion US dollars by 2025, with enterprise adoption having risen by 270% over the past four years. The top three verticals in the AI market are BFSI (Banking, Financial Services, and Insurance), Healthcare & Life Sciences, and Retail & E-commerce. These sectors benefit from vast data generation and a critical need for advanced analytics: AI is used for fraud detection, customer service, and risk management in BFSI; diagnostics and personalised treatment plans in healthcare; and marketing and inventory management in retail.
The Chairperson of the Competition Commission of India (CCI), Smt. Ravneet Kaur, raised a concern that Artificial Intelligence has the potential to aid cartelisation by automating collusive behaviour through predictive algorithms. She explained that the mere use of algorithms is not anti-competitive, but if algorithms are manipulated, that is a valid concern for competition in markets.
This blog focuses on how policymakers can balance fostering innovation and ensuring fair competition in an AI-driven economy.
What is the Risk Created by AI-driven Collusion?
AI systems rely on predictive algorithms, which could aid cartelisation by automating collusive behaviour. AI-driven collusion could occur through:
- The use of predictive analytics to coordinate pricing strategies among competitors.
- The lack of human oversight in algorithm-driven decision-making, which can lead to tacit collusion (competitors coordinating their actions without explicitly communicating or agreeing to do so), as illustrated in the sketch below.
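To see why regulators worry about this, consider the purely illustrative simulation below. Two hypothetical pricing bots each follow a simple rule, matching the rival's price when it is higher and undercutting it only marginally when it is lower; with no communication at all, both prices drift toward a supra-competitive level. All figures and the pricing rule are invented for illustration and do not model any real market or algorithm.

```python
# Purely illustrative: two simple pricing bots; no real market model or data.
COST = 10.0             # hypothetical unit cost
MONOPOLY_PRICE = 30.0   # hypothetical price a cartel would charge

def next_price(own_price: float, rival_price: float) -> float:
    """Stylised stand-in for a predictive pricing algorithm reacting to the
    rival's observed price: follow it upward, undercut it only marginally."""
    if rival_price >= own_price:
        return min(MONOPOLY_PRICE, rival_price + 0.5)   # creep upward together
    return max(COST, rival_price - 0.1)                 # barely undercut

price_a, price_b = 12.0, 15.0   # competitive-ish starting prices
for round_no in range(1, 21):
    price_a = next_price(price_a, price_b)
    price_b = next_price(price_b, price_a)
    print(f"round {round_no:2d}: firm A = {price_a:.2f}, firm B = {price_b:.2f}")
# With no explicit agreement, both prices climb toward MONOPOLY_PRICE,
# which is why algorithm-driven tacit collusion concerns competition regulators.
```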
AI has also been raising antitrust concerns. The most recent example is the partnership between Microsoft and OpenAI, which has drawn the attention of national competition authorities. While the partnership is expected to accelerate innovation, it also raises concerns about potential anticompetitive effects, such as market foreclosure or the creation of entry barriers for competitors, and it has therefore been under scrutiny by the German and UK competition authorities. The core problem is detecting and proving whether collusion is actually taking place.
The Role of Policy and Regulation
The uncertainties AI introduces into competition create a need for algorithmic transparency and accountability to mitigate the risks of AI-driven collusion. This calls for regulatory frameworks that mandate the disclosure of algorithmic methodologies and establish clear guidelines for the development and deployment of AI. Such frameworks should encourage collaboration between competition watchdogs and AI experts.
The global best practices and emerging trends in AI regulation already include respect for human rights, sustainability, transparency and strong risk management. The EU AI Act could serve as a model for other jurisdictions, as it outlines measures to ensure accountability and mitigate risks. The key goal is to tailor AI regulations to address perceived risks while incorporating core values such as privacy, non-discrimination, transparency, and security.
Promoting Innovation Without Stifling Competition
Policymakers need to balance regulatory measures with room for innovation, ensuring that the two priorities do not hinder each other.
- Create adaptive and forward-thinking regulatory approaches that keep pace with technological advancement and allow quick adjustments in response to new AI capabilities and market behaviours.
- Competition watchdogs need to recruit domain experts who can assess competition amid rapid changes in the technology landscape, and adopt a multi-stakeholder approach that involves regulators, industry leaders, technologists, and academia in creating inclusive and ethical AI policies.
- Offer businesses incentives, such as recognition through certifications, grants, or other benefits, for adopting ethical AI practices.
- Launch studies, such as the CCI's market study, on the impact of AI on competition; these can become a driving force for sustainable growth alongside technological advancement.
Conclusion: AI and the Future of Competition
We must promote a multi-stakeholder approach that enhances regulatory oversight and incentivises ethical AI practices. This is needed to strike the delicate balance that safeguards competition and drives sustainable growth. As AI continues to redefine industries, embracing collaborative, inclusive, and forward-thinking policies will be critical to building an equitable and innovative digital future.
The lawmakers and policymakers engaged in drafting these frameworks need to ensure that they are adaptive to change and foster innovation. It is important to note that fair competition and innovation are not mutually exclusive goals; they complement each other. Therefore, a regulatory framework that promotes transparency, accountability, and fairness in AI deployment must be established.
References
- https://www.thehindu.com/sci-tech/technology/ai-has-potential-to-aid-cartelisation-fair-competition-integral-for-sustainable-growth-cci-chief/article69041922.ece
- https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-market-74851580.html
- https://www.ey.com/en_in/insights/ai/how-to-navigate-global-trends-in-artificial-intelligence-regulation#:~:text=Six%20regulatory%20trends%20in%20Artificial%20Intelligence&text=These%20include%20respect%20for%20human,based%20approach%20to%20AI%20regulation.
- https://www.business-standard.com/industry/news/ai-has-potential-to-aid-fair-competition-for-sustainable-growth-cci-chief-124122900221_1.html