#FactCheck - AI Artwork Misattributed: Mahendra Singh Dhoni Sand Sculptures Exposed as AI-Generated
Executive Summary:
A recent claim circulating on social media, that a child created sand sculptures of cricket legend Mahendra Singh Dhoni, has been proven false by the CyberPeace Research Team. The team discovered that the images were actually produced using an AI tool. Unusual details, such as extra fingers and unnatural features in the sculptures, led the Research Team to suspect artificial creation, and this suspicion was substantiated by AI detection tools. The incident underscores the need to fact-check information before posting, as misinformation can quickly go viral on social media. Everyone is advised to assess content carefully to stop the spread of false information.
Claims:
The claim is that the photographs published on social media show sand sculptures of cricketer Mahendra Singh Dhoni made by a child.
Fact Check:
Upon receiving the posts, we carefully examined the images. The collage of four pictures contains many anomalies that are clear signs of AI-generated imagery.
In the first image, the left hand of the sand sculpture has six fingers, and in the word INDIA the letter ‘A’ is misaligned, i.e. not on the same line as the other letters. In the second image, one of the boy's fingers is missing, while the sand sculpture has four fingers on its front foot and three legs. In the third image, the boy's slipper is only partially rendered: part of it is visible while the rest is missing. In the fourth image, the boy's hand does not look like a hand. These are some of the major discrepancies clearly visible in the images.
We then checked the image with an AI image detection tool named ‘Hive’, which classified it as 100.0% AI-generated.
We then checked it with another AI image detection tool, ContentAtScale, which found it to be 98% AI-generated.
From this we concluded that the images are AI-generated and have no connection with the claim made in the viral social media posts. We have also previously debunked AI-generated artwork of a sand sculpture of Indian cricketer Virat Kohli, which showed the same types of anomalies as those seen in this case.
Conclusion:
Taking into consideration the distortions spotted in the images and the results of the AI detection tools, it can be concluded that the claim that the pictures show a child's sand sculptures of cricketer Mahendra Singh Dhoni is false. The pictures were created with Artificial Intelligence. It is important to check and authenticate content before posting it to social media.
- Claim: The set of pictures shared on social media shows a child's sand sculptures of cricket player Mahendra Singh Dhoni.
- Claimed on: X (formerly known as Twitter), Instagram, Facebook, YouTube
- Fact Check: Fake & Misleading
Introduction
The ramifications of cybercrime can be far-reaching. Depending on the size of an attack, even entire countries can be affected if their critical infrastructure is connected to the internet. The vast majority of security breaches start within the perimeter, and most internet attacks are socially engineered. Unwittingly trusting any email or web request from an unknown sender creates a potential danger for organisations that depend on the Internet for their business functions. In this ever-evolving digital landscape, yet another group has emerged from its darkest corners, this time targeting a bastion of British and global heritage: the British Library, a treasure trove of around 14 million volumes and ancient manuscripts. The group identifies itself as Rhysida. Its bold manoeuvre, executed with the stealth of seasoned cyber brigands, has cast a shadow as long and dark as those found in the Gothic novels that rest on the library's shelves. The late-October cyber-attack has thrust the British Library into an unnerving state of chaos, a situation more commonly associated with dystopian fiction than with the everyday reality of a revered institution.
The Modus Operandi
The gang uses the new Rhysida ransomware to compromise Virtual Private Networks, which library staff typically use to access internal systems remotely. The ransomware presents itself as a benign decoy file delivered by email, in the familiar fashion of a phishing attack, tricking the victim into downloading it onto the host system. Once inside, the malware lies dormant and lurks in the system for a period of time. The new strain has significantly reduced its dwell time from four days to less than 24 hours, enabling it to evade periodic system checks and avoid detection.
Implications of Cyber Attack
The implications of the cyber-attack have been sobering and multifaceted. The library's systems, which serve as a lifeline for countless scholars, students, and the reading public, were left in disarray, unsettlingly reminiscent of a grand mansion invaded by incorporeal thieves. The violation has reverberated through the digital corridors of this once-impenetrable fortress, and the virtual aftershocks are ongoing. Patrons, who span a diverse spectrum of society but share a common reverence for knowledge, received unsettling news: their private data may have been compromised, a sanctity breached, revealing yet again that even the most hallowed of spaces are not impervious to modern threats.
It is with no small sense of irony that we consider the nature of the stolen goods: names, email addresses, and the like. It is not the physical tomes of inestimable value that have been ransacked, but rather the digital footprints of those who sought the wisdom within the library's walls. This virtual Pandora's box, now unleashed onto the dark web, has been tagged with a monetary value: Rhysida has set a staggering asking price of $740,000 worth of cryptocurrency for the compromised data, flaunting the theft with a hubris that chills the spine.
Yet within this convoluted narrative unfolds a subplot that offers some measure of consolation: the library reports that payment information has not been included in the digital heist, a glint of reassurance amidst the prevailing uncertainty. The digital storm has nonetheless had seismic repercussions: the library's website and interconnected systems have been besieged, and access to its vast resources significantly hampered. The distressing notice of a 'major technology outage' transformed the digital facade from a portal of endless learning into a bulletin of sorrow, projecting its sombre message across virtual space.
The Impact
The impact of this violation will resonate far beyond the mere disruption of services; it signals the dawn of an era in which venerable institutions of culture and learning must navigate the depths of cybersecurity. As the library grapples with the breach, a new front has opened in the age-old battle for the preservation of knowledge. The continuity of such an institution in a digitised world will be tested, and the outcome will help define the future of digital heritage management. As the institution rallies, led by its Chief Executive Roly Keating, one observes not a defeatist retreat but a stoic, strategic regrouping. Gratitude is extended to patrons and partners whose patience has become as vital a resource as the knowledge the library preserves. The reassurances given, while acknowledging the laborious task ahead, signal an intention not just to repair but to fortify, adapt, and evolve amidst adversity.
This wretched turn of events serves as a portentous reminder that the threats to our most sacred spaces have transformed. The digital revolution has indeed democratised knowledge, but it has also exposed it to neoteric threats. The British Library, a repository of the past, must now confront a distinctly modern adversary. It compels us to ask whether our contemporary guardians of history are equipped to combat those who wield malicious code as their weapons of choice.
Best Practices for Cyber Resilience
It is crucial to keep abreast of recent developments in cyberspace and emerging trends. Libraries in the digital age must protect their patrons' data by applying comprehensive security protocols that safeguard the integrity, availability, and confidentiality of sensitive information. A few measures that libraries can apply include:
- Secured Wi-Fi networks: Libraries offering public Wi-Fi must secure it with strong encryption protocols such as WPA3. Libraries should also establish separate networks for internal operations, keeping staff and public networks apart to protect sensitive information.
- Staff Training Programs: To avoid human error it is imperative that comprehensive training programs are conducted on a regular basis to generate greater awareness of cyber threats among staff and educate them about best practices of cyber hygiene and data security.
- Data Backups and Recovery Protocols: Patrons' sensitive data should be backed up regularly. Backups should be verified for integrity and stored securely in a dedicated repository, so that user data can be fully recovered in the event of a breach.
- Strong Authentication: Strengthening authentication is crucial to combat cyber threats and enhance library defences. Staff and patrons should be educated on strong password usage and the implementation of Multi-Factor Authentication (MFA) to add an extra layer of security; a minimal sketch of one MFA flow follows below.
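To make the last point concrete, here is a minimal sketch of a time-based one-time password (TOTP) check, the mechanism behind most authenticator-app MFA. It uses the open-source pyotp library; the account name, issuer, and login flow are illustrative assumptions, not the API of any particular library-management system.

```python
# Minimal TOTP-based MFA sketch using pyotp (pip install pyotp).
# The enrolment and login steps below are illustrative assumptions.
import pyotp

# Enrolment: generate a per-user secret and store it server-side
# (encrypted at rest in a real deployment).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The provisioning URI can be rendered as a QR code for the user's
# authenticator app (hypothetical account and issuer names).
print(totp.provisioning_uri(name="staff@library.example",
                            issuer_name="Library"))

# Login: after the password check succeeds, verify the 6-digit code.
submitted_code = totp.now()  # in practice, read from the user
if totp.verify(submitted_code, valid_window=1):  # allow 1 step of clock skew
    print("Second factor accepted")
else:
    print("Second factor rejected")
```

Even this simple second step blunts credential phishing: a stolen password alone no longer grants remote access, for example over the VPNs targeted in the attack described above.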
Conclusion
Finally, whatever the future holds, what remains unassailable is the cultural edifice that is the British Library. Its trials and tribulations, like those of the volumes it safeguards, become part of a larger narrative of endurance and defiance. In the canon of history, filled with conflicts and resolutions, the library, like the lighter anecdotes and tragic tales it harbours, will decidedly hold its place. And perhaps, with some assurance, we might glean from the sentiment voiced by Milton that the path from turmoil to enlightenment, though fraught with strenuous challenges, is paved with lessons learned and resilience rediscovered. Cyberspace is constantly evolving, so it is in our best interest to keep abreast of developments in the digital sphere. Most threats can be avoided if we remain vigilant.
Introduction
YouTube is testing a new feature called ‘Notes’, which allows users to add community-sourced context to videos, for example to clarify that a video is a parody or that it misrepresents information. The feature builds on existing tools that provide helpful content alongside videos. While under testing, it will be available to a limited number of eligible contributors, who will be invited to write notes on videos. Notes will appear publicly under a video if they are found to be broadly helpful. Viewers will be able to rate notes in one of three categories: ‘Helpful’, ‘Somewhat helpful’, or ‘Unhelpful’, and based on these ratings YouTube will determine which notes are published. The feature will first be rolled out on mobile devices in the U.S. in English. The Google-owned platform will look at ways to improve the feature over time, including whether it makes sense to expand it to other markets.
YouTube To Roll Out The New ‘Notes’ Feature
YouTube is testing an experimental feature that allows users to add notes providing relevant, timely, and easy-to-understand context to videos. This initiative builds on previous products that display helpful information alongside videos, such as information panels and disclosure requirements for altered or synthetic content. YouTube clarified in its blog that, to start with, the pilot will be available on mobile in the U.S. and in English. During this test phase, viewers, participants, and creators are invited to give feedback on the quality of the notes.
YouTube further stated in its blog that a limited number of eligible contributors will be invited via email or Creator Studio notifications to write notes, so that they can test the feature and add value to the system before the organisation decides on next steps and on whether to expand the feature. Eligibility criteria include having an active YouTube channel in good standing with YouTube's Community Guidelines.
Viewers in the U.S. will start seeing notes on videos in the coming weeks and months. In this initial pilot, third-party evaluators will rate the helpfulness of notes, which will help train the platform’s systems. As the pilot moves forward, contributors themselves will rate notes as well.
Notes will appear publicly under a video if they are found to be broadly helpful. People will be asked whether they think a note is helpful, somewhat helpful, or unhelpful, and the reasons why. For example, if a note is marked ‘Helpful’, the evaluator will have the opportunity to specify whether that is because it cites high-quality sources or because it is written clearly and neutrally. A bridging-based algorithm will consider these ratings and determine which notes are published. YouTube has said it is excited to explore new ways to make context-setting more relevant, dynamic, and unique to the videos users are watching, at scale, across the huge variety of content on the platform.
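YouTube has not published the details of its bridging-based algorithm, but the best-known public example of the approach is the matrix-factorisation model behind X's Community Notes: a note is published only if it retains a high ‘helpfulness’ intercept once user-specific and note-specific factors have absorbed agreement that follows partisan lines. The Python sketch below is a minimal illustration of that idea, not YouTube's implementation; the function name, hyperparameters, and toy ratings are assumptions for demonstration.

```python
import numpy as np

def fit_bridging_scores(ratings, n_users, n_notes, dim=1,
                        lam=0.1, lr=0.05, epochs=200, seed=0):
    """Fit rating ~ mu + b_user + b_note + <f_user, f_note> by SGD.

    `ratings` holds (user_id, note_id, value) triples with value in
    {0.0 unhelpful, 0.5 somewhat helpful, 1.0 helpful}. The note
    intercept b_note captures helpfulness shared across users with
    differing latent factors -- the "bridging" signal -- while the
    factor term soaks up one-sided, camp-specific agreement.
    """
    rng = np.random.default_rng(seed)
    mu = 0.0
    b_u, b_n = np.zeros(n_users), np.zeros(n_notes)
    f_u = rng.normal(0, 0.1, (n_users, dim))
    f_n = rng.normal(0, 0.1, (n_notes, dim))
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + b_u[u] + b_n[n] + f_u[u] @ f_n[n])
            mu += lr * err
            b_u[u] += lr * (err - lam * b_u[u])
            b_n[n] += lr * (err - lam * b_n[n])
            f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - lam * f_u[u]),
                              f_n[n] + lr * (err * f_u[u] - lam * f_n[n]))
    return b_n  # publish notes whose intercept clears a chosen threshold

# Toy data: note 0 is rated helpful by users on both sides of a divide;
# note 1 is rated helpful by one camp only.
ratings = [(0, 0, 1.0), (1, 0, 1.0), (2, 0, 1.0), (3, 0, 1.0),
           (0, 1, 1.0), (1, 1, 1.0), (2, 1, 0.0), (3, 1, 0.0)]
print(fit_bridging_scores(ratings, n_users=4, n_notes=2))
# Expect the intercept for note 0 (cross-camp agreement) to exceed
# that of note 1 (one-sided agreement).
```

The design point is that the intercept rewards agreement that cuts across otherwise-disagreeing raters, rather than sheer volume of positive ratings from one camp.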
CyberPeace Analysis: How Can Notes Help Counter Misinformation
The proposed ‘Notes’ feature could be a significant tool for countering misinformation on YouTube. Enabling contributors to add notes to videos can offer relevant, accurate context that clarifies misleading or false information. These notes can improve viewers' comprehension of the content and help them detect misinformation. Inviting users to rate notes as helpful, somewhat helpful, or unhelpful adds a further layer of transparency and public participation in assessing the accuracy of content.
As YouTube intends to gather feedback from its various stakeholders, one can look forward to improved policy and practice over time: the feedback mechanism will allow continuous refinement of the feature, ensuring it effectively addresses misinformation. The platform employs algorithms to identify helpful notes that resonate with a broad audience across different perspectives, helping to surface accurate information and combat misinformation.
Furthermore, alongside the Notes feature, YouTube should explore and implement prebunking and debunking strategies on the platform, promoting educational content and empowering users to discern fact from misleading information.
Conclusion
The new feature, currently in the testing phase, aims to counter misinformation by providing context, enabling user feedback, leveraging algorithms, promoting transparency, and continuously improving information quality. Considering the diverse audience on the platform and the high volume of daily content consumption, it is important for both platform operators and users to engage with factual, verifiable information. The fallout of misinformation on such a popular platform can be immense, so any mechanism or feature that can help counter it must be developed to its full potential. Apart from the new Notes feature, YouTube has also implemented measures in the past to counter misinformation, such as providing authenticated sources to counter election misinformation during the recent 2024 elections in India. These efforts are a welcome contribution to our shared responsibility as netizens to create a trustworthy, factual, and truly informational digital ecosystem.
References:
- https://blog.youtube/news-and-events/new-ways-to-offer-viewers-more-context/
- https://www.thehindu.com/sci-tech/technology/internet/youtube-tests-feature-that-will-let-users-add-context-to-videos/article68302933.ece
Introduction
The fast-paced development of technology and the wider use of social media platforms have led to the rapid dissemination of misinformation, characterised by wide diffusion, fast propagation, broad influence, and deep impact. Social media algorithms and their decisions are often perceived as a black box, making it impossible for users to understand and recognise how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, further reinforcing the misinformation cycle and making its spread harder to control within vast, interconnected networks. Algorithms judge content on one dominant metric: user engagement. Serving users what they are most likely to enjoy is the algorithms' core objective, so algorithms and search engines surface the relevant items a user is most likely to engage with. This process was originally designed to cut through the clutter and deliver the best information, but the viral nature of information and user interactions means it can unknowingly spread misinformation widely.
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximise user engagement, can inadvertently promote misinformation because content that triggers strong emotions is rewarded, creating echo chambers and filter bubbles. These algorithms prioritise content based on user behaviour, which leads to the promotion of emotionally charged misinformation. They also prioritise content with viral potential, so false or misleading content can spread faster than corrections or factual content.
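To make this concrete, here is a minimal Python sketch of an engagement-only ranker; the posts and engagement rates are hypothetical numbers chosen for illustration. Because the ranker optimises a single engagement signal and never consults accuracy, the emotive false post rises to the top.

```python
# Minimal sketch of an engagement-only feed ranker.
# Posts and engagement rates are hypothetical illustration values.
posts = [
    {"id": "sober-fact-check", "accurate": True,  "engagement_rate": 0.04},
    {"id": "outrage-claim",    "accurate": False, "engagement_rate": 0.11},
    {"id": "news-report",      "accurate": True,  "engagement_rate": 0.06},
]

def rank_by_engagement(feed):
    # The ranker optimises clicks/shares per impression; the
    # "accurate" field exists in the data but is never consulted.
    return sorted(feed, key=lambda p: p["engagement_rate"], reverse=True)

for position, post in enumerate(rank_by_engagement(posts), start=1):
    print(position, post["id"], "accurate =", post["accurate"])
# Output places the false but emotive post at position 1.
```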
Popular content is further amplified by platforms, which spread it faster by presenting it to more users. Fact-checking efforts struggle to keep pace: by the time false claims are reported or corrected, they may already have gained widespread acceptance. Social media algorithms also find it difficult to distinguish between real people and organised networks of troll farms or bots that propagate false information. This creates a vicious loop in which users are constantly exposed to inaccurate or misleading material, which strengthens their convictions and spreads erroneous information across networks.
Algorithms primarily aim to enhance user engagement by curating content that aligns with a user's previous behaviour and preferences. Sometimes this process creates "echo chambers", in which individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. This curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it arduous for users to discern credible information from misinformation. Algorithms feed a feedback loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences and make platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it also poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users are less likely to encounter fact-checks or corrective information.
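This feedback loop can be illustrated with a toy simulation, sketched below under assumed parameters: the feed recommends topics in proportion to past engagement, and the user engages far more readily with belief-confirming content. The topic labels and probabilities are hypothetical.

```python
import random

# Toy feedback-loop simulation; all probabilities are assumptions.
random.seed(42)
engage_prob = {"confirming": 0.9, "opposing": 0.2, "neutral": 0.5}
weights = {topic: 1.0 for topic in engage_prob}  # ranker's learned interest

for _ in range(500):
    topics = list(weights)
    shown = random.choices(topics, [weights[t] for t in topics])[0]
    if random.random() < engage_prob[shown]:
        weights[shown] += 1.0  # engagement feeds straight back into ranking

total = sum(weights.values())
for topic, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{topic}: shown ~{w / total:.0%} of the time")
# Typically the "confirming" topic ends up dominating the feed while
# "opposing" content nearly vanishes -- a miniature echo chamber.
```

Nothing in the loop is malicious; concentration emerges purely from optimising engagement on past behaviour, which is why corrective information struggles to surface once the loop settles.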
Moreover, the sheer size and complexity of today's social networks exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it, for instance by inspecting messages or URLs for false information, is computationally challenging and inefficient. The extensive amount of content shared daily means that misinformation can propagate far quicker than it can be fact-checked or debunked.
Understanding how algorithms influence user behaviour is important for tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders together create a challenging environment in which misinformation thrives, highlighting the importance of countering it through robust measures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can establish mechanisms for users to appeal algorithmic decisions and ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps toward regulating misinformation. By fostering a more transparent and accountable ecosystem, regulations help mitigate the negative effects of algorithmic misinformation, thereby protecting the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023 explicitly prohibits the dissemination of misinformation on digital platforms. Intermediaries are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or prohibited content. This rule aims to ensure that platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content will enable users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defences can empower individuals to question the sources of information and report misleading content effectively. Looking ahead, platforms should evolve toward more transparent, user-driven systems in which algorithms are optimised not just for engagement but for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.
References:
- https://www.advancedsciencenews.com/misformation-spreads-like-a-nuclear-reaction-on-the-internet/
- https://www.niemanlab.org/2024/09/want-to-fight-misinformation-teach-people-how-algorithms-work/
- Press Release: Press Information Bureau (pib.gov.in)