#FactCheck - MS Dhoni Sculpture Falsely Portrayed as Chanakya 3D Recreation
Executive Summary:
A claim circulating widely on social media holds that a 3D model of Chanakya, supposedly made by "Magadha DS University", matches MS Dhoni. However, fact-checking reveals that it is a 3D model of MS Dhoni, not Chanakya. The model was created by artist Ankur Khatri, and no institution called Magadha DS University appears to exist. Khatri uploaded the model to ArtStation, describing it as an MS Dhoni likeness study.

Claims:
The image being shared is claimed to be a 3D rendering of the ancient philosopher Chanakya created by Magadha DS University. However, viewers have noted a striking resemblance to the Indian cricketer MS Dhoni in the image.



Fact Check:
After receiving the post, we ran a reverse image search on the image. It led us to the portfolio of a freelance character artist named Ankur Khatri, where we found the viral image titled "MS Dhoni likeness study", alongside several other character models in his portfolio.



Subsequently, we searched for the mentioned institution, "Magadha DS University", but found no university by that name; the closest match is Magadh University, located in Bodh Gaya, Bihar. A search for any such model made by Magadh University returned nothing. We then analysed the freelance character artist's profile and found that he has a dedicated Instagram channel, where he posted a detailed video of the creative process behind the MS Dhoni character model.
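As background to the reverse image search step above: search engines typically match near-duplicate images by comparing compact perceptual hashes rather than raw pixels, so a re-uploaded or recompressed copy still matches the original. The sketch below is purely illustrative (not the tooling used in this fact check); it implements a toy average-hash on small grayscale grids standing in for heavily downscaled images.

```python
# Illustrative sketch of perceptual hashing, the idea behind reverse image
# search matching. Not the fact-checkers' actual tooling; the pixel grids
# below are made-up toy data.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is >= the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p >= mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests near-duplicate images."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 4x4 "downscaled" grayscale images: an original, a slightly
# recompressed copy of it, and an unrelated picture.
original = [[200, 190, 60, 50], [210, 185, 55, 45],
            [90, 80, 220, 230], [85, 75, 225, 235]]
recompressed = [[198, 192, 62, 48], [208, 183, 57, 47],
                [92, 78, 218, 232], [87, 73, 223, 237]]
unrelated = [[10, 240, 15, 235], [245, 20, 230, 25],
             [12, 238, 18, 233], [242, 22, 228, 27]]

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(recompressed)))  # 0: same image
print(hamming_distance(h_orig, average_hash(unrelated)))     # large: different
```

Because small pixel-level changes rarely flip bits relative to the mean brightness, the recompressed copy hashes identically to the original, which is why a viral re-upload can still be traced back to the artist's source page.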

We concluded that the viral image is not a reconstruction of the Indian philosopher Chanakya but a digital recreation of cricketer MS Dhoni, created by an artist named Ankur Khatri and not by any university called Magadha DS.
Conclusion:
The viral claim that the 3D model is a recreation of the ancient philosopher Chanakya by a university called Magadha DS University is false and misleading. In reality, the model is a digital artwork of former Indian cricket captain MS Dhoni, created by artist Ankur Khatri. There is no evidence that a Magadha DS University exists. While Magadh University in Bodh Gaya, Bihar has a similar name, we found no evidence connecting it to the model's creation. Therefore, the claim is debunked, and the image is confirmed to be a depiction of MS Dhoni, not Chanakya.
Related Blogs
Introduction
The automobile industry is expanding fast, with vehicles becoming sophisticated, interconnected devices equipped with cutting-edge digital technology. This integration improves convenience, safety, and efficiency while also exposing automobiles to a new set of cyber risks. Electric vehicles (EVs) are equipped with sophisticated computer systems that manage various functions, such as acceleration, braking, and steering. If these systems are compromised, it could result in hazardous situations, including remote control of the vehicle or unauthorised access to sensitive data. The automotive sector is evolving with the rise of connected-car stakeholders, exposing new vulnerabilities for hackers to exploit.
Why Automotive Cybersecurity is required
Cybersecurity threats to automobiles arise from vulnerabilities in hardware, software and overall systems. Additional concerns include general privacy clauses that justify collecting and transferring data to "third-party vendors" without explicitly disclosing who such third parties are or how personal data is processed. For example, infotainment platform data may reveal popular music and the user's preferences, which the music industry may use to improve marketing strategies. It is also less widely known that behavioural tracking data, such as driving patterns, is logged by the original equipment manufacturer.
Hacking is not limited to attackers gaining control of an electric vehicle; it also includes malicious actors compromising charging stations to manipulate their systems. In Russia, EV charging stations in Moscow were hacked to display pro-Ukraine and anti-Putin messages such as "Glory to Ukraine" and "Death to the enemy" against the backdrop of the Russia-Ukraine war. Other examples include incidents on the Isle of Wight, where hackers took over EV charger displays to show inappropriate content and high-voltage fault codes, leaving owners with empty batteries unable to charge their vehicles.
UN Economic Commission for Europe releases Regulation 155 for Automobiles
UN Economic Commission for Europe Regulation 155 lays down uniform provisions concerning the approval of vehicles with regard to cybersecurity and cybersecurity management systems (CSMS). It originated under Working Party 29 (WP.29), the World Forum for Harmonization of Vehicle Regulations, which aims to harmonise regulations for vehicles and vehicle equipment. Regulation 155 has a two-pronged objective: first, to ensure cybersecurity at the organisational level, and second, to ensure adequate design of the vehicle architecture. A critical aspect in this context is the implementation of a certified CSMS by all companies that bring vehicles to market. Notably, this requirement alters the perspective of manufacturers; their responsibilities no longer conclude with the start of production (SOP). Instead, manufacturers are now required to continuously monitor and assess the safety systems throughout the entire life cycle of a vehicle, including making any necessary improvements.
This Regulation reflects the highly dynamic nature of software development and assurance. Moreover, the management system is designed to ensure compliance with safety requirements across the entire supply chain. This is a significant challenge, considering that suppliers currently account for over 70 per cent of the software volume.
The Regulation, which is binding on its 64 member countries, came into force in 2021. UNECE countries were required to comply by July 2022 for all new vehicles, and by July 2024 the Regulation was set to apply to all vehicles. The Regulation is expected to become a de facto global standard, since vehicles authorised in a particular country may not be brought into the market of any UNECE member country on the basis of any other authorisation. In such a scenario, OEMs in non-member countries may be required to give a "self-declaration" attesting to the equipment's conformity with cybersecurity standards.
Conclusion
To compete and earn trust, global carmakers must deliver a robust cybersecurity framework that meets evolving regulations. The UNECE regulations are driving this shift by requiring automotive original equipment manufacturers (OEMs) to integrate vehicle cybersecurity throughout the entire value chain. The 'security by design' approach aims to build a connected car that is trusted by all. Automotive cybersecurity involves measures and technologies to protect connected vehicles and their onboard systems from growing digital threats.
References:
- “Electric vehicle cyber security risks and best practices (2023)”, Cyber Talk, 1 August 2023. https://www.cybertalk.org/2023/08/01/electric-vehicle-cyber-security-risks-and-best-practices-2023/#:~:text=EVs%20are%20equipped%20with%20complex,unauthorized%20access%20to%20sensitive%20data.
- Gordon, Aaron, "Russian Electric Vehicle Chargers Hacked, Tell Users 'PUTIN IS A D*******D'", Vice, 28 February 2022. https://www.vice.com/en/article/russian-electric-vehicle-chargers-hacked-tell-users-putin-is-a-dickhead/
- “Isle of Wight: Council’s electric vehicle chargers hacked to show porn site”, BBC, 6 April 2022. https://www.bbc.com/news/uk-england-hampshire-61006816
- Sandler, Manuel, “UN Regulation No. 155: What You Need to Know about UN R155”, Cyres Consulting, 1 June 2022. https://www.cyres-consulting.com/un-regulation-no-155-requirements-what-you-need-to-know/?srsltid=AfmBOopV1pH1mg6M2Nn439N1-EyiU-gPwH2L4vq5tmP0Y2vUpQR-yfP7#A_short_overview_Background_knowledge_on_UN_Regulation_No_155
- UNECE, "WP.29 - Introduction". https://unece.org/wp29-introduction
Introduction
The rise of unreliable social media newsgroups on online platforms has significantly altered the way people consume and interact with news, contributing to the spread of unverified and misleading content. Unlike traditional news outlets that adhere to journalistic standards, these newsgroups often lack proper fact-checking and editorial oversight, leading to the rapid dissemination of false or distorted information. Social media has transformed individuals into active content creators. Social media newsgroups (SMNs) are social media platforms used as sources of news and information. According to a survey by the Pew Research Center (July-August 2024), 54% of U.S. adults now rely on social media for news. This rise in SMNs has raised concerns over the integrity of online news and undermined trust in legitimate news sources. Social media users are advised to consume information and news from authentic sources or channels available on social media platforms.
The Growing Issue of Misinformation in Social Media Newsgroups
Social media newsgroups have become both a source of vital information and a conduit for misinformation. While these platforms allow rapid news sharing and facilitate political and social campaigns, they also pose significant risks of spreading unverified information. Misleading information, often driven by algorithms designed to maximise user engagement, proliferates in these spaces. This has led to increasing challenges, as SMNs cater to diverse communities with varying political affiliations, gender demographics, and interests. The result can be echo chambers where information is not critically assessed, amplifying confirmation bias and enabling the unchecked spread of misinformation. A prominent example is the false narratives surrounding COVID-19 vaccines that spread across SMNs, contributing to widespread vaccine hesitancy and public health risks.
Understanding the Susceptibility of Online Newsgroups to Misinformation
Several factors make social media newsgroups particularly susceptible to misinformation. Some of the factors are listed below:
- The lack of robust fact-checking mechanisms in social media newsgroups allows false narratives to spread easily.
- The lack of expertise from admins of online newsgroups, who are often regular users without journalism knowledge, can result in the spreading of inaccurate information. Their primary goal of increasing engagement may overshadow concerns about accuracy and credibility.
- The anonymity of users exacerbates the problem of misinformation. It allows users to share unverified or misleading content without accountability.
- The viral nature of social media also leads to the vast spread of misinformation to audiences instantly, often outpacing efforts to correct it.
- Unlike traditional media outlets, online newsgroups often lack formal fact-checking processes. This absence allows misinformation to circulate without verification, making it easier for inaccuracies to go unchallenged.
- The sheer volume of user-generated posts makes effective content moderation a significant challenge.
- Platform algorithms designed to maximise user engagement can inadvertently amplify sensational or emotionally charged content, which is more likely to be false.
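The last factor above can be made concrete with a toy model. The code below is a hypothetical sketch, not any platform's real ranking system: it shows how a feed ranked purely on predicted engagement, with no accuracy signal, pushes sensational posts above accurate ones. All post data is invented for illustration.

```python
# Hypothetical toy model of engagement-only feed ranking.
# Not any real platform's algorithm; the posts and scores are made up.

posts = [
    {"headline": "Routine council meeting approves budget",
     "accurate": True, "engagement": 120},
    {"headline": "SHOCKING cure doctors don't want you to see",
     "accurate": False, "engagement": 950},
    {"headline": "Vaccine trial results published in journal",
     "accurate": True, "engagement": 310},
    {"headline": "Secret plot behind election revealed!!",
     "accurate": False, "engagement": 870},
]

# Rank purely by predicted engagement: accuracy plays no role at all.
feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)

for p in feed[:2]:
    print(p["headline"], "| accurate:", p["accurate"])
# In this toy data, both top slots go to inaccurate, sensational posts.
```

The point of the sketch is that nothing malicious is needed: a ranker optimising a single engagement metric surfaces whatever users react to most, and emotionally charged falsehoods tend to score well on exactly that metric.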
Consequences of Misinformation in Newsgroups
The societal impacts of misinformation in SMNs are profound. Political polarisation can fuel one-sided views and create deep divides in democratic societies. Health risks emerge when false information spreads about critical issues, such as anti-vaccine movements or misinformation related to public health crises. Misinformation has dire long-term implications: it can destabilise governments and erode trust in both traditional and social media, ultimately undermining democracy. If unaddressed, the consequences could continue to ripple through society, perpetuating false narratives that shape public opinion.
Steps to Mitigate Misinformation in Social Media Newsgroups
- Educating users in media literacy can empower them to critically assess the information they encounter, reducing the spread of false narratives.
- Introducing stricter platform policies, including penalties for deliberately sharing misinformation, may act as a deterrent against sharing unverified information.
- Collaborative fact-checking initiatives with involvement from social media platforms, independent journalists, and expert organisations can provide a unified front against the spread of false information.
- From a policy perspective, a holistic approach that combines platform responsibility with user education and governmental and industry oversight is essential to curbing the spread of misinformation in social media newsgroups.
Conclusion
The emergence of social media newsgroups has revolutionised the dissemination of information, but the rapid spread of misinformation through them poses a significant challenge to the integrity of news in the digital age. This challenge is amplified by algorithmic echo chambers and unchecked user engagement, with profound societal implications. A multi-faceted approach is required to tackle these issues, combining stringent platform policies, AI-driven moderation, and collaborative fact-checking initiatives. User empowerment through media literacy is an important factor in promoting critical thinking and building cognitive defences. By adopting these measures, we can better navigate the complexities of consuming news from social media newsgroups and preserve the reliability of online information. Furthermore, users should consume news from authoritative sources available on social media platforms.
Introduction
The recent inauguration of the Google Safety Engineering Centre (GSEC) in Hyderabad on 18th June, 2025, marks a pivotal moment not just for India, but for the entire Asia-Pacific region’s digital future. As only the fourth such centre in the world after Munich, Dublin, and Málaga, its presence signals a shift in how AI safety, cybersecurity, and digital trust are being decentralised, leading to a more globalised and inclusive tech ecosystem. India’s digitisation over the years has grown at a rapid scale, introducing millions of first-time internet users, who, depending on their awareness, are susceptible to online scams, phishing, deepfakes, and AI-driven fraud. The establishment of GSEC is not just about launching a facility but a step towards addressing AI readiness, user protection, and ecosystem resilience.
Building a Safer Digital Future in the Global South
The GSEC is set to operationalise the Google Safety Charter, designed around three core pillars: empowering users by protecting them from online fraud, strengthening cybersecurity for government and enterprise, and advancing responsible AI in platform design and execution. This represents a shift from standard reactive safety responses to proactive, AI-driven risk mitigation. The goal is to make safety tools not only effective but tailored to threats unique to the Global South, from multilingual phishing to financial fraud via unofficial lending apps. The centre is expected to stimulate regional cybersecurity ecosystems by creating jobs, fostering public-private partnerships, and enabling collaboration across academia, law enforcement, civil society, and startups. In doing so, it positions Asia-Pacific not as a consumer of standard Western safety solutions but as an active contributor to the next generation of digital safeguards and customised solutions.
Solutions previously piloted by Google include DigiKavach, a real-time fraud detection framework, and tools such as spam protection in mobile operating systems and app vetting mechanisms. What GSEC might aid is the scaling and integration of these efforts into systems-level responses, where threat detection, safety warnings, and reporting mechanisms work in seamless coordination across platforms. This reimagines safety as a core design principle in India's digital public infrastructure rather than an attack-driven response.
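To give a feel for the kind of signal such a pipeline might consume, here is a deliberately simplified sketch. GSEC's and DigiKavach's actual systems are not public, so everything below, phrases, weights, and threshold, is a hypothetical illustration of rule-based scam-message scoring, one of the simplest inputs a systems-level safety response could build on.

```python
# Illustrative sketch only: a toy rule-based scam-message scorer.
# The phrases, weights, and threshold are invented for this example and
# do not describe any real Google or DigiKavach system.

SCAM_SIGNALS = {
    "urgent": 2,                # pressure tactics
    "verify your account": 3,   # credential-phishing lure
    "lottery": 3,               # prize-scam lure
    "click this link": 2,       # redirection to a malicious page
    "loan approved": 2,         # unofficial lending-app lure
}

def scam_score(message: str) -> int:
    """Sum the weights of known scam phrases found in the message."""
    text = message.lower()
    return sum(w for phrase, w in SCAM_SIGNALS.items() if phrase in text)

def classify(message: str, threshold: int = 3) -> str:
    """Flag the message for a user-facing warning when the score crosses the threshold."""
    return "flag for warning" if scam_score(message) >= threshold else "allow"

print(classify("URGENT: verify your account now, click this link"))  # flag for warning
print(classify("Lunch at 1pm?"))                                     # allow
```

Real fraud detection relies on far richer signals (sender reputation, URL analysis, learned models), but the shape is the same: score incoming content, then route high-risk items into the warning and reporting mechanisms the paragraph above describes.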
CyberPeace Insights
The launch aligns with events such as the AI Readiness Methodology Conference recently held in New Delhi, which brought together researchers, policymakers, and industry leaders to discuss ethical, secure, and inclusive AI implementation. As the world grapples with AI technologies ranging from generative content to algorithmic decisions, centres like GSEC can play a critical role in defining the safeguards and governance structures that support rapid innovation without compromising public trust and safety. The region's experiences and innovations in AI governance must shape global norms, and tech firms have a significant role in making that happen. Beyond this, efforts to build digital infrastructure alongside safety centres dedicated to protecting it resonate with India's vision of becoming a global leader in AI.
References
- https://www.thehindu.com/news/cities/Hyderabad/google-safety-engineering-centre-india-inaugurated-in-hyderabad/article69708279.ece
- https://www.businesstoday.in/technology/news/story/google-launches-safety-charter-to-secure-indias-ai-future-flags-online-fraud-and-cyber-threats-480718-2025-06-17?utm_source=recengine&utm_medium=web&referral=yes&utm_content=footerstrip-1&t_source=recengine&t_medium=web&t_content=footerstrip-1&t_psl=False
- https://blog.google/intl/en-in/partnering-indias-success-in-a-new-digital-paradigm/
- https://blog.google/intl/en-in/company-news/googles-safety-charter-for-indias-ai-led-transformation/
- https://economictimes.indiatimes.com/magazines/panache/google-rolls-out-hyderabad-hub-for-online-safety-launches-first-indian-google-safety-engineering-centre/articleshow/121928037.cms?from=mdr