#FactCheck - MS Dhoni Sculpture Falsely Portrayed as Chanakya 3D Recreation
Executive Summary:
A claim circulating widely on social media states that a 3D model of Chanakya, supposedly created by Magadha DS University, matches MS Dhoni. However, fact-checking reveals that it is a 3D model of MS Dhoni, not Chanakya. The model was created by artist Ankur Khatri, and no institution named Magadha DS University appears to exist. Khatri uploaded the model to ArtStation, describing it as an MS Dhoni likeness study.
Claims:
The image being shared is claimed to be a 3D rendering of the ancient philosopher Chanakya created by Magadha DS University. However, viewers have noted a striking resemblance to the Indian cricketer MS Dhoni in the image.
Fact Check:
After receiving the post, we ran a reverse image search on the image, which led us to the portfolio of a freelance character artist named Ankur Khatri. We found the viral image there, titled "MS Dhoni likeness study", alongside other character models in his portfolio.
Subsequently, we searched for the university named in the claim, Magadha DS University, but found no university by that name; the closest match is Magadh University, located in Bodh Gaya, Bihar. A further search for any such model created by Magadh University returned nothing. We then analysed the freelance character artist's profile and found that he runs a dedicated Instagram channel, where he has posted a detailed video of the creative process behind the MS Dhoni character model.
We concluded that the viral image is not a reconstruction of the Indian philosopher Chanakya but a 3D model of cricketer MS Dhoni, created by artist Ankur Khatri and not by any university named Magadha DS.
Conclusion:
The viral claim that the 3D model is a recreation of the ancient philosopher Chanakya by a university called Magadha DS University is false and misleading. In reality, the model is a digital artwork of former Indian cricket captain MS Dhoni, created by artist Ankur Khatri. There is no evidence that a Magadha DS University exists. While Magadh University in Bodh Gaya, Bihar, has a similar name, we found no evidence of its involvement in the model's creation. Therefore, the claim is debunked, and the image is confirmed to be a depiction of MS Dhoni, not Chanakya.
Introduction
As e-sports flourish in India, mobile gaming platforms and apps have contributed massively to this boom. The wave of online mobile gaming has brought new recognition to e-sports. With the Sports Ministry being very proactive about e-sports and e-athletes, it is pertinent to ensure that we do not compromise our cybersecurity for the sake of these games. When we talk about online mobile gaming, the most common names that come to mind are PUBG and BGMI. In welcome news for Indian gamers, BGMI is set to be relaunched in India after approval from the Ministry of Electronics and Information Technology.
Why was BGMI banned?
The Government banned Battlegrounds Mobile India on the grounds that it was a Chinese application and that all its data was hosted in China. This created a cascade of compliance and user-safety issues, since the data was stored outside India. Since 2020, the Indian Government has been proactive in banning Chinese applications that might adversely affect national security and Indian citizens. More than 200 applications have been banned, most of them because their data hubs were located in China. Cross-border data flow has been a key issue in geopolitics: whoever hosts the data virtually owns it as well, and it was under this potential threat that apps hosting their data in China were banned.
Why is BGMI coming back?
BGMI was banned for not hosting data in India, and since the ban, the Krafton Inc.-owned game has been working in India to set up data centres and servers so that Indian players have a separate gaming server. These moves will lead to a safer gaming ecosystem and better adherence to the laws and policies of the land. The developers have not announced a relaunch date yet, but the game is expected to be available for download for iOS and Android users in the coming days. The game will return to app stores, as the Ministry of Electronics and Information Technology has issued a letter stating that the game be allowed and made available for download on the respective app stores.
Grounds for BGMI
BGMI has to ensure that it complies with all laws, policies and guidelines in India, and must demonstrate this to the Ministry to get an extension of the approval. The game has been permitted for a trial period of only 90 days (3 months). Hon’ble MoS MeitY Rajeev Chandrashekhar stated in a tweet, “This is a 3-month trial approval of #BGMI after it has complied with issues of server locations and data security etc. We will keep a close watch on other issues of User harm, Addiction etc., in the next 3 months before a final decision is taken”. This clearly shows the seriousness behind the bans on Chinese apps. The Ministry and the Government are no longer taking a soft line; it is all about compliance and safeguarding users’ data.
Way Forward
This move will play a significant role in the future, not only for gaming companies but also for other online industries, in ensuring compliance. It will act as a precedent on the issue of cross-border data flow and the advantages of data localisation, and will go a long way in advocacy for the betterment of the Indian cyber ecosystem. MeitY alone cannot safeguard the space completely; it is a shared responsibility of the Government, industry and netizens.
Conclusion
The advent of online mobile gaming has taken the nation by storm, and thus, being safe and secure in this ecosystem is paramount. The provisional permission for BGMI shows the Government's stance and its zero-tolerance policy for non-compliance with the law. The latest policies and bills, like the Digital India Act and the Digital Personal Data Protection Act, will go a long way in securing the interests and rights of Indian netizens and will create a blanket of safety against issues and threats in the future.
Introduction
Children today are growing up amidst technology, and the internet has become an important part of their lives. The internet provides children with a wealth of recreational and educational options and learning environments, but it also presents largely unseen difficulties, particularly in the context of deepfakes and misinformation. AI is capable of performing complex tasks quickly; however, misuse of AI technologies has led to a rise in cybercrime. The evolving nature of cyber threats can negatively affect children's wellbeing and safety while using the internet.
India's Digital Environment
India has one of the world's fastest-growing internet user bases, with more young netizens coming online every day. The internet has become an inseparable part of their everyday lives, be it social media or online courses. But the speed at which the digital world is evolving has raised many privacy and safety concerns, increasing the chance of exposure to potentially dangerous content.
Misinformation: The Rising Concern
Today, the internet is filled with various types of misinformation, and youngsters are especially vulnerable to its adverse effects. Given the diversity of language and culture in India, the spread of misinformation can have a vast negative impact on society. In particular, misinformation in education can mislead young minds and hinder their cognitive development.
To address this issue, it is important that parents, academia, government, industry and civil society work together to promote digital literacy initiatives that teach children to critically analyse online material, easing their navigation of the digital realm.
Deepfakes: The Deceptive Mirage
Deepfakes, or digitally altered videos and images made with the use of artificial intelligence, pose a serious online threat. The possible ramifications of deepfake technology are particularly concerning in India, given the high level of dependence on the media. Deepfakes can have far-reaching repercussions, from altering political narratives to disseminating misleading information.
Addressing the deepfake problem demands a multifaceted strategy. Media literacy programs should be integrated into the educational curriculum to assist youngsters in distinguishing between legitimate and distorted content. Furthermore, strict laws as well as technology developments are required to detect and limit the negative impact of deepfakes.
Safeguarding Children in Cyberspace
● Parental Guidance and Open Communication: Open communication and parental guidance are essential for protecting children's internet safety. It is necessary to have open discussions about the possible consequences of internet use and what constitutes appropriate use. Parents should actively participate in their children's online activities, understanding the platforms and material their children consume online.
● Educational Initiatives: Comprehensive programs for digital literacy must be implemented in educational settings. Critical thinking abilities, internet etiquette, and knowledge of the risks associated with deepfakes and misinformation should all be included in these programs. Fostering a secure online environment requires giving young netizens the tools they need to question and examine digital content.
● Policies and Rules: Recognising the risks posed by the misuse of advanced technologies such as AI and deepfakes, the Indian government is moving toward dedicated legislation to tackle the issues arising from the misuse of deepfake technology by bad actors. The government has recently issued an advisory to social media intermediaries to identify misinformation and deepfakes and to ensure compliance with the Information Technology (IT) Rules, 2021. Online platforms are legally obliged to prevent the spread of misinformation and to exercise due diligence, making reasonable efforts to identify misinformation and deepfakes. Legal frameworks need to be equipped to handle the challenges posed by AI, and accountability in AI is a complex issue that requires comprehensive legal reform. In light of the various reported cases of deepfake misuse and the spread of such content on social media, there is a need to adopt and enforce strong laws addressing the challenges posed by misinformation and deepfakes. Working with technology companies to implement advanced content-detection tools, and ensuring that law enforcement takes swift action against those who misuse the technology, will act as a deterrent to cyber crooks.
● Digital Parenting: It is important for parents to keep up with the latest trends and digital technologies. Digital parenting includes understanding privacy settings, monitoring online activity, and using parental control tools to create a safe online environment for children.
Conclusion
As India continues to move forward digitally, protecting children in cyberspace has become a shared responsibility. By promoting digital literacy, encouraging open communication and enforcing strong laws, we can create a safer online environment for younger generations. Knowledge, understanding, and active efforts to combat misinformation and deepfakes are the keys to building a safety net in the online age. Social media intermediaries and platforms must ensure compliance with the IT Rules, 2021, the IT Act, 2000, and the newly enacted Digital Personal Data Protection Act, 2023. It is the shared responsibility of the government, parents and teachers, users and organisations to establish a safe online space for children.
In an era defined by perpetual technological advancement, the hitherto uncharted territories of the human experience are progressively being illuminated by the luminous glow of innovation. The construct of privacy, once a straightforward concept involving personal secrets and solitude, has evolved into a complex web of data protection, consent, and digital rights. This notion of privacy, which often feels as though it elusively ebbs and flows like the ghost of a bygone epoch, is now confronted with a novel intruder – neurotechnology – which promises to redefine the very essence of individual sanctity.
Why Neurorights
At the forefront of this existential conversation lie ventures like Elon Musk's Neuralink. This company, which finds itself at the confluence of fantastical dreams and tangible reality, teases a future where the contents of our thoughts could be rendered as accessible as the words we speak. An existence where machines not only decipher our mental whispers but hold the potential to echo back, reshaping our cognitive landscapes. This startling innovation sets the stage for the emergence of 'neurorights' – a paradigm aimed at erecting a metaphorical firewall around the synapses and neurons that compose our innermost selves.
At institutions such as the University of California, Berkeley, researchers under the aegis of cognitive scientists like Jack Gallant are already drawing the map of once-inaccessible territories within the mind. Gallant's landmark study, which involved decoding the brain activity of volunteers as they absorbed visual stimuli, opened Pandora's box regarding the implications of mind-reading. The paper, published a decade ago, was an inchoate step toward understanding the narrative woven within the cerebral cortex. Although his work yielded only a rough sketch of the observed video content, it heralded an era in which thought could be translated into observable media.
The Growth
This rapid acceleration of neuro-technological prowess has not gone unnoticed on the sociopolitical stage. In a pioneering spirit reminiscent of the robust legislative eagerness of early democracies, Chile boldly stepped into the global spotlight in 2021 by legislating neurorights. The Chilean senate's decision to constitutionalize these rights sent ripples the world over, signalling an acknowledgement that the evolution of brain-computer interfaces was advancing at a daunting pace. The initiative was spearheaded by visionaries like Guido Girardi, a former senator whose legislative foresight drew clear parallels between the disruptive advent of social media and the potential upheaval posed by emergent neurotechnology.
Pursuit of Regulation
Yet the pursuit of regulation in such an embryonic field is riddled with intellectual quandaries and ethical mazes. Advocates like Allan McCay articulate the delicate tightrope that policy-makers must traverse. The perils of premature regulation are as formidable as the risks of a delayed response – the former potentially stifling innovation, the latter risking a landscape where technological advances could outpace societal control, engendering a future fraught with unforeseen backlashes.
Such is the dichotomy embodied in the story of Ian Burkhart, whose life was irrevocably altered by the intervention of neurotechnology. Burkhart's experience, transitioning from quadriplegia to digital dexterity through sheer force of thought, epitomizes the utopian potential of neuronal interfaces. Yet McCay issues a solemn reminder that with great power comes great potential for misuse, highlighting contentious ethical issues such as the potential for the criminal justice system to overextend its reach into the neural recesses of the human psyche.
Firmly ensconced within this brave new world, the quest for prudence is of paramount importance. McCay advocates for a dyadic approach, where privacy is vehemently protected and the workings of technology proffered with crystal-clear transparency. The clandestine machinations of AI and the danger of algorithmic bias necessitate a vigorous, ethical architecture to govern this new frontier.
As legal frameworks around the globe wrestle with the implications of neurotechnology, countries like India, with their burgeoning jurisprudence regarding privacy, offer a vantage point into the potential shape of forthcoming legislation. Jurists and technology lawyers, including Jaideep Reddy, acknowledge ongoing protections yet underscore the imperativeness of continued discourse to gauge the adequacy of current laws in this nascent arena.
Conclusion
The dialogue surrounding neurorights emerges, not merely as another thread in our social fabric, but as a tapestry unto itself – intricately woven with the threads of autonomy, liberty, and privacy. As we hover at the edge of tomorrow, these conversations crystallize into an imperative collective undertaking, promising to define the sanctity of cognitive liberty. The issue at hand is nothing less than a societal reckoning with the final frontier – the safeguarding of the privacy of our thoughts.