#FactCheck - MS Dhoni Sculpture Falsely Portrayed as Chanakya 3D Recreation
Executive Summary:
A claim circulating widely on social media holds that a 3D model of Chanakya, supposedly created by “Magadha DS University”, bears a striking resemblance to MS Dhoni. However, fact-checking reveals that the image is a 3D model of MS Dhoni, not Chanakya. The model was created by artist Ankur Khatri, and no institution called Magadha DS University appears to exist. Khatri uploaded the model to ArtStation, titling it an “MS Dhoni likeness study”.

Claims:
The image being shared is claimed to be a 3D rendering of the ancient philosopher Chanakya, created by Magadha DS University. However, viewers have noticed a striking resemblance to the Indian cricketer MS Dhoni in the image.

Fact Check:
After receiving the post, we ran a reverse image search on the image and landed on the portfolio of a freelance character artist named Ankur Khatri. We found the viral image there; he titled the work “MS Dhoni likeness study”. His portfolio also features several other character models.

Subsequently, we searched for the university named in the claim, “Magadha DS University”, but found no university by that name. The closest match is Magadh University, located in Bodh Gaya, Bihar. We then searched the internet for any such model made by Magadh University and found nothing. The next step was to analyse the freelance character artist’s profile, where we found that he runs a dedicated Instagram channel on which he has posted a detailed video of the creative process behind the MS Dhoni character model.

We concluded that the viral image is not a reconstruction of the Indian philosopher Chanakya but a 3D recreation of cricketer MS Dhoni, created by the artist Ankur Khatri and not by any university named Magadha DS.
Conclusion:
The viral claim that the 3D model is a recreation of the ancient philosopher Chanakya by a university called Magadha DS University is False and Misleading. In reality, the model is a digital artwork of former Indian cricket captain MS Dhoni, created by artist Ankur Khatri. There is no evidence that a Magadha DS University exists. A similarly named institution, Magadh University, is located in Bodh Gaya, Bihar, but we found no evidence of its involvement in the model’s creation. Therefore, the claim is debunked, and the image is confirmed to be a depiction of MS Dhoni, not Chanakya.

Introduction
The role of ‘Small and Medium Enterprises’ (SMEs) in the economic and social development of the country is well established. The SME sector is often driven by individual creativity and innovation. Contributing 8% of the country’s GDP, 45% of its manufactured output, and 40% of its exports, SMEs provide employment to about 60 million people through over 26 million enterprises producing over six thousand products.
It would be an understatement to say that the SME sector in India is highly heterogeneous in terms of the size of the enterprises, the variety of products and services produced, and the levels of technology employed. With the SME sector booming across the country, these enterprises are contributing significantly to local, state, regional and national growth and feeding into India’s objectives of inclusive, sustainable development.
As the digital economy expands, SMEs cannot be left behind and must integrate online in order to grow and prosper. This development is not without risks: cybersecurity concerns and digital threats like misinformation are fast becoming a pressing pain point for the SME sector. The unique challenge cyber threats pose to SMEs is that while the negative consequences of digital risks are just as damaging for SMEs as for larger industries, SMEs’ ability to counter these threats is not on par, owing to the limited resources at their disposal. The rapid development of emerging technologies like artificial intelligence makes it easier for malicious actors to create bots, deepfakes, and other forms of manipulated content that can steer customers away from small businesses, and the consequences can be devastating.
Misinformation is the sharing of inaccurate and misleading information, and the act can be both deliberate and unintentional. Malicious actors can use fake reviews, rumours, or false images to promote negative content or create backlash against a business’ brand and reputation. For a fledgling or growing enterprise, its credibility is a critical asset and any threat to the same is as much a cause for concern as any other operational hindrance.
Relationship Building to Counter Misinformation
We live in a world that is dominated by brands. A brand should ideally inspire trust: it is the single most powerful and unifying characteristic that embodies an organisation’s culture and values and, once well established, can create incremental value. Businesses have reported industry rumours in which misinformation devalued a product, sowed mistrust among customers, and negatively impacted revenue. Mitigating strategies to counter these digital downsides include greater due diligence and basic cyber hygiene practices, such as two-factor or multi-factor authentication, as well as openly sharing one’s experiences within larger professional and business networks.
The loss of customer trust can be fatal for a business, and for an SME, access to the scale of digital and other resources required to restore its reputation may simply not be feasible. Your brand story is not just the sales pitch you give to customers and investors; it also encompasses larger qualitative factors, such as your own motivation for starting the enterprise and the emotional connection your audience base enjoys with your organisation. The brand story is a mosaic of tangible and intangible elements that together determine how the brand is perceived by its various stakeholders. Building a compelling, fortified brand story that resonates deeply with people is an important step in developing a robust reputation. It can help inoculate against many forms of misinformation and malicious attempts and ensure that customers continue to place their faith in the brand despite attempts to damage this relationship.
Engaging with the target audience, i.e., the customer base, is both an effective marketing tool and a misinformation-inoculation strategy. SMEs should also continuously assess their strategies, adapt to market changes, and remain agile in their approach to stay competitive and relevant in today’s dynamic business environment. These strategies drive greater customer engagement through feedback, reviews, and surveys, which help build trust and loyalty. Innovative, dynamic customer service further deepens this engagement.
Crisis Management and Response
Having a crisis management strategy is an important practice for all SMEs and should be mandated as a matter of policy. Businesses need greater due diligence and basic cyber hygiene practices, such as two-factor authentication, essential compliances, strong password protocols, and transparent disclosure (a minimal sketch of the one-time-password mechanism behind most two-factor authentication apps follows below).
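To make the two-factor authentication point concrete, here is a minimal sketch of the time-based one-time password (TOTP) mechanism that most authenticator apps implement (RFC 6238, built on the HOTP construction of RFC 4226). The secret shown is a hypothetical placeholder, and a production system should rely on a vetted authentication library rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password for a given counter value."""
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_totp(secret_b32: str, submitted: str, step: int = 30) -> bool:
    """RFC 6238 TOTP check; tolerates one time step of clock drift either side."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = int(time.time())
    return any(
        hmac.compare_digest(hotp(key, (now + drift) // step), submitted)
        for drift in (-step, 0, step)
    )

# Usage: the shared secret is normally provisioned once, e.g. via a QR code.
SECRET = "JBSWY3DPEHPK3PXP"  # hypothetical placeholder secret
current = hotp(base64.b32decode(SECRET), int(time.time()) // 30)
print(verify_totp(SECRET, current))  # True
```

Even this simple second factor means that a password phished through a convincing piece of misinformation is not, by itself, enough to take over an account.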
The following steps should form part of a crisis management and response strategy:
- Assessing the damage by identifying the misinformation spread and its impact is the first step.
- Issuing a response in the form of a public statement by engaging the media should precede legal action.
- Two levels of communication need to take place in response to a misinformation attack. The first tier is internal: employees should be briefed on the implications of the incident and the organisation’s response plan. The second is external, aimed at customers via direct outreach to clarify the situation and provide accurate information about the matter. If required, employees can be trained to handle customer enquiries regarding the misinformation.
- The digital engagement of the enterprise should be promptly updated and social media platforms and online communications must address the issue and provide clarity and factual information.
- Immediate action must include a plan to rebuild reputation and trust by assuring customers of the high quality of products and services. Management should seek customer feedback and show commitment to improving processes and transparency. Sharing positive testimonials and stories from satisfied customers can also help at this stage.
- Engaging with the community and collaborating with organisations is also an important part of crisis management.
While these steps are for rebuilding and crisis management, further steps also need to be taken:
- Monitoring customer sentiment and gauging the effectiveness of the efforts taken are also necessary. If required, strategic adjustments can be made in response to evolving circumstances.
- Depending on the severity of the impact, management may choose to engage the professional help of PR consultants and crisis management experts to develop comprehensive recovery plans and help navigate the situation.
- A long-term strategy which focuses on building resilience against future attacks is important. Along with this, engaging in transparency and proactive communication with stakeholders is a must.
Legal and Ethical Considerations
SME administrators must prioritise ethical market practices and appreciate that SMEs are subject to laws dealing with defamation, intellectual property rights (trademark and copyright infringement in particular), data protection and privacy, and consumer protection. Knowing these laws, and ensuring that the rights of other enterprises and their consumers are not infringed, is integral to continuing to do business legally.
Ethical and transparent business conduct includes clear and honest communication and proactive public redressal mechanisms in the event of misinformation or mistakes. These efforts go a long way towards building trust and accountability.
Proactive public engagement is an important step in building relationships. SMEs can engage with the community where they conduct their business through outreach programs and social media engagement. Efforts to counter misinformation through public education campaigns that alert customers and other stakeholders about misinformation serve the dual purpose of countering misinformation and creating deep community ties. SME administrators should monitor content and developments in their markets and sectors to ensure that their marketing practices are ethical and not creating or spreading misinformation, be it in the form of active sensationalising of existing content or passive dissemination of misinformation created by others. Fact-checking tools and expert consultations can help address and prevent a myriad of problems and should be incorporated into everyday operations.
Conclusion
In today’s digital landscape, misinformation is pervasive and trust has become a cornerstone of successful business operations; indeed, trust is the bedrock of a resilient SME. Developing strong cybersecurity protocols, practising basic digital hygiene, and ensuring regulatory compliance are crucial if a business is not only to survive but to thrive. A crisis management plan and trust-building, together with ethical business and legal practices, go a long way towards securing the future of SMEs. By implementing and continuously improving trust-building efforts, businesses can navigate the challenges of misinformation and create lasting value for their customers and stakeholders. Prioritising trust ensures long-term growth and sustainability in an ever-evolving digital landscape.
References
- https://msme.gov.in/sites/default/files/MSME-Strategic-Action-Plan.pdf
- https://carnegieendowment.org/research/2024/01/countering-disinformation-effectively-an-evidence-based-policy-guide?lang=en
- https://dcmsme.gov.in/Report%20of%20Expert%20Committee%20on%20MSMEs%20-%20The%20U%20K%20Sinha%20Committee%20constitutes%20by%20RBI.pdf

Executive Summary:
Traditional Business Email Compromise (BEC) attacks have become smarter, using advanced technologies to enhance their effectiveness. One such technology on the rise is WormGPT, a generative AI tool being leveraged by cybercriminals for BEC. This research discusses WormGPT, its features, and the risks associated with its use in criminal activities. The purpose is to give a general overview of how WormGPT is involved in BEC attacks and to offer advice on how to prevent them.
Introduction
Business Email Compromise (BEC) is, in simple terms, a kind of cybercrime in which attackers target a business and attempt to defraud it through email. Early BEC attacks were executed through simple email scams and phishing. However, with the advancement of AI tools like WormGPT, such malicious activity has become sophisticated and difficult to identify. This paper discusses WormGPT, a generative AI tool, and how it is used in BEC attacks to make them more effective.
What is WormGPT?
Definition and Overview
WormGPT is a generative AI model designed to create human-like text. It is built on advanced machine learning algorithms, specifically leveraging large language models (LLMs). These models are trained on vast amounts of text data to generate coherent and contextually relevant content. WormGPT is notable for its ability to produce highly convincing and personalised email content, making it a potent tool in the hands of cybercriminals.
How WormGPT Works
1. Training Data: WormGPT is trained on a wide array of datasets, such as emails, articles, and other written material. This extensive training enables it to understand and mimic different writing styles and to produce natural-sounding text.
2. Generative Capabilities: Once trained, WormGPT can generate text in response to specific prompts. For example, if a cybercriminal supplies a prompt concerning a company’s financial information, WormGPT can produce what appears to be a genuine email requesting further details.
3. Customization: WormGPT can be fine-tuned at any time with a particular industry or organisation in mind. This customization enables attackers to make their emails resemble the target’s business activities, enhancing the chances that an attack will succeed.
Enhanced Phishing Techniques
Traditional phishing emails are often identifiable by their generic and unconvincing content. WormGPT improves upon this by generating highly personalised and contextually accurate emails. This personalization makes it harder for recipients to identify malicious intent.
Automation of Email Crafting
Previously, creating convincing phishing emails required significant manual effort. WormGPT automates this process, allowing attackers to generate large volumes of realistic emails quickly. This automation increases the scale and frequency of BEC attacks.
Exploitation of Contextual Information
WormGPT can be fed with contextual information about the target, such as recent company news or employee details. This capability enables the generation of emails that appear highly relevant and urgent, further deceiving recipients into taking harmful actions.
Implications for Cybersecurity
Challenges in Detection
The use of WormGPT complicates the detection of BEC attacks. Traditional email security solutions may struggle to identify malicious emails generated by advanced AI, as they can closely mimic legitimate correspondence. This necessitates the development of more sophisticated detection mechanisms.
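As one hedged illustration of what such a mechanism could look like: text produced by large language models tends to be statistically more predictable than human writing, so an unusually low perplexity under a reference model can serve as one weak signal among many. The sketch below assumes the Hugging Face transformers library and the small GPT-2 model; the threshold is arbitrary and illustrative, and perplexity alone is known to be an unreliable detector.

```python
# pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more predictable text."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

SUSPICION_THRESHOLD = 25.0  # illustrative cut-off; tune on labelled mail data

def low_perplexity_flag(email_body: str) -> bool:
    """Flag (not block) message bodies whose text looks unusually machine-like."""
    return perplexity(email_body) < SUSPICION_THRESHOLD

print(perplexity("Kindly process the attached wire transfer before end of day."))
```

In practice such a score would be combined with sender reputation, header authentication, and behavioural signals rather than used on its own.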
Need for Enhanced Training
Organisations must invest in training their employees to recognize signs of BEC attacks. Awareness programs should emphasise the importance of verifying email requests for sensitive information, especially when such requests come from unfamiliar or unexpected sources.
Implementation of Robust Security Measures
- Multi-Factor Authentication (MFA): MFA can add an additional layer of security, making it harder for attackers to gain unauthorised access even if they successfully deceive an employee.
- Email Filtering Solutions: Advanced email filtering solutions that use AI and machine learning to detect anomalies and suspicious patterns can help identify and block malicious emails (a toy illustration follows after this list).
- Regular Security Audits: Conducting regular security audits can help identify vulnerabilities and ensure that security measures are up to date.
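As a toy illustration of the filtering idea referenced above, the sketch below trains a classical supervised text classifier on a handful of hypothetical labelled messages. A real deployment would train on thousands of examples and combine this score with header, reputation, and behavioural signals.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled training messages: 1 = BEC/phishing, 0 = legitimate.
emails = [
    "Urgent: wire $45,000 to the new vendor account before close of business",
    "CEO here, I need gift cards purchased discreetly today, keep this quiet",
    "Minutes from Tuesday's project sync are attached for review",
    "Reminder: the quarterly all-hands meeting is at 3pm on Friday",
]
labels = [1, 1, 0, 0]

# TF-IDF features over word unigrams and bigrams feeding a logistic regression.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(emails, labels)

incoming = "Please transfer the funds urgently, I will explain later"
score = classifier.predict_proba([incoming])[0][1]
print(f"Phishing probability: {score:.2f}")  # quarantine above a tuned threshold
```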
Case Studies
Case Study 1: Financial Institution
A financial institution fell victim to a BEC attack orchestrated using WormGPT. The attacker used the tool to craft a convincing email that appeared to come from the institution’s CEO, requesting a large wire transfer. The email’s convincing nature led to the transfer of funds before the scam was discovered.
Case Study 2: Manufacturing Company
In another instance, a manufacturing company was targeted by a BEC attack using WormGPT. The attacker generated emails that appeared to come from a key supplier, requesting sensitive business information. The attack exploited the company’s lack of awareness about BEC threats, resulting in a significant data breach.
Recommendations for Mitigation
- Strengthen Email Security Protocols: Implement advanced email security solutions that incorporate AI-driven threat detection.
- Promote Cyber Hygiene: Educate employees on recognizing phishing attempts and practising safe email habits.
- Invest in AI for Defense: Explore the use of AI and machine learning in developing defences against generative AI-driven attacks.
- Implement Verification Procedures: Establish procedures for verifying the authenticity of sensitive requests, especially those received via email (one automatable check is sketched below).
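One automatable piece of such verification, sketched below on a hypothetical message, is reading the SPF/DKIM/DMARC verdicts that the receiving mail server records in the Authentication-Results header (RFC 8601). A failed DKIM or DMARC verdict on a payment request is a strong cue to verify the request out of band, for example by phoning a number already on file.

```python
import email
from email import policy

def authentication_summary(raw_message: bytes) -> dict:
    """Extract SPF/DKIM/DMARC verdicts from the Authentication-Results
    header stamped by the receiving mail server (RFC 8601)."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    results = str(msg.get("Authentication-Results", ""))
    verdicts = {}
    for clause in results.split(";"):
        clause = clause.strip()
        for mech in ("spf", "dkim", "dmarc"):
            if clause.startswith(mech + "="):
                verdicts[mech] = clause.split("=", 1)[1].split()[0]
    return verdicts

# Hypothetical raw message for illustration.
raw = (
    b"From: ceo@example.com\r\n"
    b"Authentication-Results: mx.example.net;"
    b" spf=pass smtp.mailfrom=example.com;"
    b" dkim=fail header.d=example.com;"
    b" dmarc=fail header.from=example.com\r\n"
    b"Subject: Urgent wire transfer\r\n\r\n"
    b"Please send the funds today."
)
print(authentication_summary(raw))  # {'spf': 'pass', 'dkim': 'fail', 'dmarc': 'fail'}
```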
Conclusion
WormGPT is a new tool in the arsenal of cybercriminals, one that lets them carry out Business Email Compromise attacks more effectively and at greater scale. It is therefore critical to equip the defence community with information about WormGPT’s capabilities and its implications for the threat landscape, so that protection systems can be strengthened against advanced and constantly evolving threats.
This means developing rigorous security protocols, raising general awareness of security risks and solutions, and incorporating technologies such as artificial intelligence to mitigate, to the greatest extent possible, the risks posed by generative AI tools.

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration carry risks. Consider this response from Google’s AI chatbot, Gemini, to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they are dangerous. They underscore the urgent need to address the risks of unrestrained AI reliance.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fueled by OpenAI’s GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
Nonsensical AI responses range from harmless errors, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
AI algorithms sometimes produce outputs that are not grounded in training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. It can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses.
A significant source of hallucination in machine learning algorithms is bias in the input the model receives. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect those biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate the output of an AI model by tweaking the input data in a subtle manner.
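To make the adversarial-attack point concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), a classic attack in which each input value is nudged by a small epsilon in whichever direction increases the model’s loss. The untrained toy classifier and random “image” are stand-ins for illustration only; the same perturbation applied to a trained vision model can flip its prediction while remaining imperceptible to a human.

```python
# pip install torch
import torch
import torch.nn.functional as F

def fgsm(model, x, label, epsilon=0.03):
    """Fast Gradient Sign Method: perturb each input element by +/-epsilon
    in the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Toy demonstration: an untrained linear classifier on a random "image".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # stand-in for a real input image
label = torch.tensor([3])      # stand-in ground-truth class
x_adv = fgsm(model, x, label)
print("prediction before:", model(x).argmax().item(),
      "prediction after:", model(x_adv).argmax().item())  # may flip despite a tiny change
```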
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards through a risk-based, transparency-focused approach. The U.S. emphasises ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards, reflecting the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability, and adopting a few simple measures can create an ecosystem where AI develops responsibly while minimising the societal risks it poses. Key measures to achieve this include:
- Ensure that users are informed about AI and its capabilities and limitations. Transparent communication is the key to this.
- Implement regular audits and rigorous quality checks to maintain high standards. This will in turn prevent any form of lapses.
- Establish robust liability mechanisms to address harms caused by AI-generated material, such as misinformation. This fosters trust and accountability.
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth of AI development offers immense opportunities, but it must proceed responsibly. Overregulation of AI can stifle innovation; being lax, on the other hand, could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential, and collaboration between stakeholders such as governments, academia, and the private sector is important: together they can establish guidelines, promote transparency, and create liability mechanisms. Regular audits and user education can build trust in AI systems. Furthermore, policymakers need to prioritise user safety and trust without hindering creativity when making regulatory policy.
By fostering ethical AI development and enabling innovation, we can create a future where AI benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21