#FactCheck - Fake Navbharat Times Graphic Falsely Attributes Casteist Remark to Devkinandan Thakur
A news graphic bearing the Navbharat Times logo is being widely circulated on social media. The graphic claims that religious preacher Devkinandan Thakur made an extremely offensive, casteist remark targeting the ‘Shudra’ community, and users are sharing it as though the statement were genuine. Cyber Peace Foundation’s research and verification found the claim misleading: the viral news graphic is completely fake, and Devkinandan Thakur made no such casteist statement.
Claim
A viral news graphic claims that Devkinandan Thakur made a derogatory, caste-based statement about Shudras. On 17 January 2026, an Instagram user shared the viral graphic with the caption, “This is probably the formula of Ram Rajya.” The text on the graphic reads: “People of Shudra castes reproduce through sexual intercourse, whereas Brahmins give birth to children after marriage through the power of their mantras, without intercourse.” The graphic also carries Devkinandan Thakur’s photograph and identifies him as a ‘Kathavachak’ (religious storyteller).

Fact Check
To verify the claim, we first searched for relevant keywords on Google. However, no credible or verified media reports were found supporting the claim. In the next stage of verification, we found a post published by NBT Hindi News (Navbharat Times) on X (formerly Twitter) on 17 January 2026, in which the organisation explicitly debunked the viral graphic. Navbharat Times clarified that the graphic circulating online was fake and also shared the original and authentic post related to the news.

Further research led us to Devkinandan Thakur’s official Facebook account, where he posted a clarification on 17 January 2026. In his post, he stated that anti-social elements are creating fake ‘Sanatani’ profiles and spreading false news, misusing the names of reputed media houses and platforms to mislead and divide people. He described the viral content as part of a deliberate conspiracy and fake agenda aimed at weakening unity. He also warned that AI-generated fake videos and fabricated statements are increasingly being used to create confusion, mistrust and division.
Devkinandan Thakur urged people not to believe or share any post, news or video without verification, and advised checking information through official websites, verified social media accounts or trusted sources.

Conclusion
The viral news graphic attributing a casteist statement to Devkinandan Thakur is completely fake. Devkinandan Thakur did not make the alleged remark, and the graphic circulating with the Navbharat Times logo is fabricated.

Introduction
We stand at the edge of a reality once confined to science fiction: a world where the very creations designed to serve us could redefine what it means to be human, rewriting the paradigm we built them in. The increasing presence of robotics and embodied AI systems in everyday life and cyber-physical settings draws attention to a complicated network of issues at the intersection of cybersecurity, human-robot trust, and robotic safety. Robotics can no longer be dismissed as a novelty or a hobbyist’s pursuit; it has grown into a force entering areas of human life that are private and have historically been reserved for human connection and care. At a time when technological prowess determines global influence, countries can no longer afford to fall behind. Techno-sovereignty is the new development currency of the 21st century: a nation must be able both to innovate and to incorporate robotics, artificial intelligence, and allied technologies.
Entering the Robotic Renaissance
The recent unveiling of a humanoid “pregnancy robot” presents the next frontier in reproductive robotics, garnering both criticism and support. Bold as the innovation is, it presents unavoidable cybersecurity, privacy, and ethical conundrums. The humanoid is being developed by Kaiwa Technology under the direction of Dr. Zhang Qifeng, who is also affiliated with Nanyang Technological University. According to a report by ECNS, he presented his idea for a robotic surrogate that could carry a child through a full-term pregnancy at the 2025 World Robot Conference in Beijing. While the technology is indubitably groundbreaking, it raises serious ethical and moral concerns, as well as legal ones, since surrogacy is banned in China.
Alongside the objections raised by doctors and by feminists who warn against the devaluation and pathologising of pregnancy, the technology raises serious cybersecurity concerns, given the intimate, interpersonal territory into which robotics is now moving. Pregnancy is inherently intimate, and our understanding of bodily autonomy blurs once it moves into the realm of machinery. From artificial amniotic fluid sensors to embryo data, every layer of this technology becomes a possible attack vector. Robots with artificial wombs are essentially IoT-powered medical systems. As research from the Department of Computer Science and Engineering, Cornell University, puts it, “our lives have been made easier by the incorporation of AI into robotics systems, but there is a significant drawback as well: these systems are susceptible to security breaches. Malicious actors may take advantage of the data, algorithms, and physical components that make up AI-Robotics systems, which can cast a debilitating impact.”
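To make the attack-surface point concrete, here is a minimal sketch, entirely ours and not drawn from the cited survey, of one basic safeguard such a system would need: authenticating each sensor reading with an HMAC so that tampering in transit is detectable. The key handling, field names, and message format are hypothetical.

```python
# Illustrative only: authenticating a hypothetical amniotic-sensor reading
# with HMAC-SHA256 so a receiver can detect tampering in transit. The key,
# field names and transport are assumptions, not a real device protocol.
import hmac, hashlib, json

SHARED_KEY = b"hypothetical-device-key"  # in practice: provisioned per device

def sign_reading(reading: dict) -> dict:
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": reading, "tag": tag}

def verify_reading(message: dict) -> bool:
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"sensor": "amniotic_fluid_temp", "celsius": 37.1})
assert verify_reading(msg)  # True; altering any byte makes this fail
```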
The Robotic Pivot: The Market’s Greatest Disruption
The humanoid “pregnancy robot” is not the only robotic innovation poised to shake up the industry. Amid escalating trade wars, China is pushing the boundaries, stepping up efforts in sectors where it has both the capacity and the necessity to advance ahead of the US. China’s leaders see AI as a source of national pride, a means of enhancing military might, and an answer to long-standing Western technological dominance. The proof lies in Beijing hosting the first World Humanoid Robot Games, reflecting China’s dual goals: showcasing its technological prowess as it moves closer to dominance in artificial intelligence applied to robotics, and bringing people closer to machines that will eventually play a bigger role in daily life and the economy.
Despite China’s prominence, it is not the only country that sees potential in AI-enabled robotics. Indian Space Research Organisation chairman V Narayanan announced that the Gaganyaan programme’s first uncrewed mission, G1, would launch in December carrying the humanoid robot Vyommitra.
Conclusion
The emergence of robotics holds both great potential and significant obstacles. Robots could revolutionise accessibility and efficiency in a variety of fields, including healthcare and space exploration, but only if human trust, ethics, and cybersecurity keep pace with technological advancement. For India this is not a far-flung issue; it is a pressing appeal to lead responsibly in a world where technological sovereignty is equivalent to world power.
References
- https://nurse.org/news/pregnancy-robot-artificial-womb-china/
- https://timesofindia.indiatimes.com/life-style/health-fitness/health-news/chinas-2026-humanoid-robot-pregnancy-with-artificial-womb-a-revolutionary-leap-in-reproductive-technology/articleshow/123357813.cms
- https://arxiv.org/pdf/2310.08565
- https://www.theguardian.com/world/2025/apr/21/humanoid-workers-and-surveillance-buggies-embodied-ai-is-reshaping-daily-life-in-china
- https://english.elpais.com/technology/2025-08-21/china-stages-first-robot-olympics-to-showcase-its-tech-ambition.html
- https://www.tribuneindia.com/news/india/1st-non-crew-gaganyaan-mission-to-launch-in-dec-with-robot-vyommitra/
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to the spread of misinformation in India. False stories travel quickly and can cause significant harm, from political propaganda to health-related mis- and disinformation. Programs that teach people to use social media responsibly and to check facts are essential, but they do not always engage people deeply. Traditional media literacy programs rely on passive learning: reading stories, attending lectures, and using fact-checking tools.
Adding game-like features to non-game settings, known as “gamification”, offers a new and engaging way to address this challenge. Gamification engages people by making them active players rather than passive consumers of information. Research shows that interactive learning improves interest, critical thinking, and retention. By turning fact-checking into a game, people can learn to recognise fake news in a safe setting before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people’s capacity to recognise and resist false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread in regional languages and across diverse cultural contexts. AI-driven voice interaction and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information, interactive quizzes and chatbot-powered games can educate users directly within the app they use most frequently. A game with a WhatsApp-like interface, in which players face realistic decisions about whether to ignore, fact-check, or forward viral messages, could be especially helpful in India; a minimal sketch of such a game loop follows.
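The console prototype below is our illustration of that mechanic, not an existing app; the messages and their truth labels are invented examples.

```python
# A minimal sketch of the forward / fact-check / ignore game loop described
# above, as a console prototype of a WhatsApp-style chatbot quiz. Messages
# and their labels are invented examples for illustration.
MESSAGES = [
    ("Forward to 10 groups: bank notes with chips can be tracked!", False),
    ("The election commission has published the official polling dates.", True),
]

def play() -> None:
    score = 0
    for text, is_true in MESSAGES:
        print(f"\nNew message: {text}")
        choice = input("forward / check / ignore? ").strip().lower()
        # Checking before forwarding is always safe; forwarding a false
        # message is the failure mode the game trains players to avoid.
        if choice == "check" or (choice == "forward" and is_true):
            score += 1
            print("Good call!")
        elif choice == "forward" and not is_true:
            print("That message was false - it just went viral.")
    print(f"\nFinal score: {score}/{len(MESSAGES)}")

if __name__ == "__main__":
    play()
```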
- Detecting False Information
As part of a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify viral stories using real-world tools such as reverse image searches and reliable fact-checking websites; the sketch below illustrates the image-matching mechanic. Research shows that interactive fake-news detection tasks increase long-term awareness (Lewandowsky et al., 2017).
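One way the “reverse picture search” step could work inside such a game is to compare a viral image against a small library of already-debunked images using perceptual hashes. The sketch assumes the `Pillow` and `imagehash` packages; the file paths and the debunked set are placeholders.

```python
# Sketch of the in-game reverse-image-check mechanic: a viral image is
# compared against hashes of images fact-checkers have already debunked.
from PIL import Image
import imagehash

# Perceptual hashes of known debunked images (hypothetical placeholder files).
DEBUNKED = {imagehash.phash(Image.open(p)) for p in ["debunked1.jpg", "debunked2.jpg"]}

def looks_debunked(path: str, max_distance: int = 8) -> bool:
    """True if the image is perceptually close to a known debunked image."""
    h = imagehash.phash(Image.open(path))
    # Subtraction gives the Hamming distance between 64-bit pHashes;
    # a small distance means a near-duplicate (crops/recompressions included).
    return any(h - known <= max_distance for known in DEBUNKED)

print(looks_debunked("viral_photo.jpg"))
```

In a real game, the hash library would come from fact-checkers’ archives, and a near-match would show the player the original debunk.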
- Reward-Based Participation
Participation could be increased by rewarding players who complete misinformation challenges with badges, certificates, or even mobile-data incentives, which partnerships with telecom providers could make feasible. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) find that students engage more and retain more when learning includes competitive and interactive elements. Misinformation games can be built into media studies curricula at schools and universities, using simulations to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns exploit.
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor the learning experience to each player in misinformation games. AI-powered misinformation-detection bots could guide participants through scenarios matched to their learning level, ensuring they are consistently challenged. Recent natural language processing (NLP) developments enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019); a toy sketch of such a classifier appears below. This could be especially helpful in India, where fake news spreads differently depending on language and region.
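As a toy illustration of that NLP component, the sketch below trains a TF-IDF plus logistic-regression model on a handful of invented headlines and uses the predicted probability as a difficulty signal; a real system would need a large labelled corpus in each target language.

```python
# Toy sketch: score headlines for "misinformation-likeness" so the game can
# pick quiz items matching a player's level. Training headlines are invented
# stand-ins for a real labelled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Miracle cure: drink this to beat the virus overnight",   # fake
    "Shocking! Government hiding the truth about vaccines",   # fake
    "Forward this or your account will be deleted",           # fake
    "Health ministry releases updated vaccination schedule",  # real
    "Central bank holds interest rates steady this quarter",  # real
    "New rail line opens between two major cities",           # real
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = likely misinformation

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# The probability doubles as a difficulty signal: borderline scores (~0.5)
# make harder quiz items than confidently classified ones.
print(model.predict_proba(["Shocking miracle cure the government is hiding"])[0, 1])
```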
Possible Opportunities
Augmented reality (AR) misinformation scavenger hunts, interactive misinformation events, and educational misinformation tournaments are all game formats that can help fight misinformation. By making media literacy fun and interesting, India can help millions, especially young people, think critically and combat the spread of false information. Using Artificial Intelligence (AI) in gamified interventions against misinformation could be a fascinating area of future study: AI-powered bots could simulate misinformation cases in real time and give immediate feedback, deepening learning.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it also raises challenges that must be considered:
- Ethical Concerns: Games that try to imitate how fake news spreads must ensure players do not learn how to spread false information by accident.
- Scalability: Although worldwide misinformation initiatives exist, developing and expanding localised versions for India's varied linguistic and cultural contexts presents significant challenges.
- Assessing Impact: Rigorous research methods are needed to evaluate how effectively gamified interventions change misinformation-related behaviour, keeping cultural and socio-economic contexts in the picture.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.

Executive Summary:
In early 2024, an AI-driven phishing attack on a large Indian financial institution illustrated the threats posed by fast-evolving AI technologies. This case study documents the attack techniques, the impact on the institution, the response mounted, and the eventual outcome. It also examines the challenges of building stronger defences and raising awareness of AI-automated threats.
Introduction
As AI technology advances, its use in cybercrime worldwide has become especially significant for financial institutions. This report analyses a serious incident from early 2024, in which a leading Indian bank was hit by a highly sophisticated AI-supported phishing operation. The attack exploited AI's strengths in data analysis and persuasion, leading to a severe compromise of the bank's internal systems.
Background
The targeted financial institution, one of the largest banks in India, had a strong track record on cybersecurity. However, AI-based attack methods posed new threats that its existing defences could not fully counter. The attackers concentrated on the bank's top executives, since compromising such individuals opens access to internal systems and financial information.
Attack Execution
The attackers used AI to craft messages that were near-exact replicas of internal correspondence between employees. Drawing on the executives' Facebook and Twitter content, blog entries, LinkedIn connection histories, and email style, the AI produced highly specific emails with official formatting, internal terminology, and the CEO's writing style, making them extremely convincing.
The phishing emails contained links leading to a counterfeit internal portal designed to harvest login credentials. Because the emails were so convincing, the targeted individuals believed them to be genuine and entered their login details, giving the attackers access to the bank's network.
Impact
The impact on the bank was severe. Numerous executives surrendered their passwords to the fake emails, compromising several financial databases containing customer account and transaction information. The breach allowed the criminals to disrupt a number of the bank's online services, affecting its operations and its customers for several days.
The bank also suffered a devastating blow to customer trust, as the breach exposed its weakness against contemporary cyber threats. Beyond the immediate work of containing the breach, the institution was grappling with a long-term reputational hit.
Technical Analysis and Findings
1. AI Techniques Used to Generate the Phishing Emails
- The attack used powerful NLP technology, most probably built on a large-scale transformer model such as GPT (Generative Pre-trained Transformer). Because such models are trained on large data samples, the attackers could feed them conversation samples from social networks, emails, and workplace correspondence to produce highly credible messages.
Key Technical Features:
- Contextual Understanding: The AI took prior interactions into account, writing follow-up emails that were perfectly consistent with earlier discourse.
- Style Mimicry: Trained on samples of the CEO's emails, the AI reproduced his tone, vocabulary, and signature-line formatting.
- Adaptive Learning: The AI adjusted subsequent emails based on the failures and feedback of earlier attempts, making detection progressively harder.
2. Sophisticated Spear-Phishing Techniques
Unlike ordinary phishing scams, this attack used spear-phishing, with emails aimed directly at specific individuals. The AI applied social engineering techniques, driven by machine-learning algorithms, that significantly increased the chances of particular individuals responding to particular emails.
Key Technical Features:
- Targeted Data Harvesting: Automated agents identified the organisation's employees and scraped their public profiles and messaging activity to build targeted messages.
- Behavioural Analysis: The AI used recent behaviour patterns from social networks and other online platforms to predict the actions end users were likely to take, such as clicking links or opening attachments.
- Real-Time Adjustments: Whenever a response to a phishing email was detected, the AI adjusted the timing and content of subsequent emails accordingly.
3. Advanced Evasion Techniques
The attackers also leveraged AI to evade standard email filters. These techniques modified the contents of the emails so that spam filters would not easily flag them, while the substance of each message was preserved.
Key Technical Features:
- Dynamic Content Alteration: The AI made slight variations to different aspects of each email, producing multiple versions of the phishing message designed to defeat different filtering algorithms.
- Polymorphic Attacks: The attack used polymorphic code, meaning the actual payloads behind the links changed frequently, making it difficult for antivirus tools to recognise and block them as threats.
- Phantom Domains: AI was also used to generate and deploy phantom domains, short-lived websites that appear legitimate but were created specifically for this phishing attack, adding to the difficulty of detection.
4. Exploitation of Human Vulnerabilities
The attack's success rested not only on AI but also on human vulnerabilities: trust in familiar language and the tendency to defer to authority.
Key Technical Features:
- Social Engineering: The AI identified the psychological principles, chiefly urgency and familiarity, most likely to persuade the targeted recipients to open the phishing emails.
- Multi-Layered Deception: The AI ran a two-tiered email campaign; once a target opened the first email, a second followed under the pretext of being a follow-up from a genuine company or person.
Response
On detecting the breach, the bank's cybersecurity personnel sprang into action to limit the fallout. They reported the matter to the Indian Computer Emergency Response Team (CERT-In) to trace the attack's origin and block further intrusions. The bank also immediately moved to strengthen its security, for instance by tightening email filtering and hardening authentication procedures.
Recognising the risks, the bank also launched a new, organisation-wide cybersecurity awareness programme. It focused on alerting employees to AI-driven phishing within the organisation's information space and on the necessity of verifying a sender's identity before acting on a message; a minimal sketch of one such verification step follows.
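As one concrete example of sender verification, the sketch below (our illustration, not the bank's actual tooling) checks whether the domain in a message's From header publishes a DMARC policy, using the `dnspython` package; the domain is a placeholder.

```python
# A minimal sketch of one sender-verification step: checking whether a
# sender's domain publishes a DMARC policy. Requires `dnspython`.
import dns.resolver

def dmarc_policy(domain: str) -> str | None:
    """Return the raw DMARC TXT record for `domain`, or None if absent."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        txt = b"".join(record.strings).decode()
        if txt.startswith("v=DMARC1"):
            return txt
    return None

if __name__ == "__main__":
    # Placeholder domain; a record containing p=reject tells receiving
    # servers to discard spoofed mail outright.
    print(dmarc_policy("example.com"))
```

A domain publishing `p=reject` instructs receiving servers to drop spoofed mail, which directly counters the CEO-impersonation tactic used in this attack.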
Outcome
Although the bank regained full functionality after the attack without critical long-term damage to its operations, serious issues were raised. The institution reported losses in the form of compensation to affected customers and the cost of upgraded cybersecurity measures. More critically, the incident led customers and shareholders to doubt the organisation's capacity to safeguard information in an era of advanced AI-driven cyber threats.
This case shows how important it is for financial firms to align their security plans with emerging threats. The attack is also a warning to other organisations: none is immune to AI-assisted attacks, and all should take appropriate countermeasures.
Conclusion
The 2024 AI-phishing attack on an Indian bank is a clear indicator of modern attackers' capabilities. As AI technology progresses, so do the cyberattacks built on it. Financial institutions and other organisations will only be as safe as the AI-aware cybersecurity solutions they adopt for their systems and data.
The case also underlines the importance of training employees to be properly prepared against cyberattacks. Organisation-wide cybersecurity awareness, secure employee behaviour, and practices that enable staff to recognise and report likely AI-enabled attacks all help minimise the risks of an AI attack.
Recommendations
- Enhanced AI-Based Defences: Financial institutions should deploy AI-driven detection and response products capable of countering AI-enabled cyber threats in real time; a minimal triage sketch follows this list.
- Employee Training Programs: All employees should undergo frequent cybersecurity awareness training, including how to identify AI-generated phishing.
- Stricter Authentication Protocols: Access to sensitive accounts should require multi-factor authentication and tightened identity checks.
- Collaboration with CERT-In: Continued engagement and coordination with authorities such as the Indian Computer Emergency Response Team (CERT-In) and its equivalents, to monitor new threats and act on validated recommendations.
- Public Communication Strategies: Effective communication plans should be established to keep customers informed and maintain their trust even while an organisation is facing a cyber threat.
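To illustrate the detection recommendation, here is a deliberately simple rule-based triage sketch of the kind an AI-driven pipeline might layer richer models on top of. The keywords, weights, threshold, and the `bank.example` domain are assumptions for demonstration only:

```python
# Illustrative rule-based phishing triage: a crude first-pass scorer that a
# real AI-driven pipeline would extend with learned models. All signals and
# weights here are invented for demonstration.
import re

URGENCY_TERMS = ("urgent", "immediately", "verify your account", "suspended")

def phishing_score(sender: str, subject: str, body: str) -> float:
    """Return a crude 0..1 risk score from a few common phishing signals."""
    score = 0.0
    text = f"{subject} {body}".lower()
    # Signal 1: urgency and credential-harvesting language.
    score += 0.2 * sum(term in text for term in URGENCY_TERMS)
    # Signal 2: links pointing at raw IP addresses.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 0.3
    # Signal 3: executive display name not matching the corporate domain
    # (naive check; bank.example is a placeholder domain).
    if "ceo" in sender.lower() and not sender.lower().endswith("@bank.example"):
        score += 0.3
    return min(score, 1.0)

# Usage: anything above ~0.5 would be routed to quarantine for human review.
print(phishing_score("CEO <ceo@bank-example.support>",
                     "Urgent: verify your account",
                     "Click http://192.0.2.7/login immediately"))
```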
By implementing these measures, financial institutions can better prepare for the new threats that AI-enabled attacks pose to critical financial assets in today's complex IT environments.