#FactCheck - Viral Postcard Attributing Fake UGC Statement to Keshav Prasad Maurya Is False
Executive Summary
A postcard claiming that Uttar Pradesh Deputy Chief Minister Keshav Prasad Maurya commented on the Supreme Court’s stay on the new UGC regulations is being widely shared on social media. The viral postcard suggests that Maurya stated the Modi government would “fight till its last breath” to implement the UGC law and appealed to Dalit, backward and tribal communities to trust the government as their true well-wisher. However, research by CyberPeace has found that the viral postcard is fake. Keshav Prasad Maurya has not made any such statement.
Claim
A Facebook user shared the postcard with the caption: “Now read it yourself. Statement of Deputy CM Keshav Prasad Maurya — the Modi government will fight till its last breath to implement the UGC law. An appeal to Dalit, backward and tribal communities to trust the government, calling it their true well-wisher.”
(Archived version of the post available here.)

Fact Check:
During our research, we did not find any credible news reports mentioning such a statement by Deputy Chief Minister Keshav Prasad Maurya regarding the UGC regulations or the Supreme Court’s order. A closer examination of the viral postcard revealed several inconsistencies. Notably, the text on the postcard lacks proper punctuation, such as commas and full stops, which is unusual for professionally designed news graphics. The postcard carries the logo of Navbharat Times (NBT). However, when compared with genuine NBT postcards, the font style used in the viral image does not match NBT’s official design. We also traced the original NBT postcard that appears to have been edited to create the fake one. In the authentic postcard, shared by NBT on January 20, Keshav Prasad Maurya is quoted as saying: “Where the lotus has bloomed, it will continue to bloom, and where it has not, under the guidance of PM Modi and the leadership of Nitin Nabin, the lotus will bloom.”

The original statement was digitally altered, and a fabricated quote was inserted to create the viral postcard.
Conclusion
CyberPeace research clearly establishes that the viral postcard is fake. The original Navbharat Times postcard has been tampered with, and Keshav Prasad Maurya’s actual statement has been replaced with a fabricated quote, which is now being circulated with a misleading claim.
Related Blogs

The Rise of Tech Use Amongst Children
Technology today has become an invaluable resource for children: a means to research issues, stay informed about events, gather data, and share views and experiences with others. Technology is no longer limited to certain age groups or professions; children today are using it for learning and entertainment, engaging with their friends, online games and much more. With increased digital access, children are also exposed to online mis/disinformation and other forms of cybercrime, far more than their parents, caregivers, and educators were in their own childhoods, or even are today. Children are particularly vulnerable to mis/disinformation due to their still-evolving maturity and cognitive capacities. The innocence of the young is a major cause for concern when it comes to digital access, because children simply do not possess the discernment and caution required to navigate the Internet safely. They are active users of online resources, and their presence on social media is an important factor in social, political and civic engagement, but young people and children often lack the cognitive and emotional capacity needed to distinguish between reliable and unreliable information. As a result, they can be targets of mis/disinformation. A UNICEF survey in 10 countries [1] reveals that up to three-quarters of children reported feeling unable to judge the veracity of the information they encounter online.
Social media has become a crucial part of children's lives, with children spending a significant amount of time on digital platforms such as YouTube, Facebook, Instagram and more. These platforms act as sources of news, educational content, entertainment, peer communication and more. They host many kinds of content across a diverse range of subject matters, and each platform’s content and privacy policies differ. Despite age restrictions under the Children's Online Privacy Protection Act (COPPA) and other applicable laws, it is easy for children to falsify their birth date or use their parents' accounts to access content that might not be age-appropriate.
The Impact of Misinformation on Children
In virtual settings, inaccurate information can come in the form of text, images, or videos shared through traditional and social media channels. In this age, online misinformation is a significant cause for concern, especially for children, because it can cause anxiety, damage self-esteem, shape beliefs, and skew their worldviews. It can distort children's understanding of reality, hinder their critical thinking skills, and cause confusion and cognitive dissonance. The growing infodemic can even lead to information overload. Misinformation can also influence children's social interactions, leading to misunderstandings, conflicts, and mistrust among peers. Children from low-literacy backgrounds are more susceptible to fabricated content. Mis/disinformation can exacerbate social divisions amongst peers and lead to unwanted behavioural patterns. Sometimes children themselves can unwittingly spread or share misinformation. Therefore, it is important to educate and empower children to build cognitive defenses against online misinformation risks, promote media literacy skills, and equip them with the necessary tools to critically evaluate online information.
CyberPeace Policy Wing Recommendations
- Role of Parents & Educators to Build Cognitive Defenses
One way parents shape their children's values, beliefs and actions is through modelling. Children observe how their parents use technology, handle challenging situations, and make decisions. For example, parents who demonstrate honesty, encourage healthy use of social media and show kindness and empathy are more likely to raise children who hold these qualities in high regard. Hence parents and educators play an important role in shaping the minds and behaviours of their young charges, whether in offline or online settings. It is important for parents and educators to pay close attention to how online content consumption is affecting their child's cognitive skills. They should also educate children about authentic sources of information: instructing children on the importance of relying on credible sources when researching any topic, and on using verification mechanisms to test suspected information. This may sound like a challenging ideal to meet, but the earlier we teach children prebunking and debunking strategies and the ability to differentiate between fact and misleading information, the sooner we can help them build the cognitive defenses they need to use the Internet safely. Hence it becomes paramount for parents and educators to encourage children to question the validity of information, verify sources, and critically analyse content. Developing these skills is essential for navigating the digital world effectively and making informed decisions.
- The Role of Tech & Social Media Companies to Fortify their Steps in Countering Misinformation
It is worth noting that all major tech and social media companies have privacy policies in place to discourage the spread of harmful content or misinformation. Social media platforms have already initiated efforts to counter misinformation by introducing new features such as adding context to content, labelling content, AI watermarks, and collaboration with civil society organisations. In light of this, social media platforms must prioritise both the design and the practical implementation of policies to counter misinformation strictly. These strategies can be further improved through government support and regulatory controls. It is recommended that social media platforms further increase their efforts against the growing spread of online mis/disinformation and apply advanced techniques to counter it, including filtering, automated removal, detection and prevention, watermarking, strengthened reporting mechanisms, providing context to suspected content, and promoting authenticated and reliable sources of information.
Social media platforms should consider developing children-specific help centres that host educational content in attractive, easy-to-understand formats so that children can learn about misinformation risks and tactics, how to spot red flags and how to increase their information literacy and protect themselves and their peers. Age-appropriate, attractive and simple content can go a long way towards fortifying young minds and making them aware and alert without creating fear.
- Laws and Regulations
It is important that the government and social media platforms work in sync to counteract misinformation. The government must consult with the concerned platforms and enact rules and regulations that strengthen platforms' age verification mechanisms at the sign-up/account creation stage whilst also respecting user privacy. Content moderation, removal of harmful content, and strengthening reporting mechanisms are all important factors that must be prioritised at both the regulatory level and the platform operational level. Additionally, in order to promote healthy and responsible use of technology by children, the government should collaborate with other institutions to design information literacy programs at the school level. The government must make it a key priority to work with civil society organisations and expert groups that run programs to fight misinformation and co-create a safe cyberspace for everyone, including children.
- Expert Organisations and Civil Societies
Cybersecurity experts and civil society organisations possess a unique blend of large-scale impact potential and technical expertise. We have the ability to educate and empower huge numbers of people, along with the skills and policy acumen needed not just to make people aware of the problem but also to teach them how to solve it for themselves. True, sustainable solutions to any social concern only come about when capacity-building and empowerment are at the heart of the initiative. Programs that prioritise resilience, teach prebunking and debunking, and understand the unique concerns, needs and abilities of children so as to design solutions accordingly are the best suited to implement the administration’s mission to create a safe digital society.
Final Words
Online misinformation significantly impacts child development and can hinder their cognitive abilities, color their viewpoints, and cause confusion and mistrust. It is important that children are taught not just how to use technology but how to use it responsibly and positively. This education can begin at a very young age and parents, guardians and educators can connect with CyberPeace and other similar initiatives on how to define age-appropriate learning milestones. Together, we can not only empower children to be safe today, but also help them develop into netizens who make the world even safer for others tomorrow.
References:
- [1] Digital misinformation / disinformation and children
- [2] Children's Privacy | Federal Trade Commission

Introduction
"In one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend."
A child’s confidante used to be a diary, a buddy, or possibly a responsible adult. These days, that confidante is a chatbot: invisible, industrious, and constantly online. ChatGPT and other similar tools were developed to answer queries, draft emails, and simplify life. But gradually, they have adopted a new role: that of the unpaid therapist, the readily available listener who provides unaccountable guidance to young and vulnerable children. This function is frighteningly evident in the events unfolding in the case filed in the Superior Court of the State of California, Matthew Raine & Maria Raine v. OpenAI, Inc. & Ors. The lawsuit, obtained by the BBC, charges OpenAI with wrongful death and negligence. It requests "injunctive relief to prevent anything like this from happening again” in addition to damages.
This is a heartbreaking tale about a boy, not yet seventeen, who turned to an algorithm rather than family and friends for companionship, and found his hopelessness affirmed rather than being guided towards professional help. OpenAI’s legal future may well be decided in a San Francisco courtroom, but the ethical issues this case presents already outweigh any verdict.
When Machines Mistake Empathy for Encouragement
The lawsuit claims that Adam used ChatGPT for academic purposes, but over time also cast it in the role of a friend. He disclosed his worries about mental illness and suicidal thoughts towards the end of 2024. In an effort to “empathise”, the chatbot told him that many people find “solace” in imagining an escape hatch, thereby normalising suicidal thoughts rather than guiding him towards assistance. ChatGPT carried on the conversation as if this were just another intellectual subject, in contrast to a human who might have hurried to notify parents, teachers, or emergency services. The lawsuit walks through the various conversations in which the teenager uploaded photographs of himself showing signs of self-harm. It adds that the programme “recognised a medical emergency but continued to engage anyway”.
This is not an isolated case. Another report, from March 2023, narrates how a Belgian man allegedly died by suicide after speaking with an AI chatbot. The Belgian news outlet La Libre reported that Pierre spent six weeks discussing climate change with the AI bot ELIZA, but after the discussion became “increasingly confusing and harmful,” he took his own life. As per a guest essay published in The New York Times, a Common Sense Media survey released last month found that 72% of American youth reported using AI chatbots as friends. Almost one in eight had turned to them for “emotional or mental health support,” which translates to 5.2 million teenagers in the US. Nearly 25% of students who used Replika, an AI chatbot created for companionship, said they used it for mental health care, as per a recent study conducted by Stanford researchers.
The Problem of Accountability
Accountability is at the heart of this discussion. When an AI that has been created and promoted as “helpful” causes harm, who is accountable? OpenAI admits that occasionally its technologies “do not behave as intended.” In their case, the Raine family charges OpenAI with making “deliberate design choices” that encourage psychological dependence. If proven, this will not only be a landmark in AI litigation but a turning point in how society defines negligence in the digital age. Young people remain the most at risk because they trust the chatbot as a personal confidante and are unaware that it is unable to distinguish between seriousness and triviality, or between empathy and enablement.
A Prophecy: The De-Influencing of Young Minds
The prophecy of our time is stark: if children are not taught to view AI as a tool rather than a friend, we run the risk of producing a generation that is too readily influenced by unaccountable voices. We must now teach young people to resist over-reliance on algorithms for concerns of the heart and mind, just as society once taught them to question commercials, to spot propaganda, and to resist peer pressure.
Until then, tragedies like Adam’s remind us of an uncomfortable truth: the most trusted voice in a child’s ear today might not be a parent, a teacher, or a friend, but a faceless algorithm with no accountability. And that is a world we must urgently learn to change.
CyberPeace has been at the forefront of advocating for the ethical and responsible use of such AI tools. The solution lies in a harmonious balance between regulation, technological development, and user awareness and responsibility.
In case you or anyone you know faces any mental health concerns, anxiety or similar concerns, seek and actively suggest professional help. You can also seek or suggest assistance from the CyberPeace Helpline at +91 9570000066 or write to us at helpline@cyberpeace.net
References
- https://www.bbc.com/news/articles/cgerwp7rdlvo
- https://www.livemint.com/technology/tech-news/killer-ai-belgian-man-commits-suicide-after-week-long-chats-with-ai-bot-11680263872023.html
- https://www.nytimes.com/2025/08/25/opinion/teen-mental-health-chatbots.html

Introduction:
With the rapid advancement of technology, vehicles are being transformed into moving data centres. Connectivity, driver assistance systems, advanced software, automated systems and other modern technologies are being deployed to make the user experience more advanced and enjoyable. Software plays an important role in the overall functionality and convenience of the vehicle: advanced features like keyless entry, voice assistance, sensor cameras and communication technologies are being incorporated into modern vehicles. To address cybersecurity concerns in vehicles, the Ministry of Road Transport and Highways (MoRTH) has proposed standard Cyber Security and Management Systems (CSMS) rules for specific categories of four-wheelers, including both passenger and commercial vehicles. The goal is to protect these vehicles and their functions against cyberattacks and vulnerabilities, and the move aims to ensure standardised cybersecurity measures across the automotive industry. The proposed standards will place certain responsibilities on vehicle manufacturers to implement suitable and proportionate measures to secure dedicated environments and to take steps to ensure cybersecurity.
The New Mandate
The new set of standards requires automobile manufacturers to install a new cybersecurity management system, which will be inclusive of protection against several cyberattacks on the vehicle’s autonomous driving functions, electronic control unit, connected functions, and infotainment systems. The proposed automotive industry standards aim to fortify vehicles against cyberattacks. These standards, expected to be notified by early next month, will apply to all M and N category vehicles. This includes passenger vehicles, goods carriers, and even tractors if they possess even a single electronic control unit. The need for enhanced cybersecurity in the automotive sector is palpable. Modern vehicles, equipped with advanced technologies, are highly prone to cyberattacks. The Ministry of Road Transport and Highways has thus taken a precautionary measure to safeguard all new-age commercial and private vehicles against cyber threats and vulnerabilities.
Cyber Security and Management Systems (CSMS)
The proposed standards by the Ministry of Road Transport and Highways (MoRTH) clarify that CSMS refers to a systematic risk-based strategy that defines organisational procedures, roles, and governance to manage and mitigate risks connected with cyber threats to vehicles, eventually safeguarding them from cyberattacks. According to the draft regulations, all manufacturers will be required to install a cyber security management system in their vehicles and provide the government with a certificate of compliance at the time of vehicle type certification.
Electric vehicle charging systems
Electric vehicle charging stations could also be susceptible to cyber threats and vulnerabilities, which makes it critical to have standards in place to address them. Notably, the Indian Computer Emergency Response Team (CERT-In), the designated authority that tracks and monitors cybersecurity incidents in India, has received reports of vulnerabilities in products and applications related to electric vehicle charging stations. Electric vehicles are becoming increasingly popular as the world shifts to green technology, and EV owners can charge their cars at charging points in convenient spots. When an EV is charged at a charging station, data is transferred between the car, the charging station, and the company that owns the equipment. This trail of data sharing, and the charging stations themselves, can be exploited by bad actors in many ways. Some of the threats include malware, remote manipulation, disruption of charging stations, social engineering attacks, and compromised aftermarket devices.
Conclusion
Cybersecurity is necessary in view of the increased connectivity and use of software systems and other modern technologies in vehicles. As the automotive industry continues to adopt advanced technologies, it will become increasingly important for organisations to take a proactive approach to vehicle cybersecurity. A balanced approach between technological innovation and security measures will be instrumental in securing the automotive ecosystem. The recently proposed policy standard by the Ministry of Road Transport and Highways (MoRTH) can be seen as a commendable step towards making the automotive industry cyber-resilient and safe for everyone.
References:
- https://economictimes.indiatimes.com/news/india/road-transport-ministry-proposes-uniform-cyber-security-system-for-four-wheelers/articleshow/105187952.cms
- https://www.financialexpress.com/business/express-mobility-cybersecurity-in-the-autonomous-vehicle-the-next-frontier-in-mobility-3234055/
- https://www.gktoday.in/morth-proposes-uniform-cyber-security-standards-for-four-wheelers/
- https://cybersecurity.att.com/blogs/security-essentials/the-top-8-cybersecurity-threats-facing-the-automotive-industry-heading-into-2023