#FactCheck - Viral Clip and Newspaper Article Claiming 18% GST on 'Good Morning' Messages Debunked
Executive Summary
A viral message circulating on social media platforms such as X and Facebook claims that the Indian Government will start charging 18% GST on "good morning" texts from April 1, 2024. This is misinformation. The message includes a newspaper clipping and a video that were actually part of fake news coverage from 2018. The newspaper article from Navbharat Times, published on March 2, 2018, was clearly intended as a joke. In addition, we found that an ABP News video, originally aired on March 20, 2018, was part of a fact-checking segment that debunked the rumour of a GST on greetings.

Claims:
The claim circulating online suggests that the Government will start applying 18% GST on all "Good Morning" texts sent through mobile phones from April 1 this year, with the tax added to monthly mobile bills.




Fact Check:
When we came across the claim, we first ran relevant keyword searches. We found a Facebook video by ABP News titled Viral Sach: ‘Govt to impose 18% GST on sending good morning messages on WhatsApp?’


We watched the full video and found that the news is six years old. The Research Wing of CyberPeace Foundation also found the full version of the widely shared ABP News clip on its website, dated March 20, 2018. The video showed a newspaper clipping from Navbharat Times, published on March 2, 2018, carrying a humorous article with the line "Bura na mano, Holi hai" ("Don't mind, it's Holi"). The recent viral image is simply a cutout from that 2018 ABP News broadcast.
Hence, the image now spreading widely is fake and misleading.
Conclusion:
The viral message claiming that the government will impose GST (Goods and Services Tax) on "Good morning" messages is completely fake. The newspaper clipping used in the message is from an old comic article published by Navbharat Times, while the clip and image from ABP News have been taken out of context to spread false information.
Claim: India will introduce a Goods and Services Tax (GST) of 18% on all "good morning" messages sent through mobile phones from April 1, 2024.
Claimed on: Facebook, X
Fact Check: Fake; it originated as a comic article published by Navbharat Times on 2 March 2018
Related Blogs

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to witness it. Thanks to rapid advancements in artificial intelligence, the internet as we know it is turning into a treasure trove of hyper-optimised material over which vast bot armies battle. All of that advancement, however, comes at a price, primarily in human wellbeing. Releasing highly personalised chatbots on a populace already struggling with economic stagnation, terminal loneliness, and the ongoing destruction of our planet is not exactly a formula for improved mental health. This is the reality for the estimated 75% of children and teenagers who have chatted with chatbot-generated fictional characters. AI chatbots are becoming more and more integrated into our daily lives, assisting us with customer service, entertainment, healthcare, and education. But as the influence of these tools grows, accountability and ethical conduct become more important. An investigation last year into the internal policies of a major international tech firm exposed alarming gaps: its AI chatbots were permitted to engage in romantic roleplay with children, produce racially discriminatory reasoning, and make spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores a global dilemma: how can we regulate AI to protect child safety, guard against misinformation, and uphold ethical standards without suppressing innovation?
The Guidelines and Their Gaps
Tech giants like Meta and Google are often reprimanded for overlooking child safety amid the overall increase in mental health issues among children and adolescents. According to reports, Google introduced Gemini AI Kids, a child-friendly version of its Gemini AI chatbot, representing a major step in the incorporation of generative artificial intelligence (Gen-AI) into early schooling. Users under the age of thirteen can access this version of Gemini AI Kids through supervised accounts on the Family Link app.
AI operates on the premise of data collection and analysis. To safeguard children’s personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces particular safeguards. According to Section 9, before processing the data of children, who are defined as people under the age of 18, Data Fiduciaries, entities that decide the goals and methods of processing personal data, must get verified consent from a parent or legal guardian. Furthermore, the Act expressly forbids processing activities that could endanger a child’s welfare, such as behavioural surveillance and child-targeted advertising. According to court interpretations, a child's well-being includes not just medical care but also their moral, ethical, and emotional growth.
While the DPDP Act is a significant step in the right direction, important lacunae remain in how it addresses AI and child safety. Age-gating systems, thorough risk rating, and limitations specific to AI-driven platforms are absent from the Act, which concentrates largely on consent and harm prevention in data protection. Furthermore, it ignores threats to children’s emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are self-regulatory in nature and dispersed across several laws, such as the Bharatiya Nyaya Sanhita, 2023. These include platform disclaimers, technology-based detection of child sexual abuse material, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Enabling chatbots to engage in romantic roleplay with minors is among the most concerning discoveries. Even when not explicitly sexual, such interactions can result in grooming, psychological trauma, and desensitisation to inappropriate behaviour. Child protection experts agree that illicit or sexual conversations with children in cyberspace are unacceptable, and permitting even "flirtatious" conversation risks normalising unsafe boundaries.
- International Standards and Best Practices - The concept of "safety by design" is central to child online safety frameworks around the world, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Act. Requiring platforms and developers to remove risks proactively, rather than respond to harms reactively, is the bare minimum standard; any AI guideline that leaves loopholes for child-directed roleplay fails to meet it.
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The regulations also allowed AI to create false narratives so long as they carried disclaimers. For example, chatbots were able to write articles promoting false health claims or smears against public officials, provided they were labelled as "untrue." While disclaimers might provide thin legal cover, they do little to stem the proliferation of misleading information; misinformation tends to spread widely precisely because users disregard caveat labels in favour of provocative assertions.
- Ethical Lines and Discriminatory Content - It is ethically questionable to allow AI systems to generate racist arguments, even on request. Though scholarly research into prejudice and bias may necessitate such examples, unregulated generation risks normalising damaging stereotypes. Researchers warn that such practices move platforms from being passive hosts of offensive speech to active generators of discriminatory content, a distinction that matters because it places responsibility squarely on developers and corporations.
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech; it is a direct reflection of corporate training data, policy decisions, and system engineering. This fact demands a higher level of accountability. Although companies can update guidelines following public criticism, the fact that such allowances existed in the first place indicates a lack of strong ethical oversight.
- Regulatory Gaps - Regulatory regimes for AI are currently in disarray. The EU AI Act, the OECD AI Principles, and national policies all emphasise human rights, transparency, and accountability. Few, though, specify clear rules for content risks such as child roleplay or hateful narratives. This absence of harmonised international rules leaves companies operating in the shadows, setting their own limits until challenged.
A proactive way forward would include:
- Express Child Protection Requirements: AI systems must categorically prohibit flirtatious or romantic interactions with children.
- Misinformation Protections: Generative AI must not be allowed to produce knowingly false material, regardless of disclaimers.
- Bias Reduction: Developers need to proactively train systems against generating discriminatory narratives, not merely flag them as optional outputs.
- Independent Regulation: External audit and ethics review boards can supply transparency and accountability independent of internal company regulations.
Conclusion
Guidelines like these are more than the internal folly of a single firm; they point to a deeper systemic issue in AI regulation. The stakes rise as generative AI becomes ever more integrated into politics, healthcare, education, and social interaction. Racism, false information, and inadequate child safety measures are severe issues that demand swift resolution. The way forward involves more than corporate self-regulation: it requires multi-stakeholder participation, stronger global frameworks, and enforceable ethical standards. In the end, trust in artificial intelligence will rest on its ability to preserve the truth, protect the vulnerable, and reflect universal human values, rather than merely serve corporate interests.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
- https://www.lakshmisri.com/insights/articles/ai-for-children/#
- https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
- https://www.unicef.org/documents/guidelines-industry-online-child-protection
- https://www.oecd.org/en/topics/sub-issues/ai-principles.html
- https://artificialintelligenceact.eu/

Introduction
Misinformation in India has emerged as a significant societal challenge, wielding a potent influence on public perception, political discourse, and social dynamics. A substantial number of first-time voters across India identify fake news as a real problem in the nation. With the widespread adoption of digital platforms, false narratives, manipulated content, and fake news have found fertile ground to spread unchecked.
Against the backdrop of India being the largest market of WhatsApp users, who forward more content on chats than users anywhere else, the practice of fact-checking forwarded information remains low. Heavy reliance on print media, television, unreliable news channels and, primarily, social media platforms acts as a catalyst, since studies reveal that most Indians trust content forwarded by family and friends. Notably, out of all surveyed risks, misinformation and disinformation ranked the highest in India, ahead of infectious diseases, illicit economic activity, inequality and labour shortages. World Economic Forum analysts, in their 2024 Global Risks Report, note that “misinformation and disinformation in electoral processes could seriously destabilise the real and perceived legitimacy of newly elected governments, risking political unrest, violence and terrorism and long-term erosion of democratic processes.”
The Supreme Court of India on Misinformation
The Supreme Court of India, through various judgements, has noted the impact of misinformation on democratic processes within the country, especially during elections and voting. In 1995, while adjudicating a matter pertaining to keeping the broadcasting media under the control of the public, it noted that democracy becomes a farce when the medium of information is monopolized either by partisan central authority or by private individuals or oligarchic organizations.
In 2003, the Court stated that “Right to participate by casting a vote at the time of election would be meaningless unless the voters are well informed about all sides of the issue in respect of which they are called upon to express their views by casting their votes. Disinformation, misinformation, non-information all equally create an uninformed citizenry which would finally make democracy a mobocracy and a farce.” It noted that elections would be a useless procedure if voters remained unaware of the antecedents of the candidates contesting elections. Thus, a necessary aspect of a voter’s duty to cast intelligent and rational votes is being well-informed. Such information forms one facet of the fundamental right under Article 19(1)(a) pertaining to freedom of speech and expression. Quoting James Madison, it stated that a citizen’s right to know the true facts about their country’s administration is one of the pillars of a democratic State.
On a similar note, while discussing the disclosure of information by an election candidate, the Supreme Court gave weight to the Bombay High Court’s opinion that non-disclosure of information resulted in misinformation and disinformation, thereby leading voters to make uninformed decisions. It stated that a voter has the elementary right to know the full particulars of a candidate who is to represent him in Parliament or the Assemblies.
While misinformation has been discussed primarily in relation to elections, its effects in other sectors have also been considered from time to time. Notably, while discussing the spread of COVID-19, the Court highlighted the World Health Organisation’s 2021 observation that the pandemic was not only an epidemic but also an “infodemic”, owing to the overabundance of information on the internet riddled with misinformation and disinformation. While condemning governments’ direct or indirect threats of prosecution against citizens, it noted that citizens who relied on the internet to seek help in securing medical facilities and oxygen tanks were being targeted on the allegation that the information they posted was false and intended to create panic, defame the administration or damage the national image. It instructed authorities to cease such threats and prevent any clampdown on information sharing.
More recently, in Facebook v. Delhi Legislative Assembly [(2022) 3 SCC 529], the apex court, while upholding the summons issued to Facebook by the Delhi Legislative Assembly in the aftermath of the 2020 Delhi Riots, noted that while social media enables equal and open dialogue between citizens and policymakers, it is also a tool by which extremist views are peddled into mainstream discourse, thereby spreading misinformation. It noted Facebook’s role in Myanmar, where misinformation and posts that Facebook employees missed fuelled offline violence. Since Facebook is one of the most popular social media applications, the platform itself acts as a power centre by hosting various opinions and voices on its forum. This directly impacts the governance of States, and some form of liability must attach to the platform. The Supreme Court also objected to Facebook taking contrary stands in different jurisdictions: in the US it projected itself as a publisher, enabling it to maintain control over the material disseminated from its platform, while in India “it has chosen to identify itself purely as a social media platform, despite its similar functions and services in the two countries.”
Conclusion
The pervasive issue of misinformation in India is a multifaceted challenge with profound implications for democratic processes, public awareness, and social harmony. The alarming statistics of fake news recognition among first-time voters, coupled with a lack of awareness regarding fact-checking organizations, underscore the urgency of addressing this issue. The Supreme Court of India has consistently recognized the detrimental impact of misinformation, particularly in elections. The judiciary has stressed the pivotal role of an informed citizenry in upholding the essence of democracy. It has emphasized the right to access accurate information as a fundamental aspect of freedom of speech and expression. As India grapples with the challenges of misinformation, the intersection of technology, media literacy and legal frameworks will be crucial in mitigating the adverse effects and fostering a more resilient and informed society.
References
- https://thewire.in/media/survey-finds-false-information-risk-highest-in-india
- https://www.statista.com/topics/5846/fake-news-in-india/#topicOverview
- https://www.weforum.org/publications/global-risks-report-2024/digest/
- https://main.sci.gov.in/supremecourt/2020/20428/20428_2020_37_1501_28386_Judgement_08-Jul-2021.pdf
- Secretary, Ministry of Information & Broadcasting, Govt, of India and Others v. Cricket Association of Bengal and Another [(1995) 2 SCC 161]
- People’s Union for Civil Liberties (PUCL) v. Union of India [(2003) 4 SCC 399]
- Kisan Shankar Kathore v. Arun Dattatray Sawant and Others [(2014) 14 SCC 162]
- Distribution of Essential Supplies & Services During Pandemic, In re [(2021) 18 SCC 201]
- Facebook v. Delhi Legislative Assembly [(2022) 3 SCC 529]
Executive Summary:
A post on X (formerly Twitter) has gained widespread attention, featuring an image inaccurately asserting that Houthi rebels attacked a power plant in Ashkelon, Israel. This misleading content has circulated widely amid escalating geopolitical tensions. However, investigation shows that the footage actually originates from a prior incident in Saudi Arabia. This situation underscores the significant dangers posed by misinformation during conflicts and highlights the importance of verifying sources before sharing information.

Claims:
The viral video claims to show Houthi rebels attacking Israel's Ashkelon power plant as part of recent escalations in the Middle East conflict.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video. The search revealed that the circulating video does not show an attack on the Ashkelon power plant in Israel; instead, it depicts a 2022 drone strike on a Saudi Aramco facility in Abqaiq. There are no credible reports of Houthi rebels targeting Ashkelon, as their activities are largely confined to Yemen and Saudi Arabia.

This incident highlights the risks associated with misinformation during sensitive geopolitical events. Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
Conclusion:
The assertion that Houthi rebels targeted the Ashkelon power plant in Israel is incorrect. The viral video in question has been misrepresented and actually shows a 2022 incident in Saudi Arabia. This underscores the importance of verifying unverified media before sharing it.
- Claim: The video shows massive fire at Israel's Ashkelon power plant
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading