#FactCheck - Viral Clip and Newspaper Article Claiming 18% GST on 'Good Morning' Messages Debunked
Executive Summary
A recent viral message circulating on social media platforms such as X and Facebook claims that the Indian Government will start charging an 18% GST on "good morning" texts from April 1, 2024. This news is misinformation. The message includes a newspaper clipping and a video, both of which actually date back to 2018. The newspaper article from Navbharat Times, published on March 2, 2018, was clearly intended as a joke. In addition, an ABP News video originally aired on March 20, 2018 was part of a fact-checking segment that debunked the rumour of a GST on greetings.

Claims:
The claim circulating online suggests that the Government will start applying an 18% GST on all "Good Morning" texts sent through mobile phones from April 1 this year, with the tax added to monthly mobile bills.




Fact Check:
When we received the news, we first ran some relevant keyword searches and found a Facebook video by ABP News titled ‘Viral Sach: Govt to impose 18% GST on sending good morning messages on WhatsApp?’


We watched the full video and found that the news is six years old. The Research Wing of CyberPeace Foundation also found the full version of the widely shared ABP News clip on its website, dated March 20, 2018. The video showed a newspaper clipping from Navbharat Times, published on March 2, 2018, carrying a humorous article under the tagline "Bura na mano, Holi hai" ("Don't mind, it's Holi"). The recent viral image is a cutout from that 2018 ABP News broadcast.
Hence, the image now spreading widely is fake and misleading.
Conclusion:
The viral message claiming that the government will impose GST (Goods and Services Tax) on "Good morning" messages is completely fake. The newspaper clipping used in the message is from an old comic article published by Navbharat Times, while the clip and image from ABP News have been taken out of context to spread false information.
Claim: India will introduce a Goods and Services Tax (GST) of 18% on all "good morning" messages sent through mobile phones from April 1, 2024.
Claimed on: Facebook, X
Fact Check: Fake; it originates from a comic article published by Navbharat Times on 2 March 2018

Executive Summary
A viral video allegedly featuring cricketer Virat Kohli endorsing a betting app named ‘Aviator’ is being shared widely across social media platforms. The CyberPeace Research Team’s investigation revealed that the video was made using deepfake technology. We found anomalies in the viral video consistent with synthetic media, no genuine celebrity endorsement of the app exists, and we have previously debunked similar deepfake videos of Virat Kohli. The spread of such content underscores the need for social media platforms to implement robust measures to combat online scams and misinformation.

Claims:
The claim is that a video circulating on social media depicts Indian cricketer Virat Kohli endorsing a betting app called "Aviator." The video features footage styled as a broadcast from the Indian news channel India TV, in which a journalist appears to endorse the betting app, followed by Virat Kohli describing his experience with it.

Fact Check:
Upon receiving the news, we watched the video closely and found anomalies typical of deepfake videos: the journalist's lip movements are out of sync and do not match the audio, and the same mismatch is visible when Virat Kohli speaks.

We then divided the video into keyframes and reverse-searched one of the frames from Kohli's segment. This led us to a similar video, uploaded on his verified Instagram handle, in which Kohli wears the same brown jacket; it is an ad promotion made in collaboration with American Tourister.
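Reverse image search engines can match a video keyframe to its original source because they compare perceptual hashes, fingerprints that stay stable under re-encoding, cropping borders, or brightness changes. As an illustrative sketch only (not the specific tooling used in this investigation), a minimal "average hash" over an 8×8 grayscale thumbnail works like this:

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grid of grayscale values (0-255).

    Each bit records whether a pixel is brighter than the mean, so the hash
    survives uniform brightness shifts and mild compression artefacts.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; small distances indicate near-duplicate frames."""
    return sum(a != b for a, b in zip(h1, h2))

# A synthetic 8x8 "frame", a uniformly brightened copy (hashes identically),
# and its photographic negative (maximally distant).
frame = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
brighter = [[p + 10 for p in row] for row in frame]
negative = [[255 - p for p in row] for row in frame]
```

In practice the 8×8 thumbnails come from downscaling real keyframes (e.g. with OpenCV or Pillow), and candidate matches within a small Hamming distance are then reviewed manually, as we did with the Instagram upload.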

After going through the entire video, it is evident that Virat Kohli is not endorsing any betting app; rather, he is promoting an advertising collaboration with American Tourister.
We then ran keyword searches to see whether India TV had published any news as claimed in the viral video, but found no credible source.
Therefore, based on the major anomalies in the video and our further analysis, we conclude that it was created using synthetic media and is fake and misleading.
Conclusion:
The video of Virat Kohli promoting a betting app is fake and does not actually feature the celebrity endorsing the app. This brings up many concerns regarding how Artificial Intelligence is being used for fraudulent activities. Social media platforms need to take action against the spread of fake videos like these.
Claim: Video surfacing on social media shows Indian cricket star Virat Kohli promoting a betting application known as "Aviator."
Claimed on: Facebook
Fact Check: Fake & Misleading

Introduction
The role of ‘Small and Medium Enterprises’ (SMEs) in the economic and social development of the country is well established. The SME sector is often driven by individual creativity and innovation. Contributing 8% of the country’s GDP, 45% of manufactured output and 40% of exports, SMEs provide employment to about 60 million people through over 26 million enterprises producing over six thousand products.
It would be an understatement to say that the SME sector in India is highly heterogeneous in terms of the size of the enterprises, the variety of products and services produced and the levels of technology employed. With the SME sector booming across the country, these enterprises are contributing significantly to local, state, regional and national growth and feeding into India’s objectives of inclusive, sustainable development.
As the digital economy expands, SMEs cannot be left behind and must integrate online to be able to grow and prosper. This development is not without its risks and cybersecurity concerns and digital threats like misinformation are fast becoming a pressing pain point for the SME sector. The unique challenge posed to SMEs by cyber threats is that while the negative consequences of digital risks are just as damaging for the SMEs as they are for larger industries, the former’s ability to counter these threats is not at par with the latter, owing to the limited nature of resources at their disposal. The rapid development of emerging technologies like artificial intelligence makes it easier for malicious actors to develop bots, deepfakes, or other forms of manipulated content that can steer customers away from small businesses and the consequences can be devastating.
Misinformation is the sharing of inaccurate and misleading information, and the act can be both deliberate and unintentional. Malicious actors can use fake reviews, rumours, or false images to promote negative content or create backlash against a business’ brand and reputation. For a fledgling or growing enterprise, its credibility is a critical asset and any threat to the same is as much a cause for concern as any other operational hindrance.
Relationship Building to Counter Misinformation
We live in a world that is dominated by brands. A brand should ideally inspire trust. It is the single most powerful and unifying characteristic that embodies an organisation's culture and values and, once well-established, can create incremental value. Businesses have reported industry rumours in which misinformation devalued a product, sowed mistrust among customers, and hurt revenue. Mitigating strategies to counter these digital downsides can include implementing greater due diligence and basic cyber hygiene practices, like two-factor or multi-factor authentication, as well as open communication of one’s experiences in the larger professional and business networks.
The loss of customer trust can be fatal for a business, and for an SME, the access to the scale of digital and other resources required to restore reputations may simply not be a feasible option. Creating your brand story is not just the selling pitch you give to customers and investors, but is also about larger qualitative factors such as your own motivation for starting the enterprise or the emotional connection your audience base enjoys with your organisation. The brand story is a mosaic of multiple tangible and intangible elements that all come together to determine how the brand is perceived by its various stakeholders. Building a compelling and fortified brand story which resonates deeply with people is an important step in developing a robust reputation. It can help inoculate against several degrees of misinformation and malicious attempts and ensure that customers continue to place their faith in the brand despite attempts to hurt this relationship.
Engaging with the target audience, i.e., the customer base, is both an effective marketing tool and a misinformation inoculation strategy. SMEs should also continuously assess their strategies, adapt to market changes, and remain agile in their approach to stay competitive and relevant in today's dynamic business environment. These strategies lead to greater customer engagement through feedback, reviews and surveys, which help in building trust and loyalty. Innovative and dynamic customer service engages the target audience and helps the business stay competitive and relevant.
Crisis Management and Response
Having a crisis management strategy is an important practice for all SMEs and should be mandated for better policy implementation. Businesses need greater due diligence and basic cyber hygiene practices, like two-factor authentication, essential compliances, strong password protocols, transparent disclosure, etc.
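Of the hygiene measures above, two-factor authentication is the most concretely specifiable. As a hedged sketch of the underlying mechanism only (RFC 6238 time-based one-time passwords, not a production implementation), the six-digit code an authenticator app generates can be computed like this:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time-step counter."""
    now = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(now // step))       # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at t=59s
# falls in counter slot 1 and yields the 6-digit code "287082".
```

Because the code depends on a shared secret and the current time window, a stolen password alone is not enough to log in, which is exactly why 2FA belongs in an SME's baseline cyber hygiene.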
The following steps should form part of a crisis management and response strategy:
- Assessing the damage by identifying the misinformation spread and its impact is the first step.
- Issuing a response in the form of a public statement by engaging the media should precede legal action.
- Two levels of communication need to take place in response to a misinformation attack. The first tier is internal, aimed at employees, and should clarify the implications of the incident and the organisation’s response plan. The other is aimed at customers via direct outreach to clarify the situation and provide accurate information on the matter. If required, employees can be trained to handle customer enquiries regarding the misinformation.
- The digital engagement of the enterprise should be promptly updated and social media platforms and online communications must address the issue and provide clarity and factual information.
- Immediate action must include a plan to rebuild reputations and trust by ensuring customers of the high quality of products and services. The management should seek customer feedback and show commitment to improving processes and transparency. Sharing positive testimonials and stories of satisfied customers can also help at this stage.
- Engaging with the community and collaborating with organisations is also an important part of crisis management.
While these steps are for rebuilding and crisis management, further steps also need to be taken:
- Monitoring customer sentiment and gauging the effectiveness of the efforts taken is also necessary; if required, strategic adjustments can be made in response to evolving circumstances.
- Depending on the severity of the impact, management may choose to engage the professional help of PR consultants and crisis management experts to develop comprehensive recovery plans and help navigate the situation.
- A long-term strategy which focuses on building resilience against future attacks is important. Along with this, engaging in transparency and proactive communication with stakeholders is a must.
Legal and Ethical Considerations
SME administrators must prioritise ethical market practices and appreciate that SMEs are subject to laws dealing with defamation, intellectual property rights (trademark and copyright infringement in particular), data protection and privacy, and consumer protection. Knowing these laws and ensuring that there is no infringement of the rights of other enterprises or their consumers is integral to continuing to do business legally.
Ethical and transparent business conduct includes clear and honest communication and proactive public redressal mechanisms in the event of misinformation or mistakes. These efforts go a long way towards building trust and accountability.
Proactive public engagement is an important step in building relationships. SMEs can engage with the community where they conduct their business through outreach programs and social media engagement. Efforts to counter misinformation through public education campaigns that alert customers and other stakeholders about misinformation serve the dual purpose of countering misinformation and creating deep community ties. SME administrators should monitor content and developments in their markets and sectors to ensure that their marketing practices are ethical and not creating or spreading misinformation, be it in the form of active sensationalising of existing content or passive dissemination of misinformation created by others. Fact-checking tools and expert consultations can help address and prevent a myriad of problems and should be incorporated into everyday operations.
Conclusion
Developing strong cybersecurity protocols, practising basic digital hygiene and ensuring regulatory compliances are crucial to ensure that a business not only survives but also thrives. Therefore, a crisis management plan and trust-building along with ethical business and legal practices go a long way in ensuring the future of SMEs. In today's digital landscape, misinformation is pervasive, and trust has become a cornerstone of successful business operations. It is the bedrock of a resilient and successful SME. By implementing and continuously improving trust-building efforts, businesses can not only navigate the challenges of misinformation but also create lasting value for their customers and stakeholders. Prioritising trust ensures long-term growth and sustainability in an ever-evolving digital landscape.
References
- https://msme.gov.in/sites/default/files/SME-Strategic-Action-Plan.pdf
- https://carnegieendowment.org/research/2024/01/countering-disinformation-effectively-an-evidence-based-policy-guide?lang=en
- https://dcmsme.gov.in/Report%20of%20Expert%20Committee%20on%20SMEs%20-%20The%20U%20K%20Sinha%20Committee%20constitutes%20by%20RBI.pdf
Introduction
The rapid advancement of technology, including generative AI, offers immense benefits but also raises concerns about misuse. The Internet Watch Foundation reported that, as of July 2024, over 3,500 new AI-generated child sexual abuse images appeared on the dark web. The UK’s National Crime Agency records 800 monthly arrests for online child threats and estimates 840,000 adults as potential offenders. In response, the UK is introducing legislation to criminalise AI-generated child exploitation imagery, which will be a part of the Crime and Policing Bill when it comes to parliament in the next few weeks, aligning with global AI regulations like the EU AI Act and the US AI Initiative Act. This policy shift strengthens efforts to combat online child exploitation and sets a global precedent for responsible AI governance.
Current Legal Landscape and the Policy Gap
The UK’s Online Safety Act 2023 aims to combat CSAM and deepfake pornography by holding social media and search platforms accountable for user safety. It mandates these platforms to prevent children from accessing harmful content, remove illegal material, and offer clear reporting mechanisms. For adults, major platforms must be transparent about harmful content policies and provide users control over what they see.
However, the Act has notable limitations, including concerns over content moderation overreach, potential censorship of legitimate debates, and challenges in defining "harmful" content. It may disproportionately impact smaller platforms and raise concerns about protecting journalistic content and politically significant discussions. While intended to enhance online safety, these challenges highlight the complexities of balancing regulation with digital rights and free expression.
The Proposed Criminalisation of AI-Generated Sexual Abuse Content
The proposed law by the UK criminalises the creation, distribution, and possession of AI-generated CSAM and deepfake pornography. It mandates enforcement agencies and digital platforms to identify and remove such content, with penalties for non-compliance. Perpetrators may face up to two years in prison for taking intimate images without consent or installing equipment to facilitate such offences. Currently, sharing or threatening to share intimate images, including deepfakes, is an offence under the Sexual Offences Act 2003, amended by the Online Safety Act 2023. The government plans to repeal certain voyeurism offences, replacing them with broader provisions covering unauthorised intimate recordings. This aligns with its September 2024 decision to classify sharing intimate images as a priority offence under the Online Safety Act, reinforcing its commitment to balancing free expression with harm prevention.
Implications for AI Regulation and Platform Responsibility
The UK's move aligns with its AI Safety Summit commitments, placing responsibility on platforms to remove AI-generated sexual abuse content or face Ofcom enforcement. The Crime and Policing Bill is expected to tighten AI regulations, requiring developers to integrate safeguards against misuse, and the licensing frameworks may enforce ethical AI standards, restricting access to synthetic media tools. Given AI-generated abuse's cross-border nature, enforcement will necessitate global cooperation with platforms, law enforcement, and regulators. Bilateral and multilateral agreements could help harmonise legal frameworks, enabling swift content takedown, evidence sharing, and extradition of offenders, strengthening international efforts against AI-enabled exploitation.
Conclusion and Policy Recommendations
The Crime and Policing Bill marks a crucial step in criminalising AI-generated CSAM and deepfake pornography, strengthening online safety and platform accountability. However, balancing digital rights and enforcement remains a challenge. For effective implementation, industry cooperation is essential, with platforms integrating detection tools and transparent reporting systems. AI ethics frameworks should prevent misuse while allowing innovation, and victim support mechanisms must be prioritised. Given AI-driven abuse's global nature, international regulatory alignment is key for harmonised laws, evidence sharing, and cross-border enforcement. This legislation sets a global precedent, emphasising proactive regulation to ensure digital safety, ethical AI development, and the protection of human dignity.
References
- https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
- https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/
- https://www.financialexpress.com/life/technology-uk-set-to-ban-ai-tools-for-creating-child-sexual-abuse-images-with-new-laws-3735296/
- https://www.gov.uk/government/publications/national-crime-agency-annual-report-and-accounts-2023-to-2024/national-crime-agency-annual-report-and-accounts-2023-to-2024-accessible#part-1--performance-report