#FactCheck - Viral Clip and Newspaper Article Claiming 18% GST on 'Good Morning' Messages Debunked
Executive Summary
A recent viral message on social media platforms such as X and Facebook claims that the Indian Government will start charging an 18% GST on "good morning" texts from April 1, 2024. This is misinformation. The message includes a newspaper clipping and a video that actually date back to 2018. The newspaper article from Navbharat Times, published on March 2, 2018, was clearly intended as a joke. In addition, we found that an ABP News video, originally aired on March 20, 2018, was part of a fact-checking segment that debunked the rumor of a GST on greetings.

Claims:
The claim circulating online suggests that the Government will start applying an 18% GST on all "Good Morning" texts sent through mobile phones from April 1 this year, and that the tax would be added to monthly mobile bills.




Fact Check:
When we came across the claim, we first ran relevant keyword searches. We found a Facebook video by ABP News titled ‘Viral Sach: Govt to impose 18% GST on sending good morning messages on WhatsApp?’


We watched the full video and found that the news report is six years old. The Research Wing of CyberPeace Foundation also found the full version of the widely shared ABP News clip on its website, dated March 20, 2018. The video showed a newspaper clipping from Navbharat Times, published on March 2, 2018, carrying a humorous article under the tagline "Bura na mano, Holi hai" ("Don't mind, it's Holi"). The recently viral image is a cutout from that 2018 ABP News broadcast.
Hence, the image now spreading widely is fake and misleading.
Conclusion:
The viral message claiming that the government will impose GST (Goods and Services Tax) on "Good morning" messages is completely fake. The newspaper clipping used in the message is from an old comic article published by Navbharat Times, while the clip and image from ABP News have been taken out of context to spread false information.
Claim: India will introduce a Goods and Services Tax (GST) of 18% on all "good morning" messages sent through mobile phones from April 1, 2024.
Claimed on: Facebook, X
Fact Check: Fake; originally a comic article published by Navbharat Times on 2 March 2018

Executive Summary:
Recently, our team came across a video on social media that appears to show a saint lying in a fire during Mahakumbh 2025. The video has been widely viewed and carries captions claiming that it is part of a ritual at the ongoing Mahakumbh 2025. After thorough research, we found that these claims are false. The video is unrelated to Mahakumbh 2025 and comes from a different context and location; it is an example of old footage being recirculated outside its original context.

Claim:
A video has gone viral on social media, claiming to show a saint lying in fire during Mahakumbh 2025 and suggesting that this act is part of the traditional rituals associated with the ongoing festival. The claim falsely implies that the act is a standard part of the sacred ceremonies held during the Mahakumbh.

Fact Check:
Upon receiving the post, we conducted a reverse image search on key frames extracted from the video and traced it to an old article. Further research revealed that the original post was from 2009, when Ramababu Swamiji, aged 80, lay down on a burning fire for the benefit of society. The video is not recent; it had already gone viral on social media in November 2009. A closer examination of the scene, crowd, and visuals clearly shows that the video is unrelated to the rituals or context of Mahakumbh 2025. Additionally, our research found that such activities are not part of the Mahakumbh rituals. Reputable sources were also consulted to cross-verify this information, effectively debunking the claim and underlining the importance of verifying facts before believing anything.
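For readers who want to reproduce this kind of check, the snippet below is a minimal sketch of the key-frame extraction step, assuming the opencv-python package is installed and a local copy of the clip saved as viral_clip.mp4 (a hypothetical filename); the saved frames can then be uploaded to any reverse image search engine.

```python
# Minimal sketch: extract evenly spaced key frames from a video so they can be
# run through a reverse image search. Assumes `pip install opencv-python` and a
# local file named viral_clip.mp4 (hypothetical filename).
import cv2

VIDEO_PATH = "viral_clip.mp4"   # hypothetical local copy of the viral video
FRAME_STEP = 30                 # roughly one frame per second for a 30 fps clip

cap = cv2.VideoCapture(VIDEO_PATH)
saved = 0
index = 0
while True:
    ok, frame = cap.read()
    if not ok:                  # end of video or read error
        break
    if index % FRAME_STEP == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    index += 1
cap.release()
print(f"Saved {saved} key frames for reverse image search")
```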


For further clarity, the YouTube video attached below addresses the claim in more detail and serves as a reminder to verify whether such claims are true before sharing them.

Conclusion:
The viral video claiming to depict a saint lying in fire during Mahakumbh 2025 is entirely misleading. Our thorough fact-checking reveals that the video dates back to 2009 and is unrelated to the current event. Such misinformation highlights the importance of verifying content before sharing or believing it. Always rely on credible sources to ensure the accuracy of claims, especially during significant cultural or religious events like Mahakumbh.
- Claim: A viral video claims to show a saint lying in fire during the Mahakumbh 2025.
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
To combat the problem of unsolicited calls and SMS, telecom regulator TRAI has urged service providers to create a uniform digital platform within two months that will allow them to request, maintain, and withdraw customers’ approval for promotional calls and messages. In the initial stage, only subscribers will be able to initiate the process of registering their consent to receive promotional calls and SMS; later, business entities will be able to contact customers to seek their consent to receive promotional messages, according to a statement issued by the Telecom Regulatory Authority of India (TRAI) on Saturday.
TRAI Directs Telecom Providers to Set Up Digital Platform
TRAI has now directed all access providers to develop and deploy the Digital Consent Acquisition (DCA) facility for creating a unified platform and process to digitally register customers’ consent across all service providers and principal entities. Consent is received and maintained under the current system by several key entities such as banks, other financial institutions, insurance firms, trading companies, business entities, real estate businesses, and so on.
“The purpose, scope of consent, and the principal entity or brand name shall be clearly mentioned in the consent-seeking message sent over the short code,” according to the statement.
It stated that only approved online or app links, call-back numbers, and so on will be permitted to be used in consent-seeking communications.
TRAI issued guidelines to ensure that all voice-based telemarketers are brought under a single distributed ledger technology (DLT) platform for more efficient monitoring of nuisance calls and unwanted communications. It also instructed operators to actively deploy AI/ML-based anti-phishing systems and to integrate technical solutions on the DLT platform to deal with malicious calls and texts.
“TRAI has issued two separate Directions to Access Service Providers under TCCCPR-2018 (Telecom Commercial Communications Customer Preference Regulations) to ensure that all promotional messages are sent through Registered Telemarketers (RTMs) using approved Headers and Message Templates on the Distributed Ledger Technology (DLT) platform, and to stop misuse of Headers and Message Templates,” the regulator said in a statement.
Users can already block telemarketing calls and texts by texting or dialling 1909 from their registered mobile number, which activates the Do Not Disturb (DND) feature and lets them opt out of advertising calls.

Telecom providers operate DLT platforms, and businesses involved in sending bulk promotional or transactional SMS must register by providing their company information, including sender IDs and SMS templates.
According to the instructions, telecom companies will send consent-seeking messages using the common short code 127. The goal, extent of consent, and primary entity/brand name must be clearly stated in the consent-seeking message delivered via the shortcode.
TRAI stated that only whitelisted URLs, APKs (Android package kit files), OTT links, call-back numbers, and the like shall be used in consent-seeking messages.
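As an illustration only (not TRAI’s actual implementation), the short Python sketch below shows how a filter in the spirit of this rule might check that every link in a consent-seeking SMS belongs to a pre-approved whitelist; the domains and the sample message are hypothetical.

```python
# Illustrative sketch: verify that every URL in a consent-seeking SMS points to
# a whitelisted domain. The whitelist and the sample message are hypothetical.
import re
from urllib.parse import urlparse

WHITELISTED_DOMAINS = {"consent.example-bank.in", "dca.example-telco.in"}
URL_PATTERN = re.compile(r"https?://\S+")

def links_are_whitelisted(sms_text: str) -> bool:
    """Return True only if every URL in the message uses a whitelisted domain."""
    for url in URL_PATTERN.findall(sms_text):
        domain = urlparse(url).netloc.lower()
        if domain not in WHITELISTED_DOMAINS:
            return False
    return True

sample_sms = ("Example Bank seeks your consent for promotional messages. "
              "Review and respond at https://consent.example-bank.in/optin")
print(links_are_whitelisted(sample_sms))  # True for the hypothetical whitelist above
```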
Telcos must “ensure that promotional messages are not transmitted by unregistered telemarketers or telemarketers using telephone numbers (10-digit numbers).” Telecom providers have been urged to act against all erring telemarketers in accordance with the applicable regulations and legal requirements.
Users can, however, decline to receive consent-seeking messages initiated by any principal entity; telecom providers have been urged to create an SMS/IVR (interactive voice response)/online facility for this purpose.
According to TRAI’s timeline, the consent-taking process by principal entities will begin on September 1. According to a nationwide survey conducted by the community platform LocalCircles, 66% of mobile users continue to receive three or more unwanted calls per day, the majority of which originate from personal mobile numbers.
New types of scams keep surfacing on the internet, such as the WhatsApp international call scam. In the latest one, scammers pretend to be Delhi Police officials, ask users for their personal details, and call them from 9-digit numbers.
A recent scam
A Twitter user reported receiving an automated call from +91 96681 9555 stating, “This call is from Delhi Police.” It went on to ask her to stay on the line because some of her documents needed to be collected. A man then came on the line, claiming to be a sub-inspector at New Delhi’s Kirti Nagar police station. He asked whether she had recently misplaced her Aadhaar card, PAN card, or ATM card, to which she replied ‘no’. The fraudster then asked her to confirm the last four digits of her card, claiming that a card with her name on it had been found. Many other people have tweeted about similar calls.
Such scams are steadily increasing; earlier, too, scammers claiming to be Delhi Police asked people for their account details and used 9-digit numbers to defraud them.
TRAI’s new guidelines, which require telecom providers to obtain consent before promotional calls and messages are sent, should help curb such scams.
e-KYC is also an essential requirement, as it offers a more secure identity verification process in an increasingly digital age, using biometric technologies to deliver quick results.

Conclusion
The aim is to prevent unwanted calls and communications from being sent to customers via digital methods without their permission. Once this platform is implemented, an organisation will only be able to send promotional calls or messages with the customer’s explicit approval. Companies use a variety of methods to notify clients about their products, including phone calls, text messages, emails, and social media; customers, however, are constantly bombarded with the same calls and messages as a result. With scams on the rise, the new TRAI guidelines should also help curb scam calls, while digital KYC prevents SIM fraud and offers a more secure identity verification method.

Executive Summary:
Traditional Business Email Compromise (BEC) attacks have become smarter, using advanced technologies to enhance their capability. One such technology on the rise is WormGPT, a generative AI tool being leveraged by cybercriminals for BEC. This report discusses WormGPT and its features, as well as the risks associated with its use in criminal activities. The purpose is to give a general overview of how WormGPT is involved in BEC attacks and to offer advice on how to prevent them.
Introduction
Business Email Compromise (BEC), in simple terms, is a kind of cybercrime in which attackers use email to defraud businesses. Earlier, BEC attacks were executed through simple email scams and phishing. However, with the advancement of AI tools like WormGPT, such malicious activities have become more sophisticated and difficult to identify. This paper discusses WormGPT, a generative AI tool, and how it is used to make BEC attacks more effective.
What is WormGPT?
Definition and Overview
WormGPT is a generative AI model designed to create human-like text. It is built on advanced machine learning algorithms, specifically leveraging large language models (LLMs). These models are trained on vast amounts of text data to generate coherent and contextually relevant content. WormGPT is notable for its ability to produce highly convincing and personalised email content, making it a potent tool in the hands of cybercriminals.
How WormGPT Works
1. Training Data: WormGPT is trained on large and varied datasets, such as emails, articles, and other written material. This extensive training enables it to understand and mimic different writing styles and produce natural-sounding text.
2. Generative Capabilities: Once trained, WormGPT can generate text in response to specific prompts. For example, if a cybercriminal supplies a prompt concerning a company’s financial information, WormGPT can produce what looks like a genuine email asking for more details (a benign illustration of this generative step follows this list).
3. Customization: WormGPT can be fine-tuned at any time with a particular industry or organisation of interest in mind. This customisation enables attackers to make their emails resemble the target’s business activities, enhancing the chances that an attack will succeed.
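To make the generative step concrete, here is a minimal and deliberately benign text-generation sketch using the open-source Hugging Face transformers library with the small GPT-2 model. It is generic library usage, not WormGPT itself: it only shows how a language model continues a prompt into fluent text, which is the same underlying mechanism such tools abuse.

```python
# Benign illustration of prompt-driven text generation with an open-source
# model (GPT-2). Generic library usage only; it simply continues a prompt.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A short note on why email requests for payments should be verified:"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```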
Enhanced Phishing Techniques
Traditional phishing emails are often identifiable by their generic and unconvincing content. WormGPT improves upon this by generating highly personalised and contextually accurate emails. This personalization makes it harder for recipients to identify malicious intent.
Automation of Email Crafting
Previously, creating convincing phishing emails required significant manual effort. WormGPT automates this process, allowing attackers to generate large volumes of realistic emails quickly. This automation increases the scale and frequency of BEC attacks.
Exploitation of Contextual Information
WormGPT can be fed with contextual information about the target, such as recent company news or employee details. This capability enables the generation of emails that appear highly relevant and urgent, further deceiving recipients into taking harmful actions.
Implications for Cybersecurity
Challenges in Detection
The use of WormGPT complicates the detection of BEC attacks. Traditional email security solutions may struggle to identify malicious emails generated by advanced AI, as they can closely mimic legitimate correspondence. This necessitates the development of more sophisticated detection mechanisms.
Need for Enhanced Training
Organisations must invest in training their employees to recognize signs of BEC attacks. Awareness programs should emphasise the importance of verifying email requests for sensitive information, especially when such requests come from unfamiliar or unexpected sources.
Implementation of Robust Security Measures
- Multi-Factor Authentication (MFA): MFA can add an additional layer of security, making it harder for attackers to gain unauthorised access even if they successfully deceive an employee.
- Email Filtering Solutions: Advanced email filtering solutions that use AI and machine learning to detect anomalies and suspicious patterns can help identify and block malicious emails (a toy sketch of this idea follows this list).
- Regular Security Audits: Conducting regular security audits can help identify vulnerabilities and ensure that security measures are up to date.
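As a toy illustration of the email-filtering idea referenced in the list above, the sketch below trains a tiny text classifier with scikit-learn on a handful of hand-made messages. A production filter would need large, representative datasets plus additional signals such as headers and sender reputation; all messages and labels here are invented.

```python
# Toy sketch of ML-assisted email filtering: TF-IDF features plus logistic
# regression, trained on a few invented examples. Real filters need large,
# representative datasets and header/sender-reputation signals as well.
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = suspicious, 0 = benign
emails = [
    "Urgent wire transfer needed today, keep this confidential",
    "Please verify your account details immediately via this link",
    "Attached is the agenda for Thursday's project meeting",
    "Reminder: quarterly report review scheduled for next week",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

new_email = ["CEO request: transfer funds urgently and do not tell anyone"]
print(model.predict(new_email))        # e.g. [1] -> flag the message for review
print(model.predict_proba(new_email))  # class probabilities for transparency
```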
Case Studies
Case Study 1: Financial Institution
A financial institution fell victim to a BEC attack orchestrated using WormGPT. The attacker used the tool to craft a convincing email that appeared to come from the institution’s CEO, requesting a large wire transfer. The email’s convincing nature led to the transfer of funds before the scam was discovered.
Case Study 2: Manufacturing Company
In another instance, a manufacturing company was targeted by a BEC attack using WormGPT. The attacker generated emails that appeared to come from a key supplier, requesting sensitive business information. The attack exploited the company’s lack of awareness about BEC threats, resulting in a significant data breach.
Recommendations for Mitigation
- Strengthen Email Security Protocols: Implement advanced email security solutions that incorporate AI-driven threat detection.
- Promote Cyber Hygiene: Educate employees on recognizing phishing attempts and practising safe email habits.
- Invest in AI for Defense: Explore the use of AI and machine learning in developing defences against generative AI-driven attacks.
- Implement Verification Procedures: Establish procedures for verifying the authenticity of sensitive requests, especially those received via email (a minimal sketch of one such check follows this list).
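As one concrete example of a verification aid for the last item above, the standard-library sketch below compares a sender’s domain against a list of known partner domains and flags close lookalikes. The domains are hypothetical, and a check like this complements rather than replaces out-of-band verification such as a phone call to a known contact.

```python
# Sketch of a lookalike-domain check for vetting sensitive email requests.
# Known domains and test values are hypothetical; standard library only.
from difflib import SequenceMatcher

KNOWN_DOMAINS = {"example-supplier.com", "example-corp.com"}  # hypothetical partners

def classify_sender(sender_domain: str, threshold: float = 0.85) -> str:
    """Return 'trusted', 'lookalike', or 'unknown' for a sender domain."""
    domain = sender_domain.lower()
    if domain in KNOWN_DOMAINS:
        return "trusted"
    for known in KNOWN_DOMAINS:
        # Close but not identical to a known partner domain -> suspicious
        if SequenceMatcher(None, domain, known).ratio() >= threshold:
            return "lookalike"
    return "unknown"

print(classify_sender("example-supplier.com"))   # trusted
print(classify_sender("examp1e-supplier.com"))   # lookalike (digit '1' swap)
print(classify_sender("random-sender.net"))      # unknown
```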
Conclusion
WormGPT is a new tool in the arsenal of cybercriminals that improves their ability to carry out Business Email Compromise attacks more effectively and efficiently. It is therefore critical to inform the defence community about WormGPT’s capabilities and its implications for the threat landscape, and to strengthen protection against advanced and constantly evolving threats.
This means developing rigorous security protocols, raising general awareness of security solutions, and incorporating technologies such as artificial intelligence to mitigate, as far as possible, the risks that generative AI tools introduce.