#FactCheck: Phishing scam claims Jio is offering a ₹700 Holi reward through a promotional link
Executive Summary:
A viral post circulating on various social media platforms claims that Reliance Jio is offering a ₹700 Holi gift to its users, accompanied by a link for individuals to claim the offer. The post has gained significant traction, with many users engaging with it in good faith, believing it to be a legitimate promotional offer. However, careful investigation has confirmed that the post is a phishing scam designed to steal personal and financial information from unsuspecting users. This report examines the facts surrounding the viral claim, confirms its fraudulent nature, and provides recommendations to minimize the risk of falling victim to such scams.
Claim:
Reliance Jio is offering a ₹700 reward as part of a Holi promotional campaign, accessible through a shared link.

Fact Check:
Upon review, it has been verified that this claim is misleading. Reliance Jio has not announced any such promotional offer for Holi. The link being forwarded is a phishing scam designed to steal users' personal and financial details. There is no mention of this offer on Jio's official website or verified social media accounts. The URL included in the message does not belong to the official Jio domain, indicating a fake website. The site asks visitors for personal information, which can then be misused for cybercriminal activities. Additionally, we checked the link with the ScamAdviser website, which flagged it as suspicious and unsafe.
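For readers who want to apply the same domain check themselves, here is a minimal sketch in Python, assuming jio.com is the legitimate Jio domain; the scam URL shown is hypothetical and appears only to illustrate the check.

```python
# A minimal sketch, assuming jio.com is the legitimate Jio domain; the scam
# URL below is hypothetical, shown only to illustrate the check.
from urllib.parse import urlparse

def is_official_jio_link(url: str) -> bool:
    """Return True only if the link's host is jio.com or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    # A naive endswith("jio.com") check would also accept lookalikes such as
    # "notjio.com", so match the domain boundary explicitly.
    return host == "jio.com" or host.endswith(".jio.com")

print(is_official_jio_link("https://jio-holi-gift.example/claim"))  # False (hypothetical scam link)
print(is_official_jio_link("https://www.jio.com/offers"))           # True
```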


Conclusion:
The viral post claiming that Reliance Jio is offering a ₹700 Holi gift is a phishing scam. There is no legitimate offer from Jio, and the link provided leads to a fraudulent website designed to steal personal and financial information. Users are advised not to click on the link and to report any suspicious content. Always verify promotions through official channels to protect personal data from cybercriminal activities.
- Claim: Users can claim ₹700 by participating in Jio's Holi offer.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

CAPTCHA, or the Completely Automated Public Turing test to tell Computers and Humans Apart, is an image or piece of distorted text that users have to identify or interpret to prove they are human. The concept dates back to the early 2000s, and 2007 marked the launch of reCAPTCHA, the free service now run by Google and one of the most commonly used technologies for telling computers apart from humans. CAPTCHA protects websites from spam and abuse by using tests that are easy for humans but were meant to be difficult for bots to solve.
But this has now changed. As AI becomes more and more sophisticated, it can solve CAPTCHA tests more accurately than humans, rendering them increasingly ineffective. This raises the question of whether CAPTCHA is still effective as a detection tool in the face of these advancements.
CAPTCHA Evolution: From 2007 Till Now
CAPTCHA has evolved through various versions to keep bots at bay. reCAPTCHA v1 relied on distorted text recognition, v2 introduced image-based tasks and behavioural analysis, and v3 operated invisibly, assigning risk scores based on user interactions. While these advancements improved user experience and security, AI now solves CAPTCHA with 96% accuracy, surpassing humans (50-86%). Bots can mimic human behaviour, undermining CAPTCHA’s effectiveness and raising the question: is it still a reliable tool for distinguishing real people from bots?
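To make the risk-score model concrete, here is a minimal server-side sketch in Python of how a site typically verifies a reCAPTCHA v3 token, assuming the `requests` library and a secret key stored in a RECAPTCHA_SECRET environment variable; the 0.5 score threshold is an illustrative assumption, not an official recommendation.

```python
# A minimal server-side sketch, assuming the `requests` library and a secret
# key stored in the RECAPTCHA_SECRET environment variable; the 0.5 threshold
# is an illustrative assumption, not an official recommendation.
import os
import requests

def verify_recaptcha_v3(token: str, min_score: float = 0.5) -> bool:
    """Ask Google's siteverify endpoint whether a v3 token looks human enough."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": os.environ["RECAPTCHA_SECRET"], "response": token},
        timeout=10,
    )
    result = resp.json()
    # v3 responses include a `score` from 0.0 (very likely a bot) to 1.0
    # (very likely a human) alongside the basic `success` flag.
    return result.get("success", False) and result.get("score", 0.0) >= min_score
```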
Smarter Bots and Their Rise
AI techniques such as machine learning, deep learning and neural networks have advanced at a very fast pace over the past decade, making it easier for bots to bypass CAPTCHA. They allow bots to process and interpret CAPTCHA challenges, whether text or images, with almost human-like ability. Optical Character Recognition (OCR) is one example: early versions of CAPTCHA relied on distorted text, and OCR models can now recognise and decipher that text, making such challenges useless. AI trained on huge datasets enables image recognition, allowing bots to identify the specific objects a challenge asks for. Bots can also mimic human habits and interaction patterns, fooling behavioural-analysis defences.
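As an illustration of the OCR point, off-the-shelf tooling can already read distorted text in a couple of lines; the sketch below assumes the Tesseract engine plus the pytesseract and Pillow packages are installed, and the image file name is hypothetical.

```python
# A minimal sketch of how off-the-shelf OCR reads distorted text, assuming the
# Tesseract engine plus the pytesseract and Pillow packages are installed; the
# image file name is hypothetical.
from PIL import Image
import pytesseract

image = Image.open("distorted_text_sample.png").convert("L")  # convert to grayscale
print(pytesseract.image_to_string(image))  # prints the recognised text
```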
To defeat CAPTCHA, attackers have also been known to use adversarial machine learning, that is, AI models trained specifically to defeat CAPTCHA. They collect CAPTCHA datasets and answers and train a model to predict the correct responses. The consequences of CAPTCHA failure for platforms range from fraud and spam to cybersecurity breaches and cyberattacks.
CAPTCHA vs Privacy: GDPR and DPDP
GDPR and the DPDP Act emphasise protecting personal data, including online identifiers like IP addresses and cookies. Both frameworks mandate transparency when data is transferred internationally, raising compliance concerns for reCAPTCHA, which processes data on Google’s US servers. Additionally, reCAPTCHA's use of cookies and tracking technologies for risk scoring may conflict with the DPDP Act's broad definition of data. The lack of standardisation in CAPTCHA systems highlights the urgent need for policymakers to reevaluate regulatory approaches.
CyberPeace Analysis: The Future of Human Verification
CAPTCHA, once a cornerstone of online security, is losing ground as AI outperforms humans in solving these challenges with near-perfect accuracy. Innovations like invisible CAPTCHA and behavioural analysis provided temporary relief, but bots have adapted, exploiting vulnerabilities and undermining their effectiveness. This decline demands a shift in focus.
Emerging alternatives like AI-based anomaly detection, biometric authentication, and blockchain verification hold promise but raise ethical concerns around privacy, inclusivity, and surveillance. The battle against bots is not just about tools; it is about reimagining trust and security in a rapidly evolving digital world.
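As a rough illustration of what AI-based anomaly detection could look like, the sketch below in Python trains an Isolation Forest on human-like sessions and flags machine-like outliers, assuming scikit-learn and NumPy are available; the session features, sample values and contamination rate are hypothetical choices, not a production model.

```python
# An illustrative sketch of AI-based anomaly detection for bot filtering,
# assuming scikit-learn and NumPy; the features, sample values and
# contamination rate are hypothetical, not a production model.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features:
# [avg seconds between clicks, mouse-path curvature, requests per minute]
human_sessions = np.array([
    [1.8, 0.42, 6],
    [2.5, 0.31, 4],
    [1.2, 0.55, 9],
    [3.1, 0.28, 3],
    [2.0, 0.47, 5],
    [1.6, 0.36, 7],
])

# Fit on sessions believed to be human; unusual sessions are flagged later.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(human_sessions)

new_sessions = np.array([
    [2.2, 0.40, 5],      # human-like timing and volume
    [0.05, 0.01, 240],   # machine-like timing and volume
])
# predict() returns 1 for inliers (human-like) and -1 for anomalies (likely bots).
print(detector.predict(new_sessions))
```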
AI is clearly winning the CAPTCHA war, but the real victory will be designing solutions that balance security, user experience and ethical responsibility. It’s time to embrace smarter, collaborative innovations to secure a human-centric internet.
References
- https://www.business-standard.com/technology/tech-news/bot-detection-no-longer-working-just-wait-until-ai-agents-come-along-124122300456_1.html
- https://www.milesrote.com/blog/ai-defeating-recaptcha-the-evolving-battle-between-bots-and-web-security
- https://www.technologyreview.com/2023/10/24/1081139/captchas-ai-websites-computing/
- https://datadome.co/guides/captcha/recaptcha-gdpr/

Introduction
Who would have predicted that the crime of slavery would haunt our lives through the digital world? In a recent operation, the cyber wing of Maharashtra rescued 60 Indian nationals from a cyber slavery racket run by armed rebel groups operating in Myanmar and arrested five suspects, including a foreign national, who acted as recruiting agents. As per the reports, the racketeers contacted various individuals, enticing them with offers of high-paying jobs in East Asian countries. The operation exposed a carefully designed crime network operating across Myanmar, Thailand, and Malaysia that targets vulnerable individuals through deceptive means and forces them to commit cyber fraud and financial crimes under the guise of an authentic industrial setup. This disturbing episode is only one of many such cyber slavery incidents that have been uncovered, while various other rackets continue to operate in the shadows of cyberspace. A similar case was reported in March 2025, involving the disturbing ordeal of a 52-year-old father from Bihar's Gopalganj whose son was lured into working in a scam call centre under the pretence of a data entry job in Thailand.
Counting the Unseen: The Dark Metrics of Cyber Slavery
As per a United Nations report from October 2024, a large number of young individuals are enslaved after being lured, often through social media platforms, with the promise of high-paying jobs; what follows is an intricate web of cybercriminal operations run from illegal scam compounds. According to the UN Office on Drugs and Crime (UNODC), financial losses from scams in Southeast Asia reached between $18 billion (Rs 1.6 lakh crore) and $37 billion (Rs 3.2 lakh crore) in 2023, much of it linked to organised crime in countries such as Cambodia, Myanmar, and Laos. Acting on a similar premise, the Indian Cyber Crime Coordination Centre (I4C), a division under the Ministry of Home Affairs (MHA), constituted an inter-ministerial committee to address the significant rise in cybercrime originating from Southeast Asian countries, including Cambodia, Myanmar, and Laos.
Data from the Bureau of Immigration under the Union Ministry of Home Affairs shows that around 29,466 Indians who travelled on visitor visas to Thailand, Vietnam, Myanmar, and Cambodia between January 2022 and May 2024 have gone missing.
From Rescue to Reform: How India is Tackling Cyber Slavery
The recent events have prompted the government to undertake vigilant rescue operations for missing individuals who became victims of this modern-day trafficking and to coordinate with the foreign ministries of Myanmar, Thailand and Cambodia for extradition and repatriation. Notably, in 2015, India, along with seven other countries in South Asia, namely Afghanistan, Bangladesh, Bhutan, Maldives, Nepal, Pakistan and Sri Lanka, came together to address transnational threats that transcend geographical and cultural borders, in cooperation with the United Nations Office on Drugs and Crime (UNODC). The collaboration produced a Compendium of Bilateral and Regional Instruments for South Asia providing for International Cooperation in Criminal Matters. Further, in January 2025, UNODC and the European Union launched a €9 million regional project titled "Preventing and Addressing Trafficking in Human Beings and the Smuggling of Migrants in South Asia." The Government of India, through its various agencies, also issues guidelines and advisories on the National Cyber Crime Reporting Portal. Additionally, law enforcement agencies are actively involved, and cybersecurity NGOs are proactively spreading awareness about the red flags associated with threats such as cyber slavery.
Recommendations: A Call to Action
- The various advisories released by the Government of India emphasise the need for Indian nationals to verify the credentials of prospective employers through the Indian Embassy in the country concerned.
- The authorities and various agencies also stress the need for individuals to refrain from sharing personal information such as location details, contact information or any information pertaining to personal relationships that can be exploited by such criminals.
- The fundamental way of tackling the crime of cyber slavery is to ensure digital literacy and increase awareness through public campaigns and educational programmes.
- The need of the hour is international cooperation and collaboration to undertake a concerted effort to bring back the victims and penalise all those who facilitate such criminal activities.
References
- https://www.thehindu.com/news/national/more-than-60-indians-forced-into-cyber-slavery-rescued-from-myanmar-5-arrested/article69438991.ece
- https://www.indiatoday.in/india-today-insight/story/cyber-slavery-the-new-job-con-trapping-indian-youth-abroad-2637157-2024-11-21
- https://indianexpress.com/article/india/mha-high-powered-committee-cybercrimes-from-se-asia-9345843/
- https://www.unodc.org/documents/terrorism/Publications/SAARC%20compendium/SA_Compendium_Volume-2.pdf

YouTube is one of the best platforms for video producers and offers a real chance of generating substantial profits. Content producers are always looking for ways to maximise the views, likes, comments, and subscribers on their videos and channels. As a result, some people use YouTube bots to artificially raise their rankings on the platform, which might help them get more organic views and reach a larger audience. However, this strategy is typically seen as unfair and can violate YouTube's terms of service.
As YouTube grows in popularity, so does the usage of YouTube bots. These bots are software programs that automate operations on the YouTube platform, such as watching, liking, or disliking videos, subscribing to or unsubscribing from channels, posting comments, and adding videos to playlists. YouTube bots have been around for a while, and many YouTubers use them to inflate the number of views on their videos and channels, which helps them rank higher in YouTube's algorithm. Researchers have now discovered a new bot that steals private information from YouTube users' accounts.
CRIL (Cyble Research and Intelligence Labs) has been monitoring new and active malware families. CRIL has discovered a new YouTube bot malware capable of viewing, liking, and commenting on YouTube videos. Furthermore, it is capable of stealing sensitive information from browsers and acting as a bot that accepts orders from a Command and Control (C&C) server to carry out other harmful operations.
The Bot Insight
This YouTube bot has the same capabilities as other YouTube bots, including the ability to view, like, and comment on videos. Additionally, it can steal private data from browsers and act as a bot that takes commands from a Command and Control (C&C) server for various malicious purposes. Researchers from Cyble analysed the inner workings of this information stealer using the sample with hash (SHA256) e9dac8b677a670e70919730ee65ab66cc27730378b9233d944ad7879c530d312. They found that it is a 32-bit executable built with the .NET compiler.
- The virus runs an anti-VM check as soon as it is executed to thwart researchers' attempts to detect and analyze the malware in a virtual environment.
- It stops execution if it finds that it is operating in such a controlled environment. Otherwise, it carries out the tasks specified in its argument strings.
- Additionally, the virus creates a mutex, copies itself to the %appdata% folder as AvastSecurity.exe, and then runs the copy using cmd.exe.
- The malware also creates a task scheduler entry for persistence, while the mutex helps ensure that only one instance is running at a time (a basic triage sketch for these artefacts follows this list).
- The AvastSecurity.exe program then harvests cookies, autofill information, and login credentials from the Chromium-based browsers installed on the victim's system.
- To view the chosen video, the virus invokes its YouTube Playwright function, passing the previously mentioned arguments along with the browser's path and the harvested cookie data.
- The YouTube Playwright function launches the browser environment with the specified parameters and automates actions like watching, liking, and commenting on YouTube videos. The function depends on the Microsoft.Playwright package.
- The malware establishes a connection to a C2 server and receives instructions to delete its scheduled task entry and terminate its own process, exfiltrate log files to the C2 server, download and run additional files, and start or stop watching a YouTube video.
- It also verifies that the victim's PC has the required dependencies installed, including the Playwright package and the Chrome browser. When it receives the "view" command, it downloads and installs these dependencies if they are missing.
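Based on the indicators described above, defenders can do a quick first-pass check for this particular bot. The Windows-only sketch below, in Python, looks for the AvastSecurity.exe copy under %APPDATA% and any scheduled task referencing it; the file and task names simply follow the indicators in this report and do not constitute a complete detection.

```python
# A basic, Windows-only triage sketch based on the indicators in this report:
# a copy named AvastSecurity.exe under %APPDATA% and a scheduled task that
# references it. This is an illustration, not a complete detection.
import os
import subprocess

appdata = os.environ.get("APPDATA", "")
suspect_path = os.path.join(appdata, "AvastSecurity.exe")
if os.path.exists(suspect_path):
    print(f"[!] Suspicious file found: {suspect_path}")

# Dump all scheduled tasks in verbose list form and look for the binary name.
tasks = subprocess.run(
    ["schtasks", "/query", "/fo", "LIST", "/v"],
    capture_output=True, text=True
).stdout
if "AvastSecurity.exe" in tasks:
    print("[!] Scheduled task referencing AvastSecurity.exe detected")
```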
Recommendations
The following is a list of some of the most critical cybersecurity best practices that serve as the first line of defense against intruders. We propose that our readers follow the advice provided below:
- Avoid downloading pirated software from warez/torrent websites. Such malware is commonly bundled with the "hack tools" promoted on YouTube, pirate sites, etc.
- Use strong passwords and enforce multi-factor authentication wherever feasible.
- Enable automatic software updates on your laptop, smartphone, and other linked devices.
- Use a reputable antivirus and internet security software package on your linked devices, such as your computer, laptop, and smartphone.
- Avoid clicking on suspicious links and opening email attachments without verifying that they are legitimate.
- Educate staff members on how to guard against threats such as phishing and unsafe URLs.
- Block URLs, such as torrent/warez sites, that might be used to propagate malware.
- Monitor beaconing activity at the network level to prevent malware or threat actors (TAs) from exfiltrating data.
Conclusion
Using YouTube bots may be a seductive strategy for content producers looking to boost their rankings and expand their viewership on the platform. However, the use of bots is typically regarded as unfair and may violate YouTube's terms of service. Utilizing YouTube bots also carries additional risk: if they are detected, the user's account may be suspended or terminated. This pressing issue is best mitigated through awareness drives and surveys that identify the core points of contention. Nonprofits and civil society organizations can bridge the gap between the tech giant and end users to build better awareness of these little-known bots.