#FactCheck - Misleading Social Media Claim Targets University Over Viral Video
Executive Summary
A video circulating on social media shows a woman using abusive language in front of a camera. Users sharing the clip claim that the woman is a professor at Galgotias University and that the video exposes her alleged reality. However, an investigation by CyberPeace found the claim to be misleading. The probe revealed that the woman seen in the viral video has no connection with Galgotias University and is not a professor there. Fact-checking further showed that the video is not recent but around seven years old. The woman featured in the clip was identified as Shubhrastha, a political strategist by profession.
Claim:
A user on X (formerly Twitter) shared the viral video on February 18, 2026, claiming: “A ‘class in abuse studies’ at Galgotias University? An obscene video of a professor teaching ethics has gone viral. Another shameful chapter has been added to the list of controversies surrounding Galgotias University.” The post further alleged that after falsely claiming a Chinese robot as its own, the university’s “Culture and Ethics” faculty member was seen publicly using abusive language in the viral clip. The post link and its archived version are provided below:

Fact Check:
To verify the authenticity of the viral claim, we extracted key frames from the video and conducted a reverse image search using Google Lens. During the research, we found the same video uploaded on the Indian Spectator’s YouTube channel on June 9, 2018.

The video was also found on another YouTube channel, where it had been uploaded on June 12, 2018.

Conclusion
The research clearly establishes that the woman seen in the viral video has no association with Galgotias University and is not a professor there. The clip is also not recent but approximately seven years old. The woman in the video was identified as Shubhrastha, a political strategist.
Related Blogs
Misinformation spread has become a cause for concern for all stakeholders, be it the government, policymakers, business organisations or the citizens. The current push for combating misinformation is rooted in the growing awareness that misinformation leads to sentiment exploitation and can result in economic instability, personal risks, and a rise in political, regional, and religious tensions. The circulation of misinformation poses significant challenges for organisations, brands and administrators of all types. The spread of misinformation online poses a risk not only to the everyday content consumer but also to those who share it and to the platforms themselves. Sharing misinformation in the digital realm, intentionally or not, can have real consequences.
Consequences for Platforms
Platforms have been scrutinised for the content they allow to be published and what they don't. It is important to understand not only how this misinformation affects platform users, but also its impact and consequences for the platforms themselves. These consequences highlight the complex environment that social media platforms operate in, where the stakes are high from the perspective of both business and societal impact. They are:
- Legal Consequences: Platforms can be fined by regulators if they fail to comply with content moderation or misinformation-related laws. A prime example of such a law is the EU’s Digital Services Act, created to regulate digital services that act as intermediaries between consumers and goods, services, and content. Platforms can also face lawsuits from individuals, organisations or governments for damages caused by misinformation; defamation suits are standard practice when dealing with sources of misinformation. In India, the Prohibition of Fake News on Social Media Bill, 2023 is in the pipeline and would establish a regulatory body for fake news on social media platforms.
- Reputational Consequences: Platforms employ a trust model where the user trusts it and its content. If a user loses trust in the platform because of misinformation, it can reduce engagement. This might even lead to negative coverage that affects the public opinion of the brand, its value and viability in the long run.
- Financial Consequences: Businesses may end their engagement with platforms accused of hosting misinformation, which can lead to a drop in revenue. This can also have major consequences for the long-term financial health of the platform, such as a decline in stock prices.
- Operational Consequences: To counter the scrutiny from regulators, the platform might need to engage in stricter content moderation policies or other resource-intensive tasks, increasing operational costs for the platforms.
- Market Position Loss: If the reliability of a platform is under question, users can migrate to other platforms, causing a loss of market share in favour of those that manage misinformation more effectively.
- Freedom of Expression vs. Censorship Debate: Platforms must balance freedom of expression against the prevention of misinformation. Stricter content moderation can invite accusations of censorship if users feel their opinions are being unfairly suppressed.
- Ethical and Moral Responsibilities: Platforms’ accountability extends to moral accountability, since the content they host affects spheres of users’ lives such as public health and democracy. Misinformation can cause real-world harm, from health misinformation to incitement of violence, which means platforms carry a social responsibility as well.
Misinformation has turned into a global issue and because of this, digital platforms need to be vigilant while they navigate the varying legal, cultural and social expectations across different jurisdictions. Efforts to create standardised practices and policies have been complicated by the diversity of approaches, leading platforms to adopt flexible strategies for managing misinformation that align with global and local standards.
Addressing the Consequences
These consequences can be addressed by undertaking the following measures:
- Implementing more robust content moderation systems that combine AI and human oversight to identify and remove misinformation effectively.
- Enhancing transparency in platform policies for content moderation and decision-making, which would build user trust and reduce the backlash associated with perceived censorship.
- Collaborating with fact-checkers through partnerships to help verify the accuracy of content and reduce the spread of misinformation.
- Engaging with regulators proactively to stay ahead of legal and regulatory requirements and avoid punitive actions.
- Investing in media literacy initiatives that help users critically evaluate the content available to them.
Final Takeaways
The accrual of misinformation on digital platforms presents significant challenges across legal, reputational, financial, and operational functions for all stakeholders. As a result, platforms face a critical need to balance the interlinked but seemingly opposed priorities of preventing misinformation and upholding freedom of expression. They must invest in robust content moderation systems with built-in transparency, collaborate with fact-checkers, and support media literacy efforts to mitigate the adverse effects of misinformation. In addition, adapting to diverse international standards is essential to maintaining their global presence and societal trust.

Introduction
As various technological developments enable our phones to take on a greater role, these devices, along with the applications they host, also become susceptible to greater risks. Recently, Zimperium, a tech company that protects mobiles and applications from threats like malware and phishing, announced that it had identified malware aimed at stealing information from customers of Indian banks. The Indian Express reports that data from over 25 million devices has been exfiltrated, making the campaign increasingly dangerous judging by the number of devices it has affected so far.
Understanding the Threat: The Case of FatBoyPanel
Malware is malicious software: a file or program intentionally harmful to a network, server, computer, or other device. It comes in various types; in the case at hand, it is a Trojan horse, i.e., a file or program designed to trick the victim into assuming it is a legitimate software program. Once installed and activated, Trojans can execute malicious functions on a device.
FatBoyPanel, as it is called, is a malware management system behind a massive cyberattack targeting Indian mobile users and their bank details. The modus operandi relied on social engineering: attackers posed as bank officials, called their targets, and warned that their accounts would be suspended immediately unless their bank details were updated. When targets panicked and asked for instructions, they were told to download a banking application from a link sent as an Android Package Kit (APK) file (which requires enabling “Install from Unknown Sources”) and install it. Other attackers acted out variations of the same script, all to trick the target into downloading the file. The apps behind the links are fake, and once installed they immediately ask for critical permissions such as access to contacts, device storage, overlay permissions (to show fake login pages over real apps), and access to SMS messages (to steal OTPs and banking alerts). This lets them capture text messages (especially bank-related OTPs), read stored files, monitor app usage, and more. The stolen data is sent to the FatBoyPanel backend, a command and control (C&C) server that acts as a centralised control room, where hackers can view real-time data on a dashboard and further download and sell it.
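The permission combination described above (SMS access, overlay windows, contacts) is itself a useful red flag. As a purely illustrative sketch, not part of Zimperium's tooling, a simple heuristic over an app's requested Android permissions might look like this; the function name and the 3-of-4 threshold are assumptions made for the example:

```python
# Hypothetical heuristic: flag an app whose requested Android permissions
# resemble the risky combination used by the fake banking apps described above.
RISKY_COMBO = {
    "android.permission.READ_SMS",             # read OTPs and bank alerts
    "android.permission.RECEIVE_SMS",          # intercept incoming SMS
    "android.permission.SYSTEM_ALERT_WINDOW",  # overlay fake login pages
    "android.permission.READ_CONTACTS",        # harvest contact lists
}

def looks_like_banking_trojan(requested_permissions):
    """Return True if the app requests enough of the risky combination
    to warrant manual review. The 3-of-4 threshold is illustrative."""
    hits = RISKY_COMBO & set(requested_permissions)
    return len(hits) >= 3

# Example: a sideloaded "banking" app asking for three of the four
suspicious = looks_like_banking_trojan([
    "android.permission.READ_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.SYSTEM_ALERT_WINDOW",
    "android.permission.INTERNET",
])
print(suspicious)
```

A real detection pipeline would weigh many more signals (install source, signing certificate, network behaviour); this only captures the permission pattern the article describes.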
Protecting Yourself: Essential Precautions in the Digital Realm
Although there are various other types of malware, how one must deal with them remains the same. Following are a few instructions that one can practice in order to stay safe:
- Be cautious with app downloads: Only download apps from official app stores (Google Play Store, Apple App Store). Even then, check the developer's reputation, app permissions, and user reviews before installing.
- Keep your operating system and apps updated: Updates often include security patches that protect against known vulnerabilities.
- Be wary of suspicious links and attachments: Avoid clicking on links or opening attachments in unsolicited emails, SMS messages, or social media posts. Verify the sender's authenticity before interacting.
- Enable multi-factor authentication (MFA) wherever possible: While malware like FatBoyPanel can sometimes bypass OTP-based MFA, it still adds an extra layer of security against many other threats.
- Use strong and unique passwords: Employ a combination of uppercase and lowercase letters, numbers, and symbols for all your online accounts. Avoid reusing passwords across different platforms.
- Install and maintain a reputable mobile security app: These apps can help detect and remove malware, as well as warn you about malicious websites and links (Bitdefender, etc.)
- Regularly review app permissions and give access judiciously: Check what permissions your installed apps have and revoke any that seem unnecessary or excessive.
- Educate yourself and stay informed: Keep up-to-date with the latest cybersecurity threats and best practices.
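The password advice in the list above can be made concrete. The sketch below is a minimal, illustrative policy check, not a real strength estimator (tools such as zxcvbn do this far more rigorously); the function name and the 12-character minimum are assumptions for the example:

```python
import string

def meets_basic_policy(password: str, min_length: int = 12) -> bool:
    """Illustrative check for the mix recommended above: sufficient length
    plus uppercase, lowercase, digit, and symbol character classes."""
    checks = [
        len(password) >= min_length,
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return all(checks)

print(meets_basic_policy("Summer2024"))       # False: too short, no symbol
print(meets_basic_policy("T4lc#pluma!Rio9"))  # True: all classes present
```

Note that composition rules alone do not guarantee strength; length and unpredictability matter more, which is why unique, randomly generated passwords from a password manager remain the better practice.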
Conclusion
The emergence of malware management systems indicates just how sophisticated attackers have become over the years. Vigilance on the part of the general public is recommended, but so are greater efforts to raise awareness of such methods of crime, as people remain vulnerable in matters of cybersecurity. With sensitive information at stake, we must take steps to sensitise the public and better prepare them for the growing landscape of the digital world.
References
- https://zimperium.com/blog/mobile-indian-cyber-heist-fatboypanel-and-his-massive-data-breach
- https://indianexpress.com/article/technology/tech-news-technology/fatboypanel-new-malware-targeting-indian-users-what-is-it-9965305/
- https://www.techtarget.com/searchsecurity/definition/malware

YouTube is one of the best forums for many video producers, and it offers a great chance of generating substantial profits. Content producers compete to get the most views, likes, comments, and subscribers for their videos and channels. As a result, some people use YouTube bots to artificially raise their rankings on the platform, which can help them gain more organic views and reach a larger audience. However, this strategy is typically seen as unfair and can violate YouTube’s terms of service.
As YouTube grows in popularity, so does the usage of YouTube bots. These bots are software programs that can automate operations on the platform, such as watching, liking, or disliking videos, subscribing to or unsubscribing from channels, posting comments, and adding videos to playlists. YouTube bots have been around for a while; many YouTubers use them to inflate view counts on their videos and channels, which helps them rank higher in YouTube’s algorithm. Now, researchers have discovered a new bot that also steals private information from YouTube users’ accounts.
CRIL (Cyble Research and Intelligence Labs) has been monitoring new and active malware families and has discovered a new YouTube bot malware capable of viewing, liking, and commenting on YouTube videos. Furthermore, it can steal sensitive information from browsers and act as a bot that accepts orders from a Command and Control (C&C) server to carry out other harmful operations.
The Bot Insight
This YouTube bot has the same capabilities as other YouTube bots, including the ability to view, like, and comment on videos. Additionally, it can steal private data from browsers and act as a bot that takes commands from a Command and Control (C&C) server for various malicious purposes. Researchers from Cyble examined the inner workings of this information stealer using the sample with hash (SHA-256) e9dac8b677a670e70919730ee65ab66cc27730378b9233d944ad7879c530d312. They found that it was built with the .NET compiler and is a 32-bit executable file.
- The virus runs an AntiVM check as soon as it is executed to thwart researchers’ attempts to find and analyze malware in a virtual environment.
- It stops execution if it finds that it is operating in a controlled environment. If not, it carries out the tasks listed in its argument strings.
- Additionally, the virus creates a mutex, copies itself to the %appdata% folder as AvastSecurity.exe, and then runs the copy using cmd.exe.
- The copy also creates a task scheduler entry, which helps ensure the malware’s persistence on the system.
- The AvastSecurity.exe program harvests cookies, autofill entries, and login credentials from the Chromium-based browsers installed on the victim’s system.
- To view the chosen video, the virus runs the YouTube Playwright function, passing the arguments indicated earlier along with the browser’s path and the harvested cookie data.
- The YouTube Playwright function launches the browser environment with the specified parameters and automates actions such as watching, liking, and commenting on YouTube videos. This functionality depends on Microsoft’s Playwright framework.
- The malware establishes a connection to a C2 server and receives instructions to delete its scheduled task entry and terminate its own process, upload log files to the C2 server, download and run other files, and start or stop watching a YouTube video.
- Additionally, it verifies that the victim’s PC has the required dependencies, including the Playwright package and the Chrome browser, installed. When it gets the command “view,” it will download and install these dependencies if they are missing.
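For defenders, the sample hash published above is usable as an indicator of compromise (IOC). The sketch below shows one minimal way to match a suspect file against it, assuming only the hash given in the report; the helper function names are hypothetical:

```python
import hashlib

# SHA-256 of the YouTube bot sample, as published in the analysis above
IOC_SHA256 = "e9dac8b677a670e70919730ee65ab66cc27730378b9233d944ad7879c530d312"

def sha256_of_file(path: str) -> str:
    """Hash the file in 8 KB chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_ioc(path: str) -> bool:
    """True only when the file is byte-for-byte the known malicious sample."""
    return sha256_of_file(path) == IOC_SHA256
```

Hash matching only catches this exact sample; repacked or recompiled variants will have different hashes, so behavioural detection is still needed alongside IOC checks.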
Recommendations
The following is a list of some of the most critical cybersecurity best practices that serve as the first line of defense against intruders. We propose that our readers follow the advice provided below:
- Avoid downloading pirated software from warez/torrent websites. Such malware is commonly found in “Hack Tools” promoted on websites such as YouTube, pirate sites, etc.
- When feasible, use strong passwords and impose multi-factor authentication.
- Enable automatic software updates on your laptop, smartphone, and other linked devices.
- Use a reputable antivirus and internet security software package on your linked devices, such as your computer, laptop, and smartphone.
- Avoid clicking on suspicious links and opening email attachments without verifying they are legitimate. Train staff members to guard against dangers like phishing and unsafe URLs.
- Block URLs, such as torrent/warez sites, that might be used to propagate malware. To prevent malware or threat actors (TAs) from stealing data, monitor beaconing activity at the network level.
Conclusion
Using YouTube bots may be a seductive strategy for content producers looking to raise their rankings and expand their viewership on the site. However, the use of bots is typically regarded as unfair and may violate YouTube’s terms of service. Using YouTube bots also carries additional risk because they might be detected, which could lead to account suspension or termination. Awareness drives and surveys to identify the real points of contention are the best way to mitigate this pressing issue. Nonprofits and civil society organizations can bridge the gap between the tech giant and end users to spread better know-how about these little-known bots.