SharpRhino RAT: Advanced Threat Hidden in Legitimate Software
Research Wing
Innovation and Research
PUBLISHED ON
Aug 29, 2024
Overview:
SharpRhino is a recent addition to the cybercrime landscape: a Remote Access Trojan (RAT) actively used by the Hunters International ransomware group. The malware is highly developed and targets IT specialists in particular, counting on victims' trust in the tool's apparent legitimacy to slip into the network. Disguised as a genuine software installer, SharpRhino has been active since mid-June 2024; Quorum Cyber discovered it in early August 2024 while investigating a ransomware incident.
About Hunters International Group:
Hunters International has emerged as one of the most notorious ransomware groups, compromising more than 134 targets worldwide in the first seven months of 2024. The group is widely believed to be a rebrand of the previously active Hive ransomware group, with considerable similarities between the two codebases. Its particular focus on IT employees demonstrates how tactically it moves when gaining access to organisations' networks.
Modus Operandi:
1. Typosquatting Technique
SharpRhino is distributed mainly through a typosquatted domain impersonating the site of Angry IP Scanner, a popular network discovery tool. The malicious installer, labelled ipscan-3.9.1-setup.exe, is a 32-bit Nullsoft installer that embeds a password-protected 7z archive.
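To illustrate the defensive side of this technique, here is a minimal, hypothetical typosquatting check: it compares a candidate domain's second-level label against that of the genuine site using edit distance and flags near-misses. The distance threshold and the naive label extraction are illustrative assumptions, not details from the advisory.

```python
# Defender-side sketch: flag domains suspiciously close to, but not identical
# to, a known-good domain. Purely illustrative; the threshold is an assumption.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def sld(domain: str) -> str:
    """Naive second-level-label extraction; good enough for a sketch."""
    parts = domain.lower().strip(".").split(".")
    return parts[-2] if len(parts) >= 2 else parts[0]

LEGIT = "angryip.org"  # the genuine Angry IP Scanner domain

def looks_typosquatted(domain: str, max_distance: int = 3) -> bool:
    d = levenshtein(sld(domain), sld(LEGIT))
    return 0 < d <= max_distance  # close to, but not equal to, the real name

# The look-alike domains below reappear in the IOC list later in this post.
for candidate in ["angryipo.org", "angryipsca.com", "angryip.org"]:
    print(candidate, "->", "suspicious" if looks_typosquatted(candidate) else "ok")
```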
2. Installation Process
Execution of Installer: When the victim downloads and executes the installer, it modifies the Windows registry to attain persistence. It does so by creating a registry entry that launches a malicious file, Microsoft.AnyKey.exe, which masquerades as a legitimate Microsoft Visual Studio tool.
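As a hedged illustration of how a defender might hunt for this persistence mechanism, the sketch below enumerates the common Run keys and flags any value referencing the binary named above. It is Windows-only, uses only the standard-library winreg module, and the simple substring match is an assumption made for brevity.

```python
# Windows-only sketch: scan common Run keys for the persistence entry
# described above. Substring matching is a deliberate simplification.
import winreg

SUSPECT_SUBSTRINGS = ["microsoft.anykey.exe"]  # binary named in this advisory

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def scan_run_keys():
    hits = []
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue  # key absent or inaccessible
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            if any(s in str(value).lower() for s in SUSPECT_SUBSTRINGS):
                hits.append((path, name, value))
            index += 1
        winreg.CloseKey(key)
    return hits

if __name__ == "__main__":
    for path, name, value in scan_run_keys():
        print(f"Suspicious Run entry: {path}\\{name} -> {value}")
```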
Creation of Batch File: The installer drops a batch file, LogUpdate.bat, which runs PowerShell scripts on the device. These scripts compile C# code directly into memory, keeping the malware's operation covert and leaving little on disk.
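Defenders sometimes sweep dropped scripts for the compilation primitives this technique relies on. The sketch below scans a directory tree for a few indicative PowerShell patterns; the pattern list and scan root are illustrative assumptions, and real coverage would come from EDR telemetry such as AMSI or PowerShell Script Block Logging rather than static grepping.

```python
# Sketch: look for signs of in-memory C# compilation in dropped scripts.
import re
from pathlib import Path

# Indicative, not exhaustive, patterns for PowerShell-driven C# compilation.
PATTERNS = [
    re.compile(r"Add-Type", re.IGNORECASE),             # compiles C# on the fly
    re.compile(r"CSharpCodeProvider", re.IGNORECASE),   # direct compiler use
    re.compile(r"System\.Reflection\.Assembly", re.IGNORECASE),  # in-memory load
]

def scan_scripts(root: str) -> None:
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in {".ps1", ".bat", ".cmd"}:
            continue
        try:
            text = path.read_text(errors="replace")
        except OSError:
            continue  # unreadable file; skip
        for pat in PATTERNS:
            if pat.search(text):
                print(f"{path}: matched {pat.pattern}")

scan_scripts(r"C:\ProgramData")  # hypothetical scan root
```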
Directory Creation: The installer establishes two directories used for C2 communication: C:\ProgramData\Microsoft\WindowsUpdater24 and C:\ProgramData\Microsoft\LogUpdateWindows.
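A minimal host check for these artefacts might look like the following; the paths come straight from this advisory, and the default ProgramData location is an assumption.

```python
# Sketch: report whether the SharpRhino directories named above exist.
from pathlib import Path

ARTEFACT_DIRS = [
    Path(r"C:\ProgramData\Microsoft\WindowsUpdater24"),
    Path(r"C:\ProgramData\Microsoft\LogUpdateWindows"),
]

for directory in ARTEFACT_DIRS:
    status = "PRESENT - investigate" if directory.exists() else "not found"
    print(f"{directory}: {status}")
```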
3. Execution and Functionality:
Command Execution: The malware can execute PowerShell commands on the infected system; these may include privilege escalation and follow-on actions such as lateral movement.
C2 Communication: SharpRhino communicates with command-and-control (C2) servers hosted on domains from platforms such as Cloudflare. This channel is used to receive commands from the attackers and to return any data of interest to them.
Data Exfiltration and Ransomware Deployment: Once SharpRhino has gained control, it can exfiltrate data and then deploy ransomware that encrypts files, appending a .locked extension. The attack typically concludes with a ransom note instructing victims on how to purchase the decryption key.
4. Propagation Techniques:
SharpRhino can also spread by copying itself to other machines, abusing the victim's network account while masquerading as trustworthy content such as emails or network-shared files. A compromised machine may then propagate the malware further across the organisation, for example through files shared with other employees.
Indicators of Compromise (IOCs):
Files:
LogUpdate.bat
Wiaphoh7um.t
ipscan-3.9.1-setup.exe
kautix2aeX.t
WindowsUpdate.bat
Command and Control Servers:
cdn-server-1.xiren77418.workers.dev
cdn-server-2.wesoc40288.workers.dev
angryipo.org
angryipsca.com
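To operationalise these IOCs, a defender might sweep DNS or proxy logs for the domains above. The sketch below assumes a plain-text log with one queried domain per line; the log path and format are illustrative assumptions and should be adapted to your own telemetry.

```python
# Sketch: sweep a DNS/proxy log for the C2 domains listed above.
C2_DOMAINS = {
    "cdn-server-1.xiren77418.workers.dev",
    "cdn-server-2.wesoc40288.workers.dev",
    "angryipo.org",
    "angryipsca.com",
}

def sweep_log(path: str):
    """Yield (line number, domain) for every hit in the log."""
    with open(path, encoding="utf-8", errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            domain = line.strip().lower()
            if domain in C2_DOMAINS:
                yield lineno, domain

if __name__ == "__main__":
    for lineno, domain in sweep_log("dns_queries.log"):  # hypothetical log file
        print(f"line {lineno}: match on {domain}")
```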
Precautionary measures to be taken:
To mitigate the risks posed by SharpRhino and similar malware, organizations should implement the following measures:
Implement Security Best Practices: Download software only from official sites, and be wary of look-alike domains that differ from the genuine one by just a few characters.
Enhance Detection Capabilities: Deploy detection technology that can identify the IOCs linked to SharpRhino, such as the file names and C2 domains listed above.
Educate Employees: Train IT staff and other employees to recognise phishing scams and to verify the origin of any application before installing it.
Regular Backups: Maintain regular, tested backups of important files and systems to minimise the impact of a ransomware attack on the business.
Conclusion:
SharpRhino represents an evolution in the tradecraft of ransomware operators such as Hunters International. By targeting IT professionals and employing complex delivery and execution schemes, it poses a serious threat to corporate networks. Organisations must understand its inner workings in order to fortify their security measures against this relatively new threat. Through the enforcement of proper security controls and continuous cybersecurity education, firms can reduce the risks posed by SharpRhino and related malware. Be safe, be knowledgeable and, above all, be secure.
The Toothbrush Hack
This tale, the Toothbrush Hack, straddles the ordinary and the sophisticated: an unassuming household item became the supposed tool of cyber crime. Herein lies the account of how three million electric toothbrushes purportedly turned into the unwitting infantry of a cyber skirmish, a Distributed Denial of Service (DDoS) assault that flirted with the thin line between the real and the outlandish.
In January, within Swiss borders, a story began circulating, first reported by the Aargauer Zeitung, a Swiss German-language daily newspaper. A legion of cybercriminals with honed digital acumen had, it was said, planted malware on some three million electric toothbrushes. These devices, mere slivers of plastic and circuitry, became agents of chaos, converging their electronic requests upon the servers of an undisclosed Swiss firm, plunging that digital domain into blackout for several hours and wreaking economic turmoil calculated in seven-figure sums.
The Entire Incident
The Aargauer Zeitung's report claimed that three million electric toothbrushes had been used for a distributed denial-of-service (DDoS) attack: cybercriminals had allegedly installed malware on the toothbrushes and used them to flood a Swiss company's website, knocking the site offline and causing significant financial loss. However, cybersecurity experts quickly questioned the veracity of the story, with some describing it as "total bollocks" and others pointing out that smart electric toothbrushes connect to smartphones and tablets via Bluetooth, making it impossible for them to launch DDoS attacks over the web. Fortinet clarified that toothbrushes being used for DDoS attacks had been presented merely as an illustration of a type of attack, and that no IoT botnets have been observed targeting toothbrushes or similar embedded devices.
The Tech Dilemma - IoT Hack
Imagine the juxtaposition of this narrative against our common expectations of technology: 'This example, which could have been from a cyber thriller, did indeed occur,' asserted the narratives that wafted through the press and social media. The story radiated outward with urgency, painting the image of IoT devices turned into evil tools of digital unrest. It was disseminated with such velocity that face value became an accepted currency amid news cycles. And yet, skepticism took root in the fertile minds of those who dwell in the domains of cyber guardianship.
Several cybersecurity and IoT experts postulated that the information from Fortinet had been contorted by the wrench of misinterpretation. They and their ilk highlighted a critical flaw: smart electric toothbrushes are bound to their smartphone or tablet counterparts by the tethers of Bluetooth, not the internet, stripping them of any innate ability to conduct a DDoS or any other type of cyber attack directly.
With this unraveling of an incident fit for our cyber age, we are presented with a sobering reminder of the threat spectrum that burgeons as the tendrils of the Internet of Things (IoT) insinuate themselves into our everyday fabrics. Innocuous devices, previously deemed immune to the internet's shadow, now stand revealed as potential conduits for cyber evil. The layers of impact are profound, touching the private spheres of individuals, the underpinning frameworks of national security, and the sinews that clutch at our economic realities. The viral incident itself, however, was misinformation.
IoT Weakness
IoT devices bear inherent weaknesses for twin reasons: the oft-overlooked element of security and the stark absence of a means to enact security measures. Ponder this problem: is there a pathway to traverse the security settings of an electric toothbrush? Or to install antivirus measures within the cooling confines of a refrigerator? The answers point to an unsettling simplicity: you cannot.
How to Protect
Vigilance - What then might be the protocol to safeguard our increasingly digital space? It begins with vigilance, the cornerstone of digital self-defense. Ensure the automatic updating of all IoT devices when they beckon with the promise of a new security patch.
Self-Awareness - Avoid the temptation of public USB charging stations, which, while offering electronic succor to your devices, could also stand as Trojan horses for digital pathogens. Be attuned to signs of unusual power depletion in your gadgets, for it may well serve as the harbinger of clandestine malware. Navigate the currents of public Wi-Fi with utmost care, as they are as fertile for data interception as they are convenient for your connectivity needs.
Use of Firewall - A firewall can prove a stalwart barrier against internet interlopers. Your smart appliances, from the banality of a kitchen toaster to the novelty of an internet-enabled toilet, if shielded by this barrier, remain untouched and, by extension, uncompromised. And let us not dismiss this notion with frivolity, for the prospect of a malware-compromised toilet or any such smart device leaves a most distasteful specter.
Limit the Use of IoT - Additionally, and this is conveyed with the gravity warranted by our current digital era, resist the seduction of IoT devices whose utility does not outweigh their inherent risks. A smart television may indeed be vital for the streaming aficionado amongst us, yet can we genuinely assert the need for a connected laundry machine, an iron, or indeed, a toothbrush? Here, prudence is a virtue; exercise it with judicious restraint.
Conclusion
As we step forward into an era where connectivity has shifted from a mere luxury to an omnipresent standard, we must adopt vigilance and digital hygiene practices with the same fervour as those for our physical well-being. Let the toothbrush hack not simply be a tale of caution, consigned to the annals of internet folklore, but a fable that imbues us with the recognition of our role in maintaining discipline in a realm where even the most benign objects might be mustered into service by a cyberspace adversary.
Misinformation in the AI Age
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation on a large scale. These technologies are capable of creating manipulative audio and video content, propagating political propaganda, defaming individuals, and inciting societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order. It has the potential to affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation involves expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's content-generation capabilities have grown exponentially in recent years. Legitimate uses of AI often take a backseat to the exploitation of content that already exists on the internet. One of the clearest examples of misinformation flooding the internet is AI-powered bots inundating social media platforms with fake news at a scale and speed that make it impossible for humans to track, let alone verify, what is true and what is false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly negative consequences there. Being literate in the traditional sense of the word does not automatically confer the ability to parse the nuances of social media content for authenticity and impact. Literacy, be it social media literacy or internet literacy, is under attack, and one of the main contributors is the rampant rise of AI-generated misinformation. Some of the most common examples of misinformation relate to elections, public health, and communal issues. These issues share one common factor: they evoke strong emotions and as such can go viral very quickly and influence social behaviour, to the extent that they may lead to social unrest, political instability and even violence. Such developments breed public mistrust in authorities and institutions, which is dangerous in any economy, but even more so in a country like India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
Generative AI (GAI) is a technology that has entered the realm of autonomous content production and language creation, which is linked to the issue of misinformation. It is often difficult to determine if content originates from humans or machines and if we can trust what we read, see, or hear. This has led to media users becoming more confused about their relationship with media platforms and content and highlighted the need for a change in traditional journalistic principles.
We have seen a number of examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling Democrats in the U.S. not to vote. The consequences of such content, and the impact it could have on life as we know it, are almost too vast to comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? The safe and ethical use of this technology therefore needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content makes it difficult to hold perpetrators accountable, given the massive amount of data generated. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the need to protect freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already-complex issue, as AI-generated misinformation can be tailored to different languages and cultural contexts, making it harder to detect and counter. Strategies must therefore cater to this multilingual population.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
Developing a framework that is specific in its application to AI-generated content. It should prescribe stricter penalties for the origination and dissemination of fake content, in proportion to its consequences, and establish clear, concise guidelines requiring social media platforms to take proactive measures to detect and remove AI-generated misinformation.
Investing in AI-driven tools for customised detection and flagging of misinformation in real time. This can help in identifying deepfakes, manipulated images, and other forms of AI-generated content (a toy sketch of such a detector follows this list).
Encouraging collaboration between tech companies, cybersecurity organisations, academic institutions and government agencies to develop solutions for combating misinformation.
Rolling out digital literacy programs that empower individuals by training them to evaluate online content. Educational programs in schools and communities can teach critical thinking and media literacy skills, enabling individuals to better discern between real and fake content.
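As a toy illustration of the AI-driven detection tooling mentioned in the second item above, the sketch below trains a tiny text classifier to score suspect claims. The handful of labelled examples, the feature choice and the model are purely illustrative assumptions; a production system would need large curated datasets, multilingual models and human review.

```python
# Toy misinformation scorer: TF-IDF features + logistic regression.
# The mini-dataset below is fabricated purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Official election results published by the election commission",
    "Health ministry releases the vaccination schedule for the year",
    "Shocking! Drinking hot water cures all viral infections instantly",
    "Secret video proves votes were switched by hacked machines",
]
train_labels = [0, 0, 1, 1]  # 0 = likely reliable, 1 = likely misinformation

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

for claim in ["Miracle herb instantly cures infections, doctors hate it",
              "Election commission announces polling dates"]:
    score = model.predict_proba([claim])[0][1]  # probability of class 1
    print(f"{score:.2f} misinformation score: {claim}")
```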
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks scale with the rapid rate at which the nation is developing technologically. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant of the fact that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Bad actors misuse AI technologies to create hyper-realistic fake content, including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. Indian policymakers need to rise to this challenge by developing comprehensive strategies that not only focus on regulation and technological innovation but also encourage public education. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools and digital defence frameworks, we can navigate these challenges and safeguard the online information landscape.
Prebunking and Debunking: Countering Online Misinformation
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective counter-strategies has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information, acting as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, and fact-checking reports by expert organisations or journalists. An integrated approach involving both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against it. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the modus operandi is to help the mind develop resistance now to influence it may encounter in the future. Just as medical vaccines help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people to tell fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It can be difficult to scale up Prebunking efforts and ensure they reach a large audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time: continuous reinforcement and reminders may be required so that individuals retain the skills and information gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions remain flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that must be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation, but they come with certain challenges. Debunking entails critically verifying facts and promoting corrected information, which is difficult owing to the rising sophistication of modern tools used to generate narratives that blend truth and untruth, opinion and fact. These advanced approaches, which include emotional-spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated response at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which effectively means that it is impossible to protect everyone at all times; at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation only after it has spread extensively. From the perspective of total harm done, this reactionary method may be less successful than proactive strategies such as Prebunking. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of erroneous or misleading information. Debunked claims may also need repeated correction to prevent false beliefs from re-forming, implying that a single Debunking may not be enough. Debunking requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment; this constraint may let certain misinformation go unchecked, with unpredictable effects. Finally, misinformation on social media can go viral faster than Debunking pieces or articles, producing a situation in which the falsehood spreads like a virus while the antidote struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across their platforms, empowering users to recognise manipulative messaging through Prebunking and to learn the actual accuracy of circulating claims through Debunking interventions.
Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Such campaigns can immunise receivers against subsequent exposure and empower people to build the competencies needed to detect misinformation through gamified interventions.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that algorithms prioritise the distribution of Prebunking materials to users, boosting educational content that strengthens resistance to misinformation. Platform operators should likewise incorporate algorithms that prioritise the visibility of Debunking content in order to counter the spread of erroneous information and deliver proper corrections. Together, these mechanisms can help Prebunking and Debunking materials reach a larger or better-targeted audience.
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that give people access to Prebunking materials, quizzes, and instructional content to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, along with links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organisations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives and campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at scale and ultimately fighting misinformation through joint efforts.
Conclusion
The threat of online misinformation grows with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to erroneous or misleading information and addressing broader patterns of misinformation consumption, while Debunking corrects particular pieces of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, alongside joint initiatives by tech/social media platforms and expert organisations, can help fight the rising tide of online misinformation and establish a resilient online information landscape.