Post Session Report on Universal Acceptance and Multilingual Internet at Royal Global University under CyberPeace Center of Excellence (CCoE)
On 18th November 2022, CyberPeace Foundation, in association with Universal Acceptance, successfully conducted a workshop on Universal Acceptance and Multilingual Internet for the students and faculty of Royal Global University under the CyberPeace Center of Excellence (CCoE). CyberPeace Foundation has always been engaged in spreading awareness of the various developments, avenues, opportunities and threats in cyberspace. The same has been the key principle of the CyberPeace Centre of Excellence, set up in collaboration with various esteemed educational institutes. We at CyberPeace Foundation aim to take these collaborations and our efforts to a new height of knowledge and awareness through this workshop on Universal Acceptance and Multilingual Internet. The workshop was instrumental in giving the academic and research community a wholesome outlook on the multilingual spectrum of the internet, including Internationalized Domain Names (IDNs) and Email Address Internationalization (EAI).
Date – 18th November 2022
Time – 10:00 AM to 12:00 PM
Duration – 2 hours
Mode – Online
Audience – Academia and Research Community
Participants Joined – 130
Crowd Classification – Engineering students (1st and 4th year, all streams) and faculty members
Organizer – Mr. Harish Chowdhary, UA Ambassador
Moderator – Ms. Pooja Tomar, Project Coordinator cum Trainer
Guest Speakers – Mr. Abdalmonem Galila, Vice Chair, Universal Acceptance Steering Group (UASG); Mr. Mahesh D Kulkarni, Director, Evaris Systems and Former Senior Director, CDAC, Government of India; Mr. Akshat Joshi, Founder, Think Trans
The first session was delivered by Mr. Abdalmonem Galila, Vice Chair, Universal Acceptance Steering Group (UASG), on “Universal Acceptance (UA) and why UA matters”.
- What is universal acceptance?
- UA is the cornerstone of a digitally inclusive internet: it ensures that all domain names and email addresses, in any language, script or character length, work in all software applications.
- Achieving UA ensures that every person has the ability to navigate the internet.
- Different UA issues were also discussed and explained.
- The systems targeted by UA and the implications of UA readiness were discussed in detail.
The second session was delivered by Mr. Akshat Joshi, Founder, Think Trans, on “Universal Acceptance of IDNs and the Economic Landscape”.
- What is Universal Acceptance?
- The internet has had standards that allow people to use domain names and email addresses in their native scripts. Software developers need to bring their applications up-to-date so that consumers can use their chosen identity.
- A typical problem is that an IDN email address is not recognised by a website form as a valid email address.
- The importance of adopting IDNs: they enable citizens to use their own identity online (correct spelling, native language); relate to language, culture and content; promote local and regional content; and allow businesses and politicians to better target their messages.
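A quick way to see the form-validation failure described above is to compare a typical ASCII-only email regex with a more permissive, UA-friendlier structural check. This is an illustrative sketch only (a full EAI validator would follow RFC 6531), and the Devanagari sample address is hypothetical.

```python
import re

# A common (naive) pattern used in web forms: ASCII-only local part and domain.
ASCII_ONLY = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

# A UA-friendlier structural check (illustrative, not a full RFC 6531 validator):
# any non-space, non-'@' characters are allowed in the local part and labels.
UA_FRIENDLY = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

addresses = [
    "user@example.com",        # plain ASCII address
    "उपयोगकर्ता@उदाहरण.भारत",   # hypothetical internationalized (EAI) address
]

for addr in addresses:
    print(addr, bool(ASCII_ONLY.match(addr)), bool(UA_FRIENDLY.match(addr)))
```

The ASCII-only pattern silently rejects the internationalized address even though such addresses are valid under the EAI standards, which is exactly the UA failure users encounter on many websites.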
The third session was delivered by Mr. Mahesh D Kulkarni, Director, Evaris Systems, on the topic of “IDNs from the Indian languages perspective – challenges and solutions”.
- The session focused on the multilingual diversity of India and its impact.
- Most students were not aware of what Unicode and IDNs are or how they are used.
- Students were briefed with real-time examples of IDNs and domain name implementation using local languages.
- In-depth knowledge of, and practical exposure to, Universal Acceptance and the Multilingual Internet was provided to the students.
- Tools and resources for domain names and domain languages were explained.
- Language nuances of India's multilingual diversity were explained with real-time facts and figures.
- The concepts of IDN email, homograph attacks and homographic variants were introduced with proper real-time examples.
- The security threats and the IDNA protocols were explained.
- ABNF (Augmented Backus-Naur Form) was explained.
- The stages of Universal Acceptance were explained.
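The homograph attack mentioned in the session can be illustrated with a short sketch: a Cyrillic ‘а’ (U+0430) is visually indistinguishable from the Latin ‘a’ (U+0061), but converting each label to its ASCII-compatible (Punycode) form exposes the difference. Note that Python's built-in "idna" codec implements IDNA 2003, while modern resolvers follow IDNA 2008, so treat this purely as an illustration.

```python
latin = "apple.com"
mixed = "\u0430pple.com"  # first letter is Cyrillic U+0430, not Latin 'a'

# Visually alike, but different code points.
print(latin == mixed)  # False

def to_ascii(domain: str) -> str:
    # The stdlib "idna" codec applies ToASCII label by label (IDNA 2003).
    return domain.encode("idna").decode("ascii")

print(to_ascii(latin))  # stays "apple.com"
print(to_ascii(mixed))  # becomes an "xn--" (Punycode) label, exposing the lookalike
```

Browsers use the same idea, displaying suspicious mixed-script domains in their xn-- form to flag potential homographic variants.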
Related Blogs
Introduction
In an age where the lines between truth and fiction blur with alarming regularity, we stand at the precipice of a new and dangerous era. Amidst the wealth of information that characterizes the digital age, deep fakes and disinformation rise like ghosts, haunting our shared reality. These manifestations of a technological revolution that promised enlightenment instead threaten the foundations upon which our societies are built: trust, truth, and collective understanding.
These digital doppelgängers, enabled by advanced artificial intelligence, and their deceitful companion—disinformation—are not mere ghosts in the machine. They are active agents of chaos, capable of undermining the core of democratic values, human rights, and even the safety of individuals who dare to question the status quo.
The Perils of False Narratives in the Digital Age
As a society, we often throw around terms such as 'fake news' with a mixture of disdain and a weary acceptance of their omnipresence. However, we must not understate their gravity. Misinformation and disinformation represent the vanguard of a duplicitous digital tide, a phenomenon growing more complex and dire each day. Misinformation, often spread without malicious intent but with no less damage, can be likened to a digital 'slip of the tongue' — an error in dissemination or interpretation. Disinformation, its darker counterpart, is born of deliberate intent to deceive, a calculated move in the chess game of information warfare.
Their arsenal is varied and ever-evolving: from misleading memes and misattributed quotations to wholesale fabrications in the form of bogus news sites and carefully crafted narratives. Among these weapons of deceit, deepfakes stand out for their audacity and the striking challenge they pose to the principle that seeing is believing. Through the unwelcome alchemy of algorithms, these video and audio forgeries place public figures, celebrities, and even everyday individuals into scenarios they never experienced, uttering words they never said.
The Human Cost: Threats to Rights and Liberties
The impact of this disinformation campaign transcends inconvenience or mere confusion; it strikes at the heart of human rights and civil liberties. It particularly festers at the crossroads of major democratic exercises, such as elections, where the right to a truthful, unmanipulated narrative is not just a political nicety but a fundamental human right, enshrined in Article 25 of the International Covenant on Civil and Political Rights (ICCPR).
In moments of political change, whether during elections or pivotal referenda, the deliberate seeding of false narratives is a direct assault on the electorate's ability to make informed decisions. This subversion of truth infects the electoral process, rendering hollow the promise of democratic choice.
This era of computational propaganda has especially chilling implications for those at the frontline of accountability—journalists and human rights defenders. They find themselves targets of character assassinations and smear campaigns that not only put their safety at risk but also threaten to silence the crucial voices of dissent.
It should not be overlooked that the term 'fake news' has, paradoxically, been weaponized by governments and political entities against their detractors. In a perverse twist, this label becomes a tool to shut down legitimate debate and shield human rights violations from scrutiny, allowing for censorship and the suppression of opposition under the guise of combatting disinformation.
Deepening the societal schisms, a significant portion of this digital deceit traffics in hate speech. Its content is laden with xenophobia, racism, and calls to violence, all given a megaphone through the anonymity and reach the internet so readily provides, feeding a cycle of intolerance and violence vastly disproportionate to that seen in traditional media.
Legislative and Technological Countermeasures: The Ongoing Struggle
The fight against this pervasive threat, as illustrated by recent actions and statements by the Indian government, is multifaceted. Notably, Union Minister Rajeev Chandrasekhar's commitment to safeguarding the Indian populace from the dangers of AI-generated misinformation signals an important step in the legislative and policy framework necessary to combat deepfakes.
Likewise, Prime Minister Narendra Modi's personal experience with a deepfake video accentuates the urgency with which policymakers, technologists, and citizens alike must view this evolving threat. The disconcerting experience of actor Rashmika Mandanna serves as a sobering reminder of the individual harm these false narratives can inflict and reinforces the necessity of a robust response.
In their pursuit to negate these virtual apparitions, policymakers have explored various avenues ranging from legislative action to penalizing offenders and advancing digital watermarks. However, it is not merely in the realm of technology that solutions must be sought. Rather, the confrontation with deepfakes and disinformation is also a battle for the collective soul of societies across the globe.
As technological advancements continue to reshape the battleground, figures like Kris Gopalakrishnan and Manish Gangwar posit that only a mix of rigorous regulatory frameworks and savvy technological innovation can hold the front line against this rising tidal wave of digital distrust.
This narrative is not a dystopian vision of a distant future - it is the stark reality of our present. And as we navigate this new terrain, our best defenses are not just technological safeguards, but also the nurturing of an informed and critical citizenry. It is essential to foster media literacy, to temper the human inclination to accept narratives at face value and to embolden the values that encourage transparency and the robust exchange of ideas.
As we peer into the shadowy recesses of our increasingly digital existence, may we hold fast to our dedication to the truth, and in doing so, preserve the essence of our democratic societies. For at stake is not just a technological arms race, but the very quality of our democratic discourse and the universal human rights that give it credibility and strength.
Conclusion
In this age of digital deceit, it is crucial to remember that the battle against deep fakes and disinformation is not just a technological one. It is also a battle for our collective consciousness, a battle to preserve the sanctity of truth in an era of falsehoods. As we navigate the labyrinthine corridors of the digital world, let us arm ourselves with the weapons of awareness, critical thinking, and a steadfast commitment to truth. In the end, it is not just about winning the battle against deep fakes and disinformation, but about preserving the very essence of our democratic societies and the human rights that underpin them.
Introduction
The term ‘super spreader’ is used to refer to social media and digital platform accounts that are able to quickly transmit information to a significantly large audience base in a short duration. The analogy references the medical term, where a small group of individuals is able to rapidly amplify the spread of an infection across a huge population. The fact that a handful of accounts are able to impact and influence so many is attributed to a number of factors, such as large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, running from social media fame to garnering political influence, from intentionally spreading propaganda to seeking financial gains. Given the exponential reach of these accounts, identifying, tracing and categorising such accounts as the sources of misinformation can be tricky. It can be equally difficult to actually recognise the content they spread for the misinformation that it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary drivers of widespread misinformation on various topics. A study[1] by a team of social media analysts at Indiana University reviewed 10 months of data – 2,397,388 tweets posted on Twitter (now X) by 448,103 users and flagged as containing low-credibility information – and examined who was sending them. The study found that it does not take many influencers to sway the beliefs and opinions of large numbers of people, an effect the researchers attribute to superspreaders. Approximately a third of the low-credibility tweets had been posted from just 10 accounts, and just 1,000 accounts were responsible for approximately 70% of such tweets.[2]
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, and this even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of misinformation related to the election, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation according to experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deep fakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important, because any grey areas or gaps in information can be manipulated very quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media platforms. Some prominent accounts and popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate Inc (US), the "disinformation dozen," a group of 12 prominent anti-vaccine accounts[4], were found to be responsible for a large share of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly engage in spreading misinformation by forwarding content that is not shared by the original source but propagated by amplifiers using other sources, websites, or YouTube videos that aid dissemination. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users do not have to be the ones creating or deliberately popularising the misinformation, but they are the ones who expose more people to it because of their broad reach. This was observed during the pandemic, when a handful of people were able to create a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause volatility in markets, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos, and fake endorsements, with multiple social media profiles providing unsanctioned investing advice and directing followers to particular channels. This leads investors into dangerous financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to boost their reputation and can actually shape people’s financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can use trending topics or hashtags to introduce misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can lead to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation inside restricted online groups. There are reported incidents where bots were found to be sharing content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. Bots, by automating the distribution of misleading information, can make it impossible to trace the misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combatting disinformation and increasing digital literacy among social media users.
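The bot traits described above – abnormally high posting rates and heavily duplicated content – can be turned into a simple screening heuristic. The function below is a toy sketch with made-up thresholds, not a production bot detector; real systems combine many more signals (account age, network structure, timing patterns).

```python
from collections import Counter

def looks_like_bot(posts, window_hours=24, dup_threshold=0.5, rate_threshold=100):
    """Toy heuristic: flag an account whose duplicate-content ratio or posting
    rate within the observation window is abnormally high. The thresholds are
    illustrative assumptions, not empirically tuned values."""
    if not posts:
        return False
    texts = [p["text"] for p in posts]
    # Share of posts that repeat the single most common message.
    dup_ratio = Counter(texts).most_common(1)[0][1] / len(texts)
    # Posts per hour over the observation window.
    rate = len(texts) / window_hours
    return dup_ratio >= dup_threshold or rate >= rate_threshold

# Example: 120 identical posts in a day trips both signals.
spam = [{"text": "BUY NOW!!! bit.ly/xyz"} for _ in range(120)]
human = [{"text": f"thought #{i}"} for i in range(5)]
print(looks_like_bot(spam), looks_like_bot(human))  # True False
```

A single heuristic like this produces false positives (e.g. news aggregators), which is why platforms layer many such signals before acting on an account.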
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media as it might attempt to provoke strong reactions or to mold public opinions. Netizens must consider questioning the credibility of information, verifying its sources, and developing cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to engage in sharing only accurate information and do fact-checking to debunk any misinformation. They can rely on reputable fact-checking experts/entities who are regularly engaged in producing prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites, and resources and verify the information.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious social media accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, fake or deliberately deceptive. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, and detecting malevolent bots that spread misleading information. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable consumers to critically analyse information, verify sources, and report suspect content, and must implement prebunking and debunking strategies. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
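The repetitive-posting detection recommended above can be sketched as a simple screening pass: count, per account, how many of its posts were flagged low-credibility (for example, by a fact-checking feed), and surface accounts that cross a share threshold. The input format and thresholds below are illustrative assumptions, echoing the superspreader pattern from the Indiana University study rather than reproducing its method.

```python
def flag_superspreaders(posts, min_posts=20, low_cred_share=0.5):
    """posts: iterable of (account_id, is_low_credibility) pairs.
    Returns the set of account IDs that posted at least `min_posts` items,
    of which at least `low_cred_share` were flagged low-credibility."""
    stats = {}
    for account, is_low_cred in posts:
        total, low = stats.get(account, (0, 0))
        stats[account] = (total + 1, low + (1 if is_low_cred else 0))
    return {
        account
        for account, (total, low) in stats.items()
        if total >= min_posts and low / total >= low_cred_share
    }

# Example: "a" posts flagged content persistently; "b" never does;
# "c" posts flagged content but too rarely to count as a repeat offender.
feed = [("a", True)] * 30 + [("b", False)] * 30 + [("c", True)] * 5
print(flag_superspreaders(feed))  # {'a'}
```

The `min_posts` floor matters: it keeps one-off sharers (the unwitting amplifiers discussed earlier) from being lumped in with accounts that repeatedly seed low-credibility content.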
References:
- [1] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201
- [2] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette
- [3] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette
- [4] https://counterhate.com/research/the-disinformation-dozen/
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html
Recent Incidents:
Recent reports reveal a significant security threat linked to a new infostealer-based malware campaign known to target gaming accounts. The attack has affected users of Activision and other gaming websites, and the malware has captured millions of login credentials, notably from players and users of third-party cheat tools. Officials at Activision Blizzard, an American video game holding company, are still investigating the matter and collaborating with the affected cheat-software developers to minimise the impact and inform account holders of appropriate safety measures.
Overview:
Infostealer, also known as an information stealer, is a type of malware designed as a Trojan for stealing private data from the infected system. It can take a variety of forms and collect user data of various types, such as browser history, passwords, credit card numbers, and login credentials for social media, gaming platforms, bank accounts, and other websites. Bad actors use the logs obtained from this harvested personal data to access the victim's financial accounts, appropriate the victim's online identity, and perform fraudulent actions on the victim's behalf.
Modus Operandi:
- Infostealer is a malicious program created to illegally obtain people's login details, such as usernames and passwords. Its operators aim to enable further cyberattacks, sell the stolen data on dark web markets, or pursue other malicious aims.
- This malware targets both personal devices and corporate systems. It spreads through methods like phishing emails, harmful websites, and infected public sites.
- Once inside a device, Infostealer secretly gathers sensitive data such as passwords, account details, and personal information, and is designed to operate undetected. The stolen credentials are compiled into data logs, which are then sold illegally on dark web marketplaces for profit.
Analysis:
Basic properties:
- MD5: 06f53d457c530635b34aef0f04c59c7d
- SHA-1: 7e30c3aee2e4398ddd860d962e787e1261be38fb
- SHA-256: aeecc65ac8f0f6e10e95a898b60b43bf6ba9e2c0f92161956b1725d68482721d
- Vhash: 145076655d155515755az4e?z4
- Authentihash: 65b5ecd5bca01a9a4bf60ea4b88727e9e0c16b502221d5565ae8113f9ad2f878
- Imphash: f4a69846ab44cc1bedeea23e3b680256
- Rich PE header hash: ba3da6e3c461234831bf6d4a6d8c8bff
- SSDEEP: 6144:YcdXHqXTdlR/YXA6eV3E9MsnhMuO7ZStApGJiZcX8aVEKn3js7/FQAMyzSzdyBk8:YIKXd/UgGXS5U+SzdjTnE3V
- TLSH:T1E1B4CF8E679653EAC472823DCC232595E364FB009267875AC25702D3EFBB3D56C29F90
- File type: Win32 DLL (executable, windows, win32, pe, dll)
- Magic: PE32+ executable (DLL) (GUI) x86-64, for MS Windows
- File size: 483.50 KB (495104 bytes)
Additional Hash Files:
- 160389696ed7f37f164f1947eda00830
- 229a758e232aeb49196c862655797e12
- 23e4ac5e7db3d5a898ea32d27e8b7661
- 3440cced6ec7ab38c6892a17fd368cf8
- 36d7da7306241979b17ca14a6c060b92
- 38d2264ff74123f3113f8617fabc49f6
- 3c5c693ba9b161fa1c1c67390ff22c96
- 3e0fe537124e6154233aec156652a675
- 4571090142554923f9a248cb9716a1ae
- 4e63f63074eb85e722b7795ec78aeaa3
- 63dd2d927adce034879b114d209b23de
- 642aa70b188eb7e76273130246419f1d
- 6ab9c636fb721e00b00098b476c49d19
- 71b4de8b5a1c5a973d8c23a20469d4ec
- 736ce04f4c8f92bda327c69bb55ed2fc
- 7acfddc5dfd745cc310e6919513a4158
- 7d96d4b8548693077f79bc18b0f9ef21
- 8737c4dc92bd72805b8eaf9f0ddcc696
- 9b9ff0d65523923a70acc5b24de1921f
- 9f7c1fffd565cb475bbe963aafab77ff
Indicators of Compromise:
- Unusual Outbound Network Traffic: An increase in odd or questionable outbound network traffic may be a sign that infostealer malware is exfiltrating data.
- Anomalies in Privileged User Account Activity: Unusual behavior or illegal access are two examples of irregular actions that might indicate a breach in privileged user accounts.
- Suspicious Registry or System File Changes: Infostealer malware may be trying to alter system settings if there are any unexpected changes to system files, registry settings, or configurations.
- Unusual DNS queries: When communicating with command and control servers or rerouting traffic, infostealer malware may produce strange DNS queries.
- Unexpected System Patching: Unexpected or unauthorized system patching by unidentified parties may indicate that infostealer malware has compromised the system and is trying to hide its footprint or become persistent.
- Phishing emails and social engineering attempts: These are popular strategies employed by cybercriminals to obtain confidential data or implant malicious software. To avoid compromise, it is crucial to be wary of dubious communications and social engineering attempts.
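One concrete way to use the hash indicators listed above is a small scanner that computes each file's MD5 and checks it against the known-bad set. The sketch below seeds the set with a few hashes taken from this report (extend it with the full list); the streaming read means large binaries need not fit in memory.

```python
import hashlib

# A few known-bad MD5s drawn from the indicators above (truncated set).
IOC_MD5 = {
    "06f53d457c530635b34aef0f04c59c7d",
    "160389696ed7f37f164f1947eda00830",
    "229a758e232aeb49196c862655797e12",
}

def md5_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large binaries are not read into memory at once."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_ioc(path: str) -> bool:
    """Return True if the file's MD5 matches a known indicator of compromise."""
    return md5_of_file(path) in IOC_MD5
```

In practice the SHA-256 indicators are preferable for matching, since MD5 collisions are practical; the same pattern works unchanged with `hashlib.sha256`.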
Recommendations:
- Be Vigilant: In today's digital world, many cybercrimes threaten online safety; phishing tricks, fake web pages, and malicious links pose real dangers. Carefully check email sources, examine websites closely, use reputable security programs, follow safe browsing rules, update software often, and share safety tips with others. These steps reduce risk and help keep your online presence secure.
- Regular use of Anti-Virus Software to detect threats: Antivirus tools are vital for finding and stopping cyber threats. These programs use signature detection and behaviour analysis to identify known malicious code and suspicious activities. Updating virus definitions and applying software patches regularly improves their ability to detect new threats and helps maintain system security and data integrity.
- Provide security-related training to employees: Everyone should learn cybersecurity best practices in order to keep the workplace safe. Employees should receive training on spotting risks and responding appropriately, creating an environment of caution.
- Keep changing passwords: Passwords should be changed frequently for better security. Rotating passwords often makes it harder for cybercriminals to compromise accounts or steal confidential data. This practice keeps intruders out and shields sensitive information.
Conclusion:
To conclude, investigations and collaboration are already underway to reduce the impact of the recent malware campaign that targets gamers and is reported to have compromised millions of user credentials. To protect sensitive data, continued use of antivirus software, reliance on trusted sources, and regular password changes are key. Risks can be further reduced by adopting improved cybersecurity measures such as multi-factor authentication and conducting security audits frequently. Be safe and be vigilant.
Reference:
- https://techcrunch.com/2024/03/28/activision-says-its-investigating-password-stealing-malware-targeting-game-players/
- https://www.bleepingcomputer.com/news/security/activision-enable-2fa-to-secure-accounts-recently-stolen-by-malware/
- https://cyber.vumetric.com/security-news/2024/03/29/activision-enable-2fa-to-secure-accounts-recently-stolen-by-malware/
- https://www.virustotal.com/
- https://otx.alienvault.com/