# Fact Check – Analysis of Viral Claims Regarding India's UNSC Permanent Membership
Executive Summary:
Recently, a wave of fake news has circulated about India's standing in the United Nations Security Council (UNSC), including claims that it has been granted a veto. This report, compiled by the CyberPeace Research Wing, examines the provenance and credibility of these claims and debunks them. Neither the UN nor any relevant body has released any information regarding permanent UNSC membership for India, although India has made remarkable progress towards this strategic goal.

Claims:
Viral posts claim that India has become the first-ever unanimously voted permanent and veto-holding member of the United Nations Security Council (UNSC). Those posts also claim that this was achieved through overwhelming international support, granting India the same standing as the current permanent members.



Fact Check:
The CyberPeace Research Team conducted a thorough keyword search on the official UNSC website and its associated social media profiles; there are presently no official announcements declaring India's entry into permanent membership of the UNSC. India remains a non-permanent member, with the five permanent members (China, France, Russia, the United Kingdom, and the USA) still holding veto power. Furthermore, India, along with Brazil, Germany, and Japan (the G4 nations), has proposed reform of the UNSC, yet no formal resolution to alter the status quo of permanent membership has surfaced. We then used tools such as Google Fact Check Explorer to uncover the truth behind these viral claims and found several articles by other fact-checking organisations debunking them.
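Searches of this kind can also be automated. The sketch below queries Google's Fact Check Tools API, the programmatic counterpart of the Fact Check Explorer used above; the claim text and `"YOUR_API_KEY"` are placeholder assumptions, and a real Google API key is required for a live call.

```python
import json
import urllib.parse
import urllib.request

# Public endpoint of Google's Fact Check Tools claim search.
FACT_CHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_query_url(query: str, api_key: str, language: str = "en") -> str:
    """Assemble the claims:search request URL for a given claim text."""
    params = urllib.parse.urlencode({
        "query": query,
        "languageCode": language,
        "key": api_key,
    })
    return f"{FACT_CHECK_ENDPOINT}?{params}"

def search_claims(query: str, api_key: str) -> list:
    """Return the list of claim-review results for the query (network call)."""
    with urllib.request.urlopen(build_query_url(query, api_key)) as resp:
        data = json.load(resp)
    return data.get("claims", [])

if __name__ == "__main__":
    # "YOUR_API_KEY" is a placeholder; no network request is made here.
    print(build_query_url("India permanent member UNSC veto", "YOUR_API_KEY"))
```

Each returned claim carries the publisher and rating assigned by the fact-checking organisation, which is how debunks from multiple outlets can be collected in one pass.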

The viral claims also lack credible sources or authenticated references from international institutions, further discrediting the claims. Hence, the claims made by several users on social media about India becoming the first-ever unanimously voted permanent and veto-holding member of the UNSC are misleading and fake.
Conclusion:
The viral claim that India has become a permanent member of the UNSC with veto power is entirely false. India, along with other reform-minded member states, continues to advocate for a restructuring of the UN Security Council. However, there have been no official or formal declarations or commitments to date to alter the composition of the permanent membership or its powers. Social media users are advised to rely on verified sources for information and refrain from spreading unsubstantiated claims that contribute to misinformation.
- Claim: India’s Permanent Membership in UNSC.
- Claimed On: YouTube, LinkedIn, Facebook, X (Formerly Known As Twitter)
- Fact Check: Fake & Misleading.

Misinformation is a scourge in the digital world, making the most mundane experiences fraught with risk. The threat is considerably heightened in conflict settings, especially in the modern era, where geographical borders blur and civilians and conflict actors alike can take to the online realm to discuss, and influence, conflict events. Propaganda can complicate the narrative and distract from the humanitarian crises affecting civilians, while also posing a serious threat to security operations and law and order efforts. Sensationalised reports of casualties and manipulated portrayals of military actions contribute to a cycle of violence and suffering.
A 2023 study conducted at MIT found that the mere thought of sharing news on social media reduced people's ability to judge whether a story was true or false; the urge to share outweighed the consideration of accuracy. Cross-border misinformation has become a critical issue in today's interconnected world, driven by the rise of digital communication platforms. To combat it effectively, coordinated international policy frameworks and cooperation between governments, platforms, and global institutions are needed.
The Global Nature of Misinformation
Cross-border misinformation is false or misleading information that spreads across countries. Creators based outside a country's borders, amplified through social media and digital platforms, are a key source of such misinformation. It can interfere with elections, create serious misconceptions about health concerns (as witnessed during the COVID-19 pandemic), or even fuel military conflicts.
The primary challenge in countering cross-border misinformation is the difference in national policies, legal frameworks, and platform-governance rules across jurisdictions. Examining existing international frameworks, such as cybersecurity treaties and the data-sharing agreements used against financial crimes, may help in addressing cross-border misinformation effectively. By adapting these approaches to the digital information ecosystem, nations could strengthen their collective response to misinformation that spreads across borders. Global institutions like the United Nations, or regional bodies like the EU and ASEAN, can work together to set a unified response and uniform international standards specifically for regulating misinformation.
Current National and Regional Efforts
Many countries have taken action to deal with misinformation within their borders. Some examples include:
- The EU’s Digital Services Act has been instrumental in regulating online intermediaries and platforms including marketplaces, social networks, content-sharing platforms, app stores, etc. The legislation aims to prevent illegal and harmful activities online and the spread of disinformation.
- The primary legislation governing cyberspace in India is the IT Act of 2000 and its corresponding rules (IT Rules, 2023), which impose strict requirements on social media platforms to counter misinformation and enable traceability of the originator of misinformation. Platforms have to conduct due diligence, failing which they risk losing their safe harbour protection. The recently enacted DPDP Act of 2023 indirectly addresses the misuse of personal data that can contribute to the creation and spread of misinformation. Also, the proposed Digital India Act is expected to focus on "user harms" specific to the online world.
- In the U.S., the right to editorial discretion and Section 230 of the Communications Decency Act place the responsibility for regulating misinformation on private actors such as social media platforms. The US government has not created a specific framework addressing misinformation and has instead encouraged voluntary measures, with platforms maintaining independent policies to regulate misinformation on their services.
The common gap across these policies is the absence of a standardised, global framework for addressing cross-border misinformation, which results in uneven enforcement and dependence on national regulations.
Key Challenges in Achieving International Cooperation
Some of the key challenges identified in achieving international cooperation to address cross-border misinformation are as follows:
- Geopolitical tensions can emerge due to the differences in political systems, priorities, and trust issues between countries that hinder attempts to cooperate and create a universal regulation.
- The diversity in approaches to internet governance and freedom of speech across countries complicates the matters further.
- Technical and legal obstacles around sovereignty, jurisdiction, and enforcement complicate the monitoring and removal of cross-border misinformation.
CyberPeace Recommendations
- The UN Global Principles for Information Integrity Recommendations for Multi-stakeholder Action, unveiled on 24 June 2024, are a welcome step towards addressing cross-border misinformation. They can act as the stepping stone for developing a framework for international cooperation on misinformation, drawing inspiration from other successful models such as climate change agreements or the international criminal law framework.
- Collaborations like public-private partnerships between government, tech companies and civil societies can help enhance transparency, data sharing and accountability in tackling cross-border misinformation.
- Engaging in capacity building and technology transfer with less developed countries would help create a united global front against misinformation.
Conclusion
We are in an era where misinformation knows no borders, and the need for international cooperation has never been more urgent. Global democracies are exploring solutions, both regulatory and legislative, to limit the spread of misinformation; however, these fragmented efforts fall short of addressing the global scale of the problem. Establishing a standardised international framework, backed by multilateral bodies like the UN and regional alliances, can foster accountability and facilitate shared resources in this fight. Through collaborative action, transparent regulations, and support for developing nations, the world can create a united front to curb misinformation and protect democratic values, ensuring information integrity across borders.
References
- https://economics.mit.edu/sites/default/files/2023-10/A%20Model%20of%20Online%20Misinformation.pdf
- https://www.indiatoday.in/global/story/in-the-crosshairs-manufacturing-consent-and-the-erosion-of-public-trust-2620734-2024-10-21
- https://laweconcenter.org/resources/knowledge-and-decisions-in-the-information-age-the-law-economics-of-regulating-misinformation-on-social-media-platforms/
- https://www.article19.org/resources/un-article-19-global-principles-for-information-integrity/
Introduction
The spread of misinformation online has become a significant concern, with far-reaching social, political, economic and personal implications. The degree of vulnerability to misinformation differs from person to person, depending on psychological elements such as personality traits, familial background and digital literacy, combined with contextual factors like information source, repetition, emotional content and topic. How to reduce misinformation susceptibility in real-world environments, where misinformation is regularly consumed on social media, remains an open question. Inoculation theory has been proposed as a way to reduce susceptibility to misinformation by informing people about how they might be misinformed, and psychological inoculation campaigns on social media have proven effective at improving misinformation resilience at scale.
Prebunking has gained prominence as a means to preemptively build resilience against anticipated exposure to misinformation. This approach, grounded in Inoculation Theory, allows people to analyse and avoid manipulation without prior knowledge of specific misleading content by helping them build generalised resilience. We may draw a parallel here with broad-spectrum antibiotics, which can fight infection and protect the body before the particular pathogen at play has been identified.
Inoculation Theory and Prebunking
Inoculation theory is a promising approach to combat misinformation in the digital age. It involves exposing individuals to weakened forms of misinformation before encountering the actual false information. This helps develop resistance and critical thinking skills to identify and counter deceptive content.
Inoculation theory has been established as a robust framework for countering unwanted persuasion and can be applied within the modern context of online misinformation:
- Preemptive Inoculation: Preemptive inoculation entails exposing people to weakened forms of misinformation before they encounter actual false information. By being exposed to typical misinformation methods and strategies, individuals can build resistance and critical thinking abilities.
- Technique/Logic-Based Inoculation: Individuals can educate themselves about the manipulative strategies typical of online misinformation, such as emotionally manipulative language, conspiratorial reasoning, trolling and logical fallacies. Learning to recognise these tactics as indicators of misinformation is an important first step towards rejecting it: through logical reasoning, individuals can see such tactics for what they are, attempts to distort the facts or spread misleading information. Equipped with the capacity to discern weak arguments and misleading methods, individuals can properly evaluate the reliability and validity of the information they encounter on the Internet.
- Educational Campaigns: Educational initiatives that increase awareness about misinformation, its consequences, and the tactics used to manipulate information can be useful inoculation tools. These programmes equip individuals with the knowledge and resources they need to distinguish between reputable and fraudulent sources, allowing them to navigate the online information landscape more successfully.
- Interactive Games and Simulations: Online games and simulations, such as ‘Bad News,’ have been created as interactive aids to protect people from misinformation methods. These games immerse users in a virtual world where they may learn about the creation and spread of misinformation, increasing their awareness and critical thinking abilities.
- Joint Efforts: Combining inoculation tactics with other anti-misinformation initiatives, such as accuracy primes, building resilience on social media platforms, and media literacy programmes, can improve the overall efficacy of our attempts to combat misinformation. Expert organisations and people can build a stronger defence against the spread of misleading information by using many actions at the same time.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
Implementation of the Inoculation Theory on social media platforms can be seen as an effective strategy point for building resilience among users and combating misinformation. Tech/social media platforms can develop interactive and engaging content in the form of educational prebunking videos, short animations, infographics, tip sheets, and misinformation simulations. These techniques can be deployed through online games, collaborations with influencers and trusted sources that help design and deploy targeted campaigns whilst also educating netizens about the usefulness of Inoculation Theory so that they can practice critical thinking.
The approach will inspire self-monitoring amongst netizens so that people consume information mindfully. It is a powerful tool in the battle against misinformation because it not only seeks to prevent harm before it occurs, but also actively empowers the target audience. In other words, Inoculation Theory helps build people up, and takes them on a journey of transformation from ‘potential victim’ to ‘warrior’ in the battle against misinformation. Through awareness-building, this approach makes people more aware of their own vulnerabilities and attempts to exploit them so that they can be on the lookout while they read, watch, share and believe the content they receive online.
Widespread adoption of Inoculation Theory may well inspire systemic and technological change that goes beyond individual empowerment: these interventions on social media platforms can be utilized to advance digital tools and algorithms so that such interventions and their impact are amplified. Additionally, social media platforms can explore personalized inoculation strategies, and customized inoculation approaches for different audiences so as to be able to better serve more people. One such elegant solution for social media platforms can be to develop a dedicated prebunking strategy that identifies and targets specific themes and topics that could be potential vectors for misinformation and disinformation. This will come in handy, especially during sensitive and special times such as the ongoing elections where tools and strategies for ‘Election Prebunks’ could be transformational.
Conclusion
Applying Inoculation Theory in the modern context of misinformation can be an effective method of establishing resilience against misinformation, help in developing critical thinking and empower individuals to discern fact from fiction in the digital information landscape. The need of the hour is to prioritize extensive awareness campaigns that encourage critical thinking, educate people about manipulation tactics, and pre-emptively counter false narratives associated with information. Inoculation strategies can help people build mental armour, or mental defences, against malicious content and malintent they may encounter in the future by learning about it in advance. As they say, forewarned is forearmed.
References
- https://www.science.org/doi/10.1126/sciadv.abo6254
- https://stratcomcoe.org/publications/download/Inoculation-theory-and-Misinformation-FINAL-digital-ISBN-ebbe8.pdf

Executive Summary:
New Linux malware, referred to as DISGOMOJI, has been discovered by the cybersecurity firm Volexity. A Pakistan-based threat actor with the alias 'UTA0137' has been identified as having espionage aims, with its primary focus on Indian government entities. Like other common backdoors and botnets used in cyberattacks, DISGOMOJI allows the attacker to issue commands to capture screenshots, search for and steal files, deploy additional payloads, and transfer files. DISGOMOJI uses Discord (a messaging service) for command and control (C2) and uses emojis for C2 communication. The malware targets Linux operating systems.
The DISGOMOJI Malware:
- The DISGOMOJI malware creates a dedicated channel in a Discord server for each new victim, meaning the attacker can communicate with each victim individually.
- The malware communicates with the attacker-controlled Discord server using an emoji-based relay protocol: the attacker sends specific emojis as instructions, and the malware replies with emojis to report the status of each command.
- For instance, the 'camera with flash' emoji instructs the malware to take a screenshot of the victim's device, the 'fox' emoji collects all Firefox profiles, and the 'skull' emoji terminates the malware process.
- Conducting C2 communication through emojis makes it very difficult for Discord to shut the malware down: even if a malicious server is blocked, the malware can simply switch to new Discord account credentials.
- The malware also has capabilities beyond its emoji-based C2, such as network probing, tunneling, and data theft, which help the UTA0137 threat actor achieve its espionage goals.
Specific emojis used for different commands by UTA0137:
- Camera with Flash (📸): Captures a screenshot of the target device and sends it back to the attacker.
- Backhand Index Pointing Down (👇): Extracts files from the targeted device and sends them to the command channel as attachments.
- Backhand Index Pointing Right (👉): Sends a file from the victim's device to Oshi (oshi[.]at), a web-hosted file-storage service.
- Backhand Index Pointing Left (👈): Sends a file from the victim's device to transfer[.]sh, an online file-sharing service.
- Fire (🔥): Finds and transmits all files on the victim's device with certain extensions, such as *.txt, *.doc, *.xls, *.pdf, *.ppt, *.rtf, *.log, *.cfg, *.dat, *.db, *.mdb, *.odb, *.sql, *.json, *.xml, *.php, *.asp, *.pl, *.sh, *.py, *.ino, *.cpp, and *.java.
- Fox (🦊): Compresses all Firefox-related profiles on the affected device.
- Skull (💀): Terminates the malware process using `os.Exit()`.
- Man Running (🏃♂️): Executes a command on the victim's device; the command to execute is passed as an argument.
- Index Pointing Up (👆): Uploads a file to the victim's device; the file to upload is attached along with this emoji.
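To make the relay protocol concrete, the toy sketch below models the emoji-to-command mapping above as a dispatch table. This is an illustrative simulation, not the malware's actual code (DISGOMOJI is reportedly written in Golang); the handler names and their return strings are invented stand-ins for the real actions.

```python
# Illustrative simulation of an emoji-keyed C2 dispatch table. Handlers here
# only return labels; the real malware performs the action on the victim host.

def take_screenshot():      return "screenshot captured"
def exfiltrate_files():     return "files attached to channel"
def upload_to_oshi():       return "file sent to oshi[.]at"
def upload_to_transfer():   return "file sent to transfer[.]sh"
def harvest_documents():    return "matching documents collected"
def zip_firefox_profiles(): return "Firefox profiles archived"
def terminate():            return "process exited"
def run_command():          return "shell command executed"
def receive_upload():       return "attachment written to disk"

# Emoji -> handler, mirroring the command list above.
DISPATCH = {
    "\U0001F4F8": take_screenshot,       # camera with flash
    "\U0001F447": exfiltrate_files,      # backhand index pointing down
    "\U0001F449": upload_to_oshi,        # backhand index pointing right
    "\U0001F448": upload_to_transfer,    # backhand index pointing left
    "\U0001F525": harvest_documents,     # fire
    "\U0001F98A": zip_firefox_profiles,  # fox
    "\U0001F480": terminate,             # skull
    "\U0001F3C3": run_command,           # man running
    "\U0001F446": receive_upload,        # index pointing up
}

def handle(emoji: str) -> str:
    """Dispatch one incoming emoji 'command'; unknown emojis are ignored."""
    handler = DISPATCH.get(emoji)
    return handler() if handler else "unknown command"
```

The table structure also explains why the protocol is cheap for the attacker to extend: adding a capability is just one more emoji-to-handler entry.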
Analysis:
The analysis was carried out for one of the indicators of compromise, the file with SHA-256 hash C981aa1f05adf030bacffc0e279cf9dc93cef877f7bce33ee27e9296363cf002. Most vendors on VirusTotal have flagged the file as a trojan, and the relationship graph illustrates the malicious nature of the contacted domains and IPs.
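A hash like this can be checked programmatically against VirusTotal's v3 API. The sketch below builds the file-report request and reads the vendor verdict counts from the response; `"VT_API_KEY"` is a placeholder, and a real key is needed for the commented-out live call.

```python
import json
import urllib.request

VT_FILES_ENDPOINT = "https://www.virustotal.com/api/v3/files/"
# SHA-256 indicator of compromise from the report above.
SAMPLE_SHA256 = "c981aa1f05adf030bacffc0e279cf9dc93cef877f7bce33ee27e9296363cf002"

def build_request(sha256: str, api_key: str) -> urllib.request.Request:
    """Build a VirusTotal v3 file-report request for the given SHA-256."""
    return urllib.request.Request(
        VT_FILES_ENDPOINT + sha256.lower(),
        headers={"x-apikey": api_key},  # VT v3 authenticates via this header
    )

def malicious_count(report: dict) -> int:
    """Number of engines flagging the sample, from last_analysis_stats."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0)

if __name__ == "__main__":
    req = build_request(SAMPLE_SHA256, "VT_API_KEY")  # placeholder key
    # with urllib.request.urlopen(req) as resp:       # live call, key needed
    #     print(malicious_count(json.load(resp)))
    print(req.full_url)
```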


Discord & C2 Communication for UTA0137:
- Stealthiness: Discord is a well-known messaging platform used for many legitimate purposes, so messages and files sent through its servers do not attract suspicion. This stealth allows UTA0137 to remain dormant for long periods before launching an attack.
- Customization: UTA0137 creates a dedicated Discord channel for each victim on the server. This structure allows the attackers to communicate with each victim individually, making the process more precise and efficient.
- Emoji-based protocol: Using emojis for C2 communication complicates any attempt by Discord to interfere with the malware's operations. Even if the malicious server is banned, the malware can recover by fetching fresh Discord credentials from the C2 server.
- Persistence: As noted above, the malware persists on the infected system and survives reboots, allowing it to continue operating without being detected by the owner of the compromised system.
- Advanced capabilities: DISGOMOJI's other features include network mapping with the Nmap scanner, network tunneling through Chisel and Ligolo, and data exfiltration via file-sharing services. These capabilities further UTA0137's espionage goals.
- Social engineering: The malware can display pop-up windows and prompts, such as a fake update for Firefox or similar applications, to trick the user into entering their password.
- Dynamic credential fetching: The malware does not hardcode the credentials used to connect to the Discord server but fetches them dynamically. This inconveniences analysts, who cannot easily locate the C2 server.
- Bogus informational and error messages: The malware never displays genuine information or error messages, making its malicious behaviour harder to decipher.
Recommendations to mitigate the risk of UTA0137:
- Regularly Update Software and Firmware: Regularly update all application software and device firmware, particularly on routers, to prevent attackers from exploiting disclosed flaws such as CVE-2024-3080 and CVE-2024-3912 on ASUS routers.
- Implement Multi-Factor Authentication: Given how frequently user accounts are attacked, incorporate multi-factor authentication to further secure accounts.
- Deploy Advanced Malware Protection: Deploy robust anti-malware protection that can recognise and block the execution of DISGOMOJI and similar threats.
- Enhance Network Segmentation: Use stringent network isolation to compartmentalise key systems and data from the rest of the network, minimising attack exposure.
- Monitor Network Activity: Continuously monitor network traffic to identify and respond to security breaches, including watching for signs of tools such as Nmap, Chisel, and Ligolo that this attacker is known to abuse.
- Utilize Threat Intelligence: Leverage advanced threat intelligence to acquire knowledge of past threats and vulnerabilities and take informed action.
- Secure Communication Channels: Protect the credentials and tokens used for any legitimate Discord or messaging-platform integrations so they cannot be stolen and abused, and monitor for Discord being used as an attack vector.
- Enforce Access Control: Regularly review and update user authentication processes, adopting stricter access-control measures so that only authorised personnel can access sensitive systems and information.
- Conduct Regular Security Audits: Conduct periodic security audits to uncover weaknesses present in the network or systems.
- Implement an Incident Response Plan: Conduct a risk assessment and, based on it, design and establish an efficient incident response plan that supports early identification, isolation, and management of security breaches.
- Educate Users: Educate users on cybersecurity hygiene and conduct regular retraining on threats like phishing and social engineering.
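On the monitoring side, one low-cost check that follows from this report is flagging hosts that resolve Discord domains when they have no business running a chat client (for example, Linux servers in a government network). The sketch below scans simple `<client-host> <queried-domain>` lines; both that log format and the domain list are illustrative assumptions, to be adapted to your resolver or proxy logs and verified against current Discord infrastructure.

```python
import re

# Domains associated with Discord client/API traffic (illustrative list;
# verify against current Discord infrastructure before deploying).
DISCORD_DOMAINS = (
    "discord.com",
    "discord.gg",
    "discordapp.com",
)

# Assumed pre-processed log format: "<client-host> <queried-domain>".
LOG_LINE = re.compile(r"^(?P<host>\S+)\s+(?P<domain>\S+)$")

def flag_discord_lookups(dns_log_lines):
    """Return (host, domain) pairs where a host resolved a Discord domain."""
    hits = []
    for line in dns_log_lines:
        m = LOG_LINE.match(line.strip())
        if not m:
            continue
        domain = m.group("domain").lower().rstrip(".")
        if any(domain == d or domain.endswith("." + d) for d in DISCORD_DOMAINS):
            hits.append((m.group("host"), domain))
    return hits
```

Hits from hosts with no legitimate Discord use are worth triaging, since DISGOMOJI's C2 rides entirely on this otherwise benign-looking traffic.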
Conclusion:
Volexity discovered a new Pakistan-based threat actor, UTA0137, using the DISGOMOJI malware to attack Indian government institutions, issuing its commands as emojis through the Discord app. The malware has exfiltration capabilities and aims to steal the data of government entities, and UTA0137 has continuously improved it over time to maintain persistent communication with victims. This underlines the necessity of strong protection against malware and intrusion: using secure, unique passwords and one-time codes, updating software frequently, and deploying advanced anti-malware tools. Organisations can minimise advanced threats like DISGOMOJI and protect sensitive data by improving network segmentation, continuously monitoring activity, and raising user awareness.
References:
- https://otx.alienvault.com/pulse/66712446e23b1d14e4f293eb
- https://thehackernews.com/2024/06/pakistani-hackers-use-disgomoji-malware.html?m=1
- https://cybernews.com/news/hackers-using-emojis-to-command-malware/
- https://www.volexity.com/blog/2024/06/13/disgomoji-malware-used-to-target-indian-government/