# Fact Check – Analysis of Viral Claims Regarding India's UNSC Permanent Membership
Executive Summary:
Recently, a wave of fake news has circulated about India's standing in the United Nations Security Council (UNSC), including claims that it now holds veto power. This report, compiled by the CyberPeace Research Wing, examines the provenance and credibility of these claims and debunks them. Neither the UN nor any relevant body has released any information regarding India's permanent UNSC membership, although India has made notable progress toward this strategic goal.

Claims:
Viral posts claim that India has become the first-ever unanimously voted permanent and veto-holding member of the United Nations Security Council (UNSC). Those posts also claim that this was achieved through overwhelming international support, granting India the same standing as the current permanent members.



Fact Check:
The CyberPeace Research Team conducted a thorough keyword search of the official UNSC website and its associated social media profiles; there is presently no official announcement declaring India's elevation to permanent status in the UNSC. India remains a non-permanent member, while the five permanent members - China, France, Russia, the United Kingdom, and the United States - continue to hold veto power. Furthermore, India, along with Brazil, Germany, and Japan (the G4 nations), has proposed reform of the UNSC, yet no formal resolution altering the status quo of permanent membership has surfaced. We then used tools such as Google Fact Check Explorer to verify the viral claims and found several articles by other fact-checking organizations debunking them.
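This kind of cross-check can also be automated. The minimal Python sketch below assumes a Google Cloud API key and queries the publicly documented Fact Check Tools API (the service behind Google Fact Check Explorer) for published fact checks matching a claim; the query string and the YOUR_API_KEY placeholder are illustrative, not part of our workflow.

```python
import requests

# Public endpoint of Google's Fact Check Tools API (v1alpha1).
API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(query: str, api_key: str) -> list:
    """Return published fact checks matching a claim text."""
    resp = requests.get(API_URL, params={"query": query, "key": api_key})
    resp.raise_for_status()
    return resp.json().get("claims", [])

if __name__ == "__main__":
    # Illustrative query; YOUR_API_KEY is a placeholder, not a real key.
    for claim in search_fact_checks("India permanent member UNSC veto", "YOUR_API_KEY"):
        for review in claim.get("claimReview", []):
            publisher = review.get("publisher", {}).get("name", "unknown")
            print(f"{publisher}: {review.get('textualRating')} - {review.get('url')}")
```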

The viral posts also lack credible sources or authenticated references from international institutions, further discrediting them. Hence, the claims circulating on social media that India has become the first-ever unanimously voted permanent and veto-holding member of the UNSC are misleading and fake.
Conclusion:
The viral claim that India has become a permanent member of the UNSC with veto power is entirely false. India, along with other member states, continues to press for a restructuring of the UN Security Council. However, there has been no official or formal declaration or commitment to alter the composition of the permanent membership or its powers to date. Social media users are advised to rely on verified sources for information and refrain from spreading unsubstantiated claims that contribute to misinformation.
- Claim: India’s Permanent Membership in UNSC.
- Claimed On: YouTube, LinkedIn, Facebook, X (Formerly Known As Twitter)
- Fact Check: Fake & Misleading.

In an era defined by perpetual technological advancement, the hitherto uncharted territories of the human experience are progressively being illuminated by the luminous glow of innovation. The construct of privacy, once a straightforward concept involving personal secrets and solitude, has evolved into a complex web of data protection, consent, and digital rights. This notion of privacy, which often feels as though it elusively ebbs and flows like the ghost of a bygone epoch, is now confronted with a novel intruder – neurotechnology – which promises to redefine the very essence of individual sanctity.
Why Neuro Rights
At the forefront of this existential conversation lie ventures like Elon Musk's Neuralink. This company, which finds itself at the confluence of fantastical dreams and tangible reality, teases a future where the contents of our thoughts could be rendered as accessible as the words we speak. An existence where machines not only decipher our mental whispers but hold the potential to echo back, reshaping our cognitive landscapes. This startling innovation sets the stage for the emergence of 'neurorights' – a paradigm aimed at erecting a metaphorical firewall around the synapses and neurons that compose our innermost selves.
At institutions such as the University of California, Berkeley, researchers under the aegis of cognitive scientists like Jack Gallant are already drawing the map of once-inaccessible territories within the mind. Gallant's landmark study, which involved decoding the brain activity of volunteers as they absorbed visual stimuli, opened Pandora's box regarding the implications of mind-reading. The paper, published a decade ago, was an inchoate step toward understanding the narrative woven within the cerebral cortex. Although his work yielded only a rough sketch of the observed video content, it heralded an era where thought could be translated into observable media.
The Growth
This rapid acceleration of neuro-technological prowess has not gone unnoticed on the sociopolitical stage. In a pioneering spirit reminiscent of the robust legislative eagerness of early democracies, Chile boldly stepped into the global spotlight in 2021 by legislating neurorights. The Chilean senate's decision to constitutionalize these rights sent ripples the world over, signalling an acknowledgement that the evolution of brain-computer interfaces was advancing at a daunting pace. The initiative was spearheaded by visionaries like Guido Girardi, a former senator whose legislative foresight drew clear parallels between the disruptive advent of social media and the potential upheaval posed by emergent neurotechnology.
Pursuit of Regulation
Yet the pursuit of regulation in such an embryonic field is riddled with intellectual quandaries and ethical mazes. Advocates like Allan McCay articulate the delicate tightrope that policy-makers must traverse. The perils of premature regulation are as formidable as the risks of a delayed response – the former potentially stifling innovation, the latter risking a landscape where technological advances could outpace societal control, engendering a future fraught with unforeseen backlashes.
Such is the dichotomy embodied in the story of Ian Burkhart, whose life was irrevocably altered by the intervention of neurotechnology. Burkhart's experience, transitioning from quadriplegia to digital dexterity through sheer force of thought, epitomizes the utopian potential of neuronal interfaces. Yet McCay issues a solemn reminder that with great power comes great potential for misuse, highlighting contentious ethical issues such as the potential for the criminal justice system to overextend its reach into the neural recesses of the human psyche.
Firmly ensconced within this brave new world, the quest for prudence is of paramount importance. McCay advocates for a dyadic approach, where privacy is vehemently protected and the workings of technology proffered with crystal-clear transparency. The clandestine machinations of AI and the danger of algorithmic bias necessitate a vigorous, ethical architecture to govern this new frontier.
As legal frameworks around the globe wrestle with the implications of neurotechnology, countries like India, with their burgeoning jurisprudence regarding privacy, offer a vantage point into the potential shape of forthcoming legislation. Jurists and technology lawyers, including Jaideep Reddy, acknowledge ongoing protections yet underscore the imperativeness of continued discourse to gauge the adequacy of current laws in this nascent arena.
Conclusion
The dialogue surrounding neurorights emerges, not merely as another thread in our social fabric, but as a tapestry unto itself – intricately woven with the threads of autonomy, liberty, and privacy. As we hover at the edge of tomorrow, these conversations crystallize into an imperative collective undertaking, promising to define the sanctity of cognitive liberty. The issue at hand is nothing less than a societal reckoning with the final frontier – the safeguarding of the privacy of our thoughts.

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of counterfeit explicit images and videos, and their use for sextortion has increased at an alarming rate.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. At the same time, the accessibility of AI tools and resources has grown, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of content sharing on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law enforcement agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to strengthen efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and reinforce legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools that identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
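As a concrete illustration, the sketch below outlines what a frame-level detector could look like: a standard image classifier fine-tuned to separate genuine face crops from AI-generated ones. The architecture choice, input size, and the assumption of a labelled training set are all illustrative; this is a sketch of the approach, not a production detector.

```python
# Minimal sketch of a frame-level deepfake detector (illustrative only).
# Assumes a labelled dataset of real vs. AI-generated face crops would be
# used to fine-tune the classifier; here we only run a dummy forward pass.
import torch
import torch.nn as nn
from torchvision import models, transforms

def build_detector() -> nn.Module:
    # Standard ResNet-18 backbone with a two-class head (real vs. fake).
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

# Preprocessing that would be applied to each face crop extracted
# from a video before scoring.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

if __name__ == "__main__":
    detector = build_detector()
    detector.eval()
    frames = torch.randn(4, 3, 224, 224)  # stands in for preprocessed crops
    with torch.no_grad():
        scores = torch.softmax(detector(frames), dim=1)
    print("P(fake) per frame:", [round(s, 3) for s in scores[:, 1].tolist()])
```

In practice, a platform would aggregate such per-frame scores across a whole video and combine them with other signals (audio artefacts, metadata) before flagging content for human review.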
Collaboration with social media platforms is also needed. Platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is also a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands in AI sextortion cases is a particularly alarming aspect of this issue. Teenagers are especially vulnerable to AI sextortion due to their heavy use of social media platforms for sharing personal information and images, a vulnerability perpetrators exploit to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach: technological advancements in detection and prevention, legal frameworks that hold offenders accountable, awareness of the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.

Introduction
In this ever-evolving world of technology, cybercriminals continue to explore new and innovative methods to exploit and intimidate their victims. One recent shocking incident was reported from Bharatpur, Rajasthan, where cyber crooks organised a mock court session to frighten their targets. This complex operation, meant to induce fear and force obedience, exemplifies the daring and cunning of modern cybercriminals. In this blog article, we’ll delve deeper into this concerning occurrence to shed light on the strategies used and the ramifications for cybersecurity.
The Setup
The case, reported from Gopalgarh village in Bharatpur, Rajasthan, unfolded with a shocking twist: a father-son duo, Tahir Khan and his son Talim Khano, had been defrauding people by staging a mock court setting and recording the proceedings to intimidate their victims into paying hefty sums. In the most recent case, they gained Rs 2.69 crore through sextortion. The duo would trace their targets on social media platforms, blackmail them, and extort large amounts of money.
An official complaint was filed by a 69-year-old victim who was singled out through his social media accounts, his friends, and his posts. Initially, the duo contacted the victim with a pre-recorded video featuring a nude woman, coaxing him into a compromising situation. Posing as officials from the Delhi Crime Branch and the CBI, they threatened the victim, claiming that a girl had approached them intending to file a complaint against him. Later, masquerading as YouTubers, they threatened to release the incriminating video online. Adding to the charade, they impersonated a local MLA and presented the victim with a forged stamp paper alleging molestation charges. Eventually, posing as Delhi Crime Branch officials again, they demanded money to settle the case after falsely stating that they had apprehended the girl. To further manipulate the victim, the accused staged a court proceeding, recorded it, and sent it to him, creating the illusion that the matter was concluded. This case of sextortion stands out as the only known instance where culprits went to such lengths, staging and recording a mock court to extort money. It was also discovered that the accused had fabricated a letter from the Delhi High Court, adding another layer of deception to their scheme.
The Investigation
The complaint was filed with the local cyber cell. The subsequent investigation found this to be one of the most significant sextortion cases in the country. The father-son pair skillfully assumed five different roles, meticulously executing their plan, which included creating a simulated court environment. Investigators stated, “We have also managed to recover Rs 25 lakh from the accused duo—some from their residence in Gopalgarh and the rest from the bank account where it was deposited.”
The Tricks used by the duo
The father-son duo’s fake court scene was a meticulously built web of deception, designed to instil fear and helplessness in the victim. Let’s look at the tricks the two used to fool people.
- Social Engineering Strategies: Cybercriminals are skilled at using social engineering to acquire the trust of their victims. In this case, they may have employed phishing emails or phone calls to gather personal information about the victim. By posing as respectable persons or organisations, the crooks tricked the victim into disclosing vital information, giving them the leverage they needed to appear trustworthy.
- Making a False Narrative: To make the fictitious court scenario more credible, the cyber hackers concocted a captivating story based on the victim’s purported legal problems. They might have created plausible papers to give their plan authority, such as forged court summonses, legal notifications, or warrants. They attempted to create a sense of impending danger and an urgent necessity for the victim to comply with their demands by deploying persuasive language and legal jargon.
- Psychological Manipulation: The perpetrators of the fictitious court scenario were well aware of the power of psychological manipulation in coercing their victims. They hoped to emotionally overwhelm the victim by using fear, uncertainty, and the possible implications of legal action. The offenders probably used threats of incarceration, fines, or public exposure to increase the victim’s fear and hinder their capacity to think critically. The idea was to use desperation and anxiety to force the victim to comply.
- Use of Technology to Strengthen Deception: Technological advancements have given cybercriminals powerful tools to strengthen their misleading methods. The simulated court scenario might have included speech modulation software or deepfake technology to impersonate the voices or appearances of legal experts, judges, or law enforcement personnel. This technology made the deception even more believable, blurring the border between fact and fiction for the victim.
The use of technology in cybercriminals’ misleading techniques has considerably increased their capacity to fool and influence victims. Cybercriminals may develop incredibly realistic and persuasive simulations of judicial processes using speech modulation software, deepfake technology, digital evidence alteration, and real-time communication tools. As technology advances, individuals must stay attentive, build digital literacy skills, and practise critical thinking when confronting potentially misleading circumstances online. By understanding these technological developments, individuals can better protect themselves against the expanding risks posed by cyber thieves.
What to do?
Seeking Help and Reporting Incidents: If you or anyone you know falls victim to cybercrime or is deceived by cybercrooks, such as in the mock court scene staged here, seek help and act quickly by reporting the incident. Prompt reporting serves various purposes, including raising awareness, assisting with investigations, and preventing similar crimes from recurring. Victims should take the following steps:
- Contact your local law enforcement: Inform local law enforcement about the cybercrime incident. Provide them with pertinent facts and evidence, since they have the experience and resources to investigate cybercrime and catch the offenders involved.
- Seek Assistance from a Cybersecurity specialist: Consult a cybersecurity specialist or respected cybersecurity business to analyse the degree of the breach, safeguard your digital assets, and obtain advice on minimising future risks. Their knowledge and forensic analysis can assist in gathering evidence and mitigating the consequences of the occurrence.
- Preserve Evidence: Keep all evidence relating to the incident, including emails, texts, and records of suspicious activity. Avoid deleting digital evidence, and consider capturing screenshots or creating copies of pertinent exchanges. Preserving evidence is critical for investigations and possible legal proceedings; a minimal sketch of documenting evidence integrity follows below.
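The sketch below illustrates the evidence-preservation step in code: it computes SHA-256 hashes of saved evidence files and writes them to a timestamped manifest, so any later tampering with the copies can be detected. The folder and file names are illustrative.

```python
# Minimal sketch: record SHA-256 hashes of evidence files in a manifest
# so the integrity of preserved copies can be verified later.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def hash_file(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(evidence_dir: str, manifest_path: str = "manifest.json") -> None:
    # One entry per file: name, content hash, and UTC timestamp of recording.
    entries = [
        {
            "file": p.name,
            "sha256": hash_file(p),
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        }
        for p in sorted(Path(evidence_dir).iterdir())
        if p.is_file()
    ]
    Path(manifest_path).write_text(json.dumps(entries, indent=2))

if __name__ == "__main__":
    write_manifest("evidence")  # illustrative folder of screenshots, emails, etc.
```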
Conclusion
The staged fake court scene shows the lengths to which cybercriminals will go to deceive and abuse their victims. These criminals preyed on the victim’s fear and vulnerability through social engineering methods, the fabrication of a false narrative, the manipulation of personal information, psychological manipulation, and the use of technology. Individuals can better defend themselves against cybercrooks by remaining watchful and sceptical.