Advisory for APS School Students
Pretext
The Army Welfare Education Society has informed the Parents and students that a Scam is targeting the Army schools Students. The Scamster approaches the students by faking the voice of a female and a male. The scamster asks for the personal information and photos of the students by telling them they are taking details for the event, which is being organised by the Army welfare education society for the celebration of independence day. The Army welfare education society intimated that Parents to beware of these calls from scammers.
The students of Army Schools of Jammu & Kashmir, Noida, are getting calls from the scamster. The students were asked to share sensitive information. Students across the country are getting calls and WhatsApp messages from two numbers, which end with 1715 and 2167. The Scamster are posing to be teachers and asking for the students’ names on the pretext of adding them to the WhatsApp Groups. The scamster then sends forms links to the WhatsApp groups and asking students to fill out the form to seek more sensitive information.
Do’s
- Do Make sure to verify the caller.
- Do block the caller while finding it suspicious.
- Do be careful while sharing personal Information.
- Do inform the School Authorities while receiving these types of calls and messages posing to be teachers.
- Do Check the legitimacy of any agency and organisation while telling the details
- Do Record Calls asking for personal information.
- Do inform parents about scam calling.
- Do cross-check the caller and ask for crucial information.
- Do make others aware of the scam.
Don’ts
- Don’t answer anonymous calls or unknown calls from anyone.
- Don’t share personal information with anyone.
- Don’t Share OTP with anyone.
- Don’t open suspicious links.
- Don’t fill any forms, asking for personal information
- Don’t confirm your identity until you know the caller.
- Don’t Reply to messages asking for financial information.
- Don’t go to a fake website by following a prompt call.
- Don’t share bank Details and passwords.
- Don’t Make payment over a prompt fake call.
Related Blogs

Introduction
The Indian Ministry of Information and Broadcasting has proposed a new legislation. On the 10th of November, 2023, a draft bill emerged, a parchment of governance seeking to sculpt the contours of the nation's broadcasting landscape. The Broadcasting Services (Regulation) Bill, 2023, is not merely a legislative doctrine; it is a harbinger of change, an attestation to the storm of technology and the diversification of media in the age of the internet.
The bill, slated to replace the Cable Television Networks (Regulation) Act of 1995, acknowledges the paradigm shifts that have occurred in the media ecosystem. The emergence of Internet Protocol Television (IPTV), over-the-top (OTT) platforms and other digital broadcasting services has rendered the previous legislation a relic, ill-suited to the dynamism of the current milieu. The draft bill, therefore, stands at the precipice of the future, inviting stakeholders and the vox populi to weigh in on its provisions, to shape the edifice of regulation that will govern the airwaves and the digital streams.
Defining the certain Clauses of the bill
Clause 1 (dd) - The Programme
In the intricate tapestry of the bill's clauses, certain threads stand out, demanding scrutiny and careful consideration. Clause 1(dd), for instance, grapples with the definition of 'Programme,' a term that, in its current breadth, could ensnare the vast expanse of audio, visual, and written content transmitted through broadcasting networks. The implications are profound: content disseminated via YouTube or any website could fall within the ambit of this regulation, a prospect that raises questions about the scope of governmental oversight in the digital realm.
Clause 2(v) - The news and current affairs
Clause 2(v) delves into the murky waters of 'news and current affairs programmes,' a definition that, as it stands, is a maelstrom of ambiguity. The phrases 'newly-received or noteworthy audio, visual or audio-visual programmes' and 'about recent events primarily of socio-political, economic or cultural nature' are a siren's call, luring the unwary into a vortex of subjective interpretation. The threat of potential abuse looms larger, threatening the right to freedom of expression enshrined in Article 19 of the Indian Constitution. It is a clarion call for stakeholders to forge a definition that is objective and clear, one that is in accordance with the Supreme Court's decision in Shreya Singhal v. Union of India, which upheld the sanctity of digital expression while advocating for responsible content creation.
Clause 2(y) Over the Top Broadcasting Services
Clause 2(y) casts its gaze upon OTT broadcasting services, entities that operate in a realm distinct from traditional broadcasting. The one-to-many paradigm of broadcast media justifies a degree of governmental control, but OTT streaming is a more intimate affair, a one-on-one engagement with content on personal devices. The draft bill's attempt to umbrella OTT services under the broadcasting moniker is a conflation that could stifle the diversity and personalised nature of these platforms. It is a conundrum that other nations, such as Australia and Singapore, have approached with nuanced regulatory frameworks that recognise the unique characteristics of OTT services.
Clause 4(4) - Requirements for Broadcasters and Network Operators
The bill's journey through the labyrinth of regulation is fraught with other challenges. The definition of 'Person' in Clause 2(z), the registration exemptions in Clause 4(4), the prohibition on state governments and political parties from engaging in broadcasting in Clause 6, and the powers of inspection and seizure in Clauses 30(2) and 31, all present a complex puzzle. Each clause, each sub-section, is a cog in the machinery of governance that must be calibrated with precision to balance the imperatives of regulation with the freedoms of expression and innovation.
Clause 27 - Advisory Council
The Broadcast Advisory Council, envisioned in Clause 27, is yet another crucible where the principles of impartiality and independence must be tempered. The composition of this council, the public consultations that inform its establishment, and the alignment with constitutional principles are all vital to its legitimacy and efficacy.
A Way Forward
It is up to us, as participants in the democratic process and citizens, to interact with the bill's provisions as it makes its way through the halls of public discourse and legislative examination. To guarantee that the ultimate version of the Broadcasting Services (Regulation) Bill, 2023, is a symbol of advancement and a charter that upholds our most valued liberties while welcoming the opportunities presented by the digital era, we must employ the instruments of study and discussion.
The draft bill is more than just a document in this turbulent time of transition; it is a story of India's dreams, a testament to its dedication to democracy, and a roadmap for its digital future. Therefore, let us take this duty with the seriousness it merits, as the choices we make today will have a lasting impact on the history of our country and the media environment for future generations.
References
- https://scroll.in/article/1059881/why-indias-new-draft-broadcast-bill-has-raised-fears-of-censorship-and-press-suppression#:~:text=The%20bill%20extends%20the%20regulatory,regulation%20through%20content%20evaluation%20committees.
- https://pib.gov.in/PressReleasePage.aspx?PRID=1976200
- https://www.hindustantimes.com/india-news/new-broadcast-bill-may-also-cover-those-who-put-up-news-content-online-101701023054502.html

Introduction
Artificial Intelligence (AI) is fast transforming our future in the digital world, transforming healthcare, finance, education, and cybersecurity. But alongside this technology, bad actors are also weaponising it. More and more, state-sponsored cyber actors are misusing AI tools such as ChatGPT and other generative models to automate disinformation, enable cyberattacks, and speed up social engineering operations. This write-up explores why and how AI, in the form of large language models (LLMs), is being exploited in cyber operations associated with adversarial states, and the necessity for international vigilance, regulation, and AI safety guidelines.
The Shift: AI as a Cyber Weapon
State-sponsored threat actors are misusing tools such as ChatGPT to turbocharge their cyber arsenal.
- Phishing Campaigns using AI- Generative AI allows for highly convincing and grammatically correct phishing emails. Unlike the shoddily written scams of yesteryears, these AI-based messages are tailored according to the victim's location, language, and professional background, increasing the attack success rate considerably. Example: It has recently been reported by OpenAI and Microsoft that Russian and North Korean APTs have employed LLMs to create customised phishing baits and malware obfuscation notes.
- Malware Obfuscation and Script Generation- Big Language Models (LLMs) such as ChatGPT may be used by cyber attackers to help write, debug, and camouflage malicious scripts. While the majority of AI instruments contain safety mechanisms to guard against abuse, threat actors often exploit "jailbreaking" to evade these protections. Once such constraints are lifted, the model can be utilised to develop polymorphic malware that alters its code composition to avoid detection. It can also be used to obfuscate PowerShell or Python scripts to render them difficult for conventional antivirus software to identify. Also, LLMs have been employed to propose techniques for backdoor installation, additional facilitating stealthy access to hijacked systems.
- Disinformation and Narrative Manipulation
State-sponsored cyber actors are increasingly employing AI to scale up and automate disinformation operations, especially on election, protest, and geopolitical dispute days. With LLMs' assistance, these actors can create massive amounts of ersatz news stories, deepfake interview transcripts, imitation social media posts, and bogus public remarks on online forums and petitions. The localisation of content makes this strategy especially perilous, as messages are written with cultural and linguistic specificity, making them credible and more difficult to detect. The ultimate aim is to seed societal unrest, manipulate public sentiments, and erode faith in democratic institutions.
Disrupting Malicious Uses of AI – OpenAI Report (June 2025)
OpenAI released a comprehensive threat intelligence report called "Disrupting Malicious Uses of AI" and the “Staying ahead of threat actors in the age of AI”, which outlined how state-affiliated actors had been testing and misusing its language models for malicious intent. The report named few advanced persistent threat (APT) groups, each attributed to particular nation-states. OpenAI highlighted that the threat actors used the models mostly for enhancing linguistic quality, generating social engineering content, and expanding operations. Significantly, the report mentioned that the tools were not utilized to produce malware, but rather to support preparatory and communicative phases of larger cyber operations.
AI Jailbreaking: Dodging Safety Measures
One of the largest worries is how malicious users can "jailbreak" AI models, misleading them into generating banned content using adversarial input. Some methods employed are:
- Roleplay: Simulating the AI being a professional criminal advisor
- Obfuscation: Concealing requests with code or jargon
- Language Switching: Proposing sensitive inquiries in less frequently moderated languages
- Prompt Injection: Lacing dangerous requests within innocent-appearing questions
These methods have enabled attackers to bypass moderation tools, transforming otherwise moral tools into cybercrime instruments.
Conclusion
As AI generations evolve and become more accessible, its application by state-sponsored cyber actors is unprecedentedly threatening global cybersecurity. The distinction between nation-state intelligence collection and cybercrime is eroding, with AI serving as a multiplier of adversarial campaigns. AI tools such as ChatGPT, which were created for benevolent purposes, can be targeted to multiply phishing, propaganda, and social engineering attacks. The cross-border governance, ethical development practices, and cyber hygiene practices need to be encouraged. AI needs to be shaped not only by innovation but by responsibility.
References
- https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
- https://www.bankinfosecurity.com/openais-chatgpt-hit-nation-state-hackers-a-28640
- https://oecd.ai/en/incidents/2025-06-13-b5e9
- https://www.microsoft.com/en-us/security/security-insider/meet-the-experts/emerging-AI-tactics-in-use-by-threat-actors
- https://www.wired.com/story/youre-not-ready-for-ai-hacker-agents/
- https://www.cert-in.org.in/PDF/Digital_Threat_Report_2024.pdf
- https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf

Executive Summary:
Recently, there has been a massive amount of fake news about India’s standing in the United Security Council (UNSC), including a veto. This report, compiled scrupulously by the CyberPeace Research Wing, delves into the provenance and credibility of the information, and it is debunked. No information from the UN or any relevant bodies has been released with regard to India’s permanent UNSC membership although India has swiftly made remarkable progress to achieve this strategic goal.

Claims:
Viral posts claim that India has become the first-ever unanimously voted permanent and veto-holding member of the United Nations Security Council (UNSC). Those posts also claim that this was achieved through overwhelming international support, granting India the same standing as the current permanent members.



Factcheck:
The CyberPeace Research Team did a thorough keyword search on the official UNSC official website and its associated social media profiles; there are presently no official announcements declaring India's entry into permanent status in the UNSC. India remains a non-permanent member, with the five permanent actors- China, France, Russia, United Kingdom, and USA- still holding veto power. Furthermore, India, along with Brazil, Germany, and Japan (the G4 nations), proposes reform of the UNSC; yet no formal resolutions have come to the surface to alter the status quo of permanent membership. We then used tools such as Google Fact Check Explorer to uncover the truth behind these viral claims. We found several debunked articles posted by other fact-checking organizations.

The viral claims also lack credible sources or authenticated references from international institutions, further discrediting the claims. Hence, the claims made by several users on social media about India becoming the first-ever unanimously voted permanent and veto-holding member of the UNSC are misleading and fake.
Conclusion:
The viral claim that India has become a permanent member of the UNSC with veto power is entirely false. India, along with the non-permanent members, protests the need for a restructuring of the UN Security Council. However, there have been no official or formal declarations or commitments for alterations in the composition of the permanent members and their powers to date. Social media users are advised to rely on verified sources for information and refrain from spreading unsubstantiated claims that contribute to misinformation.
- Claim: India’s Permanent Membership in UNSC.
- Claimed On: YouTube, LinkedIn, Facebook, X (Formerly Known As Twitter)
- Fact Check: Fake & Misleading.