Advisory for APS School Students
Pretext
The Army Welfare Education Society has informed parents and students that a scam is targeting Army school students. The scamster approaches students by faking female and male voices and asks for their personal information and photos, claiming the details are being collected for an event organised by the Army Welfare Education Society to celebrate Independence Day. The Army Welfare Education Society has cautioned parents to beware of these scam calls.
Students of Army Schools in Jammu & Kashmir and Noida have been receiving calls from the scamster asking them to share sensitive information. Students across the country are receiving calls and WhatsApp messages from two numbers ending in 1715 and 2167. The scamsters pose as teachers and ask for students’ names on the pretext of adding them to WhatsApp groups. They then send form links to these groups and ask students to fill out the forms, seeking further sensitive information.
Do’s
- Do verify the caller’s identity.
- Do block a caller if you find them suspicious.
- Do be careful while sharing personal information.
- Do inform the school authorities if you receive calls or messages from people posing as teachers.
- Do check the legitimacy of any agency or organisation before sharing details.
- Do record calls that ask for personal information.
- Do inform your parents about such scam calls.
- Do cross-check any caller who asks for crucial information.
- Do make others aware of the scam.
Don’ts
- Don’t answer calls from anonymous or unknown numbers.
- Don’t share personal information with anyone.
- Don’t share OTPs with anyone.
- Don’t open suspicious links.
- Don’t fill out any forms asking for personal information.
- Don’t confirm your identity until you know who the caller is.
- Don’t reply to messages asking for financial information.
- Don’t visit a website at the prompting of an unverified call.
- Don’t share bank details or passwords.
- Don’t make payments in response to an unverified call.

Introduction
Iran stands as a nation poised at the threshold of a transformative era. The Islamic Republic, a land of ancient civilisations grappling with the exigencies of the 21st century, is now making strides in the emerging field of artificial intelligence (AI). This is not merely an adoption of new tools; it is a strategic embrace, a calculated leap into the digital unknown, where the potential for economic growth and security enhancement resonates with the promise of a redefined future.
Embarking on this technological odyssey, Iranian President Ebrahim Raisi, in a meeting with the nation’s virtual business activists, delineated the ‘big steps’ being undertaken in the realm of AI. The gathering, as reported by the pro-government Tasnim News, was not a simple exchange of polite remarks but a profound discourse that offered an incisive overview of the burgeoning digital economy and the strides Iran is making in the AI landscape. The conversation revolved around the current ecosystem of technology and innovation within Iran, delving into the growing startup culture and the commendable drive among its young people to propel the nation to the forefront of technology.
Iranian AI Integration
Military Implications
The discourse ranged from the current technological infrastructure to the broader implications for the security and defense of the region. The Iranian polity, with a rich history that blends seamlessly with aspirations for the future, is acutely aware that the implications of AI reach far beyond mere economic growth. They extend into the very fibres of military might and the structure of national security. Iran's investment in cyber capabilities is well documented, a display of shrewdness and pragmatism, and the integration of AI technologies is the next logical step in an ever-evolving defense architecture. Brigadier General Alireza Sabahifard, Commander of the Iranian Army Air Defense Force, has underscored the pivotal role of AI in modern warfare. He identifies the ongoing adoption of AI technologies as a strategic imperative, a top priority fundamentally designed to elevate Iran's air defense capabilities to meet 21st-century threats.
Economic Implications
Yet the Iranian pursuit of AI is not solely confined to bolstering military prowess; it is equally focused on nurturing economic opportunity. President Raisi’s rhetoric touches upon economic rejuvenation, job creation, and the proliferation of financial and legal support mechanisms, all blended into a cohesive vision intended to foster a suitable environment for the private sector in the AI domain. The ambition is grand and strikingly clear: a nation committed to training several thousand individuals in the digital economy sector, signaling a deep-rooted commitment to cultivating a healthy environment for AI-driven innovation.
The Iranian leader’s vision extends beyond the simple creation of infrastructure to the fostering of a healthy, competitive, and peaceful social milieu in which domestic and international markets are within easy reach, promoting the prosperity of the digital economy and its activists. Such a vision of technological symbiosis would, in many Western democracies, be labelled audaciously progressive. In Iran, however, where a major chunk of economic investment is bound up with the country's security state, the narrative takes on added layers of complexity and nuance.
Cultural Integration
Still, Iran’s ambitious AI journey unfolds with a recognition of its cultural underpinnings and societal structure. The nexus between the private sector, with its cyber-technocratic visionaries, and the regime, with its omnipresent ties to the Islamic Revolutionary Guard Corps, is a tightrope that requires unparalleled poise and vigilance.
Moreover, in the holy city of Qom, a hub of intellectual fervour and the domicile of half of Iran's 200,000 Shia clerics, there burgeons a captivating interest in the possible synergies between AI and theological study. The clerical establishment, ensconced in its stronghold of religious scholarship, perceives AI not as a problem but as a potential solution, a harbinger of progress that could ally with tradition. It sees in AI the potential to parse Islamic texts with newfound precision, thereby allowing religious rulings, or fatwas, to resonate with an ever-changing Iranian society. This integration of technology is a testament to the dynamic interplay between tradition and modernity.
Yet the integration of AI into the venerable traditions of societies such as Iran's is threaded with challenges. Herein lies the paradox: even as AI is poised to bolster religious study, the threat of cultural dissolution remains present. AI, if not judiciously designed with local values and ethics in mind, could inadvertently propagate an ideology at odds with local customs, beliefs, and the cornerstone principles of a society.
Natural Resources
Similarly, Iran's strategic foray into AI extends into its sovereign dominion: the stewardship of its natural resources. As Mehr News Agency reports, the National Iranian Oil Company (NIOC) is on the cusp of pioneering a joint venture with international tech juggernauts, chiefly Chinese companies, to inject the lifeblood of AI into the heart of its oil and gas production processes. This grand undertaking is nothing short of a digital renaissance, aimed at achieving ‘great reforms’ and driving a drastic 20% improvement in efficiency. AI’s algorithmic potency, unleashed in the hydrocarbon fields, promises to streamline expenses, enhance efficacy, and maximise production outputs, thereby bolstering Iran's economic bulwark.
The AI Way Forward
As we delve further into Iran's sophisticated AI strategy, we observe an approach that is both vibrant and multi-dimensional. From military development to religious tutelage, from the diligent stewardship of the environment to the pursuit of sustainable economic development, Iran's AI ventures are emblematic of the broader global discourse. They mark a vivid intersection of AI governance, security, and the future of technological enterprise, highlighting the evolution of technological adoption and its societal, ethical, and geopolitical repercussions.
Conclusion
The multifaceted nature of Iran's AI pursuits encapsulates a spectrum of strategic imperatives, coupling defense modernisation and religious scholarship with the demands of resource management. It reflects a nuanced approach to the adoption and integration of technology, balancing the venerable pillars of traditional values against the inexorable forces of modernisation. As Iran continues to delineate and traverse its path through the burgeoning landscape of AI, global stakeholders watch with renewed interest and measured apprehension, mindful of the intricate geopolitical implications and the transformative potential inherent in Iran's AI endeavours, and wonder at what may emerge from this ancient civilisation’s bold, resolute strides into the future.
References
- https://www.jpost.com/middle-east/article-792391
- https://www.ft.com/content/9c1c3fd3-4aea-40ab-977b-24fe5527300c
- https://www.foxnews.com/world/iran-looks-ai-weather-western-sanctions-help-military-fight-cheap

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to witness it. Thanks to remarkable advances in artificial intelligence, the internet as we know it is rapidly changing into a treasure trove of hyper-optimised material over which vast bot armies battle. All of that advancement, however, has a price, primarily a human one. It turns out that releasing highly personalised chatbots on a populace already struggling with economic stagnation, terminal loneliness, and the ongoing destruction of our planet isn’t exactly a formula for improved mental health. This is the reality for the 75% of children and teenagers who have chatted with fictitious chatbot-generated characters. Artificial intelligence (AI) chatbots are becoming more and more integrated into our daily lives, assisting us with customer service, entertainment, healthcare, and education. But as the influence of these tools grows, accountability and ethical behaviour become more important. An investigation last year into the internal policies of a major international tech firm exposed alarming gaps: AI chatbots were allowed to engage in romantic roleplay with children, produce racially discriminatory reasoning, and make spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores an underlying global dilemma: how can we regulate AI to maintain child safety, guard against misinformation, and adhere to ethical considerations without suppressing innovation?
The Guidelines and Their Gaps
Tech giants like Meta and Google are often reprimanded for overlooking child safety amid the overall increase in mental health issues among children and adolescents. According to reports, Google introduced Gemini AI Kids, a kid-friendly version of its Gemini AI chatbot, which represents a major advancement in the incorporation of generative artificial intelligence (Gen-AI) into early schooling. Users under the age of thirteen can access this version of Gemini AI Kids through supervised accounts on the Family Link app.
AI operates on the premise of data collection and analysis. To safeguard children’s personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces particular safeguards. Under Section 9, before processing the data of children, defined as persons under the age of 18, Data Fiduciaries (the entities that decide the goals and methods of processing personal data) must obtain verifiable consent from a parent or legal guardian. Furthermore, the Act expressly forbids processing activities that could endanger a child’s welfare, such as behavioural surveillance and child-targeted advertising. According to court interpretations, a child's well-being includes not just medical care but also their moral, ethical, and emotional growth.
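The Section 9 rule described above can be expressed as a simple decision check. This is an illustrative sketch only: the function and flag names are hypothetical, and real consent verification involves far more than a boolean passed in by the caller.

```python
# Illustrative sketch of the DPDP Act Section 9 rule: a Data Fiduciary
# may process a child's personal data only with verifiable consent from
# a parent or legal guardian. Names here are hypothetical stand-ins.

CHILD_AGE_THRESHOLD = 18  # the Act defines a child as a person under 18


def may_process_personal_data(user_age: int, guardian_consent_verified: bool) -> bool:
    """Return True if processing is permitted under the Section 9 rule."""
    if user_age < CHILD_AGE_THRESHOLD:
        # Children cannot consent for themselves; a verified guardian must.
        return guardian_consent_verified
    return True  # adults provide their own consent through the usual flow
```

In practice the consent flag would come from an age-assurance and guardian-verification workflow, and the Act's separate prohibitions (behavioural surveillance, child-targeted advertising) would need their own checks regardless of consent.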
While the DPDP Act is a big step in the right direction, there are still important lacunae in how it addresses AI and child safety. Age-gating systems, thorough risk assessment, and limitations specific to AI-driven platforms are absent from the Act, which largely concentrates on consent and damage prevention in data protection. Furthermore, it ignores the threats to children’s emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are self-regulatory in nature and dispersed across several laws, such as the Bharatiya Nyaya Sanhita, 2023. These include platform disclaimers, technology-based detection of child sexual abuse material, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Enabling chatbots to engage in romantic roleplay with youngsters is among the most concerning discoveries. These interactions can result in grooming, psychological trauma, and desensitisation to inappropriate behaviour, even if they are not explicitly sexual. Child protection experts agree that illicit or sexual conversations with kids in cyberspace are unacceptable, and permitting even "flirtatious" conversation could normalise risky boundaries.
- International Standards and Best Practices - The concept of "safety by design" is highly valued in child online safety guidelines from around the world, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Bill. Mandating that platforms and developers proactively remove risks, rather than reactively respond to harms, is the bare minimum standard; any AI guidelines that provide loopholes for child-directed roleplay fall short of it.
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The regulations also allowed AI to create fictional narratives with disclaimers. For example, chatbots were able to write articles promulgating false health claims or smears against public officials, as long as they were labelled as "untrue." While disclaimers might give thin legal cover, they do little to stem the proliferation of misleading information. Indeed, misinformation tends to spread extensively because users disregard caveat labels in favour of provocative assertions.
- Ethical Lines and Discriminatory Content - It is ethically questionable to allow AI systems to generate racist arguments, even when requested. Though scholarly research into prejudice and bias may necessitate such examples, unregulated generation has the potential to normalise damaging stereotypes. Researchers warn that such practice moves platforms from being passive hosts of offensive speech to active generators of discriminatory content. It is a difference that makes a difference, as it places responsibility squarely on developers and corporations.
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech; it is a direct reflection of corporate training, policy decisions, and system engineering. This fact requires a greater level of accountability. Although companies can update guidelines following public criticism, the fact that such allowances existed in the first place indicates a lack of strong ethical regulation.
- Regulatory Gaps - Regulatory regimes for AI are currently in disarray. The EU AI Act, the OECD AI Principles, and national policies all emphasise human rights, transparency, and accountability. Few, though, specify clear guidelines for content risks such as child roleplay or hate narratives. This absence of harmonised international rules leaves companies acting in the shadows, establishing their own limits until contradicted.
An active way forward would include:
- Express Child Protection Requirements: AI systems must categorically prohibit interactions with children involving flirting or romance.
- Misinformation Protections: Generative AI must not be allowed to generate knowingly false material, regardless of disclaimers.
- Bias Reduction: Developers need to proactively train systems against generating discriminatory accounts, not merely tag them as optional outputs.
- Independent Regulation: External audit and ethics review boards can supply transparency and accountability independent of internal company regulations.
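The first requirement above, categorically refusing romantic or flirtatious roleplay for child accounts, can be sketched as a rule-based pre-screen that runs before any text is generated. This is a minimal illustration of "safety by design", not a production moderation system: real platforms use trained classifiers and age-assurance signals, and every name and keyword below is a hypothetical stand-in.

```python
# Minimal sketch of a safety-by-design pre-screen: prompts from accounts
# flagged as belonging to minors are refused before generation if they
# request romantic or flirtatious roleplay. Keyword matching is a crude
# illustration; real systems rely on trained safety classifiers.

ROMANTIC_ROLEPLAY_TERMS = {
    "romantic roleplay",
    "be my girlfriend",
    "be my boyfriend",
    "flirt with me",
    "love roleplay",
}


def screen_prompt(prompt: str, user_is_minor: bool) -> str:
    """Return 'allow' or 'block' for a single chatbot prompt."""
    text = prompt.lower()
    if user_is_minor and any(term in text for term in ROMANTIC_ROLEPLAY_TERMS):
        return "block"  # refuse proactively, rather than moderate after harm
    return "allow"
```

The design point is that the check is categorical for minors: there is no disclaimer or "fictional" framing that lets the request through, which is exactly the loophole the guidelines above are meant to close.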
Conclusion
Such contentious guidelines are more than the internal folly of one firm; they point to a deeper systemic issue in AI regulation. The stakes rise as generative AI becomes more and more integrated into politics, healthcare, education, and social interaction. Racism, false information, and inadequate child safety measures are severe issues that require quick resolution. Corporate regulation is only one aspect of the way forward; other elements include multi-stakeholder participation, stronger global systems, and ethical standards. In the end, trust in artificial intelligence will rest not on corporate interests but on its ability to preserve the truth, protect the vulnerable, and represent universal human values.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
- https://www.lakshmisri.com/insights/articles/ai-for-children/#
- https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
- https://www.unicef.org/documents/guidelines-industry-online-child-protection
- https://www.oecd.org/en/topics/sub-issues/ai-principles.html
- https://artificialintelligenceact.eu/

Introduction
Misinformation and disinformation are significant issues in today's digital age. The challenge is not limited to any one sector or industry and affects everyone who deals with data of any sort. In recent times, we have seen a rise in misinformation about all manner of subjects, from product and corporate misinformation to manipulated content about regulatory or policy developments.
Micro, Small, and Medium Enterprises (MSMEs) play an important role in economies, particularly in developing nations, by promoting employment, innovation, and growth. However, in the evolving digital landscape, they also confront tremendous hurdles, such as the dissemination of mis/disinformation which may harm reputations, disrupt businesses, and reduce consumer trust. MSMEs are particularly susceptible since they have minimal resources at their disposal and cannot afford to invest in the kind of talent, technology and training that is needed for a business to be able to protect itself in today’s digital-first ecosystem. Mis/disinformation for MSMEs can arise from internal communications, supply chain partners, social media, competitors, etc. To address these dangers, MSMEs must take proactive steps such as adopting frameworks to counter misinformation and prioritising best practices like digital literacy and training, monitoring and social listening, transparency protocols and robust communication practices.
Assessing the Impact of Misinformation on MSMEs
To assess the impact of misinformation on MSMEs, it is essential to get a full sense of the challenges. To begin with, one must consider the categories of damage which can include financial loss, reputational damage, operational damages, and regulatory noncompliance. Various assessment methodologies can be used to analyze the impact of misinformation, including surveys, interviews, case studies, social media and news data analysis, and risk analysis practices.
Policy Framework and Gaps in Addressing Misinformation
The Digital India Initiative, a flagship program of the Government of India, aims to transform India into a digitally empowered society and knowledge economy. The Information Technology Act, 2000 and the rules made thereunder govern the technology space and serve as the legal framework for cyber security and data protection. The Bharatiya Nyaya Sanhita, 2023 also contains provisions regarding ‘fake news’. The Digital Personal Data Protection Act, 2023 is a brand-new law aimed at protecting personal data. Fact-check units (FCUs) are government and independent private bodies that verify claims about government policies, regulations, announcements, and measures. However, these policy measures are not sector-specific and lack targeted guidelines; as a result, awareness initiatives on misinformation have had limited impact, and MSMEs have insufficient support structures to verify information and protect themselves.
Recommendations for Countering Misinformation in the MSME Sector
To counter misinformation in the MSME sector, recommendations include creating a dedicated misinformation helpline, promoting awareness campaigns, creating regulatory support and guidelines, and collaborating with tech platforms and expert organisations to identify and curb misinformation.
Organisational recommendations include adopting information verification protocols so that critical information is verified before being acted upon, conducting regular employee training on identifying and managing misinformation, creating a crisis management plan for misinformation incidents, and forming collaboration networks with other MSMEs to share verified information and best practices.
MSMEs should also engage with technological solutions, such as AI and ML tools for detecting and flagging potential misinformation alongside fact-checking tools, and adopt cyber security measures to prevent the spread of misinformation via digital channels.
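The social-listening and flagging step above can be illustrated with a very small sketch: scan incoming brand mentions for unverified-claim markers and route suspicious ones to manual fact-checking. The marker list, threshold logic, and brand name below are illustrative assumptions, not a vetted detection model.

```python
# Illustrative sketch of a social-listening check an MSME might run:
# flag mentions that combine the company name with common unverified-
# claim markers, so a human can fact-check them before they spread.
# The marker list is a crude stand-in for a real ML classifier.

CLAIM_MARKERS = ("shocking", "exposed", "banned", "recall", "scam", "leaked")


def flag_mentions(mentions: list[str], brand: str) -> list[str]:
    """Return the subset of mentions worth routing to a fact-checker."""
    flagged = []
    for text in mentions:
        lowered = text.lower()
        if brand.lower() in lowered and any(m in lowered for m in CLAIM_MARKERS):
            flagged.append(text)
    return flagged
```

A real deployment would replace the keyword list with a trained classifier and feed flagged items into the crisis-management plan described earlier, but the pipeline shape (monitor, flag, verify, respond) is the same.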
Conclusion: Developing a Vulnerability Assessment Framework for MSMEs
Creating a vulnerability assessment framework for misinformation in Micro, Small, and Medium Enterprises (MSMEs) in India involves several key components: understanding the sources and types of misinformation, assessing the impact on MSMEs, identifying current policies and their gaps, and providing actionable recommendations. The implementation strategy for policies to counter misinformation in the MSME sector can begin with pilot programs in key MSME clusters and stakeholder engagement involving industry associations, tech companies, and government bodies, followed by a feedback mechanism for continuous improvement of the framework and, finally, a plan to scale successful initiatives across the country.
References
- https://publications.ut-capitole.fr/id/eprint/48849/1/wp_tse_1516.pdf
- https://techinformed.com/how-misinformation-can-impact-businesses/
- https://pib.gov.in/aboutfactchecke.aspx