Advisory for APS School Students
Pretext
The Army Welfare Education Society (AWES) has informed parents and students that a scam is targeting Army school students. The scammers approach students using both female and male voices and ask for students' personal information and photos, claiming to be collecting details for an Independence Day event being organised by AWES. The society has cautioned parents to beware of these calls.
Students of Army schools in Jammu & Kashmir and Noida have been receiving calls from the scammers asking them to share sensitive information. Students across the country are receiving calls and WhatsApp messages from two numbers ending in 1715 and 2167. The scammers pose as teachers and ask for students' names on the pretext of adding them to WhatsApp groups. They then send form links to these groups and ask students to fill out the forms, seeking further sensitive information.
Do’s
- Do verify the identity of the caller.
- Do block the caller if the call seems suspicious.
- Do be careful while sharing personal information.
- Do inform the school authorities on receiving such calls or messages from people posing as teachers.
- Do check the legitimacy of any agency or organisation before sharing details.
- Do record calls asking for personal information.
- Do inform parents about such scam calls.
- Do cross-check callers who ask for crucial information.
- Do make others aware of the scam.
Don’ts
- Don’t answer calls from anonymous or unknown numbers.
- Don’t share personal information with anyone.
- Don’t share OTPs with anyone.
- Don’t open suspicious links.
- Don’t fill out forms asking for personal information.
- Don’t confirm your identity until you know who the caller is.
- Don’t reply to messages asking for financial information.
- Don’t visit websites at the prompting of an unverified caller.
- Don’t share bank details or passwords.
- Don’t make payments in response to such calls.
Introduction
Earlier this month, lawmakers in Colorado, a U.S. state, were summoned to a special legislative session to rewrite their newly passed Artificial Intelligence (AI) law before it even takes effect. Although the discussion taking place in Denver may seem distant, evolving regulations like this one directly address issues that India will soon encounter as we forge our own course for AI governance.
The Colorado Artificial Intelligence Act
Colorado became the first U.S. state to pass a comprehensive AI accountability law, set to come into force in 2026. It aims to protect people from bias, discrimination, and harm caused by predictive algorithms, since AI tools have been known to reproduce societal biases by sidelining women from hiring processes, penalising loan applicants from poor neighbourhoods, or wrongly denying citizens their benefits through welfare systems. But the law met resistance from tech companies, which threatened to pull out from the state, claiming it is too broad in scope in its current form and would stifle innovation. This brings critical questions about AI regulation to the forefront:
- Who should be responsible when AI causes harm? Developers, deployers, or both?
- How should citizens seek justice?
- How can tech companies be incentivised to develop safe technologies?
Colorado’s governor has called a special session to update the law before it kicks in.
What This Means for India
India is on the path towards framing a dedicated AI-specific law or directions, with discussions underway through the IndiaAI Mission, the proposed Digital India Act, the committee set up by the Delhi High Court on deepfakes, and other measures. But the dilemmas Colorado is wrestling with are also relevant here.
- AI uptake is growing in public service delivery in India. Facial recognition systems are expanding in policing, despite accuracy and privacy concerns. Fintech apps using AI-driven credit scoring raise questions of fairness and transparency.
- Accountability is unclear. If an Indian AI-powered health app gives faulty advice, who should be liable: the global developer, the Indian startup deploying it, or the regulator who failed to set safeguards?
- India has more than 1,500 AI startups (NASSCOM), which, like Colorado’s firms, fear that onerous compliance could choke growth. But weak guardrails could undermine public trust in AI altogether.
Lessons for India
India’s Ministry of Electronics and Information Technology (MeitY) favours a light-touch approach to AI regulation and is exploring ways to frame future-proof guidelines. Further, lessons from other global frameworks can guide its way.
- Colorado’s case shows us the necessity of incorporating feedback loops in the policy-making process. India should utilise regulatory sandboxes and open, transparent consultation processes before locking in rigid rules.
- It will also need to explore proportionate obligations, lighter for low-risk applications and stricter for high-risk use cases such as policing, healthcare, or welfare delivery.
- Europe’s AI Act is heavy on compliance, the U.S. federal government leans toward deregulation, and Colorado is somewhere in between. India has the chance to create a middle path, grounded in our democratic and developmental context.
Conclusion
As AI becomes increasingly embedded in hiring, banking, education, and welfare, opportunities for ordinary Indians are being redefined. To shape how this pans out, states like Tamil Nadu and Telangana have taken early steps to frame AI policies. Lessons will emerge from their initiative in addressing AI governance. Policy and regulation will always be contested, but contestations are a part of the process.
The Colorado debate shows us how participative law-making, with room for debate, revision, and iteration, is not a weakness but a necessity. For India’s emerging AI governance landscape, the challenge will be to embrace this process while ensuring that citizen rights and inclusion are balanced well with industry concerns. CyberPeace advocates for responsible AI regulation that balances innovation and accountability.
References
- https://www.cbsnews.com/colorado/news/colorado-lawmakers-look-repeal-replace-controversial-artificial-intelligence-law/
- https://www.naag.org/attorney-general-journal/a-deep-dive-into-colorados-artificial-intelligence-act/
- https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en
- https://the-captable.com/2024/12/india-ai-regulation-light-touch/
- https://indiaai.gov.in/article/tamilnadu-s-ai-policy-six-step-tamdef-guidance-framework-and-deepmax-scorecard
Starting on 16th February 2025, Google changed its advertising platform program policy: advertisers and organizations that use its advertising services are now permitted to employ device fingerprinting techniques to track their users. Originally announced on 18th December 2024, this rule change has sparked yet another debate regarding privacy and profits.
The Issue
Fingerprinting is a technique that collects information about a user’s device and browser details, ultimately enabling the creation of a profile of the user. Data procured in this manner is neither used for nor limited to targeted advertising; it can be used by private entities and even government organizations to identify individuals who access their services. If information on customization options, such as language settings and a user’s screen size, is collected, it becomes easier to identify an individual when combined with data points like browser type, time zone, battery status, and even IP address.
What makes this technique contentious at the moment is the lack of awareness regarding the information being collected from the user and the inability to opt out once permissions are granted.
This is unlike Google’s standard system of data collection through permission requests, such as accepting website cookies—small text files sent to the browser when a user visits a particular website. While contextual and first-party cookies limit data collection to enhance user experience, third-party cookies enable the display of irrelevant advertisements while users browse different platforms. Due to this functionality, companies can engage in targeted advertising.
This issue has been addressed in laws like the General Data Protection Regulation (GDPR) of the European Union (EU) and the Digital Personal Data Protection (DPDP) Act, 2023 (India), which mandate strict rules and regulations regarding advertising, data collection, and consent, among other things. One of the major requirements in both laws is obtaining clear, unambiguous consent. This also includes the option to opt out of previously granted permissions for cookies.
However, in the case of fingerprinting, the mechanism of data collection relies on signals that users cannot easily erase. While clearing all data from the browser or refusing cookies might seem like appropriate steps to take, they do not prevent tracking through fingerprinting, as users can still be identified using system details that a website has already collected. This applies to all IoT products as well. People usually do not frequently change the devices they use, and once a system is identified, there are no available options to stop tracking, as fingerprinting relies on device characteristics rather than data-collecting text files that could otherwise be blocked.
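The mechanism described above can be illustrated with a minimal sketch. The signal names and values below are hypothetical stand-ins for attributes a browser script would typically read (user agent, language, time zone, screen geometry); the point is that hashing stable device characteristics yields an identifier that survives cookie deletion, because none of these values change when browser data is cleared.

```python
import hashlib

# Hypothetical device signals of the kind a fingerprinting script might
# collect from a browser. Names and values are illustrative only.
signals = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "language": "en-IN",
    "timezone": "Asia/Kolkata",
    "screen": "1920x1080x24",
    "platform": "Win32",
}

def fingerprint(device_signals: dict) -> str:
    """Combine device signals into a single stable identifier.

    The signals are serialized in a canonical (sorted) order and hashed,
    so the same device always produces the same identifier, regardless
    of whether cookies or site data have been cleared.
    """
    canonical = "|".join(f"{k}={device_signals[k]}" for k in sorted(device_signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

print(fingerprint(signals))
```

Because the identifier is derived from device characteristics rather than stored on the device, there is nothing for the user to delete or block, which is exactly the opt-out gap the regulators cited above are concerned with.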
Google’s Changing Stance
According to Statista, Google’s revenue is largely made up of the advertisement services it provides (amounting to 264.59 billion U.S. dollars in 2024). Any change in its advertisement program policies draws significant attention due to its economic impact.
In 2019, Google claimed in a blog post that fingerprinting was a technique that “subverts user choice and is wrong.” It is in this context that the recent policy shift comes as a surprise. In response, the ICO (Information Commissioner’s Office), the UK’s data privacy watchdog, has stated that this change is irresponsible. Google, however, is eager to have further discussions with the ICO regarding the policy change.
Conclusion
The debate regarding privacy in targeted advertising has been ongoing for quite some time. Concerns about digital data collection and storage have led to new and evolving laws that mandate strict fines for non-compliance.
Google’s shift in policy raises pressing concerns about user privacy and transparency. Fingerprinting, unlike cookies, offers no opt-out mechanism, leaving users vulnerable to continuous tracking without consent. This move contradicts Google’s previous stance and challenges global regulations like the GDPR and DPDP Act, which emphasize clear user consent.
With regulators like the ICO expressing disapproval, the debate between corporate profits and individual privacy intensifies. As digital footprints become harder to erase, users, lawmakers, and watchdogs must scrutinize such changes to ensure that innovation does not come at the cost of fundamental privacy rights.
References
- https://www.techradar.com/pro/security/profit-over-privacy-google-gives-advertisers-more-personal-info-in-major-fingerprinting-u-turn
- https://www.ccn.com/news/technology/googles-new-fingerprinting-policy-sparks-privacy-backlash-as-ads-become-harder-to-avoid/
- https://www.emarketer.com/content/google-pivot-digital-fingerprinting-enable-better-cross-device-measurement
- https://www.lewissilkin.com/insights/2025/01/16/google-adopts-new-stance-on-device-fingerprinting-102ju7b
- https://www.lewissilkin.com/insights/2025/01/16/ico-consults-on-storage-and-access-cookies-guidance-102ju62
- https://www.bbc.com/news/articles/cm21g0052dno
- https://www.techradar.com/features/browser-fingerprinting-explained
- https://fingerprint.com/blog/canvas-fingerprinting/
- https://www.statista.com/statistics/266206/googles-annual-global-revenue/#:~:text=In%20the%20most%20recently%20reported,billion%20U.S.%20dollars%20in%202024

Introduction
The Ministry of Electronics and Information Technology (MeitY) recently issued the “Email Policy of Government of India, 2024.” It is an updated email policy for central government employees, requiring the exclusive use of official government emails managed by the National Informatics Centre (NIC) for public duties. The policy replaces 2015 guidelines and prohibits government employees, contractors, and consultants from using their official email addresses on social media or other websites unless authorised for official functions. The policy aims to reinforce cybersecurity measures and protocols, maintain secure communications, and ensure compliance across departments. It is not legally binding, but its gazette notification ensures compliance and maintains cyber resilience in communications. The updated policy is also aligned with the newly enacted Digital Personal Data Protection Act, 2023.
Brief Highlights of Email Policy of Government of India, 2024
- The Email Policy of the Government of India, 2024 is divided into three parts: Part I: Introduction, Part II: Terms of Use, and Part III: Functions, Duties and Responsibilities, with an annexe defining certain organisation types in relation to this policy.
- The policy directs users not to use their NICeMail address to register on any social media or other websites or mobile applications, save for the performance of official duties or with due authorisation from the competent authority.
- Under this new policy, “core use organisations” (central government departments and other government-controlled entities that do not provide goods or services on commercial terms) and their users shall use only NICeMail for official purposes.
- However, where a Core Use Organisation has an office or establishment outside India, it may, with due approval, use alternative email services hosted outside India to ensure the availability of local communication channels under exigent circumstances.
- Core Use Organisations, including those dealing with national security, that have their own independent email servers can continue operating them provided the servers are hosted in India. They should also consider migrating their email services to NICeMail for security and uniform policy enforcement.
- The policy also requires departments that currently use @gov.in or @nic.in to migrate to @departmentname.gov.in mail domains so that information sanctity and integrity can be maintained when officials are transferred from one department/ministry to another, and so that the ministry/department doesn’t lose access to official communication. For this, the department or ministry in question must register the domain name with NIC. For instance, MeitY has registered the mail domain @meity.gov.in. The policy gives government departments six months to complete this migration.
- The policy also makes a distinction between (1) organisation-linked email addresses and (2) service-linked email addresses. The policy on “organisation-linked email addresses” is laid down in paragraphs 5.3.2(a) and 5.4 to 5.6.3, and the policy on “service-linked email addresses” in paragraphs 5.3.2(b) and 5.7 to 5.7.2 of the official policy document.
- Further, the new policy includes specific directives on separating the email addresses of regular government employees from those of contractors or consultants to improve operational clarity.
CyberPeace Policy Outlook
The revised Email Policy of the Government of India reflects the government’s proactive response to evolving cybersecurity challenges and aims to maintain cyber resilience across government departments’ email communications. The policy represents a significant step towards securing inter-government and intra-government communications. As a cybersecurity expert organisation, we emphasise the importance of protecting sensitive data against cyber threats, particularly in a world increasingly targeted by sophisticated phishing and malware attacks, and we advocate for safe and secure online communication and information exchange. Email communications hold sensitive information and therefore require robust policies and mechanisms to safeguard them, ensuring that sensitive data is shielded through regulated and secure email usage backed by technical capabilities for safe use. The proactive step taken by MeitY is commendable and aligned with securing governmental communication channels.
References:
- https://www.meity.gov.in/writereaddata/files/Email-policy-30-10-2024.pdf (official document of the Email Policy of Government of India, 2024)
- https://www.hindustantimes.com/india-news/dont-use-govt-email-ids-for-social-media-central-govt-policy-for-employees-101730312997936.html#:~:text=Government%20employees%20must%20not%20use,email%20policy%20issued%20on%20Wednesday
- https://bwpeople.in/article/new-email-policy-issued-for-central-govt-employees-to-strengthen-cybersecurity-measures-537805
- https://www.thehindu.com/news/national/centre-notifies-email-policy-for-ministries-central-departments/article68815537.ece