Advisory for APS School Students
Pretext
The Army Welfare Education Society (AWES) has informed parents and students that a scam is targeting Army school students. The scammer approaches students, faking a female or male voice, and asks for their personal information and photos on the pretext of collecting details for an Independence Day event supposedly being organised by the society. AWES has cautioned parents to beware of these calls from scammers.
Students of Army schools in Jammu & Kashmir and Noida have been receiving calls from the scammer asking them to share sensitive information. Students across the country are getting calls and WhatsApp messages from two numbers ending in 1715 and 2167. The scammers pose as teachers and ask for students’ names on the pretext of adding them to WhatsApp groups. They then send form links to these groups and ask students to fill out the forms, seeking further sensitive information.
Do’s
- Do verify the caller’s identity.
- Do block the caller if a call seems suspicious.
- Do be careful while sharing personal information.
- Do inform the school authorities if you receive calls or messages from someone posing as a teacher.
- Do check the legitimacy of any agency or organisation before sharing details.
- Do record calls that ask for personal information.
- Do inform your parents about such scam calls.
- Do cross-check any caller who asks for crucial information.
- Do make others aware of the scam.
Don’ts
- Don’t answer anonymous or unknown calls.
- Don’t share personal information with anyone.
- Don’t share OTPs with anyone.
- Don’t open suspicious links.
- Don’t fill out forms that ask for personal information.
- Don’t confirm your identity until you have verified the caller.
- Don’t reply to messages asking for financial information.
- Don’t visit websites at the prompting of an unknown caller.
- Don’t share bank details or passwords.
- Don’t make payments in response to such calls.

As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical, or entirely fabricated, yet present them with complete confidence. Recently, OpenAI released two new models, o3 and o4-mini, which differ from earlier versions in that they focus on step-by-step reasoning rather than simple text prediction. With the growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy, and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work: the probability of hallucinations can be reduced, but their occurrence cannot be eliminated altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming is that newer, more advanced models are producing more hallucinations, not fewer, which seems counterintuitive. This has been especially prevalent in reasoning-based models, which generate answers step by step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for error at each step, especially when no factual retrieval or grounding is involved.
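To make this concrete, the following is a deliberately toy Python sketch of next-word prediction. The contexts, words, and probabilities are invented purely for illustration; a real LLM does the same kind of sampling with a neural network over a vocabulary of tens of thousands of tokens.

```python
import random

# Toy "language model": for each context, a probability distribution over
# possible next words, learned purely from patterns in text. All values
# here are made up for illustration.
next_word_probs = {
    "the capital of France is": {"Paris": 0.85, "Lyon": 0.10, "Rome": 0.05},
    "the study was published in": {"Nature": 0.40, "2021": 0.35, "Science": 0.25},
}

def predict_next(context: str) -> str:
    """Sample the next word in proportion to its learned probability."""
    dist = next_word_probs[context]
    return random.choices(list(dist.keys()), weights=list(dist.values()), k=1)[0]

# The model never checks whether a continuation is true; it only samples
# what is statistically plausible. "Rome" will be emitted roughly 5% of the
# time, and "Nature" can be produced whether or not such a paper exists.
print(predict_next("the capital of France is"))
print(predict_next("the study was published in"))
```

Every output above is delivered with the same "confidence"; nothing in the sampling step distinguishes a true continuation from a fabricated one.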
Reports shared on TechCrunch mention that when users asked AI models for short answers, hallucinations increased by up to 30%. A study published in eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions. This was not limited to that particular Large Language Model; similar ones, such as DeepSeek, showed the same behaviour. Even more concerning are hallucinations in multimodal models, such as those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only look real but are also capable of contributing to fabricated narratives, raising the stakes for the spread of misinformation during elections, crises, and other sensitive events.
That said, AI models are continually improving with each version, with developers focusing on reducing hallucinations and enhancing accuracy. New features, such as source links and citations, are being implemented to increase the transparency and reliability of responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. However, developers are aware of the problem and are actively charting out ways to reduce the probability of such errors. Some of them are:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model’s internal knowledge, RAG allows the model to “look up” information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data (a minimal sketch of this pattern follows this list).
- Use of smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is limited and better curated.
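As a minimal sketch of the RAG flow described above, consider the Python outline below. The `search_documents` and `generate_answer` helpers are hypothetical stand-ins, not a real library API: in practice, the first would query a vector store or search index, and the second would call an actual LLM.

```python
def search_documents(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: return up to k passages relevant to the query.
    A real system would rank documents by embedding similarity; this toy
    version just matches query words against a tiny in-memory corpus."""
    corpus = [
        "The Eiffel Tower was completed in 1889.",
        "The Louvre is the world's most-visited museum.",
    ]
    words = query.lower().split()
    return [text for text in corpus if any(w in text.lower() for w in words)][:k]

def generate_answer(prompt: str) -> str:
    """Hypothetical model call; a real system would invoke an LLM here."""
    return f"[model answer grounded in: {prompt!r}]"

def rag_answer(question: str) -> str:
    # 1. Retrieve supporting passages instead of relying on model memory.
    passages = search_documents(question)
    # 2. Anchor generation in the retrieved, verifiable text.
    context = "\n".join(passages)
    prompt = (
        "Answer using ONLY the sources below.\n"
        f"Sources:\n{context}\n"
        f"Question: {question}"
    )
    return generate_answer(prompt)

print(rag_answer("When was the Eiffel Tower completed?"))
```

The key design point is the retrieval step: because the prompt carries the retrieved passages, the model’s answer can be checked against named sources rather than against its opaque internal memory.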
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue to occur until foundational changes are made in how models are trained and deployed. For the time being, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations

Introduction
Misinformation spreads faster than a pimple before your best friend's wedding, and viral skincare hacks on social media can do more harm than good if smeared on without a second thought. Unverified skincare tips, exaggerated results, and product endorsements lacking proper dermatological backing can often lead to breakouts and serious skin damage.
The Allure and Risks of Online Skincare Trends
In the age of social media, beauty advice is easily accessible, but not all trending skincare hacks are beneficial. Influencers lacking professional dermatological knowledge often endorse "medical grade" skincare products, which may not be suitable for all skin types. Viral DIY skincare hacks, such as natural remedies like multani mitti (Fuller's earth), have found a new audience online. However, if such tips are followed without due care regarding their suitability for different skin types, or without properly formulated ingredients, they can result in skin problems. It is crucial to approach online skincare advice with a critical eye, as not all trends are backed by scientific research.
CyberPeace Recommendations
- Influencer Responsibility and Ethical Endorsements in Skincare
Influencers play a crucial role in shaping public perception in the skincare and lifestyle industries. However, they must exercise due diligence before endorsing skincare products or practices, as misinformation can lead to financial loss and health consequences. Influencers should only promote products they have personally tested or had vetted by dermatologists or skincare professionals. They should also research the brand's credibility, check ingredients for safety, and understand the product's target audience.
- Strengthening Digital Literacy in Skincare Spaces
CyberPeace highlights that improving digital literacy is one of the best strategies to stop the spread of false information about skincare. Users nowadays, particularly young people, are continuously exposed to a deluge of wellness and beauty content. Without the necessary digital literacy, many are duped by overstated claims, pseudoscientific cures, and influencer-driven marketing masquerading as sound advice. We recommend supporting digital literacy initiatives that teach users how to evaluate sources, think critically, and understand how algorithms promote content. Influencer partnerships, gamified learning modules, and community workshops that promote media literacy can help achieve long-term impact.
- Recommendation for Users to Prioritise Research and Critical Thinking
Users should prioritise research and critical thinking when engaging with skincare content online. It's crucial to distinguish between valid advice and misinformation. Thorough research, including expert reviews, ingredient checks, and scientific sources, is essential. Questioning endorsements and relying on trusted platforms and dermatologists can help ensure a skincare routine based on sound practices.
- Mandating Transparency from Influencers and Brands
Enforcing stronger transparency laws for influencers and skincare companies is a key suggestion. Social media influencers frequently neglect to disclose sponsored collaborations or paid advertisements, giving followers the impression that the skincare advice is based on the creators' own experience and objective judgement. This dishonest practice frequently promotes goods with little to no scientific support and feeds false information. Social media companies need to be proactive in identifying and removing content that violates disclosure and advertising guidelines.
- Creating a Verified Registry for Skincare Professionals
Amplifying the voices of real experts is one of the most important strategies for building credibility and trust online. Cybersecurity experts and medical professionals suggest establishing a publicly available, validated registry of certified dermatologists, cosmetologists, and skincare scientists. These experts could then receive a "verified expert" badge from social media companies, making it easier for users to distinguish genuine, evidence-based advice from content created by unqualified people. Algorithms that promote such verified content would naturally limit the dissemination of false information.
- Enforcing Platform Accountability and Reporting System
There needs to be platform-level accountability and safeguard mechanisms in case of any false information about skincare. Platforms should monitor repeat offenders and implement a tiered penalty system that includes content removal and temporary or permanent bans on such malicious user profiles.

Introduction
In a distressing incident that highlights the growing threat of cyber fraud, a software engineer in Bangalore fell victim to fraudsters who posed as police officials. These miscreants, operating under the guise of a fake courier service and law enforcement, employed a sophisticated scam to dupe unsuspecting individuals out of their hard-earned money. Unfortunately, this is not an isolated incident, as several cases of similar fraud have been reported recently in Bangalore and other cities. It is crucial for everyone to be aware of these scams and adopt preventive measures to protect themselves.
Bangalore Techie Falls Victim to ₹33 Lakh Scam
The software engineer received a call from someone claiming to be from FedEx courier service, informing him that a parcel sent in his name to Taiwan had been seized by the Mumbai police for containing illegal items. The call was then transferred to an impersonator posing as a Mumbai Deputy Commissioner of Police (DCP), who alleged that a money laundering case had been registered against him. The fraudsters then coerced him into joining a Skype call for verification purposes, during which they obtained his personal details, including bank account information.
Under the guise of verifying his credentials, the fraudsters manipulated him into transferring a significant amount of money to various accounts. They assured him that the funds would be returned after the completion of the procedure. However, once the money was transferred, the fraudsters disappeared, leaving the victim devastated and financially drained.
Best Practices to Stay Safe
- Be vigilant and skeptical: Maintain a healthy level of skepticism when receiving unsolicited calls or messages, especially if they involve sensitive information or financial matters. Be cautious of callers pressuring you to disclose personal details or engage in immediate financial transactions.
- Verify the caller’s authenticity: If someone claims to represent a legitimate organisation or law enforcement agency, independently verify their credentials. Look up the official contact details of the organisation or agency and reach out to them directly to confirm the authenticity of the communication.
- Never share sensitive information: Avoid sharing personal information, such as bank account details, passwords, or Aadhaar numbers, over the phone or through unfamiliar online platforms. Legitimate organisations will not ask for such information without proper authentication protocols.
- Use secure communication channels: When communicating sensitive information, prefer secure platforms or official channels that provide end-to-end encryption. Avoid switching to alternative platforms or applications suggested by unknown callers, as fraudsters can exploit these.
- Educate yourself and others: Stay informed about the latest cyber fraud techniques and scams prevalent in your region. Share this knowledge with family, friends, and colleagues to create awareness and prevent them from falling victim to similar schemes.
- Implement robust security measures: Keep your devices and software updated with the latest security patches. Utilize robust anti-virus software, firewalls, and spam filters to safeguard against malicious activities. Regularly review your financial statements and account activity to detect any unauthorized transactions promptly.
Conclusion
The incident involving the Bangalore techie and other victims of cyber fraud highlights the importance of remaining vigilant and adopting preventive measures to safeguard oneself from such scams. It is disheartening to see individuals falling prey to impersonators who exploit their trust and manipulate them into sharing sensitive information. By staying informed, exercising caution, and following best practices, we can collectively minimize the risk and protect ourselves from these fraudulent activities. Remember, the best defense against cyber fraud is a well-informed and alert individual.