Advisory for APS School Students
Pretext
The Army Welfare Education Society (AWES) has informed parents and students that a scam is targeting Army school students. The scammers approach students using faked female and male voices and ask for their personal information and photos, claiming that the details are needed for an Independence Day event being organised by the Society. AWES has cautioned parents to beware of these calls.
Students of Army schools in Jammu & Kashmir and Noida have been receiving calls from the scammers asking them to share sensitive information. Students across the country are receiving calls and WhatsApp messages from two numbers ending in 1715 and 2167. The scammers pose as teachers and ask for students’ names on the pretext of adding them to WhatsApp groups. They then send form links to these groups and ask the students to fill out the forms, seeking further sensitive information.
Do’s
- Do verify the caller’s identity.
- Do block the caller if you find the call suspicious.
- Do be careful while sharing personal information.
- Do inform the school authorities if you receive such calls or messages from people posing as teachers.
- Do check the legitimacy of any agency or organisation before sharing details.
- Do record calls that ask for personal information.
- Do inform your parents about such scam calls.
- Do cross-check the caller before sharing any crucial information.
- Do make others aware of the scam.
Don’ts
- Don’t answer calls from anonymous or unknown numbers.
- Don’t share personal information with anyone.
- Don’t share OTPs with anyone.
- Don’t open suspicious links.
- Don’t fill out any forms asking for personal information.
- Don’t confirm your identity until you know who the caller is.
- Don’t reply to messages asking for financial information.
- Don’t visit websites you are prompted to open by an unknown caller.
- Don’t share bank details or passwords.
- Don’t make payments in response to prompts from fake calls.
Executive Summary:
A video is circulating rapidly on social media alongside claims that Iran’s national security chief Ali Larijani was killed in an Israeli airstrike. The viral clip is being shared with the assertion that it shows the moment Israel launched a powerful attack on Iran to eliminate Larijani, with the ground allegedly shaking from the intensity of the strike. However, research by CyberPeace has found the claim to be misleading. The viral video is AI-generated and has no connection to real-world events.
Claim:
Social media users have shared the video with alarming captions. One such post by Deepak Sharma reads:
“WAR UPDATE… Iran is in its final phase… Israel is striking selectively… This attack will leave you shocked… Iran’s national security chief Ali Larijani has been killed in this attack… The intensity of the strike shook the Iranian ground.”
Similar videos were also shared by other users:
- https://x.com/ibmindia20/status/2038938020154597447
- https://x.com/Saurabh_raj3026/status/2038834832869032026
Fact Check
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search. During this process, we found the same video on Instagram, uploaded on March 9, 2026, by the account “_iranwar_2026” without any descriptive caption.
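Reverse image search tools of the kind used in this step typically match frames by comparing compact perceptual fingerprints rather than raw pixels. The sketch below is purely illustrative (the toy 4×4 "frames" and function names are hypothetical, and real pipelines use far more robust hashes and libraries such as OpenCV for keyframe extraction); it shows the average-hash idea, where near-duplicate frames produce near-identical fingerprints:

```python
def average_hash(pixels):
    """Compute a simple perceptual hash of a grayscale image.

    `pixels` is a 2D list of grayscale values (0-255). Each bit records
    whether a pixel is brighter than the image's mean, so small noise or
    brightness shifts leave the fingerprint largely unchanged.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; near-duplicate frames score low."""
    return sum(a != b for a, b in zip(h1, h2))

# Two near-identical toy "frames" and one unrelated frame (4x4 images).
frame_a = [[10, 10, 200, 200]] * 4
frame_b = [[12, 11, 198, 205]] * 4   # same scene, slight noise
frame_c = [[200, 10, 200, 10]] * 4   # different content

ha, hb, hc = (average_hash(f) for f in (frame_a, frame_b, frame_c))
print(hamming_distance(ha, hb))  # 0: identical fingerprints, frames match
print(hamming_distance(ha, hc))  # 8: clearly different frames
```

Matching a keyframe's fingerprint against an index of previously seen images is how a reverse search can surface an earlier upload of the same clip, as happened here.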

According to a BBC report, Ali Larijani died on March 17 in an Israeli strike. This establishes that the viral video predates the reported incident, making the claim factually inconsistent. Further examination of the Instagram account revealed that it frequently shares pro-Iran content, including gaming visuals and AI-generated clips, raising doubts about the authenticity of the video.

To strengthen the verification, we analyzed the viral clip using the AI detection tool “Zhuque AI Detection Assistant.” The results indicated a 91.71% probability that the video is AI-generated, confirming that it is not real footage.

Conclusion
The viral claim linking the video to an Israeli airstrike that allegedly killed Ali Larijani is misleading and factually incorrect. Multiple layers of verification show that the video existed online before the reported incident, ruling out any direct connection. Additionally, AI detection analysis strongly suggests that the video is artificially generated. The source account’s pattern of sharing AI and gaming-related content further weakens the credibility of the claim. There is no verified evidence to support that the viral clip depicts a real attack or any event related to Larijani’s death. Instead, the video appears to be a digitally created visual circulated without context to amplify misinformation.
Introduction
Smartphones have revolutionised human connectivity. In 2023, an estimated 96% of the global digital population accessed the internet via mobile phones, and India alone had 1.05 billion users. Information consumption has grown exponentially due to the accessibility these devices provide. They allow access to information no matter where one is and have completely transformed how we engage with the world around us, whether skimming work emails while commuting, streaming videos during breaks, reading an ebook at our convenience, or catching up on the news at any time or place. Mobile phones grant us instant access to the web and are always within reach.
But this instant connection has its downsides too, and one of the most worrying is the rampant rise of misinformation. These tiny screens, and our constant, on-the-go dependence on them, can be directly linked to the spread of “fake news,” as people consume more and more content in rapid bursts without taking the time to really process it or think deeply about its authenticity. There is an underlying cultural shift in how we approach information and learning: the onslaught of vast amounts of “bite-sized information” discourages people from researching what they are being told or shown. The focus has shifted from learning deeply to consuming more and sharing faster. This change in audience behaviour is making us vulnerable to misinformation, disinformation and unchecked foreign influence.
The Growth of Mobile Internet Access
More than 5 billion people are connected to the internet, and web traffic is increasing rapidly. Developed countries in North America and Europe have near-universal mobile internet penetration, while developing countries across Africa, Asia, and Latin America are experiencing rapid growth in adoption. The introduction of affordable smartphones and low-cost mobile data plans has expanded access to internet connectivity, and the development of 4G and 5G infrastructure has further bridged connectivity gaps. This widespread access to the mobile internet has democratised information, allowing millions of users to participate in the digital economy: they can, for example, access educational resources while engaging in global conversations. This reduces the digital divide between diverse groups and empowers communities with unprecedented access to knowledge and opportunities.
The Nature of Misinformation in the Mobile Era
Misinformation spread has become more prominent in recent times and one of the contributing factors is the rise of mobile internet. This instantaneous connection has made social media platforms like Facebook, WhatsApp, and X (formerly Twitter) available on a single compact and portable device. These social media platforms enable users to share content instantly and to a wide user base, many times without verifying its accuracy. The virality of social media sharing, where posts can reach thousands of users in seconds, accelerates the spread of false information. This ease of sharing, combined with algorithms that prioritise engagement, creates a fertile ground for misinformation to flourish, misleading vast numbers of people before corrections or factual information can be disseminated.
Several factors amplify misinformation sharing over the mobile internet: algorithmic amplification that prioritises engagement, the ease of sharing user-generated content through instant access, the limited media literacy of many users, and echo chambers that reinforce existing biases and spread false information.
Gaps and Challenges due to the increased accessibility of Mobile Internet
Despite growing concerns about the spread of misinformation via the mobile internet, policy responses remain inadequate, particularly in developing countries. These gaps include a lack of algorithm regulation, as social media platforms prioritise engaging content that often fuels misinformation. Inadequate international cooperation further complicates enforcement, as national regulations struggle to address the cross-border nature of misinformation.
Additionally, balancing content moderation with free speech remains challenging, with efforts to curb misinformation sometimes leading to concerns over censorship.
Finally, a deficit in media literacy leaves many vulnerable to false information. Governments and international organisations must prioritise public education to equip users with the required skills to evaluate online content, especially in low-literacy regions.
CyberPeace Recommendations
Addressing misinformation via mobile internet requires a collaborative, multi-stakeholder approach.
- Governments should mandate algorithm transparency, ensuring social media platforms disclose how content is prioritised and give users more control.
- Collaborative fact-checking initiatives between governments, platforms, and civil society could help flag or correct false information before it spreads, especially during crises like elections or public health emergencies.
- International organisations should lead efforts to create standardised global regulations to hold platforms accountable across borders.
- Additionally, large-scale digital literacy campaigns are crucial, teaching the public how to assess online content and avoid misinformation traps.
Conclusion
Mobile internet access has transformed information consumption and bridged the digital divide. At the same time, it has also accelerated the spread of misinformation. The global reach and instant nature of mobile platforms, combined with algorithmic amplification, have created significant challenges in controlling the flow of false information. Addressing this issue requires a collective effort from governments, tech companies, and civil society to implement transparent algorithms, promote fact-checking, and establish international regulatory standards. Digital literacy should be enhanced to empower users to assess online content and counter any risks that it poses.
References
- https://www.statista.com/statistics/1289755/internet-access-by-device-worldwide/
- https://www.forbes.com/sites/kalevleetaru/2019/05/01/are-smartphones-making-fake-news-and-disinformation-worse/
- https://www.pewresearch.org/short-reads/2019/03/07/7-key-findings-about-mobile-phone-and-social-media-use-in-emerging-economies/ft_19-02-28_globalmobilekeytakeaways_misinformation/
- https://www.psu.edu/news/research/story/slow-scroll-users-less-vigilant-about-misinformation-mobile-phones

Introduction
The rapid advancement of artificial intelligence (AI) technology has sparked intense debates and concerns about its potential impact on humanity. Sam Altman, CEO of the AI research laboratory OpenAI and known as the father of the AI chatbot ChatGPT, holds a complex position, recognising both the existential risks AI poses and its potential benefits. On a world tour to raise awareness about AI risks, Altman has advocated for global cooperation to establish responsible guidelines for AI development. Artificial intelligence has become a topic of increasing interest and concern as technology advances, and developing sophisticated AI systems raises many ethical questions, including whether they will ultimately save or destroy humanity.
Addressing Concerns
Altman engages with various stakeholders, including protesters who voice concerns about the race toward artificial general intelligence (AGI). Critics argue that focusing on safety rather than pushing AGI development would be a more responsible approach. Altman acknowledges the importance of safety progress but believes capability progress is necessary to ensure safety. He advocates for a global regulatory framework similar to the International Atomic Energy Agency, which would coordinate research efforts, establish safety standards, monitor computing power dedicated to AI training, and possibly restrict specific approaches.
Risks of AI Systems
While AI holds tremendous promise, it also presents risks that must be carefully considered. One of the major concerns is the development of artificial general intelligence (AGI) without sufficient safety precautions. AGI systems with unchecked capabilities could potentially pose existential risks to humanity if they surpass human intelligence and become difficult to control. These risks include the concentration of power, misuse of technology, and potential for unintended consequences.
There are also fears surrounding AI systems’ impact on employment. As machines become more intelligent and capable of performing complex tasks, there is a risk that many jobs will become obsolete. This could lead to widespread unemployment and economic instability if steps are not taken to prepare for this shift in the labour market.
While these risks are certainly cause for concern, it is important to remember that AI systems also have tremendous potential to do good in the world. By carefully designing these technologies with ethics and human values in mind, we can mitigate many of the risks while still reaping the benefits of this exciting new frontier in technology.

Open AI Systems and Chatbots
Open AI systems like ChatGPT and chatbots have gained popularity due to their ability to engage in natural language conversations. However, they also come with risks: reliance on large-scale training data can lead to biases, misinformation, and unethical use of AI. Ensuring the safety and responsible development of open AI systems is essential to mitigate potential harm and maintain public trust.
The Need for Global Cooperation
Sam Altman and other tech leaders emphasise the need for global cooperation to address the risks associated with AI development. They advocate for establishing a global regulatory framework for superintelligence. Superintelligence refers to AGI operating at an exceptionally advanced level, capable of solving complex problems that have eluded human comprehension. Such a framework would coordinate research efforts, enforce safety standards, monitor computing power, and potentially restrict specific approaches. International collaboration is essential to ensure responsible and beneficial AI development while minimising the risks of misuse or unintended consequences.
Can AI Systems Make the World a Better Place: Benefits of AI Systems
AI systems hold many benefits that can greatly improve human life. One of the most significant advantages of AI is its ability to process large amounts of data at a rapid pace. In industries such as healthcare, this has allowed for faster diagnoses and more effective treatments. Another benefit of AI systems is their capacity to learn and adapt over time. This allows for more personalised experiences in areas such as customer service, where AI-powered chatbots can provide tailored solutions based on an individual’s needs. Additionally, AI can potentially increase efficiency in various industries, from manufacturing to transportation. By automating repetitive tasks, human workers can focus on higher-level tasks that require creativity and problem-solving skills. Overall, the benefits of AI systems are numerous and promising for improving human life in various ways.
We must also remember the impact of AI on education. It has already started to show its potential by providing personalised learning experiences for students at all levels. With the help of AI-driven systems like intelligent tutoring systems (ITS), adaptive learning technologies (ALT), and educational chatbots, students can learn at their own pace without feeling overwhelmed or left behind.
While there are certain risks associated with the development of AI systems, there are also numerous opportunities for them to make our world a better place. By harnessing the power of these technologies for good, we can create a brighter future for ourselves and generations to come.

Conclusion
The AI revolution presents both extraordinary opportunities and significant challenges for humanity. The benefits of AI, when developed responsibly, have the potential to uplift societies, improve quality of life, and address long-standing global issues. However, the risks associated with AGI demand careful attention and international cooperation. Governments, researchers, and industry leaders must work together to establish guidelines, safety measures, and ethical standards to navigate the path toward AI systems that serve humanity’s best interests and safeguard against potential risks. By taking a balanced approach, we can strive for a future where AI systems save humanity rather than destroy it.