Honouring UN Peacekeepers: Safeguarding Peace in a Changing World
Introduction
Today, on the International Day of UN Peacekeepers, we honour the brave individuals who risk their lives to uphold peace in the world’s most fragile and conflict-ridden regions. These peacekeepers are symbols of hope, diplomacy, and resilience. But as the world changes, so do the arenas of conflict. In today’s interconnected age, peace and safety are no longer confined to physical spaces—they extend to the digital realm. As we commemorate their service, we must also reflect on the new frontlines of peacekeeping: the internet, where misinformation, cyberattacks, and digital hate threaten stability every day.
The Legacy of UN Peacekeepers
Since 1948, UN Peacekeepers have served in over 70 missions, protecting civilians, facilitating political processes, and rebuilding societies. From conflict zones in Africa to the Balkans, they’ve worked in the toughest terrains to keep the peace. Their role is built on neutrality, integrity, and international cooperation. But as hybrid warfare becomes more prominent and digital threats increasingly influence real-world violence, the peacekeeping mandate must evolve. Traditional missions are now accompanied by the need to understand and respond to digital disruptions that can escalate local tensions or undermine democratic institutions.
The Digital Battlefield
In recent years, we’ve seen how misinformation, deepfakes, online radicalisation, and coordinated cyberattacks can destabilise peace processes. Disinformation campaigns can polarise communities, hinder humanitarian efforts, and provoke violence. Peacekeepers now face the added challenge of navigating conflict zones where digital tools are weaponised. The line between physical and virtual conflict is blurring. Cybersecurity has gone beyond being just a technical issue and is now a peace and security issue as well. From securing communication systems to monitoring digital hate speech that could incite violence, peacekeeping must now include digital vigilance and strategic digital diplomacy.
Building a Culture of Peace Online
Safeguarding peace today also means protecting people from harm in the digital space. Governments, tech companies, civil society, and international organisations must come together to build digital resilience. This includes investing in digital literacy, combating online misinformation, and protecting human rights in cyberspace. Peacekeepers may not wear blue helmets online, but their spirit lives on in every effort to make the internet a safer, kinder, and more truthful place. The role of youth, educators, and responsible digital citizens has never been more crucial. A culture of peace must be cultivated both offline and online.
Conclusion: A Renewed Pledge for Peace
On this UN Peacekeepers’ Day, let us not only honour those who have served and sacrificed but also renew our commitment to peace in all its dimensions. The world’s conflicts are evolving, and so must our response. As we support peacekeepers on the ground, let’s also become peacebuilders in the digital world, amplifying truth, rejecting hate, and building safer, inclusive communities online. Peace today is not just about silencing guns but also silencing disinformation. The call for peace is louder than ever. Let’s answer it, both offline and online.

Introduction
Technology penetration in India was relatively slow in previous decades, but that has now changed. Cyberspace has touched every country and significantly narrowed the gap between developed, developing, and underdeveloped nations. The Covid-19 pandemic underscored this: as the world went into lockdown, cyberspace became the primary medium of communication and information. India witnessed a 61% rise in internet users, with a significant share of that number coming from rural India.
New Standards
These standards have been released covering three areas – Digital Television Receivers, USB Type-C chargers, and Video Surveillance Systems – thus streamlining the use of gadgets and reducing e-waste in the country.
1. Digital Television Receivers
The Indian standard IS 18112:2022 specifies requirements for digital television receivers. It enables reception of free-to-air TV and radio channels simply by connecting a dish antenna, with the LNB mounted in a location with good signal reception. This will help disseminate information about government initiatives and schemes, the educational content of Doordarshan, and the repository of Indian cultural programmes. Doordarshan is in the process of phasing out analogue transmission, and free-to-air channels will continue to be broadcast using digital satellite transmission. Educational and awareness programmes run by the government and CSOs will reach more Indians than before, as the Ministry of Information and Broadcasting intends to increase Doordarshan’s free channels from 55 to 200 by the end of this year, which underscores the importance of developments in the mass media industry.
2. USB Type C
The standard IS/IEC 62680-1-3:2022 for USB Type-C receptacles, plugs, and cables adopts the existing global standard IEC 62680-1-3:2022. It sets out the requirements for USB Type-C ports and cables used in electronic devices such as laptops, mobile phones, and other gadgets. It is similar to the new European standard, which likewise aims to reduce carbon emissions and e-waste, and it will make things easier for both industry and end users. It will also strengthen cyber security and help prevent threats like ‘juice jacking’ to a large extent.
3. Video Surveillance System
IS 16190 provides a detailed outline of video surveillance systems (VSS), covering requirements for components such as camera devices and interfaces, system requirements, and tests to ascertain a camera’s image quality on different devices. This series of standards will assist customers, installers, and users in establishing their requirements, determining the equipment appropriate for their intended application, and evaluating the performance of a VSS objectively. It will also improve surveillance by individuals, support better investigations by law enforcement agencies, and enable faster apprehension of criminals, thus contributing to an overall safer society.

The Advantages
These standards are on par with internationally prevalent standards, raising safety considerations to a global level. They will also allow Indian industry to create world-class products that can be sold across the globe, opening India to new opportunities and job avenues and encouraging the world to invest in India. Atma Nirbhar Bharat and Digital India will be strengthened to a new level as the nation becomes able to deliver products on par in quality with those of developed countries. The Indian consumer will benefit the most from these upgraded standards for digital televisions, USB Type-C chargers, and video surveillance systems, as these affect consumers’ daily activities in terms of security and access to information.
- Reduction in Carbon Emission
- Production of World Class components and devices
- Boost to the economy and Atmanirbhar Bharat
- New avenues and opportunities for startups and MSMEs
- Better transmission of Knowledge
- Boosting FDI
- Improved quality of products for the end consumer
- New innovation hubs and exposure to global talents
This move shows how India is working toward the United Nations Sustainable Development Goals (SDGs). It sends a clear message to the world that India is ready for the future and will lend a helping hand to developing and underdeveloped nations in the times to come.
Conclusion
These standards will significantly contribute to the reduction of e-waste and of unnecessary accessories for daily-use gadgets. This reinforces the reduction in carbon emissions, contributing to the preservation of the environment and to the Sustainable Development Goals. Such standards will help secure netizens and their new and evolving digital habits. In the current phase of cyberspace, establishing critical infrastructure is essential, as it will act as a shield against cyber threats.

Introduction
The first activity one engages in while using social media is scrolling through their feed and liking or reacting to posts. Social media users' online activity is passive, involving merely reading and observing, while active use occurs when a user consciously decides to share information or comment after actively analysing it. We often "like" photos, posts, and tweets reflexively, hardly stopping to think about why we do it and what information it contains. This act of "liking" or "reacting" is a passive activity that can spark an active discourse. Frequently, we encounter misinformation on social media in various forms, which could be identified as false at first glance if we exercise caution and avoid validating it with our likes.
Passive engagement, such as liking or reacting to a post, triggers social media algorithms to amplify its reach, exposing it to a broader audience. This amplification increases the likelihood of misinformation spreading quickly as more people interact with it. As the content circulates, it gains credibility through repeated exposure, reinforcing false narratives and expanding its impact.
Social media platforms are designed to facilitate communication and conversations for various purposes. However, this design also enables the sharing, exchange, distribution, and reception of content, including misinformation. This can lead to the widespread dissemination of false information, influencing public opinion and behaviour. Misinformation has been identified as a contributing factor in various contentious events, ranging from elections and referenda to political or religious persecution, as well as the global response to the COVID-19 pandemic.
The Mechanics of Passive Sharing
Sharing a post without checking the facts it states, or sharing it without providing any context, can spread misinformation knowingly or unknowingly. The problem with sharing and forwarding information without fact-checking is that it usually starts in small, trusted networks before being seen widely across the internet; this expanding web is effectively infinite, so it must be cut off at the roots. The rapid spread of information on social media is driven by algorithms that prioritise engagement; these often amplify misleading or false content and so contribute to the spread of misinformation. The algorithm optimises the feed so that the posts users are most likely to engage with appear at the top of the timeline, encouraging a cycle of liking and posting that keeps users active and scrolling.
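As a rough illustration of the engagement-first ranking described above, here is a minimal sketch. The scoring weights, the recency decay, and the sample posts are all invented for illustration; no real platform’s algorithm is this simple.

```python
from datetime import datetime, timedelta

def engagement_score(post, now):
    """Score a post by raw engagement, with a mild recency boost."""
    age_hours = (now - post["posted_at"]).total_seconds() / 3600
    engagement = post["likes"] + 2 * post["shares"] + 3 * post["comments"]
    return engagement / (1 + age_hours)  # newer, high-engagement posts rank higher

def rank_feed(posts, now):
    """Order the feed purely by engagement score; accuracy plays no role."""
    return sorted(posts, key=lambda p: engagement_score(p, now), reverse=True)

now = datetime(2024, 1, 1, 12, 0)
posts = [
    {"id": "verified-report", "likes": 40, "shares": 5, "comments": 2,
     "posted_at": now - timedelta(hours=6)},
    {"id": "sensational-rumour", "likes": 90, "shares": 60, "comments": 30,
     "posted_at": now - timedelta(hours=1)},
]
feed = rank_feed(posts, now)
print([p["id"] for p in feed])
```

Note that the score rewards interaction volume and freshness only; whether a post is accurate never enters the calculation, which is why sensational misinformation can outrank verified reporting.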
The internet reaches billions of individuals and allows persuasive messages to be tailored to the specific profiles of individual users. Because of this reach, it is an ideal medium for the fast spread of falsehoods at the expense of accurate information.
Recommendations for Combating Passive Sharing
Combating the passive sharing we indulge in is important, and some of the ways we can do so are as follows:
- We need to critically evaluate the sources before sharing any content. This will ensure that the information source is not corrupted and used as a means to cause disruptions. The medium should not be used to spread misinformation due to the source's ulterior motives. Tools such as crowdsourcing and AI methods have been used in the past to evaluate the sources and have been successful to an extent.
- Engaging with fact-checking tools and verifying the information is also crucial. The information that has been shared on the post needs to be verified through authenticated sources before indulging in the practice of sharing.
- Being mindful of the potential impact of online activity, including likes and shares, is important. The reach that social media users have today stems from several factors, ranging from the content they create to the rate at which they engage with other users. Liking and sharing content might not seem like much for an individual user, but the collective impact is huge.
Conclusion
Passive sharing of misinformation, such as liking or sharing without verification, amplifies false information, erodes trust in legitimate sources, and deepens social and political divides. It can lead to real-world harm and ethical dilemmas. Critical evaluation, fact-checking, and mindful online engagement are essential to mitigating this passive spread of misinformation. The small act of “liking” or “sharing” has a far greater effect than we anticipate, and we should be mindful of all our activities on digital platforms.
References
- https://www.tandfonline.com/doi/full/10.1080/00049530.2022.2113340#summary-abstract
- https://timesofindia.indiatimes.com/city/thane/badlapur-protest-police-warn-against-spreading-fake-news/articleshow/112750638.cms

The Expanding Governance Challenge of Artificial Intelligence
Artificial intelligence (AI) systems are increasingly embedded in economic and social infrastructure. They are being adopted in financial services, healthcare diagnostics, hiring systems, and public administration. But while these systems improve efficiency and decision-making, they also introduce new forms of technological risk.
Unlike conventional software, AI systems learn patterns from data and continue to evolve as they run. This poses governance issues since risks can arise throughout the AI life cycle, whether at the coding level or in their implementation.
The latest regulatory frameworks, such as the European Union’s AI Act (EU AI Act) and the UNESCO Recommendation on the Ethics of Artificial Intelligence, recognise that responsible AI governance depends on understanding where risks emerge across the development process.
This article maps the AI system lifecycle, identifies the risks that emerge at each stage, and evaluates the policy tools used to mitigate them, using the lifecycle framework developed by the Organisation for Economic Co-operation and Development (OECD).
The Lifecycle of an AI System
AI systems are developed through a structured process that includes problem definition, dataset collection and preparation, model development, testing and validation, deployment, and monitoring.

The OECD conceptualises this development process as the AI system lifecycle. Each stage entails various technical and administrative procedures, and choices made during these stages dictate the goals and limits of an AI system. Further, the quality and representativeness of training sets have a strong effect on the behaviour of models after implementation.
Since this is an iterative rather than a linear process, risks can be introduced at any stage of the AI lifecycle. Models are retrained on new data, and systems are regularly updated after deployment to address performance degradation, model errors, or unintended outputs. This iterative process means governance must address risks across the entire lifecycle, not just at deployment.
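The iterative loop described above can be sketched in a few lines of Python. All stage functions here are illustrative stand-ins (the names loosely follow the OECD lifecycle stages), and the monitoring check is a toy placeholder rather than a real performance metric:

```python
# Hypothetical stage functions; names and logic are illustrative only.
def define_problem():
    return {"objective": "flag risky loan applications"}

def collect_data(iteration):
    # Each iteration may add newly observed records to the training set.
    return [{"income": 30 + 5 * i, "label": i % 2} for i in range(10 + iteration)]

def train_model(data):
    # Stand-in "model": the share of positive labels in the training data.
    positives = sum(row["label"] for row in data)
    return {"positive_rate": positives / len(data)}

def monitor(model, iteration):
    # Toy check: pretend performance is acceptable only from iteration 2 on.
    return iteration >= 2  # True means "stop iterating"

problem = define_problem()
iteration = 0
while True:
    data = collect_data(iteration)      # data collection / preparation
    model = train_model(data)           # model development
    if monitor(model, iteration):       # deployment monitoring
        break
    iteration += 1                      # feedback loop: retrain on fresh data

print(iteration)  # number of retraining rounds before monitoring passed
```

The point of the sketch is structural: monitoring feeds back into data collection and training, so a risk introduced at any stage can resurface in every later iteration.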
Where AI Risks Emerge
AI risks usually emerge earlier in the development process, especially in the phases when system objectives are formulated and training data are chosen. The EU AI Act and the UNESCO Recommendation on the Ethics of AI outline the following risks: bias and discrimination, privacy and data security violations, the absence of transparency in automated decision-making, and risks to fundamental rights.

Figure: AI Governance Risk Landscape – core risk categories jointly identified by the EU AI Act and the UNESCO Recommendation on the Ethics of Artificial Intelligence.
Outlining the risks throughout the AI lifecycle helps identify the areas where governance interventions are most necessary. For example, discriminatory outcomes often result from biased or unrepresentative training data, while safety failures are typically linked to inadequate testing before deployment. Risks such as misinformation arise after the development process, when generative AI systems are deployed at scale on digital platforms.

Figure: AI System Lifecycle – key risks at each stage, per the EU AI Act and the UNESCO Recommendation on the Ethics of AI.
Understanding where risks emerge across the lifecycle explains why governance frameworks classify AI systems by risk and apply oversight at multiple stages.
Policy Tools for Mitigating AI Risks
Governments and international organisations have developed regulatory tools to help mitigate AI risks across the lifecycle. These tools are meant to ensure that AI technologies meet standards of safety, accountability, and fairness both before and after deployment.
For example, the OECD AI Policy Observatory recommends that governments adopt policy instruments such as risk evaluations, algorithmic auditing requirements, regulatory sandboxes, and transparency requirements for AI systems. The European Union’s Artificial Intelligence Act (AI Act) is one of the most comprehensive governance frameworks and introduces a risk-oriented regulatory strategy. It mandates adherence to requirements concerning data governance, documentation, human oversight, robustness, and cybersecurity. Such requirements introduce regulatory checkpoints across the lifecycle of AI systems.
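As a minimal sketch of the AI Act’s risk-based approach, the mapping below pairs example use cases with the Act’s four risk tiers. The tier structure follows the Act; the specific use-case assignments and one-line obligation summaries are simplified illustrations, not legal guidance.

```python
# Illustrative mapping of example use cases to the EU AI Act's four risk tiers.
# Tier names follow the Act; the assignments below are simplified examples.
RISK_TIERS = {
    "social scoring by public authorities": "unacceptable",  # prohibited practice
    "cv screening for hiring": "high",       # employment is a high-risk area
    "credit scoring": "high",                # access to essential services
    "customer-service chatbot": "limited",   # transparency obligations apply
    "spam filtering": "minimal",             # no specific obligations
}

def obligations(use_case):
    """Return a one-line summary of obligations for a use case's risk tier."""
    tier = RISK_TIERS.get(use_case, "unknown")
    return {
        "unacceptable": "prohibited from the EU market",
        "high": "conformity assessment, data governance, human oversight, logging",
        "limited": "transparency disclosures to users",
        "minimal": "no mandatory requirements (voluntary codes encouraged)",
    }.get(tier, "classify before deployment")

print(obligations("cv screening for hiring"))
```

The design choice worth noting is that obligations attach to the *use case*, not to the underlying model: the same model could fall under different tiers depending on where it is deployed.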
Mapping these policy tools across the lifecycle illustrates how governance mechanisms can intervene at different stages of AI development.

Figure: Governance Overlay – regulatory tools mapped to each stage of AI development, per the EU AI Act and the UNESCO Recommendation on the Ethics of AI.
Several policy tools target the risks that occur in the pre-deployment stages. For example, algorithmic impact assessments have been applied in various jurisdictions to measure the possible consequences of automated decision systems for society before implementation. Similarly, dataset documentation requirements, including dataset transparency requirements and model cards, aim to enhance accountability during the training and development stages of AI systems. Lifecycle-based policy design therefore allows regulators to intervene before harmful outcomes occur, rather than responding only after AI systems have caused damage in real-world environments.
The Policy Gap in AI Governance
The misalignment between risks and governance tools across the AI lifecycle points to a critical structural gap in existing regulations. Many governance processes are triggered only after AI systems are classified as “high risk” or after they are deployed in the real world. But the most serious sources of harm have their roots in earlier stages of the development process.
For example, prejudiced or unbalanced training data is almost inevitably a source of discriminatory outcomes in automated decision systems. When such models are applied in areas like hiring, credit rating, or the provision of public services, these biases can quickly spread to large populations and undermine democratic rights. Likewise, a lack of transparency in model design can leave regulators and affected individuals unable to understand the decision-making process. This reflects a broader timing gap in AI governance: risks originate during design and development, but regulatory intervention typically occurs only after deployment.
Analysis
1. Key risks originate before deployment: As the lifecycle mapping shows, the data collection and model development phases present more significant governance risks than the deployment phase. Structural issues can be entrenched within AI systems even before they are deployed in practice, owing to bias in datasets, incomplete documentation of training sets, and opaque model designs.
2. Data governance is a primary point of vulnerability: Most of the instances of algorithmic discrimination noted above stem from training data that is unrepresentative of certain population groups or reflects historical bias. Since machine learning models optimise for patterns that exist in their datasets, these biases can be carried through the whole lifecycle and reproduced after deployment.
3. Regulatory approaches remain mismatched across jurisdictions: Different countries adopt varying approaches to AI governance, ranging from risk-based frameworks such as the EU AI Act to more sector-specific or voluntary guidelines in other regions. This divergence creates inconsistencies in safety, accountability, and enforcement standards, allowing risks to persist across borders and potentially undermining the protection of users in globally deployed AI systems.
4. Governance interventions remain uneven across the lifecycle: Whereas various regulatory instruments target deployment and monitoring, fewer instruments systematically tackle the risks posed by the earlier design and development phases.
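Point 2 above, that models reproduce whatever patterns their training data contains, can be demonstrated with a deliberately tiny sketch. The data is synthetic and the “model” is just a per-group rate; both are assumptions made for illustration only.

```python
import random

random.seed(0)

# Synthetic historical hiring data: group B was approved far less often,
# purely by construction. The bias is in the data, not in the learner.
def make_record(group):
    approve_rate = 0.7 if group == "A" else 0.2
    return {"group": group, "hired": random.random() < approve_rate}

train = [make_record("A") for _ in range(500)] + \
        [make_record("B") for _ in range(500)]

def fit(rows):
    """A trivially simple 'model': predict the historical hire rate per group."""
    rates = {}
    for g in ("A", "B"):
        grp = [r for r in rows if r["group"] == g]
        rates[g] = sum(r["hired"] for r in grp) / len(grp)
    return rates

model = fit(train)
print(model)  # the disparity built into the data reappears in the model
```

Nothing in `fit` is malicious or even group-aware in intent; it simply optimises against the data it is given, which is why dataset governance, not just model auditing, is the primary point of intervention.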
Recommendations
1. Introduce mandatory lifecycle risk assessments: Regulatory systems should demand systematic risk evaluation at the beginning of AI development, especially at the problem-design and dataset-selection phases. This would help detect potentially harmful applications in advance, before systems are built and deployed.
2. Strengthen dataset governance standards: Training datasets should be accompanied by documentation of their provenance, composition, and limitations. Standardised dataset documentation frameworks can help regulators and auditors discover potential sources of bias or privacy threats.
3. Expand independent algorithmic auditing: Regular third-party audits can assess AI systems for fairness, robustness, and security weaknesses. Auditing mechanisms are especially relevant for high-risk systems employed in employment, finance, or public services.
4. Integrate continuous monitoring requirements: AI systems should be monitored continuously after deployment to identify model drift, unforeseen consequences, or abuse. Reporting mechanisms can help regulators spot emerging risks and adjust governance systems accordingly.
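As one illustration of the continuous-monitoring idea in point 4, the sketch below flags drift when a live feature’s mean moves far from the mean observed at validation time. The threshold and sample numbers are invented for illustration; production systems typically use richer tests, such as population stability metrics.

```python
from statistics import mean, stdev

def drift_alert(reference, live, threshold=2.0):
    """Flag drift when the live mean shifts more than `threshold`
    reference standard deviations away from the reference mean."""
    shift = abs(mean(live) - mean(reference)) / stdev(reference)
    return shift > threshold

# Reference: feature values observed during validation.
reference = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8, 10.1, 10.4]
stable_live = [10.3, 9.9, 10.6, 10.0]   # distribution essentially unchanged
drifted_live = [14.8, 15.2, 15.0, 14.9]  # mean has shifted markedly

print(drift_alert(reference, stable_live))   # False: no alert
print(drift_alert(reference, drifted_live))  # True: alert regulators/operators
```

A check this simple already captures the governance point: drift detection needs a documented reference distribution from development time, which is another reason dataset documentation (point 2) and monitoring (point 4) reinforce each other.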
Conclusion - The Need for Global AI Governance
Despite growing regulatory attention, global AI governance remains fragmented. Different jurisdictions adopt varying approaches to risk classification, oversight, and enforcement, leading to inconsistencies in safety and accountability standards. Given that AI systems are often developed, deployed, and used across borders, this lack of coordination allows risks to persist beyond national regulatory frameworks.
Addressing these challenges requires a shift towards greater international cooperation and lifecycle-based governance. Developing shared standards, improving cross-border regulatory alignment, and embedding oversight across all stages of AI development will be essential to ensuring that AI systems are safe, transparent, and accountable in a globally interconnected environment.
References
- OECD AI lifecycle
- OECD AI system lifecycle description
- OECD AI governance lifecycle framework
- EU AI Act overview
- EU AI Act risk categories
- UNESCO Recommendation on the Ethics of AI
- AI governance lifecycle analysis
- OECD AI policy tools database