Cyber Operations and Critical Infrastructure Resilience
Barshan Karmakar
Intern - Policy & Advocacy, CyberPeace
PUBLISHED ON
Dec 16, 2025
Introduction
Cyberwarfare has evolved into one of the most decisive instruments of statecraft and conflict. The increasing digitisation of critical infrastructure, including power grids, water systems, transportation networks, healthcare systems, and energy sources, has turned these systems into new targets in a war of algorithms. Military logic now favours paralysing an adversary's critical infrastructure, tying up its resources in repair and recovery and thereby eroding its ability to deter and counter attacks, all without firing a single bullet.
From Ransomware to an Invisible Sabotage: The changing nature of warfare
The operational technology (OT) landscape has become the epicentre of cyber operations around the world. Industrial systems that control turbines, pipelines, and dams were once insulated from external networks; they now connect to the Internet through supervisory control and data acquisition (SCADA) systems and the Internet of Things. While these connections improve the efficiency of a nation's infrastructural lifelines, they have also become gateways for attackers.
Groups like Volt Typhoon, Sandworm, Laurionite, and Cyberavengers have turned digital infiltration into a strategic instrument. Volt Typhoon, which is linked to China, has used "living-off-the-land" techniques, exploiting legitimate administrative tools to remain invisible while scanning critical infrastructure in the US. Sandworm, aligned with Russia's GRU (Glavnoye Razvedyvatelnoye Upravlenie, or Main Intelligence Directorate), has demonstrated the power of cyber sabotage in real time: its attacks on Ukraine's power grid in 2015 and 2016 left hundreds of thousands without electricity, and its 2022 attempt coincided with kinetic missile strikes. Meanwhile, the Iranian-affiliated Cyberavengers group has weaponised AI-assisted malware such as IOCONTROL, which is capable of hijacking water and energy control systems. Each of these operations reflects a shift from direct espionage to the pursuit of strategic paralysis.
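"Living-off-the-land" tradecraft is hard to spot precisely because it uses tools the operating system ships with. A minimal, illustrative detection sketch is shown below; the log format, tool list, and parent-process allowlist are hypothetical examples for this article, not a detection rule from any vendor or agency.

```python
# Illustrative sketch: flagging possible "living-off-the-land" activity by
# spotting native admin tools launched from unusual parent processes.
# Tool and parent lists below are hypothetical examples only.

# Built-in binaries often abused in LOTL tradecraft (examples only)
SUSPECT_TOOLS = {"netsh.exe", "wmic.exe", "ntdsutil.exe", "powershell.exe"}
# Parents from which such tools are commonly launched legitimately
EXPECTED_PARENTS = {"explorer.exe", "cmd.exe", "services.exe"}

def flag_lotl(events):
    """Return events where a suspect tool has an unexpected parent."""
    return [
        e for e in events
        if e["image"].lower() in SUSPECT_TOOLS
        and e["parent"].lower() not in EXPECTED_PARENTS
    ]

events = [
    {"image": "netsh.exe", "parent": "winword.exe"},   # Office spawning netsh: suspicious
    {"image": "powershell.exe", "parent": "cmd.exe"},  # routine admin usage
]
print(flag_lotl(events))  # flags only the winword.exe -> netsh.exe event
```

The point of the sketch is the detection logic, not the lists: because the binaries themselves are legitimate, defenders must alert on anomalous context (who launched what) rather than on the presence of malware.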
Unlike traditional cybercrime, which aims at stealing data and extorting money, these campaigns repeatedly target physical systems: the machinery that sustains civilian life and military preparedness.
The Military Logic behind Cyber Targeting: A Web of Vulnerabilities
Critical infrastructure is a complex ecosystem: power generation, transportation, communication, and manufacturing are all interconnected, which means a single compromised node can cascade into national paralysis. For instance, a breach in a dam's control systems can flood an entire city, and a grid shutdown can halt the water supply to hospitals and even disrupt air traffic. The 2015 BlackEnergy malware attack in Ukraine proved this possibility when three utilities were hacked, plunging thousands of homes into darkness. Iranian hackers also gained access to the control systems of the Bowman Avenue Dam in New York, a chilling demonstration of the destructive potential of digital manipulation.
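The cascade effect described above can be made concrete with a toy dependency graph. The graph below is invented purely for illustration; real interdependency models are far larger and probabilistic.

```python
# Illustrative sketch: how one compromised node cascades through
# interdependent infrastructure. The dependency graph is a toy example.
from collections import deque

# "X depends on Y" edges: if Y fails, X fails too (hypothetical)
DEPENDS_ON = {
    "hospital_water": ["water_pumps"],
    "water_pumps": ["power_grid"],
    "air_traffic": ["power_grid"],
    "power_grid": [],
}

def cascade(failed_node):
    """Return every service knocked out when failed_node goes down."""
    down, queue = {failed_node}, deque([failed_node])
    while queue:
        node = queue.popleft()
        for svc, deps in DEPENDS_ON.items():
            if node in deps and svc not in down:
                down.add(svc)
                queue.append(svc)
    return down

print(sorted(cascade("power_grid")))
# a grid failure also takes down the pumps, hospital water, and air traffic
```

Even this four-node model shows why attackers prize hubs like the power grid: knocking out one upstream node disables every service that transitively depends on it.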
These systems remain vulnerable for three main reasons:
Legacy Architectures: Many of these industrial systems were designed decades ago with no built-in cybersecurity mechanisms.
Slow Patching and Segmentation Gaps: Updates often lag, and segmentation between IT and OT networks is frequently incomplete, leaving open entry points for attackers.
Convergence with IoT: The integration of smart sensors and cloud-based management tools has expanded the attack surface exponentially.
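The segmentation gap in the list above is auditable in principle: any firewall rule that lets an ordinary IT host reach the OT network directly is a hole. The sketch below illustrates such an audit using Python's standard `ipaddress` module; the subnets, jump host, and rule format are hypothetical placeholders.

```python
# Minimal sketch of an IT/OT segmentation audit: flag firewall rules that
# let IT-subnet hosts reach the OT subnet directly. All addresses and the
# rule format are hypothetical, assumed for this example.
import ipaddress

IT_NET = ipaddress.ip_network("10.10.0.0/16")   # corporate IT (assumed)
OT_NET = ipaddress.ip_network("10.99.0.0/16")   # SCADA/OT (assumed)
JUMP_HOST = ipaddress.ip_address("10.10.5.5")   # the only sanctioned path

rules = [  # (source, destination, action)
    ("10.10.5.5", "10.99.1.10", "allow"),   # via the jump host: fine
    ("10.10.7.20", "10.99.1.10", "allow"),  # direct IT -> OT: a gap
]

def segmentation_gaps(rules):
    """Return allow-rules that bypass the sanctioned IT->OT path."""
    gaps = []
    for src, dst, action in rules:
        s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
        if action == "allow" and s in IT_NET and d in OT_NET and s != JUMP_HOST:
            gaps.append((src, dst))
    return gaps

print(segmentation_gaps(rules))  # [('10.10.7.20', '10.99.1.10')]
```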
This interconnected fragility has turned critical infrastructure into both a target and a tool for coercion in modern hybrid warfare. Between 2023 and 2024, over 420 million cyberattacks were recorded against critical infrastructure worldwide, averaging around 13 attacks per second, according to a news report. These were not random acts of digital vandalism; they were deliberate, coordinated operations attributed to state-backed actors from China, Russia, and Iran.
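A quick arithmetic check on the reported rate: roughly 420 million attacks over a year does correspond to about 13 per second, whereas a figure of just 420 would be about one attack every 21 hours, so the "13 per second" average only works at the hundreds-of-millions scale. The assumed reading of the report's headline figure is spelled out in the sketch.

```python
# Sanity-checking the reported rate: ~420 million attacks over one year
# works out to roughly 13 attacks per second.
attacks = 420_000_000               # assumed reading of the reported figure
seconds_per_year = 365 * 24 * 3600  # 31,536,000
rate = attacks / seconds_per_year
print(round(rate, 1))  # ~13.3 attacks per second
```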
Resilience as the New Tool of Deterrence
Cyber deterrence no longer rests on the fear of retaliation; it rests on resilience. Nations that can absorb attacks, maintain continuity, and recover rapidly will be the true superpowers of the digital age. Segmentation, real-time threat detection, and AI-assisted recovery are vital pillars of this model of resilience. The logic of modern cyberwarfare is clear: the more a nation digitises, the more it must defend.
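Of the resilience pillars above, real-time threat detection is the most readily illustrated. The sketch below flags sensor readings that deviate sharply from a rolling baseline; the thresholds and turbine data are invented for illustration, and production systems use far richer models than a simple standard-deviation rule.

```python
# Toy sketch of the "real-time threat detection" pillar: flag OT sensor
# readings that jump far from a rolling baseline. Data and thresholds
# are illustrative only.
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag indices whose reading sits more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    alerts = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts

# Stable turbine RPM trace with one injected spike at index 6
trace = [3000, 3002, 2999, 3001, 3000, 3001, 4500, 3000]
print(detect_anomalies(trace))  # [6]
```

The design point is speed: because the baseline is computed from only the last few readings, an operator can be alerted within one sampling interval of a manipulated setpoint, rather than discovering the sabotage after physical damage.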
However, as the line between war and peace blurs, safeguarding critical infrastructure is no longer just an IT priority; rather, it is a national security doctrine. In this silent theatre of cyberwarfare, survival will depend not only on firepower, but on firewalls.
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in Inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against the same. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the MO is to help the mind in the present develop resistance to influence that it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people to tell fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure they reach a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time. Continuous reinforcement and reminders may be required to ensure that individuals retain the skills and information they gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions remain flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which means it is impossible to protect everyone at all times; at least some netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation after it has already spread widely. From the perspective of total harm done, this reactive method may be less successful than proactive strategies such as Prebunking. Misinformation producers operate swiftly and unpredictably, making it difficult for fact-checkers to keep up with the rapid dissemination of false or misleading information. A single Debunking may not be enough to rectify misinformation; continuous exposure to fact-checks may be needed to prevent erroneous beliefs from taking hold. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment; this constraint may let some misinformation go unchecked, with unexpected effects. Finally, misinformation on social media can go viral faster than Debunking pieces or articles spread, creating a situation in which the falsehood travels like a virus while the antidote struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives, empowering users to recognise manipulative messaging through Prebunking and to see corrections of false claims through Debunking interventions.
Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. These can be effective in immunising receivers against subsequent exposure, empowering people to build the competencies needed to detect misinformation through gamified interventions.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that algorithms prioritise the distribution of Prebunking materials, boosting educational content that strengthens resistance to misinformation. Platforms should also prioritise the visibility of Debunking content to counter the spread of false information and deliver proper corrections. Together, this can help both Prebunking and Debunking methods reach a larger or better-targeted audience.
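One way the algorithmic recommendation above could look in practice is a ranking stage that multiplies engagement scores for labelled educational content. The sketch below is purely hypothetical: the labels, boost multipliers, and post structure are assumptions for illustration, not any platform's actual API or policy.

```python
# Illustrative sketch of a feed-ranking stage that boosts prebunking and
# debunking material. Labels and multipliers are hypothetical assumptions.

BOOSTS = {"prebunking": 1.5, "debunking": 1.3}  # assumed multipliers

def rank_feed(posts):
    """Sort posts by engagement score, boosting educational labels."""
    return sorted(
        posts,
        key=lambda p: p["score"] * BOOSTS.get(p.get("label"), 1.0),
        reverse=True,
    )

posts = [
    {"id": "viral-claim", "score": 90},
    {"id": "inoculation-quiz", "score": 70, "label": "prebunking"},
    {"id": "fact-check", "score": 72, "label": "debunking"},
]
print([p["id"] for p in rank_feed(posts)])
# the boosted educational posts outrank the higher-engagement viral claim
```

The multiplier approach is deliberately simple: it lets educational material compete with, and here overtake, raw engagement without removing any content from the feed.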
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that give people access to Prebunking materials, quizzes, and instructional content to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, along with links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organizations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives/campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint initiatives.
Conclusion
The threat of online misinformation is only growing with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to false or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, backed by joint initiatives between tech/social media platforms and expert organisations, can ultimately help in fighting the rising tide of online misinformation and establishing a resilient online information landscape.
The global food industry is vast and complex, influencing consumer behaviour, policy, and health outcomes worldwide. However, misinformation within this sector is pervasive, with significant consequences for public health and market dynamics. Misinformation can arise from various sources, including misleading marketing campaigns, unsubstantiated health claims, and misrepresentation of food production practices through public endorsement or otherwise. Nutrition misinformation is one such example: the promotion of false or unproven products for profit can mislead consumers and harm their interests. Misleading claims and inaccurate information about the nutritional value of food products and production processes are common. Misinformation about food on the global stage distorts public understanding of nutrition, food safety, and environmental impacts, with significant consequences for public health, consumer trust, and the economy.
Rise of Nutritional Misinformation and Consumer Distrust
Health and nutrition-related misinformation is one of the most prevalent types in the food sector. Businesses frequently advertise their products as "natural" or "healthy" without providing sufficient data to back up these claims, tricking customers into buying goods that might be heavy in fat, sugar, or salt. Words like "superfood" are frequently used without supporting evidence from science, giving the impression that they are healthier.
Misinformation also impacts the sustainability and ethics of food production. Claims of "sustainable" or "ethical" sourcing are frequently exaggerated or fabricated, leaving consumers unaware of the true environmental and social costs associated with certain products.
This lack of clarity is not only observed in general food trends but also within organisations meant to provide trustworthy information. There has been significant criticism directed at the International Food Information Council (IFIC) for its alleged promotion of nutrition-based misinformation to safeguard the interests of large food corporations, potentially compromising public health. In November 2022, a study indexed by the National Institutes of Health, USA, questioned the nutritive claims made by IFIC, reporting that IFIC promotes food and beverage company interests and undermines the accurate dissemination of scientific evidence related to diet and health. The study examined the concern that the nutritional value of certain foods or diets may be presented in ways that favour business goals, leaving consumers misinformed about what constitutes a truly healthy diet.
Another source of misinformation is the growing 'free-from' fad. The "free-from" label in the US denotes a food category of products that claim to be free from certain ingredients or chemicals, and it has been growing steadily by 7% annually. These labels often tout products as healthier due to a simpler ingredient list. Although seemingly harmless, the 'free-from' trend often obscures transparency in ingredient disclosure. This can breed consumer distrust in the long run, making shoppers hesitant to rely on food labels at all.
The Harmful Effects of Food Misinformation
Misinformation about nutrition and food safety can directly affect public health. Consumers may unknowingly accept false claims or avoid certain foods without scientific basis, adopting harmful dietary habits that can potentially lead to malnutrition or other health problems. By the time they realise they have been misled, their trust is eroded, not only in such companies but also in the regulators. This distrust can reduce consumer confidence and disrupt market stability.
Some food-related misinformation downplays the environmental impact that certain food production practices have. An example of such a situation is the promotion of meat alternatives as being entirely eco-friendly without considering all environmental factors. This can mislead consumers and obscure the complex environmental effects of food production systems.
Misinformation can distort consumer purchasing habits, potentially leading to reduced demand for certain products and unfair competition. Small-scale producers suffer disproportionately, while large corporations may exploit this misinformation to maintain their dominance in the market. Regulatory checks, open communication, and public education campaigns are needed to combat mis/disinformation in the global food sector and enable consumers to make decisions that are sustainable, healthful and informed.
CyberPeace Recommendations
Unfair trade practices like providing misleading information or unchecked claims on food products should be better addressed by the regulators. Companies must provide clear, transparent and accurate information about their products as mandated under the Food Safety and Standards (Advertising and Claims) Regulations, 2018. This information should include the true origins, production methods, and nutritional content on their labels.
Initiatives and investments by public health organisations and food authorities towards educating consumers and improving food literacy should be encouraged.
Regulating social media endorsement is also crucial to prevent the spread of misinformation and unchecked claims. Without proper due diligence on product details, influencers may unknowingly mislead their audience, causing potential harm.
Social media platforms can partner with nutritionists, dietitians, and other health professionals who are content creators, as they can help promote accurate, science-based nutrition information and debunk misleading claims.
Campaigns should be encouraged to spread public awareness about the harms of food-related misleading claims or trends. Emphasis should be on evidence-based nutritional guidance. The ongoing research towards food safety, nutrition, and true information should be actively communicated to keep the public informed. Combating food misinformation requires more robust regulations, improved transparency, and heightened consumer awareness and vigilance.
The Information Technology (IT) Ministry has tested a new parental control app called ‘SafeNet’ that is intended to be pre-installed in all mobile phones, laptops and personal computers (PCs). The government's approach shows collaborative efforts by involving cooperation between Internet service providers (ISPs), the Department of School Education, and technology manufacturers to address online safety concerns. Campaigns and the proposed SafeNet application aim to educate parents about available resources for online protection and safeguarding their children.
The Need for SafeNet App
SafeNet is intended as an arsenal of tools, each crafted to empower guardians in the art of digital parenting. It intertwines content filtering with the vigilant monitoring of live locations, casting a protective net over children's vulnerable online experiences. The ability to oversee calls and messages adds another layer of security, akin to a watchful sentinel standing guard over the gates of communication. Some pointers regarding the parental control app that can be taken into consideration are as follows.
1. Easy to use and set up: The app should be useful, intuitive, and easy to use. The interface plays a significant role in achieving this goal. The setup process should be simple enough for parents to access the app without any technical issues. Parents should be able to modify settings and monitor their children's activity with ease.
2. Privacy and data protection: Considering the sensitive nature of children's data, strong privacy and data protection measures are paramount. From the app’s point of view, strict privacy standards include encryption protocols, secure data storage practices, and transparent data handling policies with the right of erasure to protect and safeguard the children's personal information from unauthorized access.
3. Features for Time Management: Effective parental control applications frequently include capabilities for regulating screen time and establishing use limitations. The app will evaluate if the software enables parents to set time limits for certain applications or devices, therefore promoting good digital habits and preventing excessive screen time.
4. Comprehensive Features of SafeNet: The app's commitment to addressing the multifaceted aspects of online safety is reflected in its robust features. It allows parents to set content filters with surgical precision, manage the time their children spend in the digital world, and block content that is deemed age-inappropriate. This reflects a deep understanding of the digital ecosystem's complexities and the varied threats that lurk within its shadows.
5. Adaptable to the needs of the family: In a stroke of ingenuity, SafeNet offers both parent and child versions of the app for shared devices. This adaptability to diverse family dynamics is not just a nod to inclusivity but a strategic move that enhances its usability and effectiveness in real-world scenarios. It acknowledges the unique tapestry of family structures and the need for tools that are as flexible and dynamic as the families they serve.
6. Strong Support From Government: The initiative enjoys a chorus of support from both government and industry stakeholders, a symphony of collaboration that underscores the collective commitment to the cause. Recommendations for the pre-installation of SafeNet on devices by an industry consortium resonate with the directives from the Prime Minister's Office (PMO), creating a harmonious blend of policy and practice. The involvement of major telecommunications players and Internet service providers underscores the industry's recognition of the importance of such initiatives, emphasising a collaborative approach towards deploying digital safeguarding measures at scale.
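The two core controls described in the feature list, content filtering and screen-time management, can be sketched together as a single policy check. The category list, time limits, and request model below are hypothetical assumptions for illustration and do not reflect SafeNet's actual implementation.

```python
# Minimal sketch of the two controls the article describes: content
# filtering and daily screen-time limits. Categories, limits, and the
# session model are hypothetical assumptions, not SafeNet's design.
from dataclasses import dataclass, field

BLOCKED_CATEGORIES = {"gambling", "violence", "adult"}  # assumed policy

@dataclass
class ChildProfile:
    daily_limit_min: int = 120          # screen-time cap set by a parent
    used_min: int = 0
    log: list = field(default_factory=list)

    def allow(self, category: str, duration_min: int) -> bool:
        """Permit a session only if the category is safe and time remains."""
        if category in BLOCKED_CATEGORIES:
            self.log.append(("blocked:content", category))
            return False
        if self.used_min + duration_min > self.daily_limit_min:
            self.log.append(("blocked:time", category))
            return False
        self.used_min += duration_min
        return True

child = ChildProfile(daily_limit_min=60)
print(child.allow("education", 40))  # True  (40 of 60 minutes used)
print(child.allow("gambling", 5))    # False (category blocked)
print(child.allow("games", 30))      # False (would exceed the daily cap)
```

Keeping a local decision log, as sketched here, also supports the transparency concerns raised later in the article: parents and children can review what was blocked and why.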
Recommendations
The efforts by the government to implement parental controls are commendable, as they align with societal goals of child welfare and protection. This includes providing parents with tools to manage and monitor their children's Internet usage to address concerns about inappropriate content and online risks. The following suggestions are made to further support the government's initiative:
1. The administration can consider creating a verification mechanism similar to how identities are verified when mobile SIMs are issued. While this certainly makes for a longer process, it will help address concerns about the app being misused for stalking and surveillance if it is made available to everyone as a default on all digital devices.
2. Parental controls are available on several platforms and are designed to shield, not fetter. Finding the right balance between protection and allowing for creative exploration is thus crucial to ensuring children develop healthy digital habits while fostering their curiosity and learning potential. It might be helpful to the administration to establish updated policies that prioritise the privacy-protection rights of children so that there is a clear mandate on how and to what extent the app is to be used.
3. Policy reforms can be further supported through workshops, informational campaigns, and resources that educate parents and children about the proper use of the app, the concept of informed consent, and the importance of developing healthy, transparent communication between parents and children.
Conclusion
Safety is a significant step towards child protection and development. Children have to rely on adults for protection and cannot always identify or sidestep risk. In this context, the United Nations Convention on the Rights of the Child emphasises protection efforts for children, noting that children have the "right to protection". A parental safety app can therefore contribute significantly to children's general well-being and health, including by helping to prevent exposure to harms such as drug misuse. On the whole, while technological solutions can be helpful, one also needs to focus on educating people on digital safety, responsible Internet use, and parental supervision.