Bharat National Cyber Security Exercise, 2024 - Harmonising Efforts in the Indian Cybersecurity Space
Introduction:
The National Security Council Secretariat, in strategic partnership with the Rashtriya Raksha University, Gujarat, conducted the 12-day Bharat National Cyber Security Exercise 2024 from 18th November to 29th November. The exercise included landmark events such as a CISO (Chief Information Security Officers) Conclave and a Cyber Security Start-up Exhibition, which were inaugurated on 27 November 2024. Other key features included cyber defence training, live-fire simulations, and strategic decision-making simulations. The aim of the exercise was to equip senior government officials and personnel in critical sector organisations with the skills to deal with cybersecurity issues. The event also featured speeches, panel discussions, and initiatives such as the release of the National Cyber Reference Framework (NCRF), which provides a structured approach to cyber governance, and the launch of the National Cyber Range (NCR) 1.0, a cutting-edge facility for cybersecurity research and training.
In his speech, the Deputy National Security Advisor, Shri T.V. Ravichandran (IPS), reiterated the importance of embedding technology in responses to cybersecurity challenges and of shaping India’s cyber strategy proactively. The CISOs of both government and private entities were encouraged to take up multidimensional efforts spanning not only technological upkeep but also soft skills for awareness.
CyberPeace Outlook
The Bharat National Cybersecurity Exercise (Bharat NCX) 2024 underscores India’s commitment to a comprehensive and inclusive approach to strengthening its cybersecurity ecosystem. By fostering collaboration between startups, government bodies, and private organizations, the initiative facilitates dialogue among CISOs and promotes a unified strategy toward cyber resilience. Platforms like Bharat NCX encourage exploration in the Indian entrepreneurial space, enabling startups to innovate and contribute to critical domains like cybersecurity. Developments such as IIT Indore’s intelligent receivers (useful for both telecommunications and military operations) and the Bangalore Metro Rail Corporation Limited’s plans to establish a dedicated Security Operations Centre (SOC) to counter cyber threats are prime examples of technological strides fostering national cyber resilience.
Cybersecurity cannot be understood in isolation: it is an integral aspect of national security, impacting the broader digital infrastructure supporting Digital India initiatives. The exercise emphasises skills training, creating a workforce adept in cyber hygiene, incident response, and resilience-building techniques. Such efforts bolster proficiency across sectors, aligning with the government’s Atmanirbhar Bharat vision. By integrating cybersecurity into workplace technologies and fostering a culture of awareness, Bharat NCX 2024 is a platform that encourages innovation and is a testament to the government’s resolve to fortify India’s digital landscape against evolving threats.
References
- https://ciso.economictimes.indiatimes.com/news/cybercrime-fraud/bharat-cisos-conclave-cybersecurity-startup-exhibition-inaugurated-under-bharat-ncx-2024/115755679
- https://pib.gov.in/PressReleasePage.aspx?PRID=2078093
- https://timesofindia.indiatimes.com/city/indore/iit-indore-unveils-groundbreaking-intelligent-receivers-for-enhanced-6g-and-military-communication-security/articleshow/115265902.cms
- https://www.thehindu.com/news/cities/bangalore/defence-system-to-be-set-up-to-protect-metro-rail-from-cyber-threats/article68841318.ece
- https://rru.ac.in/wp-content/uploads/2021/04/Brochure12-min.pdf

Introduction
Google.org is committed to enhancing Internet safety and promoting responsible online behaviour. ‘Google for INDIA 2023’, an innovative conclave, took place on 19th October 2023, where Google.org set out its vision for a safer Internet and for combating misinformation, financial fraud, and other threats posed by bad actors. Alphabet is committed to leading this charter and engaging with all stakeholders, including government agencies. Google.org has partnered with CyberPeace Foundation to foster a safer online environment and empower users to make informed decisions on the Internet. CyberPeace will run a nationwide awareness and capacity-building initiative equipping more than 40 million Indian netizens with fact-checking techniques, tools, SoPs, and guidance for responsible and safe online behaviour. The campaign will be deployed in 15 Indian regional languages as a comprehensive learning outcome for the whole nation. Together, Google.org and CyberPeace Foundation aim to make the Internet safer for everyone, ensuring that progress for all is built on a strong foundation of trusted information and pursuing the true spirit of “Technology for Good”.
Google.org and CyberPeace together for enhanced online safety
A new $4 million grant to CyberPeace Foundation will support a nationwide awareness-building program and a comprehensive multilingual digital resource hub, with content available in up to 15 Indian languages, to empower nearly 40 million underserved people across the country to build resilience against misinformation and practise responsible online behaviour. Together, Google.org and CyberPeace are on their way to creating a trusted Internet and a safer digital environment. The campaign will run for a duration of three years, with the following key components at its core:
- CyberPeace Corps Volunteers: A pan-India volunteer engagement initiative to create a community of 9 million CyberPeace Ambassadors/First Responders/Volunteers to fight misinformation and promote responsible online behaviour, reaching deep into the rural, marginalised, and most vulnerable strata of society.
- Digital Resource Hub: In pursuance of the campaign, CyberPeace is developing a cutting-edge platform offering a wealth of resources on media literacy, responsible content creation, and cyber hygiene translated into 15 Indian regional languages for a widespread impact on the ground.
- Public Sensitisation: CyberPeace will conduct an organic series of online and offline events focused on empowering netizens to discern fact from fiction. These sensitisation drives will be led by master trainers from different regions of India to ensure all states and UTs are reached.
- CyberPeace Quick Reaction Team: A specialised team of tech enthusiasts that will work closely with platforms to rapidly address new-age cyber threats and misinformation campaigns in real time, and to establish best practices and SoPs for diverse stakeholders across the industry.
- Engaging Multimedia Content: With CyberPeace’s immense expertise in E-Course and digital content, the campaign will produce a range of multilingual multimedia resources, including informative videos, posters, games, contests, infographics, and more.
- Fact-check unit: Fact-check units will play a crucial role in monitoring, identifying, and analysing suspected information, and in debunking the growing incidence of misinformation. Because fake news and misinformation have negative consequences for society at large, fact-check units will be significant in controlling their spread.
Fight Against Misinformation
Misinformation is rampant across the world and demands attention; with the increasing penetration of social media and the internet, it remains a global issue. Google.org has taken up the initiative to address this issue in India and, in collaboration with CyberPeace Foundation, has piloted multiple avenues for mass-scale awareness and upskilling campaigns to make an impact on the ground, with the vision of upskilling over 40 million people in the country, building resilience against misinformation, and promoting responsible online behaviour.
Maj Vineet Kumar, Founder of CyberPeace, said,
"In an era in which digital is deeply intertwined with our lives, knowing how to discern, act on, and share the credible from the wealth of information available online is critical to our well-being, and of our families and communities. Through this initiative, we’re committing to help Internet users across India become informed, empowered and responsible netizens leading through conversations and actions. Whether it’s in fact-checking information before sharing it, or refraining from sharing unverified news, we all play an important role in building a web that is a safe and inclusive space for everyone, and we are extremely grateful to Google.org for propelling us forward in this mission with their grant support.”
Annie Lewin, Senior Director of Global Advocacy and Head of Asia Pacific, Google.org said:
“We have a longstanding commitment to supporting changemakers using technology to solve humanity's biggest challenges. And, the innovation and zeal of Indian nonprofit organisations has inspired us to deepen our commitment in India. With the new grant to CyberPeace Foundation, we are proud to support solutions that speak directly to Google’s DNA, helping first-time internet users chart their path in a digital world with confidence. Such solutions give us pride and hope that each step, built on a strong foundation of trusted information, will translate into progress for all.”
Conclusion
Google.org has partnered with government agencies and other Indian organisations with the vision of future-proofing India’s digital public infrastructure and staying a step ahead on Internet safety, keeping citizens safe online. This is Google.org’s largest step yet towards online safety in India. Misinformative content is widespread in the digital media space and on the internet. This proactive initiative by Google.org in collaboration with CyberPeace is a commendable step towards preventing the spread of misinformation and empowering users to act responsibly when sharing information and to make informed decisions while using the Internet, thereby creating a safe digital environment for everyone.
References:
- https://www.youtube.com/live/-b4lTVjOsXY?feature=shared
- https://blog.google/intl/en-in/products/google-for-india-2023-product-announcements/
- https://blog.google/intl/en-in/partnering-indias-success-in-a-new-digital-paradigm/
- https://telecom.economictimes.indiatimes.com/news/internet/google-to-debut-credit-in-india-announces-a-slew-of-ai-powered-launches/104547623
- https://theprint.in/economy/google-for-india-2023-tech-giant-says-it-removed-2-million-violative-videos-in-q2-2023/1810201/

Introduction
The Ministry of Communications, Department of Telecommunications, notified the Telecommunications (Telecom Cyber Security) Rules, 2024 on 22nd November 2024. The rules were notified to address the vulnerabilities that rapid technological advancements pose; the evolving nature of cyber threats has made it necessary to strengthen and enhance telecom cyber security. The rules empower the central government to seek traffic data and any other data (other than the content of messages) from service providers.
Background Context
The Telecommunications Act, 2023 was passed by Parliament in December 2023, receiving the President's assent and being published in the official Gazette on December 24, 2023. The Act is divided into 11 chapters, 62 sections, and 3 schedules. It repeals the earlier legislation, viz. the Indian Telegraph Act, 1885 and the Indian Wireless Telegraphy Act, 1933. The government has enforced the Act in phases: Sections 1, 2, 10-30, 42-44, 46, 47, 50-58, 61, and 62 came into force on June 26, 2024, while Sections 6-8, 48, and 59(b) were notified to be effective from July 5, 2024.
These rules have been notified under the powers granted by Section 22(1) and Section 56(2)(v) of the Telecommunications Act, 2023.
Key Provisions of the Rules
These rules collectively aim to reinforce telecom cyber security and ensure the reliability of telecommunication networks and services. They are as follows:
● Power to Seek Traffic and Other Data:
The Central Government, or an agency authorised by it, may request traffic or other data from a telecommunication entity through the Central Government portal to safeguard and ensure telecom cyber security. In addition, the Central Govt. can direct telecommunication entities to establish the necessary infrastructure and equipment for data collection, processing, and storage from designated points.
● Obligations Relating To Telecom Cybersecurity:
Telecom entities must adhere to various obligations to prevent cyber security risks. Telecommunication cyber security must not be endangered, and no one is allowed to send messages that could harm it. Misuse of telecommunication equipment such as identifiers, networks, or services is prohibited. Telecommunication entities are also required to comply with directions and standards issued by the Central Govt. and furnish detailed reports of actions taken on the government portal.
● Compulsory Measures To Be Taken By Every Telecommunication Entity:
Telecom entities must adopt a telecom cyber security policy to enhance cybersecurity and notify the Central Govt. of it. They must identify and mitigate risks of security incidents, ensure timely responses, and take appropriate measures to address such incidents and minimise their impact. Periodic telecom cyber security audits must be conducted to assess network resilience against potential threats. Entities must also report security incidents promptly to the Central Govt. and establish facilities such as a Security Operations Centre.
● Reporting of Security Incidents:
- Telecommunication entities must report the detection of security incidents affecting their network or services within six hours.
- 24 hours are provided for submitting detailed information about the incident, including the number of affected users, the duration, geographical scope, the impact on services, and the remedial measures implemented.
The Central Govt. may require the affected entity to provide further information, such as its cyber security policy, or conduct a security audit.
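The two reporting deadlines described above can be sketched as a small helper that computes them from the time an incident is detected. This is an illustrative sketch only; the function name and structure are our own and not part of the Rules:

```python
from datetime import datetime, timedelta

def reporting_deadlines(detected_at: datetime) -> dict:
    """Deadlines under the Telecom Cyber Security Rules, 2024:
    an initial report within 6 hours of detecting the incident, and a
    detailed report (affected users, duration, geographical scope,
    impact, remedial measures) within 24 hours."""
    return {
        "initial_report_by": detected_at + timedelta(hours=6),
        "detailed_report_by": detected_at + timedelta(hours=24),
    }

# Example: an incident detected at 10:00 on 22 Nov 2024.
deadlines = reporting_deadlines(datetime(2024, 11, 22, 10, 0))
print(deadlines["initial_report_by"])   # 2024-11-22 16:00:00
print(deadlines["detailed_report_by"])  # 2024-11-23 10:00:00
```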
CyberPeace Policy Analysis
The notified rules reflect critical updates from their draft version, including the obligation to report incidents immediately upon awareness. This ensures greater privacy for consumers while still enabling robust cybersecurity oversight. Importantly, individuals whose telecom identifiers are suspended or disconnected due to security concerns must be given a copy of the order and a chance to appeal, ensuring procedural fairness. The notified rules have, however, removed the definitions of "traffic data" and "message content", which may lead to operational ambiguities. While the rules establish a solid foundation for protecting telecom networks, they pose significant compliance challenges, particularly for smaller operators who may struggle with the costs associated with audits, infrastructure, and reporting requirements.
Conclusion
The Telecom Cyber Security Rules, 2024 represent a comprehensive approach to securing India’s communication networks against cyber threats. By mandating robust cybersecurity policies, rapid incident reporting, and procedural safeguards, the rules balance national security with privacy and fairness. However, addressing implementation challenges through stakeholder collaboration and detailed guidelines will be key to ensuring compliance without overburdening telecom operators. With adaptive execution, these rules can enhance the resilience of India’s telecom sector and position the country as a global leader in digital security standards.
References
● Telecommunications Act, 2023 https://acrobat.adobe.com/id/urn:aaid:sc:AP:767484b8-4d05-40b3-9c3d-30c5642c3bac
● CyberPeace First Read of the Telecommunications Act, 2023 https://www.cyberpeace.org/resources/blogs/the-government-enforces-key-sections-of-the-telecommunication-act-2023
● Telecommunications (Telecom Cyber Security) Rules, 2024

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI), in development for two years, sets harmonised rules across the EU and amends key regulations and directives to ensure a robust framework for AI technologies. It entered into force 20 days after publication, on 1 August 2024, across all 27 EU Member States, with enforcement of the majority of its provisions commencing on 2 August 2026. The law prohibits certain uses of AI that threaten citizens' rights: biometric categorisation, untargeted scraping of facial images, emotion-recognition systems in the workplace and schools, and social scoring systems are banned, and the use of predictive policing tools is prohibited in some instances. The law takes a phased approach to implementing the EU's AI rulebook, with various deadlines between now and then as different legal provisions start to apply.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
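The "whichever is higher" penalty ceiling above reduces to a simple maximum. The sketch below illustrates only the statutory maxima for the most serious violations; actual fines in any case are set by regulators:

```python
def max_eu_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
    """Statutory maximum fine for the most serious violations of the
    EU AI Act: EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 1 billion turnover: 7% (EUR 70 million) exceeds the flat cap.
print(max_eu_ai_act_fine(1_000_000_000))  # 70000000.0
# A smaller company: the EUR 35 million figure is the higher of the two.
print(max_eu_ai_act_fine(100_000_000))    # 35000000.0
```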
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk AI, such as social scoring systems and manipulative AI, and mostly addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: under the Act, developers and deployers must ensure that end-users are aware that they are interacting with AI, such as chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which includes the majority of AI applications currently available in the EU single market, like AI-enabled video games and spam filters, though this may change with the advancement of generative AI. The majority of obligations fall on providers (developers) that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country, and on third-country providers where the high-risk AI system’s output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-licence GPAI models only need to comply with the Copyright Directive and publish the training-data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but not necessarily be limited to, the obligations above, particularly the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for specific challenges in addressing risks as they emerge and materialise throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
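The four-tier structure running through the points above can be summarised as a simple lookup. The tier names follow the Act; the one-line obligation summaries are our own paraphrase, not statutory language:

```python
# Risk tiers under the EU AI Act, with a paraphrased summary of the
# regulatory treatment each tier receives (illustrative only).
RISK_TIERS = {
    "unacceptable": "Prohibited (e.g. social scoring systems, manipulative AI)",
    "high": "Allowed with strict obligations (data quality, documentation, audits)",
    "limited": "Lighter transparency obligations (e.g. disclosing chatbots, deepfakes)",
    "minimal": "Largely unregulated (e.g. spam filters, AI-enabled video games)",
}

def treatment(tier: str) -> str:
    """Return the paraphrased regulatory treatment for a risk tier."""
    return RISK_TIERS[tier.lower()]

print(treatment("Unacceptable"))  # Prohibited (e.g. social scoring systems, manipulative AI)
```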
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems that need to comply with transparency requirements 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1st, 2024: The AI Act enters into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; the prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for GPAI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
The AI Act sets out clear definitions for the different actors involved in AI: providers, deployers, importers, distributors, and product manufacturers. All parties involved in the development, usage, import, distribution, or manufacturing of AI systems can thus be held accountable. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organisations using those systems).
In short, the AI Act applies to companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The Act also has extraterritorial application: it can apply to companies not established in the EU, e.g., in Switzerland, if they make an AI system or GPAI model available on the EU market, or even if only the output generated by the AI system is used in the EU.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models and adopts a risk-based approach to AI governance, categorising potential risks into four tiers: unacceptable, high, limited, and low, with stiff penalties for non-compliance. Violations involving banned systems carry the highest fine: €35 million, or 7 per cent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation can be seen as strengthening the EU's position as a global leader in AI innovation and developing regulatory frameworks for emerging technologies. It sets a global benchmark for regulating AI. The companies to which the act applies will need to make sure their practices align with the same. The act may inspire other nations to develop their own legislation contributing to global AI governance. The world of AI is complex and challenging, the implementation of regulatory checks, and compliance by the concerned companies, all pose a conundrum. However, in the end, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide