Policy & Advocacy Team
Introduction
The Ministry of Communications, Department of Telecommunications notified the Telecommunications (Telecom Cyber Security) Rules, 2024 on 22nd November 2024. The rules were notified to address the vulnerabilities that rapid technological advancement and the evolving nature of cyber threats pose, and to strengthen and enhance telecom cyber security. They empower the Central Government to seek traffic data and any other data (other than the content of messages) from service providers.
Background Context
The Telecommunications Act, 2023 was passed by Parliament in December 2023, received the President's assent, and was published in the official Gazette on December 24, 2023. The Act is divided into 11 chapters, 62 sections, and 3 schedules. It repeals the earlier legislation, viz. the Indian Telegraph Act, 1885 and the Indian Wireless Telegraphy Act, 1933. The government has enforced the Act in phases: Sections 1, 2, 10-30, 42-44, 46, 47, 50-58, 61, and 62 came into force on June 26, 2024, while Sections 6-8, 48, and 59(b) were notified to be effective from July 05, 2024.
These rules have been notified under the powers granted by Section 22(1) and Section 56(2)(v) of the Telecommunications Act, 2023.
Key Provisions of the Rules
These rules collectively aim to reinforce telecom cyber security and ensure the reliability of telecommunication networks and services. They are as follows:
The Central Government, or an agency authorized by it, may request traffic or other data from a telecommunication entity through the Central Government portal to safeguard and ensure telecom cyber security. In addition, the Central Government can instruct telecommunication entities to establish the necessary infrastructure and equipment for the collection, processing, and storage of such data from designated points.
● Obligations Relating To Telecom Cybersecurity:
Telecom entities must adhere to various obligations to prevent cyber security risks. Telecommunication cyber security must not be endangered, and no one is allowed to send messages that could harm it. Misuse of telecommunication identifiers, equipment, networks, or services is prohibited. Telecommunication entities are also required to comply with directions and standards issued by the Central Government and furnish detailed reports of actions taken on the government portal.
● Compulsory Measures To Be Taken By Every Telecommunication Entity:
Telecom entities must adopt and notify the Central Government of a telecom cyber security policy to enhance cybersecurity. They have to identify and mitigate risks of security incidents, ensure timely responses, and take appropriate measures to address such incidents and minimise their impact. Periodic telecom cyber security audits must be conducted to assess the resilience of their networks against potential threats. They must report security incidents promptly to the Central Government and establish facilities such as a Security Operations Centre.
● Reporting of Security Incidents:
- Telecommunication entities must report the detection of security incidents affecting their network or services within six hours.
- A further 24 hours are provided for submitting detailed information about the incident, including the number of affected users, the duration, the geographical scope, the impact on services, and the remedial measures implemented.
The Central Govt. may require the affected entity to provide further information, such as its cyber security policy, or conduct a security audit.
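To make the two reporting windows concrete, here is a minimal sketch in Python of an incident record that computes both deadlines from the detection time. The field names and structure are illustrative assumptions, not the schema of the official government portal.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Windows prescribed by the rules: 6 hours for the initial report,
# 24 hours for the detailed follow-up.
DETECTION_REPORT_WINDOW = timedelta(hours=6)
DETAILED_REPORT_WINDOW = timedelta(hours=24)

@dataclass
class SecurityIncident:
    detected_at: datetime
    # Details required in the 24-hour report (illustrative field names):
    affected_users: int = 0
    duration_hours: float = 0.0
    geographical_scope: str = ""
    service_impact: str = ""
    remedial_measures: list = field(default_factory=list)

    def initial_report_deadline(self) -> datetime:
        return self.detected_at + DETECTION_REPORT_WINDOW

    def detailed_report_deadline(self) -> datetime:
        return self.detected_at + DETAILED_REPORT_WINDOW

incident = SecurityIncident(detected_at=datetime(2024, 11, 22, 9, 0))
print(incident.initial_report_deadline())   # 2024-11-22 15:00:00
print(incident.detailed_report_deadline())  # 2024-11-23 09:00:00
```

In practice an entity's incident-management tooling would also track when each report was actually filed against these deadlines.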
CyberPeace Policy Analysis
The notified rules reflect critical updates from their draft version, including the obligation to report incidents promptly upon awareness. This enables robust cybersecurity oversight while preserving consumer privacy. Importantly, individuals whose telecom identifiers are suspended or disconnected due to security concerns must be given a copy of the order and a chance to appeal, ensuring procedural fairness. However, the notified rules have removed the definitions of "traffic data" and "message content", which may lead to operational ambiguities. While the rules establish a solid foundation for protecting telecom networks, they pose significant compliance challenges, particularly for smaller operators who may struggle with the costs associated with audits, infrastructure, and reporting requirements.
Conclusion
The Telecom Cyber Security Rules, 2024 represent a comprehensive approach to securing India’s communication networks against cyber threats. Mandating robust cybersecurity policies, rapid incident reporting, and procedural safeguards allows the rules to balance national security with privacy and fairness. However, addressing implementation challenges through stakeholder collaboration and detailed guidelines will be key to ensuring compliance without overburdening telecom operators. With adaptive execution, these rules have the potential to enhance the resilience of India’s telecom sector and also position the country as a global leader in digital security standards.
References
● Telecommunications Act, 2023 https://acrobat.adobe.com/id/urn:aaid:sc:AP:767484b8-4d05-40b3-9c3d-30c5642c3bac
● CyberPeace First Read of the Telecommunications Act, 2023 https://www.cyberpeace.org/resources/blogs/the-government-enforces-key-sections-of-the-telecommunication-act-2023
● Telecommunications (Telecom Cyber Security) Rules, 2024
Digitisation in Agriculture
The traditional way of doing agriculture has undergone massive digitization in recent years, whereby several agricultural processes have been linked to the Internet. This globally prevalent transformation, driven by smart technology, encompasses the use of sensors, IoT devices, and data analytics to optimize and automate labour-intensive farming practices. Smart farmers in the country and abroad now leverage real-time data to monitor soil conditions, weather patterns, and crop health, enabling precise resource management and improved yields. The integration of smart technology in agriculture not only enhances productivity but also promotes sustainable practices by reducing waste and conserving resources. As a result, the agricultural sector is becoming more efficient, resilient, and capable of meeting the growing global demand for food.
Digitisation of Food Supply Chains
There has also been an increase in the digitisation of food supply chains across the globe, since it enables both suppliers and consumers to keep track of each stage of food processing from farm to table and ensures the authenticity of the food product. The latest generation of agricultural robots is being tested to minimise human intervention. It is thought that AI-run processes can mitigate labour shortages, improve warehousing and storage, and make transportation more efficient by running continuous evaluations and adjusting conditions in real time while increasing yield. The company Muddy Machines is currently trialling an autonomous asparagus-harvesting robot called Sprout that not only addresses labour shortages but also selectively harvests green asparagus, which traditionally requires careful picking. However, Chris Chavasse, co-founder of Muddy Machines, highlights that hackers and malicious actors could potentially hack into the robot's servers and prevent it from operating, or drive it into a ditch or a hedge, thereby impeding core crop activities like seeding and harvesting. Hacking agricultural machinery also means damaging a farmer's produce and, in turn, profitability for the season.
Case Study: Muddy Machines and Cybersecurity Risks
A cyber attack on digitised agricultural processes has a cascading impact on online food supply chains. The risks are non-exhaustive and spill over into poor protection of cargo in transit, increased manufacturing of counterfeit products, manipulation of data, poor warehousing facilities, and product-specific fraud, amongst others. Suppliers are also affected when they deliver food products but fail to receive payment. These cyber threats may include malware (primarily ransomware), which accounts for 38% of attacks, Internet of Things (IoT) attacks, which comprise 29%, Distributed Denial of Service (DDoS) attacks, SQL injections, phishing attacks, etc.
Prominent Cyber Attacks and Their Impacts
Ransomware attacks are the most common form of cyber threat to food supply chains and may include malicious contamination and deliberate damage to or destruction of tangible assets (like infrastructure) or intangible assets (like reputation and brand). In 2017, the NotPetya malware disrupted the world's largest logistics giant, Maersk, and destroyed end-user devices in more than 60 countries. Notably, NotPetya was also linked to the malfunction of freezers connected to control systems: the attack compromised these control systems, resulting in freezer failures and potential spoilage of food, highlighting the vulnerability of industrial control systems to cyber threats.
Further Case Studies
NotPetya also impacted Mondelez, the maker of Oreo, disrupting its email systems, file access, and logistics for weeks. Mondelez's insurance claim was denied because the NotPetya malware was described as a "war-like" action, falling outside the purview of the insurance coverage. In April 2021, over the Easter weekend, Bakker Logistiek, a logistics company based in the Netherlands that offers air-conditioned warehousing and food transportation for Dutch supermarkets, experienced a ransomware attack. The incident disrupted its supply chain for several days, resulting in empty shelves at Albert Heijn supermarkets, particularly for products such as packed and grated cheese. Despite the severity of the attack, the company restored its operations within a week by utilising backups. JBS, one of the world's biggest meat processing companies, had to pay $11 million in ransom via Bitcoin to resolve a cyber attack in the same year, in which computer networks at JBS were hacked, temporarily shutting down operations and endangering consumer data. The disruption threatened food supplies and risked higher food prices for consumers. Additional cascading impacts include reduced food security and hindrances in processing payments at retail stores.
Credible Threat Agents and Their Targets
Any cyber attack is usually carried out by credible threat agents, classified as either internal or external. Internal threat agents may include contractors, visitors to business sites, former or current employees, and individuals who work for suppliers. External threat agents may include activists, cyber criminals, terror cells, etc. These threat agents target large organisations owing to their larger ransom-paying capacity, but may also target small companies due to their vulnerability and limited experience, especially when such companies are migrating from analogue methods to digitised processes.
The Federal Bureau of Investigation warns that food and agricultural systems are most vulnerable to cyber security threats during critical planting and harvesting seasons. It noted an increase in cyber attacks against six agricultural cooperatives in 2021, with core ancillary functions such as food supply and distribution being impacted. As a result, cyber attacks may lead to mass shortages of food meant not only for human consumption but also for animals.
Policy Recommendations
To safeguard digital food supply chains, food defence emerges as one of the top countermeasures to prevent and mitigate the effects of intentional incidents and threats to the food chain. While food defence vulnerability assessments have traditionally focused on product adulteration and food fraud, extending them to agricultural technology is now all the more relevant.
Food supply organisations must prioritise regular backups of data using air-gapped and password-protected offline copies, and ensure critical data copies cannot be modified or deleted from the main system. For this, blockchain-based food supply chain solutions may be deployed; they are resilient to tampering and also allow suppliers and even consumers to track produce. Companies like Ripe.io, Walmart Global Tech, Nestle and Wholechain deploy blockchain for food supply management since it provides overall process transparency, improves trust in transactions, enables traceable and tamper-resistant records, and allows accessibility and visibility of data provenance. Extensive recovery plans, with multiple copies of essential data and servers in secure, physically separated locations such as hard drives, storage devices, the cloud, or distributed ledgers, should be adopted, in addition to operations plans for critical functions in case of system outages. For core processes that are not labour-intensive, manual operation methods may be retained to reduce digital dependence. Network segmentation and regular updates or patches for operating systems, software, and firmware are additional steps that can be taken to secure smart agricultural technologies.
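The tamper-resistance attributed above to blockchain-based supply chain records rests on a simple mechanism: each entry stores a hash of its predecessor, so altering any past record invalidates every later hash. The following Python sketch illustrates only that property; it is a toy hash chain, not any of the named companies' systems.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Deterministic hash of an entry's contents (sort_keys for stable JSON).
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, data: dict) -> None:
    # Each new entry commits to the hash of the previous one.
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"data": data, "prev_hash": prev}
    entry["hash"] = entry_hash(entry)
    chain.append(entry)

def verify(chain: list) -> bool:
    # Recompute every hash and check the links; any edit breaks the chain.
    prev = "0" * 64
    for e in chain:
        expected = entry_hash({"data": e["data"], "prev_hash": e["prev_hash"]})
        if e["prev_hash"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

ledger = []
append_record(ledger, {"stage": "harvest", "lot": "A1"})
append_record(ledger, {"stage": "cold storage", "lot": "A1"})
assert verify(ledger)
ledger[0]["data"]["lot"] = "B9"   # tampering with an earlier record...
assert not verify(ledger)         # ...is detected on verification
```

Production systems add distributed consensus and signatures on top of this chaining, which is what makes the records hard to falsify even by an insider.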
References
- Muddy Machines website, Accessed 26 July 2024. https://www.muddymachines.com/
- “Meat giant JBS pays $11m in ransom to resolve cyber-attack”, BBC, 10 June 2021. https://www.bbc.com/news/business-57423008
- Marshall, Claire & Prior, Malcolm, “Cyber security: Global food supply chain at risk from malicious hackers.”, BBC, 20 May 2022. https://www.bbc.com/news/science-environment-61336659
- “Ransomware Attacks on Agricultural Cooperatives Potentially Timed to Critical Seasons.”, Private Industry Notification, Federal Bureau of Investigation, 20 April 2022. https://www.ic3.gov/Media/News/2022/220420-2.pdf
- Manning, Louise & Kowalska, Aleksandra. (2023). “The threat of ransomware in the food supply chain: a challenge for food defence”, Trends in Organized Crime. https://doi.org/10.1007/s12117-023-09516-y
- “NotPetya: the cyberattack that shook the world”, Economic Times, 5 March 2022. https://economictimes.indiatimes.com/tech/newsletters/ettech-unwrapped/notpetya-the-cyberattack-that-shook-the-world/articleshow/89997076.cms?from=mdr
- Abrams, Lawrence, “Dutch supermarkets run out of cheese after ransomware attack.”, Bleeping Computer, 12 April 2021. https://www.bleepingcomputer.com/news/security/dutch-supermarkets-run-out-of-cheese-after-ransomware-attack/
- Pandey, Shipra; Gunasekaran, Angappa; Kumar Singh, Rajesh & Kaushik, Anjali, “Cyber security risks in globalised supply chains: conceptual framework”, Journal of Global Operations and Strategic Sourcing, January 2020. https://www.researchgate.net/profile/Shipra-Pandey/publication/338668641_Cyber_security_risks_in_globalized_supply_chains_conceptual_framework/links/5e2678ae92851c89c9b5ac66/Cyber-security-risks-in-globalized-supply-chains-conceptual-framework.pdf
- Daley, Sam, “Blockchain for Food: 10 examples to know”, Built In, 22 March 2023. https://builtin.com/blockchain/food-safety-supply-chain
Introduction
Social media has emerged as a leading source of communication and information; its relevance cannot be ignored during natural disasters since it is relied upon by governments and disaster relief organisations as a tool for disseminating aid and relief-related resources and communications instantly. During disaster times, social media has emerged as a primary source for affected populations to access information on relief resources; community forums offering aid resources and official government channels for government aid have enabled efficient and timely administration of relief initiatives.
However, given the nature of social media, misinformation during natural disasters has also emerged as a primary concern, one that severely hampers aid administration. Disaster-disinformation networks run sensationalised campaigns against communities at their most vulnerable. Victims seeking reliable resources during natural calamities may instead encounter such campaigns and experience delayed access, or no access, to necessary healthcare, significantly impacting their recovery and survival. This delay can lead to worsening medical conditions and an increased death toll among those affected by the disaster. Victims may also lack clear information on the appropriate agencies to seek assistance from, causing confusion and delays in receiving help.
Misinformation Threat Landscape during Natural Disaster
During the 2018 floods in Kerala, a fake video about water leakage from the Mullaperyar Dam created panic among citizens and negatively impacted rescue operations. Similarly, in 2017, reports emerged claiming that Hurricane Irma had displaced sharks onto a Florida highway; similar stories, accompanied by the same image, resurfaced following Hurricanes Harvey and Florence. At a broader level, a disaster-affected nation may face international criticism and fail to receive necessary support due to its perceived inability to manage the crisis effectively. This lack of confidence from the global community can further exacerbate the challenges faced by the nation, leaving it more vulnerable and isolated in its time of need.
The spread of misinformation through social media severely hinders the administration of aid and relief operations during natural disasters. It complicates first responders' efforts to counteract and reduce the spread of rumours and false information, and it erodes public trust in government, media, and non-governmental organisations (NGOs), who are often the first point of contact for both victims and officials due to their familiarity with the region and the community. In Moldova, foreign influence has exploited the ongoing drought to create divisions between the semi-autonomous regions of Transnistria and Gagauzia and the central government in Chisinau; news coverage critical of the government leverages economic and energy insecurities to incite civil unrest in this already unstable region. Additionally, first responders may struggle to locate victims and assist them to safety, complicating rescue operations. The inability to efficiently find and evacuate those in need can result in prolonged exposure to dangerous conditions and a higher risk of injury or death.
Further, international aid from other countries could be impeded, affecting the overall relief effort. Without timely and coordinated support from the global community, the disaster response may be insufficient, leaving many needs unmet. Misinformation also impedes military assistance, reducing the effectiveness of rescue and relief operations. Military assistance often plays a crucial role in disaster response, and any delays can hinder efforts to provide immediate and large-scale aid.
Misinformation also creates problems in the allocation of relief resources, directing them to unaffected areas and thereby impacting aid processes for regions in actual need. Following the April 2015 earthquake in Nepal, a Facebook post claimed that 300 houses in Dhading needed aid. Shared over 1,000 times, it reached around 350,000 people within 48 hours. The originator aimed to seek help for Ward #4's villagers via social media; given that the average Facebook user has 350 contacts, the message was widely viewed. However, the need had already been reported a week earlier on quakemap.org, a crisis-mapping database managed by Kathmandu Living Labs. Helping Hands, a humanitarian group, was notified on May 7, and by May 11, Ward #4 received essential food and shelter. The re-sharing and sensationalisation of outdated information could have wasted relief efforts, since critical resources would have been redirected to a region that had already been secured.
Policy Recommendations
Perhaps the most important step in combating misinformation during natural disasters is increased public education and the rapid, widespread dissemination of early warnings. This was best witnessed in Bangladesh: in November 1970, a tropical cyclone combined with a high tide struck the country's southeast, leaving more than 300,000 people dead and 1.3 million homeless. In May 1985, when a comparable cyclone and storm surge hit the same area, local dissemination of disaster warnings was much improved and people were better prepared to respond. The loss of life, while still high at about 10,000, was about 3% of that in 1970. Similarly, when a devastating cyclone struck the same area of Bangladesh in May 1994, fewer than 1,000 people died. In India, the 1977 cyclone in Andhra Pradesh killed 10,000 people, but a similar storm in the same area 13 years later killed only 910. The dramatic difference in mortality was owed to a new early-warning system connected with radio stations to alert people in low-lying areas.
Additionally, location-based filtering for monitoring social media during disasters is considered another best practice to curb misinformation. However, agencies should be aware that this method may miss local information from devices without geolocation enabled: a 2012 Georgia Tech study found that less than 1.4 percent of Twitter content is geolocated, and a study by Humanity Road and Arizona State University on Hurricane Sandy data indicated a significant decline in geolocation data during weather events.
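The coverage limitation above is easy to see in code: filtering on a geotag silently drops every post whose device did not attach coordinates. The sketch below uses hypothetical post records; only the presence or absence of a `geo` field matters.

```python
# Hypothetical social media posts from a flood zone; 'geo' is None when the
# posting device had geolocation disabled (the common case per the studies cited).
posts = [
    {"text": "Water rising near the bridge", "geo": (9.93, 76.26)},
    {"text": "Shelter open at the school",   "geo": None},
    {"text": "Road to town blocked",         "geo": None},
]

# Location-based filtering keeps only geotagged posts...
geolocated = [p for p in posts if p["geo"] is not None]

# ...so the two locally relevant but untagged reports above are lost.
coverage = len(geolocated) / len(posts)
print(f"geotagged share: {coverage:.0%}")
```

This is why agencies typically combine geotag filters with keyword and place-name matching rather than relying on coordinates alone.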
Alternatively, agencies should publish frequent updates to promote transparency and control the message. In emergency management and disaster recovery, digital volunteers, trusted agents who provide online support, can assist overwhelmed on-site personnel by managing the vast volume of social media data. Trained digital volunteers help direct affected individuals to critical resources and disseminate reliable information.
Enhancing the quality of communication requires double-verifying information to eliminate ambiguity and reduce the impact of misinformation, rumours, and false information. This approach helps prevent alert fatigue and "cry wolf" scenarios by ensuring that only accurate, relevant information is disseminated. Prioritising ground truth over assumptions and swiftly releasing verified information, or acknowledging the situation, can bolster an agency's credibility, allowing it to collaborate effectively with truth amplifiers. Prebunking and debunking methods are also effective ways to counter misinformation and build cognitive defences that help audiences recognise red flags. Additionally, evaluating the relevance of various social media information is crucial for maintaining clear and effective communication.
References
- https://www.nature.com/articles/s41598-023-40399-9#:~:text=Moreover%2C%20misinformation%20can%20create%20unnecessary,impacting%20the%20rescue%20operations29.
- https://www.redcross.ca/blog/2023/5/why-misinformation-is-dangerous-especially-during-disasters
- https://www.soas.ac.uk/about/blog/disinformation-during-natural-disasters-emerging-vulnerability
- https://www.dhs.gov/sites/default/files/publications/SMWG_Countering-False-Info-Social-Media-Disasters-Emergencies_Mar2018-508.pdf
Introduction
The Digital Personal Data Protection (DPDP) Act, 2023 introduces a framework for the protection of personal data in India. Data fiduciaries are the entities that determine the purpose and means of processing personal data, and small-scale industries also fall within the ambit of the term. Startups, small companies, and Micro, Small, and Medium Enterprises (MSMEs) that determine the purpose of processing personal data in the capacity of a 'data fiduciary' are likewise required to comply with the DPDP Act's provisions. The obligations set for data fiduciaries apply to them uniformly, though compliance can be challenging due to resource constraints and limited expertise in data protection.
The DPDP Act is the nation's first standalone law on data protection and privacy, setting forth strict rules on how data fiduciaries can collect and process personal data, with a focus on consent-based mechanisms and personal data protection. Section 17(3) of the Act empowers the Central Government to exempt startups from certain obligations under the Act, taking into account the volume and nature of personal data processed, and small-scale industries have been given more time to comply. The detailed provisions are to be notified through further rulemaking, the 'DPDP Rules'.
Obligations on Data Fiduciary under the DPDP Act, 2023
The DPDP Act focuses on processing digital personal data in a manner that recognizes both the right of individuals to protect their personal data and the need to process such personal data for lawful purposes and for matters connected therewith or incidental thereto. Hence, small-scale industries also need to comply with provisions aimed at protecting digital personal data.
The key requirements to be considered:
- Data Processing Principles: Ensure that data processing is done lawfully, fairly, and transparently; that personal data is collected and processed only for specific, clear, and legitimate purposes; and that only the data necessary for the stated purpose is collected. The data must be accurate and kept up to date, must not be retained longer than necessary, and must be protected by appropriate security measures.
- Consent Management: Clear and informed consent should be obtained from individuals before collecting their personal data. Further, individuals have the option to withdraw their consent easily.
- Rights of Data Principals: Data principals (the individuals whose data is collected) have the right to information, the right to correction and erasure of data, the right to grievance redressal, and the right to nominate. Data fiduciaries must put in place mechanisms to handle requests from data principals regarding these rights.
- Data Breach Notifications: Data fiduciaries are required to notify the data protection board and the affected individuals in case a data breach has occurred.
- Appropriate Technical and Organisational Measures: A data fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of the Act and the rules made thereunder.
- Cross-border Data Transfers: Compliance with regulations on the transfer of personal data outside India should be ensured.
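The consent-management obligation above can be made concrete with a minimal sketch: a consent record tied to a specific purpose, with easy withdrawal, and a processing check that enforces purpose limitation. Field and function names are illustrative assumptions, not terms prescribed by the DPDP Act or the draft DPDP Rules.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    # One record per data principal and per stated purpose (assumed design).
    principal_id: str
    purpose: str
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        # Withdrawal should be as easy as granting consent.
        self.withdrawn_at = datetime.now()

def may_process(record: ConsentRecord, purpose: str) -> bool:
    # Purpose limitation: process only under active consent, and only
    # for the specific purpose the principal consented to.
    return record.is_active() and record.purpose == purpose

c = ConsentRecord("user-42", "order delivery", datetime(2024, 1, 5))
assert may_process(c, "order delivery")
assert not may_process(c, "marketing")   # different purpose: not permitted
c.withdraw()
assert not may_process(c, "order delivery")
```

A real deployment would persist these records with an audit trail, since data fiduciaries may need to demonstrate when and how consent was obtained.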
Challenges for Small Scale Industries for the DPDP Act Compliance
Small-scale industries aim high for organisational growth, and in the digital age they must also rely on online security measures and the careful handling of personal data; with the DPDP Act in the picture, this becomes an obligation they must consider and comply with. While small-scale industries, including MSMEs, may face certain challenges in fulfilling these obligations, digital data protection measures will also boost their competitiveness and customer growth. Reforming methods for better data governance is significant in today's digital era.
One of the major challenges for small-scale industries could be ensuring a skilled workforce that understands and educates internal stakeholders about the DPDP Act compliances. This could undoubtedly become an additional burden.
Further, the limited resources can make the implementation of data protection, which is oftentimes complex for a layperson in the case of a small-scale industry, difficult to implement. Limitations in resources are often financial or human resources.
Cybersecurity, cyber awareness, and protection from cyber threats require a degree of expertise that is often lacking in small enterprises. The decision to outsource such expertise is sometimes taken too late, and harm may already have occurred by the time an incident is discovered.
Investment in the core business or enterprise many times doesn't include technology other than the basic requirements to run the business, nor towards ensuring that the data is secure and all compliances are met. However, in the fast-moving digital world, all industries need to be mindful of their efforts to protect personal data and proper data governance.
Recommendations
To ensure proper and effective personal data handling practices as per the provisions of the Act, small companies and startups need to work on both their backend and frontend processes and take adequate measures to comply. While such industries have been given more time to ensure compliance, the following suggestions can help them align with the new law.
Small companies can ensure compliance with the DPDP Act by implementing robust data protection policies, investing in and providing employee training on data privacy, using age-verification mechanisms, and adopting privacy-by-design principles. They should conduct a gap analysis to identify areas where current practices fall short of DPDP Act requirements. Regular audits, secure data storage solutions, and transparent communication with users about data practices are also essential, as is the use of cost-effective tools and technologies for data protection and management.
Conclusion
Small-scale industries must take proactive steps to align with the DPDP Act, 2023 provisions. By understanding the requirements, leveraging external expertise, and adopting best practices, small-scale industries can ensure compliance and protect personal data effectively. In the long run, complying with the new law would lead to greater trust and better business for the enterprises, resulting in a larger revenue share for them.
References
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1959161
- https://www.financialexpress.com/business/digital-transformation-dpdp-act-managing-data-protection-compliance-in-businesses-3305293/
- https://economictimes.indiatimes.com/tech/technology/big-tech-coalition-seeks-12-18-month-extension-to-comply-with-indias-dpdp-act/articleshow/104726843.cms?from=mdr
Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) comes into force 20 days after publication, setting harmonised rules across the EU, and amends key regulations and directives to ensure a robust framework for AI technologies. The AI Act has been in development for two years; it enters into force across all 27 EU Member States on 1 August 2024, and enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI that threaten citizens' rights, including biometric categorisation, untargeted scraping of faces, emotion-recognition systems in the workplace and schools, and social scoring systems. It also prohibits the use of predictive policing tools in some instances. The law takes a phased approach to implementing the EU's AI rulebook, with various deadlines between now and then as different legal provisions start to apply.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
Companies providing, distributing, importing, or using AI systems and GPAI models in the EU that fail to comply with the Act face fines of up to EUR 35 million or 7 per cent of total worldwide annual turnover, whichever is higher.
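The "whichever is higher" penalty ceiling is a simple maximum of two figures. A minimal sketch of that arithmetic is below; the function name is ours and this is illustrative only, not legal guidance:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious infringements under the EU AI Act:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    # multiply before dividing to keep the percentage exact for round figures
    return max(35_000_000.0, worldwide_annual_turnover_eur * 7 / 100)
```

For a company with EUR 100 million in turnover, 7% is only EUR 7 million, so the EUR 35 million floor applies; for EUR 2 billion in turnover, the 7% figure (EUR 140 million) governs.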
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk AI, such as social scoring systems and manipulative AI, while the bulk of the regulation addresses high-risk AI systems.
- Limited-risk AI systems, such as chatbots and deepfakes, are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI. Minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, may be used freely, though this may change as generative AI advances.
- The majority of obligations fall on providers (developers) of high-risk AI systems who intend to place them on the market or put them into service in the EU, regardless of whether the provider is based in the EU or a third country. Third-country providers are also covered where the high-risk AI system's output is used in the EU.
- Users (deployers) are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Deployers of high-risk AI systems have some obligations, though fewer than providers. This applies to deployers located in the EU, and to third-country deployers where the AI system's output is used in the EU.
- Providers of general-purpose AI (GPAI) models must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-licence GPAI models need only comply with the copyright requirements and publish the training-data summary, unless their model presents a systemic risk. All providers of GPAI models that present a systemic risk, whether open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but are not necessarily limited to, the obligations above: in particular, the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
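The four-tier classification described above can be sketched as a simple lookup, mapping the example use cases named in this section to their tiers. This is a heavily simplified illustration of the Act's logic (real classification follows Article 6 and Annex III), and the names below are ours:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed, subject to strict obligations"
    LIMITED = "allowed, subject to transparency duties"
    MINIMAL = "free use"

# Illustrative mapping of use cases mentioned in the text to tiers.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "manipulative AI": RiskTier.UNACCEPTABLE,
    "biometric identification": RiskTier.HIGH,
    "AI in law enforcement": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake generator": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
}
```

The point of the structure is that obligations attach to the tier, not the individual product: a compliance check only needs to resolve a system's tier once, then apply that tier's rule set.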
Application & Timeline of the Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts will apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems subject to transparency requirements 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1, 2024: The AI Act enters into force.
- February 2025: Chapters I (general provisions) and II (prohibited AI systems) apply; prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) apply, except for Article 101 (fines for GPAI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
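The milestones above are all fixed offsets (6, 12, 24, and 36 months) from the entry-into-force date, so they can be derived rather than memorized. A minimal sketch follows; note that the precise legal deadlines in the Act fall on the 2nd of the month (e.g. 2 February 2025), so this computes the approximate first-of-month anchors only:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day kept as-is; safe for day 1)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES = {
    "prohibitions (Chapters I & II)": add_months(ENTRY_INTO_FORCE, 6),
    "GPAI rules, governance, penalties": add_months(ENTRY_INTO_FORCE, 12),
    "full application (except Art. 6(1))": add_months(ENTRY_INTO_FORCE, 24),
    "Article 6(1) high-risk obligations": add_months(ENTRY_INTO_FORCE, 36),
}
```

Running this reproduces the February 2025, August 2025, August 2026, and August 2027 dates listed above.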
The AI Act sets out clear definitions for the different actors involved in AI: providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems can be held accountable. The Act also applies to providers and deployers located outside the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU, and it covers any AI system within the EU that is on the market, in service, or in use, reaching both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as "Operators"). The Act has extraterritorial application: it can apply to companies not established in the EU, including providers outside the EU that make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is a landmark piece of legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models and adopts a risk-based approach to AI governance, categorizing potential risks into four tiers: unacceptable, high, limited, and minimal, with stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems, provides specific rules for GPAI models, and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, it addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, setting a global benchmark for regulating AI. Companies to which the Act applies will need to ensure their practices align with it, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging, and implementing regulatory checks and securing compliance from the companies concerned will be no small task; in the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are therefore key to effective AI governance. The journey towards robust AI regulation has begun in major jurisdictions, and it is important to find the right balance between safety and innovation while also taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide