Harmonised cybersecurity rules? The EU proposes Cyber Resilience Act 2022

On Thursday, 15 September 2022, the European Commission proposed the first-ever EU-wide Cyber Resilience Act regulating essential cybersecurity requirements for products with digital elements and ensuring more secure hardware and software for consumers within the single market.

According to the Commission, the cybersecurity of the entire supply chain is maintained only if all of its components are cyber-secure. The existing EU legal framework covers only certain aspects of cybersecurity from different angles (products, services, crisis management, and crime), leaving substantial gaps, and does not set mandatory security requirements for products with digital elements.

The proposed rules set out the obligations of economic operators (manufacturers, importers, and distributors) to comply with the essential cybersecurity requirements. The rules would benefit different stakeholders: by ensuring secure products, businesses would maintain customers’ trust and their established reputations, while customers would have detailed instructions and the necessary information when purchasing products, which would in turn support data and privacy protection.

According to the proposal, manufacturers must ensure that cybersecurity is taken into account in the planning, design, development, production, delivery, and maintenance phases, that cybersecurity risks are documented, and that vulnerabilities and incidents are reported. The regulation also introduces stricter duty-of-care rules covering the entire life cycle of products with digital elements: once a product is sold, the company remains responsible for its security throughout its expected lifetime or for a minimum of five years, whichever is shorter. Moreover, smart device makers must communicate “sufficient and accurate information” to consumers so that buyers can grasp security considerations at the time of purchase and set up devices securely.

Importers may only place on the market products with digital elements that comply with the requirements set out in the Act and whose manufacturers have put in place processes complying with the essential requirements. When making a product with digital elements available on the market, distributors must act with due care in relation to the requirements of the Regulation. Non-compliance with the cybersecurity requirements and infringements by economic operators will result in administrative fines and penalties (Article 53), and market surveillance authorities will have the power to order the withdrawal or recall of non-compliant devices.

The Regulation sets out horizontal cybersecurity rules, although rules tailored to particular sectors or products might have been more useful and practical. The new rules do not apply to devices whose cybersecurity requirements are already regulated by existing EU rules, such as aviation technology, cars, and medical devices.

The Commission’s press release announced that the new rules will have an impact not only in the Union but also in the global market beyond Europe. Considering the international significance of the GDPR, such a global reach is plausible. On another note, attempts to ensure cyber-secure products are not specific to the EU; other states have already taken similar measures. The UK, for example, launched a consultation ahead of potential legislation to ensure that household items connected to the internet are better protected from cyber-attacks.

While the EU’s proposed Act is a significant step forward, it still needs to be reviewed by the European Parliament and the Council before it becomes effective. If adopted, economic operators and the Member States will have twenty-four months to implement the new requirements, although the obligation to report actively exploited vulnerabilities and incidents will apply from one year after entry into force (Article 57).

EU Proposes a Uniform Approach to the Regulation of Artificial Intelligence

Artificial intelligence (AI) is used in many domains, ranging from the public sector to health, finance, insurance, home affairs, and agriculture. There is no doubt that AI can potentially bring a wide array of economic and societal benefits for nations and for humanity as a whole. However, how AI can best be regulated has been the subject of intense deliberation, given that its applications could have adverse consequences for the privacy, dignity, and other fundamental human rights of individuals. There is no easy answer to this question, and various options have been deliberated over the years. Academics have proposed theories as to which mode of regulation would best serve society’s interests, while various stakeholders (developers and/or users of the technology) have supported the regulatory alternatives suiting their own interests.

On 21 April 2021, the European Commission unveiled its proposal for the regulation of AI in the EU (2021/0106 (COD)). This is an important development which will, no doubt, generate significant interest (and debate) and play a role in shaping the regulatory framework not only in the EU but perhaps globally. In a nutshell, the proposed new regulatory regime for AI is as follows:

  • The regulation lists AI systems whose use is considered unacceptable and accordingly prohibited (Article 5). Such AI practices are: i) those that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm; ii) those that exploit any of the vulnerabilities of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; iii) those that are used by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following: a) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; b) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity; and iv) those that use “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (subject to certain listed exclusions).
  • The new regime contains specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons (Title III, Articles 6 and 7). Annex III lists a limited number of AI systems whose risks have already materialised or are likely to materialise in the near future (e.g. biometric identification and categorisation of natural persons; AI systems intended to be used for the recruitment or selection of natural persons for employment; AI systems intended to be used by public authorities to evaluate the eligibility of natural persons for public assistance benefits and services; and AI systems intended to be used by law enforcement authorities as polygraphs and similar tools to detect the emotional state of a natural person). Article 7 authorises the Commission to expand the list of high-risk AI systems in the future by applying a set of criteria and a risk assessment methodology.
  • The proposed regulation sets out the legal requirements for high-risk AI systems in relation to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security (Chapter 2).              
  • Chapter 4 sets the framework for notified bodies to be involved as independent third parties in conformity assessment procedures and Chapter 5 explains in detail the conformity assessment procedures to be followed for each type of high-risk AI system.
  • Title IV imposes transparency obligations on certain AI systems (e.g. those that i) interact with humans; ii) are used to detect emotions or determine association with (social) categories based on biometric data; or iii) generate or manipulate content (“deep fakes”)).
  • Title V encourages national competent authorities to set up regulatory sandboxes and sets a basic framework in terms of governance, supervision and liability.
  • The draft regulation proposes to establish a European Artificial Intelligence Board, which will facilitate the smooth, effective and harmonised implementation of the regulation’s requirements by contributing to the effective cooperation of the national supervisory authorities and the Commission and by providing advice and expertise to the Commission. At the national level, Member States will have to designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the regulation (Title VI).

There is no doubt that in the coming weeks the suitability of the proposed regulation will be rigorously deliberated. For example, civil rights campaigners might argue that the proposed regulation does not go far enough, as it allows several exceptions to the prohibition on “real-time” biometric identification systems. Notably, Article 5 of the proposed regulation states that the use of real-time biometric identification systems can be allowed for the “prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack”, an exception whose interpretation leaves wide discretionary power to the authorities. On the other hand, developers of AI applications might find it troubling that the Commission would have the discretion going forward to treat newly developed applications as high-risk, making them subject to the demanding compliance regime set out in the proposed regulation.

Obviously, the proposed regulation will not apply in the UK. However, it is important for the relevant UK regulators to watch what is brewing on the other side of the Channel, and we should follow the emerging debates and the reactions from various interest groups and academics with interest. There might be considerable benefit for the UK in making its move once the path the EU takes on this issue is settled. This might bring economic advantages and perhaps even a competitive edge (assuming that more efficient regulatory measures are preferred in the UK)!