Artificial intelligence (AI) is used in many domains, ranging from the public sector to health, finance, insurance, home affairs and agriculture. There is no doubt that AI can potentially bring a wide array of economic and societal benefits for nations and humanity as a whole. However, it has been the subject of intense deliberation as to how AI can best be regulated, given that its applications could potentially have adverse consequences for privacy, dignity and other fundamental human rights of individuals. There is no easy answer to this question, and various options have been deliberated over the years. Academics have come up with theories as to which manner of regulation would best suit the interests of society, whilst various stakeholders (developers and/or users of the technology) have supported different regulatory alternatives suiting their own interests.
On 21 April 2021, the European Commission unveiled its proposal for the regulation of AI in the EU (2021/0106 (COD)). This is an important development which will, no doubt, generate significant interest (and debate) and play a role in shaping the regulatory framework not only in the EU but perhaps globally. In a nutshell, the proposed new regulatory regime for AI is as follows:
- The regulation lists AI systems whose use is considered unacceptable and accordingly prohibited (Article 5). Such AI practices are: i) those that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm; ii) those that exploit any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; iii) those that are used by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following: a) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; or b) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity; and iv) those that use “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (certain exclusions are also listed for this).
- The new regime contains specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons (Title III, Articles 6 and 7). Annex III lists a limited number of AI systems whose risks have already materialised or are likely to materialise in the near future (e.g. biometric identification and categorisation of natural persons; AI systems intended to be used for recruitment or selection of natural persons for employment; AI systems intended to be used by public authorities to evaluate the eligibility of natural persons for public assistance benefits and services; and AI systems intended to be used by law enforcement authorities as polygraphs and similar tools to detect the emotional state of a natural person). Article 7 authorises the Commission to expand the list of high-risk AI systems in the future by applying a set of criteria and a risk assessment methodology.
- The proposed regulation sets out the legal requirements for high-risk AI systems in relation to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security (Chapter 2).
- Chapter 4 sets the framework for notified bodies to be involved as independent third parties in conformity assessment procedures and Chapter 5 explains in detail the conformity assessment procedures to be followed for each type of high-risk AI system.
- Title IV imposes transparency obligations on certain AI systems, namely those that i) interact with humans; ii) are used to detect emotions or determine association with (social) categories based on biometric data; and iii) generate or manipulate content (“deep fakes”).
- Title V encourages national competent authorities to set up regulatory sandboxes and sets a basic framework in terms of governance, supervision and liability.
- The draft regulation proposes to establish a European Artificial Intelligence Board which will facilitate a smooth, effective and harmonised implementation of the requirements under the regulation by contributing to the effective cooperation of the national supervisory authorities and the Commission and by providing advice and expertise to the Commission. At national level, Member States will have to designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the regulation (Title VI).
There is no doubt that in the coming weeks the suitability of the proposed regulation will be rigorously deliberated. For example, civil rights campaigners might argue that the proposed regulation does not go far enough, as it allows several exceptions to the prohibition on the use of “real-time” biometric identification systems. Notably, Article 5 of the proposed regulation states that the use of real-time biometric identification systems can be allowed for the “prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack”, the interpretation of which leaves wide discretionary power to the authorities. On the other hand, developers of AI applications might find it troubling that the Commission would have discretion going forward to treat newly developed applications as high-risk, making them subject to the demanding compliance regime set out in the proposed regulation.
Obviously, the proposed regulation will not apply in the UK. However, it is important for the relevant regulators in the UK to see what is brewing on the other side of the Channel. We should follow the emerging debates, and the reactions to the proposal from various interest groups and academics, with interest. There might be considerable benefit for the UK in making its move once the path the EU takes on this issue is settled. This might bring economic advantages and perhaps even a competitive edge (assuming that more efficient regulatory measures are preferred in the UK)!