EU Proposes a Uniform Approach to the Regulation of Artificial Intelligence

Artificial intelligence (AI) is used in many domains, ranging from the public sector to health, finance, insurance, home affairs and agriculture. There is no doubt that AI can potentially bring a wide array of economic and societal benefits for nations and humanity as a whole. However, it has been the subject of intense deliberation how AI can best be regulated, given that its applications could potentially have adverse consequences for privacy, dignity and other fundamental human rights of individuals. There is no easy answer to this question, and various options have been deliberated over the years. Academics have developed theories as to which manner of regulation would best serve the interests of society, whilst various stakeholders (developers and/or users of the technology) have supported the regulatory alternatives best suiting their own interests.

On 21 April 2021, the European Commission unveiled its proposal for the regulation of AI in the EU (2021/0106 (COD)). This is an important development which will, no doubt, generate significant interest (and debate) and play a role in shaping the regulatory framework not only in the EU but perhaps globally. In a nutshell, the proposed new regulatory regime for AI is as follows:

  • The regulation lists AI systems whose use is considered unacceptable and accordingly prohibited (Article 5). Such AI practices are: i) those that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm; ii) those that exploit any of the vulnerabilities of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; iii) those that are used by public authorities, or on their behalf, for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following: a) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; b) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity; and iv) those that use “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (subject to certain listed exclusions).
  • The new regime contains specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons (Title III, Articles 6 and 7). Annex III lists a limited number of AI systems whose risks have already materialised or are likely to materialise in the near future (e.g. biometric identification and categorisation of natural persons; AI systems intended to be used for the recruitment or selection of natural persons for employment; AI systems intended to be used by public authorities to evaluate the eligibility of natural persons for public assistance benefits and services; and AI systems intended to be used by law enforcement authorities as polygraphs and similar tools to detect the emotional state of a natural person). Article 7 authorises the Commission to expand the list of high-risk AI systems in the future by applying a set of criteria and a risk assessment methodology.
  • The proposed regulation sets out the legal requirements for high-risk AI systems in relation to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security (Chapter 2).
  • Chapter 4 sets the framework for notified bodies to be involved as independent third parties in conformity assessment procedures and Chapter 5 explains in detail the conformity assessment procedures to be followed for each type of high-risk AI system.
  • Title IV imposes transparency obligations on certain AI systems (e.g. those that i) interact with humans; ii) are used to detect emotions or determine association with (social) categories based on biometric data; and iii) generate or manipulate content (“deep fakes”)).
  • Title V encourages national competent authorities to set up regulatory sandboxes and sets a basic framework in terms of governance, supervision and liability.
  • The draft regulation proposes to establish a European Artificial Intelligence Board, which will facilitate a smooth, effective and harmonised implementation of the requirements under the regulation by contributing to the effective cooperation of the national supervisory authorities and the Commission and by providing advice and expertise to the Commission. At the national level, Member States will have to designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the regulation (Title VI).

There is no doubt that in the coming weeks the suitability of the proposed regulation will be rigorously deliberated. For example, civil rights campaigners might argue that the proposed regulation does not go far enough, as it allows several exceptions to the prohibition on the use of “real-time” biometric identification systems. Fundamentally, Article 5 of the proposed regulation states that the use of real-time biometric identification systems can be allowed for the “prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack”, the interpretation of which leaves wide discretionary power to the authorities. On the other hand, developers of AI applications might find it troubling that the Commission would have discretion going forward to treat newly developed applications as high-risk, making them subject to the demanding compliance regime set out in the proposed regulation.

Obviously, the proposed regulation will not apply in the UK. However, it is important for the relevant regulators in the UK to observe what is brewing on the other side of the Channel. We should follow with interest the emerging debates and the reactions to the proposal from various interest groups and academics. There might be considerable benefit for the UK in making its move once the path the EU takes on this issue is settled. This might bring economic advantages and perhaps even a competitive edge (assuming that more efficient regulatory measures are preferred in the UK)!

Published by

Professor Barış Soyer

Professor Soyer was appointed a lecturer at the School of Law, Swansea University in 2001 and was promoted to readership in 2006 and professorship in 2009. He was appointed Director of the Institute of Shipping and Trade Law at the School of Law, Swansea in October 2010. He was previously a lecturer at the University of Exeter. His postgraduate education was at the University of Southampton, from where he obtained his PhD in 2000. Whilst at Southampton he was also a part-time lecturer and tutor. His principal research interest is in the field of insurance, particularly marine insurance, but his interests extend broadly throughout maritime law and contract law. He is the author of Warranties in Marine Insurance, published by Cavendish Publishing (2001), and an impressive list of articles published in elite journals such as Lloyd’s Maritime and Commercial Law Quarterly, Berkeley Journal of International Law, Journal of Contract Law and Journal of Business Law. His first book was the joint winner of the Cavendish Book Prize 2001 and was awarded the British Insurance Law Association Charitable Trust Book Prize in 2002 for the best contribution to insurance literature. A new edition of this book was published in 2006. In 2008, he edited a collection of essays published by Informa evaluating the Law Commissions’ reform proposals in insurance law: Reforming Commercial and Marine Insurance Law. This book has been cited on numerous occasions in the Consultation Reports published by the English and Scottish Law Commissions and also by the Irish Law Reform Commission, and has been instrumental in shaping the nature of law reform. In recent years, he edited several books in partnership with Professor Tettenborn: Pollution at Sea: Law and Liability, published by Informa in 2012; Carriage of Goods by Sea, Land and Air, published by Informa in 2013; and Offshore Contracts and Liabilities, published by Informa Law from Routledge in 2014.
His most recent monograph, Marine Insurance Fraud, was published in 2014 by Informa Law from Routledge. His teaching experience extends to the under- and postgraduate levels, including postgraduate teaching of Carriage of Goods by Sea, Transnational Commercial Law, Marine Insurance, Admiralty Law and Oil and Gas Law. He is one of the editors of the Journal of International Maritime Law and is also on the editorial board of Shipping and Trade Law and Baltic Maritime Law Quarterly. He currently teaches Admiralty Law, Oil and Gas Law and Marine Insurance on the LLM programme and also is the Head of the Department of Postgraduate Legal Studies at Swansea.
