New “Consumer Duty”: Would It Affect Insurers Utilising AI and Algorithms?

By 31 July 2023, all regulated firms in the UK must comply with a new “Consumer Duty” when selling new and existing products and services to their retail customers (the date of implementation is 31 July 2024 for firms offering closed products and services). The Duty has been introduced by the Financial Conduct Authority (FCA) through an amendment to the existing Principles for Businesses (PRIN) and is intended to impose a higher standard of behaviour on firms interacting directly or indirectly with retail customers. The scope of the Duty extends to the regulated activities and ancillary activities of all firms authorised under the Financial Services and Markets Act 2000 (FSMA), the Payment Services Regulations 2017 (PSRs) and the Electronic Money Regulations 2011 (EMRs), and on that basis it applies not only to insurers but also to insurance intermediaries (e.g., insurance brokers).

What Does the New “Consumer Duty” Entail?

In a nutshell, the new “Consumer Duty” requires firms to take a more proactive approach and put their customers’ needs first. It should, however, be noted that the Duty is neither a “duty of care” nor a fiduciary duty, and it does not require firms to provide advice to customers. Although the Duty does not give customers a private right of action, it enables the FCA to investigate alleged breaches, issue fines against firms and secure redress for customers who have suffered harm through a firm’s breach of the Duty.

More specifically, the Duty introduces:

  1. An overarching consumer principle that firms must act to deliver good outcomes for retail customers.
  2. Requirements flowing from this overarching principle for firms: (i) to act in good faith; (ii) to avoid causing foreseeable harm; and (iii) to enable and support customers to pursue their financial objectives. No firm definition of the term “good faith” in this context has been provided, but the FCA has put forward some examples of where a firm would be judged not to be acting in good faith. Accordingly, an insurance firm will not be acting in good faith if it sells insurance to a customer by taking advantage of his/her vulnerability. Similarly, an insurance company will not be acting in good faith if it exploits its customers’ behavioural biases, e.g. the tendency to renew a policy automatically without reviewing the details of any revised terms or endorsements, or any changes to the excess or premium introduced by the policy.
  3. A focus on four outcomes (products and services, price and value, consumer understanding and consumer support), requiring firms to provide products and services that meet consumers’ needs and offer fair value, communications that consumers can understand, and the support that consumers need.

The Duty will therefore require insurers to reflect on how they design, market and sell insurance products to their customers, and on the kind of support they provide to customers who make enquiries. Insurers are now under a regulatory duty to act in good faith, avoid causing foreseeable harm and support their customers in delivering these outcomes.

Specific Implications for Insurance Companies, Especially Those Using AI and Algorithms

Insurers are already reflecting on how they present their policies and the various terms within them. They will be expected to inform customers fully of the limits of cover (especially policy excesses). Similarly, any proposed changes to cover at the renewal stage should be made clear to customers so that they are aware of the changes to their policy and the scope of cover. Many insurers would say that these are good practices they have been implementing for some time anyway.

One area to which insurers need to pay careful attention is the standard questions they expect potential customers to answer where they utilise automated underwriting systems, through which applications for insurance are evaluated and processed without the need for individual underwriter involvement. In some recent cases, the vagueness of such questions has raised legal issues (see, for example, Ristorante Ltd T/A Bar Massimo v. Zurich Insurance Plc [2021] EWHC 2538 (Ch)). For example, if a consumer had received a “declined to quote” decision from a previous insurer, how would s/he be expected to respond to a standard question on such an automated system asking whether s/he has been refused insurance previously? Would a typical customer be expected to appreciate that “declined to quote” might not necessarily mean a refusal of insurance? Insurers need to think about how they phrase such questions, and in the light of the new Duty it would be advisable to provide additional explanation alongside such a question on an automated underwriting platform.

However, more interesting questions might arise in cases where insurance companies utilise AI and algorithms for pricing, risk assessment and consumer support purposes.

Naturally, there is an expectation that any insurance firm utilising AI in the risk assessment process will ensure that the system in use does not inadvertently lead to discriminatory outcomes, and the new Consumer Duty amplifies this. That is easy to say but difficult to achieve in practice. It is well known that when algorithms are used for risk assessment purposes it is rather difficult, if not impossible, to know what data the algorithm has used and what difference any given factor made to the assessment (commonly known as the “black-box problem”). Insurers rely on programmers, designers and technology experts when they employ AI for risk assessment purposes, and as much as they expect such experts to assist them in fulfilling their “Consumer Duty”, it is ultimately something over which they have very little control. More significantly, it is rather doubtful that the FCA will have the degree of expertise and technical knowledge needed to assess whether an algorithm in use can deliver good outcomes for customers. To put it differently, it is not clear at this stage whether the new Consumer Duty will in practice enhance the position of consumers when underwriting decisions are taken by AI and algorithms.
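
The black-box problem does not mean a model is entirely beyond scrutiny, however. One widely used external probe is permutation importance: shuffle one input at a time and measure how much the model’s outputs move, which reveals which factors drive decisions without opening the box. The sketch below is a minimal illustration of the idea in Python; the `quote` function and its inputs are hypothetical stand-ins invented for this example, not any insurer’s actual system.

```python
import numpy as np

def quote(X):
    # Stand-in for an opaque underwriting model: in practice the insurer
    # (or regulator) would only observe inputs and outputs, not this formula.
    age, claims, postcode_risk = X[:, 0], X[:, 1], X[:, 2]
    return 200 + 3 * claims ** 2 + 50 * postcode_risk + 0.1 * age

rng = np.random.default_rng(0)
features = ["age", "past_claims", "postcode_risk"]
X = np.column_stack([
    rng.uniform(18, 80, 1000),    # applicant age
    rng.poisson(1.0, 1000),       # number of past claims
    rng.uniform(0.0, 1.0, 1000),  # postcode risk score
])

baseline = quote(X)
for j, name in enumerate(features):
    X_perm = X.copy()
    # Shuffling one column breaks that feature's link to the quotes;
    # the resulting shift in output measures how much the model uses it.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    shift = np.mean(np.abs(quote(X_perm) - baseline))
    print(f"{name}: mean quote shift {shift:.2f}")
```

Even so, a probe of this kind shows only which inputs matter, not why the model weighs them as it does, which is precisely the gap the Duty leaves regulators to grapple with.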

Another advantage that algorithms could provide to insurers is the ability to differentiate on price based not simply on risk-related factors but on other factors (such as an individual’s tendency to pay more for the same product). If allowed or left unchecked, an algorithm, by taking into account such factors (e.g. the number of luxury items ordered by an individual online), might quote a higher premium to one individual than it would to another with a similar risk profile. We have a similar problem here: could the algorithm be trained not to do this, and, more significantly, how can a regulator check whether this is complied with?
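
One way a firm or a regulator might test for this in practice is a simple “twin test”: generate two applicant profiles that are identical on every risk factor and differ only in the suspected willingness-to-pay signal, and compare the quotes. The sketch below illustrates the idea; `quote_premium` and its luxury-spending loading are hypothetical, constructed purely to show the kind of practice such a check would catch.

```python
def quote_premium(age: int, past_claims: int, luxury_orders: int) -> float:
    # Hypothetical pricing model invented for illustration.
    base = 300.0 + 4.0 * past_claims ** 2 + 0.5 * age  # risk-based component
    loading = 1.10 if luxury_orders > 5 else 1.0       # non-risk loading (the problematic term)
    return base * loading

def non_risk_quote_gap(age: int, past_claims: int) -> float:
    # Two "twins": identical risk profile, different willingness-to-pay proxy.
    low = quote_premium(age, past_claims, luxury_orders=0)
    high = quote_premium(age, past_claims, luxury_orders=10)
    return high - low  # should be ~0 if pricing is purely risk-based

gap = non_risk_quote_gap(age=40, past_claims=1)
print(f"Quote gap attributable to the non-risk signal: {gap:.2f}")
```

A test of this kind is attractive for supervision because it needs no access to the model’s internals, only the ability to request quotes, though it can only detect signals the tester has thought to vary.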

Also, many insurance companies today use chatbots when interacting with customers. Given that the Consumer Duty requires insurance companies to provide adequate support to consumers, an insurer might well fall short of this duty by employing a chatbot that cannot deal with unexpected situations or non-standard issues. Checking whether a chatbot is fit for purpose should be easier than trying to understand what factors an algorithm has utilised in making an insurance decision. I suppose the new Consumer Duty means that insurers must invest in more advanced chatbots or put in place alternative support mechanisms for those customers who do not get adequate or satisfactory answers from chatbots.
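
A minimal design pattern addressing this is a confidence-based fallback: the chatbot answers only the queries it can classify confidently and escalates everything else to a human agent. The sketch below illustrates the idea; the keyword-based classifier is a toy stand-in for a real natural-language model, and the threshold value is arbitrary.

```python
CONFIDENCE_THRESHOLD = 0.75  # arbitrary cut-off chosen for illustration

def classify_intent(message: str) -> tuple[str, float]:
    # Toy stand-in for a real NLU model; returns (intent, confidence).
    known = {"renew": ("policy_renewal", 0.95), "claim": ("new_claim", 0.90)}
    for keyword, result in known.items():
        if keyword in message.lower():
            return result
    return "unknown", 0.20  # non-standard query: low confidence

def handle(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence < CONFIDENCE_THRESHOLD:
        # The Duty-relevant fallback: do not let the bot guess.
        return "Connecting you to a human agent."
    return f"Automated reply for intent: {intent}"

print(handle("I want to renew my policy"))          # handled by the bot
print(handle("My excess changed mid-term, why?"))   # escalated to a human
```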

There is no doubt that the objective of the new Consumer Duty is to create a culture change and to encourage firms, including insurers, to monitor their products and make changes to ensure that their practices and products are “appropriate” and deliver good outcomes for customers. This will also be the motivating factor when insurers utilise AI and algorithms for product development, underwriting and customer support. However, it is also evident that technical expertise and knowledge within the insurance sector are at an elementary level, and it will probably take some time until insurers and regulators have the knowledge and expertise to assess and adapt AI and algorithms in line with consumers’ needs.

Artificial Intelligence, Inventions and Patents: New Enhanced Guidance by the UK IPO

On 22 September 2022, the UK Intellectual Property Office (IPO) published the Guidance on Examining Patent Applications Relating to Artificial Intelligence Inventions. The Guidance consists of two parts:

  • The Guidelines on the practice of examining patent applications for inventions relating to artificial intelligence (AI), and
  • The Scenarios, illustrating a non-binding assessment of how the IPO would apply the Guidelines to the patentability of particular AI inventions

Following the UK Government’s response to the Call for Views on Artificial Intelligence and Intellectual Property, which ran from 7 September 2020 to 30 November 2020, the IPO committed to publishing this Guidance. Indeed, the backdrop to this project was the refreshed industrial strategy (“Strategy for Growth”) and the government’s wider ambition for the United Kingdom to be at the forefront of the technological revolution and a leader in AI technology.

In its response to the AI and IP call for views, the government defined AI as “technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”. Accordingly, the Guidelines proceed on the basis that, in the UK, patents are available for AI inventions in all fields of technology provided the conditions for the grant of a valid patent are met. According to Section 1(1) of the Patents Act 1977, a patent may be granted only if: (a) the invention is new, (b) it involves an inventive step, (c) it is capable of industrial application, and (d) the grant of a patent for it is not excluded by the relevant provisions. These four conditions apply to all inventions in all fields of technology. Thus, a patent may be granted for an AI invention when it is new, involves an inventive step, is capable of industrial application, and is not excluded from patent protection.

AI inventions are typically computer-implemented, relying on mathematical methods or computer programs in some way. Indeed, the Guidelines apply whether the invention is categorised as “applied AI” or “core AI”, or relates to training an AI system in some way. The IPO’s practice is to examine whether such an invention makes a contribution that is technical in nature by considering what task or process it performs when run on a computer. This excludes inventions relating solely to a mathematical method “as such” and/or a program for a computer “as such”. An AI invention is excluded from patent protection if it does not reveal a technical contribution.

The Guidelines also touch briefly on the requirement for sufficiency of disclosure concerning AI inventions.

It is worth noting that the Guidelines do not have any mandatory effect and are not a source of law. The current legal framework in the field comprises the Patents Act 1977, as amended by subsequent legislation, and the Patents Rules 2007. When deciding the relevant issues, the case law and the UK courts’ interpretation of the legislation should be considered. Furthermore, judicial notice must be taken of international conventions (such as the European Patent Convention) and of decisions and opinions made under them.

The IPO’s opinions on patentability and its practical illustrations of possible scenarios are not binding for any purpose under the Patents Act 1977. Despite its advisory character, the Guidance is quite helpful and supplements the comprehensive guidance on patent practice at the IPO set out in the Manual of Patent Practice. The constructive feature of the Guidelines is particularly evident in the explanations referring to the fundamental case law and judicial interpretations where relevant. More details of the Guidance and the full documents are available at Examining patent applications relating to artificial intelligence (AI) inventions – GOV.UK (www.gov.uk).

Harmonised Cybersecurity Rules? The EU Proposes the Cyber Resilience Act 2022

On Thursday, 15 September 2022, the European Commission proposed the first-ever EU-wide Cyber Resilience Act, laying down essential cybersecurity requirements for products with digital elements and ensuring more secure hardware and software for consumers within the single market.

According to the Commission, the cybersecurity of the entire supply chain is maintained only if all of its components are cyber-secure. The existing EU legal framework covers only certain aspects of cybersecurity from different angles (products, services, crisis management and crimes), which leaves substantial gaps and does not lay down mandatory requirements for the security of products with digital elements.

The proposed rules set out the obligations of economic operators (manufacturers, importers and distributors) to comply with the essential cybersecurity requirements. Indeed, the rules would benefit different stakeholders: by ensuring secure products, businesses would maintain customers’ trust and their established reputation, while customers would have detailed instructions and the necessary information when purchasing products, which would in turn support data and privacy protection.

According to the proposal, manufacturers must ensure that cybersecurity is taken into account in the planning, design, development, production, delivery and maintenance phases, that cybersecurity risks are documented, and that vulnerabilities and incidents are reported. The Regulation also introduces stricter duty-of-care rules covering the entire life cycle of products with digital elements. Indeed, once a product is sold, companies must remain responsible for its security throughout its expected lifetime or for a minimum of five years (whichever is shorter). Moreover, smart device makers must communicate “sufficient and accurate information” to consumers to enable buyers to grasp security considerations at the time of purchase and to set up devices securely. Importers may only place on the market products with digital elements that comply with the requirements set out in the Act and where the processes put in place by the manufacturer comply with the essential requirements. When making a product with digital elements available on the market, distributors must act with due care in relation to the requirements of the Regulation. Non-compliance with the cybersecurity requirements and infringements by economic operators will result in administrative fines and penalties (Article 53). Indeed, market surveillance authorities will have the power to order the withdrawal or recall of non-compliant devices.

The Regulation lays down horizontal cybersecurity rules, although rules tailored to particular sectors or products could have been more useful and practical. The new rules do not apply to devices whose cybersecurity requirements are already regulated by existing EU rules, such as aviation technology, cars and medical devices.

The Commission’s press release announced that the new rules will have an impact not only in the Union but also in the global market beyond Europe. Considering the international significance of the GDPR, such an outcome is certainly plausible. On another note, attempts to ensure cyber-secure products are not specific to the EU; other states have already taken similar measures. The UK, for example, has launched a consultation ahead of potential legislation to ensure that household items connected to the internet are better protected from cyber-attacks.

While the EU’s proposed Act is a significant step forward, it still needs to be reviewed by the European Parliament and the Council before it becomes effective, and, if adopted, economic operators and the Member States will have twenty-four months to implement the new requirements. The obligation to report actively exploited vulnerabilities and incidents will apply one year after entry into force (Article 57).

EU Proposes a Uniform Approach to the Regulation of Artificial Intelligence

Artificial intelligence (AI) is used in many domains, ranging from the public sector to health, finance, insurance, home affairs and agriculture. There is no doubt that AI can potentially bring a wide array of economic and societal benefits to nations and humanity as a whole. However, how AI can best be regulated has been the subject of intense deliberation, given that its applications could have adverse consequences for privacy, dignity and other fundamental human rights of individuals. There is no easy answer to this question, and various options have been deliberated over the years. Academics have put forward theories as to which manner of regulation would best serve the interests of society, whilst various stakeholders (developers and/or users of the technology) have supported the regulatory alternatives that suit their own interests.

On 21 April 2021, the European Commission unveiled its proposal for the regulation of AI in the EU (2021/0106 (COD)). This is an important development which will, no doubt, generate significant interest (and debate) and play a role in shaping the regulatory framework not only in the EU but perhaps globally. In a nutshell, the proposed new regulatory regime for AI is as follows:

  • The regulation lists AI systems whose use is considered unacceptable and accordingly prohibited (Article 5). Such AI practices are: i) those that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm; ii) those that exploit any of the vulnerabilities of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; iii) those that are used by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following: a) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; b) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity; and iv) those that use “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (subject to certain listed exceptions).
  • The new regime contains specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons (Title III, Arts 6 and 7). Annex III lists a limited number of AI systems whose risks have already materialised or are likely to materialise in the near future (e.g. biometric identification and categorisation of natural persons; AI systems intended to be used for the recruitment or selection of natural persons for employment; AI systems intended to be used by public authorities to evaluate the eligibility of natural persons for public assistance benefits and services; and AI systems intended to be used by law enforcement authorities as polygraphs and similar tools to detect the emotional state of a natural person). Article 7 authorises the Commission to expand the list of high-risk AI systems in the future by applying a set of criteria and a risk assessment methodology.
  • The proposed regulation sets out the legal requirements for high-risk AI systems in relation to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security (Chapter 2).              
  • Chapter 4 sets the framework for notified bodies to be involved as independent third parties in conformity assessment procedures and Chapter 5 explains in detail the conformity assessment procedures to be followed for each type of high-risk AI system.
  • Certain transparency obligations are imposed on certain AI systems (e.g. those that (i) interact with humans; (ii) are used to detect emotions or determine association with (social) categories based on biometric data; and (iii) generate or manipulate content (“deep fakes”)) by virtue of Title IV.
  • Title V encourages national competent authorities to set up regulatory sandboxes and sets a basic framework in terms of governance, supervision and liability.
  • The draft regulation proposes to establish a European Artificial Intelligence Board, which will facilitate the smooth, effective and harmonised implementation of the regulation by contributing to effective cooperation between the national supervisory authorities and the Commission and by providing advice and expertise to the Commission. At the national level, Member States will have to designate one or more national competent authorities and, among them, a national supervisory authority, for the purpose of supervising the application and implementation of the regulation (Title VI).

There is no doubt that in the coming weeks the suitability of the proposed regulation will be rigorously deliberated. For example, civil rights campaigners might argue that the proposed regulation does not go far enough, as it allows several exceptions to the prohibition on “real-time” biometric identification systems. Fundamentally, Article 5 of the proposed regulation states that the use of real-time biometric identification systems can be allowed for the “prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack”, the interpretation of which leaves wide discretionary power to the authorities. On the other hand, developers of AI applications might find it troubling that the Commission would have discretion going forward to treat newly developed applications as high-risk, making them subject to the demanding compliance regime set out in the proposed regulation.

Obviously, the proposed regulation will not apply in the UK. However, it is important for the relevant regulators in the UK to see what is brewing on the other side of the Channel. We should follow the emerging debates, and the reactions to the proposal from various interest groups and academics, with interest. There might be considerable benefit for the UK in making its move once the path the EU takes on this issue is settled. This might bring economic advantages and perhaps even a competitive edge (assuming that more efficient regulatory measures are preferred in the UK)!