New “Consumer Duty”: Would It Affect Insurers Utilising AI and Algorithms?

By 31 July 2023, all regulated firms in the UK must comply with a new “Consumer Duty” when selling new and existing products and services to their retail customers (the implementation date is 31 July 2024 for firms offering closed products and services). The Duty has been introduced by the Financial Conduct Authority (FCA) through an amendment to its Principles for Businesses (PRIN) and is intended to impose a higher standard of behaviour on firms interacting directly or indirectly with retail customers. The scope of the Duty extends to the regulated activities and ancillary activities of all firms authorised under the Financial Services and Markets Act 2000 (FSMA), the Payment Services Regulations 2017 (PSRs) and the Electronic Money Regulations 2011 (EMRs), and on that basis it applies not only to insurers but also to insurance intermediaries (e.g., insurance brokers).

What Does the New “Consumer Duty” Entail?

In a nutshell, the new “Consumer Duty” requires firms to take a more proactive approach and put their customers’ needs first. It should, however, be noted that the Duty is neither a “duty of care” nor a “fiduciary” duty, and it does not require firms to provide advice to customers. Although the Duty does not give customers a private right of action, it enables the FCA to investigate any allegation of breach, and the FCA can accordingly issue fines against firms and secure redress for customers who have suffered harm through a firm’s breach of the Duty.

More specifically, the Duty introduces:

  1. An overarching Consumer Principle that firms must act to deliver good outcomes for retail customers.
  1. Cross-cutting rules giving effect to this principle, which require firms: i) to act in good faith; ii) to avoid causing foreseeable harm; and iii) to enable and support customers in pursuing their financial objectives. No firm definition of the term “good faith” in this context has been provided, but the FCA has put forward some examples of conduct that would not amount to acting in good faith. Accordingly, an insurance firm will not be acting in good faith if it sells insurance to a customer by taking advantage of his/her vulnerability. Similarly, an insurance company will not be acting in good faith if it exploits its customers’ behavioural biases, e.g., a customer’s tendency to renew a policy automatically without reviewing the details of any revised terms or endorsements, or any changes to excesses or premiums introduced under the policy.
  1. Four outcomes (products and services, price and value, consumer understanding and consumer support), which require firms to ensure that consumers receive communications they can understand, that products and services meet consumers’ needs and offer fair value, and that consumers receive the support they need.

The Duty will therefore require insurers to reflect on how they design, market and sell insurance products to their customers, and on what kind of support they provide to customers who make enquiries. Insurers are now under a regulatory duty to act in good faith, avoid causing foreseeable harm and support their customers in delivering these outcomes.

Specific Implications for Insurance Companies, Especially Those Using AI and Algorithms

Insurers are already reflecting on how they present their policies and the various terms within them. They will be expected to inform customers fully of the limits of cover (especially policy excesses). Similarly, any proposed changes to cover at the renewal stage should be made clear to customers so that they are aware of the changes to their policy and the scope of cover. Many insurers would say that these are good practices they have been following for some time anyway.

One area to which insurers need to pay careful attention is the standard questions they expect potential customers to answer where they utilise automated underwriting systems, through which applications for insurance are evaluated and processed without the need for individual underwriter involvement. In some recent cases, the vagueness of such questions has raised legal issues (see, for example, Ristorante Ltd T/A Bar Massimo v. Zurich Insurance Plc [2021] EWHC 2538 (Ch)). For example, if a consumer had received a “declined to quote” decision from a previous insurer, how would s/he be expected to respond to a standard question on such an automated system asking him/her to specify whether s/he has been refused insurance previously? Would a typical customer be expected to appreciate that “declined to quote” might not necessarily mean a refusal of insurance? Insurers need to think about how they phrase such questions, and in the light of the new Duty it would be advisable to provide additional explanation alongside such a question posed on an automated underwriting platform.
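To make the point concrete, here is a minimal hypothetical sketch of how an automated underwriting platform might present the same disclosure question with and without the kind of additional explanation the Duty arguably calls for. All names, wording and structures below are invented for illustration and do not reflect any insurer’s actual system.

```python
# Hypothetical sketch: the same underwriting question with and without the
# explanatory help text that the new Duty arguably calls for. All names and
# wording are invented for illustration.

from dataclasses import dataclass

@dataclass
class Question:
    text: str
    help_text: str = ""  # extra explanation shown alongside the question

AMBIGUOUS = Question(
    text="Have you ever been refused insurance?",
    # No guidance: does a previous "declined to quote" count as a refusal?
)

CLARIFIED = Question(
    text="Has an insurer ever refused to insure you or cancelled your policy?",
    help_text=(
        "A 'declined to quote' decision, where an insurer simply did not "
        "offer you a price, does not by itself count as a refusal."
    ),
)

def render(question: Question) -> str:
    """Return the question as the platform would display it."""
    if question.help_text:
        return f"{question.text}\n  (?) {question.help_text}"
    return question.text

print(render(AMBIGUOUS))
print()
print(render(CLARIFIED))
```

The clarified version costs the insurer nothing to implement, yet it removes precisely the ambiguity that gave rise to litigation in cases such as Ristorante v. Zurich.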

However, more interesting questions might arise in cases where insurance companies utilise AI and algorithms for pricing, risk assessment and consumer support purposes.

Naturally, there is an expectation that any insurance firm utilising AI in its risk assessment process will ensure that the system in use does not inadvertently lead to discriminatory outcomes, and the new Consumer Duty amplifies this. That is easy to say but difficult to achieve in practice. It is well known that when algorithms are used for risk assessment purposes it is rather difficult, if not impossible, to know what data the algorithm has relied upon and what difference any given factor made to the assessment (commonly known as the ‘black-box problem’). Insurers rely on programmers, designers and tech experts when they employ AI for risk assessment purposes, and as much as they expect such experts to assist them in fulfilling their “Consumer Duty”, it is ultimately something over which they have very little control. More significantly, it is rather doubtful that the FCA will have the degree of expertise and technical knowledge needed to assess whether an algorithm in use could deliver good outcomes for customers. To put it differently, it is not clear at this stage whether the new Consumer Duty will in practice enhance the position of consumers when underwriting decisions are taken by AI and algorithms.
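One partial workaround, sketched below under stated assumptions, is to probe an opaque model from the outside: perturb one input at a time and observe how the output moves. The “model” and the factors it uses are stand-ins invented for this example; real systems are far more complex, and this kind of probing gives only a partial picture, which is precisely the black-box problem described above.

```python
# A crude sketch of probing an opaque risk model from the outside: perturb
# one input at a time and observe how the output moves. The "model" below is
# a stand-in black box with invented factors, not any real system.

import random

def black_box_risk_score(applicant: dict) -> float:
    """Stand-in for an opaque model the insurer cannot inspect directly."""
    return (0.02 * applicant["age"]
            + 0.50 * applicant["prior_claims"]
            + 0.30 * applicant["postcode_band"])

def factor_influence(model, applicant: dict, factor: str, trials: int = 200) -> float:
    """Average absolute change in the score when one factor is randomised."""
    base = model(applicant)
    total = 0.0
    for _ in range(trials):
        perturbed = dict(applicant)
        perturbed[factor] = random.uniform(0, 10)  # crude, untargeted perturbation
        total += abs(model(perturbed) - base)
    return total / trials

applicant = {"age": 40, "prior_claims": 1, "postcode_band": 3}
for factor in applicant:
    print(f"{factor}: {factor_influence(black_box_risk_score, applicant, factor):.3f}")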

Another advantage that algorithms could provide to insurers is the ability to differentiate on price based not simply on risk-related factors but on other factors (such as the tendency of an individual to pay more for the same product). If allowed, or left unchecked, an algorithm might, by taking into account such factors (e.g., the number of luxury items ordered by an individual online), quote a higher premium to one individual than it would have quoted to another with a similar risk profile. We have a similar problem here: could the algorithm be trained not to do this, and more significantly, how can a regulator check whether this is being complied with?
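One conceivable check, sketched below under stated assumptions, is a paired-profile test: quote two applicants who are identical on every risk-related factor and differ only in a non-risk signal, and flag any difference in premium. The pricing function and field names here are purely illustrative (and deliberately flawed, to show what the test would catch).

```python
# A sketch of a paired-profile test: quote two applicants identical on every
# risk-related factor, differing only in a non-risk signal, and flag any
# difference in premium. The pricing function and field names are invented.

def quote_premium(profile: dict) -> float:
    """Stand-in pricing algorithm (deliberately flawed for illustration)."""
    premium = 300.0 + 50.0 * profile["prior_claims"] + 2.0 * profile["age"]
    # A problematic model quietly adds a willingness-to-pay signal:
    premium += 25.0 * profile.get("luxury_purchases", 0)
    return premium

risk_factors = {"age": 35, "prior_claims": 0}
quote_a = quote_premium({**risk_factors, "luxury_purchases": 0})
quote_b = quote_premium({**risk_factors, "luxury_purchases": 8})

if quote_a != quote_b:
    print(f"Flag: quotes differ ({quote_a:.2f} vs {quote_b:.2f}) "
          "despite identical risk factors")
```

The difficulty, of course, is that a regulator can only run such a test against the inputs it knows about; signals hidden inside the model remain invisible, which is the point made above.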

Also, many insurance companies today use chatbots when interacting with customers. Given that the Consumer Duty requires insurance companies to provide adequate support to consumers, it is likely that an insurer might fall short of this duty by employing a chatbot that cannot deal with unexpected situations or non-standard issues. Checking whether a chatbot is fit for purpose should be easier than trying to understand what factors an algorithm has utilised in making an insurance decision. I suppose the new Consumer Duty will mean that insurers must invest in more advanced chatbots, or should put in place alternative support mechanisms for those customers who do not get adequate or satisfactory answers from chatbots.
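A minimal sketch of such an alternative support mechanism might look like the following: if the chatbot cannot answer with sufficient confidence, the query is escalated to a human agent rather than left unresolved. The confidence scoring, keywords and routing shown here are assumptions for illustration only.

```python
# A minimal sketch of an escalation mechanism: if the chatbot cannot answer
# with sufficient confidence, the query is routed to a human agent rather
# than left unresolved. Confidence scoring and routing are invented here.

def chatbot_answer(query: str) -> tuple[str, float]:
    """Stand-in chatbot: returns (answer, confidence between 0 and 1)."""
    canned = {
        "renewal": ("Your policy renews on the date shown in your schedule.", 0.9),
        "excess": ("Your excess is listed on page 1 of your policy schedule.", 0.9),
    }
    for keyword, (answer, confidence) in canned.items():
        if keyword in query.lower():
            return answer, confidence
    return "I'm not sure I understood that.", 0.2

def escalate_to_human(query: str) -> str:
    # In a real system this would open a ticket or a live-chat session.
    return f"Connecting you to an agent about: {query!r}"

def handle_query(query: str, threshold: float = 0.6) -> str:
    answer, confidence = chatbot_answer(query)
    return answer if confidence >= threshold else escalate_to_human(query)

print(handle_query("When is my renewal date?"))
print(handle_query("My claim was declined after a flood - what now?"))
```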

There is no doubt that the objective of the new Consumer Duty is to create a culture change and to encourage firms, including insurers, to monitor their products and make changes to ensure that their practices and products are “appropriate” and deliver good outcomes for customers. This will also be the motivating factor when insurers utilise AI and algorithms for product development, underwriting and customer support. However, it is also evident that the technical expertise and knowledge within the insurance sector are at an elementary level, and it will probably take some time until insurers and regulators have the knowledge and expertise to assess and adapt AI and algorithms in line with consumers’ needs.

One more move: Decentralised Autonomous Organisations (DAOs)

Over the last couple of years, the Law Commission for England and Wales has launched several law reform projects related to digital assets, smart contracts and electronic trade documents. In line with the UK’s ambition of becoming a tech leader, on 16 November 2022 the Commission published a public call for evidence, inviting stakeholders to comment on the characterisation and legal regulation of decentralised autonomous organisations (DAOs), an emerging form of organisational entity.

A DAO, a concept first developed in 2016, is an organisational structure without any central governing body that enables members with a common goal to manage the entity on the basis of blockchain technology, smart contracts, software systems and, typically, the Ethereum network. As automated and decentralised bodies functioning without human intervention, DAOs promote transparency and efficiency. Indeed, the use of DAOs is expanding dramatically in today’s financial markets, banking and corporate governance. Given this increasing significance and progressive application, amid inevitable practical uncertainties and ambiguities, it is a more crucial task than ever to consider seriously how DAOs operate under existing legal systems. That is precisely why the Government has asked the Commission to investigate what exactly constitutes a DAO and what their structures encompass.
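For readers less familiar with the mechanics, the toy model below sketches the kind of token-weighted, rule-based governance that a DAO’s smart contracts automate on-chain (in practice this logic lives in contract code on networks such as Ethereum, not in Python). Every rule, name and number here is a simplified assumption, not any actual DAO’s constitution.

```python
# A toy model of the token-weighted, rule-based governance that a DAO's
# smart contracts automate on-chain. Every rule and number here is a
# simplified assumption, not any actual DAO's constitution.

class ToyDAO:
    def __init__(self, token_balances: dict, quorum: float = 0.5):
        self.balances = token_balances    # member -> voting tokens held
        self.quorum = quorum              # fraction of all tokens that must vote
        self.votes = {}                   # member -> True (for) / False (against)

    def vote(self, member: str, in_favour: bool) -> None:
        if member not in self.balances:
            raise ValueError(f"{member} holds no tokens")
        self.votes[member] = in_favour    # the coded rules, not a manager, decide

    def tally(self) -> str:
        total = sum(self.balances.values())
        cast = sum(self.balances[m] for m in self.votes)
        if cast < self.quorum * total:
            return "failed: quorum not reached"
        in_favour = sum(self.balances[m] for m, v in self.votes.items() if v)
        return "passed" if in_favour * 2 > cast else "rejected"

dao = ToyDAO({"alice": 60, "bob": 30, "carol": 10})
dao.vote("alice", True)
dao.vote("bob", False)
print(dao.tally())  # alice's 60 of the 90 cast tokens carry the proposal
```

Even this toy version surfaces the legal questions the Commission is asking: who are “alice” and “bob” in law, and who is liable when the coded rules produce a harmful outcome?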

The Commission has drafted the following questions and encourages anyone with relevant expertise to respond, though not necessarily to all of them:

  • “When would a DAO choose to include an incorporated entity into its structure?
  • What is the status of a DAO’s investors / token-holders?
  • What kind of liability do or should developers of open-source code have (if any)?
  • How does / should the distinction between an incorporated company (or other legal form or incorporated entity) involved in software development and an open-source smart contract-based software protocol operate as a matter of law?
  • How do DAOs structure their governance and decision-making processes?
  • How do money laundering, corporate reporting and other regulatory concepts apply to DAOs, and who is liable for taxes if the DAO makes a profit?
  • Which jurisdictions are currently attractive for DAOs and why?”

The Law Commission aims to produce a final report that will define the relevant issues under the existing law of England and Wales and determine the potential for further law reform.

The call for evidence will run for 10 weeks from 16 November 2022, with a closing date of 25 January 2023. Further details of the project are available at Decentralised Autonomous Organisations (DAOs) | Law Commission.