ChatGPT: A New Threat for Cyber Risk Insurers?

ChatGPT (Generative Pre-trained Transformer), an OpenAI platform, was released in November 2022 and has been an instant success. In simple terms, ChatGPT is an artificial-intelligence model trained to generate text that imitates human language in response to a prompt or question. OpenAI is expected to release a new version of the model soon.

ChatGPT has been identified as a problem in the education sector, as it could enable some students to engage in unfair practices that undermine the integrity of assessment procedures. We have also read in the press about lawyers in different jurisdictions making submissions to courts using texts evidently prepared by ChatGPT. In the insurance sector, cyber risk insurers in particular are concerned about the potentially disruptive impact of this OpenAI platform on their business models.

If prompted, ChatGPT will refuse to write ransomware or malicious code, and when denying such requests it will explain that ransomware is both “illegal” and “unethical”. However, there is no guarantee that a person will not find a way to create malicious code by utilising ChatGPT. As long as the right questions are posed, the current version of the model could give anyone step-by-step guidance on how to create malicious code. This is a genuine concern for cyber risk insurers, as it potentially makes it easier (even for amateurs) to produce such code and then target a business. Small and medium-sized enterprises (SMEs), which often do not have appropriate cyber security measures in place, are particularly vulnerable to ransomware attacks.

It is also possible that, if prompted, ChatGPT can write convincing phishing emails for use in social engineering campaigns by threat actors. This increases the likelihood that an employee of a company or business could engage with such a convincing phishing email, potentially compromising the cyber security of the organisation in question.

In recent months, cyber risk insurers have reported that ChatGPT has been utilised by criminals in ransomware negotiations, potentially tilting the balance in favour of such criminal elements. One underwriter who discussed the matter with the author believes that ransomware negotiations are getting more difficult by the day thanks to ChatGPT, and that the sums paid by insurers are increasing as a result.

The main problem stems from the fact that OpenAI's platform remains largely unregulated, and realistically this will not change anytime soon. While there is an expectation that the creators of ChatGPT will ensure their tool cannot easily be manipulated by threat actors, there is no denying that ChatGPT has broadened the potential attack surface for businesses, and this is a particular concern for cyber risk insurers. If the new version of ChatGPT is not designed to better detect such threats (and block such requests), we should expect an increase in successful ransomware attacks on businesses, which would potentially lead to a further increase in cyber risk insurance premiums. We cannot stop innovation, but we have every right to expect producers to put in place mechanisms to prevent the harmful use of their products. Cyber risk insurers are hoping that the new version of this OpenAI tool will be equipped to deal with those who are planning to use it for criminal purposes. This would be a good illustration of how tech can perform the function of regulation as well as innovation.

New “Consumer Duty”: Would It Affect Insurers Utilising AI and Algorithms?

By 31 July 2023, all regulated firms in the UK must comply with a new “Consumer Duty” when selling new and existing products and services to retail customers (the date of implementation is 31 July 2024 for firms offering closed products and services). This Duty has been introduced by the Financial Conduct Authority (FCA) through an amendment to the existing Principles for Businesses (PRIN) and is intended to impose a higher standard of behaviour on firms interacting directly or indirectly with retail customers. The scope of the Duty extends to the regulated activities and ancillary activities of all firms authorised under the Financial Services and Markets Act 2000 (FSMA), the Payment Services Regulations 2017 (PSRs) and the Electronic Money Regulations 2011 (EMRs), and on that basis it applies not only to insurers but also to insurance intermediaries (e.g., insurance brokers).

What Does the New “Consumer Duty” Entail?

In a nutshell, the new “Consumer Duty” requires firms to take a more proactive approach and put their customers’ needs first. It should, however, be noted that the Duty is neither a “duty of care” nor a “fiduciary” one. It also does not require firms to provide advice to customers. Although the Duty does not give customers a private right of action, it enables the FCA to investigate any allegation of breach, and the FCA could accordingly issue fines against firms and secure redress for customers who have suffered harm through a firm’s breach of the Duty.

More specifically, the Duty introduces:

  1. An overarching consumer principle that firms must act to deliver good outcomes for retail customers.
  2. This overarching principle requires firms: i) to act in good faith; ii) to avoid causing foreseeable harm; and iii) to enable and support customers to pursue their financial objectives. No firm definition of the term “good faith” in this context has been provided, but the FCA has put forward some examples of conduct that would not amount to acting in good faith. Accordingly, an insurance firm will not be acting in good faith if it sells insurance to a customer by taking advantage of his or her vulnerability. Similarly, an insurance company will not be acting in good faith if it exploits its customers’ behavioural biases, e.g. the tendency to renew a policy automatically without reviewing the details of any revised terms or endorsements, or any changes to the excess or premium introduced at renewal.
  3. The Duty focuses on four outcomes (products and services, price and value, consumer understanding and consumer support) and requires firms to ensure that consumers receive communications they can understand, are offered products and services that meet their needs and offer fair value, and receive the support they need.

The Duty will therefore require insurers to reflect on how they assemble, sell and market insurance products to their customers, and on what kind of support they provide to customers who make enquiries. Insurers are now under a regulatory duty to act in good faith, avoid causing foreseeable harm and support their customers in the process of delivering these outcomes.

Specific Implications for Insurance Companies: Especially Those Using AI and Algorithms

Insurers are already reflecting on how they present their policies and the various terms within them. They will be expected to inform customers fully of the limits of cover (especially policy excesses). Similarly, any proposed changes to cover at the renewal stage should be made clear to customers so that they are aware of the changes to their policy and the scope of cover. Many insurers would say that these are good practices they have been implementing for some time anyway.

One area to which insurers need to pay careful attention is the standard questions they expect potential customers to answer where they utilise automated underwriting systems, through which applications for insurance are evaluated and processed without the need for individual underwriter involvement. In some recent cases, the vagueness of such questions has raised legal issues (see, for example, Ristorante Ltd T/A Bar Massimo v. Zurich Insurance Plc [2021] EWHC 2538 (Ch)). For example, if a consumer had received a “declined to quote” decision from a previous insurer, how would s/he be expected to respond to a standard question on such an automated system asking whether s/he has been refused insurance previously? Would a typical customer be expected to appreciate that “declined to quote” might not necessarily mean a refusal of insurance? Insurers need to think about how they phrase such questions, and in the light of the new Duty it would be advisable to provide additional explanation alongside such a question on an automated underwriting platform.

However, more interesting questions might arise in cases where insurance companies utilise AI and algorithms for pricing, risk assessment and consumer support purposes.

Naturally, there is an expectation that any insurance firm utilising AI in the risk assessment process will ensure that the system in use does not inadvertently lead to discriminatory outcomes, and the new Consumer Duty amplifies this. That is easy to say but difficult to achieve in practice. It is well known that when algorithms are used for risk assessment purposes, it is rather difficult, if not impossible, to know what data the algorithm has relied on and what difference any particular factor made to the assessment (commonly known as the “black-box problem”). Insurers rely on programmers, designers and tech experts when they employ AI for risk assessment purposes, and much as they expect such experts to assist them in fulfilling their “Consumer Duty”, it is ultimately something over which they have very little control. More significantly, it is rather doubtful that the FCA will have the degree of expertise and technical knowledge needed to assess whether an algorithm in use could deliver good outcomes for customers. To put it differently, it is not clear at this stage whether the new Consumer Duty will in practice enhance the position of consumers when underwriting decisions are taken by AI and algorithms.

Another advantage that algorithms could provide to insurers is the ability to differentiate on price based not simply on risk-related factors but on other factors (such as an individual’s willingness to pay more for the same product). If left unchecked, an algorithm, by taking such factors into account (e.g. the number of luxury items an individual orders online), might quote a higher premium to one individual than it would to another with a similar risk profile. We face a similar problem here: could the algorithm be trained not to do this, and, more significantly, how could a regulator check whether this is complied with?
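The mechanism described above can be illustrated with a deliberately simplified sketch. Everything in it is invented for illustration: the factor names, weights and premiums do not reflect any real insurer’s pricing model. It shows a pricing function that adds a behavioural, non-risk loading to a risk-based premium, and the kind of naive comparison a regulator or auditor could in principle run: quoting two applicants with identical risk profiles and checking whether their premiums match.

```python
# Hypothetical illustration only: all factors, weights and figures are
# invented and do not represent any actual insurer's pricing algorithm.

def quote_premium(risk_score: float, luxury_orders: int) -> float:
    """Quote an annual premium from a risk score (0-1) plus a
    behavioural signal unrelated to the insured risk."""
    base = 500.0
    risk_loading = 1000.0 * risk_score          # legitimate, risk-based pricing
    behavioural_loading = 15.0 * luxury_orders  # non-risk price differentiation
    return base + risk_loading + behavioural_loading

def same_risk_same_price(risk_score: float, orders_a: int, orders_b: int) -> bool:
    """A naive audit: two applicants with an identical risk profile
    should receive the same quote if only risk factors are used."""
    return quote_premium(risk_score, orders_a) == quote_premium(risk_score, orders_b)

# Two applicants with the same risk profile but different shopping habits:
quote_a = quote_premium(0.3, luxury_orders=0)    # 800.0
quote_b = quote_premium(0.3, luxury_orders=20)   # 1100.0
audit_passes = same_risk_same_price(0.3, 0, 20)  # False: a non-risk factor drives the price
```

In this toy example the behavioural loading is written out explicitly, so the audit is trivial; the real difficulty discussed above is precisely that in a production black-box model no such term is visible to the insurer or the regulator, which is why compliance is so hard to verify in practice.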

Also, many insurance companies today use chatbots when interacting with customers. Given that the Consumer Duty requires insurance companies to provide adequate support to consumers, an insurer might well fall short of this duty by employing a chatbot that cannot deal with unexpected situations or non-standard issues. Checking whether a chatbot is fit for purpose should be easier than trying to understand what factors an algorithm has utilised in making an insurance decision. The new Consumer Duty presumably means that insurers must invest in more advanced chatbots, or put in place alternative support mechanisms for those customers who do not get adequate or satisfactory answers from chatbots.

There is no doubt that the objective of the new Consumer Duty is to create a culture change and encourage firms, including insurers, to monitor their products and make changes to ensure that their practices and products are “appropriate” and deliver good outcomes for customers. This will also be the motivating factor when insurers utilise AI and algorithms for product development, underwriting and customer support. However, it is also evident that the technical expertise and knowledge within the insurance sector is at an elementary level, and it will probably take some time before insurers and regulators have the knowledge and expertise to assess and adapt AI and algorithms in line with consumers’ needs.