What’s coming in 2023?

Nearly two weeks into the New Year, the IISTL’s version of ‘Old Moore’s Almanack’ looks ahead to what 2023 has in store for us.

Brexit. The Retained EU Law (Revocation and Reform) Bill will kick in at the end of the year. It will be a major surprise if the two conflicts regulations, Rome I and Rome II, are not retained; the Port Services Regulation, by contrast, probably will not be.

Ebury Partners Belgium SA/NV v Technical Touch BV, Jan Berthels [2022] EWHC 2927 (Comm) is another recent decision in which an anti-suit injunction (ASI) has been granted to restrain proceedings in an EU Member State (Belgium) in respect of a contract subject to English jurisdiction.

Electronic bills of lading. The Electronic Trade Documents Bill is likely to become law in 2023 and to come into effect two months after Royal Assent. The Law Commission will publish a consultation paper, “Digital assets: which law, which court?”, dealing with conflict of laws issues, in the second half of 2023.

Autonomous vessels. The Department for Transport consultation on MASS and possible amendments to the Merchant Shipping Act 1995 closed in November 2021. Perhaps we will see some results in 2023?

Supreme Court cases

Okpabi v Royal Dutch Shell. The case may well go to trial in 2023, although in May 2022 the High Court ([2022] EWHC 989 (TCC)) held it was premature to grant a Group Litigation Order and directed that each individual claimant should specify additional details to formulate a proper cause of action for the defendants to respond to.

In similar proceedings in the Netherlands, in which the Court of Appeal in The Hague gave judgment in January 2021 relating to multiple oil pipeline leaks in the Niger Delta, it was announced just before Christmas 2022 that Shell will pay 15 million euros ($15.9 million) to the affected communities in Nigeria in full and final settlement, on the basis of no admission of liability.

The Eternal Bliss appeal to the Supreme Court is likely to be heard in 2023, with the possibility of judgment before the year is out.

But there must be a question mark over London Steam-ship Owners’ Mutual Insurance Association Ltd (Respondent) v Kingdom of Spain (Appellant), Case ID 2022/0062, where it is stated “This appeal has been adjourned by request of the parties.”

Climate Change

IMO. Two measures aimed at reducing shipping’s contribution to GHG emissions, EEXI and CII, came into force on 1 January 2023 and will be at the forefront of the minds of those negotiating new time charters.

EU. Shipping is likely to come within the EU Emissions Trading System (ETS) under amendments to the 2003 ETS Directive, with phasing in from 1 January 2024.

BIMCO has produced time charter clauses to deal with all three of these measures.

Ewan McGaughey et al v Universities Superannuation Scheme Limited concerns whether the investments in fossil fuels by a large UK pension fund breach the directors’ fiduciary duties and their duties towards contributors to the pension fund. On 24 May 2022, the High Court refused permission to bring a derivative action against USSL, but the Court of Appeal gave permission to appeal in October 2022, so a hearing in 2023 is on the cards.

European Union

The EU Taxonomy Complementary Climate Delegated Act covering certain nuclear and gas activities, adopted on 15 July 2022, came into force on 4 August 2022 and has applied from 1 January 2023. A legal challenge against the Commission before the CJEU by various NGOs and two Member States, Austria and Luxembourg, has been threatened in connection with the inclusion of nuclear energy and natural gas in the Delegated Act. Climate mitigation and adaptation criteria for maritime shipping were included in the EU Taxonomy Climate Delegated Act adopted in April 2021.

Previous requests from other NGOs asking the Commission to carry out an internal review of the inclusion of certain forestry and bioenergy activities in the EU green taxonomy had already been rejected by the Commission in 2022.

The Corporate Sustainability Reporting Directive (CSRD) came into effect on 16 December 2022.

For EU companies already required to prepare a non-financial information statement, the CSRD is effective for periods commencing on or after 1 January 2024. Large UK and other non-EU companies listed on an EU regulated market (i.e. those meeting two of the three following criteria: more than €20 million total assets, more than €40 million net turnover and more than 250 employees) will be subject to the CSRD requirements for periods commencing on or after 1 January 2025. 

UK and other non-EU companies that are not listed in the EU but which have substantial activity in the EU will be subject to the CSRD for periods commencing on or after 1 January 2028.
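The “two of the three criteria” size test described above can be sketched as a simple predicate. This is an illustration only: the parameter names are mine, and the thresholds simply mirror the figures quoted above rather than any official implementation of the Directive.

```python
def meets_csrd_size_test(total_assets_eur: float,
                         net_turnover_eur: float,
                         employees: int) -> bool:
    """Illustrative sketch: True if at least two of the three CSRD
    size criteria quoted above are met."""
    criteria = [
        total_assets_eur > 20_000_000,   # more than EUR 20m total assets
        net_turnover_eur > 40_000_000,   # more than EUR 40m net turnover
        employees > 250,                 # more than 250 employees
    ]
    return sum(criteria) >= 2
```

So a company with, say, EUR 25m in assets and 300 employees would meet the test even with a turnover below EUR 40m.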

Finally, a very happy 2023 to all our readers.

EU Parliament suggests civil liability regulation for artificial intelligence. Possible collisions ahead with IMO civil liability conventions?

On 22 October the European Parliament sent a draft regulation to the Commission for a new strict liability regime for operators of AI systems. Its salient features are:

Scope

Art 2

This Regulation applies on the territory of the Union where a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to the life, health, physical integrity of a natural person, to the property of a natural or legal person or has caused significant immaterial harm resulting in a verifiable economic loss.

Definitions

Art 3

(c)  ‘high risk’ means a significant potential in an autonomously operating AI-system to cause harm or damage to one or more persons in a manner that is random and goes beyond what can reasonably be expected; the significance of the potential depends on the interplay between the severity of possible harm or damage, the degree of autonomy of decision-making, the likelihood that the risk materializes and the manner and the context in which the AI-system is being used;

(d)  ‘operator’ means both the frontend and the backend operator as long as the latter’s liability is not already covered by Directive 85/374/EEC;

(e)  ‘frontend operator’ means any natural or legal person who exercises a degree of control over a risk connected with the operation and functioning of the AI-system and benefits from its operation;

(f)  ‘backend operator’ means any natural or legal person who, on a continuous basis, defines the features of the technology and provides data and an essential backend support service and therefore also exercises a degree of control over the risk connected with the operation and functioning of the AI-system;

(g)  ‘control’ means any action of an operator that influences the operation of an AI-system and thus the extent to which the operator exposes third parties to the potential risks associated with the operation and functioning of the AI-system; such actions can impact the operation at any stage by determining the input, output or results, or can change specific functions or processes within the AI-system; the degree to which those aspects of the operation of the AI-system are determined by the action depends on the level of influence the operator has over the risk connected with the operation and functioning of the AI-system;

(h)  ‘affected person’ means any person who suffers harm or damage caused by a physical or virtual activity, device or process driven by an AI-system, and who is not its operator;

(i)  ‘harm or damage’ means an adverse impact affecting the life, health, physical integrity of a natural person, the property of a natural or legal person or causing significant immaterial harm that results in a verifiable economic loss;

(j)  ‘producer’ means the producer as defined in Article 3 of Directive 85/374/EEC (the Product Liability Directive).

Strict liability for high-risk AI-systems

Article 4

1.  The operator of a high-risk AI-system shall be strictly liable for any harm or damage that was caused by a physical or virtual activity, device or process driven by that AI-system.

2.  All high-risk AI-systems and all critical sectors where they are used shall be listed in the Annex to this Regulation…

3.  Operators of high-risk AI-systems shall not be able to exonerate themselves from liability by arguing that they acted with due diligence or that the harm or damage was caused by an autonomous activity, device or process driven by their AI-system. Operators shall not be held liable if the harm or damage was caused by force majeure.

4.  The frontend operator of a high-risk AI-system shall ensure that operations of that AI-system are covered by liability insurance that is adequate in relation to the amounts and extent of compensation provided for in Articles 5 and 6 of this Regulation. The backend operator shall ensure that its services are covered by business liability or product liability insurance that is adequate in relation to the amounts and extent of compensation provided for in Article 5 and 6 of this Regulation. If compulsory insurance regimes of the frontend or backend operator already in force pursuant to other Union or national law or existing voluntary corporate insurance funds are considered to cover the operation of the AI-system or the provided service, the obligation to take out insurance for the AI-system or the provided service pursuant to this Regulation shall be deemed fulfilled, as long as the relevant existing compulsory insurance or the voluntary corporate insurance funds cover the amounts and the extent of compensation provided for in Articles 5 and 6 of this Regulation.

5.  This Regulation shall prevail over national liability regimes in the event of conflicting strict liability classification of AI-systems.

Amount of compensation

Article 5

1.   An operator of a high-risk AI-system that has been held liable for harm or damage under this Regulation shall compensate:

(a)  up to a maximum amount of EUR two million in the event of the death of, or in the event of harm caused to the health or physical integrity of, an affected person, resulting from an operation of a high-risk AI-system;

(b)  up to a maximum amount of EUR one million in the event of significant immaterial harm that results in a verifiable economic loss or of damage caused to property, including when several items of property of an affected person were damaged as a result of a single operation of a single high-risk AI-system; where the affected person also holds a contractual liability claim against the operator, no compensation shall be paid under this Regulation, if the total amount of the damage to property or the significant immaterial harm is of a value that falls below EUR 500.
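The caps in Article 5, together with the EUR 500 de minimis for concurrent contractual claims, can be sketched as follows. This is a minimal illustration of the draft text as quoted above, not a statement of how the Regulation would actually be applied; the function name and parameters are my own.

```python
def capped_compensation(claim_eur: float, kind: str,
                        has_contract_claim: bool = False) -> float:
    """Illustrative sketch of the Article 5 compensation caps.

    kind: 'personal' for death or harm to health/physical integrity
          (capped at EUR 2 million);
          'property' for property damage or significant immaterial harm
          (capped at EUR 1 million).
    """
    if kind == "personal":
        return min(claim_eur, 2_000_000)
    if kind == "property":
        # Per the draft text, where the affected person also holds a
        # contractual claim against the operator, nothing is payable
        # under the Regulation if the total falls below EUR 500.
        if has_contract_claim and claim_eur < 500:
            return 0.0
        return min(claim_eur, 1_000_000)
    raise ValueError(f"unknown kind: {kind!r}")
```

On this sketch, a EUR 3 million personal injury claim would be capped at EUR 2 million, while a EUR 400 property claim accompanied by a contractual claim would recover nothing under the Regulation.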

Limitation period

Article 7

1.  Civil liability claims, brought in accordance with Article 4(1), concerning harm to life, health or physical integrity, shall be subject to a special limitation period of 30 years from the date on which the harm occurred.

2.  Civil liability claims, brought in accordance with Article 4(1), concerning damage to property or significant immaterial harm that results in a verifiable economic loss shall be subject to a special limitation period of:

(a)  10 years from the date when the property damage occurred or the verifiable economic loss resulting from the significant immaterial harm, respectively, occurred, or

(b)  30 years from the date on which the operation of the high-risk AI-system that subsequently caused the property damage or the immaterial harm took place.

Of the periods referred to in the first subparagraph, the period that ends first shall be applicable.
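The “period that ends first” rule in Article 7(2) can be expressed as a short computation. This is a minimal sketch assuming plain calendar-year arithmetic; how limitation is actually computed would be a matter for the applicable procedural law, and the function name is my own.

```python
from datetime import date

def add_years(d: date, years: int) -> date:
    """Add whole calendar years, rolling 29 February back to 28 February
    when the target year is not a leap year."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, month=2, day=28)

def property_claim_limitation_end(damage_date: date,
                                  operation_date: date) -> date:
    """Article 7(2) sketch: 10 years from the date the damage (or the
    verifiable economic loss) occurred, or 30 years from the operation
    of the AI-system that caused it -- whichever ends first applies."""
    return min(add_years(damage_date, 10), add_years(operation_date, 30))
```

For example, where the operation took place in 2025 but the damage only manifested in 2030, the 10-year period from the damage (ending 2040) expires before the 30-year period from the operation (ending 2055), so the earlier date governs.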

Fault-based liability for other AI-systems

Article 8

1.  The operator of an AI-system that does not constitute a high-risk AI-system as laid down in Articles 3(c) and 4(2) and, as a result, is not listed in the Annex to this Regulation, shall be subject to fault-based liability for any harm or damage that was caused by a physical or virtual activity, device or process driven by the AI-system.

2.  The operator shall not be liable if he or she can prove that the harm or damage was caused without his or her fault, relying on either of the following grounds:

(a)  the AI-system was activated without his or her knowledge while all reasonable and necessary measures to avoid such activation outside of the operator’s control were taken, or

(b)  due diligence was observed by performing all the following actions: selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities and maintaining the operational reliability by regularly installing all available updates.

The operator shall not be able to escape liability by arguing that the harm or damage was caused by an autonomous activity, device or process driven by his or her AI-system. The operator shall not be liable if the harm or damage was caused by force majeure.

3.  Where the harm or damage was caused by a third party that interfered with the AI-system by modifying its functioning or its effects, the operator shall nonetheless be liable for the payment of compensation if such third party is untraceable or impecunious.

4.  At the request of the operator or the affected person, the producer of an AI-system shall have the duty of cooperating with, and providing information to, them to the extent warranted by the significance of the claim, in order to allow for the identification of the liabilities.

National provisions on compensation and limitation period

Article 9

Civil liability claims brought in accordance with Article 8(1) shall be subject, in relation to limitation periods as well as the amounts and the extent of compensation, to the laws of the Member State in which the harm or damage occurred.

There are also provisions regarding contributory negligence (Article 10), joint and several liability (Article 11), and recourse for compensation (Article 12).

There are no carve-outs for existing civil liability conventions for maritime claims, such as the CLC, the Bunker Oil Pollution Convention and the HNS Convention, which are based on strict liability, or the 1910 Brussels Collision Convention, which is fault-based. This is in contrast to the exclusions contained in the 2004 Environmental Liability Directive, whose territorial scope was widened by the 2013 Offshore Safety Directive. The current proposal applies to the “territory of the Union”, which would encompass the territorial sea of Member States; in the light of the ECJ’s decision in Commune de Mesquer, loss or damage manifesting on land from an oil spill in the exclusive economic zone of a Member State would also come within the scope of the Regulation. As such, it has the capacity to conflict with existing strict liability conventions as enacted in national laws (see the priority of the Regulation stipulated in Article 4(5)) once autonomous vessels come into operation at MASS Levels 3 and 4, if these are regarded as ‘high risk’. If that were the case, collision liabilities involving such vessels would be dealt with by a strict liability regime, as opposed to the current fault-based regime under the Brussels Collision Convention 1910.

First Intergovernmental Standard on AI & Cyber Risk Management

In giving evidence to the Public Accounts Committee (PAC) on Cybersecurity in the UK Sir Mark Sedwill (Cabinet Secretary, Head of the UK Civil Service and UK National Security Advisor) asserted, “the law of the sea 200 years ago is not a bad parallel” for the “big international question” of cyberspace governance today (see Public Accounts Committee Oral evidence: Cyber Security in the UK, HC 1745 [1st April 2019] Q93).

In making this assertion Sir Mark may have had in mind articles such as Dr. Florian Egloff’s Cybersecurity and the Age of Privateering: A Historical Analogy in which the author asserted: 1. “Cyber actors are comparable to the actors of maritime warfare in the sixteenth and seventeenth centuries. 2. The militarisation of cyberspace resembles the situation in the sixteenth century, when states transitioned from a reliance on privateers to dependence on professional navies. 3. As with privateering, the use of non-state actors by states in cyberspace has produced unintended harmful consequences; the emergence of a regime against privateering provides potentially fruitful lessons for international cooperation and the management of these consequences.”

In our IP Wales Guide on Cyber Defence we note: “Since 2004, a UN Group of Governmental Experts (UN GGE) has sought to expedite international norms and regulations to create confidence and security-building measures between member states in cyberspace. In a first major breakthrough, the GGE in 2013 agreed that international law and the UN Charter are applicable to state activity in cyberspace. Two years later, a consensus report outlined four voluntary peacetime norms for state conduct in cyberspace: states should not interfere with each other’s critical infrastructure, should not target each other’s emergency services, should assist other states in the forensics of cyberattacks, and states are responsible for operations originating from within their territory.

The latest 2016–17 round of deliberations ended in the stalling of the UN GGE process as its members could not agree on draft paragraph 34, which details how exactly certain international law applies to a state’s use of information and communications technology. While the U.S.A. pushed for detailing international humanitarian law, the right of self-defence, and the law of state responsibility (including the countermeasures applying to cyber operations), other participants, like China and Russia, contended it was premature.”

Indeed, China has gone further and condemned the U.S.A. for trying to apply double standards to the issue, in light of public disclosures of spying by its own National Security Agency (NSA).

Sir Mark went on to reveal that because cyberspace governance is being only partly addressed through the UN, “we are looking at coalitions of the willing, such as the OECD and some other countries that have similar systems to ours, to try to approach this.”

Evidence of this strategy in operation can be seen at the Ministerial Council Meeting of the Organisation for Economic Co-operation and Development (OECD) on 22 May 2019, when 42 countries adopted five value-based principles on artificial intelligence (AI), including that AI systems “must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.”

The recently created UK National Cyber Security Centre (NCSC) has sought to give substance to this principle through offering new guidance on cybersecurity design principles. These principles are divided into five categories, loosely aligned with the stages at which a cyberattack can be mitigated: 1. “Establishing the context. All the elements that compose a system should be determined, so the defensive measures will have no blind spots. 2. Making compromise difficult. An attacker can target only the parts of a system they can reach. Therefore, the system should be made as difficult to penetrate as possible. 3. Making disruption difficult. The system should be designed so that it is resilient to denial of service attacks and usage spikes. 4. Making compromise detection easier. The system should be designed so suspicious activity can be spotted as it happens and the necessary action taken. 5. Reducing the impact of compromise. If an attacker succeeds in gaining a foothold, they will then move to exploit the system. This should be made as difficult as possible.”

Alec Ross (Senior Advisor for Innovation to Hillary Clinton as U.S. Secretary of State) warns that “small businesses cannot pay for the type of expensive cybersecurity protection that governments and major corporations can [afford]” (A Ross, Industries of the Future (2016)). It remains to be seen to what extent cybersecurity design principles will become a financial impediment to small businesses engaging with AI developments in the near future.

14th IISTL Colloquium on New Technologies and Shipping/Trade Law

The Institute’s 14th Annual Colloquium will be held on 10-11 September 2018. The subject of this year’s event is new technologies and their present and future effect on shipping and trade law.

To register for this event, please visit our Eventbrite page.
