Artificial Intelligence, Inventions and Patents: New Enhanced Guidance by the UK IPO

On 22 September 2022, the UK Intellectual Property Office (IPO) published the Guidance on Examining Patent Applications Relating to Artificial Intelligence Inventions. The Guidance consists of two parts:

  • The Guidelines on the practice of examining patent applications for inventions relating to artificial intelligence (AI), and
  • The Scenarios, which illustrate the IPO’s non-binding assessment of how the Guidelines would apply to the patentability of particular AI inventions.

Following the UK Government’s response to the Call for Views on Artificial Intelligence and Intellectual Property, which ran from 7 September 2020 to 30 November 2020, the IPO committed to publishing this Guidance. The project was benchmarked against the refreshed Industrial Strategy (“Strategy for Growth”) and the government’s wider ambition for the United Kingdom to be at the forefront of the technological revolution and a leader in AI technology.

In its response to the Call for Views, the government defined AI as “technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”. Accordingly, the Guidelines proceed on the basis that, in the UK, patents are available for AI inventions in all fields of technology, provided the conditions for the grant of a valid patent are met. Under Section 1(1) of the Patents Act 1977, a patent may be granted only if: (a) the invention is new, (b) it involves an inventive step, (c) it is capable of industrial application, and (d) the grant of a patent for it is not excluded by the relevant provisions. These four conditions apply to all inventions in all fields of technology, so a patent may be granted for an AI invention that satisfies them and is not otherwise excluded from patent protection.

AI inventions are typically computer-implemented and rely on mathematical methods or computer programs in some way. The Guidelines apply whether the invention is categorised as “applied AI” or “core AI”, or whether it relates to training an AI system. The IPO’s practice is to examine whether such an invention makes a contribution that is technical in nature by considering what task or process it performs when run on a computer. This reflects the statutory exclusion of inventions relating solely to a mathematical method “as such” and/or a program for a computer “as such”: an AI invention is excluded from patent protection if it does not reveal a technical contribution.

The Guidelines also touch briefly on the requirement for sufficiency of disclosure concerning AI inventions.

It is worth noting that the recent guidelines have no mandatory effect and are not a source of law. The current legal framework in the field comprises the Patents Act 1977, as amended by subsequent legislation, and the Patents Rules 2007. When deciding the relevant issues, the case law and the UK courts’ interpretation of the legislation should be considered. Furthermore, judicial notice must be taken of international conventions (such as the European Patent Convention) and of decisions and opinions made under them.

The opinions on patentability and the practical illustrations of possible scenarios drafted by the IPO are not binding for any purpose under the Patents Act 1977. Despite its advisory character, the Guidance is helpful and supplements the comprehensive account of IPO patent practice set out in the Manual of Patent Practice. This is particularly evident in the explanations referring to the fundamental case law and judicial interpretations where relevant. More details of the Guidance and the full documents are available at Examining patent applications relating to artificial intelligence (AI) inventions – GOV.UK (www.gov.uk).

First Intergovernmental Standard on AI & Cyber Risk Management

In giving evidence to the Public Accounts Committee (PAC) on Cyber Security in the UK, Sir Mark Sedwill (Cabinet Secretary, Head of the UK Civil Service and UK National Security Advisor) asserted that “the law of the sea 200 years ago is not a bad parallel” for the “big international question” of cyberspace governance today (see Public Accounts Committee Oral evidence: Cyber Security in the UK, HC 1745 [1st April 2019] Q93).

In making this assertion Sir Mark may have had in mind articles such as Dr. Florian Egloff’s Cybersecurity and the Age of Privateering: A Historical Analogy, in which the author asserted:

  1. “Cyber actors are comparable to the actors of maritime warfare in the sixteenth and seventeenth centuries.
  2. The militarisation of cyberspace resembles the situation in the sixteenth century, when states transitioned from a reliance on privateers to dependence on professional navies.
  3. As with privateering, the use of non-state actors by states in cyberspace has produced unintended harmful consequences; the emergence of a regime against privateering provides potentially fruitful lessons for international cooperation and the management of these consequences.”

In our IP Wales Guide on Cyber Defence we note: “Since 2004, a UN Group of Governmental Experts (UN GGE) has sought to expedite international norms and regulations to create confidence and security-building measures between member states in cyberspace. In a first major breakthrough, the GGE in 2013 agreed that international law and the UN Charter are applicable to state activity in cyberspace. Two years later, a consensus report outlined four voluntary peacetime norms for state conduct in cyberspace: states should not interfere with each other’s critical infrastructure, should not target each other’s emergency services, should assist other states in the forensics of cyberattacks, and are responsible for operations originating from within their territory.

The latest 2016–17 round of deliberations ended with the stalling of the UN GGE process, as its members could not agree on draft paragraph 34, which details how exactly certain international law applies to a state’s use of information and communications technology. While the U.S.A. pushed for detailing international humanitarian law, the right of self-defence, and the law of state responsibility (including the countermeasures applying to cyber operations), other participants, such as China and Russia, contended this was premature.”

Indeed, China has gone further and condemned the U.S.A. for applying double standards to the issue, in light of public disclosures of spying by its own National Security Agency (NSA).

Sir Mark went on to reveal that because cyberspace governance is being only partly addressed through the UN, “we are looking at coalitions of the willing, such as the OECD and some other countries that have similar systems to ours, to try to approach this.”

Evidence of this strategy in operation can be seen at the Ministerial Council Meeting of the Organisation for Economic Co-operation and Development (OECD) on 22 May 2019, when 42 countries adopted five value-based principles on artificial intelligence (AI), including that AI systems “must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.”

The recently created UK National Cyber Security Centre (NCSC) has sought to give substance to this principle by offering new guidance on cybersecurity design principles. These principles are divided into five categories, loosely aligned with the stages at which a cyberattack can be mitigated:

  1. “Establishing the context. All the elements that compose a system should be determined, so the defensive measures will have no blind spots.
  2. Making compromise difficult. An attacker can target only the parts of a system they can reach. Therefore, the system should be made as difficult to penetrate as possible.
  3. Making disruption difficult. The system should be designed so that it is resilient to denial of service attacks and usage spikes.
  4. Making compromise detection easier. The system should be designed so suspicious activity can be spotted as it happens and the necessary action taken.
  5. Reducing the impact of compromise. If an attacker succeeds in gaining a foothold, they will then move to exploit the system. This should be made as difficult as possible.”
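To make the fourth of these principles concrete, the minimal Python sketch below shows one way “making compromise detection easier” might look in practice: every login attempt is written to a structured audit log, and a burst of failures within a short window raises an alert. This is an illustration only; the threshold, window and function names are our own assumptions and are not drawn from the NCSC guidance.

```python
import logging
from datetime import datetime, timedelta

# Illustrative sketch of NCSC principle 4 ("making compromise detection
# easier"): log every event so suspicious activity can be spotted as it
# happens. Threshold and window values are assumed, not NCSC-specified.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("audit")

FAILURE_THRESHOLD = 5          # assumed policy: >5 failures per window
WINDOW = timedelta(minutes=10)  # assumed detection window

failures = {}  # user -> timestamps of recent failed attempts

def record_login(user, success):
    """Log every attempt; warn when failures cluster suspiciously."""
    log.info("login user=%s success=%s", user, success)
    if success:
        failures.pop(user, None)  # reset on successful login
        return
    now = datetime.utcnow()
    recent = [t for t in failures.get(user, []) if now - t < WINDOW]
    recent.append(now)
    failures[user] = recent
    if len(recent) > FAILURE_THRESHOLD:
        log.warning("possible brute-force attack on user=%s "
                    "(%d failures within %s)", user, len(recent), WINDOW)

if __name__ == "__main__":
    for _ in range(7):  # simulate a burst of failed attempts
        record_login("alice", success=False)
```

The design choice here mirrors the principle itself: detection depends on the system emitting a complete, timestamped record of activity, so that anomalous patterns can be acted on as they occur rather than discovered after the fact.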

Alec Ross (Senior Advisor for Innovation to Hillary Clinton as U.S. Secretary of State) warns that “small businesses cannot pay for the type of expensive cybersecurity protection that governments and major corporations can (afford)” (A Ross, The Industries of the Future (2016)). It remains to be seen to what extent cybersecurity design principles will become a financial impediment to small businesses engaging with AI developments in the near future.