By Hamza Drabu, Alison McAdams, Jonathan Bonser and Christian Carr


Published 05 May 2021

Overview

On 21 April 2021, the European Commission published its bold proposal [1] for a regulation laying down harmonised rules governing artificial intelligence.

In doing so, the Commission has placed the EU at the forefront of the global debate on when and how risks arising from AI should be captured and regulated. Although the UK is no longer directly subject to EU regulations, the AI market is global, and from a medical devices perspective AI providers cannot ignore the proposal, especially if they wish to supply their products within the EU.

Overcoming tensions

Ever present within the proposed regulation is the familiar tension between, on the one hand, the desire to avoid encroaching on the freedom to research and swiftly exploit new technologies with wide-ranging expected benefits and, on the other, the need to protect the public. The proposal seeks to bring the attendant risks within a workable legal framework.

Whilst some in tech have already signalled concern, the Commission’s stated aims in producing the proposal are difficult to argue with. Taking a long-term view, innovation only stands to benefit from legal certainty. Such certainty can only enhance the prospect of those working with AI securing confident investment, and build public trust and “buy-in”, public confidence being key to the continued uptake of AI-based solutions. It will also help prevent the market fragmentation across the EU that might have come with a less comprehensive legal instrument.

The challenges AI presents to the legal orthodoxy are myriad, whether one considers the medical device regulatory regime, the common law fault-based liability framework injured patients traditionally navigate in clinical negligence cases in the United Kingdom, or the strict liability “defect”-based product liability framework.

Against this complex background, we go on to consider the key aspects of the Commission’s proposal with a particular focus on what it could mean for stakeholders in the health sector.

The Commission’s proposal in more detail

The proposal seeks to impose on “high-risk AI systems” an adjusted form of the regime governing medical devices (and indeed a range of other products). AI systems qualifying as high-risk are expected to go through a conformity assessment process and be CE-marked before being placed on the market or put into service. Certain AI systems are prohibited entirely, and those that are not “high-risk” are subject to more limited obligations, but for reasons set out below the focus for those in the health sector will overwhelmingly be on the provisions relating to high-risk AI systems.

“AI system” is defined very broadly, and includes software developed using a wide variety of techniques: machine learning approaches, including deep learning; logic- and knowledge-based approaches; and statistical approaches, Bayesian estimation, and search and optimisation methods. Any such software that can, “for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with”, will fall within the definition.

From a medical devices perspective, Article 6 of the proposed regulation confirms that an AI system is “high-risk” where it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed at Annex II and required to undergo a third-party conformity assessment pursuant to that legislation. Annex II includes the EU Regulations on Medical Devices (“MDR”) [2] and In Vitro Diagnostic Medical Devices (“IVDR”) [3]. The classification rules and conformity assessment procedures under the MDR mean that most software qualifying as a medical device will require the involvement of a notified body before CE marking, and so will qualify as a high-risk AI system where it includes an AI element. Specific systems deemed high-risk may also appear in Annex III.

The proposed regulation provides that high-risk AI systems must be subject to an extensive risk management and quality management system, and that a technical file must be produced, before they are CE-marked. Notified bodies will be enabled to assess conformity. Of interest to those in the UK, conformity assessment bodies in “third countries” may be authorised to carry out the activities of notified bodies under the regulation, so long as the EU has concluded an agreement with the country concerned. Some requirements are of interest both in their own right and for the ways they seek to resolve some of the more vexed questions of how a liability system can navigate the challenges of AI. For example, Articles 10-14 of the proposal make provision for high-risk AI systems to:

  • be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons when in use;
  • be “sufficiently transparent” to enable users to interpret the system’s output and use it “appropriately”;
  • be subject to safeguards against the danger of bias in training materials from which the AI learns;
  • automatically record events covering, at a minimum, the period of use of the system, the reference database against which input data has been checked by the system, the input data, and the identity of the natural persons involved in verifying the results (one possible shape for such a record is sketched after this list).
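By way of illustration only, the following minimal sketch (in Python) shows one possible shape for the event records contemplated by the proposal’s record-keeping provisions. The field names are our assumptions: the proposal prescribes what must be captured, not the format.

    # Illustrative sketch only: one possible shape for the automatic event
    # records envisaged for high-risk AI systems. Field names are assumptions.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class UsageEventRecord:
        period_start: datetime        # start of the period of use of the system
        period_end: datetime          # end of the period of use of the system
        reference_database: str       # database against which input data was checked
        input_data: str               # the input data itself (or a reference to it)
        verifying_persons: list[str]  # natural persons involved in verifying results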

The Commission states that the proposed minimum requirements “are already state-of-the-art for many diligent operators and the result of two years of preparatory work”, derived from the Ethics Guidelines of the High-Level Expert Group on Artificial Intelligence (Ethics Guidelines for Trustworthy AI), piloted by more than 350 organisations. It goes on to state that they are “largely consistent with other international recommendations and principles, which ensures that the proposed AI framework is compatible with those adopted by the EU’s international trade partners. The precise technical solutions to achieve compliance with those requirements may be provided by standards or by other technical specifications or otherwise be developed in accordance with general engineering or scientific knowledge at the discretion of the provider of the AI system. This flexibility is particularly important, because it allows providers of AI systems to choose the way to meet their requirements, taking into account the state-of-the-art and technological and scientific progress in this field.”

Article 60 envisages an EU database for stand-alone high-risk AI systems, with providers under an obligation to register their systems and enter various pieces of information about them, which will be accessible to the public.

As regards enforcement, in cases of persistent non-compliance Member States are expected to “take all appropriate measures to restrict or prohibit the high-risk AI system being made available on the market or ensure that it is recalled or withdrawn from the market”. Non-compliance with the data and data governance requirements in Article 10 should not be taken lightly: it can lead to fines of up to EUR 30,000,000 or, if greater, up to 6% of a company’s total worldwide annual turnover for the preceding financial year. Lesser penalties are envisaged for other instances of non-compliance and for the supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities.
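To illustrate how that “whichever is greater” ceiling operates in practice, here is a minimal sketch (in Python) using a hypothetical turnover figure:

    # Illustrative sketch only: the fine ceiling for breaches of the Article 10
    # data governance requirements, on the proposal's "if greater" formulation.
    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        # EUR 30m, or 6% of total worldwide annual turnover, whichever is greater
        return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

    # A hypothetical company with EUR 1bn turnover faces a EUR 60m ceiling:
    print(max_fine_eur(1_000_000_000))  # 60000000.0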

One issue the proposal does not directly address is civil liability, though the explanatory memorandum states that “initiatives” addressing liability issues related to AI are in the pipeline and will build on and complement the approach taken. It is worth taking a brief look at what might be expected in that regard.

EU initiatives on liability

Turning to the question of liability, medical device manufacturers and other stakeholders in the sector should be mindful of the European Parliament’s resolution of 20 October 2020 [4], in which the Parliament made recommendations to the Commission on a civil liability regime for AI. This will form a key strand in the bloc’s approach to grappling with AI.

The recommendations included revision of the Product Liability Directive [5] to adapt it to the “digital world”, including clarification of the definitions of “product”, “damage”, “defect” and “producer”. The recommendations acknowledge that, by its very nature, AI could present significant difficulties to injured parties wishing to prove their case and seek redress. To address what could be seen as an inequality of arms, the Parliament made various proposals, including that in certain clearly defined cases the burden of proof should be reversed.

In common with the Commission’s proposal, the Parliament’s liability recommendation also made reference to “high-risk AI systems”, singling them out as suitable candidates for a standalone strict liability, compulsory insurance-backed compensation system. Under that system, the front- and/or back-end operator of a high-risk AI system would be jointly and severally liable to compensate any party up to EUR 2,000,000 where that party had been injured by a physical or virtual activity, device or process driven by the AI system. The operator could not exonerate themselves with a “due diligence” defence; only a “force majeure” type defence would be available. Once the injured party had been compensated, the paying party could seek proportional redress from other operators based on the degree of control they exercised over the risk. In other words, apportionment would be dealt with between defendants later, once liability and any consequent compensation had been worked out with the injured claimant.
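To make the mechanics of that proportional redress concrete, the following minimal sketch (in Python) assumes hypothetical control shares; the resolution itself prescribes no formula beyond the “degree of control” exercised:

    # Illustrative sketch only: redress between the operators of a high-risk
    # AI system after the injured party has been compensated. The control
    # shares are hypothetical assumptions, not figures from the resolution.
    compensation_paid_eur = 2_000_000  # the proposed cap for personal injury
    control_shares = {"front_end_operator": 0.25, "back_end_operator": 0.75}

    # Each operator bears the compensation in proportion to their control:
    redress = {op: compensation_paid_eur * share
               for op, share in control_shares.items()}
    print(redress)  # {'front_end_operator': 500000.0, 'back_end_operator': 1500000.0}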

Through the Consumer Protection Act 1987 (the legislation implementing the Product Liability Directive in the UK), a strict liability regime covering “defective” products has of course operated in this jurisdiction for many years. Clearly there is much debate over whether that framework will remain fit for purpose as AI-based products evolve and proliferate in ever more varied and complex healthcare settings. Absent a contractual relationship between the patient and those responsible for the product incorporating AI, it also remains to be seen whether product liability claims will come to be viewed by claimants as a viable alternative to actions in tort. That said, the courts have of course adjusted the core principles of negligence before, if with some reluctance, to meet novel challenges arising in a complex litigation environment [6].

Stakeholders will watch with interest how the Commission’s proposal meshes with any forthcoming instruments tackling liability.

Welcome first steps

The Commission’s proposal is a welcome development, and the passage of the proposed regulation through the legislative process will be keenly observed globally. Although it will be a long time before a future iteration of the proposal becomes law, it provides a concrete starting point from which to begin answering some of the many legal questions AI poses.

Read in tandem with the Parliament’s recommendations, the proposal appears effectively to sidestep the question of legal personality for AI by focusing instead on AI systems and their operators. The proportionate approach of singling out high-risk AI systems for the greatest scrutiny is also a step in the right direction.

Domestically, the Medicines and Medical Devices Act 2021 [7] gives the Secretary of State an enabling piece of primary legislation conferring extensive powers to make regulations fit for the digital age.

When making regulations under the relevant provisions, the Secretary of State must have in mind the overarching objective of safeguarding public health. As part of this, consideration must be given to whether or not the regulations would affect the likelihood of the United Kingdom being seen as a “favourable place” in which to carry out research relating to medical devices, or to develop, manufacture or supply them [8].

With that in mind, all UK stakeholders will be keen to see sooner rather than later where they stand relative to those in the EU.


References

[1] https://ec.europa.eu/transparency/regdoc/rep/1/2021/EN/COM-2021-206-F1-EN-MAIN-PART-1.PDF
[2] Regulation (EU) 2017/745 (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32017R0745)
[3] Regulation (EU) 2017/746 (https://eur-lex.europa.eu/eli/reg/2017/746/oj)
[4] https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html
[5] Council Directive 85/374/EEC (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A31985L0374)
[6] See, for example, Fairchild v Glenhaven Funeral Services Ltd & Ors [2002] UKHL 22 and subsequent measures taken to effectively handle mesothelioma claims. (https://www.bailii.org/uk/cases/UKHL/2002/22.html)
[7] https://www.legislation.gov.uk/ukpga/2021/3/enacted
[8] Medicines and Medical Devices Act 2021, s.15(3)
