By Alison McAdams, Hamza Drabu & Emily Broad


Published 13 May 2024

Overview

The transformative potential of artificial intelligence (AI) is discussed daily but there is always the caveat that it must be designed, developed and deployed safely and responsibly.

AI is already making a significant contribution to the way healthcare is delivered in the UK and hopes are pinned on the MedTech sector to provide solutions that will address the pressures on the NHS and enable patients to benefit from faster, innovative care.

With its responsibility for the safety, efficacy and quality of medicines and medical devices, the Medicines and Healthcare products Regulatory Agency (MHRA) has now set out its strategic approach to the regulation of AI, as well as how it plans to deploy AI in the delivery of its own services.

As part of that approach, the AI Airlock project has now been launched, to help the MHRA identify and address the challenges involved in regulating AI as a medical device (AIaMD).


Background

In February 2024, the Government asked a number of regulators, including the MHRA, to outline the steps they were taking in line with the White Paper on the Pro-innovation approach to the Regulation of AI.

The White Paper had set out the government's framework for governing AI in order to drive safe, responsible innovation, underpinned by five key principles – safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

The UK’s regulators were charged with interpreting and applying these principles to AI within their particular remits, allowing AI to be regulated in a targeted, coherent and context-specific manner. The framework proposed in the White Paper is currently non-statutory although it noted that it may become necessary to introduce a statutory duty for regulators to have due regard to the principles at a later point.

The MHRA's publication of this policy paper on its strategic approach to AI is therefore its response to this call for greater transparency regarding the steps it is taking to understand both the opportunities and the risks AI creates, and the actions it is taking.


The Scope of the MHRA's Approach

Against this background, the MHRA's policy paper considers the opportunities and risks of AI from three different perspectives – as a regulator of AI products; as a public service organisation delivering time-critical decisions; and as an organisation that makes evidence-based decisions that impact on public and patient safety, where that evidence is often supplied by third parties.

While many organisations get to grips with the opportunities AI provides to improve the efficiency of the services they provide, it is the MHRA's comments in its capacity as regulator of AIaMD that will be of most interest to manufacturers and developers of AI healthcare products.


The Regulation of AI Products

Where AI is used for a medical purpose, it is very likely to come within the definition of a medical device. That medical purpose can be the diagnosis, prevention or treatment of disease or injury, the replacement or modification of the anatomy or of a physiological process, or the control of contraception.

Software and AI used for such a medical purpose will fall within the remit of the UK Medical Devices Regulations 2002 (UK MDR) and must meet its requirements before it can be placed on the UK market. These regulations apply throughout the lifecycle of the product, setting out the responsibilities of manufacturers from pre-market clinical investigation, through conformity assessment, registering the product with the MHRA, mitigating risk and addressing safety and performance, to post-market surveillance activities.

A programme of regulatory reform for medical devices is currently underway and as part of this process, the MHRA announced its roadmap setting out the Software and AI as a Medical Device Change Programme back in September 2021. The roadmap provided manufacturers with markers for the direction of travel that would be taken and the MHRA is in the process of providing substantive guidance documents that protect patient safety, encourage innovation and provide certainty to industry.

The MHRA’s strategic approach to AI is therefore a summary of the work already undertaken as well as an indication of what is still to come. It also confirms that it will adopt the recommendations of the AI Regulation White Paper. In terms of the changes that lie ahead, the following initiatives will address the five key principles from the White Paper:

Safety, security and robustness - The current UK MDR risk-based classification system will continue in the reformed regulations. However, many AI products currently in the lowest risk classification will be up-classified. This will mean that they cannot be placed on the market without an independent assessment of conformity.

Guidance on cyber security is also due for publication by spring 2025.

Appropriate transparency and explainability – For all devices including AI, manufacturers must provide a clear statement of the purpose of the device for all intended users. The MHRA has already provided guidance to support manufacturers with Crafting an intended purpose in the context of Software as a Medical Device. Existing MHRA guidance on applying human factors to medical devices will be supplemented by further detailed guidance specifically for AIaMD products due in spring 2025.

Fairness - Following the publication of the Independent Review of Equity in Medical Devices by Dame Margaret Whitehead, the MHRA has confirmed it is fully committed to ensuring equitable access to safe, effective, and high-quality medical devices for all individuals who use them. As with all its reforms and initiatives, the MHRA is looking to take an internationally aligned position.

Accountability and governance - The existing regulations set obligations for manufacturers, conformity assessment bodies and the MHRA, but these will be strengthened and clarified in the new regulations, including for other economic operators in the supply chain.

Accountability is also applicable to the datasets used in the creation of AI models and the potential changes that occur in post-market use. The MHRA has already published guidance on principles of Predetermined Change Control Plans (PCCP) and has confirmed that it intends to introduce PCCPs in the future core regulations.

Contestability and redress - The ability to monitor AIaMD product changes will assist with meeting this principle. In addition, building on the MHRA's Yellow Card scheme, current regulations also place legal requirements on manufacturers to report incidents, and these obligations will be strengthened for medical devices by new regulations which the MHRA is aiming to put in place by the summer.


AI Airlock

A key part of the MHRA's strategic approach to AI is the AI Airlock project which has now been launched. Described as a 'proactive, collaborative, agile and the first of its kind approach', this pilot project will help the MHRA to identify and address the challenges for regulating standalone AI medical devices.

The regulatory sandbox model is a recognised mechanism for addressing novel regulatory challenges, and the AI Airlock applies it to AI in healthcare. Many of the known risks of software and AIaMD products have already been identified and are mitigated through existing regulatory requirements. However, in order to regulate AI products effectively and efficiently as they continue to develop, the objective of the AI Airlock is to identify the regulatory challenges posed by AIaMD and to work collaboratively to understand and potentially mitigate any risks that are uncovered. This involves collaboration with Approved Bodies ("Team AB") to inform standard policy positions. It also involves collaboration with the DHSC and the NHS AI Lab to ensure that expertise relating to the deployment and post-market surveillance of AI in healthcare is considered when designing regulation and guidance.

The pilot project will focus on a small number of products across a range of medical device regulatory issues. The MHRA is initially seeking out and supporting 4-6 virtual or real-world projects so manufacturers can deliver what is required to ensure the viability of their devices. The project will allow them to test a range of regulatory issues for these devices when they are used for direct clinical purposes within the NHS. Listed examples of the sorts of challenges the AI Airlock project may focus on include:

  • Detecting and reporting product performance errors (including drift) and failure modes in post market surveillance data.
  • Increased automation and decision-making responsibilities within clinical workflow and producing pre-market evidence of safety.
  • Breaking down the complexities of generative AI based medical devices.

The findings from this partnership between government, regulators and industry will then inform future AI Airlock projects and feed into future UK and international AIaMD guidance, including how the MHRA works with UK Approved Bodies on UKCA marking and with trusted regulatory partners on the international recognition of medical devices.


Conclusion

The MHRA’s strategic approach to AI shows how it is applying the AI Regulation White Paper's principles within its particular remit as the regulator of AI medical products, as well as how it intends to use AI to benefit its own provision of services.

The White Paper sought to foster safe and responsible AI innovation and the MHRA has applied its principles to the context in which it operates. As part of its wider regulatory reform of medical devices, some of the guidance promised in its roadmap has already been delivered while other measures are still awaited, including the core future regulations.

The launch of the AI Airlock project is another welcome part of this process. It not only aims to bring safe products to patients more quickly but also hopes to identify and address the particular regulatory challenges of AI along the way. Given the potential of AI, especially in the healthcare sector, it is vital that both the known and the unknown risks are tackled, and this wider objective is clearly on the MHRA's agenda. Quis custodiet ipsos custodes ("who will guard the guards themselves") has always been a challenge, but the MHRA has taken up the task.

For more information on the AI Airlock, there is a project webinar on 5 June 2024 that you can sign up to, with the project opening up for applications thereafter.

You can review our previous report on AI in Healthcare here.

If you want to discuss your software or AI medical device, whatever stage of its lifecycle it is at, please get in touch.

