By Jade Kowalski, Charlotte Halford, Mathew Rutter, Angela Hayes & Ellen McWhirter


Published 07 November 2023

Overview

On 26 October 2023 the Bank of England (BoE) published a feedback statement setting out responses to its October 2022 discussion paper on Artificial Intelligence and Machine Learning.

The BoE – including the Prudential Regulation Authority (PRA) – and the Financial Conduct Authority (FCA) published the discussion paper with the aim of generating greater dialogue around AI's potential impact on these bodies' prudential and conduct supervision objectives. The paper was launched as part of wider supervisory body work related to AI (e.g. the 2022 AI Public-Private Forum).

Main themes from the feedback statement

The feedback statement acknowledges and summarises the responses to the discussion paper, identifying common themes. The statement does not set out public policy or regulatory proposals or signal the supervisory bodies' approach to AI.

There were 54 responses, from industry bodies, banks and building societies, technology providers, consumer associations, insurers, financial market infrastructure providers and consultancies. There was no significant divergence of opinion between sectors.

Respondents made the following key points:

  • A regulatory definition of AI would not be useful, given the different approaches to defining AI (e.g. principles-based or risk-based) and the difficulty of capturing all of its characteristics and associated risks.
  • Regulators could design and maintain 'live' regulatory guidance and best practice examples, linking to existing requirements such as the FCA's Consumer Duty and the Equality Act.
  • Initiatives such as the 2022 AI Public-Private Forum have been useful and could serve as templates for ongoing industry engagement.
  • Greater alignment between regulators, both domestically and internationally, would be helpful.
  • More regulatory alignment in the area of data risks would also be useful (particularly those involving fairness, bias and management of protected characteristics).
  • Regulation and supervision should, in particular, focus on consumer outcomes, ensuring fairness and considering other ethical implications.
  • More regulatory guidance in the area of third-party models and data would be useful.
  • A combined approach across business units and functions could be helpful to mitigate AI risks (e.g. closer collaboration between data management and model risk management teams).
  • The principles set out in the BoE's paper on Model Risk Management Principles for Banks are sufficient to cover AI model risk, although certain areas could be strengthened or clarified.
  • Existing firm governance structures sufficiently address AI risks (this includes regulatory frameworks such as the Senior Managers and Certification Regime).

Key takeaways

AI regulation is a complex and fast-moving area. The feedback statement is useful in setting out points and concerns which are broadly accepted across a range of industries. A key theme throughout the feedback statement is the desire for a clear and harmonised approach between the regulators.

The UK Government appears keen to facilitate AI innovation, although the feedback to its 29 March 2023 policy paper A pro-innovation approach to AI regulation is still awaited. On 25 October 2023, it published a discussion paper on Capabilities and risks from frontier AI, followed on 27 October 2023 by a paper on Emerging processes for frontier AI safety. Measures to ensure the safe use of AI will be key to building public trust, and many of these measures will be common across all sectors.

It is clear from the BoE's feedback statement that there is widespread appetite for regulatory guidance on the application of AI and management of the associated risks, in preference to formal regulation. Reliance on existing legal and regulatory regimes, provided that they are technology-neutral, should ensure a proportionate approach which neither unfairly favours nor discriminates against AI-based technologies.

