6 min read

The UK Government’s White Paper on AI Regulation - 5 Key Takeaways


By Jade Kowalski | Published 17 May 2023

Overview

It is hard to do anything these days without encountering a reference to “AI”. From the overnight sensation ChatGPT to the doom-mongering resignation of the “Godfather of AI”, Geoffrey Hinton, AI is undeniably a global ‘hot topic’, with no sign that this will change anytime soon!

In the midst of this AI-induced chaos, the UK Government published its White Paper “A pro-innovation approach to AI regulation” on 29 March 2023 which, it states, is “pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative”. The White Paper proposes a “Framework” built around four key elements:

  • Defining AI based on its unique characteristics to support regulator coordination;
  • Adopting a context-specific approach;
  • Providing a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities; and
  • Delivering new central functions to support regulators, maximising the benefits of an iterative approach and ensuring that the Framework is coherent.

These four key elements inform the Framework’s aim to regulate AI without stifling the industry’s innovation. The White Paper diverges significantly from the European Commission’s proposals in both status and approach. In contrast to the specific and detailed requirements set out in the draft EU AI Act (first published in April 2021 and updated in May 2023), the Framework provides high-level guiding principles which, it is proposed, will be implemented by existing regulators.

As your guide through the ‘regulatory fog’, our Technology and Data, Privacy & Cyber teams have set out our five key takeaways.

 

1. Principles vs legislation

The Framework is underpinned by a set of five principles which will be used to direct the development and use of AI:

  • Safety, security, and robustness – AI should be safe, and risks should be identified, assessed and managed;
  • Appropriate transparency and explainability – Those affected should know that AI is being used and be able to understand the decisions it makes;
  • Fairness – AI should not contravene the legal rights of individuals or businesses, e.g. by discriminating or creating unfair market outcomes;
  • Accountability and governance – Use and supply of AI should be overseen and clearly accounted for; and
  • Contestability and redress – Where an AI system makes a harmful decision, or a decision which creates a material risk, there should be a route to challenge that decision.

The rationale for the absence of specific, AI-focussed legislation (at least at this point) is to ensure that regulation can keep pace with fast-evolving technology, while allowing businesses the flexibility to develop and use AI under the comforting watch of domain-specific regulators. These regulators will be given the ability to exercise their judgement when making decisions in their sectors, allowing for an adaptable approach dependent on the risks presented. Additionally, it is considered that an agile framework will avoid placing undue pressure on the market to follow cumbersome legislation, particularly on small businesses and start-ups, which may lack the resources to comply.

 

2. Innovation over Risk – unleashing Pandora’s Box?

The UK Government says it wants to capitalise on the use of AI: it has already identified AI as one of five critical technologies in the UK Science and Technology Framework. The White Paper places a great deal of emphasis on the importance of AI, specifically stating how AI could have an impact on the world comparable only to the invention of “electricity or the internet.”

It is unsurprising that the UK would take a market-friendly approach to regulating AI, with a longer run time before statutory legislation is implemented. It is nonetheless striking that the White Paper lacks detail on the dangers of AI, and on how such dangers might be adequately addressed by its proposals.

Meanwhile, in Brussels on 11 May 2023, the Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate on the EU AI Act. Before the European Council negotiates the final form of the EU AI Act, the legislation will need to be endorsed by the entire European Parliament (session to be held between 12 and 15 June 2023). In contrast to the UK’s Framework, this legislation promises to provide detailed requirements for the supply and use of AI, which may pose a greater challenge to innovators.

 

3. Context-specific analysis of AI

The White Paper sets out that the UK does not intend to assign blanket rules or risk levels to the use of AI in entire sectors or categories of technologies. It will instead opt for a “context-specific” approach, based on assessing the outcome the AI will generate for specific applications. Further, the White Paper does not seek to apply a specific definition of AI. Instead, it defines the concept by reference to two characteristics: “adaptivity” and “autonomy.”

For example, whilst AI use in ‘satnavs’ to avoid congestion would likely attract low levels of regulation, AI used to identify potential suspects in a criminal investigation would require further regulation.

On the face of it, this seems like a sensible and pragmatic approach, allowing lower risk applications of AI to be utilised without disproportionately burdensome regulatory obligations.

 

4. New AI Regulator out, Existing Regulators in

Instead of creating a new AI regulator, the White Paper proposes that the principles be implemented by existing regulators (which would include the Information Commissioner’s Office, the Financial Conduct Authority, the Competition and Markets Authority and others). There will be a duty for existing regulators to comply with the Framework principles when assessing the use and supply of AI. The application of the principles will, however, be at the discretion of the regulators.

The existing regulators will be empowered to promote clarity across AI regulation by issuing both individual and joint guidance where AI crosses multiple sectors. However, there is very little detail regarding how this will work in practice.

We can foresee challenges if different regulators apply the five principles inconsistently, or more broadly than their remit, and for businesses seeking to reconcile conflicting guidance from multiple regulators. In addition, as each regulator will have its own enforcement powers, it is possible that two separate businesses may receive different outcomes for breach of the same principle simply due to the remit of their regulator. These challenges are yet to be realised, but it is clear that regulator collaboration and consistency will be key if the UK’s approach is to flourish.

 

5. Enforcement is with the Regulators

Luckily (or unluckily, depending on your world view) for businesses, the potential enforcement processes for contravening the principles are still in their infancy and will be developed by regulators in conjunction with a Government central risk function, which will provide broad support and monitoring for the regulators.

The decision to leave enforcement and the consequences of contravention vague may be a cause for concern. The recent decision by the CMA to block the £55 billion takeover of Activision Blizzard by Microsoft, together with the potential new powers given to the CMA by the Digital Markets, Competition and Consumers Bill (see our recent article here), shows the impact that just one of the regulators can have on industry; further UK legislative developments in AI should therefore be watched with a cunning (A) eye.

 

Final Thoughts (for now!)

The Framework would seem to be a ‘work in progress’, but it puts down a clear marker to the international tech market, seeking to attract investment and demonstrate a difference in direction from Brussels. Undoubtedly, however, the EU’s regulatory approach will have to be followed to a significant extent by the UK, and some will question whether a more robust approach to addressing the risks of AI now would add to the attraction of the package. Credibility will depend on an early and proactive approach from the regulators. Whatever the ultimate UK regulatory position, the policing of AI is essential at a global level.

The White Paper is open for consultation (specifically, on various questions contained within it) until 21 June 2023.
