By Jade Kowalski, Christopher Little and Isabella McMeechan


Published 30 October 2023

Overview

Following the first instalment of our AI Explainer series, this article considers the global move towards "regulation" (in the broadest sense) of AI. This has been brought into sharp focus this week, with over 25 countries around the world endorsing the Bletchley Declaration on 'frontier' AI safety on 1st November.

Use of AI in breach of existing laws (such as data protection, copyright, equality and product liability laws) could result in regulatory enforcement and/or civil liability. More broadly, issues such as data ethics and reputational protection are increasingly important. Alongside this, governments and regulators – both in the UK and around the world – are moving at pace to develop an approach to AI regulation, many even agreeing to work together in the world-first Bletchley Declaration. 

With that in mind, this article explores how different countries are regulating AI in this new and rapidly evolving area, starting with key takeaways for a quick read, and then delving into expert analysis.

Key takeaways:

  1. Common key themes in AI regulation have emerged around the world, with the Bletchley Declaration affirming the importance of a global approach to AI. Key themes from the Declaration and elsewhere include a focus on safety and security (particularly the protection of individual rights) and transparency, together with risk-based approaches and assessments.

  2. However, there is still no uniform global approach to exactly how AI is regulated. Countries around the world have been taking different approaches: some (such as China) already have laws in force, some have decided not to introduce binding regulations as yet (for example Japan, which has so far focused on non-binding guidance), and others (such as the EU, US and UK) have plans to regulate in some form.

  3. The EU, in particular, is taking a robust, risk-based approach. The draft EU AI Act focuses on the protection of rights and transparency, with certain AI use cases being prohibited, and others being considered high risk (with additional measures, such as conformity assessments, being applied).
  4. The UK has proposed a principles-based, sector-led approach. Rather than implementing a new AI regulatory framework, the UK has set out plans to adopt a principles-based and context-specific approach, under which existing regulators will govern the use of AI by those within their remit.
  5. What should I be doing now? Conducting AI assessments, ensuring you understand AI use and development within your business, and implementing suitable transparency and protective measures.

Expert Analysis:

Governments and regulators around the world are now focusing on AI.

The Bletchley Declaration, in particular, demonstrates the global drive to regulate AI urgently. The Declaration largely focuses on high-risk 'frontier AI', defined in the Declaration as "highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks". The countries agreeing the Declaration (including the UK, US, Canada, France, Germany, China, Japan, and India, to name a few) confirmed the need for global cooperation on the safe and responsible development and use of AI, to understand and appropriately manage the risks and opportunities AI offers. Key themes identified include the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, oversight, ethics, bias mitigation, and privacy and data protection.

Despite this recent development, it is unsurprising that global regulators have so far adopted different approaches. These range from China, which already has AI regulations in force, to countries like Japan which, eager to attract AI developers, is instead focusing on non-binding guidance (see, for example, its Governance Guidelines for Implementing AI Principles). Many countries fall somewhere in between, with plans to introduce AI regulations in some form in the near future. The EU is leading the charge on this side of the globe with the extensive and well-progressed draft EU AI Act, alongside countries such as Canada and the US, which have AI laws under development.

China's Interim Measures for the Management of Generative Artificial Intelligence Services, which regulate generative AI, are already in effect – forming part of a local and application-specific framework of AI laws in China. Whilst they allow for innovation by companies' AI research labs, they place robust obligations on providers of generative AI solutions, particularly public-facing applications – such as requiring them to put in place measures to prevent discrimination and use data from 'lawful sources' (during AI algorithm development and training), establish data tagging rules, and protect users' inputs.

EU legislation, on the other hand, is still in draft form. However, the EU has taken a robust, risk-based approach in the draft EU AI Act. The focus of the Act is on protection and transparency – particularly in terms of safety and human rights – requiring the implementation of appropriate safeguards throughout the AI lifecycle. The Act categorises AI by risk as follows:

  1. Unacceptable risk AI, which is prohibited (for example social scoring systems, and those that harmfully exploit children);
  2. High risk AI, which is subject to measures such as conformity assessments and CE marking requirements (for example AI in medical devices, critical infrastructure, and biometric categorisation);
  3. AI which is subject to specific transparency obligations (such as impersonation bots), with requirements such as notifying users that they are interacting with AI applications; and
  4. Minimal or no risk AI, which is not subject to specific restrictions.

In Canada, the Artificial Intelligence and Data Act (AIDA) is also currently in draft form. AIDA focuses on transparency, accountability and the responsible design, development and deployment of AI (with risk assessments, mitigation strategies, monitoring and record-keeping obligations). It is intended to promote a balanced approach to regulation, taking into account companies of all sizes and the safety of users. Whilst AIDA remains in draft form, Canada introduced, in September 2023, a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems – although not legally binding, this is intended to provide useful common standards and guidance around generative AI development.

In the US, there is no comprehensive federal AI legislation, but several federal agencies have been busy exploring potential guidance and policy. Notably, a bill was published in June to establish an AI commission, with, again, a risk-based approach being proposed. The White House has also introduced a (non-binding) Blueprint for an AI Bill of Rights, setting out recommendations for the safe development and use of AI, based on five key principles: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback. More recently, the Federal Trade Commission held a virtual 'roundtable' in October to gather views on the impact of generative AI on the creative industries, in light of the entertainment and media industry strikes earlier this year; key concerns raised included the potential loss of control over data.

Overall, whilst there is no united global approach to AI, consistent themes are emerging, and countries around the world have identified the need for cooperation through the Bletchley Declaration. Transparency and risk-based approaches and assessments are set to be key. There is also the more general need for balance between safety and innovation: protecting citizens' rights and content, whilst allowing for advancements in AI.

United Kingdom 

The UK Government's proposed approach to regulating AI is set out in its White Paper, "A pro-innovation approach to AI regulation", published on 29th March 2023, which formally introduced the idea of a principles-based and "context specific" approach to AI regulation in the UK.

The proposed UK approach differs significantly from that of the EU. Instead of creating a single new AI regulator to govern the development and use of AI in the UK, it is suggested that this will fall to existing regulators, which are experts in regulating their individual sectors. These regulators will include the Information Commissioner's Office (ICO), the Financial Conduct Authority (FCA), the Competition and Markets Authority (CMA) and the Office of Communications (Ofcom). The following five principles, which are set out in the White Paper, form the parameters within which the regulators will be required to operate:

  1. Safety, security and robustness – AI must primarily be safe. AI suppliers must have appropriate measures in place to ensure their AI systems are secure and robust and that risks are identified and managed accordingly.
  2. Appropriate transparency and "explainability" – individuals and organisations need to be aware that AI is being used and to have access to (and understand) the decision-making processes of an AI system.
  3. Fairness – AI systems must not undermine the rights of individuals or organisations, including by discriminating unfairly or creating unfair market outcomes.
  4. Accountability and governance – AI systems must be governed in a way that ensures effective oversight and clear accountability.
  5. Contestability and redress – users of AI need to be able to contest an AI decision which is harmful or creates a material risk.

These five principles are not expected to be statutory; instead, existing regulators will be empowered to issue guidance on how the principles should be interpreted and what practical measures can be taken to ensure compliance. The stated aims of refraining from centralised AI-specific legislation are to (i) create a regulatory framework which is adaptable in the face of such a rapidly evolving area, and (ii) avoid a scenario whereby the remit of existing regulators is undermined by new legislation.

Some UK regulators have already published guidance regarding AI which falls within their remit. For example, there is significant overlap between data protection requirements and AI regulation. The ICO has been particularly proactive in:

  • issuing extensive AI-focused guidance as well as practical resources, including the "AI and data protection risk" toolkit; and
  • taking enforcement action. In October 2023, it issued a preliminary enforcement notice to Snap (Snap, Inc and Snap Group Limited) as a result of its purported failure to adequately assess the risks posed to children by use of its generative AI chatbot, "My AI".

There is an acknowledged concern that the approaches taken and guidance issued by different regulators need to dovetail, so that AI systems which operate across multiple sectors are able to comply within more than one regulatory remit. The risk, of course, is the issuance of competing (and potentially contradictory) guidance, and the dilemma of which takes precedence.

A key proposal put forward by the Department for Science, Innovation and Technology, in order to ensure a coordinated approach to AI governance, is a new multi-regulator AI sandbox which will "allow innovators and entrepreneurs to experiment with new products or services under enhanced regulatory supervision without the risk of fines or liability".

The House of Commons Science, Innovation and Technology Committee has also published a report in response to the Government's White Paper, stressing the importance of putting in place, as soon as possible, a regulatory framework which will enable the five AI principles to be successfully implemented. We consider this point in more detail in the next article in our AI Explainer series.

Next steps

Regulators are busy, and key themes are emerging in the UK and across the world. The Bletchley Declaration, and the global AI Safety Summit hosted by the UK from which the Declaration emerged, demonstrate the urgency with which countries are looking to regulate AI.

Organisations therefore need to be prepared for when the time comes. Conducting assessments, ensuring you understand the AI you are using or developing (or planning to use or develop), and implementing suitable technical, contractual and practical transparency and protective measures will be key to ensuring future-proofed use of AI in your business.

In the meantime, please keep an eye out for the next article in our AI Explainer series which builds on the second half of this article and takes a closer look at the proposed UK AI regulatory framework, including any key developments from the AI Safety Summit.
