
Delving into the EU AI Act: scoping your exposure


By Charlotte Halford, Amanda Mackenzie & Peter Given


Published 12 August 2024

Overview

After years of political negotiation and legal debate, the European Union's Artificial Intelligence Act, Regulation (EU) 2024/1689 (the EU AI Act), finally became law when it was published in the Official Journal on 12 July 2024. The EU AI Act entered into force across all 27 EU Member States on 1 August 2024, and will now be implemented in phases that will include further rule-making and guidance.

It should also be remembered that the EU AI Act does not come into force in isolation. It is only one part of the EU's digital strategy and regulatory framework, and it overlaps with the GDPR, the Data Act, the Digital Markets Act, the Digital Services Act and the NIS2 Directive. It will also influence other international regulatory systems, potentially even the UK's. As we highlighted in our recent article, "The King's Speech: some surprising implications for Data, Privacy and Cyber", the newly elected Labour government is considering "appropriate legislation" for AI regulation and potentially pursuing a legislative agenda that is more aligned with the EU, although its current focus seems to be on "highly targeted legislation that focuses on the safety risks posed by the most powerful models".

The EU AI Act takes a risk-based approach to the regulation of the entire life cycle of AI systems, setting out a legal framework for the development, placing on the market, putting into service and use of AI systems in the EU. It also prohibits certain AI practices and places specific obligations on operators of different AI systems (see below for more detail on who might be caught as an 'operator'). As with the GDPR, it has wide extraterritorial reach and will impact organisations outside the EU, including in the UK. Non-compliance with the EU AI Act could result in significant fines, with a maximum financial penalty of up to EUR 35 million or 7 per cent of worldwide annual turnover, whichever is higher. It is, therefore, important for UK organisations to identify how and when they may be caught by the EU AI Act.
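The "whichever is higher" penalty cap can be illustrated with a quick calculation. The sketch below is purely illustrative and the turnover figures are hypothetical, but it shows why the 7 per cent limb bites only for larger businesses:

```python
def max_penalty_eur(worldwide_annual_turnover_eur: float) -> float:
    """Maximum fine under the EU AI Act's top penalty tier:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 100m turnover: 7% is only EUR 7m,
# so the EUR 35m floor applies.
print(max_penalty_eur(100_000_000))

# Hypothetical company with EUR 1bn turnover: 7% is roughly EUR 70m,
# which exceeds the EUR 35m floor, so the turnover-based limb applies.
print(max_penalty_eur(1_000_000_000))
```

The crossover point sits at EUR 500 million of turnover, above which the turnover-based limb exceeds the fixed EUR 35 million figure.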

It is a complex piece of legislation, with 113 articles and 180 recitals, and we anticipate that these will be refined and clarified by regulatory guidance over the coming months. However, as obligations are likely to apply before further guidance arrives, it is important for in-scope organisations to get to grips with the key definitions and concepts now.

In this article we will look at our current understanding of the scope of the EU AI Act for, in particular, UK organisations. We will consider the definition of AI, who the EU AI Act applies to and its territorial scope, building on the analysis set out in our AI Explainer article back in March, "The DACB AI Explainer: EU AI Act Approved by EU Parliament – What are prohibited AI practices? What does this mean for my business?"

 

The definition of AI: what is actually regulated by the EU AI Act?

The Act does not define 'AI' as such, but rather an 'AI system' and 'general-purpose AI models'.

 

AI Systems

An AI system is defined as:

 "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments". [1]

This definition bears little resemblance to the definition in the original draft of the EU AI Act and is now very similar to the Organisation for Economic Co-operation and Development's (OECD) definition agreed in November 2023.[2]

Despite these changes, we consider the definition still contains some ambiguity and remains incredibly broad, so organisations will likely face challenges in understanding whether their processing activities involve an "AI system" within the meaning of the EU AI Act.

When interpreting the definition, it is important to bear in mind that the definition is cumulative and organisations should consider each part of the definition separately. Although Recital 12 provides some guidance, we discuss below some possible implications of the potentially ambiguous terms contained in the definition:

  • a machine-based system

What does "machine-based" mean? This emphasis highlights that relevant systems must operate through automated processes and computational power, rather than human-driven or manual processes. However, will a system still be considered machine-based, and in scope, if, for example, it cannot function as intended without constant human oversight?

  • designed to operate with varying levels of autonomy

Does "varying" mean that in-scope AI systems must have fluctuating levels of autonomy or, more likely, that systems with different levels of autonomy are all in scope? Does the term "designed to" mean that an AI system is in scope if it is developed to operate with autonomy, even if in practice it does not?

  • that may exhibit adaptiveness after deployment

Recital 12 confirms that AI systems must "have some degree of independence of actions from human involvement and of capabilities to operate without human intervention. The adaptiveness that an AI system could exhibit after deployment, refers to self-learning capabilities, allowing the system to change while in use". However, the use of the word "may" in the definition is potentially unhelpful. Is it intended that a system which "may" or "may not" exhibit adaptiveness could still fall within the definition?

  • for explicit or implicit objectives

Recital 12 gives some explanation of this part of the definition, confirming that an in-scope AI system includes both systems that produce intended outputs and those that produce unintended outputs.

  • infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments

Recital 12 provides some guidance here: "a key characteristic of an AI system is their capability to infer". The definition excludes "simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations", making it clear that an AI system is more than simple pre-programmed automation.
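Because the definition is cumulative, an initial scoping exercise amounts to working through each element in turn. The sketch below is purely illustrative: the element labels and the checklist structure are our own shorthand, not anything prescribed by the Act.

```python
# Illustrative only: the elements of the 'AI system' definition in
# Article 3(1), expressed as our own shorthand checklist.
AI_SYSTEM_ELEMENTS = [
    "machine-based system",
    "designed to operate with varying levels of autonomy",
    "may exhibit adaptiveness after deployment",
    "operates for explicit or implicit objectives",
    "infers from input how to generate outputs that influence environments",
]
# NB: given the word "may", adaptiveness is arguably not a hard
# requirement, one of the ambiguities discussed above.

def meets_definition(assessment: dict) -> bool:
    """True only if every element of the cumulative definition is
    satisfied. `assessment` maps each element to a yes/no conclusion."""
    return all(assessment.get(element, False) for element in AI_SYSTEM_ELEMENTS)

# Example: a rules-only automation tool fails the 'infers' element
# (Recital 12's "rules defined solely by natural persons"), so it
# falls outside the definition.
rules_engine = {element: True for element in AI_SYSTEM_ELEMENTS}
rules_engine["infers from input how to generate outputs that influence environments"] = False
print(meets_definition(rules_engine))  # False
```

The point of structuring the assessment this way is that a "no" on any single element takes the system out of scope, which is why each part of the definition should be considered separately.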

 

General Purpose AI (GPAI) models

GPAI models are specifically regulated and classified under the EU AI Act, which distinguishes between obligations that apply to all GPAI models and additional obligations that apply to GPAI models with systemic risk.

GPAI models are defined as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market".

The EU AI Act also includes provisions relating to GPAI systems, which are AI systems based on GPAI models. The EU AI Act defines a GPAI system as "an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems".

It is clear that some of the general terms used in these definitions, for example "large amount of data", will need further guidance and review. However, we question whether even updated guidance can keep pace with the rapid development of GPAI models and whether the definitions will become outdated. Although certain models are excluded from the definition, e.g. GPAI models that have not been released on the market (i.e. experimental or prototype models), the definition is wide and, without further guidance, will capture a significant number of models.

A level of uncertainty will remain until the European Commission, standards-setting bodies, relevant EU regulators or the courts provide guidance or enforcement decisions that will give clarity to the interpretation of these definitions.

 

Who does the EU AI Act apply to – Who are the key players?

Unlike the GDPR, which imposes obligations on only two categories of player (controllers and processors), the EU AI Act imposes obligations on a much wider range of entities and businesses, together referred to as "operators".[3] These operators are as follows:

 

Providers

A provider is an individual or entity that first develops (or has developed on its behalf) an AI system or GPAI model. A provider will fall within the scope of the EU AI Act where:

  • it places the AI system or GPAI model on the market in the EU;
  • it puts the AI system into service in the EU; or
  • the output of the AI system is used in the EU,

in each case where the AI system or GPAI model is released under the provider's own name or trade mark.

 

Key Definitions:

‘placing on the market’ means the first making available of an AI system or a general-purpose AI model on the Union market;

‘making available on the market’ means the supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;

‘putting into service’ means the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose;

‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.

Providers that meet these criteria must comply with the EU AI Act irrespective of their place of establishment and location, and whether the AI system or GPAI model is provided in exchange for payment or free of charge.

 

Deployers

A deployer is an individual or entity using an AI system under its authority (except where the AI system is used in the course of a personal non-professional activity). A deployer falls within the scope of the EU AI Act where:

  • it is established or located in the EU; or
  • it is established or located in a third country but, as with providers, outputs produced by the AI system are used in the EU.

 

Importers

Importers are neither providers nor deployers. An importer is any individual or entity located or established in the EU that places on the EU market AI systems bearing the name or trade mark of individuals or entities based in third countries, i.e. it "imports" AI systems into the EU. Importers are the first to make these third-country AI systems available in the EU.

 

Distributors

Distributors are individuals or entities in the AI supply chain, other than providers or importers, that make AI systems available in the EU as a follow-on action, after the AI system is imported and placed on the EU market.

Notably, the importer and distributor roles are relevant only to the distribution of AI systems, not to GPAI models.

 

Product Manufacturers

A ‘product manufacturer’ is not defined in the EU AI Act itself, but in the EU harmonisation legislation listed in Annex I to the EU AI Act. Product manufacturers fall within the scope of the EU AI Act where they place AI systems on the EU market, or put them into service, together with their own product and under their own name or trade mark.

 

Authorised representative

An authorised representative is a representative in the EU of a provider based outside the EU, similar to the authorised representative role under the GDPR.

 

Summary

Operator type | In scope only if based within EU?
Provider | No. Within or outside of EU.
Deployer | No. Within or outside of EU.
Importer | Yes. Within EU only.
Distributor | Yes. Within EU only.
Product manufacturer | No. Within or outside of EU.
Authorised representative | Yes. Within EU only.
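The summary above can also be expressed as a simple lookup, purely as an illustrative first-pass aid for internal scoping exercises; the role names and function are our own shorthand, not terms from the Act.

```python
# Whether each operator role is caught only when established in the EU,
# per the summary table above. False = potentially caught wherever based.
EU_ESTABLISHMENT_REQUIRED = {
    "provider": False,
    "deployer": False,
    "importer": True,
    "distributor": True,
    "product manufacturer": False,
    "authorised representative": True,
}

def may_be_in_scope(role: str, established_in_eu: bool) -> bool:
    """First-pass filter only: roles that require EU establishment drop
    out for non-EU entities; every other combination needs the fuller
    analysis of placing on the market, outputs used in the EU, etc."""
    return established_in_eu or not EU_ESTABLISHMENT_REQUIRED[role]

print(may_be_in_scope("provider", established_in_eu=False))  # True
print(may_be_in_scope("importer", established_in_eu=False))  # False
```

A "True" result here only means the role cannot be ruled out on location alone; the substantive scope tests discussed in this article still need to be applied.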

 

Why is it important to understand your organisation's role in the AI value chain?

Each type of operator is subject to a different set of obligations. The most heavily regulated operators under the EU AI Act are providers, and most obligations fall on providers of high-risk AI systems: implementing risk and quality management systems, carrying out conformity assessments and complying with extensive documentation requirements.

It is therefore crucial to identify whether your organisation qualifies as a provider. Despite the detailed definition, difficulties may arise in making that determination. For example, an organisation may not only use an AI system or GPAI model (as a "deployer") but also modify it so significantly that a 'new' AI system or GPAI model may have been generated. In such a circumstance, a "deployer" could become a "provider".

We also see potential difficulties in assigning operator roles within international groups. For example, where a UK parent has an AI system developed on its behalf in the UK, at that point it is not a provider under the EU AI Act. However, if the UK parent then makes the AI system available to entities in its group within the EU, it is likely "placing on the market" or "putting into service" an AI system in the EU and will therefore be deemed a provider, subject to all the provider obligations under the EU AI Act. The EU AI Act is clear that there is no exemption for organisations that offer their AI systems free of charge, so intragroup provision of AI systems cannot avoid the obligations on this basis.

When assessing the role your organisation may play as an operator under the EU AI Act, it is important to note:

  • an operator can hold different roles concurrently (for example, both provider and deployer) and will need to fulfil the obligations associated with each of those roles;
  • several operators can hold the same role for one AI system (e.g. several different entities can be providers for one AI system); and
  • roles can be set out contractually but, as under the GDPR, how the parties act in practice will ultimately determine their roles.

 

What is the territorial scope?

As the definitions of the various operators above indicate, the EU AI Act has a very broad extraterritorial scope, much broader than Article 3 of the GDPR, covering not only businesses that are based or incorporated within the EU but also:

  • providers that put AI systems or GPAI models onto the market in the EU, even if based outside the EU.[4]

Any UK entity that sells products or services utilising an AI system or GPAI model into the EU will be caught. Also, as explained above, a group company headquartered in the UK, or with service companies in the UK, could be a provider where there is onward use of the UK AI system by EU group entities.

  • product manufacturers that place AI systems with their own product on the market in the EU.
  • providers or deployers based outside the EU that deploy AI systems outside the EU but where the output of the AI system is used in the EU.[5]

This provision is very wide and there is currently no commentary on what "used in the EU" means. There is also a discrepancy between the wording of Article 2(1)(c) and its associated Recital 22. Recital 22 seems to limit this provision with the words "to the extent the output produced by those systems is intended (our emphasis added) to be used in the EU"; Article 2(1)(c) does not contain the word "intended". The effect could be that the EU AI Act applies to a UK company even if it had no knowledge that the relevant AI system outputs would be used in the EU. We await further guidance and clarification on this point, but in the meantime UK businesses should seek to clarify, where possible, how relevant output will be used, and potentially seek to limit it, to minimise the risk of inadvertently coming within the scope of the EU AI Act.

 

What is not in scope?

The EU AI Act sets out some exemptions from its scope. These include:

  • traditional software that does not fall within the cumulative definition of an AI system;
  • AI systems used solely for scientific research and development, subject to certain conditions;
  • AI systems exclusively for military, defence or national security purposes;
  • AI systems released under free and open-source licences, other than in relation to certain prohibited and high-risk AI systems; and
  • personal and non-professional use of AI systems.

 

Our conclusions

The implications of the EU AI Act are wide-ranging for both organisations and individuals in the EU and worldwide. The Act is intended to ensure that organisations at all levels of the AI supply chain are subject to the EU AI Act’s requirements, often irrespective of jurisdiction.

However, there is still some uncertainty for non-EU organisations as to whether they will fall within the scope of the EU AI Act. Key definitions, including the fundamental definition of an AI system and the provider role, need further clarification. In particular, Article 2(1)(c) in its current form is at best uncertain and could in fact bring many non-EU organisations unknowingly within the EU AI Act's scope. How the term 'output' will be interpreted, and whether the concept of 'intention' in Recital 22 will play a part in narrowing the interpretation of Article 2(1)(c), remains to be seen. Until we have further guidance on the key definitions and Articles, such uncertainty will remain.

In the meantime, non-EU based organisations, and those operating across borders, should proceed with caution when using AI and ensure that considered and comprehensive AI governance and compliance programmes are in place, including mapping the AI systems and GPAI models that they develop, have had developed on their behalf and/or use. Organisations should take the time to determine whether they fall within the scope of the EU AI Act. If your organisation is not based in the EU, consider whether your AI systems or GPAI models have any link to the EU that could trigger the EU AI Act. It is also worth considering whether you can legitimately structure relationships, even within group companies, to mitigate the compliance risk arising from the EU AI Act.

[1] Article 3(1)

[2] https://oecd.ai/en/wonk/definition

[3] Article 2 and associated definitions in Article 3

[4] Article 2(1)(a)

[5] Article 2(1)(c)
