There is a new legislative regime on the horizon, and one which has been coming for some time, as authorities worldwide have considered whether and how to manage the risks of the use of artificial intelligence (AI) in business and elsewhere.
Keen to maintain its position as the global standard-setter for all things data following the GDPR, the European Commission has, in the last few days, almost led the way by publishing a draft Regulation which will, if implemented, apply to a wide range of AI use cases. We say ‘almost led’ because a few other jurisdictions (notably Singapore) have already published draft AI governance frameworks.
Most agree that the accelerating rate of progress in AI research and deployment is both exhilarating and somewhat alarming (the Zuckerberg adage of ‘Move fast and break things’ has been applied to AI too). The potential impacts on society are seen as both positive and negative – the negative impacts include reputational risk to the major users of AI, resulting in what has been termed a ‘techlash’: a lack of trust in the emerging global controllers of large parts of our lives. The growing recognition of these real risks has led many to agree that society needs sensible guideposts for the responsible use of AI during the incubator stage of this transformative technology. Currently there are hardly any limits on the use of AI.
It should be borne in mind that regulation can also benefit those deploying the technology as it creates a common set of rules that all have to comply with – as has happened with other areas where technology has emerged in the past, such as data protection (love it or hate it, we have a common set of rules).
So it is a good thing that the EU has started the ball rolling to plug the legal vacuum surrounding the use of AI in business, government and everyday life. But, as we discuss below, what is proposed is both broad and vague. Any such laws also need to work on an international basis, because AI is global: it operates without recognising borders. How far the proposed Regulation will be accepted as a GDPR-like standard by other major jurisdictions will depend on how well it balances the need to control inappropriate use with the need to support innovation and positive deployment in this important area.
In this article we look at what is proposed in the 108-page draft Regulation – but start with the now-relevant question: ‘will this apply to the UK?’
Likely applicability to UK
Although the draft Regulation will not have direct effect in the UK following the UK’s exit from the EU, it is not something that can be ignored by UK organisations for several reasons:
Extra-territorial effect
The draft Regulation proposes that it shall apply to “providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union”1. The effect of this is that where UK-based companies offer AI services into the EU which fall within the definition of a high-risk system (see below for further detail on “high-risk systems”), they will be required to comply with the requirements of the Regulation in order to offer such services into the EU.
EU aim for proposed regulation to be global norm
In introducing this proposed Regulation, the EU is looking to replicate the reach of the General Data Protection Regulation, which has become the model for the regulation of privacy around the world.
The EU has positioned itself well to do this by being a first-mover and actually having a piece of paper to show for the discussions that have been had over the last couple of years. The US, under President Joe Biden, has also made moves towards regulation of AI with a desire to establish “shared democratic norms”2. However, the US is yet to put forward a concrete proposal, so the EU’s draft is currently the one on the table.
If the EU’s approach is adopted as a global norm, the UK may find it is forced to regulate in line with that approach in order to operate and compete in the provision of AI services globally. In any case, UK-based organisations may find themselves subject to the EU’s approach when engaging with the EU and with any other jurisdictions which have adopted it, regardless of whether it is adopted in the UK.
Requirement for equivalence
Although the EU’s proposals on AI go beyond its impact on personal data, the UK may come under pressure to align with the EU’s approach to regulating AI in order to maintain any decision as to the adequacy of the UK’s data protection regime, given how inextricably linked the use of AI is with the processing of personal data.
All of the above suggests that there is likely to be some formal alignment between how the UK and the EU regulate the use of AI, but in any case UK based businesses are likely to be subject to the EU regime, so it is something that they should be preparing for even before the UK makes any formal moves towards regulation itself.
Timeline for implementation
More than a year after the EU first published its White Paper on Artificial Intelligence, we now finally have a draft Regulation, but there is still potentially a long road ahead before it becomes law: the proposal must now go through the usual slow-moving European legislative process, being subject to scrutiny by the European Parliament and by member states in the Council of the European Union.
Challenges for the European legislative machine will include: the lack of worldwide precedent, with the EU being the first major jurisdiction to introduce a proposal to regulate AI; keeping pace with the extremely rapid development of AI; and ensuring a coherent framework of regulation across Europe, particularly in light of the Commission’s data strategy which is being developed in parallel.
All this means it could be several years before a final text is agreed, and even once it is adopted, the current draft envisages an 18-month implementation period before it officially comes into force.
However, given the likely scale of work required to comply with the proposed Regulation, and the increased scrutiny the use of AI is likely to be subject to, it would be prudent to consider making some preparations now.
Scope – What is covered?
What is AI defined as?
Clearly the scope of what the draft Regulation covers is hugely important, especially as AI is a concept which can be interpreted broadly or narrowly.
The proposal seeks to regulate “Artificial Intelligence” or “AI” by regulating the systems that contain AI (not the AI itself) referred to as the “AI systems” in the draft proposal. By taking this approach the proposal seeks to regulate the placing on the market, putting into service and use of AI systems throughout the EU.
When trying to balance the need for legal certainty with the ever-evolving spectrum of what constitutes AI, the proposal has sought to define an “AI system” in what it claims to be a “technology neutral and as future-proof as possible” manner which accounts for “the fast technological and market developments related to AI.” In order to do this, it has first defined an “artificial intelligence system or AI system” as:
Software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Annex I – Artificial Intelligence Techniques and Approaches – is drafted as follows in the proposal:
- Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
- Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
- Statistical approaches, Bayesian estimation, search and optimization.
By referring to Annex I, the proposal states that the Commission will be able to adapt the Annex to align with new technological developments as and when the market develops. As it stands, Annex I contains a high-level list of approaches and techniques for the development of AI. To fall within the definition of an “AI system” we need to combine: (i) these techniques and approaches; with (ii) the stated set of human-defined objectives and the ability to generate outputs influencing the environment the software interacts with.
Given how broadly the definition captures which systems constitute an AI system, and so are to be regulated, this is likely to be the subject of much debate going forward.
“AI systems” – which are regulated?
In terms of risk categories, the draft Regulation classifies AI systems according to the level of risk they pose, which is one of:
- unacceptable (‘Prohibited AI’);
- high (‘High Risk AI’); or
- low/minimal (anything else).
Perhaps not surprisingly, a significant proportion of the draft Regulation is dedicated to the onerous and detailed compliance obligations which accompany systems classed as High Risk, but let’s start with what is actually prohibited…
Prohibited AI
The list of prohibited AI is set out in Title II and is intended to “cover practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm”3.
The prohibitions also cover AI systems which are used for:
- indiscriminate surveillance
- general purpose social scoring (i.e. trustworthiness) of natural persons by public authorities; and
- ‘real time’ remote biometric identification systems in public spaces for law enforcement (although this is not an absolute prohibition and certain limited exceptions apply e.g. threats to life or searching for missing children, provided further criteria are met around proportionality etc.).
The rationale for prohibiting such AI systems is centred around protecting the fundamental values of the EU, such as equality and human rights.
The prohibition on the use of real-time remote biometric identification systems would obviously cover the use of facial recognition technology (FRT) in public spaces. This particular requirement comes against a backdrop of increasing acknowledgment that FRT-based algorithms are often prone to bias when it comes to minority groups, and it will be interesting to see how the law (which obviously overlaps with equality legislation, the Human Rights Act and data protection law) will continue to be applied in this area.
High Risk AI Systems
AI systems are categorised as high-risk according to their intended use and in accordance with existing product safety legislation (and therefore the industry in which they are put to use). Certain AI systems are categorised as High Risk owing to their connection with safety components of products. Of particular note are the High Risk AI systems specifically listed in Annex III. These are systems which are used for:
- Biometric identification and categorisation of natural persons (other than those already Prohibited);
- Management and operation of critical infrastructure;
- Education and vocational training;
- Employment, workers management and access to self-employment;
- Access to and enjoyment of essential private services and public services and benefits;
- Law enforcement;
- Migration, asylum and border control management; and
- Administration of justice and democratic processes.
The draft Regulation does not set out an absolute prohibition on High Risk AI systems: rather, if a system is classed as High Risk then, in order for it to be used lawfully within the EU, it must conform with the requirements set out in Chapters 2-6, which impose various obligations on the provider of the AI system, including obligations relating to data governance, record keeping, transparency and provision of information to users, human oversight, security and conformity assessment procedures. Conformity assessment procedures are essentially a form of audit (carried out by third parties known as conformity assessment bodies) to confirm that the High Risk AI system complies with the obligations set out in Chapter 2.
Transparency
Readers familiar with the requirements under the GDPR around fairness and transparency will recognise similar themes within the draft AI Regulation. In essence, for AI systems where there is human interaction or other touch points between the system and a human (e.g. use of biometrics, facial recognition, emotion detection and, interestingly, even deepfakes), the provider of the system must inform the user of the existence of the AI system. The rationale for such transparency is to enable individuals who will potentially be affected to be informed, and to have greater choice as to whether they wish to proceed with interacting with such systems.
Innovation
The draft Regulation also includes provisions encouraging regulators to set up regulatory sandboxes in order to stimulate innovation. We have already seen various UK regulators, including the FCA and the ICO, set up regulatory sandboxes – which have proved popular and have helped organisations trialling innovative services and products to achieve compliance, whilst advancing the regulator’s knowledge of, and exposure to, the cutting edge of industry, helping to shape future policy and guidance.
Governance – the European Artificial Intelligence Board
Governance systems are proposed at both the EU and national levels within Member States. First and foremost, the draft Regulation proposes a European Artificial Intelligence Board (the ‘Board’) to act as a sort of expert body within the EU, providing advice and assistance to the Commission in order to:
- contribute to the effective cooperation of national supervisory authorities and the Commission;
- coordinate and contribute to guidance and analysis by competent authorities on emerging issues across the EU market; and
- assist the national supervisory authorities and the Commission in ensuring the consistent application of the Regulation.
The tasks of the Board when providing its advice and assistance to the Commission are also listed in the proposal as: (a) collecting and sharing expertise and best practices among Member States; (b) contributing to uniform administrative practices in the Member States; and (c) issuing opinions, recommendations or written contributions on matters related to the implementation of the draft Regulation.
The Commission itself will chair the Board, but the Board will be made up of the European Data Protection Supervisor, which has been earmarked as the EU’s competent authority for the supervision of all EU institutions, agencies and bodies, and the lead officials of the various national supervisory authorities. The national supervisory authorities that will form the remainder of the Board are to be established or designated by each Member State. Given that the EU’s supervisory authority is to be the European Data Protection Supervisor, it is likely that most Member States will look to designate their own data protection supervisors. Once established or designated, these national supervisory authorities will act as the notifying authority and market surveillance authority for their respective Member States.
Concluding thoughts
There is little doubt that the UK will review and consider the extent to which it wishes to support the draft regulatory framework in the coming months – some level of UK regulation is likely to follow, probably similar to the eventual EU position.
The EU has said that it wants to achieve "proportionate and flexible rules" to address the risks of AI and to strengthen Europe's position in setting the highest standard for regulating AI technology. That said, the draft has potentially very broad application, with language that is, at this stage, vague and ambiguous in places. If implemented as drafted, it would place significant limits on the use of artificial intelligence across a myriad of activities – to name just a few: many financial services systems, self-driving cars, hiring decisions, infrastructure, biometric systems, and medical software products and systems.
It is interesting, but understandable, that there is a strong linkage with the data protection authorities and the GDPR – albeit much of what AI is used for, and the issues arising, are quite separate from data protection. Further, the draft does not address some of the key areas of legal risk related to AI, such as liability, intellectual property and competition.
There will no doubt now be close scrutiny of the draft by the technology sector and others affected. The earlier white paper has already attracted much sector input (mostly negative!); however, the initial positive reaction from major players to the new draft suggests an acceptance that some level of regulation is inevitable and that it can have positive outcomes.
Further reading
We helped write a recent global overview of thinking in this area through our close involvement with the International Technology Law Association. This foresaw much of what is now proposed, but considered the ethical and economic merits in detail also. The original and updated report as well as a draft impact assessment tool may all be viewed at https://www.itechlaw.org/ResponsibleAI2021
References
1 Paragraph 11, Preamble to draft Regulation 2021/0106
2 https://www.whitehouse.gov/briefing-room/speeches-remarks/2021/04/16/remarks-by-president-biden-and-prime-minister-suga-of-japan-at-press-conference/
3 Paragraph 5.5.2, Explanatory Memorandum to draft Regulation 2021/0106