By Isabella McMeechan & Maia Crockford


Published 26 March 2024

Overview

In a world first, the comprehensive EU AI Act (Act) was approved by the European Parliament on 13 March and is now set to become law from May. A key part of the Act is its list of prohibited AI practices. But what are they, and what does this mean for businesses?

This next article in the DACB AI Explainer series answers these questions and explains what you should be doing, and when, to prepare for the new rules on prohibited practices.

 

KEY TAKEAWAYS

  1. Unlike 'high risk' or other AI systems, which can be used with certain safeguards, prohibited practices are AI uses and systems that are banned by the Act.
  2. Key types of prohibited AI practices are those which unacceptably threaten the rights of citizens. They include AI which causes significant harm, manipulates people, or uses protected characteristics unfairly. Examples include AI systems involving biometric categorisation, profiling, or social scoring leading to unjustified detrimental treatment, and AI which creates facial recognition databases from untargeted internet and CCTV scraping.
  3. You must stop using prohibited AI practices in the EU from the end of November this year*.
  4. The coverage of the Act is broad: it covers almost anyone handling or using AI, and even UK-only businesses should reconsider their use or development of prohibited AI, particularly as the Act can apply where, for example, the AI systems or outputs affect anyone in the EU.
  5. Suggested next steps include:
    1. Keeping tabs on related guidance issued by governmental and industry bodies;
    2. Conducting an AI audit to work out what AI is, or could be, used in your business;
    3. Stopping use of prohibited AI practices;
    4. Considering an AI policy to help guard against future use of prohibited AI;
    5. Upskilling staff and ensuring internal training on the Act and prohibited AI practices.

 

IN-DEPTH ANALYSIS

 

What are prohibited AI practices?

The now-ratified EU AI Act takes a risk-based approach: the severity of the rules depends on the risk posed by a particular use of AI. Whilst certain high-risk AI systems and practices can be used with appropriate safeguards, prohibited AI practices are types and uses of AI which are banned because they pose an "unacceptable risk". These include AI systems which:

  1. are likely to cause significant harm by manipulating people's behaviour to circumvent free will or impair informed decision-making, or by exploiting people's vulnerabilities due to characteristics such as age or disability;
  2. are biometric categorisation systems that infer sensitive characteristics from biometric data;
  3. socially score people based on social behaviour or personal characteristics, leading to detrimental treatment that is unjustified, disproportionate to their behaviour, or applied in unrelated social contexts;
  4. use real-time remote biometric identification in public spaces for law enforcement (except for investigating a serious crime);
  5. assess the risk that people will commit a crime based solely on profiling or an assessment of their personality traits and characteristics;
  6. involve untargeted scraping of facial images from the internet or CCTV footage to develop facial recognition databases; or
  7. infer people's emotions in the workplace or in education (except where used only for medical or safety reasons).

 

Which businesses does this affect, and where? What if my business is UK-only?

The Act covers almost everyone that handles or uses AI. Specifically, the Act distinguishes between 'providers', 'importers', 'distributors', 'deployers' and 'product manufacturers' of AI systems, all of which are in scope of the Act and the ban on prohibited AI practices.

The Act also has broad extra-territorial scope. As well as anyone located in the EU, it also covers those located elsewhere where:

  1. they are providers (or product manufacturers, where combined with their own names and products) placing AI systems on the market or into service in the EU;
  2. they are providers or users of AI systems and the AI output is used in the EU;
  3. they import or distribute AI systems in the EU;
  4. the AI systems or outputs are used in the EU; or
  5. persons affected by the AI system are located in the EU.

UK-only businesses therefore need to be mindful that almost all businesses making available or using AI systems or outputs in the EU, or which affect people located in the EU, will be caught by the Act. Even where there is no potential for a business's AI solutions or outputs to be made available or used in, or to affect people in, the EU, it may well be that similar prohibitions will apply in the UK – although we're of course yet to see exactly how regulators interpret the UK government's guiding principles and existing laws (and how 'pro-innovation' the UK's approach will be in practice).

 

What does this mean for those businesses?

Businesses covered by the Act will need to stop using prohibited AI, or risk facing hefty penalties. These include potential fines – which exceed those for GDPR non-compliance – of up to €35,000,000 or up to 7% of annual worldwide turnover, whichever is greater.

 

What should I be doing and when?

* The ban on prohibited AI applies six months after the AI Act enters into force. Based on the current timetable, this means it will apply from the end of November 2024, so businesses should take any actions before then, with initial reviews of the AI they use happening as soon as possible. Suggested actions and timescales include:

  • Look out for further standards and guidance from EU regulatory and industry bodies, which will help to clarify the provisions of the Act (e.g. technical standards and regulatory sandboxes): now and ongoing.
  • Conduct an AI audit to work out what AI systems are, or could be, used in your business: as soon as possible.
  • Stop use of prohibited AI practices: well before the end of November, allowing time for any necessary transition. You will need to consider your rights to end certain contracts for AI solutions, or the prohibited parts of them.
  • Consider having an AI policy to help protect against future use: by end of November.
  • Think about whether the business has sufficient internal knowledge and support to understand, and ensure ongoing compliance with, the Act (for example, consider training and upskilling staff): well before the end of November.

Look out for future DACB AI Explainer articles, in which we'll explore further themes under the Act. This will include AI systems that are deemed 'high risk' under the Act, and compliance steps for businesses wishing to develop, sell or deploy such systems.

 

Authors