
The DACB AI Explainer - The What, Why and How: What's AI? Why should I care? How is it being regulated?


By Jade Kowalski, Isabella McMeechan & Ellen Husion


Published 28 September 2023

Overview

Following the surge of AI excitement and fear earlier this year (in largely equal measure), the topic is now high on the agenda of many organisations. DAC Beachcroft's "AI Explainer" series aims to help you cut through the headlines – explaining what AI is, balancing the associated opportunities with the risks and, most importantly, helping you to understand what AI means for you and your business.

We recognise that organisations are at different stages in their AI journey, so each article in our series provides both a "key takeaways" and an "expert analysis" section. For those of you considering the use of AI for the first time, or looking for a brief overview to share with senior stakeholders, our "key takeaways" provide a digestible, easy-to-understand summary. For those evolving an existing use of AI, or looking for more detail, our "expert analysis" section provides deeper technical analysis and commentary.

Throughout this series, you will hear from experts across DAC Beachcroft including those with a focus on technology, privacy and intellectual property. To kick off our series: this article explains some of the fundamental concepts of AI and answers key 'what, why, and how' questions.

KEY TAKEAWAYS

  1. What is AI? Essentially, Artificial Intelligence (AI) refers to systems or machines which appear to have elements of human-like intelligence. Not all AI is created equal; the different types of AI vary in complexity - from basic 'reactive' machines (which respond to new data in a predictable way) to 'generative AI' (which can teach itself and create new, original content).

  2. Why does it matter? The use of AI can offer considerable benefits and opportunities, from simple business efficiencies to competitive commercial advantages. However, it can also carry significant risks, including legal and regulatory enforcement.

  3. How is AI being regulated? There is currently no legislation or regulation specifically designed for AI in the UK. However, there is much in the pipeline, and regulators around the world are taking different approaches, varying from specific, rules-based legislation to lighter-touch, principles-based regimes.

EXPERT ANALYSIS

Since the sudden growth in generative AI in early 2023, barely a day has passed without AI hitting the headlines. In some instances, it has been proclaimed an existential threat to humanity and blamed for spreading disinformation in recent elections. In others, it has been billed as a revolution, with the British Computer Society publishing a letter calling for AI to be seen as a "force for good".

So what is it? Why should you care? And how are regulators dealing with it?

1. What is AI?

'AI' is already being used by businesses on a large scale; however, there is currently no official legal definition of 'Artificial Intelligence' in the UK. Numerous authors and organisations have attempted to define it and, although there is no universal consensus on the specifics, all broadly accept that AI describes systems or machines which emulate human-like intelligence. Although not codified in law, the UK Government's AI White Paper of 29 March 2023 defines AI systems by reference to two characteristics: 'adaptivity' and 'autonomy'.

Not all AI is created equal

There are multiple types of AI which vary in complexity, ranging from basic 'reactive' systems which perform predictable analysis and tasks on new data, to more complex AI - with the latter dominating headlines. These complex models include:

  • Machine learning: Types of AI which use 'algorithms' (sets of instructions or procedures designed to achieve certain outcomes) to learn from data inputted into the system. These systems can make predictions based on that data, and can adapt to new data without having to be programmed to do so each time (see the illustrative sketch after this list).

  • Deep learning: A type of machine learning in which the machine uses multiple processing layers, known as 'neural networks' (so named because they loosely mimic the way in which neurons in the brain interact), to model and learn from complex patterns within a data set.

  • Generative AI: Technology which uses machine learning models (particularly 'large language models' or 'LLMs'), trained on huge sets of data, to generate new, original content – including text, images, music and video. The output of generative AI is often referred to as 'synthetic data' or 'synthetic media'. This is the category into which ChatGPT fits ("GPT" stands for Generative Pre-trained Transformer).
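
To make the 'machine learning' concept concrete, the short sketch below shows, in Python using the open-source scikit-learn library, a system learning a pattern from example data and then making a prediction on new data without being reprogrammed. This is a toy illustration only; the scenario, figures and variable names are invented for the purpose of this example.

```python
# A toy machine learning example: the system 'learns' a pattern from data,
# then applies it to new data it has never seen, without being reprogrammed.
# Assumes the open-source scikit-learn library is installed.
from sklearn.linear_model import LinearRegression

# Invented training data: hours of machine downtime vs. repair cost
hours = [[1], [2], [3], [4], [5]]
cost = [120, 210, 330, 410, 540]

model = LinearRegression()
model.fit(hours, cost)         # the model infers the relationship from the data

# The same model now handles new, unseen input with no further programming
print(model.predict([[6]]))    # approximately [634.] - an extrapolated prediction
```

By way of contrast, deep learning would replace the single linear relationship above with many stacked 'neural network' layers, while generative AI goes a step further, producing entirely new content (text, images and so on) rather than a numerical prediction.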

When considering the risk and opportunity presented by a specific use of AI, it is crucial to understand the type of AI in question, to ensure that all relevant risks are properly addressed and, conversely, that the approach being taken is not overly risk averse in the context.

2. Why does it matter?

Both the rapid acceleration of AI advancement and differing national approaches to regulation have the potential to profoundly impact individuals and businesses. AI can offer considerable benefits and opportunities, from simple business efficiencies to competitive commercial advantages. Arguably, for almost all industries, there is an inherent risk in not engaging with AI – a business taking such an approach may find itself rapidly left behind. However, any use of AI must be strategically and carefully considered to ensure that risks are mitigated appropriately (or at least brought within an organisation's risk tolerance).

The key legal and regulatory risks which we will be considering over the course of this series include those related to:

  • Data protection: Ensuring that personal data is used compliantly will be a key issue arising out of any AI initiative, with a failure to comply potentially resulting in significant enforcement action by data protection authorities, alongside privacy claims and reputational damage.

  • Confidentiality: Both in terms of legal duties of confidentiality and those arising out of contractual commitments.

  • Competition law: Those using AI will need to ensure that they are doing so in a way that complies with competition law. Breaches (for example, facilitating algorithmic price fixing or collusion, with or without the knowledge of the user) can lead to significant fines of up to 10% of worldwide group turnover.

  • Copyright and other intellectual property laws: Through the use of unlicensed content, which may include copyrighted materials, within a model's training dataset (several court cases are currently grappling with this very issue), or through concerns over the protection of your own intellectual property.

  • Negligence claims: Arising out of the development of an AI product or the use of its output.

  • Sector specific regulatory breach: For example, a breach of FCA principles.

Importantly, the risks set out above should be considered at each stage of the AI lifecycle: design; data collection and selection; development and evaluation; deployment and monitoring; and decommissioning.

Governments and regulators across the globe are trying to account for all of this, leading us to ask…

3. How is AI being regulated?

The use of AI is not completely unregulated. Existing regimes already provide a framework for its responsible use. In particular, in respect of AI which processes personal data, the data protection regime (with well-developed principles such as transparency and fairness) already governs its use, and we have seen data protection regulators take a proactive role in issuing guidance and intervening (for example, the temporary ban of ChatGPT in Italy).

At an EU level, the draft EU AI Act seeks to govern AI through prescriptive, detailed legislation. In contrast, the UK approach has been billed as "pro-innovation": principles-based governance enforced by existing regulators (although we note signs that this position may be evolving towards a more balanced approach).

Look out for our next article in this series coming soon!
