
Judiciary publishes guidance for judicial office holders on the use of AI


By Christopher Air, Omar Kamal and Ellen McWhirter


Published 07 February 2024

Overview

On 12 December 2023, the Courts and Tribunals Judiciary published guidance ('the Guidance') for judicial office holders (including their clerks and other support staff) on the use of artificial intelligence ('AI'). It was developed in consultation with the Lady Chief Justice, the Master of the Rolls, the Senior President of Tribunals and the Deputy Head of Civil Justice.


The Guidance and Principles

The Guidance sets out key principles to be considered and highlights opportunities and risks associated with AI, with the ultimate objective of ensuring that the judiciary's duty to protect the integrity of the administration of justice is upheld. The principles are summarised below:

  1. Understand AI and its applications
  • Judicial office holders should have a basic understanding of AI, its capabilities and its potential limitations before using it. Examples of limitations identified include: AI tools being a poor way of researching new information that cannot be independently verified; the quality of AI chatbot answers depending on user engagement and the nature of prompts; and currently available tools often having a 'view' of the law based heavily on US law.
  2. Uphold confidentiality and privacy
  • Judicial office holders should not enter private or confidential information into public AI chatbots, as this could become publicly known. AI chatbot history should be disabled where possible, and permission requests to access device information should be refused in all circumstances.
  3. Ensure accountability and accuracy
  • Judicial office holders should verify the accuracy of any information provided by AI before using or relying on it. AI tools can, for example, invent case law (as seen in a recent tax tribunal case), provide incorrect legal information or make factual errors.
  4. Be aware of bias
  • Information generated by AI tools based on large language models (LLMs) will inevitably reflect errors and biases in the underlying training data.
  5. Maintain security
  • Best practices include using work devices and work email addresses when accessing AI tools, and using paid AI platform subscriptions where possible, as these are generally more secure than free versions. It is also recommended to follow relevant security breach processes, e.g. for reporting personal data breaches or disclosures of confidential/sensitive information.
  6. Take responsibility
  • Judicial office holders are personally responsible for material produced in their name (e.g. judgments). Provided the Guidance is appropriately followed, there is no reason why generative AI cannot be a useful secondary tool for research and preparation. Interestingly, there is currently no obligation on judicial office holders to disclose the sources of the research or preparatory work supporting a judgment, so on existing practice they would not need to reveal that they had used generative AI in reaching a decision.
  7. Be aware that court / tribunal users may have used AI tools
  • All legal representatives are responsible for the material they put before the court / tribunal and have a professional obligation to ensure its accuracy and appropriateness. The judiciary may, where appropriate, need to confirm that parties have independently verified the accuracy of research or case law generated with the assistance of AI. This could be necessary where cases involve unrepresented litigants, who are increasingly using AI without the skills to independently verify its output. Judicial office holders should also be aware of the possibility of forgery and deepfake technology.

The Guidance goes on to set out specific examples of uses and risks of AI in the courts and tribunals. Potential uses include: summarising large bodies of text; writing presentations; and administrative tasks such as composing emails. Tasks that are not recommended include legal research that cannot be independently verified, and legal analysis.

The Guidance also provides examples of indications that work may have been produced by AI, including: references to unfamiliar case law; submissions that do not accord with the judge's general understanding of the law in the area; and submissions using American spelling or containing obvious substantive errors.

Key Takeaways

The UK judiciary has traditionally been slower than other legal practitioners to adopt technology such as AI. The Guidance suggests an acceptance that the judiciary, too, is increasingly affected by the use of AI, as well as a recognition that clear principles need to be in place to facilitate its responsible use within the legal system.

Although the Guidance technically applies only to judicial office holders, it sets out useful principles for parties and legal representatives to consider, as well as an indication of the judiciary's expectations around the use of AI in litigation. Given the speed at which AI is developing, the Guidance is expected to change over time, and the judicial group behind it intends to publish an FAQs document in the future.
