The integration of AI into the legal profession is transforming how legal services are delivered, bringing improved efficiency and capabilities that were previously unimaginable. Lawyers will need to embrace AI to remain relevant and competitive in an increasingly tech-driven industry, and practitioners with strong technical IT skills will be needed to manage the changes required to make AI part of day-to-day legal practice.
On 1 May 2024, in its article "Why it's time for law firms to start investing in generative AI", The Times reported that "Goldman Sachs has predicted that the legal industry is likely to be the second most impacted by AI, with 46 per cent of tasks able to be automated by this technology".
Law firms have been using AI for some time. Disclosure, for example, has traditionally involved manually sifting through large volumes of documents to identify relevant information. AI tools have revolutionised this process by automating document analysis and management, using natural language processing (NLP) to identify and classify documents for relevance, privilege, and confidentiality. The benefits are substantial: increased efficiency, greater accuracy, and significant cost reductions, enabling legal teams to focus on more strategic tasks. AI tools can also quickly review contracts to identify key terms, anomalies, and potential risks, and feed these back to fee earners. In legal research, AI can retrieve legal information and precedents far faster than traditional methods, helping to ensure that lawyers have access to current and relevant material.
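To make the idea of automated document screening concrete, the toy sketch below flags documents for relevance and potential privilege using simple keyword matching. This is purely illustrative: real e-disclosure platforms rely on trained NLP models rather than keyword lists, and every term and category shown here is a hypothetical example, not the method of any particular product.

```python
# Illustrative only: a toy relevance/privilege screen for disclosure review.
# Real e-disclosure tools use trained NLP models, not keyword lists; all
# terms and categories below are hypothetical examples.

PRIVILEGE_TERMS = {"legal advice", "counsel", "privileged and confidential"}
RELEVANCE_TERMS = {"contract", "breach", "delivery date"}

def classify_document(text: str) -> dict:
    """Flag a document for relevance and potential privilege."""
    lowered = text.lower()
    return {
        "relevant": any(term in lowered for term in RELEVANCE_TERMS),
        "potentially_privileged": any(term in lowered for term in PRIVILEGE_TERMS),
    }

doc = "Privileged and confidential: counsel's note on the alleged breach of contract."
print(classify_document(doc))
# → {'relevant': True, 'potentially_privileged': True}
```

Even this crude filter shows why the technology saves review time: documents unlikely to matter can be set aside automatically, leaving fee earners to judge the borderline cases.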
As it evolves, AI is transitioning from basic automation to providing support that enhances the decision-making of legal practitioners, who maintain oversight. The launch on 30 November 2022 of ChatGPT, a chatbot built by OpenAI and based on large language models (LLMs), is credited with starting the current AI boom, as it is seen to mimic human intelligence and generate human-like text. Many other LLM-based AI systems have followed, such as Microsoft Copilot and Gemini (formerly Google Bard).
Generative AI uses deep machine learning algorithms. LLMs, which underpin generative AI, use statistical models to scan vast amounts of text data, learning the connections and patterns between words and phrases. Once trained, they can be used to create, or "generate", content such as text, graphics, and documents, and to read natural language. As set out in the Law Society guidance "Generative AI – the essentials" (https://www.lawsociety.org.uk/topics/ai-and-lawtech/generative-ai-the-essentials): "In simple terms, traditional AI recognises, while generative AI creates".
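The statistical idea described above can be sketched in miniature: count which words tend to follow which in a body of text, then generate new text by repeatedly picking a likely next word. Real LLMs use neural networks trained on billions of documents rather than simple word counts, and the one-sentence "corpus" here is a made-up example, but the principle of predicting the next word from learned patterns is the same.

```python
# A highly simplified sketch of the statistical idea behind LLMs: learn which
# words tend to follow which, then generate text one likely word at a time.
# Real LLMs use neural networks over vast corpora; this toy example is
# purely illustrative.

from collections import Counter, defaultdict

corpus = "the court held that the contract was valid and the claim failed".split()

# "Training": for each word, count which words follow it and how often.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int) -> str:
    """Generate text by always choosing the most frequent next word."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 4))
# → the court held that the
```

Note that the model produces fluent-looking output without any notion of whether it is true, which is precisely why hallucination, discussed below, is an inherent risk rather than an occasional glitch.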
One of the challenges with generative AI is ensuring the content generated is accurate and reliable and does not contain "hallucinations": responses generated by AI that contain false or misleading information presented as fact. The issue is that LLMs are designed to provide answers, which they do remarkably well in many cases, but the answers they provide are not always true – whether because the model does not have the information or because it fills in gaps. An LLM cannot evaluate the extent to which its output is true or false; it cannot exercise judgment.
In Felicity Harber v The Commissioners for HMRC [2023] UKFTT 1007 (TC) a litigant in person sought to rely on nine authorities, said to be decided on similar grounds, in support of her appeal. The authorities were found to be hallucinations from "an AI system such as ChatGPT". None of the cases existed on the FTT website. The Tribunal highlighted the waste of time and money required to expose this and the harm caused to the judicial system.
The Law Society guidance, published in November 2023, provides a broad overview of the opportunities and risks the legal profession should be aware of with AI technologies, together with a checklist for those considering the use of generative AI. Judicial Guidance on Artificial Intelligence was published in December 2023 (https://www.judiciary.uk/guidance-and-resources/artificial-intelligence-ai-judicial-guidance/). The SRA has also published its "Risk Outlook report: The use of artificial intelligence in the legal market", advising on the opportunities and risks of generative AI and on risk management (https://www.sra.org.uk/sra/research-publications/artificial-intelligence-legal-market/). It is expected that the SRA will introduce a regulatory framework to govern the use of AI.
Simon Konsta of DACB states: "Regulators are not going to tolerate any derogation from the established concepts of professional standards and conduct and customer care. They will still expect us to act as trusted professional advisers, using our knowledge and expertise to guide and advise clients. In such areas, we cannot allow AI to stand in our shoes. Picking a path through the AI labyrinth requires a clear sense of purpose. Firms need to be very clear on where and how they are deploying AI and manage expectations with their client base, their staff and teams. Clear-sighted governance, robust risk management and technical oversight are absolutely essential with a technology that is going to be evolving quickly. The deployment of AI in professional services firms is a Board issue first and foremost. This means oversight of the development of the technology, oversight of the broader acceptance and use of AI, oversight of the use by lawyers and transparency with clients on how the output is used and where safeguards are applied."
Naturally, the rise of AI has sparked concerns about potential job losses in the legal profession. However, the prevailing view is that AI will augment rather than replace human lawyers. AI can handle repetitive and mundane tasks, freeing up solicitors to engage in more complex and nuanced work that requires human judgment and creativity. On 22 May 2024 The Right Honourable Sir Geoffrey Vos, Master of the Rolls, delivered a speech to the Professional Negligence Bar Association, "Damned if you do and damned if you don't: is using AI a brave new world for professional negligence?" (https://www.bailii.org/uk/other/speeches/2024/O7RCQ.html). As Sir Geoffrey stated in his lecture: "We will all need to understand clearly where the human adds value. I am sure that humans will always be needed to explain legal situations and solutions to human clients, and to try to make sure that automated legal work product accords with what a human would expect to be the outcome".
The profession has historically been slow to adapt, and firms need to bring their people with them on the AI journey. The pace of change will be different depending on the type of firm, its size, resources, work type, client profile – and the race to be the first firm leading in this space will not necessarily create winners. Tim Ryan of DACB sits on the City of London Law Society (CLLS) AI Committee, which is made up of individuals from leading City law firms who are working to coordinate the City's response to AI and are examining and helping to provide guidance on the use of AI technology. DACB already uses AI within aspects of its own IT infrastructure and has an internal AI Committee to prepare DACB's own AI Strategy. As part of this, DACB has been testing and building a number of AI tools.
Understanding and working with AI is vital for lawyers, as those who can effectively leverage these technologies will likely enhance their practice and provide better service to their clients. AI must be implemented properly and embedded within a firm's culture. Whilst AI will become part of a new normal for professional services firms, there is a clear line where AI stops and skilled professionals step in.