Generative AI: A means to an end or an end to means?

By Charlotte Halford & Sonali Malhotra

Published 09 April 2024

Overview

Over the past six years, Generative Artificial Intelligence ("Gen AI") technology and Large Language Models ("LLMs") such as ChatGPT, Bing Chat and Google's PaLM have emerged, reshaping the public discourse on the opportunities and risks posed by AI as these systems continue to grow and develop advanced capabilities.

Lloyd's has recently published a report on the rapid evolution of Gen AI ("the Lloyd's Gen AI Report"), which considers its transformative implications for the cyber risk landscape, the widespread impact that cyber threats currently have on national security and businesses, and the measures that must be taken to mitigate the frequency, severity and diversity of smaller-scale cyber losses, which are expected to grow over the next one to two years.

Gen AI presents incredible opportunities for innovation, simplifying access to tools and services for mass populations. To date, applications of LLMs to cybercrime have been minimal, owing to the effectiveness of AI model governance, cost and hardware barriers, and content safeguards, as acknowledged in the Lloyd's Gen AI Report. However, sophisticated Gen AI technology is developing so rapidly that, as of September 2023, numerous LLMs exist that can run on commodity hardware such as a MacBook. While the Lloyd's Gen AI Report states that it will take some time to understand the extent to which the capabilities of these specialised and powerful models may be used for illegal purposes, we are entering an era of proliferation in which threat actors are increasingly empowered to maliciously exploit and misuse these evolving models and tools to harm individuals, property, and both tangible and intangible assets.

Dr Kirsten Mitchell-Wallace, Director of Portfolio Risk Management at Lloyd's, has said: "Generative AI is not the first, and won't be the last, disruptive technology to impact the cyber threat landscape, so it is critical that businesses improve their risk mitigation, security and defence technologies, as well as seek appropriate risk transfer today, more than ever before." These comments, read alongside recent news that the proposed UK AI regulation bill has received its second reading in Parliament, are promising as to the safeguards that may prevent the release of future advanced models or proliferation-enhancing technologies, while also reiterating the importance of individuals, businesses and the broader market taking accountability for managing their cyber risks, in order to protect consumers and engender public trust in emerging AI technologies and systems.

With the above in mind, it is increasingly important for cyber insurers to routinely review their policy wordings to ensure they keep pace with emerging AI developments and meet customer needs. Businesses, particularly those that utilise AI technologies and LLMs, should have appropriate risk mitigation strategies in place, together with adequate systems for managing and protecting personal data. We will continue to monitor this landscape and advise on developments in the AI field.

If you would like to discuss the contents of this article, please contact the authors with any further questions.
