By Jade Kowalski, Zoe Carpenter & Astrid Hardy


Published 07 November 2023

Overview

On 6 October 2023, the Information Commissioner's Office (ICO) issued the developers of Snapchat - Snap, Inc. and Snap Group Limited ("Snap") - with a preliminary enforcement notice in respect of a failure to properly assess the privacy risks associated with Snap's 'My AI' chatbot.


'My AI'

'My AI' is a chatbot which allows users to send messages to a computer that mimics human conversation in response. Snap launched it in February 2023 for UK Snapchat+ subscribers, before rolling it out to the entire UK user base in April 2023. Around this time, Snapchat had roughly 21 million monthly active UK users. Built on OpenAI's GPT technology, 'My AI' was the first example of generative AI being embedded into a major UK messaging platform.


ICO investigation

The ICO investigation provisionally found that Snap's risk assessment did not adequately identify and assess the data protection risks posed by the generative AI technology to the millions of UK 'My AI' users. This failure was particularly significant in the context of the large user base, which included children aged 13 to 17.


Preliminary enforcement notice and next steps

The ICO stressed that the findings of the investigation and preliminary enforcement notice are provisional, and that no conclusions should be drawn as to any data protection law breaches. The preliminary enforcement notice sets out the steps which the ICO may require Snap to take.

Snap had until 27 October 2023 to make any representations. It has publicly commented that "My AI went through a robust legal and privacy review process before being made publicly available".

Should the ICO issue a final enforcement notice, Snap may be required to stop offering 'My AI' to UK users until it carries out an adequate risk assessment. Additionally, any ICO fine could be up to £17.5 million or 4% of Snap's annual global turnover, whichever is higher.


Key takeaways

Notably, this is the first time the ICO has acted in respect of generative AI. The ICO has increasingly warned of the data protection issues associated with such technology through, for example, guidance in April 2023 and a reminder in June 2023.

This development is relevant to all organisations considering the use of AI, particularly generative AI, not least because we expect this to be a growing area of focus for the ICO and for supervisory authorities across the EU. In this case, Snap had conducted a specific risk assessment, but the ICO has provisionally concluded that it did not adequately address the relevant risks. This is a timely reminder that such assessments must involve a thorough consideration of all relevant risks, rather than being a simple tick-box exercise.

Further, we have seen other data protection authorities in the EU, such as the Italian Garante (our previous article is linked here), scrutinise Replika, a similar chatbot advertised as a "virtual friend". In fact, we predicted that regulators would soon investigate Snap over its AI chatbot.

The combination of misinformation and the availability of AI "virtual friends" has now, for the first time in the UK, contributed to a prison sentence. This month a criminal court heard evidence from the man who broke into Windsor Castle with a crossbow on Christmas Day 2021, declaring that he wished to kill the late Queen. The evidence showed that the defendant had done so because of the direct encouragement of his AI "virtual friend" on Replika. Similarly, an AI chatbot named Chai has recently been removed from the Apple and Google app stores following evidence that it encouraged underage sex, suicide and murder.

Interestingly, Meta announced this month a range of 28 expert companions, advertised as intended "to help guide users through different life challenges", with the intention that these "virtual friends" will soon become avatars in its Metaverse. What remains to be seen is whether Meta has introduced safeguarding measures before launch. Although many of these "virtual friends" are not intended to be dangerous, more needs to be done at the initial development stage to introduce adequate safeguarding measures, especially when it comes to the protection of children and/or vulnerable adults.

The AI Safety Summit is being held at Bletchley Park this week. In its submission, OpenAI has confirmed that "individualised persuasion" is at the top of its list of risks. We agree, and 2023 will be a significant year for the regulation of AI tools more widely, with a particular focus on the protection of children.
