By Jade Kowalski, Charlotte Halford, Peter Given & Hans Allnutt


Published 06 March 2025

Overview

Our 'In Case You Missed It' section of the Data, Privacy and Cyber Bulletin provides readers with a high-level digest of important regulatory and legal developments from February 2025.

 

Contents

  1. Case Law Updates
  2. Regulatory Developments
  3. Data & Privacy Developments
  4. Cyber Developments

 

Case Law Updates

European Court of Justice issues guidance on GDPR fines against subsidiaries (Case C-383/23)

Following a referral from the High Court of Western Denmark in relation to criminal proceedings brought against ILVA A/S, the Court of Justice of the European Union ("CJEU") has issued clarification on the calculation of GDPR fines against subsidiaries and whether the turnover of the subsidiary or the parent company should be the reference point. The CJEU held that GDPR fines against subsidiaries should take into account the worldwide annual turnover of the entire parent company group.

The referral asked for an interpretation of the meaning of the term 'undertaking' within Article 83 GDPR. The CJEU held that, with reference to the 2023 Deutsche Wohnen decision, the meaning of 'undertaking' covers “any entity engaged in an economic activity, irrespective of the legal status of that entity and the way in which it is financed.” For the purposes of a parent company and subsidiary relationship, the concept “designates an economic unit even if in law that economic unit consists of several persons, natural or legal.”

The CJEU held that where a fine is levied for a breach of GDPR and the controller of the personal data "is or forms part of an undertaking" (the parent company), "the maximum amount of the fine is to be determined on the basis of a percentage of the undertaking's worldwide annual turnover in the preceding business year."
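
By way of illustration only: the 4% ceiling below is the Article 83(5) GDPR maximum for the most serious infringements, and the turnover figures are hypothetical. The effect of determining the cap by reference to the undertaking rather than the subsidiary alone can be expressed as:

\[
\text{maximum fine} = \max\bigl(\text{EUR }20\text{ million},\ 0.04 \times \text{worldwide annual turnover of the undertaking}\bigr)
\]

On that basis, a subsidiary with its own turnover of EUR 100 million would face a ceiling of EUR 20 million (4% of EUR 100 million being only EUR 4 million), whereas if the wider group's worldwide turnover were EUR 2 billion, the ceiling would rise to EUR 80 million.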

The full text of the judgment can be found here.

 

CJEU clarifies guidance to be provided to data subjects following automated decision-making (Case C-203/22)

The CJEU has provided guidance on the information that data subjects are entitled to when they are the subject of automated decision-making. The referral resulted from an action commenced in Austria following the refusal of a mobile telephone operator to conclude a contract with a customer that would have involved a monthly payment of €10. The refusal was prompted by an automated credit assessment conducted by Dun & Bradstreet (D&B).

The Austrian court found that D&B had infringed GDPR, but sought clarification from the CJEU on the steps that D&B should have taken when informing the customer of the logic involved in the automated decision-making.

The CJEU held that the controller must describe the procedure and principles applied in the automated decision-making in such a way that the data subject can understand which of their personal data has been used, and how. The mere communication of an algorithm does not amount to a sufficiently concise and intelligible explanation; instead, it may be appropriate to inform the data subject of the extent to which a variation in their personal data would have led to a different result.

The full text of the judgment can be found here, and the press release summarising the decision can be found here.

 

Regulatory Developments

Data (Use and Access) Bill progresses in the House of Commons

The first and second readings of the Data (Use and Access) Bill in the House of Commons were completed in February following its passage through the Lords. The Bill has now been sent to a Public Bill Committee, which is scrutinising it line by line at the time of writing and is expected to report to the House by Tuesday 18 March 2025.

The Information Commissioner also issued comments on the amendments made to the Bill in the House of Lords. The Commissioner welcomed the certainty provided by the simplified definition of 'scientific research', emphasising that further guidance will be offered on what is meant by the 'public interest' in this context.

The Commissioner also noted his intention to continue to monitor parliamentary debate around the question of automated decision-making, and to provide practical support to organisations once the new legislation is passed.

 

European Commission publishes guidelines on 'prohibited AI practices' under the AI Act

The European Commission has provided guidelines on the AI practices deemed unacceptable, and thus prohibited, under the AI Act. Article 5 of the AI Act prohibits the placing on the market, putting into service or use of AI systems for manipulative, exploitative, social control or surveillance practices.

The guidelines provide insight into the Commission's interpretation of the prohibitions established in Article 5, with the aim of ensuring their consistent, effective and uniform application. It should be noted that the guidelines are non-binding; authoritative interpretation can only be provided by the CJEU. Please also note that the Commission has approved the draft guidelines but has not yet formally adopted them.

The guidelines can be found here.

 

European Commission withdraws plans for AI Liability Directive and ePrivacy Regulation

The European Commission has adopted its work programme for 2025, which confirmed that the proposed AI Liability Directive and ePrivacy Regulation have been withdrawn from the legislative programme.

The AI Liability Directive ("AILD"), when proposed in 2022, aimed to "improve the functioning of the internal market by laying down uniform rules for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems." However, the proposal stalled, despite the European Parliament publishing an impact assessment in 2024 which included proposals to expand the AILD into a more comprehensive software liability regime. The removal of the AILD from the work programme was attributed to there being 'no foreseeable agreement'.

The proposed ePrivacy Regulation was initially proposed in 2017, as an update to the 2002 ePrivacy Directive, and as an accompanying piece to the GDPR. Following the substantial delay in progressing the proposal, it has now been withdrawn, with the Commission noting that it was "outdated in view of some recent legislation in both the technological and legislative landscape."

The full work programme with reference to the above can be found here.

 

DSIT establishes 'UK AI Security Institute'

The UK's AI Safety Institute has been recast as the AI Security Institute by the Department for Science, Innovation and Technology. Announcing the change, DSIT stated that it reflected a focus on serious AI risks with security implications, including the potential use of AI to develop chemical and biological weapons and to carry out cyber-attacks.

The Institute will work with wider government and the national security community, building on the expertise of the National Cyber Security Centre, to build up a scientific basis of evidence to help keep the UK safe as AI develops further.

 

Data & Privacy Developments

ICO publishes Tech Horizons report for 2025

The ICO has released a new Tech Horizons report identifying concerns with selected technologies expected to be significantly adopted within the next two to seven years.

This year's report covers the following:

  • Connected transport: the convergence of technologies that is transforming how vehicles operate and interact with their environment and the people they carry.
  • Quantum sensing and imaging, which offer new or radically improved capabilities compared with existing sensors and imaging techniques, with a focus on use cases in healthcare and medical research.
  • Digital diagnostics, therapeutics and healthcare infrastructure, such as smart pills, digital twins and AI-assisted diagnosis.
  • Synthetic media and its identification and detection: content (such as images, videos and audio) that has been wholly or partially generated using AI/machine learning technologies, and the means of detecting it.

In addition, the report provides retrospective reviews of technologies covered in previous Tech Horizons reports. The 2025 ICO Tech Horizons report can be accessed here.

 

EDPB to extend scope of ChatGPT taskforce to AI enforcement

During its February 2025 plenary meeting, the European Data Protection Board (EDPB) decided to extend the scope of its existing ChatGPT taskforce to AI enforcement more broadly. The extended taskforce will allow EU data protection authorities (DPAs) to coordinate on urgent sensitive matters, including through the creation of a quick response team.

This decision can be viewed through the prism of recent developments in respect of other Generative AI platforms such as DeepSeek. In moves reminiscent of steps taken in response to ChatGPT, the Italian DPA, the Garante, ordered DeepSeek to block its chatbot in the country in late January in response to concerns about the collection of personal data from Italian users.

Authorities in Ireland and France are among a number of EU DPAs to have also raised questions with DeepSeek in relation to the processing of personal data of data subjects in their respective countries.

 

ICO joins with other DPAs to reaffirm commitments on data governance

The ICO, together with data protection authorities from Ireland, Australia, South Korea and France, has signed a joint declaration reaffirming their commitment to implementing data governance that promotes innovative and privacy-protecting AI.

The statement highlighted the leading role of DPAs in shaping data governance to address AI's evolving challenges, committing signatories to foster a shared understanding of the lawful grounds for processing data and of proportionate safety measures. The group also committed to monitoring the societal and technical implications of AI, leveraging their own and others' experience and expertise in this area.

The full statement can be found here.

 

AI Playbook for civil servants published by UK Government

The Government Digital Service has published an AI Playbook for civil servants and people working in government organisations to encourage safe, effective and secure use of AI. The Playbook sets out ten common principles to guide the use of AI in government organisations, building on the five principles set out in the white paper, 'A pro-innovation approach to AI regulation'.

The document also discusses the fields, applications and limitations of AI, and the ethical, legal, security, privacy and governance implications of using AI safely and responsibly. The AI Playbook can be found here.

 

EDPB adopts statement on age assurance

The EDPB has adopted a statement on age assurance, providing specific guidance and high-level principles stemming from the GDPR that are to be taken into consideration when personal data is processed in the context of age assurance.

Statement 1/2025 (which can be found here) focuses on the principles applicable to different online use cases, including the use of services which may harm children, and when there is a duty of care to protect children, such as ensuring services are designed or offered in an age-appropriate manner.

 

European Parliament committee raises concerns over EU-US Data Privacy Framework

The European Parliament LIBE Committee has written to the European Commission on the current position of the EU-US Data Privacy Framework.

An extract from the letter, posted on social media, indicates that the concerns raised with the Commission include:

  • Unlike other third countries in receipt of an adequacy decision, the US still lacks a federal data protection law;
  • In light of previous decisions striking down EU-US data transfer mechanisms, European businesses are still left in an uncertain position; and
  • The remedies provided for commercial matters under the data adequacy decision are insufficient.

 

European Parliament summarises algorithm discrimination risks under AI Act and GDPR

The European Parliamentary Research Service (EPRS) has produced a summary of the risks associated with algorithmic discrimination under the AI Act and GDPR. The summary notes that the legal uncertainty created by the interplay between the AI Act and GDPR may need to be resolved by legislative reform or further guidance.

The EPRS summary can be found here.

 

Cyber Developments

UK Government publishes response to call for views on cyber governance

The UK Government has confirmed it is finalising a cyber governance code of practice to help boards and directors understand the minimum requirements for overseeing cyber risk management. The initial call for views was issued in January 2024.

The response states that a final version of the code is expected to be published sometime in early 2025. The full Government response can be found here, and our cyber team have reviewed the proposals in detail in our accompanying cyber analysis piece.

 

UK Government publishes AI cyber security code of practice

The Department for Science, Innovation and Technology (DSIT) has published a new, voluntary code of practice for artificial intelligence (AI) cyber security. As above, the code of practice is discussed in our detailed cyber analysis piece for this month.

 

UK National Cyber Security Centre issues guidance on edge device security

The NCSC has partnered with a number of other national cybersecurity agencies to issue guidance for manufacturers of edge devices, which are internet-connected devices sitting at the 'edge' of a network, acting as entry points for data between local networks and the wider internet.

These devices include routers, smart appliances, IoT devices, sensors and cameras. The guidance sets out steps that manufacturers can take to ensure that network defenders can easily detect malicious activity and investigate following intrusions.

The guidance can be found here.

 

European Commission launches new cybersecurity blueprint

The European Commission has published a proposal to ensure an effective and efficient response to large-scale cyber incidents. The proposed plan builds on existing frameworks, setting out measures to strengthen collaboration, secure communication and support strategic efforts to counter disinformation.

The Commission's press release can be found here and the full text for the blueprint can be found here.

 

Australian Department of Home Affairs bans use of Kaspersky products

The use of Kaspersky products and web services on Australian Government systems and devices has been banned by the Department of Home Affairs. The direction has been issued in response to Kaspersky representing "an unacceptable security risk… arising from threats of foreign interference, espionage and sabotage."

The full text of the direction from the Department of Home Affairs can be found here.
