The challenge faced by the UK in combating cyber threats is complex, with both the public and private sectors expected to keep their cyber security under continual review and improvement.
The integration of digital technologies means that cyber incidents can cause significant disruption, and that disruption is not necessarily limited to the organisation directly impacted. The CrowdStrike incident of July 2024 demonstrated the widespread impact that a cyber incident can have on wider digital supply chains.
Both the previous and current UK governments, represented by the Department for Science, Innovation and Technology (DSIT), have made it clear that there is a fundamental need to ensure that the UK has adequate cyber security and resilience in place, both generally and when faced with new risks such as the continued development of artificial intelligence (AI).
This overarching strategy comprises a number of pillars, including the expected imminent publication of the Cyber Security and Resilience Bill. The Bill will update the UK's existing regulatory framework, which plays a key role in safeguarding the UK's critical national infrastructure, by:
- Expanding the remit of existing regulation to protect more digital services and supply chains;
- Putting regulators on a strong footing to ensure essential cyber safety mechanisms are being implemented; and
- Mandating increased incident reporting to give government better data on cyber attacks.
In addition, the Product Security and Telecommunications Infrastructure Act 2022 responded to the cyber risk created by connected products operating outside traditional network infrastructure and corporate systems.
The scope of cyber legislation is constantly growing, having expanded from personal data and critical infrastructure to connected devices and beyond.
Supporting this growth is the continuing development of a number of codes of practice which formalise expectations of cyber risk response in relation to specific actions and sectors. These voluntary codes set out good practice in those areas deemed, in the words of DSIT, to have "significant cyber security risks which are not being sufficiently addressed by industry."
DSIT has taken a modular approach to these codes, with an approved code of practice for app store operators and app developers already in place, and a draft code of practice for software vendors ("Software Code") consulted on in 2024. The Government response to the call for views on the Software Code was published on 3 March, confirming that minor edits would be made to the draft code ahead of its publication in 2025, with further implementation guidance to follow.
There have also been recent developments in respect of two key codes of practice discussed below, the Cyber Governance Code of Practice and the AI Cyber Security Code of Practice.
Cyber Governance Code of Practice
As the legislative scope grows to capture more of our cyber-reliant society, it is unsurprising that scrutiny of directors has increased. In 2024, the Government published a draft Cyber Governance Code of Practice ("Cyber Governance Code"), noting that "Boards and directors [should] place the same importance on governing cyber risk as they do with other principal risks."
Developed in collaboration with the National Cyber Security Centre (NCSC) and industry, the code, when finalised, will formalise expectations regarding the governance of cyber security by organisations, including the actions that directors and non-executive directors need to take to meet their responsibilities in this area. The draft code was a response to suggestions that organisations need to know what 'good looks like', bringing together the critical governance areas of which directors need to take ownership.
The draft code, set out at Annex A of the call for views, contained five principles: risk management; cyber strategy; people; incident planning and response; and assurance and oversight. Each principle was accompanied by a number of proposed actions to support compliance.
A call for views was launched, seeking feedback on the design of the code, the proposed compliance actions, and the merits of a proposed assurance process.
The Government has now published its response, confirming that DSIT will work with the NCSC to make minor edits to the Cyber Governance Code before publishing it in early 2025, alongside materials to support implementation.
The responses to the call for views confirmed widespread support for the principles as drafted, with limited support for adding further principles or actions. As expected, the focus of the code remains on its broad target audience of directors who may not be cyber specialists.
Respondents supported an assurance scheme to demonstrate compliance with the Cyber Governance Code, but the Government noted that such a scheme presents considerable challenges. The code will therefore be published without an assurance scheme, with further discussions with key stakeholders to follow on whether a scheme is viable.
The code will be directed at medium and large businesses and organisations, which the Government expects to be in a position to implement it. Smaller businesses will be encouraged to use the code depending on their cyber maturity and any involvement in critical infrastructure.
Some responses to the call for views suggested that the Cyber Governance Code be placed on a statutory footing, whether via integration into an existing piece of legislation or the introduction of new regulations. Although these proposals will not be taken forward, the response notes that, if uptake is limited, firmer levers may be considered, such as the "introduction of legislation and/or the utilisation of public procurement requirements."
AI Cyber Security Code of Practice
In response to the increasing cyber security risks associated with AI, the Government has recently published an updated AI Cyber Security Code of Practice ("the AI Code"). The announcement was a response to the call for views issued in May 2024 on a draft version of the AI Code.
The AI Code published by the Government largely reflects the draft version, with the addition of one further principle, bringing the total to 13. It is identified as distinct from the aforementioned Software Code due to the particular nature of the risks associated with AI.
Specific security risks such as data poisoning, model obfuscation and indirect prompt injection were all raised by respondents to the consultation and helped shape the update. The response does, however, note that AI stakeholders should view the AI Code as an addendum to the Software Code once the latter is published.
The AI Code organises its 13 principles into five phases and is directed at various stakeholders, each of whom holds specific responsibilities for an indicated selection of the principles. Those stakeholders are identified as Developers, System Operators, Data Custodians, End-users and Affected Entities, with their definitions included on the policy page.
Cross-referencing to the Cyber Governance Code, the AI Code notes that senior leaders in these organisations will also have responsibilities to protect staff and infrastructure.
An implementation guide for the AI Code was also published to help organisations understand how each principle can be met, with reference to specific examples of how AI may be utilised, such as chatbot apps, fraud detection, large language model (LLM) providers and open-access LLMs.
The phased structure, the individual principles and some examples of the measures and controls for organisations set out in the implementation guide are discussed below.
Secure Design
- Raise awareness of security risks – Organisations may wish to create AI security awareness training covering basic concepts and threats.
- Design your AI system for security as well as for functionality and performance – Organisations are encouraged to conduct and document business alignment reviews and risk assessments.
- Evaluate the threats and manage the risks to your AI system – Threat modelling should be performed, including when configuration changes are made.
- Enable human responsibility for AI systems – Implement mechanisms for human oversight, then validate and measure the accuracy of oversight decisions (an illustrative sketch follows this list).
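By way of illustration only, the short Python sketch below shows one way an organisation might measure the accuracy of human oversight decisions: by comparing reviewers' verdicts on sampled AI outputs against outcomes later confirmed to be correct. The data model is our own assumption; the AI Code does not prescribe any particular mechanism.

```python
# Hypothetical sketch: measuring the accuracy of human oversight decisions by
# comparing reviewers' verdicts against outcomes later confirmed correct.
# The (decision, confirmed_outcome) pairing is an illustrative assumption.

def oversight_accuracy(reviews: list) -> float:
    """Return the fraction of human oversight decisions confirmed correct.

    Each item pairs a reviewer's verdict on an AI output with the outcome
    subsequently confirmed, e.g. ("reject", "reject").
    """
    if not reviews:
        return 0.0
    correct = sum(1 for decision, outcome in reviews if decision == outcome)
    return correct / len(reviews)

# Example: two of three sampled oversight decisions were confirmed correct.
sample = [("approve", "approve"), ("reject", "approve"), ("reject", "reject")]
print(f"Oversight accuracy: {oversight_accuracy(sample):.0%}")  # prints 67%
```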
Secure Development
- Identify, track and protect your assets – Organisations are encouraged to implement AI asset tracking, which may include the authentication, authorisation and logging of access to those assets (see the sketch after this list).
- Secure your infrastructure – The creation of dedicated development and production environments and AI-specific incident management plans.
- Secure your supply chain – Organisations should adopt secure supply chain frameworks, and document justification for the use of untrusted components.
- Document your data, models and prompts – Development of comprehensive system design and maintenance documentation, including relevant security information.
- Conduct appropriate testing and evaluation – Implementation of comprehensive pre-deployment testing and security assessment processes for all releases.
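To illustrate the asset-tracking measure referred to above, the sketch below shows one possible shape for a registry that authorises, and logs, every access to a tracked AI asset such as model weights or training data. The class names, role scheme and log format are hypothetical assumptions of our own; the AI Code does not mandate an implementation.

```python
# Hypothetical sketch of AI asset tracking with authenticated, logged access.
# "AssetRegistry", the role scheme and the log format are illustrative only.
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("asset-access")

@dataclass
class Asset:
    asset_id: str
    kind: str                      # e.g. "model", "dataset", "prompt"
    allowed_roles: set = field(default_factory=set)

class AssetRegistry:
    def __init__(self) -> None:
        self._assets: dict = {}

    def register(self, asset: Asset) -> None:
        self._assets[asset.asset_id] = asset

    def access(self, asset_id: str, user: str, role: str) -> Asset:
        """Authorise the request, logging every access attempt either way."""
        asset = self._assets[asset_id]
        granted = role in asset.allowed_roles
        log.info("user=%s role=%s asset=%s granted=%s at=%s",
                 user, role, asset_id, granted,
                 datetime.now(timezone.utc).isoformat())
        if not granted:
            raise PermissionError(f"{user} ({role}) may not access {asset_id}")
        return asset

registry = AssetRegistry()
registry.register(Asset("model-v3-weights", "model", {"ml-engineer"}))
registry.access("model-v3-weights", "alice", "ml-engineer")  # allowed and logged
```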
Secure Deployment
- Communication and processes associated with End-users and Affected Entities – Provision of comprehensive user guides and tutorials, and notification of users about security updates.
Secure Maintenance
- Maintain regular security updates, patches and mitigations – Organisations may look to implement a structured patch management process.
- Monitor your system's behaviour – Implementation of comprehensive logging for security and compliance, with appropriate secure storage and retention of those logs (see the sketch after this list).
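As an illustration of the monitoring measure above, the sketch below uses Python's standard logging library to write structured security events to a daily-rotated file with a fixed retention window. The file name, 90-day retention period and event schema are assumptions chosen for the example, not requirements of the AI Code.

```python
# Hypothetical sketch: structured security-event logging with daily rotation
# and a fixed retention window. File name, retention and schema are assumed.
import json
import logging
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler(
    "ai_security_events.log",
    when="D",        # rotate the log file daily
    backupCount=90,  # retain roughly 90 days of logs, then delete the oldest
)
logger = logging.getLogger("ai-security")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

def record_event(event_type: str, detail: dict) -> None:
    """Write one security event as a single JSON line for later analysis."""
    logger.info(json.dumps({"event": event_type, **detail}))

record_event("model_output_flagged",
             {"model": "fraud-detector-v2", "reason": "anomalous score distribution"})
```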
Secure End of Life
- Ensure proper data and model disposal – Organisations should consider developing and implementing a secure transfer and disposal policy (an illustrative sketch follows).
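As a purely illustrative sketch of what such a disposal policy might involve in practice, the example below overwrites a model artefact before deleting it and records the action for audit. It is a simplification: on modern storage, overwriting alone may not render data unrecoverable, and real-world policies often rely instead on encryption-at-rest and key destruction.

```python
# Hypothetical sketch of a disposal routine for model or data artefacts:
# overwrite the file once, delete it, and record the action for audit.
# Note: on journaling filesystems and SSDs a single overwrite does not
# guarantee irrecoverability. All names here are illustrative assumptions.
import os
from datetime import datetime, timezone

def dispose_of_artefact(path: str, audit_log: list) -> None:
    """Zero-fill an artefact, remove it, and append an audit record."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)   # single zero-fill pass (illustrative only)
        f.flush()
        os.fsync(f.fileno())      # force the overwrite to disk
    os.remove(path)
    audit_log.append({"artefact": path,
                      "disposed_at": datetime.now(timezone.utc).isoformat()})

audit_trail: list = []
with open("retired-model.bin", "wb") as f:   # create a dummy artefact
    f.write(os.urandom(1024))
dispose_of_artefact("retired-model.bin", audit_trail)
print(audit_trail)
```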
The AI Code is a clearly focused intervention to help stakeholders in the AI chain understand the baseline security requirements to be implemented to protect AI systems. The full response to the call for views once again noted that, as with the Cyber Governance Code, certain respondents supported making the security requirements mandatory.
However, the Government's efforts will be directed at further international development, as highlighted by the planned submission of the AI Code to the European Telecommunications Standards Institute (ETSI) to assist with the creation of a global standard.
The response to the call for views acknowledged the additional challenges introduced by the AI Code. However, the costs of a successful cyber-attack were considered likely to be greater than the cost of implementing the proposed measures, and implementation should therefore be seen as a benefit to stakeholders.