In May 2024, the Organisation for Economic Co-operation and Development (OECD) revised its Recommendation on Artificial Intelligence (the Recommendation) to reflect technological and policy developments, in particular the emergence of generative AI.
By way of reminder, the OECD is an international organisation that brings together 38 member countries[1] to co-operate on economic, social and environmental issues. It develops legal instruments, such as decisions, recommendations and agreements, that provide guidance and standards.
The Recommendation was originally adopted by the OECD Council in 2019 and subsequently revised in 2023. It is the first intergovernmental standard on AI and aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values.
Although OECD recommendations are not legally binding, they represent a political commitment to the principles they contain and entail an expectation that OECD members will do their best to implement them. In the case of the Recommendation, both members and non-members of the OECD have agreed to adhere to the principles; a full list of adherents is published on the OECD website.
Therefore, the Recommendation will influence AI policy in the UK and should be of interest to UK businesses: its principles, in conjunction with the five cross-sectoral principles proposed in the UK Government's AI Regulation White Paper[2], are a useful starting point for informing internal AI governance policies.
A brief overview of the OECD Recommendation on Artificial Intelligence (AI)
Background to the Recommendation and the OECD's work on AI
- The OECD has been working on AI policy activities since 2016. It recognised the need to shape a stable policy environment at the international level to foster trust in and adoption of AI in society, and to address the challenges and opportunities posed by AI for human rights, democracy, labour, competition, innovation and well-being. It therefore established an informal AI Group of experts, comprising over 50 members from different disciplines and sectors, to scope principles for trustworthy AI.
- Based on the output of this expert group, the OECD Digital Policy Committee (DPC) developed a draft recommendation that was adopted by the OECD Council on 22 May 2019.
- The Recommendation was revised by the OECD Council on 8 November 2023 to update its definition of an "AI system" and to ensure that it remained technically accurate and reflected technological developments.
- The Recommendation was further revised by the OECD Council on 3 May 2024. The revisions included: clarifying the information that AI actors should provide about AI systems; recognising the importance of addressing misinformation and disinformation; addressing safety concerns, including uses outside an AI system's intended purpose and intentional or unintentional misuse; emphasising responsible business conduct; promoting interoperable governance and policy environments; and introducing an explicit reference to environmental sustainability.
Principles for responsible stewardship of trustworthy AI
The Recommendation contains five high-level, values-based principles that are relevant to all stakeholders involved in AI systems, and it calls on AI actors to promote and implement them according to their roles and the context. The principles are:
- Inclusive growth, sustainable development and well-being: Stakeholders should engage in responsible stewardship of trustworthy AI to ensure beneficial outcomes for people and the planet, such as "augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations", reducing inequalities, and protecting natural environments to ensure sustainable development and environmental sustainability.
- Respect for the rule of law, human rights and democratic values, including fairness and privacy: AI actors should respect the rule of law and human rights throughout the AI system lifecycle. These include "non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and labour rights". They should also address misinformation and disinformation, while respecting freedom of expression. "They should implement mechanisms and safeguards, such as capacity for human agency and oversight, to address risks arising from uses outside of the intended purpose, intentional misuse, or unintentional misuse".
- Transparency and explainability: AI actors should provide transparency in relation to AI systems. They should provide "meaningful information, appropriate to the context, and consistent with the state of the art" to enable a general understanding of the AI systems, to make stakeholders aware of their interactions with AI systems, to enable those affected by an AI system to understand the output, and to enable those adversely affected by an AI system to challenge its output.
- Robustness, security and safety: AI systems should be robust, secure and safe throughout their entire lifecycle so that they function appropriately and do not pose unreasonable safety and/or security risks. AI actors should have in place mechanisms to ensure that if "AI systems risk causing undue harm or exhibit undesired behaviour, they can be overridden, repaired, and/or decommissioned safely". Mechanisms should also be in place to strengthen information integrity.
- Accountability: AI actors should be "accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and the state of the art". They should ensure "traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system's outputs and responses to inquiry". They should also apply a "systematic risk management approach to each phase of the AI system lifecycle" and adopt responsible business conduct to address these risks, including through co-operation with other AI actors.
National policies and international co-operation for trustworthy AI
Although not directly relevant to businesses, it is worth noting the five recommendations to Governments, which they should implement in their national policies and use to facilitate international co-operation. In summary, the recommendations are:
- Investing in AI research and development: Governments should consider long-term public investment, and encourage private investment, in research and development to spur innovation in trustworthy AI, focusing not only on technical issues but also on the associated social, legal and ethical implications. They should encourage investment in open-source tools and in open datasets that are representative and respect privacy and data protection, so that AI research and development is free of harmful bias and so as to improve interoperability and the use of standards.
- Fostering an inclusive AI-enabling ecosystem: Governments should foster the development of an inclusive, dynamic, sustainable and interoperable digital ecosystem for trustworthy AI. Such an ecosystem includes data, AI technologies, computational and connectivity infrastructure, and mechanisms for sharing AI knowledge. Governments should also promote mechanisms, such as data trusts, to support the safe, fair, legal and ethical sharing of data.
- Shaping an enabling interoperable governance and policy environment for AI: Governments should create an agile policy environment that supports the transition of trustworthy AI systems from research and development to deployment and operation, and should consider providing controlled environments in which AI systems can be tested. They should also adopt outcome-based approaches that provide flexibility in achieving governance objectives, and co-operate across jurisdictions. They should review and adapt their AI policy and regulatory frameworks to encourage innovation and competition for trustworthy AI.
- Building human capacity and preparing for labour market transformation: Governments should work closely with stakeholders to prepare for the transformation of the workplace. They should empower people to use and interact with AI systems effectively by equipping them with the necessary skills, for example through training programmes, and should provide support for those affected by displacement. They should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, and to aim to ensure that the benefits from AI are broadly and fairly shared.
- International co-operation for trustworthy AI: Governments should actively co-operate to advance these principles and to progress responsible stewardship of trustworthy AI. They should encourage international, cross-sectoral initiatives to garner long-term expertise on AI. They should promote the development of multi-stakeholder global technical standards and gather the evidence to assess progress in the implementation of these principles.
How the Recommendation will be implemented
To support the implementation of the Recommendation, the OECD launched the AI Policy Observatory (OECD.AI) and the informal OECD Network of Experts on AI (ONE AI) in February 2020. OECD.AI is a hub that includes a live database of AI strategies, policies and initiatives, which countries and other stakeholders can share and update. ONE AI is an informal group of AI experts from government, business and academia that provides AI expertise to the OECD. The network provides a space for the international AI community to discuss shared AI policy opportunities and challenges.
Why is the Recommendation relevant to our business?
As noted above, the Recommendation is not legally binding on the adherent Governments or their national stakeholders. However, it is significant because it provides a common, internationally approved framework and a set of good practices for the development and use of trustworthy AI systems that respect human rights, democratic values and the rule of law.
Many organisations are increasingly adopting AI applications across their business, which brings both opportunities and risks. The principles set out in the Recommendation can help businesses address those risks: aligning an AI governance strategy with the principles will help ensure that AI systems are trustworthy, transparent, robust, secure, safe and accountable. Moreover, the Recommendation can help organisations anticipate and adapt to the policy and regulatory changes that may follow from its implementation by Governments, and participate in the multi-stakeholder and international co-operation initiatives that it promotes.
[1] Australia, Austria, Belgium, Canada, Chile, Colombia, Costa Rica, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Türkiye, United Kingdom, United States
[2] https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper