How to assure trustworthy AI in local government


The rise of Artificial Intelligence (AI) is increasingly changing how we work, live, and engage with others. AI technologies underpin the digital services we use every day and are helping to make public services more personalised and effective. However, to fully deliver its potential benefits, AI must be developed and deployed safely and responsibly.

To ensure that AI systems are trustworthy, it’s crucial that organisations using AI embed effective systems and processes to understand how their AI systems work and to ensure they are being used responsibly. But how can organisations achieve this in practice?

What is AI assurance?

A key way organisations can ensure their systems are trustworthy is through assurance. AI assurance involves measuring, evaluating and communicating information about a system to demonstrate its trustworthiness to a range of audiences. These include regulators (for compliance), the public (to support confidence and consent) and internal teams (to ensure effective management).

Measure: Organisations must collect sufficient data about how an AI system is performing to understand how it operates.
Evaluate: Organisations must assess the risks and effects of deploying the AI system and consider its implications against standards and regulatory guidance.
Communicate: Organisations must communicate information about the trustworthiness of the system effectively and appropriately to the intended audience.
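To make these three activities concrete, here is a minimal sketch of how they might fit together in code. It is an illustration only: the predictions, the accuracy metric and the 0.75 threshold are hypothetical assumptions, not part of any DSIT guidance.

```python
# Minimal sketch of the measure -> evaluate -> communicate loop.
# The data, metric, and threshold below are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class AssuranceRecord:
    metric: str
    value: float
    threshold: float
    passed: bool

def measure_accuracy(predictions, labels):
    """Measure: collect data about how the system is performing."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def evaluate(value, threshold):
    """Evaluate: assess the measurement against an agreed standard."""
    return value >= threshold

def communicate(record):
    """Communicate: present the result in a form the audience can use."""
    return json.dumps(asdict(record), indent=2)

predictions = [1, 0, 1, 1, 0]  # hypothetical model outputs
labels = [1, 0, 1, 0, 0]       # hypothetical ground truth
accuracy = measure_accuracy(predictions, labels)
record = AssuranceRecord("accuracy", accuracy, 0.75, evaluate(accuracy, 0.75))
print(communicate(record))     # e.g. for an assurance report or audit log
```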


Assurance is also an important tool for supporting regulatory compliance as it provides a framework for monitoring and delivering on the regulatory outcomes set out in the UK Government’s white paper: A pro-innovation approach to AI regulation.

AI assurance mechanisms

There are a range of AI assurance mechanisms available, from quantitative approaches that measure results to a high degree of accuracy and certainty (e.g. performance testing and formal verification) to qualitative approaches, such as risk and impact assessments, which ensure that potential risks and impacts are considered before deployment.
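As a simple illustration of a quantitative mechanism, the sketch below runs a basic performance test that compares accuracy across two groups and flags disparities for further review. The evaluation data, the group labels and the five-percentage-point tolerance are all invented for the example.

```python
# Illustrative performance test: compare accuracy across two hypothetical
# groups and flag any disparity that exceeds an agreed tolerance.
from collections import defaultdict

# Invented evaluation data: (group, prediction, true label).
results = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
]

by_group = defaultdict(list)
for group, pred, label in results:
    by_group[group].append(pred == label)

accuracies = {g: sum(hits) / len(hits) for g, hits in by_group.items()}
gap = max(accuracies.values()) - min(accuracies.values())

print(accuracies)
if gap > 0.05:  # assumed tolerance of five percentage points
    print(f"Accuracy gap of {gap:.0%} exceeds tolerance: refer for impact review.")
```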

These mechanisms can, and should, be used together across the lifecycle of an AI system. Using a diverse range of tools allows for a proportionate approach to assurance: where your use case is lower risk, your organisation will be able to rely on a smaller range of mechanisms, whereas higher-risk use cases will require a more robust combination.
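One way to operationalise that proportionality is to record, in policy or configuration, which mechanisms each risk tier requires. The tiers and mechanism names below are purely illustrative, not an official taxonomy.

```python
# Hypothetical mapping from risk tier to required assurance mechanisms.
# The tiers and mechanism names are illustrative only.
ASSURANCE_BY_RISK = {
    "low": ["performance testing"],
    "medium": ["performance testing", "bias audit", "impact assessment"],
    "high": ["performance testing", "bias audit", "impact assessment",
             "formal verification", "independent audit"],
}

def required_mechanisms(risk_tier):
    if risk_tier not in ASSURANCE_BY_RISK:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")
    return ASSURANCE_BY_RISK[risk_tier]

print(required_mechanisms("high"))
```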

Further information on available assurance mechanisms can be found in our Introduction to AI Assurance, with practical case studies in our Portfolio of Assurance Techniques.

How is the government supporting the AI Assurance ecosystem?

In 2021, we published the Roadmap to an Effective AI Assurance Ecosystem, which set out the steps needed to grow the UK’s AI assurance ecosystem. Since then, DSIT’s AI assurance programme has delivered a range of products to raise awareness and support the development of the UK’s assurance industry. Significant publications include the Introduction to AI Assurance, published in February 2024, which introduces AI assurance to a general audience, and our Responsible AI in Recruitment guidance, which supports those procuring and deploying AI in the HR and recruitment sector. We have also developed a Portfolio of Assurance Techniques that sets out a suite of practical assurance use cases to help organisations find and adopt these techniques.

Alongside guidance, DSIT is supporting organisations to develop innovative assurance mechanisms, for example through its Fairness Innovation Challenge, which provides support for socio-technical solutions addressing bias and discrimination in AI systems.

In future, we will look to provide additional tools to support organisations to implement responsible AI assurance practices and will continue to work with industry to better understand and support the growth of the UK’s AI assurance industry.

How to implement assurance

Local authorities that are interested in implementing AI assurance could begin with the following steps:

1: Consider existing regulations: Whilst there is currently no AI-specific statutory regulation in the UK, local authorities implementing AI must adhere to existing regulations such as the UK GDPR and the Equality Act 2010. Requirements for Equality Impact Assessments (EIAs) and Data Protection Impact Assessments (DPIAs) also apply.

2: Upskill: Local authorities should look to understand their existing assurance capabilities and gaps, as well as their likely future requirements.

3: Review internal governance and risk management: Local authorities should review existing AI governance and risk management processes to ensure they can quickly and effectively identify and escalate risks relating to AI (a simple sketch of such an escalation check follows this list).

4: Keep an eye on regulation: Regulators will be developing a range of sector-specific guidance in their respective areas. For example, the ICO has developed guidance on AI and data protection.
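To illustrate step 3, the sketch below checks a simple risk register and flags entries for escalation. The scoring scale, the threshold of 12 and the register entries are all assumptions made for the example, not a prescribed methodology.

```python
# Hypothetical risk-register check: score = likelihood x impact (each 1-5);
# entries at or above the assumed threshold are flagged for escalation.
ESCALATION_THRESHOLD = 12

risks = [
    {"id": "AI-001", "likelihood": 3, "impact": 5,
     "description": "Chatbot gives residents incorrect benefits advice"},
    {"id": "AI-002", "likelihood": 2, "impact": 1,
     "description": "Minor formatting errors in AI-generated summaries"},
]

for risk in risks:
    score = risk["likelihood"] * risk["impact"]
    if score >= ESCALATION_THRESHOLD:
        print(f"ESCALATE {risk['id']} (score {score}): {risk['description']}")
    else:
        print(f"Monitor {risk['id']} (score {score}): {risk['description']}")
```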


Want to find out more?

If you’d like more information about AI assurance, or want support applying it in your own organisation, don’t hesitate to get in contact with the AI assurance team at ai-assurance@dsit.gov.uk.

Please also get in touch if you would like to submit assurance techniques you are already working on for inclusion in our Portfolio of Assurance Techniques.


Further Reading

Department for Science, Innovation and Technology (2023): A pro-innovation approach to AI regulation  

Department for Science, Innovation and Technology (2024): Introduction to AI Assurance

Department for Science, Innovation and Technology (2024): Responsible AI in Recruitment Guide

Department for Science, Innovation and Technology (2022): Industry Temperature Check: Barriers and Enablers to AI Assurance  

Department for Science, Innovation and Technology (2023): Portfolio of Assurance Techniques  

UK AI Standards Hub: Upcoming Events  

UK AI Standards Hub: AI Standards Database  

Burr, C., & Leslie, D. (2022). Ethical assurance: A practical approach to the responsible design, development, and deployment of data-driven technologies  

National Institute of Standards and Technology (NIST): AI Risk Management Framework

National Cyber Security Centre (NCSC): Cyber Essentials

This blog was written by James Scott from the Responsible Technology Adoption Unit at the Department for Science, Innovation and Technology (DSIT), and was published on 20 May 2024.
