AI in public services: four vital steps towards public trust
Artificial intelligence has the potential to transform public services, yet with great opportunity comes significant risk, particularly in a sector already grappling with limited funding and legacy infrastructure. How can public authorities harness AI’s efficiencies while safeguarding transparency, ethics, and public trust?
Greenwich-based firm DG Cities was commissioned by the Department for Science, Innovation and Technology (DSIT) to research AI assurance in industry, and to investigate the language used to describe approaches to evaluating AI in different sectors. This work formed part of the government’s report, Assuring a Responsible Future for AI, published in November. Ed Houghton, who led the research at DG Cities, draws practical lessons from some of the key findings for LOTI’s network.
What does ‘AI assurance’ mean?
Transparency is critical to good local government. Authorities are striving to open up processes to public engagement and scrutiny, yet to the public, much of local government can still feel like a ‘black box’. AI is playing an increasingly important role in our lives, so how can open government and the basic principle of transparent democracy be sustained when closed AI systems make up the vast majority of the market?
This is where AI assurance comes in. Assurance describes the steps by which we establish that a tool or process works, and is used, as intended. In local government, assurance is fundamental to knowing whether public services are working effectively and ethically, and are not creating public harm.
In 2024, DG Cities was asked by DSIT to take a closer look at how UK industry and the public sector understand AI assurance, and to investigate the language leaders across all sectors use to describe their approaches to evaluating AI. Our work, published recently in the department’s report Assuring a Responsible Future for AI, drew on a national survey of over 1,000 business and public sector leaders, and interviews with 30 managers across sectors.
What we found highlighted four key steps we think local authority leaders should be considering if they’re to maximise the benefits of AI safely and transparently.
1. Define terms – and make sure all departments are speaking the same AI language.
Unclear and overly complex language around AI can be a block to better business practice. This is particularly important when it comes to conveying risk and opportunity in ways that are realistic and understandable. Across our interviews, we found examples of terminology being defined to help convey the reality of complex tools and processes. For example, AI leads were defining terms such as ‘explainability’ through workshops, toolkits and guides, to help colleagues in their organisations understand the risks related to tools such as chatbots. Clarifying key AI terms was an important step in helping non-expert colleagues appreciate the limitations of tools, and explore how best they can be deployed.
For those procuring tools – say, Directors of Digital, or Heads of Customer Service – challenges such as explainability can be mitigated through good tool selection. By this, we mean procuring tools that convey data relationships through visualisation, or present correlations and causal probabilities in clear ways, to help users understand where training data is likely to be shaping specific AI responses. To apply pressure to vendors, those procuring tools will need to help the departments using them explain their needs – and this is only possible when common terms are clearly defined and used.
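To make that concrete, here is a minimal, hypothetical sketch in Python of one kind of evidence a procuring team might ask a vendor to demonstrate: a permutation importance check, which shows which inputs most influence a model’s outputs. The dataset and model below are illustrative placeholders, not drawn from our research or from any real council system.

```python
# Illustrative sketch only: a simple 'explainability' check a buyer might
# request. Permutation importance shuffles each input feature in turn and
# measures how much the model's accuracy drops - large drops flag the
# features the model leans on most heavily.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a service dataset (e.g. triage or eligibility inputs)
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```

Even a simple read-out like this gives non-expert colleagues a shared, concrete starting point for asking why a tool behaves the way it does.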
2. Get different teams together to define decision principles, consider shared problems, and then procure.
Our interviews with digital leads in the banking sector offered an interesting perspective on ‘responsible’ AI that appeared directly relevant to the UK public sector. Across the sector, the overarching principles of responsible banking have been translated into digital and AI strategies, which aim to use data and insight for good, and to ensure safety and security at every step. The business leads responsible for assuring these tools work closely with teams across the organisation to deploy impact assessments and to assess risks across the entire user journey.
These ideas can also work in the public sector. In many ways, the responsible banking ethos that followed the 2008 financial crisis established a set of principles that give AI projects a foundation to build on. Local government officials should follow this best practice – bring teams together to define their responsible AI approach clearly, and only then start to procure the tools that fit.
3. Evaluate, evaluate, evaluate!
The most effective digital leaders know that evaluation is the only way to improve services and manage risk. But we found that assurance across the lifecycle of an AI product can be difficult to administer when the market is changing at such a rapid pace. For some businesses we spoke to, keeping continuous track of which tools were being used, and how, presented a real risk – whether in terms of data security, or of leaking intellectual property.
Assuring AI is therefore not a single, point-in-time, “once and done” practice. It’s a continuous process, from technology scoping and selection right through to technology retirement. Impact assessments, ethical audits and spot-checks of tools are just some of the approaches we found being used to learn what’s working, and to inform future AI procurement. In a fast-moving field, evaluation can too often be an afterthought, yet those we spoke to put it front and centre of their digital strategies as a key enabler of effective procurement.
4. Engage communities to grow a clearer understanding of the abilities and limitations of AI.
DSIT’s public awareness tracker highlights that awareness of AI risk is increasing across the public, but that public understanding is largely limited to what many consider the existential and long-term risks of AI – for example, a changing labour market, or the use of AI by bad actors. There doesn’t appear to be enough dialogue with the public about the short-term risks of AI to individuals, and the steps public authorities are taking to manage those risks.
There is an important role here for local authorities to use their convening power and position to bring groups together to learn about, understand and assess how they wish AI to be deployed in their services. This dialogue on AI is a vital aspect of assurance that, at present, is missing. As the end users of services, and often the subjects of AI data processing, the public are central to the future AI assurance ecosystem.
In brief
Leaders across the public and private sectors are using AI. Many see it as an enabler of better, quicker and more efficient decision-making. But this future can only be realised through a safe, transparent and methodical approach to buying and building AI tools. As our research highlights, AI assurance is a fundamental element of the digital strategies that all sectors need to get right.
Image used to accompany the blog: Elise Racine / Better Images of AI / Street Fair / CC-BY 4.0

Ed Houghton