10 ideas for a roadmap of responsible AI in local government


Over the past few months, I’ve been researching the current state of play for AI in local government. Off the back of that research, LOTI have created a set of initial resources aimed at different audiences within local government, to help them take the first steps of their AI journey. However, we know that this journey is only just getting started.

As we publish these resources, I also wanted to look to the future and imagine what steps local authorities individually, or local government as a whole, might want to take to become better users of AI: more innovative and more responsible at the same time. So I am using this blog to make 10 proposals which together constitute a roadmap of sorts for local authorities, with a time horizon of a couple of years for most of these potential activities. If you are reading this and have any feedback or thoughts on these ideas, or think I’ve missed anything, please do feel free to reach out to me!

1. Most importantly, local government needs to keep testing, learning and sharing. There is so much that we still don’t know, but we will only learn by testing. LOTI will be running meetups for officers in local authorities to help with this, and we are committed to engaging with other networks like the LGA and Socitm so that knowledge sharing becomes a default practice.

2. AI is built on data, and that data foundation will likely need to be improved if we want to use AI in all the areas we would like to. In particular, data quality is often very poor. It’s no use having a hypothetical AI assistant that can retrieve a resident’s record in a few seconds if we hold conflicting information about the same resident in several different systems; AI will only add to the confusion. (This is a general data problem in local government, where our data scientists and machine learning experts have to spend too much time on data quality – a big programme from central government might really help with this.)
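
To make the problem concrete, here is a minimal sketch (in Python) of the kind of record-matching headache duplicate data creates. The records, field names and the similarity threshold are all illustrative assumptions for this sketch, not a production matching approach:

```python
from difflib import SequenceMatcher

# Illustrative resident records held in two separate council systems.
# Names, fields and the 0.85 threshold are assumptions for this sketch.
housing_record = {"name": "Jane E. Smith", "postcode": "SW1A 1AA"}
benefits_record = {"name": "Jane Smith", "postcode": "SW1A1AA"}

def likely_same_resident(a: dict, b: dict, threshold: float = 0.85) -> bool:
    """Crude fuzzy match: an AI assistant querying either system alone
    would miss the link that this check (imperfectly) recovers."""
    name_score = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    postcode_match = a["postcode"].replace(" ", "") == b["postcode"].replace(" ", "")
    return name_score >= threshold and postcode_match

print(likely_same_resident(housing_record, benefits_record))  # True
```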

3. One thing that CIOs have told me is that they don’t yet have a good understanding of the costs and benefits of actually developing AI solutions in local government. We need to be able to answer questions like ‘How much money would a given use case save or generate for us?’ before we launch into scaling up tests and prototypes in the name of delivering better-value public services. This might mean doing the research ourselves, as an implausibly high number of companies seem to be selling AI as the answer to every question. They may be right, but if we don’t know for ourselves, we can’t procure wisely as well as ambitiously.
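
As a starting point, the research doesn’t need to be complicated. Here is a back-of-the-envelope sketch of the kind of sum we should be able to do for any use case; every figure in it is a hypothetical placeholder a council would replace with its own measured numbers:

```python
# Back-of-the-envelope value model for a single AI use case.
# All figures below are hypothetical placeholders, not real costings.

def annual_net_saving(staff_hours_saved_per_week: float,
                      fully_loaded_hourly_cost: float,
                      annual_licence_cost: float,
                      annual_support_cost: float) -> float:
    gross = staff_hours_saved_per_week * 52 * fully_loaded_hourly_cost
    return gross - annual_licence_cost - annual_support_cost

# e.g. a drafting assistant saving 40 staff-hours a week at £30/hour,
# against £25,000 licensing and £10,000 support per year:
print(f"£{annual_net_saving(40, 30.0, 25_000, 10_000):,.0f}")  # £27,400
```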

4. Given the high potential costs of some proposed AI solutions, and the risk that they may not work, councils should perhaps share resources and risks and develop solutions collaboratively. This might also help councils with fewer resources gain access to AI solutions faster.

5. We need better ways of evaluating AI model performance. Some standards exist, but they were never designed for the context of public service delivery. We need standards that reflect how well a model must perform in our context. This would be a huge boost for innovation: if we all use the same evaluation methods, pilots and tests become directly comparable and easy to share. It would also be a way to build ethics into model design.
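
For illustration, here is a minimal sketch of what a shared evaluation harness could look like: if every council scored its pilot models against the same labelled examples with the same metric, results would be directly comparable. The triage task, test data and models here are invented for the example:

```python
# Minimal sketch of a shared evaluation harness. The task (routing
# resident reports to services), the test set and both models are
# illustrative assumptions.
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (input text, expected service category)

def accuracy(model: Callable[[str], str], test_set: List[Example]) -> float:
    correct = sum(1 for text, expected in test_set if model(text) == expected)
    return correct / len(test_set)

# A shared test set means two councils' pilots can be compared directly.
test_set = [("my bin was missed", "waste"), ("broken streetlight", "highways")]

def model_a(text: str) -> str:
    return "waste" if "bin" in text else "highways"

def model_b(text: str) -> str:
    return "waste"

print(accuracy(model_a, test_set))  # 1.0
print(accuracy(model_b, test_set))  # 0.5
```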

6. To properly test these models, local government may need creative and innovative approaches that we currently don’t have. As a principle, we should not test whether an AI system is discriminatory by deploying it and waiting to find out. As a sector, we might have to look into better ways of doing this experimentation, possibly through things like sandboxes or other more novel methods.
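
As one illustration of what pre-deployment testing could look like, the sketch below checks a model’s decisions on held-back historical data for differences in outcome rates between groups, before anything goes live. This is a single, crude check rather than a full fairness audit, and the data, group labels and the four-fifths threshold are all assumptions for the example:

```python
# One pre-deployment check (not a complete fairness audit): compare a
# model's positive-outcome rate across groups on held-back historical
# data, inside a sandbox, before any resident is affected.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs from a sandbox run."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# Illustrative sandbox output; group labels and values are invented.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]

rates = approval_rates(decisions)
worst, best = min(rates.values()), max(rates.values())
# The 0.8 cut-off echoes the "four-fifths rule" of thumb; it is an
# assumption here, not a legal or statistical standard for councils.
print(rates, "flag for review:", worst / best < 0.8)
```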

7. We need to upskill staff to use AI well. All officers will probably need some level of AI skill to do their jobs. A future skills audit would help us understand what those skills are, so we can prepare. Currently, only a select few local authorities in the country have any officers with the skills to properly develop AI applications themselves. Left unchanged, this means councils will always have to work in partnership with companies or researchers who have these skills – but might we want some of those skills in-house in the future?

8. On governance, as well as standards that reflect our ethical values, we need to be ambitious but flexible. UK cities are lagging behind their European counterparts on algorithmic transparency mechanisms, so we need to catch up here, and I suggest councils explore initial pilots of the national UK Algorithmic Transparency Standard. And whilst it is good to have AI policies, councils should keep them flexible, adapting them as we learn more about AI in our context and as the technology evolves. For now, prioritise the testing and learning approach, and let the policies emerge from that.

9. We need to make sure our practices reflect the concerns of our staff and of the public. As unions announce that they are looking at the impact of AI on the workplace, authorities with foresight might want to engage proactively with their unions to ensure that their ambitions don’t alienate staff who have concerns about AI.

10. There may be applications of AI where we need the public to give us a steer on what the appropriate standards should be. For example, it may transpire that many residents prefer letters and messages written by AI because they are clearer and better written than those produced by humans. Or they may tell us they are very uncomfortable with automation being involved in a particular type of decision, in which case we can deprioritise it. Ultimately, if we lose the trust of the public, it will be really hard to do anything with AI; if we have them on board, our future is far brighter.

Responsible AI

Sam Nutt
6 September 2023