The State of Play of AI in Local Government


Over the past 12 months, like most industries, local government has decided that artificial intelligence (AI) is a technology we should be paying immediate attention to. The wave of interest is built on large language models (LLMs), which underpin popular public generative AI (GenAI) applications like ChatGPT and Bard, and which might not only empower individuals in their jobs but also transform how we operate and deliver services at a more fundamental level. Understandably, many officers are also concerned about how we use this technology responsibly, given our duties as a public authority.

Accordingly, LOTI has been working over the summer to produce initial resources and guidance for local authorities around GenAI, which are now available for public access here. These include:

  1. Guidance for leaders and all council staff
  2. Guidance for CIOs and data leaders, produced with Faculty AI
  3. A state of play report

The first side of LOTI’s guidance for leaders, which you can access here.

We believe these are comprehensive documents, but they also reflect a snapshot of where the sector and the technology stand at the time of writing. Below I will summarise some of the findings of this research, but I would encourage those interested to explore the documents here for more detail.

State of Play and Initial Guidance (September 2023)


First, to help understand the usage of GenAI, we split use cases into three categories: how individual staff use it; at an organisational level, the applications, software and service integrations that use GenAI models; and how the public's use of it will affect local authorities.

For most local authorities, the only use cases currently happening are individual officers using GenAI to help do their work faster. Our survey showed these are often officers whose roles involve a lot of research and writing, for example producing briefings, though some officers are also using it to help with coding.

Councils are experimenting with how they might develop AI applications or integrations into their services. Sporadically, some boroughs are trying things like hackathons to jumpstart potential ideas. In some councils, it simply happens through entrepreneurial officers with the skills to test these things (although they typically do this in their spare time and these skills are rare). Other councils are working through strategic partnerships with technology companies or universities. The most popular use cases currently being explored by local authorities are:

  • (Internal) search of (already public) documents, like HR policies, council meeting documents, or other policies and strategies.
  • Automated logging of call centre transcripts with natural language processing, and then analysis of these documents using GenAI.
  • Automated (low-risk) communications with residents, like letter writing around non-sensitive matters.
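To make the first use case concrete, here is a minimal, illustrative sketch of internal document search. The document identifiers, sample text, and the `search` function are all hypothetical; a real deployment would use embeddings and an LLM-backed retrieval pipeline rather than simple term overlap, but the shape of the problem is the same: match a staff query against a corpus of already-public documents.

```python
from collections import Counter

# Hypothetical sample of already-public council documents.
DOCUMENTS = {
    "hr-leave-policy": "Staff annual leave entitlement and how to request leave.",
    "meeting-minutes-jan": "Minutes of the January council meeting on housing strategy.",
    "climate-strategy": "The council's strategy for reaching net zero by 2030.",
}

def tokenise(text: str) -> list[str]:
    """Lower-case the text and strip basic punctuation."""
    return [w.strip(".,").lower() for w in text.split()]

def score(query: str, doc: str) -> int:
    # Simple term-overlap score: count how often each query term
    # appears in the document. A production system would replace
    # this with semantic (embedding-based) similarity.
    query_terms = set(tokenise(query))
    doc_counts = Counter(tokenise(doc))
    return sum(doc_counts[term] for term in query_terms)

def search(query: str) -> str:
    """Return the identifier of the best-matching document."""
    return max(DOCUMENTS, key=lambda key: score(query, DOCUMENTS[key]))
```

For example, `search("annual leave request")` would surface the HR leave policy rather than the meeting minutes, without any officer needing to know where that policy lives.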

Councils are also concerned about how they should adapt to the public using GenAI. Currently, this mostly concerns its use in job applications. But in the future, there is also the potential for fraud through deepfakes, or huge automated correspondence campaigns overwhelming councils’ ability to respond to residents. LOTI doesn’t quite have answers for these yet, but we are keen to develop possible solutions.

In general, LOTI’s position is that rather than fear AI changing how we do something, and trying to find a way to keep doing it the same way as before, councils need to be open to adapting their services and processes to recognise that AI is here. The cat is out of the metaphorical bag. Given this, I strongly suggest councils include service designers in conversations about AI so they can be ambitious about transforming their services from the start.

One of the main questions for councils was how to use AI responsibly. Officers are particularly nervous about trust, bias, ‘hallucination’, and data privacy. On privacy, councils have had to stress never to enter personal or private details into public models like ChatGPT. Bias and hallucination are particularly bad in some of the large free models like ChatGPT, so it is encouraging that surveyed council officers were aware of this. The question will be how we can reduce these flaws when we apply these models to our own data (which will require common agreement on what ‘good enough’ means).

LOTI also created a small framework to help officers across councils with issues around ethics, trust and transparency. We suggest that officers using tools like ChatGPT should:

  1. Consider yourself accountable for everything the AI creates.
  2. Abide by existing data policies and other laws.
  3. Never upload private information about residents.
  4. Check outputs for political and social context.
  5. Reference AI whenever it is significantly used in public communications or for something important.
  6. Don’t let AI make your decisions.

There is also considerable anxiety about the possibility of officers’ jobs being replaced. From my perspective, the only use case where this might happen is in call centres, as councils are trying to find savings with rising costs and high inflation. However, generally speaking, from our research it doesn’t look like councils are planning for GenAI to displace many jobs in local government anytime soon.

From discussions with our members, the most common position was that councils should explore use cases in the ‘low-risk/high-reward’ bracket. However, two points emerged on this: first, different authorities will have different risk appetites depending on their values, so these projects may differ between authorities. Second, the risk profile might also change over time. So what we may need isn’t a rigid risk-classification scale, but a process to help each authority determine risk according to its own context at a given time.
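One way to picture such a process, rather than a fixed scale, is as a small sketch in which each authority supplies its own weights and risk appetite, and can revise them over time. Everything here is hypothetical: the factor names, weights and threshold are illustrative, not part of LOTI's guidance.

```python
from dataclasses import dataclass

@dataclass
class RiskContext:
    # Each authority sets its own factor weights and appetite,
    # and can revise both as its context changes over time.
    weights: dict[str, float]
    appetite: float  # maximum acceptable total score

def assess(factors: dict[str, float], ctx: RiskContext) -> bool:
    """Return True if a use case falls within this authority's appetite."""
    total = sum(ctx.weights.get(name, 1.0) * value
                for name, value in factors.items())
    return total <= ctx.appetite
```

The point of the sketch is that the same use case can pass for one authority and fail for another: an authority that weights data sensitivity heavily will reject a proposal that a neighbouring borough, with the same appetite but different weights, would accept.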

Altogether, these paint the picture of a sector that is still finding its way with generative AI. In light of this, we at LOTI believe that the key to innovation is collaboration: we must continue experimenting and sharing our learnings with each other, to avoid duplication and accelerate progress. Lastly, if you want to read more about where local government can go next, I have also written another blog detailing 10 ideas for what I think local government should be doing to become better users of AI.


Sam Nutt
6 September 2023