Consultation thematic analysis using large language models
Open text data in local authorities may take many forms, such as social care case documents, complaints, and consultation responses. Councils are overwhelmed by the amount of text that needs to be understood, summarised, and synthesised into reports. Often there is not enough capacity to process all this text to a sufficient standard by statutory deadlines – and under the current funding landscape for local authorities it seems unlikely that this situation will change for the better.
Recent improvements in large language models (LLMs) open up the possibility of working more efficiently with open text datasets. The difficult part is creating open text processing systems that are reliable, efficient, and secure. At Lambeth, we are piloting AI-integrated analysis of consultation responses to see if LLMs can provide more efficient and reliable reporting. Particularly in the area of consultations, initial pilot testing has shown there is great potential in using LLMs to perform thematic analysis and summarise results to inform reporting.
Initial work has shown that similar tools could also have a significant impact in working with social care case notes and case summaries, and complaints data. We are keen to share our experience (and code) with other councils to leverage the power of these AI models today to work with open text data. With programming skills in languages such as Python, in-house data science or IT teams could achieve these efficiency gains.
Thematic consultation analysis – our approach
Currently, council officers consider consultations response by response, manually assigning responses to topics. For long consultations this analysis method is time-consuming and risks human error. LLMs can help assign consultation responses to themes and create summaries in a more systematic way.
We initially considered analysing consultations by feeding responses to LLMs via a chat interface, i.e. by manually copying and pasting responses into a secure web browser-based chatbot. However, in testing we found that using AI chat interfaces to analyse public consultations has some limitations, namely:
- Limits to the amount of text that can be fed into the chat interface at one time – large consultations could not be considered in a single query.
- Inconsistent thematic assignment between LLM queries, as well as responses missing from analysis.
- A text output format that is not immediately usable by analysts in their consultation work (i.e. it had to be copied and pasted back into analysis software).
With the above limitations in mind, we wanted to create a tool that could analyse batches of consultation responses in an automated, systematic way with the same parameters and settings, and return outputs that can be easily used by officers.
We developed a Python-based tool (LLM Topic Modeller) that can analyse consultation data with the click of a button. The tool can work with large consultations to create a thematic breakdown in Excel tables that matches the standard analysis format used by Council officers. The app can be run locally or in the AWS cloud.
Out of the box, the app is compatible with AWS Bedrock models, or LLMs that run on local systems (e.g. GPT-OSS, Gemma 3). We are also working on integration with Azure cloud models.

Figure 1. The LLM topic modeller interface
To use the app, the user uploads the results from a survey in a tabular data file format (e.g. xlsx or csv). The user can then ask the model to create its own topics, or provide a pre-defined list of topics for the model to follow.
When the user presses the ‘Extract topics’ button, the app will query the LLM multiple times with batches of consultation responses and will return a thematic analysis in tabular format. Themes from the consultation are broken down into general topics, subtopics, sentiment, and an open text summary of each topic (see image below). The entire list of responses can be considered together, or they can be considered ‘by group’, e.g. creating a different thematic analysis for each age group separately.
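The batching and ‘by group’ behaviour described above can be sketched in a few lines of Python. The function names and the batch size are illustrative; the real tool's implementation will differ.

```python
from itertools import islice

def batched(items: list, size: int):
    """Yield successive batches of at most `size` items."""
    it = iter(items)
    while batch := list(islice(it, size)):
        yield batch

def analyse_by_group(rows: list[dict], group_col: str, batch_size: int = 50):
    """Split responses by a grouping column (e.g. age group), then batch
    each group so it can be sent to the LLM in separate queries."""
    groups: dict[str, list[str]] = {}
    for row in rows:
        groups.setdefault(row[group_col], []).append(row["response"])
    # Each group gets its own sequence of batched queries
    return {g: list(batched(texts, batch_size)) for g, texts in groups.items()}
```

Running each group through the same prompt and settings is what keeps the thematic assignment consistent across batches, in contrast to ad hoc chat-interface queries.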

Figure 2. Example thematic output from dummy consultation data
The outputs are packaged into an Excel file output, with output tables fitting into the analysis format that teams currently use for their thematic analysis.
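A minimal sketch of packaging the thematic rows into a tabular output file is below. The real tool writes an Excel workbook; csv is used here to keep the sketch dependency-free, and the column names are assumptions based on the output described above, not the tool's exact format.

```python
import csv

# Illustrative column names based on the thematic breakdown described
# in the text (general topic, subtopic, sentiment, summary).
COLUMNS = ["General topic", "Subtopic", "Sentiment", "Summary"]

def write_thematic_output(rows: list[dict], path: str) -> None:
    """Write the thematic breakdown to a tabular file officers can open."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        writer.writerows(rows)
```

For genuine xlsx output, the same rows could be handed to pandas' `DataFrame.to_excel` or written with openpyxl.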
Feedback from consultation teams
Consultation teams have recently been trialling the LLM topic modeller against their existing analysis techniques, comparing accuracy and time taken. Feedback from consultation teams has been very positive – teams have reported that the thematic analysis provided by the tool is useful and could save significant time for analysis of consultations.
Application to other use cases
The tool we have developed is flexible and can be adapted with relatively little effort to other use cases – all that is required to use the tool is open text in a tabular data file.
Since our initial work with consultations, we have found that thematic analysis of open text could potentially be useful to improve efficiency of other processes within the Council. For example:
- Social Care audits, and case summaries
- Children’s Social Care chronologies
- Complaints summaries
- Qualitative research data such as interview transcripts
We have been consulting with relevant teams about the possibility of using thematic analysis within these areas. Initial feedback has been positive, and we are currently exploring these use cases further to establish the potential of LLM-based open text analysis for each one.
Improving ease of use and access to the app for officers
The graphical user interface format of the tool is suitable for use by officers who want to be able to customise the analysis to their needs. However, many council officers will not have the time or need to learn the full functionality of the tool – they would be happy with ‘standard’ outputs from the app.
Alongside our work on the LLM Topic Modeller itself, we are also working on ways to simplify access to the analysis, for example by creating simple SharePoint-based web forms to request analyses that can then call the LLM topic modeller tool in an automated fashion.

Figure 3. Prototype LLM topic modeller request form in SharePoint lists
Sharing our work
We are still in pilot testing for this work; however, we are keen to share our learning with other government organisations for everyone’s benefit. The code underlying the app is available to councils and other government bodies on request, as well as the underlying prompts – contact data@lambeth.gov.uk if interested.

Sean Pedrick-Case