5 design hacks for AI-enabled public services


Imagine this: You’re a social worker in a local authority, reviewing cases at your desk. You suddenly get a notification.

The AI-enabled (artificial intelligence) safeguarding system has automatically escalated a family’s case to urgent status based on flagged keywords in your notes, triggering a mandatory investigation.

You can see the AI has misinterpreted routine follow-up comments as signs of imminent risk, but the system won’t let you de-escalate without managerial override, and your manager is in meetings all afternoon.

The wrongly flagged family receives a distressing call from the investigation team about an issue they don’t fully understand.

This scenario – although fictional – is no longer far-fetched, as AI tools are rapidly being integrated into public services. When the services we design go wrong, it’s often the residents most in need who are worst affected.

In this article, I cover five design hacks we need to remember when designing AI-enabled services. This is not an exhaustive list, but rather a practical set of things you can embed in your thinking.

Hack 1: Start with residents’ needs, not the latest tech trend

For many of us in the public sector, starting with residents’ needs is our default approach. When designing AI-enabled services, this still applies—perhaps now more than ever.

The temptation is to ask: “Where in our organisation or services can we use AI?” It’s exciting tech that everyone is talking about, and there’s pressure to innovate in order to save money. But that’s the wrong starting point.

Instead, we should be asking: “What are our biggest service challenges, and where could AI actually help improve outcomes for residents and staff?” Maybe it’s the long wait for social care assessments, residents struggling to find the right information on your website, or staff spending hours manually cross-checking residents’ benefits eligibility against multiple databases.

So, start by understanding the real human needs – both for residents and staff – then work out if AI is the right or only solution. Sometimes it will be. Often it won’t, and a simpler form, plain-language communications, or an extra member of staff might solve the problem better than any algorithm.

Hack 2: Build in human intervention

When you read the scenario above, your first thought might have been that the social worker had the expertise to spot the error and could easily have corrected it. But in this case, they had no authority to do so.

The simple answer here might be to build in human checkpoints, where qualified staff such as social workers can quickly and easily correct errors like the one in this scenario.

This example raises a broader point about human intervention in AI decisions. If you’re reading this article, it’s highly likely that you’ve come across the phrase ‘human in the loop’ – the idea that a human should be involved in any AI decision-making pathway. In the scenario above, I hope it’s clear that a ‘human in the lead’, rather than a ‘human in the loop’, may have completely avoided the family being contacted in the first place.

So, an important design consideration when it comes to AI-enabled services is that qualified humans – who may not always be the manager or service leader – should always make the ultimate decision, especially in high-stakes contexts such as this one.
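
To make this more concrete, here is a minimal sketch in Python of what a ‘human in the lead’ checkpoint could look like. All the names here (ProposedEscalation, record_decision and so on) are hypothetical, not any real system’s API; the point is simply that the AI can only propose an escalation, and nothing happens until a qualified practitioner confirms or rejects it.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical sketch: the AI may only *propose* an escalation.
# Nothing is escalated until a qualified practitioner makes the decision.

@dataclass
class ProposedEscalation:
    case_id: str
    reason: str                       # plain-language explanation from the AI
    proposed_at: datetime
    decision: Optional[str] = None    # "confirmed" or "rejected"
    decided_by: Optional[str] = None  # the practitioner, not a managerial override

def record_decision(proposal: ProposedEscalation, practitioner_id: str,
                    confirm: bool, note: str) -> ProposedEscalation:
    """The human in the lead makes the final call; the AI output is only advice."""
    proposal.decision = "confirmed" if confirm else "rejected"
    proposal.decided_by = practitioner_id
    proposal.reason += f" | Practitioner note: {note}"
    return proposal

# Example: the social worker rejects the flag without needing a manager's sign-off.
proposal = ProposedEscalation(
    case_id="case-123",
    reason="Keywords in case notes suggested imminent risk",
    proposed_at=datetime.now(),
)
record_decision(proposal, practitioner_id="sw-042", confirm=False,
                note="Routine follow-up comments; no indication of imminent risk.")
```

The design choice that matters is the last call: the practitioner can reject the flag directly, without waiting for a managerial override.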

Hack 3: Make your AI explain itself

One critical question in the scenario is why the system flagged that particular family. Its reasoning was an opaque misinterpretation of keywords.

This isn’t good enough!

As designers of public services, if we can’t explain to our residents and staff, in plain language, how the AI reached a decision, we simply should not be using it!

The point here isn’t about publishing the algorithm – it’s about providing a meaningful reason for the output, in this case for flagging a family as being at risk. To illustrate:

  • A bad AI output to the case worker (as may have happened in this case): Risk score 9.2. Keywords matched: “struggles”, “stressed” etc.
  • A good AI output to the case worker: Suggested flag: This case mentions a new mental health diagnosis and recent financial stress. Does this match your assessment? Yes, continue / No, this is incorrect.

A change as simple as this transforms the interaction between the council and the family, and in this case could have prevented the unexpected call.
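
If it helps to picture how the ‘good’ output above could be produced, here is a minimal illustrative sketch in Python. The signal names and wording are invented for this example; the design point is that raw scores and matched keywords are translated into a plain-language reason the case worker can confirm or reject.

```python
# Hypothetical sketch: turning a raw model output into a plain-language prompt
# that the case worker can confirm or reject. Signal names are invented.

def explain_flag(risk_score: float, matched_signals: list[str]) -> str:
    """Translate raw signals into a meaningful, checkable reason."""
    descriptions = {
        "new_mh_diagnosis": "a new mental health diagnosis",
        "financial_stress": "recent financial stress",
    }
    reasons = [descriptions.get(signal, signal) for signal in matched_signals]
    # The raw risk score is deliberately left out of the message:
    # "9.2" gives the case worker nothing they can check or act on.
    return (
        "Suggested flag: this case mentions "
        + " and ".join(reasons)
        + ". Does this match your assessment? [Yes, continue] [No, this is incorrect]"
    )

print(explain_flag(9.2, ["new_mh_diagnosis", "financial_stress"]))
```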

Hack 4: Test with your most digitally excluded residents

AI-enabled services often work well for digitally confident people. Designed badly, they can put undue stress and pressure on people at the most vulnerable points in their lives.

The hack here is to deliberately test with residents who have low digital literacy, English as a second language, or limited access to technology.

As part of this you could check:

  1. Can they understand what the AI is doing?
  2. Can they challenge a decision?
  3. Can they access the service at all?

Digital inclusion isn’t just about access. It’s about ensuring your AI enhancement doesn’t create a two-tier service where digitally confident residents get faster, better outcomes. Consider how your AI will work alongside non-digital channels, such as contact centre phone calls and face-to-face support – these need to be part of the process, not an afterthought.

Hack 5: Make your AI service a living one

The social worker scenario went wrong partly because the AI had too much power in the decision-making process. A better approach might be to start with AI that assists staff rather than replacing their judgment. Once you build confidence and learn how it works in practice, you could then test automations, like the one in our scenario, with a ‘human in the lead’.

Another thing to think about: unlike a traditional IT system rollout, an AI-enabled service is never ‘finished’. It’s a living and evolving process of ongoing improvements and tweaks, including monitoring for bias.

As part of this, consider building in regular reviews of AI decisions, clear governance structures, and ongoing training. Make sure someone is accountable for the AI’s performance—not just its implementation.
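
As a rough illustration of what that monitoring might mean day to day, the sketch below (in Python, with invented data and names) logs every AI suggestion alongside the human decision and calculates how often qualified staff reject the AI’s suggestion. A rising override rate is one simple signal that the service needs attention.

```python
from collections import Counter

# Hypothetical sketch: log every AI suggestion alongside the human decision,
# so regular reviews can spot patterns such as a rising override rate.

decision_log = [
    {"suggestion": "escalate", "human_decision": "rejected"},
    {"suggestion": "escalate", "human_decision": "confirmed"},
    {"suggestion": "no_action", "human_decision": "confirmed"},
]

def override_rate(log: list[dict]) -> float:
    """Share of AI suggestions that qualified staff rejected."""
    outcomes = Counter(entry["human_decision"] for entry in log)
    total = sum(outcomes.values())
    return outcomes["rejected"] / total if total else 0.0

# Reviewed regularly, this kind of figure tells the accountable owner whether
# the model, the prompts or the underlying data need another look.
print(f"Override rate: {override_rate(decision_log):.0%}")
```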

If you get the design fundamentals right, you’ll create AI-enabled services that genuinely serve your residents’ needs. Get them wrong, and you’ll create systems that entrench inequality and erode trust.


Genta Hajri
24 November 2025
