LDW DataThinks: Equalities Duties for Data in the Public Sector


Throughout this month, to celebrate London Data Week, LOTI will be publishing a series of think-pieces, ‘DataThinks’, written by experts working with data and artificial intelligence (AI), challenging data practitioners in London local government and beyond to think about data in new ways. The views expressed within each article are solely those of the authors.

This article is written by Dr. Sue Chadwick, Strategic Planning Advisor at Pinsent Masons and member of the Data for London Advisory Board and Brent Council’s Data Ethics Advisory Board. It describes forthcoming research undertaken by Dr. Chadwick with Ruchi Parekh (Cornerstone Barristers) and Dr. Janis Wong (The Alan Turing Institute).

A screenshot of the Equality and Human Rights Commission's webpage on the Public Sector Equality Duty

How do the public sector’s equalities duties affect how it should use data?

To reuse a phrase from the tech lexicon: we need to move fast and make things (better). In particular, London’s digital future should be one that includes everyone and ensures that technology doesn’t replicate or amplify existing inequalities.

Documents like the London Data Charter Principles, promoting transparency, accountability and ethics, are a good place to start – but we need good practice too, and that shouldn’t mean reinventing the wheel when we can repurpose trusted governance tools. I recently worked with Ruchi Parekh and Janis Wong on how we could expand the use of Equalities Impact Assessments (EQIAs) in this area, and this blog is a summary of the issues we considered.

EQIA and PSED – what they are…

An EQIA is an existing tool used by local authorities to demonstrate compliance with the Public Sector Equality Duty (PSED), introduced by the Equality Act 2010. This (broadly) requires a public authority, carrying out a public function, to have “due regard” to the need to eliminate discrimination, advance equality, and foster good relations between people who share a protected characteristic and those who don’t. Those protected characteristics are age, disability, gender reassignment, pregnancy and maternity, race, religion or belief, sex, and sexual orientation.

But what has that got to do with data and AI?

First, there’s the issue of the digital divide, recently described by Parliament as “the gap between people in society who have full access to digital technology, such as the internet and computers, and those who do not”. This can be down to inadequate access to infrastructure (whether because of geography, income, or both), lack of access to devices, lack of skills, or all of the above.

There’s a clear overlap between digital exclusion and the characteristics protected by the PSED. A 2021 report by Age UK showed that exclusion increases with age. The Good Things Foundation notes that “Limited users of the internet are 1.5 times more likely to be from Black, Asian and minority ethnic groups compared with extensive users”, and the 2021 Census recorded that 60% of internet non-users aged 16 to 24 were disabled.

Next, there’s the question of Artificial Intelligence (AI) and bias. AI is increasingly used to support public functions and decisions, but there is always a risk that, as recently recognised by a Justice of the UK Supreme Court, an algorithm can operate “so as to unfairly or unjustly prejudice a group with some characteristic, particularly a protected characteristic”.

And why should we care?

The risks are not theoretical. A 2019 court case concerning the use of facial recognition technology recognised that the algorithms processing the images could operate in a discriminatory way. The use of algorithms to assess student performance in 2020 meant that high-performing students from poorer schools were downgraded, while other students benefitted simply because they attended schools with a good past performance. An EU paper on ChatGPT notes that large language models such as ChatGPT may contain biases that “can undermine the ability of public administrations to act impartially”, and in last week’s Parliamentary debate on Artificial Intelligence, Dawn Butler MP noted that “Many, if not most, AI-powered systems have been shown to contain bias, whether against people of colour, women, people with disabilities or those with other protected characteristics”.

So what can we do?

The regulatory landscape is changing quickly. The Government has issued guidance for using AI in the public sector, and the recent AI white paper includes five “values-focused cross-sectoral principles”: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. However, there are no proposals, in England, for a formal regulatory response to the issues raised here.

It’s easy to feel overwhelmed by the pace of change or by the breadth of available guidance, yet we all know we should do something; the question is what. My personal view is that one starting point could be to take your local authority’s current PSED guidance and EQIA templates and add some questions that probe whether or not the function to be exercised has digital impacts, including digital exclusion, algorithmic bias, opacity and interpretability.

Conclusion

It’s not all bad news. This piece highlights some of the downsides of data and AI, but that definitely shouldn’t put us off using new technologies. Greater use of data and AI in the public sector also has the potential to improve citizens’ lives, modernise government services, and accelerate economic development. And you can use an EQIA to record these digital benefits too!

London Data Week

Sue Chadwick
6 July 2023