LDW DataThinks: Trustworthy and Ethical Assurance – more than a checklist exercise


As part of LOTI’s LDW DataThinks, Dr Sophie Arana, Research Application Manager, Turing Research and Innovation Cluster in Digital Twins (TRIC-DT), discusses a novel approach to building systems that are trustworthy and ethical for end users. 

The public sector is expanding its use of data-driven technologies. 

Digital twinning pilots like Harrow's street digital twin, or the remote sensing trials of the Breathe London Network and the Pan-London IoT Damp and Mould Project, are great examples of how government is paving the way for smarter, more efficient public services and infrastructure.

But harnessing novel tech requires careful oversight. Does local government have the right tools to build trust and uphold ethical standards for those evolving technologies?  

In this blog, I'll argue that checklists are not enough when it comes to building systems that are trustworthy and ethical for end users like councils, residents and community groups. I'll also introduce a novel approach that starts from ethical principles and empowers teams to turn them into actionable guidance and decision-making.

What is trustworthy and ethical assurance and why is it hard right now?  

Assurance is the “process of measuring, evaluating and communicating something about a system” (Department for Science, Innovation and Technology, Introduction to AI assurance). The end goal of this process is justified or warranted confidence in a specific system property. When assuring trustworthiness, the focus is on properties like data quality, accountability, fairness, and explainability. 

As digital twinning and AI capabilities become increasingly coupled through advanced cyber-physical infrastructure (e.g. the Internet of Things (IoT), large language models, reinforcement learning), assuring those systems also becomes more challenging. For example: how do we ensure data quality when sources are distributed and change over time? How do we safeguard against novel security risks in federated and interoperable systems? And how do we assure fairness when systems depend on large datasets that are often incomplete and biased?
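To make the first of these questions concrete, here is a minimal sketch of the kind of automated check a team might run on readings arriving from a distributed sensor network. The function name, value range and staleness threshold are illustrative assumptions, not details of any project mentioned above.

```python
# A minimal, illustrative data-quality check for distributed sensor feeds.
# All names and thresholds here are hypothetical.
from datetime import datetime, timedelta, timezone


def check_reading(sensor_id: str, value: float, timestamp: datetime,
                  valid_range: tuple[float, float] = (0.0, 500.0),
                  max_staleness: timedelta = timedelta(minutes=15)) -> list[str]:
    """Return a list of data-quality issues found in a single reading."""
    issues = []
    lo, hi = valid_range
    if not lo <= value <= hi:
        # Implausible values often indicate a faulty or miscalibrated sensor.
        issues.append(f"{sensor_id}: value {value} outside expected range [{lo}, {hi}]")
    if datetime.now(timezone.utc) - timestamp > max_staleness:
        # Stale feeds are easy to miss when sources are distributed.
        issues.append(f"{sensor_id}: last reading is older than {max_staleness}")
    return issues


# Example: a plausible but stale air-quality reading.
reading_time = datetime.now(timezone.utc) - timedelta(hours=2)
print(check_reading("sensor-042", value=38.5, timestamp=reading_time))
# ['sensor-042: last reading is older than 0:15:00']
```

Checks like this only cover what a team thought to encode in advance, which is precisely why the broader assurance question cannot be reduced to running them.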

Assuring a system’s trustworthiness can be daunting, especially if teams lack the necessary expertise to develop a comprehensive assurance strategy from scratch. Often, practitioners will gravitate towards checklists to guide their assurance practices.  

The good and the bad of checklists  

Checklists can be invaluable tools because they create a repeatable and auditable process. This is especially relevant in safety-critical areas such as healthcare or aviation, where it is vital that steps are not missed.  

But checklists are not one-size-fits-all solutions, and when dealing with the trustworthiness and ethics of emerging technologies, they may not be sufficient. Complementary frameworks are needed for comprehensive assurance.

Trustworthy and ethical assurance isn’t about rigidly following procedures; it’s about critical reflection and deliberation on which procedures make sense in context. It’s not about ticking off a list of actions; it’s about generating a list that includes the right actions. 

Trustworthy and ethical assurance extends beyond checklists in several ways: 

Alignment of assurance activities with higher-level goals

Although numerous checklists for ethical and trustworthy technology are readily accessible, selecting one over another requires justification. Simply following a checklist does not explain why that particular set of steps is the relevant one. Teams should be explicit about which broader ethical goals matter for their specific system and define actions that meet those goals.

Adequacy of assurance activities 

As actionable as checklists may seem, they can create a false sense of security: ticking items off suggests that enough has been done to guarantee quality. In practice, individual assurance techniques can be irrelevant, or even conflict with one another, in a specific context. Teams should keep asking whether the actions being undertaken are sufficient and well-evidenced. After all, what if the checklist itself is incomplete?

Evolution of assurance throughout a project’s lifecycle 

Considerations around assurance should start early and extend throughout the project's entire lifecycle. Actions performed once may need revisiting later. Assurance is not a one-time task but a continuous, evolving process.

Transparency of assurance activities 

Transparency of assurance activities is essential because completing a set of actions does not automatically lead to a trustworthy system. Actions need to be effectively communicated to translate into trust. 

For example, transparent communication is key in collaborations like the London IoT Declaration. Structured, accessible assurance arguments allow stakeholders to review and flag issues. This approach fosters confidence and accountability, which are crucial for successful collaboration. 

Similarly, within organisations, transparent communication ensures everyone understands their roles and responsibilities in assurance activities. Developers of data-driven technologies often need to justify the allocation of resources for assurance activities to the larger team or address assurance challenges they can’t solve alone. 

Assurance beyond checklists: Introducing the TEA Platform 

To address the need for a more robust assurance framework, the Alan Turing Institute is developing the Trustworthy and Ethical Assurance (TEA) platform. This open-source tool is designed to help users develop and structure assurance cases, communicate them effectively, and deliberate on ethical principles across a wider team; that is, to go beyond simple compliance checks.

The TEA platform uses an argument-based assurance approach, focusing on normative or ethical principles such as fairness and explainability. We recognise that, to guide teams in making context-specific decisions, ethical principles first need to be operationalised. This requires multiple steps of deliberation:

  1. Identify relevant ethical principles 
  2. Weight principles in alignment with project goals 
  3. Specify principles to enable decision-making 
  4. Revise principles together with relevant stakeholders 
  5. Implement principles through specific actions 
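To make the argument-based approach more tangible, the sketch below models an assurance case as a top-level goal linked to specific claims and the evidence behind them. This is a minimal illustration with hypothetical names (`AssuranceCase`, `PropertyClaim`, `Evidence`); it is not the TEA platform's actual data model.

```python
# Illustrative sketch only -- hypothetical structure, not the TEA platform's API.
from dataclasses import dataclass, field


@dataclass
class Evidence:
    description: str   # what was done, e.g. a bias audit
    reference: str     # where reviewers can inspect it


@dataclass
class PropertyClaim:
    statement: str     # a specific, checkable claim about the system
    evidence: list[Evidence] = field(default_factory=list)


@dataclass
class AssuranceCase:
    goal: str          # top-level ethical goal, e.g. fairness
    claims: list[PropertyClaim] = field(default_factory=list)

    def unsupported_claims(self) -> list[str]:
        """List claims that still lack evidence, making gaps visible
        instead of hiding them behind a ticked box."""
        return [c.statement for c in self.claims if not c.evidence]


case = AssuranceCase(
    goal="The model's outputs are fair across all resident groups.",
    claims=[
        PropertyClaim(
            statement="Training data is representative of the population served.",
            evidence=[Evidence("Representativeness audit", "reports/audit-2024.md")],
        ),
        PropertyClaim(statement="Error rates are comparable across demographic groups."),
    ],
)
print(case.unsupported_claims())
# ['Error rates are comparable across demographic groups.']
```

The value of this structure is that unsupported claims surface automatically, prompting the deliberation steps above rather than a silent tick.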

As part of the TEA platform, we plan to publish targeted training materials to help teams adopt this deliberative method. Beyond training, the TEA platform aims to facilitate transparency with stakeholders and users by hosting assurance cases openly. By integrating the structured process with a practical tool, the TEA platform ensures that assurance is not just about meeting predetermined criteria but about understanding and communicating the limits and possibilities within the context of each project. This comprehensive approach bridges the gap between ethical principles and practical actions, enhancing the overall trustworthiness and effectiveness of new data-driven technologies.

If you are interested in the TEA platform and methodology, check out our website or reach out to us directly via email.

Thanks to Dr Christopher Burr (Senior Researcher in Trustworthy Systems) and Jennifer Ding (Senior Researcher in Research Applications) for input on this blog post.

The image accompanying this blog is licensed under CC BY-SA 4.0: https://creativecommons.org/licenses/by-sa/4.0/.


Sophie Arana
28 August 2024