New research could empower non-experts to help create trustworthy AI applications
Published: 2 April 2026
Involving people without AI expertise in the development and evaluation of artificial intelligence applications could help create better, fairer and more trustworthy automated decision-making systems, new research suggests.
After enlisting members of the public to evaluate the potential impacts of two real-world applications, researchers from UK universities will present a paper at a major international computing conference suggesting how ‘participatory AI auditing’ could improve AI decision-making in the future.
The team’s recommendations aim to tackle a key problem in how AI applications are developed. Although they can help reduce workloads and increase efficiency across public and private-sector organisations, the applications can also make poor-quality decisions unless they are properly scrutinised for signs of bias.
Currently, responsibility for ensuring the applications make fair and impartial decisions usually lies with the engineers and data scientists who develop them. When the developers fail to properly consider the social or economic conditions of the people affected by the tools’ outcomes, unexpected problems can arise.
By involving a wider group of people in the early stages of an AI application’s development, participatory audits aim to prevent those problems before they occur. Although participants may lack knowledge of how the systems work on a technical level, they can offer insight into social and ethical considerations that traditional audits may overlook.
The team’s paper is set to be presented at the ACM CHI Conference on Human Factors in Computing Systems in Barcelona later this month. It shows that people without AI expertise are keen to play a role in shaping AI applications from the earliest stages of development, and can offer unexpected insights into their impacts which could otherwise be easily overlooked. However, they need significant support to provide the most useful feedback, and new tools are required to guide them through the audit process.

Professor Simone Stumpf, of the University of Glasgow’s School of Computing Science, is the project’s lead investigator. She said: “Around the world, decisions made by governments, financial institutions and across the private sector are increasingly being made by AI, and the use of AI applications is likely to expand in the years to come.
“Regulations like the European Union’s AI Act, introduced in 2024, are seeking to limit the harms that badly designed AI applications could inflict on the people affected by their decisions. Our research aims to provide a systematic framework and tools to help people without AI expertise use their lived experience to identify and report those harms through participatory audits, and ultimately be more involved in creating more trustworthy AI systems.”
The team’s paper is based on the outcomes of a series of co-design workshops they ran with 17 people without AI expertise, who were tasked with auditing two AI tools designed for use in healthcare and education.
The first application, Scottish Patients at Risk of Readmission and Admission (SPARRA), is used by NHS Scotland to predict which patients are likely to require treatment in hospital over the next year. The second, the School Attachment Monitor (SAM), is a prototype developed at the University of Glasgow to help child psychologists and psychiatrists understand the bonds between children aged between 5 and 9 and their caregivers by analysing the children’s speech.
The study participants were a diverse group of patient representatives, teachers and parents, each of whom was either a potential end-user of the applications or a person whose life might be affected by the applications’ decisions.
The auditors were tasked with identifying the applications’ potential impacts, determining how those impacts should be measured, and suggesting how tools to support audits might work.
Each volunteer felt strongly that people affected by AI applications should be involved in their development, ideally beginning at the design stage and continuing through the entire process. They said that participatory audits should provide participants with clear explanations of the aims and objectives of the AI application, as well as transparency around who is running the audit itself.
The workshops also highlighted that the audits should capture both positive and negative impacts of the applications, and offer auditors the option to record ambiguous results. Although the researchers initially focused on identifying risks and harms, participants were keen to ensure that the potential benefits of the applications for the people they affect were also properly captured. They also said they felt constrained by having to mark aspects of the systems as having passed or failed an audit – instead, they suggested a third option to allow them to indicate when an impact defied binary categorisation.
However, they felt daunted by the prospect of determining how to measure the impacts of the tools overall.
The University of Glasgow’s Dr Eva Fringi, one of the study’s first authors, said:
"The participants were able to easily identify potential problems caused by both applications, and their insights sometimes highlighted issues which didn’t seem to have been considered by the applications’ developers. They struggled to consider on their own how the impacts of those events could be properly measured, but responded positively to the introduction of step-by-step prompts and being shown examples of how metrics have been used before. That suggests that they can provide useful feedback on metrics as long as they feel adequately supported through the audit process.”
While the researchers are continuing to work to build a comprehensive framework for responsible AI auditing, they believe that employing a collaborative approach to building AI applications will help improve public trust as well as benefit the organisations which develop them.
Dr Patrizia Di Campli San Vito, the paper’s other first author, said: “What participatory auditing offers is a potential new selling point for AI: this application has been vetted by a diverse range of people right from the start, which makes it much safer and more reliable than software developed with a philosophy of ‘move fast and break things’.
“We hope that this research will help pave the way to building trustworthy AI applications which have been developed from the ground up to make the best possible decisions.”
Researchers from the Universities of Sheffield, Stirling, Strathclyde and York contributed to the work and co-authored the paper.
The team’s paper, titled ‘Empowering Stakeholders with Participatory Auditing of Predictive AI: Perspectives from End-Users and Decision Subjects without AI Expertise’, will be presented at the CHI 2026 conference in Barcelona on Thursday, 16 April.
The research was supported by funding from the Engineering and Physical Sciences Research Council (EPSRC) through Responsible AI UK.