Research project launches free tool to make AI safer and more trustworthy
Published: 18 February 2026
A University of Glasgow-led research project is releasing a free tool to help organisations, policymakers, and the public maximise the benefits of AI applications while identifying their potential harms.
The tool, developed as part of the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project, aims to help address the urgent need for rigorous assessments of AI risks caused by the rapid expansion and adoption of the technology across a wide range of sectors.
It is designed to support the aims of regulations like the European Union’s AI Act, introduced in 2024, which seek to balance AI innovation with protections against unintended negative consequences.
PHAWM’s new open-source workbench tool will empower users without extensive backgrounds in AI to conduct in-depth audits of the strengths and weaknesses of any AI-driven application.
It also actively involves audiences who are usually excluded from the audit process, including those who will be affected by the AI application’s decisions, in order to produce better outcomes for end users.
The tool is the first public outcome from PHAWM, which launched in May 2024 with £3.5m in funding from Responsible AI UK (RAi UK).
It brings together more than 30 researchers from seven leading UK universities with 28 partner organisations to tackle the challenge of developing trustworthy and safe AI systems.

The tool and its accompanying framework, which guides organisations and communities to use the tool effectively, are both publicly available and free to download from the project’s website.
Professor Simone Stumpf, of the University of Glasgow’s School of Computing Science, leads the PHAWM project. She said: “Generative and predictive AI applications have the potential to give organisations valuable new ways to deliver improved services for end users. They are already influencing decisions in areas including housing, employment, finance, policing, education, and healthcare.
“However, they can be afflicted by flaws like bias and inaccuracy. If we are to avoid building AI applications which reinforce unfair outcomes in critical services, they must be carefully monitored and regularly audited by humans.
“Until now, such audits have usually been conducted by people with a deep understanding of the processes which drive AI, but who may lack insight into the social or cultural impacts those systems can create. There is rarely an opportunity for the people who will regularly use, or be affected by, AI decision-making to help guide the development of these systems.
“Our new workbench tool is designed to help organisations create better, fairer, more transparent AI systems by providing diverse perspectives on AI applications which might otherwise go unexamined.”
The tool and its accompanying framework have been developed through extensive co-design workshops with the project’s partners and other stakeholders in the health and cultural heritage sectors. These are two of the four areas the PHAWM project was established to investigate, alongside media content and collaborative content generation.
The PHAWM tool works by systematically gathering diverse perspectives on an organisation’s current or prospective AI application through a four-stage auditing process.
First, the audit instigator is guided to provide information about the AI system in accessible, non-technical language.
Second, they invite relevant stakeholders to participate in the auditing process, including users of the system and the people its decisions will affect, such as the public or patients in the case of health AI applications.
Next, the audit participants are guided to align the audit with their concerns and their lived experience of the AI application’s impact on their daily lives or profession. The tool and framework then help participants identify potential positive and negative impacts, create metrics to measure them, and assess whether the AI application under audit can meet their criteria. The application receives a pass or fail grade against the criteria set by each participant.
Finally, the audit instigator collects the data and insights from the audit participants, identifying areas of concern raised during the process. They can use the diverse perspectives gathered to develop action plans which will inform their decisions about how the AI application is developed or integrated into practice.
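To give a concrete sense of how the four stages fit together, the sketch below models an audit as a simple data structure: an instigator’s plain-language system description, invited participants, participant-defined criteria with pass/fail thresholds, and a summary of concerns. It is purely illustrative; all class, field, and example names here are hypothetical and are not drawn from the actual PHAWM workbench.

```python
# Illustrative sketch only: a minimal data model for the four-stage audit
# flow described above. Every name here is hypothetical; this is not the
# PHAWM workbench's actual API.
from dataclasses import dataclass, field


@dataclass
class Criterion:
    """A participant-defined metric with a pass threshold (stage 3)."""
    description: str           # the concern, in the participant's own words
    metric_name: str           # how the impact will be measured
    threshold: float           # minimum acceptable score
    score: float | None = None

    def passed(self) -> bool | None:
        # None until the criterion has actually been scored
        return None if self.score is None else self.score >= self.threshold


@dataclass
class Participant:
    """A stakeholder invited to the audit: a user, patient, or member of the public."""
    name: str
    role: str
    criteria: list[Criterion] = field(default_factory=list)


@dataclass
class Audit:
    """Stage 1: the instigator describes the AI system in plain language."""
    system_description: str
    participants: list[Participant] = field(default_factory=list)

    def invite(self, participant: Participant) -> None:
        """Stage 2: bring relevant stakeholders into the process."""
        self.participants.append(participant)

    def summarise(self) -> dict[str, list[str]]:
        """Stage 4: collect results and flag areas of concern per participant."""
        concerns: dict[str, list[str]] = {}
        for p in self.participants:
            failed = [c.description for c in p.criteria if c.passed() is False]
            if failed:
                concerns[p.name] = failed
        return concerns


# Stage 3 in miniature: a participant sets a criterion, it is scored, and
# the summary surfaces any failures for the instigator's action plan.
audit = Audit(system_description="Triage assistant for routine referrals")
patient = Participant(name="Patient rep", role="affected stakeholder")
patient.criteria.append(
    Criterion(description="Equal referral rates across age groups",
              metric_name="demographic parity", threshold=0.9, score=0.8)
)
audit.invite(patient)
print(audit.summarise())  # {'Patient rep': ['Equal referral rates across age groups']}
```

In this toy version, each participant’s criteria are kept separate rather than averaged together, mirroring the article’s description of a pass or fail grade against each participant’s own criteria rather than a single aggregate score.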
Professor Stumpf added: “The tools and processes we’ve developed offer a practical, community-centred approach to evaluating the real-world impacts of artificial intelligence. The workbench is flexible: it can be used to run in-depth audits of AI applications an organisation has developed in-house, as well as to investigate whether off-the-shelf AI applications will meet an organisation’s needs before they are purchased.
“Being able to look in such depth, and from so many different angles, will help organisations make properly informed decisions about the balance of risk and reward that comes from adopting new technologies. Our hope is that the tool and framework we’ve developed with our partners and stakeholders will enable organisations to reap the benefits of AI while avoiding its potential harms.”
The PHAWM team are continuing to refine the tool and framework in collaboration with representatives from their four key areas of investigation.
Public Health Scotland and NHS National Services Scotland (NSS) contribute to PHAWM’s health use case, while Istella contributes to the media content use case. The National Library of Scotland, the Museum Data Service and the David Livingstone Birthplace Trust participate in the cultural heritage use case, and Wikimedia is involved in the collaborative content generation use case.
The PHAWM team are also currently developing comprehensive training and support for certification to help organisations adopt PHAWM’s auditing tools as effectively as possible.