UofG lends support to UK projects to address challenge of rapid AI advances

Published: 8 May 2024

Researchers from the University of Glasgow will play leading roles in projects supported by £12m in new funding from Responsible AI UK (RAi UK).

Glasgow computing scientists are involved in two of the three new initiatives announced by RAi UK during the CogX conference in Los Angeles.
 
The projects will look to tackle emerging concerns around generative AI and other forms of AI currently being built and deployed across society.
 
Dr Simone Stumpf will lead the £3.5m Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project. Meanwhile, Professor Dame Muffy Calder and Dr Michele Sevegnani will play key roles in PROBabLE Futures – Probabilistic AI Systems in Law Enforcement Futures, a £3.4m project led by Northumbria University.
 
RAi UK is led from the University of Southampton and backed by UK Research and Innovation (UKRI), through the UKRI Technology Missions Fund and EPSRC. UKRI has also committed an additional £4m of funding to further support these initiatives. 

 
The PHAWM project brings together 25 researchers from seven leading UK universities with 23 partner organisations.
 
The University of Glasgow will lead the consortium, with support from colleagues at the Universities of Edinburgh, Sheffield, Stirling, Strathclyde, York and King’s College London.
 
Together, they will develop new methods for maximising the potential benefits of predictive and generative AI while minimising their potential for harm arising from bias and ‘hallucinations’, where AI tools present false or invented information as fact.
 
The project will pioneer participatory AI auditing, where non-experts including regulators, end-users and people likely to be affected by decisions made by AI systems will play a role in ensuring that those systems provide fair and reliable outputs.
 
The project will develop new tools to support the auditing process in partnership with relevant stakeholders, focusing on four key use cases for predictive and generative AI, and create new training resources to help encourage widespread adoption of the tools.
 
The predictive AI use cases will focus on health and media content: analysing datasets used to predict hospital readmissions and assess child attachment for potential bias, and examining fairness in search engines and hate speech detection on social media.
 
In the generative AI use cases, the project will look at cultural heritage and collaborative content generation. It will explore the potential of AI to deepen understanding of historical materials without misrepresentation or bias, and how AI could be used to write accurate Wikipedia articles in under-represented languages without contributing to the spread of misinformation.
 
Dr Simone Stumpf, of the University of Glasgow’s School of Computing Science, is the project’s principal investigator. She said: “AI is a fast-moving field, with developments often at risk of outpacing the ability of decision-makers to ensure that the technology is used in ways that minimise the risk of harm. Regulators around the world are working to strike a balance between harnessing AI’s potentially transformative benefits for society and applying the most effective level of oversight to its outputs.
 
“Auditing the outputs of AI can be a powerful tool to help develop more robust and reliable systems, but until now auditing has been unevenly applied and left mainly in the hands of experts. The PHAWM project will put auditing power in the hands of people who best understand the potential impact in the four fields these AI systems are operating in. That will help produce fairer and more robust outcomes for end-users and help ensure that AI technologies meet their regulatory obligations.
 
“By the project’s conclusion, we will have developed a robust training programme and a route towards certification of AI solutions, and a fully-featured workbench of tools to enable people without a background in artificial intelligence to participate in audits, make informed decisions, and shape the next generation of AI.”
 
The £3.4m PROBabLE Futures project, led by Northumbria University’s Professor Marion Oswald MBE, brings together researchers from the Universities of Glasgow, Northampton, Leicester, Cambridge and Aberdeen with a number of law enforcement, commercial technology, third-sector and academic partners.
 
The project will focus on the uncertainties of using AI for law enforcement. Professor Oswald said that AI can help police and the courts tackle digital data overload, manage unknown risks and increase operational efficiency.
 
She added: “The key problem is that AI tools take inputs from one part of the law enforcement system, but their outputs have real-world, possibly life-changing, effects in another part – a miscarriage of justice is only a matter of time.
 
“Our project works alongside law enforcement and partners to develop a framework that understands the implications of uncertainty and builds confidence in future probabilistic AI, with the interests of justice and responsibility at its heart.”
 
Professor Dame Muffy Calder and Dr Michele Sevegnani, of the School of Computing Science, will lead the University’s contribution to PROBabLE Futures. Last year, Professor Calder co-authored a report from The Alan Turing Institute that examined how to balance the needs of national security with individual human rights in the use of AI.
 
Professor Dame Muffy, who is also head of the University’s College of Science & Engineering, said: “I’m pleased to be part of PROBabLE Futures. This project is well-placed to help ensure that AI can be integrated effectively and ethically into law enforcement infrastructures to help keep us all safer. 
 
“Last year, the University established our Centre for Data Science and AI to bring together our broad, multidisciplinary research base across the theory and application of artificial intelligence and machine learning.
 
“These two important research projects from RAi UK will help strengthen the University’s links with other leading institutions, as well as further establishing the UK as a leader in ethical AI.”
 
Funding has been awarded by Responsible AI UK (RAi UK), and the projects form the pillars of its £31 million programme, which will run for four years. RAi UK is backed by UK Research and Innovation (UKRI), through the UKRI Technology Missions Fund and EPSRC.
 
Since its launch last year, RAi UK has delivered £13 million of research funding. It is developing its own research programme to support ongoing work across major initiatives such as the AI Safety Institute, the Alan Turing Institute, and BRAID UK.
 
RAi UK is supported by UKRI, the largest public funder of research and innovation, as part of government plans to turn the UK into a powerhouse for future AI development.
 
Professor Gopal Ramchurn, CEO of Responsible AI UK (RAi UK), said:

“These projects are the keystones of the Responsible AI UK programme and have been chosen because they address the most pressing challenges that society faces with the rapid advances in AI.

“The projects will deliver interdisciplinary research that looks to address the complex socio-technical challenges that already exist or are emerging with the use of generative AI and other forms of AI deployed in the real world.

“The concerns around AI are not just for governments and industry to deal with – it is important that AI experts engage with researchers and policymakers to ensure we can better anticipate the issues that will be caused by AI.”

