Responsible AI & Technology Zoomposium: 9 October 2023

Published: 11 October 2023

Prof Simone STUMPF: 'Human-in-the-loop machine learning for responsible AI'
Dr Mark WONG: 'Racial justice and fairness in AI through co-design'
Dr Charlie PEEVERS: 'AI development as equitable: law, justice and regulation'

Watch this Zoomposium here. Passcode: 9vz&ZV9=


Prof Simone Stumpf, School of Computing Science

'Human-in-the-loop machine learning for responsible AI'

My research interests are in end-user interactions with machine learning systems, particularly around explanations and intelligibility, integrating end-user feedback for model steering, and using these interactions to address fairness in AI models. I have worked in multiple domains with lay users, domain experts and data scientists to develop AI models and user interfaces, e.g. to make predictive machine learning systems intelligible, steer teachable object recognisers for people who are blind, and determine the fairness of loan application decisions. I am very keen to engage with other researchers and external organisations in order to develop funding proposals and to bring responsible and trustworthy AI to real-world settings.


Dr Mark Wong, School of Social & Political Sciences

'Racial justice and fairness in AI through co-design'

My interests are in digital society and policy, responsible/fair AI, and racial justice in AI, data, and technology. My research examines how AI and data-driven systems can perpetuate discrimination and systemic racism. My work seeks to centre the perspectives of impacted communities, particularly Minoritised Ethnic people/People of Colour, in AI development. I'm particularly interested in responsible innovation through co-design and participatory approaches, which put people's needs and the marginalised voices of impacted communities first. I'm open to collaborating with colleagues interested in these areas, particularly from an interdisciplinary perspective, where new knowledge can be created by pushing disciplinary boundaries. Opportunities to collaborate with communities, governments, the third sector, multinationals, and industry are also welcomed – to understand where needs lie in these sectors and how research can contribute to and support their work.


Dr Charlie Peevers, School of Law

'AI development as equitable: law, justice and regulation'

My broad interests are in the role of law and the legal regulation of novel technologies that engage foundational societal questions of justice, access, and participation. My approach emphasises the constitutive force of law in AI development, going beyond a mere regulatory role: legal protections and structures (whether relating to property rights or the market) play a significant though underplayed role in driving AI development and regulation in particular directions. At the moment, this interest is channelled into examining the distributional consequences of treating AI as an 'existential threat': whose voices does such a framing amplify, and whose are marginalised? How do political discourse, debate and access shift? How might a legal-historical analysis provide insight into more equitable and socially just approaches to both AI development and regulation?

I'm open to collaborating with colleagues interested in how questions of equity and social justice can be foregrounded in AI development and in debates over AI regulation. I'm keen to move beyond the technical and formalistic engagement with lawyers that often happens in these spaces, and instead explore collaboratively how developers, participants, regulators, and other stakeholders think about creativity, innovation, and regulation, and how this relates to issues of ethics and equity.
