AI at Hillhead High School in Glasgow and in London

Published: 10 January 2019

Martin Ingram and Cristina Denk-Florea, PhD students working with Prof. Frank Pollick, held an event at Hillhead High School in Glasgow. Martin, Cristina and Prof. Pollick also hosted an event with their industrial collaborators in London.

Martin Ingram and Cristina Denk-Florea, who are PhD students working with Prof. Frank Pollick, held an event at Hillhead High School with an S2 computing class. Martin and Cristina introduced the pupils to the concepts of artificial intelligence and social science/psychology, explaining in simple terms how AI works and how it is currently being used. They also tried to dispel some of the myths surrounding AI and encouraged the young audience to think about how AI and technology might affect their lives in the future.


They then spoke about how they use psychological research methods to study human-technology interactions, and how psychology can help us better understand the influence of intelligent machines on human users, which can inform the design of newer and better machines.

Prof. Pollick, Cristina and Martin also hosted an event with their industrial collaborators, Qumodo, a company that is looking at introducing social sciences to the artificial intelligence and forensic communities. At the meeting Prof. Pollick, Cristina and Martin talked about various aspects of psychological research in relation to human-machine teams.

Prof. Pollick opened the session with an overview of the many cognitive capacities of a human team member, such as their ability to empathise and use theory of mind to infer the internal state of other team members.

Cristina spoke about ways of employing artificial intelligence within forensic settings. She argued that this is a particularly important area of application, given that detectives are typically tasked with viewing and categorising thousands of images on computers seized from criminals, many of which can be violent and distressing to the viewer. Prolonged exposure to such images can be psychologically damaging to the detectives who must view them. Artificial intelligence, however, can be trained not only to categorise these images but also to censor them for potential viewers, potentially reducing the psychological harm to the detectives involved.

Lastly, Martin talked about how trust is an integral aspect of human-machine teams in the workplace. If a person has to use an artificial intelligence or autonomous system, they need to trust it appropriately for the team to be most effective. With too little trust, the human does not use the machine; with too much, they may become over-reliant on the system. Both cases can potentially lead to fatal errors, so the framing of system performance and appropriate feedback is vital to users' evaluation of the system and therefore integral to the overall success of the technology itself.


Additional information on the event is available here.
