GIST video

An overview of our GIST Research Section from the previous Section Head, Professor Steve Brewster.

Overview

The Human-Computer Interaction research section is also known as GIST (Glasgow Interactive SysTems). In our research we create and use novel interactive systems to better understand, entertain, protect and support humans in their everyday lives. GIST is a research section made up of several research groups.

A lot of the research we undertake is collaborative and interdisciplinary. We work closely with other groups in Computing Science as well as other schools, including Psychology and the Institute of Health and Wellbeing. We also work closely with other world-leading universities and many private and public sector organisations (recently: Facebook, Jaguar Land Rover, Logitech, Aldebaran Robotics, Pufferfish, Bang & Olufsen, Freescale Semiconductor Inc., Glasgow City Council, Scottish Business Resilience Centre, Dynamically Loaded and Cisco Systems).

What is GIST?

There is no effective human-computer interaction without a deep understanding of the ever-changing boundary between people and technology. Our approach in GIST spans from the infrastructure level, through perception and action, to social context.

Our research focuses on ensuring the security of human-centred systems, optimising the information flow between technology and human senses, making machines capable of human-like social interactions, and making sense of digital traces left by human communities.

Section members

Academic Staff:

Affiliate Staff:

Research Staff:

Research Students:

  • Melvin Abraham
  • Nujud Aloshban
  • Norah Mohsen T Alotaibi
  • Noora Alsakar
  • Hadeel Alsaleh
  • Rawan Alsarrani
  • Basmah Alsenani
  • Huda Abdulgani O Alsofyani
  • Nesreen Farraj S Alareef
  • Ammar Altaie
  • Andrea Avogaro
  • Laura Bajorunaite
  • Jake Bhattacharyya
  • Andrei Birladeanu
  • Jacqueline Borgstedt
  • Robin Bretin
  • Abeer Buker
  • Areej Buker
  • Isna Alfi Bustoni
  • Iain Christie
  • Federico Cunico
  • Monica Duta
  • Habiba Farzand
  • Zejian Feng
  • Cristina Fiani
  • Rhiannon Fyfe
  • Thomas Goodge
  • Jinling Huang
  • Xinyu Li
  • Lin Luo
  • Omar Namnakani
  • Emily O’Hara
  • Sakrapee Paisalnan
  • Francesco Perrone
  • Michael Pelikan
  • Zhanyan Qui
  • Gordon Rennie
  • Hongyun Sheng
  • Fuxiang Tao
  • Andreas Toaiari
  • Amelie Voges
  • Jiaqi Wang
  • Weiyun Wang
  • Kieran Waugh
  • Sean Westwood
  • Antonius Bima Murti Wijaya
  • Rawan Zreik-Srour
  • Diego Drago
  • James Ross

 

Projects

Current:

ViAjeRo: The aim of ViAjeRo ('traveller' in Spanish) is to radically improve all passenger journeys by facilitating the use of immersive Virtual and Augmented Reality (together called XR) to support entertainment, work and collaboration when on the move (2019 – 2024) – Prof Stephen Brewster (PI). Funded by ERC #835197 (€2,443,657).

UKRI Centre for Doctoral Training in Socially Intelligent Artificial Agents: The centre will train 50 PhD students in Artificial Social Intelligence, the domain aimed at endowing machines with the ability to understand social interactions like people do (2019-2027) - Prof Alessandro Vinciarelli (PI). Funded by UKRI (£4,902,252). 

Zoo Devices for Animal Welfare and Visitor Education. Funded by the Royal Society of Edinburgh (2022-2023). Dr Ilyena Hirskyj-Douglas (PI). (£65,000).

A toolkit for identification and mitigation of XR dark patterns (2023-2024). Funded by Meta Labs. Mohamed Khamis (PI), Pejman Saeghe (Co-I), Mark McGill (Co-I). ($75,000).

MetaSafeChild: Assessing Child Safety in the Metaverse and Developing Safety-Enhancing Technologies (2023-2024). Funded by REPHRAIN. Mohamed Khamis (PI), Mark McGill (Co-I), Mathieu Chollet (Co-I). (£79,877.82)

Personalised Acceptance and Commitment Therapy through app-based micro-content (PACT-am) – funded by Parkinson’s UK, 2022-2024. Dr Simone Stumpf.

RadioMe: Real-time Radio Remixing for people with mild to moderate dementia who live alone, incorporating Agitation Reduction, and Reminders (2019 – 2024) – Prof Stephen Brewster (PI). Funded by EPSRC EP/S026991/1 (£541,362).

FETProact Sonicom: Transforming auditory-based social interaction and communication in AR/VR. Funded by the EU Horizon 2020 program - Prof Alessandro Vinciarelli and Prof Stephen Brewster (€499,876.25)

Effi: End-users fixing fairness issues: Industry-funded investigation into the role and impact of end-user human-in-the-loop tools in ensuring fair Artificial Intelligence (2022-2023) - Dr Simone Stumpf (PI). Funded by Fujitsu Ltd., Japan (£163,000). 

TAPS: Assessing, Mitigating and Raising Awareness of the Security and Privacy Risks of Thermal Imaging (2021-2023) - Dr Mohamed Khamis (PI). Funded by EPSRC EP/V008870/1 (£262,119).

PT.HEAT: Preventing THErmal ATtacks (2021-2023) - Dr Mohamed Khamis (PI). Funded by PETRAS (£177,075).

Horizon Digital Economy Research Hub: https://www.horizon.ac.uk/research/ - Prof Matthew Chalmers

Interaction Design for Trusted Sharing of Personal Health Data to Live Well with HIV (INTUIT) – funded by EPSRC, 2018-2022. Dr Simone Stumpf.

First RespondXR: Digital vulnerability of immersive training for first responders - Dr Mark McGill. Funded by SPRITE+ (2021-2022)

Facilitating Parental Insight and Moderation for Safe Social VR (2021-2024) - Dr Mark McGill (Co-I) and Dr Mohamed Khamis (PI). Funded by Facebook Reality Labs ($75,000).

Using AI-Enhanced Social Robots to Improve Children’s Healthcare Experiences - Dr Mary Ellen Foster (UKRI/Canada joint project). 

Horizon: Trusted Data-Driven Products (2020-2025) - Professor Matthew Chalmers (affiliate). Funded by EPSRC EP/T022493/1 (£4M).

Past:

CoDesigning Fair AI (COFAI) – funded by Fujitsu 2019-2021. Dr Simone Stumpf.

PriXR: Protecting Extended Reality (XR) user and bystander privacy by supporting legibility of XR sensing and processing (2022-2023) - Dr Mark McGill (PI), Dr Mohamed Khamis (Co-I). Funded by REPHRAIN (£79,997.14).

Human Data Interaction: Legibility, Agency, Negotiability (2018-2022) - Prof Matthew Chalmers. Funded by EPSRC EP/R045178/1 (£1.04M).

Emergence of Cybersecurity Capability across Critical National Infrastructure (2021-2022) - Dr Mohamed Khamis. Funded by the National Cyber Security Centre (£140,731, of which £133,659 for UofG).

SoCoRo: Socially Competent Robots (2016 - 2020) - Prof Alessandro Vinciarelli. Funded by the EPSRC (£355,000).

SAM: School Attachment Monitor (2015-2018) - Professor Stephen Brewster. Funded by the EPSRC (£776,875).

MuMMER: MultiModal Mall Entertainment Robot (2016 – 2020) – Dr Mary Ellen Foster. Funded by EU Horizon 2020. 

Populations: A Software Populations Approach to UbiComp Systems Design (2011 – 2016) – Prof Matthew Chalmers. Funded by EPSRC (£4M).

EuroFIT: Social innovation to improve physical activity and sedentary behaviour through elite European football (2013-2017) – Prof Matthew Chalmers. Funded by EU FP7 (€5M).

Anyscale Apps (2013 – 2017) – Prof Matthew Chalmers. Funded by EPSRC EP/L000725/1 (£1.1M).

ABBI: Audio Bracelet for Blind Interaction (2014 – 2017) – Prof Stephen Brewster. Funded by EU FP7.

HAPPINESS: Haptic Printed Patterned Interfaces for Sensitive Surfaces (2015 – 2018) – Prof Stephen Brewster. Funded by EU Horizon 2020. 

Seminar series

We are happy to hear from anyone who would like to visit us to give a talk. The GIST seminar coordinators are Lin Luo and Dr Mohamed Khamis. Please get in touch with them if you are interested.

You can watch recordings of the GIST seminars on the GIST YouTube channel.



Past events

GIST Seminar (18 April, 2024)

Speaker: Prof. Duncan Brumby

Dear all,

Prof. Duncan Brumby (external speaker from UCL) will give a GIST seminar on 18 Apr. Anyone interested is welcome to attend :)

Date: 18 Apr, 2024 

Time: 13:00 - 14:00 

Title: Human-AI Interaction: Cooking, Reviewing, and Looking for Work

Abstract: "In this talk, I’ll present three interrelated studies of human-AI interaction. First, I’ll tell you what happened when we asked students to bake a cake using only a voice user interface (VUI) to follow the recipe. The errors they encountered along the way highlight the capabilities and limitations of VUIs in supporting daily tasks. Next, I’ll explore a timely scenario in which individuals routinely copy and paste AI-generated text from ChatGPT without carefully reading it first. I’ll discuss the potential consequences of such behaviour and propose that integrated attention checks might serve as a safeguard against our inclination towards convenience. Lastly, I’ll give a detailed snapshot of daily life working in the gig economy, highlighting the tension between the flexibility offered by digital platforms and the restrictions they impose on workers. These three studies, which will be presented at CHI 2024, give insights into the benefits, challenges, and pitfalls of human-AI interaction. And thanks to ChatGPT for drafting this abstract—rest assured, it has been thoroughly reviewed before sending."

Bio:  

Duncan Brumby, a Professor of Human-Computer Interaction at University College London, focuses on how people interact with computing technology. A recognized expert in the HCI community, he has published extensively and serves as the Editor-in-Chief of the International Journal of Human-Computer Studies. Prof. Brumby has made significant contributions to HCI education and research, including leadership in postgraduate education at UCL. He has shared his expertise globally, including at top universities and industry leaders like Google and Microsoft.

Location: SAWB 423, Sir Alwyn Williams Building 

Online (Zoom link): https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (07 March, 2024)

Speaker: Dr Emanuel von Zezschwitz

Dear all,

Dr. Emanuel von Zezschwitz (external speaker from Google) will give a GIST seminar on 7 March. In his presentation, he will provide an overview of research projects and discuss opportunities and challenges when it comes to user-centered privacy and security on the web. Anyone interested is welcome to attend :)

Date: 7 March, 2024 

Time: 13:00 - 14:00 

Topic: T&S UX Research at Google Chrome

Bio:  

Dr. Emanuel von Zezschwitz is an HCI expert, computer scientist and UX researcher with a focus on usable privacy and security. During his academic career, he focused on usable authentication mechanisms and mobile device privacy. Today, he works at Google, where he leads research efforts that shape the Chrome browser. His research focuses on web privacy, web authentication, and safe transactions.

Location: SAWB 423, Sir Alwyn Williams Building 

Online (Zoom link): https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GRILLBot In Practice: Lessons and Tradeoffs Deploying LLMs for Adaptable Conversational Task Assistants (05 March, 2024)

Speaker: Sophie Fischer

In this talk, we will introduce GRILLBot, which tackles the challenge of building real-world multimodal assistants for complex tasks. We describe the practicalities and challenges of developing and deploying our leading system (first- and second-prize winning in 2022 and 2023) in the Alexa Prize TaskBot Challenge. Building on our Open Assistant Toolkit (OAT) framework, we propose a hybrid architecture that leverages Large Language Models (LLMs) and specialised models tuned for specific subtasks requiring very low latency. OAT allows us to define when, how and which LLMs should be used in a structured and deployable manner. For knowledge-grounded question answering and live task adaptations, we show that LLM reasoning abilities over task context and world knowledge outweigh latency concerns. For dialogue state management, we implement a code generation approach and show that specialised smaller models achieve 84% effectiveness with 100x lower latency. Overall, we provide insights and discuss tradeoffs for deploying both traditional models and LLMs to users in complex real-world multimodal environments in the Alexa TaskBot challenge. These experiences will continue to evolve as LLMs become more capable and efficient – fundamentally reshaping OAT and future assistant architectures.
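To make the "when, how and which LLMs" point concrete, here is a minimal Python sketch of a hybrid routing policy in that spirit. It is illustrative only, not OAT or GRILLBot code: the Turn fields, the intent labels and the 200 ms latency budget are our own assumptions, standing in for whatever signals a real deployment would use.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Turn:
    """One user turn. The fields here are illustrative assumptions."""
    user_utterance: str
    intent: str            # e.g. "state_update", "question"
    latency_budget_ms: int

def route_request(turn: Turn,
                  small_model: Callable[[str], str],
                  llm: Callable[[str], str]) -> str:
    """Hypothetical routing policy in the spirit of the abstract:
    low-latency subtasks such as dialogue-state updates go to a
    specialised small model, while knowledge-grounded question
    answering, where reasoning quality outweighs latency, goes
    to the LLM."""
    if turn.intent == "state_update" or turn.latency_budget_ms < 200:
        return small_model(turn.user_utterance)
    return llm(turn.user_utterance)

# Toy usage with stub callables standing in for real model endpoints.
small = lambda text: f"[small model] parsed: {text}"
big = lambda text: f"[LLM] grounded answer for: {text}"
print(route_request(Turn("next step", "state_update", 100), small, big))
print(route_request(Turn("why rest the dough?", "question", 2000), small, big))
```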

The talk is based on two recent papers about GRILLBot v2:

  • GRILLBot In Practice: Lessons and Tradeoffs Deploying Large Language Models for Adaptable Conversational Task Assistants: https://arxiv.org/pdf/2402.07647.pdf (2024, under submission)
  • GRILLBot-v2: Generative Models for Multi-Modal Task-Oriented Assistance: https://assets.amazon.science/f3/75/cbd31079434eaf0c171a1ae0c8a8/grill-tb2-final-2023.pdf (2023)


GIST Seminar (22 February, 2024)

Speaker: Shaun Alexander Macdonald

Dear all,

Dr. Shaun Alexander Macdonald will give a GIST seminar. Anyone interested is welcome to attend :)

Date: 22 February, 2024 

Time: 13:00 - 14:00 

Topic: Tailoring Comfort to Person and Place - Emotionally Resonant Vibrating Comfort Objects for Socially Anxious Situations

More about the talk: 

" I will talk about my increased prioritised individual and qualitative experiences in my pursuit of a calming vibrotactile intervention for socially anxious users and the lessons I learned about emotionally resonant vibrations, social anxiety, and my research philosophy.

We will walk through the project path of discovery, from the start just messing around with some novel vibrotactile stimuli, through to tailoring the use of those stimuli based on the cognitive model of social anxiety, emotion regulation theory, exposure therapy and the personal experiences and desires of the end-user. "

Bio: Dr Shaun Alexander Macdonald - Increasing Heart Rate and Anxiety Level (owlstown.net)

Location: SAWB 423, Sir Alwyn Williams Building 

Online (Zoom link): https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (08 February, 2024)

Speaker: Mary Ellen Foster

Dear all,

Dr. Mary Ellen Foster will give a GIST seminar on 8 February. She will discuss recent and ongoing projects from her research group that involve deploying a social robot into an existing workplace. Anyone interested is welcome to attend.

Date: 8 February, 2024 

Time: 13:00 - 14:00 

Title: Adding a Social Robot to an Existing Workplace: Stakeholders and Power

Abstract: The talk will focus particularly on the role of the various stakeholders in this process, including managers of the workplace as well as workers on the ground who may need to adjust to their robot co-worker.

Bio: Dr Mary Ellen Foster, School of Computing Science, University of Glasgow.

Location: SAWB 423, Sir Alwyn Williams Building 

Online (Zoom link): https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (25 January, 2024)

Speaker: Julie Williamson & Mark McGill

Dear all,

Julie (ERC Consolidator Grant - FUSION) and Mark (ERC Starting Grant - AUGSOC) will give their successful ERC interview presentations and a bit of background about their newly funded ERC 2023 projects. Anyone interested is welcome to attend.

Date: 25 January, 2024 

Time: 13:00 - 14:00 

Location: SAWB 423, Sir Alwyn Williams Building 

Online (Zoom link): https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


Some potential harms of everyday XR and how we might prevent them (11 January, 2024)

Speaker: Joseph O'Hagan

Abstract: As extended reality technologies (virtual reality, augmented reality, and mixed reality) move ever closer to wearable, everyday form factors, and with a new year now upon us, it is a time to reflect on how these technologies might one day reshape our society through the augmentation of people and places. In this talk I will discuss work I have done over the past year exploring some of the harms such technology can enable through detecting and augmenting persons, and through a lack of social-location awareness when augmenting a real-world location.

 

Kindly find the Zoom link below:

https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


Creating Trouble with Dots and Joins (14 December, 2023)

Speaker: Professor Gilbert Cockton

 
ABIOSTRACT: Gilbert Cockton was a founder member of GIST in 1988 (called GUCCHI for its first year). He is now an Emeritus Professor in both Design (Northumbria) and Computer Science (Sunderland), as well as an occasional external research mentor to the Centre for Culture and Creativity at Teesside University. He has explored the dynamics of the dot-to-dot puzzle of design work since 1982, when he authored eLearning programs while a secondary school teacher. His initial research focused on one dot, exploring specification notations and architectures for interactive systems. He then focused on the joins and more dots with his first two Glasgow PhD students: Steven Clarke had an early look at the mechanics of Contextual Design; Darryn Lavery critiqued usability problems and how to find them. Together we created trouble in different ways, challenging orthodoxies on language models for interaction, the logic and realities of Contextual Design, and the contextuality of usability problem discovery. Moving to a research chair at Sunderland in 1997 via a year at Northumbria (while still a visiting research fellow at Glasgow), he focused on evaluation work and accessibility as a specific focus for contextual design (including culturally sensitive design). He also directed large regional support projects for the digital sector in the northeast of England, adding a new (to him) dot of product strategy to the dots of digital artefacts, evaluation and usage contexts. This led to a NESTA fellowship on value-centred design (now WoFo, worth-focused design) and then a move from computing to design. In 2009 he became Professor of Design Theory in the School of Design at Northumbria. He created more trouble there (for computing) by challenging the linear orthodoxies of (software) engineering design and later the dot-poverty of agile processes. His BIG Design paradigm framed design work as connecting between the four dots of artefacts, beneficiaries, evaluations and purpose. His Northumbria PhD student Jenni George developed novel ways of tracking the evolution of these four dots (design arenas) and the joins between them, using a range of novel approaches to linking artefacts to purpose, purpose to beneficiaries, and evaluation to purpose. BIG Design was combined with Jenni's framework in courses at NordiCHI, CHI and World Usability Day, in postgraduate design degrees at TU/e, and in HCI PhD schools.
 
Gilbert has had leadership roles in the British HCI Group, SIGCHI, ACM and IFIP, and Associate Dean and Head of Department roles. He won't mention these at all in his talk, or how he ended up with two emeritus titles. He was awarded the SIGCHI Lifetime Service Award in 2020 and an IFIP TC13 Pioneer award in 2023.
 
In his talk, Gilbert will present BIG WoFo design as a concurrent dot-to-dot creative strategic approach to the design of interactive digital systems. He will relate relevant aspects to his research within GIST, mainstream HCI research at Sunderland, and research into creative practices at Northumbria.
 

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


A journey into deep generative modelling of human expression (07 December, 2023)

Speaker: Gustav Eje Henter

Abstract:
The ability to automatically create convincing speech audio and 3D animated motion holds great promise for enhanced realism and creativity in games and film, and for improved human-computer interaction using virtual avatars and social robots. In this talk, I aim to demonstrate how recent advances in data-driven, deep generative modelling have taken us closer to realising that potential, using examples from my research journey in the field. Along the way, we will consider the synthesis of locomotion, speech and gestures, facial expression, and dance. When we are done, I hope to have convinced you that probabilistic models are the right approach, and that the speech and motion communities have much to gain from working more closely together.

Bio:
Gustav Eje Henter is a docent and a WASP assistant professor in machine learning at the Division of Speech, Music and Hearing at KTH Royal Institute of Technology. His main research interests are deep probabilistic modelling for data-generation tasks, especially speech and 3D motion/animation synthesis. He has an MSc and a PhD from KTH, followed by post-docs in speech synthesis at the Centre for Speech Technology Research at the University of Edinburgh, UK, and in Prof. Junichi Yamagishi's lab at the National Institute of Informatics, Tokyo, Japan, before returning to KTH in 2018.


GIST Seminar (30 November, 2023)

Speaker: Mark Coté & Jennifer Pybus

Dear all

The next hybrid GIST seminar will take place on November 30 at 13:00 and will be delivered by Mark Coté (Reader in Data and Society at King's College London) & Jennifer Pybus (Canada Research Chair in AI Data and Society at York University).

Topic: Super SDKs: Tracking Personal Data and Platform Monopolies in the Mobile

More about the talk: "We will present our socio-technical research on the Software Development Kit (SDK), well known to developers for building applications, but little-known to researchers on the social dimensions of data. Our work demonstrates the importance of these technical objects for personal data capture and platform monopolisation. The average person has more than 40 different apps on their phone, and each app uses an average of 18 SDKs which harvest, share, and process our data. Users have little access to or understanding of this core technical data hub, or of how it is supercharging profits for tech giants like Google or Facebook. We will present our taxonomy and open up SDKs using socio-technical methods to demonstrate how digital giants are controlling our data."
 

Mark Coté is a Reader in Data and Society in the Department of Digital Humanities at King's College London, and a socio-technical researcher focusing on big data, mobile apps, algorithms and machine learning. He has been PI or Co-I on EPSRC, H2020, and AHRC grants valued at more than £10 million, including SoBigData++, the European Research Infrastructure on social data analytics, REPHRAIN, and Safe AI Assistants. His work has been published widely across leading journals and venues, including Big Data & Society, CHI, CUI and IEEE Computer.

Jennifer Pybus is Canada Research Chair in AI Data and Society at York University. Her interdisciplinary research intersects digital and algorithmic cultures and explores the capture and processing of personal data across social media platforms in relation to algorithmic profiling, monetisation, polarization and bias. She addresses the lived experiences of datafication by cultivating innovative tools, resources and pedagogy for increasing critical data literacy and agency, and through democratic debate about artificial intelligence.

 
Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (02 November, 2023)

Speaker: Kieran Waugh; Jolie Bonner

Dear all

The next hybrid GIST seminar will take place on November 2nd at 13:00 and will be delivered by Kieran Waugh (Proxemic Cursor Interactions for Touchless Widget Control) & Jolie Bonner (When Filters Escape the Smartphone: Exploring Acceptance and Concerns Regarding Augmented Expression of Social Identity for Everyday AR).

Topic 1: Proxemic Cursor Interactions for Touchless Widget Control (by Kieran Waugh)
 
Abstract: Touchless gesture interfaces often use cursor-based interactions, where widgets are targeted by a movable cursor and activated with a mid-air gesture (e.g., push or pinch). Continuous interactions like slider manipulation can be challenging in mid-air because users need to precisely target widgets and then maintain an ‘activated’ state whilst moving the cursor. We investigated proxemic cursor interactions as a novel alternative, where cursor proximity allows users to acquire and keep control of user interface widgets without precisely targeting them. Users took advantage of proxemic targeting, though gravitated towards widgets when negotiating the boundaries between multiple elements. This allowed users to gain control more quickly than with non-proxemic behaviour, and made it easier to move between user interface elements. We find that proxemic cursor interactions can improve the usability of touchless user interfaces, especially for slider interactions, paving the way to more comfortable and efficient use of touchless displays.
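To make the mechanism concrete, here is a minimal Python sketch of one plausible proxemic targeting rule, with hysteresis so a widget keeps control without precise targeting. The widget layout, the radii and the two-radius rule are invented for illustration; the study's actual parameters may differ.

```python
import math

# Hypothetical widget layout and radii, invented for this sketch.
WIDGETS = {"slider": (100, 300), "button": (400, 300)}
ACQUIRE_RADIUS = 80   # px: cursor this close acquires a widget
RELEASE_RADIUS = 160  # px: larger radius retains control (hysteresis)

def nearest_widget(cursor):
    return min(WIDGETS, key=lambda w: math.dist(cursor, WIDGETS[w]))

def update_control(cursor, active):
    """Return which widget (if any) holds control this frame.
    An active widget keeps control anywhere inside RELEASE_RADIUS,
    so the user need not keep the cursor precisely on target."""
    if active and math.dist(cursor, WIDGETS[active]) <= RELEASE_RADIUS:
        return active
    candidate = nearest_widget(cursor)
    if math.dist(cursor, WIDGETS[candidate]) <= ACQUIRE_RADIUS:
        return candidate
    return None

active = None
for cursor in [(120, 310), (210, 305), (390, 300)]:
    active = update_control(cursor, active)
    print(cursor, "->", active)  # slider, slider (kept), button
```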
 
Topic 2: When Filters Escape the Smartphone: Exploring Acceptance and Concerns Regarding Augmented Expression of Social Identity for Everyday AR
 
Abstract: Mass adoption of Everyday Augmented Reality (AR) glasses will enable pervasive augmentation of our expression of social identity through AR filters, transforming our perception of self and others. However, despite filters’ prominent and often problematic usage in social media, research has yet to reflect on the potential impact AR filters might have when brought into everyday life. Informed by our survey of 300 existing popular AR filters used on Snapchat, Instagram and TikTok, we conducted an AR-in-VR user study where participants (N=24) were exposed to 18 filters across six categories. We evaluated the social acceptability of these augmentations around others and attitudes towards an individual’s augmented self. Our findings highlight 1) how users broadly respected another individual’s augmented self; 2) positive use cases, such as supporting the presentation of gender identity; and 3) tensions around applying AR filters to others (e.g. censorship, changing protected characteristics) and their impact on self-perception (e.g. perpetuating unrealistic beauty standards). We raise questions regarding the rights of individuals to augment and be augmented that provoke the need for further consideration of AR augmentations in society.
 
Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


Powerplay: Dismantling the White Patriarchal origins of robotics (01 November, 2023)

Speaker: Tom Williams

Abstract:
Roboticists often talk of social robots' power to influence and persuade interactants. But roboticists also wield other, subtler types of power that are typically ignored. In this talk I will describe how roboticists wield cultural, disciplinary, and structural power, and show how understanding both the default ways this power is used and the ways it might be refocused requires understanding the cultural, political, and historical factors that shape the power landscapes in which robots are developed and deployed. Using an American context as an example, I will show how White Patriarchal ambitions in the 19th-century United States influenced the history of robotics, the way that robots by default continue to reinforce racial and gendered power hegemonies in the US and beyond, and the types of design frameworks we can adopt to subvert these default power dynamics.
 
Bio:
Tom Williams is an Associate Professor of Computer Science at the Colorado School of Mines, where he directs the Mines Interactive Robotics Research Lab. Prior to joining Mines, Tom earned a joint PhD in Computer Science and Cognitive Science from Tufts University in 2017. Tom’s research focuses on enabling and understanding natural language based human-robot interaction that is sensitive to environmental, cognitive, social, and moral context. His work is funded by grants from NSF, ONR, and ARL, as well as by Early Career awards from NSF, NASA, and AFOSR. Tom is currently on sabbatical at the University of Bristol and the Bristol Robotics Laboratory, where he is writing a book for MIT Press on the social and ethical implications of interactive robots.


GIST Seminar (19 October, 2023)

Speaker: Dr. Simone Stumpf; Dr. Evdoxia Taka

Dear all

The next hybrid GIST seminar will take place on October 19th at 13:00 and will be delivered by Dr. Simone Stumpf and Dr. Evdoxia Taka from our GIST research group.
 
Topic: Exploring the Impact of Lay User Feedback for Improving AI Fairness
 
Abstract: Fairness in AI is a growing concern for high-stakes decision-making. Engaging stakeholders, especially lay users, in fair AI development is promising yet overlooked. Recent efforts explore enabling lay users to provide AI fairness-related feedback, but there is still a lack of understanding of how to integrate users’ feedback into an AI model and the impacts of doing so. To bridge this gap, we collected feedback from 58 lay users on the fairness of an XGBoost model trained on the Home Credit dataset, and conducted offline experiments to investigate the effects of retraining models on accuracy, and on individual and group fairness. Our work contributes baseline results of integrating user fairness feedback in XGBoost, and a dataset and code framework to bootstrap research in engaging stakeholders in AI fairness. Our discussion highlights the challenges of employing user feedback in AI fairness and points the way to a future application area of interactive machine learning.
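The integration question ("how to integrate users' feedback into an AI model") can be illustrated with one common approach: instance reweighting. The Python sketch below is a toy on synthetic data, not the paper's method or dataset; the flagging rule, the weight of 3.0 and the demographic-parity metric are all our assumptions.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)   # synthetic protected attribute
y = (X[:, 0] + 0.5 * group + rng.normal(size=1000) > 0).astype(int)

def demographic_parity_gap(model, X, group):
    """Absolute difference in positive-prediction rates across groups."""
    pred = model.predict(X)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

base = XGBClassifier(n_estimators=50).fit(X, y)

# Simulated lay-user feedback: negative predictions for group 0 are
# flagged as unfair; upweight those instances and retrain.
flagged = (group == 0) & (base.predict(X) == 0)
weights = np.where(flagged, 3.0, 1.0)
retrained = XGBClassifier(n_estimators=50).fit(X, y, sample_weight=weights)

print("gap before:", demographic_parity_gap(base, X, group))
print("gap after: ", demographic_parity_gap(retrained, X, group))
```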

Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (10 August, 2023)

Speaker: Mel McKendrick

Dear all

The next hybrid GIST seminar will take place on August 10th at 13:00 and will be delivered by Dr. Mel McKendrick from Heriot-Watt University.
 
Topic: Health and care technologies: opportunities and challenges
 
The COVID-19 pandemic has highlighted weaknesses in the healthcare sector in several areas, including medical training, where medical students were ill-equipped to deal with a change to digital training, and capacity issues with a lack of trainer availability. Even prior to the pandemic, there were challenges with medical training, which lacks standardisation and is largely conducted on patients. Competency is assessed subjectively by observation, risking complications, compromising patient safety/comfort, and leading to longer recovery times and hospital stays. Moreover, there is a global shortage of surgical workforce, particularly in low- to middle-income countries. The pandemic also highlighted the mental health of medical students and the population more generally. Untreated mental health costs the UK over £118bn annually, and this is set to double over the next 20 years. Comorbid mental health, developmental, neurodegenerative or physical illnesses are issues highlighted by the WHO Mental Health Action Plan, and mental health has been included in the 2030 Agenda for Sustainable Development and the Sustainable Development Goals. Yet with these challenges come opportunities to implement technologies with simulation training and digital therapies. Key technologies will include telemedicine, sensors, wearables, smartphones, digital therapies, genotyping microarrays, neuroimaging, electronic health records, healthcare data collection, natural language processing, artificial intelligence, virtual reality, augmented reality and robotics. These emerging technologies are changing the face of healthcare, but do we really understand their potential benefits and risks? How can we effectively and safely reduce the healthcare burden through emerging technologies?

Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09
 
About Dr. Mel McKendrick:
 
Mel McKendrick is a Chartered Experimental Psychologist and Associate Professor in Psychology at Heriot-Watt University, where she leads SPECTRA Labs: a series of interconnected labs which combine psychology with technologies such as extended realities, gamification and objective sensory metrics. She is also CEO of Optomize Ltd. (attention assessment and visual feedback training for clinical skills). She is interested in how we can use simulated environments, technology and feedback to reduce anxiety and improve performance.
 
 


GIST Seminar (13 July, 2023)

Speaker: Joanna Aldhous

Dear all

The next hybrid GIST seminar will take place on July 13th at 13:00 and will be delivered by Dr. Joanna Aldhous from Edinburgh Napier University.
 
Topic: Describing the Quality of Haptic UX for Virtual Reality
 
Haptic devices are being commercialised, and a plethora of new haptics are being developed for use in VR to bring forth the holy grail of touch virtuality. But what is haptic UX? How can we describe it, measure its dimensions, or understand what a good haptic UX looks like? The research relating to these questions and some common challenges and tools to support haptic UX evaluation are discussed.

Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09
 
About Dr. Joanna Aldhous:
 
Joanna is a Postdoctoral Research Associate in the School of Computing at Edinburgh Napier University, undertaking interdisciplinary mixed methods HCI research. Joanna has been developing a wide range of human-centred digital products and services over the last two decades, starting her professional life as a web developer in the early 90s. Joanna is neurodivergent and is currently conducting research on haptic (touch) user experience (UX) evaluation for virtual reality (VR).
 
 


GIST Seminar (18 May, 2023)

Speaker: Ilyena Hirskyj-Douglas, Vilma Kankaanpaa and Jiaqi Wang

Dear all

The next hybrid GIST seminar will take place on May 18th at 13:00 and will be delivered by the Animal-Computer Interaction group: Ilyena, Vilma and Jiaqi. 
 
Title: Animal-Computer Interaction Research in Glasgow

Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (23 March, 2023)

Speaker: Ross Barker

Dear all

The next hybrid GIST seminar will take place on March 23rd at 13:00 and will be delivered by Ross Barker, a Senior Communications Officer at the University of Glasgow who leads external relations for the College of Science and Engineering.
 
Title: How to create media attention for your research. 

Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (09 March, 2023)

Speaker: Dr Sarah Völkel

Dear all
 
The next hybrid GIST seminar will take place on March 9th at 13:00 and will be delivered by Dr Sarah Völkel (personal website, Google Scholar). She will speak about her research on personality models for speech-based conversational agents, as well as her work at Google on privacy for children.
 
Sarah Völkel is a User Experience Researcher at Google in Munich, Germany. Her current research focuses on usable security, explainable AI, and personalisation for kids and teens. Before that, she did her PhD with Albrecht Schmidt and Heinrich Hussmann at LMU Munich. In her PhD, she developed methods to imbue conversational agents such as voice assistants and chatbots with a personality and examined user preferences for different personalities.
 
Title: What personality should ChatGPT have? 
 
Abstract: People subconsciously attribute personalities to conversational agents such as chatbots and voice assistants, which significantly influences further interaction. In this talk, I present methods to systematically imbue conversational agents with personalities, showing that users perceive artificial personalities differently from human ones. Furthermore, I introduce findings on which artificial personalities are best received and how individual preferences differ. The results give developers tools to create voice assistant and chatbot personalities that people actually want to use.

Location: SAWB 423 and via zoom: https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (16 February, 2023)

Speaker: Dr Theodore Zanto

Dear all,

The next hybrid GIST seminar will take place on February 16th at 13:00 and will be delivered by Theodore Zanto. Theodore is an associate professor at the University of California San Francisco, and he is visiting GIST from February 13th to February 17th. 
 
Title: Mechanisms of transfer: From digital music training to improved attention and working memory
 
Abstract: Aging has been associated with numerous cognitive declines, such as reduced attentional control and working memory. Recent research has indicated that musical training, which engages numerous cognitive abilities including attention and memory, may benefit performance of those same cognitive functions. Yet, there is limited evidence that musical training, particularly through a digital interface, may benefit cognitive aging. In this talk, I will discuss recent findings from a randomized clinical trial where older adult (aged 60-80 years) non-musicians were engaged in a digital musical rhythm training intervention. Results from this research highlight the potential for digital musical training to remediate age-related cognitive declines and elucidate the neural mechanisms that enable musical training to improve attention and working memory.
 
Speaker's bio: Dr. Zanto is an Associate Professor in Neurology at the University of California San Francisco and Director of the Neuroscape Neuroscience Division. He utilizes fMRI, EEG and non-invasive brain stimulation techniques (such as TMS & TES) to study neural mechanisms at the intersection of attention, perception, and memory. He is interested in the role of neural entrainment in cognitive control and how it may be used as a potential therapeutic, particularly in the aging population. Currently, Dr. Zanto is assessing whether select cognitive functions may be improved through neural entrainment with musical rhythms or with non-invasive rhythmic neurostimulation.

Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (09 February, 2023)

Speaker: Professor Ian McLoughlin

Dear all,

The next hybrid GIST seminar will take place on February 9th at 13:00 and will be delivered by Ian McLoughlin. Professor Ian McLoughlin (马国岭), ICT Cluster Director at the Singapore Institute of Technology (Singapore's 5th university), was previously a professor and Head of the School of Computing at the University of Kent (Medway Campus) from 2015-2019, and a professor at the University of Science and Technology of China NELSLIP lab from 2012-2015. Before that he spent 10 years at Nanyang Technological University, Singapore, and 10 years in the electronics R&D industry in New Zealand and the UK. Professor McLoughlin became a Chartered Engineer in 1998 and a Fellow of the IET in 2013. He has over 200 papers, 4 books and 13 patents in the fields of speech & audio, wireless communications and embedded systems, and has steered numerous technical innovations to successful conclusions.
 
The talk will be about Speech and Audio AI. 

Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (26 January, 2023)

Speaker: Dr Stephen Lindsay

Dear all

The next hybrid GIST seminar will take place on January 26th at 13:00 and will be delivered by Dr Stephen Lindsay. Dr Lindsay is a Lecturer in Healthcare Technologies in the School of Computing Science. He will talk about his experience from being on a funding panel. 
 
Title: Experience from being on a panel.

Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (01 December, 2022)

Speaker: Dr Brendan David-John

Dear all,

There are two GIST seminars this week: one on 29 November at 11:00, and one in the regular slot on 1 December at 13:00. Both will be hybrid, taking place in SAWB 423 and on zoom via the link below.

Below are the details for the one on 1 December, delivered by Dr. Brendan David-John from Virginia Tech. This talk will be delivered remotely by the speaker, and is titled "Providing Privacy for Eye-Tracking Data with Applications in XR".

Bio:
Dr. Brendan David-John (he/him/his) is an Assistant Professor of Computer Science at Virginia Tech. Brendan was the first Native American male to graduate with a doctorate in Computer Science from the University of Florida in 2022, and received his BS and MS from the Rochester Institute of Technology in 2017. He is from Salamanca NY, which is located on the Allegany reservation of the Seneca Nation of Indians. His personal goals include increasing the representation of Native Americans in STEM and higher education, specifically in computing. He is a proud member of the American Indian Science & Engineering Society and has been a Sequoyah Fellow since 2013. His research interests include virtual reality and eye tracking, with a primary focus on privacy and security for the future of virtual and mixed reality.

Abstract:
Eye-tracking sensors track where a user looks and are being increasingly integrated into mixed-reality devices. Although critical applications are being enabled, there are significant possibilities for violating user security and privacy expectations. There is an appreciable risk of unique user identification from eye-tracking camera images and the resulting eye movement data. Biometric identification would allow an app to connect a user’s personal ID with their work ID without needing their consent, for example.  Solutions were explored to address concerns related to the leaking of biometric features through eye-tracking data streams. Privacy mechanisms are introduced to reduce the risk of biometric recognition while still enabling applications of eye-tracking data streams. Gaze data streams can thus be made private while still allowing for applications key to the future of mixed-reality technology, such as animating virtual avatars or prediction models necessary for foveated rendering.
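As a flavour of what such a privacy mechanism can look like, here is a minimal Python sketch that perturbs a gaze stream with Gaussian noise, so that fine-grained, identity-bearing dynamics are degraded while coarse gaze position (enough for foveated rendering or avatar animation) survives. The noise model, the 1.5-degree sigma and the synthetic trace are illustrative assumptions, not the mechanisms from the talk.

```python
import numpy as np

def privatize_gaze(samples: np.ndarray, sigma_deg: float = 1.5) -> np.ndarray:
    """Add zero-mean Gaussian noise (degrees of visual angle) to an
    (N, 2) array of gaze samples. Coarse gaze position survives the
    perturbation; fine, identity-bearing dynamics are degraded."""
    noise = np.random.default_rng().normal(0.0, sigma_deg, samples.shape)
    return samples + noise

# A synthetic 100-sample gaze trace (random walk) as stand-in data.
gaze = np.cumsum(np.random.default_rng(1).normal(0, 0.2, (100, 2)), axis=0)
private = privatize_gaze(gaze)
print("mean absolute perturbation (deg):", np.abs(private - gaze).mean())
```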

 

Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (29 November, 2022)

Speaker: Nicolás Jacobo Valencia Jiménez

Dear all,

There are two GIST seminars this week: one on 1 December in the regular slot, and the one below on 29 November at 11:00, which will be delivered by Prof. Nicolás Jacobo Valencia Jiménez from the Faculty of Engineering, Universidad Santiago de Cali, Cali, Colombia.
 
Title: Engaging Children with Special Needs Through a Multisensory Environment Based on an Artificial Vision System and Serious Games.

Abstract: Using an RGB-D camera arrangement can help with the training and therapy of children with special needs (CwSN). This is demonstrated in a two-case study with a Multisensory Environment (MSE). The first case examines the effects of a game-platform-based intervention protocol on proprioception parameters in children with Down Syndrome (CwDS). The second case depicts a comprehensive robot-assisted intervention for children with autism spectrum disorder (CwASD), demonstrating the conditions under which a robot-based approach can be useful in assessing autism risk factors for diagnosis. The findings suggest that the MSE is a promising tool for use as an assistive technology with CwSN to improve physical, behavioral, and cognitive intervention.

Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (17 November, 2022)

Speaker: Professor Dr Patrick C. K. Hung

Dear all,

The next hybrid GIST seminar will take place on November 17th at 13:00 and will be delivered by Patrick C. K. Hung from the Faculty of Business and IT, Ontario Tech University, Canada.
 
The title of the talk is "Introduction of Human Robot Interaction". 

Abstract: The concept of robots, or other autonomous constructions, can be found in many different cultures dating back to ancient times. A social robot is an Internet of Things (IoT) system consisting of a physical robot component that connects to Cloud services to improve the ease and productivity of activities through networking, multimedia, and sensory technologies. Many studies have found that anthropomorphic designs of what robots are, what they can do, and how they should be understood resulted in greater user engagement in Western countries. Humanoid robots usually behave like natural social interaction partners for human users, with features such as speech, gestures, and eye-gaze, referring to the users' data and social background. However, cultural differences may influence human-robot interaction through different social norms and cultural traits. This talk will give an overview of Human-Robot Interaction (HRI) with case studies and demonstrations.

Short Bio: Patrick C. K. Hung is a Professor, Graduate Program Director of Computer Science, and Director of International Programs at the Faculty of Business and Information Technology at Ontario Tech University, Canada. Patrick worked with Boeing Research and Technology at Seattle on aviation services-related research with two U.S. patents on mobile network dynamic workflow systems. Before that, he was a Research Scientist with the Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia. Patrick is a founding member of the IEEE Technical Committee on Services Computing and IEEE Transactions on Services Computing. In addition, he is a coordinating editor of the Information Systems Frontiers. He has a Ph.D. and Master in Computer Science from Hong Kong University of Science and Technology, a Master in Management Sciences from the University of Waterloo, Canada, and a Bachelor in Computer Science from the University of New South Wales, Australia.

Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09 


GIST Seminar (03 November, 2022)

Speaker: Dr Gang Li

Dear all,

The upcoming hybrid GIST seminar will take place on November 3rd at 13:00 and will be delivered by Dr Gang Li. The talk is titled "See VR motion sickness through the lens of brain science".

Dr Li is a post-doctoral researcher in the School of Psychology and Neuroscience at the UofG, working as part of the GIST-based ViAjeRo project. He is committed to pioneering the integration of multimodal biosensing approaches with non-invasive brain stimulation techniques to understand the neural mechanisms of VR-induced motion sickness, so as to improve people’s cognitive control abilities and the utility of consumer VR.


Location: SAWB 423 and via zoom.

Kindly find the zoom link below:
https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar - Dr. Mathieu Chollet - 8 September 2022 (08 September, 2022)

Speaker: Dr. Mathieu Chollet

Dear all,
 

The upcoming hybrid GIST seminar will take place this coming Thursday, September 8th, at 13:00 and will be delivered by Mathieu Chollet, a lecturer in healthcare technologies in the School of Computing Science.

 

Location: SAWB 423 and via zoom.

 

Kindly find the zoom link below:

https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (25 August, 2022)

Speaker: Dr Xianghua Ding

Dear All,

 

The upcoming hybrid GIST seminar will take place tomorrow, the 25th of August, at 13:00 and will be delivered by Dr. Xianghua Ding, a senior lecturer in healthcare technologies (School of Computing Science).

 

 

Location: SAWB 423 and via zoom.

 

Kindly find the zoom link below:

 https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09



Applying HCI to Combat Organic Spread of Online Misinformation (11 August, 2022)

Speaker: Sameer Patil

Bio: Sameer Patil is an Associate Professor in the School of Computing at the University of Utah. Previously, he has held several academic and industry appointments, including at Indiana University Bloomington, New York University, University of Siegen, Helsinki Institute for Information Technology (HIIT), Vienna University of Economics and Business, Yahoo Research, IBM Research, and Avaya Labs Research. Sameer’s research interests lie at the intersection of Human-Computer Interaction with cybersecurity and privacy, focusing on human-centered investigations of cybersecurity across the fields of Human-Computer Interaction (HCI), Computer-Supported Collaborative Work (CSCW), and social computing. His research has been funded by the National Science Foundation (NSF), Department of Homeland Security (DHS), Google, Utah System of Higher Education (USHE), and several competitive internal awards from Indiana University. He received the NSF CAREER award in 2019. Sameer’s work has been published in top-tier conferences and journals, and he holds eight US patents related to mobile technologies. Sameer obtained a Ph.D. in Computer and Information Science from the University of California, Irvine and holds Master’s degrees in Computer Science & Engineering and Information from the University of Michigan, Ann Arbor.

 

 

Abstract: Misinformation spread via social media platforms has emerged as a prominent societal challenge. The production and spread of misinformation on these platforms have evolved from a largely bot-driven operation to one that exploits everyday actions of end users. Purely computational approaches that work reasonably well for bots can be ineffective for combating such organic spread. To address this issue, we have been investigating the application of HCI principles to design user experiences that can help users recognize questionable content and dissuade them from sharing it, thus dampening its spread. Our initial study (n = 1,512) showed that flagging news headlines with credibility indicators can reduce the intent to share the articles on social media. Notably, we found that the indicator connected to professional fact checkers was the most effective, motivating two parallel threads of follow-on research.

 

In the first thread, we studied practices of professional fact checkers to understand and address their challenges. Interviews with 19 fact checkers from 18 countries surfaced a pipeline of manual and labor-intensive practices fragmented across disparate tools that lack integration. Fact checkers reported a lack of effective dissemination mechanisms that prevents fact-checking outcomes from fully achieving their potential impact. In the second thread, we explored helping users learn to seek fact checks for questionable content via a game-based approach, analyzing game analytics of more than 8,500 players interacting with 120,000 articles over a period of 19 months. As players interacted with more articles, they significantly improved their skills in spotting mainstream content, thus confirming the utility of the game for improving news literacy. At the same time, we found that exposure to social engagement signals (i.e., Likes and Shares) increased player vulnerability to low-credibility information.

 

We are applying the insights from these research efforts to design a human-in-the-loop platform driven by computation and automation to improve the effectiveness, efficiency, and scale of fact-checking work and to help its broad dissemination to end users.

 


GIST talk: From personal informatics to personal analytics: intelligent interactive systems for personal health (14 July, 2022)

Speaker: Lena Mamykina

Abstract:

New advances in computational modeling and AI can produce inferences and predictions with unprecedented accuracy, often surpassing that of human experts. These capabilities enable a new generation of intelligent interactive systems for health and wellness. However, there remain many open questions as to how to harness the new power of computational modeling and AI to help individuals from diverse communities improve their health. In my research, I investigate these questions in the context of self-management of chronic diseases such as type 2 diabetes. In this talk I will discuss several ongoing research initiatives that strive to facilitate reflection and learning, provide in-the-moment decision support, and guide individuals’ actions.

 

Bio:

Dr. Lena Mamykina is an Associate Professor of Biomedical Informatics at the Department of Biomedical Informatics at Columbia University. Dr. Mamykina’s research resides in the areas of Biomedical Informatics, Human-Computer Interaction, Ubiquitous and Pervasive Computing, and Computer-Supported Collaborative Work. Her broad research interests include individual and collective cognition, sensemaking, and problem-solving in the context of health and wellness. She is specifically interested in novel interactive solutions that take advantage of new streams of personal and social data and novel data science capabilities. Dr. Mamykina received her B.S. in Computer Science from the Ukrainian State University of Maritime Technology, M.S. in Human Computer Interaction from the Georgia Institute of Technology, Ph.D. in Human-Centered Computing from the Georgia Institute of Technology, and M.A. in Biomedical Informatics from Columbia University. Her dissertation work at Georgia Tech focused on facilitating reflection and learning in context of diabetes management with mobile and ubiquitous computing. Prior to joining DBMI as a faculty member, she completed a National Library of Medicine Post-Doctoral Fellowship at the department. 

 

The talk will take place at

https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09

 


Update your device (12 July, 2022)

Speaker: Adam Jenkins

 

Who? Adam Jenkins is a Usable Security and Privacy researcher, primarily focused on professional IT workers and their security decisions. Currently they work on the PhishED project, where they are designing templates for automated responses to reported phishing emails. Before their PhD, Adam studied Computer Science at the University of Edinburgh, completing their MInf in 2017.

 

Abstract:

"Update your devices", is well known security advice in both academia and industry. Yet there exists very little research into the process and the system administrators tasked with sourcing, testing, applying, and troubleshooting of updates for computing systems serving a large number of end-users. These system administrators (sysadmins) play a critical role in Information security management (ISM), with their decisions impacting the security of potentially millions of end-users. However, these decisions involve complex risk assessments on an update by update basis, as although patches can remove potential software vulnerabilities, they may also introduce new errors to systems that negatively impact their organisation.

In this talk I will discuss the work done as part of my thesis. I will present one of the first attempts at studying this user group and their impact on the patching process. To do so, I primarily focus on sysadmins' Online Communities of Practice, which provide admins with up-to-date patching information, such as known issues or related vulnerabilities. To begin, I provide an in-depth qualitative artifact analysis of emails from a prominent patching-orientated mailing list: http://PatchManagement.org. The analysis identifies several different types of information that are shared and requested by community members throughout their patching schedules, including requests for help troubleshooting patching errors and community-generated lists of security patches to prioritise. I complement this work by constructing a descriptive case study detailing distinct communities' collaborative information-gathering and problem-solving behaviours following the release of two security-critical Microsoft patches. By detailing this online life cycle I find that these communities provide sysadmins with a dynamic, centralised source for their patching information, and that these communities share information often sourced from the work of other communities and their respective members. To conclude, I provide a survey of sysadmins detailing the prominence of patching behaviours at each stage of the patching process, and balance out the previous observational works with self-reported data from sysadmins from these online communities.

This work is one of the first explorations into the types of information system administrators share with each other online during patching, as well as the challenges they face and the solutions they use, such as forming these Communities of Practice. Patching, although it appears very simple on the surface, is a complicated task requiring a number of socio-technical decisions to be considered before "just applying" the update. I present the lessons learned from our studies and indicate potential routes for future research within this space.


GIST Seminar (30 June, 2022)

Speaker: Dr. Marwa Mahmoud

The upcoming hybrid GIST seminar will take place on 30 June 2022 at 13:00 and will be delivered by Dr. Marwa Mahmoud, a Lecturer in Socially Intelligent Technologies (School of Computing Science).

 

 

Location: SAWB 423 and via Zoom.

 

Kindly find the Zoom link below:

https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (16 June, 2022)

Speaker: Dr. Fahim Kawsar

The upcoming hybrid GIST seminar will take place this coming Thursday at 13:00 and will be delivered by Fahim Kawsar, a Professor of Mobile Systems in the School of Computing Science.

 

Location: SAWB 423 and via Zoom.

 

Kindly find the Zoom link below:

https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09


GIST Seminar (19 May, 2022)

Speaker: Dr. Tanaya Guha


GIST seminar (21 April, 2022)

Speaker: Stephen Lindsay


GIST Seminar - Putting the human back into Explainable AI (XAI) (07 April, 2022)

Speaker: Dr. Simone Stumpf

Abstract:
We are currently on the cusp of a revolution in smart technologies which are being integrated into everyday life. However, stakeholders and users of these systems need to understand how these systems work so that they are trusted appropriately and used effectively. Explainable AI (XAI) has made great strides towards making these systems transparent but much of this work has neglected a human-centric approach. In this talk I will cover some of my own work in this area, and the challenges to be overcome that point the way for future research in this area.

Bio:
Dr. Simone Stumpf recently joined the GIST section at the University of Glasgow, UK, as a Reader in Responsible and Interactive AI. She has a long-standing research focus on user interactions with machine learning systems. Her research includes self-management systems for people living with long-term conditions, developing teachable object recognisers for people who are blind or have low vision, and investigating AI fairness. Her work has contributed to shaping the field of Explainable AI (XAI) through the Explanatory Debugging approach for interactive machine learning, providing design principles for enabling better human-computer interaction, and investigating the effects of greater transparency. The prime aim of her work is to empower all users to use intelligent systems effectively.


GIST Seminar on Animal Computer Interaction (24 March, 2022)

Speaker: Dr Ilyena Hirskyj-Douglas


GIST seminar (10 March, 2022)

Speaker: Professor Alessandro Vinciarelli

Dear all,

We are reviving the GIST seminar! The first talk will be hybrid: it will be delivered by Alessandro and will take place this week. Join via Zoom: https://uofglasgow.zoom.us/j/96540497220?pwd=TlROd0srSGZLUzZ3QXhQV2VDOTVhZz09

Meeting ID: 965 4049 7220 
Passcode: 517624


A Practical Primer on User Research in Privacy & Security (17 March, 2021)

Speaker: Tobi Seitz

Tobi is a Senior UX Researcher at Google Munich, where he works in the Google Account and Password Manager teams. Here, he’s responsible for qualitative user research programs on privacy, safety, and security topics spanning multiple Google products. Before joining Google, he did his PhD at the Ludwig-Maximilians University of Munich, looking at ways to help people create and manage passwords.

Tobi's talk is part of the Human-Centred Security course, but we would like to invite students and colleagues from across the School and University to join us for this event. For any questions, please get in touch with Jamie Ferguson at jamie.f.ferguson@glasgow.ac.uk

This talk will take place via Zoom and the info to join can be found below (you must be logged in to view the Zoom meeting information):

https://uofglasgow.zoom.us/j/95200127617?pwd=VnN3OHoyaHYxWUJZeWM5SklObmNrUT09

Meeting ID: 952 0012 7617
Passcode: 340113


Being secure: some myths of privacy and personal security (09 October, 2019)

Speaker: Professor Alan Dix

Talk by Professor Alan Dix, Director of the Computation Foundry, Swansea. 

Abstract: It is not uncommon to see privacy regarded as a form of personal secrecy managed by restricting information egress: for many this has been the default response to GDPR.  When considering state or corporate secrecy these models have some validity, or at least hold traction: for example, levels of security clearance, or physical control of documents.  However, for people the value and meaning of data is often more critical than raw volume.  It may be the case that less information is more problematic and damaging than more information; we may confide in strangers things that we would not say to colleagues or friends; and even the use of anonymous aggregated data may be counter to the interests of those about whom it is collected.  These things are all obvious when considered explicitly, and even for corporate and governmental entities issues of reputational damage share many features with personal privacy.  Yet the myths seem to persist in many technical ‘solutions’ to privacy and in day-to-day actions, not least the destruction of documents that contributed to the Windrush scandal.  As technologists we need to understand the non-monotonic nature of privacy and offer support that is less about restrictions on data, but more about maintaining provenance, understanding and explaining the implications of data, and creating tools and mechanisms to allow long-term personal control.


Modeling Narrative Intelligence to Support Adaptive Virtual Environments (08 August, 2019)

Speaker: Rogelio Cardona-Rivera

Abstract:

Interactive narratives are used for an ever-expanding array of purposes: educational lessons, training simulations, and even organizational behaviors have had narratives woven around them because these are made more compelling in a dramatic framing. Despite their ubiquity, they remain time-consuming, expensive, and technically challenging to engineer. The automated creation of narrative content, otherwise known as procedural narrative generation, stands poised to ameliorate this challenge. However, current artificial intelligence techniques remain agnostic of the user’s narratively-oriented cognitive faculties, which are important for the effective design of interactive narrative experiences. In this talk, I will present my approach to developing intelligent systems that reify a user’s cognition to inform the automated construction of interactive stories. These systems model our human narrative intelligence, and advance a future science of interactive narrative design. My approach will be presented in the context of a specific interactive narrative phenomenon: the perception of narrative affordances, which centers on explaining how users imagine themselves taking actions in an unfolding narrative virtual environment.

Biography:

Dr Rogelio E. Cardona-Rivera is an Assistant Professor in the School of Computing and the Entertainment Arts and Engineering Program at the University of Utah, where he directs the Laboratory for Quantitative Experience Design. Alongside his students, he researches technologies to improve and define narrative intelligence through cognitive science, with an emphasis on artificial intelligence and cognitive psychology. He received his PhD and MSc in Computer Science from North Carolina State University, and his BSc in Computer Engineering from the University of Puerto Rico at Mayagüez. Rogelio has published at diverse, high-impact venues in and around intelligent narrative technologies, and his work has been recognized with a Best Paper Award at the International Conference on Interactive Digital Storytelling (ICIDS) in 2012, a Best Student Paper on a Cognitive Science Topic at the Workshop on Computational Models of Narrative in 2012, and an Honourable Mention for Best Paper at the ACM Computer-Human Interaction Conference in 2016. In 2017, he was recognized as a “New and Future Educator in AI” by the Association for the Advancement of Artificial Intelligence. He has served on numerous program committees, co-chaired the Workshops at the 2017 AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, was an Invited Keynote at the 2018 ACM SIGGRAPH Conference on Motion, Interaction and Games, and will co-chair the program of the 2019 ICIDS. Rogelio has interned as a computational narratologist at Sandia National Laboratories and Disney Research, and his research is supported by IARPA, the U.S. Department of Energy, and the U.S. National GEM Consortium.


Design of Haptic Augmented Reality (21 March, 2019)

Speaker: Max Pfeiffer

Abstract:

Our environment is becoming more and more augmented by digital systems. They support us in our everyday life, extend our abilities and help us to overcome shortcomings. Such systems mainly augment humans' visual and auditory senses. Force feedback technology is often costly and bulky; therefore the haptic sense, especially the kinesthetic sense (or sense of force), is under-investigated. However, the rise of haptic feedback through electrical muscle stimulation (EMS) in human-computer interaction has made force feedback devices cheap and wearable. This technology currently has many advantages, but it also comes with challenges, especially when designing haptic feedback to augment our reality. This talk will give you an overview of ongoing work, facts about haptic feedback through EMS, and prototyping with EMS.

 

Biography:

Max Pfeiffer is an associate researcher of the Situated Computing and Interaction Lab at the University of Münster, Germany. His research is in human-computer interaction, including situated, ubiquitous, and wearable computing with a focus on haptic feedback and assistive technology.


Interaction design for emergent users - Leveraging digital technologies to solve problems of developing economies (28 February, 2019)

Speaker: Anirudha Joshi

Abstract:

In HCI courses, we learn to do user-centred design. We strive to make sure that users will find our products useful, usable and even desirable. But we typically design for the "traditional" users. For example, we may learn to use metaphors, so that users can apply their real-life experiences while using new products. But what metaphor do we use when our user will be opening her first bank account through an app? How do we provide large amounts of information to someone who does not know how to search or navigate websites, or who cannot read? How can one locate a contact on the phone without typing? In the Interaction Design for Indian Needs group at IIT Bombay, we have been exploring the design of interactive products for "emergent" users of technology. I will talk about the context of an emergent user, and how information and communication technologies reached them. I will show examples of designs for an Indian language keyboard, a contact application, a rural enterprise project, a system to support treatment of people living with HIV, and an aid for adult literacy classes. Not all of these designs are highly successful. The technologies that I talk about might not be very new. But we learnt a lot while designing these products. Through these examples I will try to explain how we design for emergent users, so that we can design for, and with, them.

 

Anirudha Joshi is a professor in the interaction design stream at the IDC School of Design, IIT Bombay, though currently he is on a sabbatical, visiting universities in the UK. He specialises in the design of interactive products for emergent users in developing economies. He has worked in diverse domains including healthcare, literacy, Indian language text input, banking, education, industrial equipment, and FMCG packaging. Anirudha also works in the area of integrating HCI with software engineering. Anirudha is active in many HCI communities. He has played various roles in conferences including India HCI, INTERACT and CHI. He is the founding director of the HCI Professionals Association of India. He represents India on IFIP TC13. He is the Liaison for India for the ACM SIGCHI Asian Development Committee and the VP Finance of the ACM SIGCHI Executive Committee. You can find more about him here: http://www.idc.iitb.ac.in/~anirudha/aboutmeformal.htm


Realtime HandPose Recognition (07 February, 2019)

Speaker: Francesco Camastra

Abstract:

HandPose recognition involves recognising the shape of the hand and is a relevant branch of gesture recognition. HandPose recognition must be performed in real time in order to be used effectively in applicative domains, e.g., Human-Computer Interaction (HCI), sign language interpreters, and Human-Machine Interfaces (HMI) for disabled people. The seminar will discuss how to construct effective real-time handpose recognizers using either properly designed devices (e.g., data gloves) or (RGB and/or RGB-depth) cameras.

 

Bio:

Francesco Camastra received his Ph.D. in Computer Science from the University of Genova in 2004. Since February 2006 he has been with University "Parthenope" of Naples, where, as an associate professor in Computer Science, he now teaches Algorithms and Data Structures I, Virtual Reality and Multimodal Machine Learning. He was the recipient of the Eduardo R. Caianiello Award 2005 for the best Italian Ph.D. thesis on neural networks. He was the recipient of the PR Award 2008 (with M. Filippone, F. Masulli, S. Rovetta) as coauthor of the best paper published in Pattern Recognition that year. He was included in the Top Reviewers of Pattern Recognition Letters in the period 2008-2012. He has published about 70 papers in peer-reviewed journals and conference proceedings, and one book. He has about 2500 citations on Google Scholar. He has been on the editorial board of two Scopus-indexed journals. He has served as a reviewer for more than 40 journals.

He is senior member of the IEEE and IEEE Evolutionary Computation Society. His research interests are: Machine Learning, Kernel Methods, Manifold Learning, Gesture Recognition, Time Series Prediction with Missing Data.


Using haptic rhythms and multilimb wearable metronomes for gait rehabilitation of hemiparetic stroke and brain injury survivors (24 January, 2019)

Speaker: Theodoros Georgiou

Abstract
Rhythm, brain, and the body are closely linked. Humans can synchronise their movement to auditory rhythms with little apparent effort. This is demonstrated through the widespread inclination to spontaneously move to music, either by tapping, nodding, or in more committed cases, dancing. Current research has shown that walking to a steady audio rhythm can lead to improvements in various aspects of gait and have significant benefits for the gait rehabilitation of people suffering from hemiparetic gait. This talk will mainly present work I conducted as part of my PhD (successfully completed last year - 2018) looking at an alternative approach to rhythm-based gait rehabilitation, where steady haptic rhythms are used instead of audio. To investigate this approach, a multi-limb metronome capable of delivering a steady, isochronous haptic rhythm to alternating legs was developed and purpose-built for gait rehabilitation of stroke survivors, together with appropriate software for monitoring and assessing gait.

Bio

Theodoros Georgiou has an Honours degree in Computer Science from the University of St Andrews, an MSc in Human Centred Interactive Technologies and an MSc by Research in Computer Science from the University of York, and a PhD from the Open University. He is currently a Research Associate in the School of Mathematical and Computer Sciences at Heriot-Watt University, where his research focuses on Human Computer Interaction and, more recently, Human Robot Interaction. His research interests include haptic technologies, wearables, and wearable sensors.
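(For concreteness, the sketch below shows the kind of alternating, isochronous cueing such a metronome delivers. It is a hypothetical Python illustration, not the project's software; send_pulse stands in for whatever actuator API the real device uses.)

# Hypothetical sketch: cue alternating steps at a fixed cadence.
import time

def send_pulse(leg: str, duration_s: float = 0.1) -> None:
    # Stand-in for a real actuator call; here we just log the cue.
    print(f"{time.monotonic():.2f}s  pulse -> {leg}")

def haptic_metronome(cadence_steps_per_min: float, n_steps: int) -> None:
    """Deliver n_steps isochronous cues, alternating left/right leg."""
    interval = 60.0 / cadence_steps_per_min  # seconds between steps
    legs = ("left", "right")
    for step in range(n_steps):
        send_pulse(legs[step % 2])
        time.sleep(interval)

haptic_metronome(cadence_steps_per_min=100, n_steps=8)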


Making Diabetes Education Interactive: Tangible Educational Toys for Children with Type-1 Diabetes. (13 December, 2018)

Speaker: Babis (Charalampos) Kyfonidis

Abstract:

Younger children (under 9 years) with type-1 diabetes are often very passive in the management of their condition and can face difficulties in accessing and understanding basic diabetes-related information. This can make transitioning to self-management in later years very challenging. Previous research has mostly focused on educational interventions for older children. To create an educational tool which can support the diabetes education of younger children, we conducted a multiphase and multi-stakeholder user-centred design process. The result is an interactive tool that illustrates diabetes concepts in an age-appropriate way with the use of tangible toys. The tool was evaluated in a paediatric diabetes clinic with clinicians, children and parents, and was found to be engaging, acceptable and effective.

Bio:


Charalampos (Babis) Kyfonidis is a research assistant at the University of Glasgow. His background is in Electronic & Computer Engineering, Computing Science (MSc) and Human-Computer Interaction (PhD). His research interests include Serious Games, Tangible Interaction, Wearables, the Maker Movement, Programming Education and Digital Cultural Heritage.


Values in Computing (29 November, 2018)

Speaker: Maria Angela Ferrario

Abstract:
A show & tell and open discussion bringing to the table some of the tools and emerging findings from Values in Computing (ViC) research at the School of Computing and Communications, Lancaster University, UK, focusing on the study of human values in software production. The session will touch on issues connecting the tech industry, academic research and current affairs. Existential questions, such as what it means to be a computing professional nowadays, as well as the metaphysical roots of the binary system, may also be thrown into the mix.
http://www.valuesincomputing.org/ 

Bio:
Maria Angela Ferrario is a Lecturer at the School of Computing and Communications (SCC), Lancaster University. Her professional and academic background includes intelligent on-line systems, multimedia systems design, social psychology and philosophy. She works at the intersection of  software engineering (SE) and human computer interaction (HCI). Her research adopts agile and participatory methods to technology development, investigates human values in computing, and the role of digital technology in society.
http://www.lancaster.ac.uk/people-profiles/maria-angela-ferrario 


Hot-Blooded Automatons: Computationally modeling emotion (22 November, 2018)

Speaker: Stacy Marsella

Abstract:

A large and growing body of work in psychology  has documented the functional role of emotions in human social and cognitive behavior. This has led to a significant growth in research on computational models of human emotional processes, driven by several concerns. There is increasing desire to use computational methods to model and study human emotional and social processes. Findings on the role that emotions play in human behavior have also motivated artificial intelligence and robotics research to explore whether modeling emotion processes can lead to more intelligent, flexible and capable systems. Further, as research has revealed the deep role that emotion and its expression play in human social interaction, researchers have proposed that more effective human computer interaction can be realized if the interaction is mediated both by a model of the user’s emotional state as well as by the expression of emotions.

In this talk, I will discuss the computational modeling of emotions in relation to cognitive and behavioral processes. The discussion will be motivated by illustrating the role of emotions in human behavior and by particular application areas, including large scale simulations of responses to natural disaster and virtual humans. Virtual humans are autonomous virtual characters that are designed to act like humans and socially interact with them in shared virtual environments, much as humans interact face-to-face with other humans. The simulation of emotions emerged as a key challenge for virtual human architectures, as researchers have sought to endow virtual characters with emotion to facilitate their social interaction with human users.

Bio:

Stacy C. Marsella is a Professor in Glasgow University’s Institute of Neuroscience and Psychology. He works on the computational modeling of cognition, emotion and social behavior, both as a basic research methodology in the study of human behavior and in the use of these computational models in applications. His current research spans the interplay of emotion and cognition in decision-making, modeling the influence that beliefs about the mental processes of others have on social interaction (Theory of Mind) and the role of nonverbal behavior in face-to-face interaction. Of special interest is the application of these models to the design of social simulations and virtual humans. Virtual humans are software-based autonomous agents that look human and can interact with humans using spoken dialog. As part of that work, he has spearheaded international efforts to establish standards for virtual human technology and has released software in the public domain to lower the barrier of entry to doing virtual human research and crafting virtual human applications. This software is currently being downloaded at a rate of over 10,000 downloads per month. He received the Association for Computing Machinery's (ACM/SIGART) 2010 Autonomous Agents Research Award for research influencing the field of autonomous agents.
 


Future of GIST (15 November, 2018)

Speaker: Alessandro Vinciarelli

Alessandro will give a short talk about his plans as a potential new leader of GIST, and answer questions.


Privacy-respecting Ubiquitous Systems (18 October, 2018)

Speaker: Mohamed Khamis

Abstract

Ubiquitous technologies are continuously becoming more powerful and more affordable. While advances in computational power, sensors and displays can bring a myriad of benefits to the user, these same technologies not only have serious implications for privacy and security, but can even be maliciously exploited against us. For example, thermal cameras are becoming cheaper and easier to integrate into smartphones. We recently found that thermal imaging can reveal 100% of PINs entered on smartphones up to 30 seconds after they have been entered. The ubiquity of smartphones can itself be a threat to privacy; with personal data being accessible essentially everywhere, sensitive information can easily become subject to prying eyes. There is a significant increase in the number of novel platforms in which users need to perform secure transactions (e.g., payments in VR stores), yet we still use technologies from the 1960s to protect their security. These developments underline the importance of: 1) understanding threats to privacy and security in the age of ubiquitous computing, 2) developing methods to secure access to our private data, and 3) understanding the different needs of different user groups rather than designing for security with a one-size-fits-all mindset. I will talk about our work in each of the three areas and discuss the challenges, opportunities, and directions for future work and collaboration.

Bio

Mohamed Khamis has been a Lecturer in the School of Computing Science since September 2018. He received his PhD from Ludwig Maximilian University of Munich (LMU) in Germany. In addition to the LMU, he also worked at the German Research Center for Artificial Intelligence and the German University in Cairo. Mohamed has worked on a diverse set of topics in Human-Computer Interaction and usable security and privacy. His main research focus now is at the crossroads of user privacy and ubiquitous computing. His work focuses on understanding threats to privacy that are caused by ubiquitous technologies through empirical methods, as well as inventing novel systems for protecting security. Mohamed’s work has been published at CHI, IMWUT, UIST, MobileHCI, ICMI and other venues. He is a member of the program committee of multiple conferences such as CHI, PerDis, and MUM, and the general co-chair for PerDis 2019.


Reading a Machine's Mind: Partner Modelling and its role in Human-Machine Dialogue Interactions (04 October, 2018)

Speaker: Benjamin Cowan

Abstract:

Through intelligent personal assistants like Siri, Google Home and Amazon Alexa, speech is set to become a mainstream interaction modality. These types of interactions fundamentally rely on a spoken dialogue between machine and human to complete tasks, which leaves the possibility that psychological mechanisms that influence human-human dialogue are also at play in human-machine dialogue interactions. My talk will focus specifically on my work into the role of people’s beliefs about machine partners’ abilities (their partner models) in this dialogue context, specifically what influences these beliefs and how these affect language production in interaction.

Bio:

Dr Benjamin R Cowan is an Assistant Professor at UCD's School of Information & Communication Studies.  He completed his undergraduate studies in Psychology & Business Studies (2006) as well as his PhD in Usability Engineering (2011) at the University of Edinburgh. He also previously held the role of Research Fellow at the School of Computer Science’s HCI Centre at the University of Birmingham. His research lies at the juncture between psychology, human-computer interaction and speech technology, investigating how the design of speech interfaces impacts user experience and user language choices in interaction.


Evaluating Child Engagement Levels in Digital Stories (14 June, 2018)

Speaker: Rui Huan

Abstract

The story-stem approach with traditional storytelling has been widely used by child psychologists investigating children’s social understanding of family/peer relationships and has made significant contributions to attachment theory. A key step in mobilising children's mental representations of attachment is to bring them into a deep engagement with a story. Previous studies have shown that children are more engaged in the process of telling stories when technology is used. My ongoing research therefore applies the story-stem approach to digital storytelling and detects whether a child is engaged in the digital storytelling process using the child's observed facial expressions. We hope that children more easily get absorbed and imaginatively caught up in the digital story, so that they are able to complete the story in spontaneous play. Psychologists can then investigate children's attachment categories based on the completed stories and children's behaviours.

Biography

I am a PhD student in the Multimodal Interaction Group at the University of Glasgow. My research focuses on measuring the levels of engagement of people from different age groups (children and young adults) in the digital storytelling process and specifically on understanding the media effect of digital storytelling on children’s engagement levels. I also finished my master’s degree in Information Technology at the University of Glasgow, working on pressure-based interaction on mobile devices.


Designing Multimodal User Interfaces to Improve Safety for Child Cyclists (14 June, 2018)

Speaker: Andrii Matviienko

Abstract:

Child cyclists are at greater risk of getting into car-to-cyclist accidents than adults. This is in part due to developmental differences in motor and perceptual-motor abilities between children and adults. To decrease the number of accidents for child cyclists, we augment a bicycle and a helmet with multimodal feedback in a non-distracting and understandable way. The focus lies on the perception and representation of warning signals for collision avoidance, navigation instructions for safe routing, and lane-keeping recommendations to correct cycling behaviour on the go.

Biography:

Andrii is a PhD researcher in the Media Informatics and Multimedia Systems Group at the Department of Computer Science at the University of Oldenburg, supervised by Susanne Boll. He previously worked on tabletop and eye-gaze interaction techniques together with Jan Borchers and Johannes Schöning at RWTH Aachen University. Later, he worked on projects related to ambient light displays for user interfaces in cars and tangible user interfaces to increase connectedness between remote work groups via implicit cues, funded by the German Ministry of Education and Research and the German Research Foundation respectively. Currently he is working on multimodal user interfaces for child cyclists to increase safety on the road. His approach focuses on augmenting bicycles and helmets, and on evaluations in a bicycle simulator.


Interactive Visualizations for Data Exploration and Explanation (07 June, 2018)

Speaker: Benjamin Bach

Abstract
This talk presents a set of interactive visualizations for data exploration and recent work on how to communicate insights through data-driven stories, with examples ranging from tools for exploring dynamic networks to data comics (http://datacomics.net). The questions raised by the talk concern effective ways to engage a larger audience in understanding, learning, and using visualizations for exploration and communication. As visualizations become more and more commonplace and familiar to people, more and more aspects of our daily lives can potentially be enriched with information presented visually. Eventually, I want to raise the question of which role novel technology such as Augmented and Virtual Reality can play in exploring, communicating, and interacting with visualizations.
 
 
Biography
Benjamin is a Lecturer in Design Informatics and Visualization at the University of Edinburgh. His research designs and investigates interactive information visualizations to help people explore, present, and understand information hidden in data. He focuses on the visualization of dynamic networks (e.g., social networks, brain connectivity networks) and temporal data (e.g., changes in videos and Wikipedia articles, events on timelines), comics for storytelling with visualizations, and visualization and interaction in Augmented and Virtual Reality. Before joining the University of Edinburgh in 2017, Benjamin worked as a postdoc at Harvard University and Monash University, as well as at the Microsoft Research-Inria Joint Centre. He was a visiting researcher at the University of Washington and Microsoft Research in 2015. He obtained his PhD in 2014 from the Université Paris-Sud, where he worked in the Aviz group at Inria.


Designing multisensory technology with and for people living with visual impairments (24 May, 2018)

Speaker: Oussama Metatla

Abstract

Involving people in the process of designing technology that affects them is now a well established component of HCI research and practice. However, as with many forms of participation in decision-making in society, people living with visual impairments have had more limited opportunities to influence technology design across a variety of domains. A number of factors contribute to this, including that many design methods rely on visual techniques to facilitate participation and the expression and communication of design ideas. Also, while using visual techniques to express ideas for designing graphical interfaces is appropriate, it is harder to use them to articulate the design of, say, sonic or haptic artefacts, which are typical alternative modalities of interaction for people living with visual impairments. In this talk, I will outline our experience of engaging with people living with visual impairments and people with mixed visual abilities, where we adapted participatory design methods in order to jointly create meaningful technology, and describe some resulting research investigations that such engagement opened up in the areas of multisensory and crossmodal interaction design.

Biography

Oussama Metatla is an EPSRC Research Fellow at the Department of Computer Science, University of Bristol, where he currently leads the CRITICAL project, which investigates inclusive education technologies for children with mixed visual abilities in mainstream schools. His research interests include designing with and for people living with visual impairments and investigating multisensory user experiences with interactive technology. He received his PhD in 2011 from Queen Mary University of London for a thesis exploring and characterising the use of sound to support non-visual collaboration. Following this, he was a Researcher Co-Investigator on two EPSRC projects, Crossmodal Collaborative Interfaces and Design Patterns for Inclusive Collaboration, also at QMUL, and an Associate Lecturer at Oxford Brookes University, before being awarded an EPSRC Early Career Fellowship hosted at the University of Bristol.


Acoustic Levitation: Recent Improvements, DIY Devices, and Applications in Display Technologies (17 May, 2018)

Speaker: Asier Marzo

Abstract

Acoustic Tweezers use sound radiation forces to trap and manipulate samples. They provide unique advantages such as high trapping force, support of numerous sample materials and operation in various media. Also, the available range of sound frequencies enables applications from the micrometre to the centimetre scale.

Despite the advantages of Acoustic Tweezers, their progress has always lagged behind that of Optical Tweezers. In this talk, I will present recent advancements that have reduced the gap between acoustic and optical trapping, i.e. single-beam, wavelength-scale, and multi-particle acoustic trapping. Additionally, I will introduce DIY levitators that everyone can build at home. Finally, I will showcase some specific applications of acoustic levitation in display technologies.
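(As background to the trapping mentioned above, the force on a particle much smaller than the wavelength is commonly modelled via the Gor'kov potential; this is a standard acoustics result rather than anything specific to the talk.)

U = 2\pi R^3 \left( \frac{f_1 \langle p^2 \rangle}{3 \rho_0 c_0^2} - \frac{f_2 \rho_0 \langle v^2 \rangle}{2} \right), \qquad
\mathbf{F} = -\nabla U, \qquad
f_1 = 1 - \frac{\rho_0 c_0^2}{\rho_p c_p^2}, \qquad
f_2 = \frac{2(\rho_p - \rho_0)}{2\rho_p + \rho_0}

Here R is the particle radius, rho_0 and c_0 are the density and speed of sound of the medium, rho_p and c_p those of the particle, and <p^2> and <v^2> are the mean-square acoustic pressure and particle velocity at the particle's position. Particles collect at minima of U, which for dense, rigid beads in air lie near the pressure nodes of a standing wave; this is what such levitators exploit.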

Biography

During his PhD, Asier conducted research in Serious Games, Augmented Reality and Acoustic Levitation. He has worked as a software engineer, videogame developer and programming teacher. Currently, he is funded by the EPSRC and his research is focused on using sound to manipulate particles: to manipulate particles such as clots or kidney stones from the exterior of the body without any incision, or to levitate hundreds of physical pixels that compose a 3D object. His background in computer science is a vital tool for controlling phased arrays of hundreds of elements and visualizing invisible fields. One of his main objectives is to make devices and techniques for manipulating particles affordable and open for everyone.


Body Ownership Illusions and Sense of Agency in Immersive Virtual Reality (10 May, 2018)

Speaker: Sofia Seinfeld Tarafa

Abstract

Immersive Virtual Reality based on the use of a head-tracked stereo Head Mounted Display (HMD) and a full body motion capture system allows the user to see his/her real body replaced by a virtual one in a spatially congruent way from a first person perspective. Our studies demonstrate that sensorimotor correlations can enhance the sense of agency over the virtual body by making participants see their virtual body move in synchrony with their own movements. In several studies we have found that it is possible to experience a full body ownership illusion over an extremely dissimilar body, such as that of a child, a body with different skin-colour, or a body which represents a female victim of domestic violence. In this talk I will review our recent research exploring the fundamental principles of body ownership illusions and the sense of agency using immersive virtual reality, as well as discuss the impact that virtual embodiment can have on social cognition.

Biography

Sofia Seinfeld is a postdoctoral researcher at the University of Bayreuth, where she works on the EU-funded project Levitate. She earned her PhD in Immersive Virtual Reality and Clinical Psychology under the supervision of Prof. Mavi Sanchez-Vives and Prof. Mel Slater at the Event Lab at the University of Barcelona. During her PhD studies, Sofia researched the potential use of body ownership illusions and virtual reality as a tool to enhance empathy in offenders. Her main research interests focus on body ownership illusions, the sense of agency, multisensory integration, and human-computer interaction.


New projects in human-data interaction and information visualisation (03 May, 2018)

Speaker: Matthew Chalmers

Abstract

I’ve got some new projects starting up, and I thought it might be good to spread the word about them and to talk about how they fit in with other current/potential work in SoCS.
 
The first is an EPSRC ‘Network Plus’ on human-data interaction, related to systems and practices that allow people to understand and have useful agency with regard to what happens with their data — especially as it gets shared with and used by others. I’ll lead it, with co-investigators Ewa Luger (U. Edinburgh), Atau Tanaka (Goldsmiths), Hamed Haddadi (Imperial College London) & Elvira Perez Vallejos (U. Nottingham). The project aims to guide the realisation of system design principles that are productive, and yet fit with the ethics and values acceptable to wider society. It will (a) develop and sustain a collaborative, cross-sectoral community under the banner of Human Data Interaction, (b) develop a portfolio of system design projects addressing underexplored aspects of the DE, (c) create a cross-sectoral interdisciplinary synthesis of research under the HDI banner, (d) conceptually develop and flesh out the HDI framework, (e) create a suite of policy and public-facing case studies, papers, prototypes and educational materials, and (f) develop a set of core guidelines intended to inform the design of human-facing data-driven systems. It’s £1M… but we will give out 2/3 of that as we fund other people to do HDI research projects over the next 3 years.

The second project harks back to ancient work of mine on fast ‘spring models’ — non-linear algorithms for dimensionality reduction. Alistair Morrison wrote this proposal with me, and will come back to work on it. Its aim is to combine spring models, which are well established in infovis, with a more recent technique popular in the ML community, stochastic neighbour embedding; the best-known variant of the latter is called t-SNE. The two approaches have quite different models, metrics, strengths and weaknesses, but we aim to find good ways to speed up and scale up t-SNE by stealing ideas from spring models — as t-SNE really has some serious flaws that ought to be addressed (we feel!). We have a year and £100K to try to do that.
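(For readers unfamiliar with the two techniques, the sketch below contrasts them on toy data. It is purely illustrative rather than the project's code, and assumes numpy and scikit-learn are available; spring_model_step is a naive, made-up variant of a force-directed layout.)

import numpy as np
from sklearn.manifold import TSNE

def spring_model_step(high_d, low_d, lr=0.05, sample=10):
    """One iteration of a naive spring model: each point is nudged so that
    its low-dimensional distances to a random sample of other points better
    match the corresponding high-dimensional distances."""
    n = high_d.shape[0]
    for i in range(n):
        for j in np.random.choice(n, size=sample, replace=False):
            if i == j:
                continue
            d_high = np.linalg.norm(high_d[i] - high_d[j])
            diff = low_d[i] - low_d[j]
            d_low = np.linalg.norm(diff) + 1e-9
            # Spring force proportional to the distance error.
            low_d[i] -= lr * (d_low - d_high) * diff / d_low
    return low_d

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))      # toy high-dimensional data

Y = rng.normal(size=(300, 2))       # spring model: iterate until stable
for _ in range(50):
    Y = spring_model_step(X, Y)

# t-SNE instead matches neighbourhood probability distributions, which
# preserves local structure well but distorts global distances.
Z = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

The spring model fits low-dimensional distances to high-dimensional ones directly, while t-SNE matches neighbourhood probabilities, which is part of why the two have such different strengths and failure modes.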

Biography

Matthew Chalmers is a professor of computer science at U. Glasgow. His PhD was at U. East Anglia, in ray tracing and object-oriented toolkits for distributed memory multiprocessors. He was an intern at Xerox PARC before starting work as a researcher at Xerox EuroPARC, where he worked on information visualisation and early ubicomp systems, e.g. BirdDog, Xerox’ first Active Badge system. He left Xerox to start up an information visualisation group at UBS Ubilab, in Zürich. He then had a brief fellowship at U. Hokkaido, Japan, before starting at U. Glasgow in 1999. He works in ubiquitous computing, data visualisation and HCI. He led an EPSRC programme grant advancing stochastic models of software use (A Population Approach to Ubicomp Systems Design, EP/J007617/1, £3.2M), and has been a PI on several other EPSRC projects including the £11M IRC, Equator: Technological Innovation in Physical and Digital Life (GR/N15986/01), in which he led the theory work across all 8 universities and managed the largest project, City (which spanned 5 universities). He’s an associate editor for the PACM IMWUT journal, an AC for ACM CSCW, and a reviewer for a bunch of other stuff. In general, though, he’d rather be walking, skiing, bouldering or something else like that.


Investigating haptic feedback for wearable devices (11 April, 2018)

Speaker: Dr. Simon Perrault

Abstract: Thanks to the availability of powerful miniaturized electronic components, the last decade has seen the popularization of small mobile devices such as smartphones, and even smaller devices for wearable computing. The emergence of these devices with limited output capabilities (small screens or no screen at all) is a great opportunity to consider alternatives for output. In this talk, we discuss the advantages of different modalities for output on wearable devices.

Bio: Dr Simon Perrault received his PhD in Computer Science from Telecom ParisTech (France). He defended his PhD in April 2013 and joined the National University of Singapore (NUS) in December 2013 as a post-doctoral researcher. Dr Perrault’s research interest is in the area of Human-Computer Interaction, more specifically mobile and wearable interaction. Because users carry their mobile and wearable devices at nearly any given time of day, improving interaction between users and devices is a hard yet necessary task. By doing so, we aim to make users’ lives easier and enhance the quality of communication between users through their devices. In concrete terms, Dr Perrault designs new interaction techniques and wearable devices and tries to gain a better understanding of human behaviour in mobile contexts.


Human-robot Interaction: challenges with user studies (05 April, 2018)

Speaker: Amol Deshmukh

Abstract:
As we look into a future where humans and robots co-exist, there is an increasing need to deploy robots in real-world environments where people can experience these systems in their daily lives. This seminar will introduce different human-robot interaction studies I have carried out in diverse social environments, for example offices, schools and shopping malls, and even in the wild in rural villages. I shall discuss some social, technical and practical challenges of deploying robots in the wild and conducting user studies. I shall also introduce a recent, first-of-its-kind social robot study conducted with rural participants in Indian villages.

Bio:
Dr Amol Deshmukh is a research associate in the School of Computing Science at the University of Glasgow. He is a roboticist and built his first social robot during his Bachelor's in Electronics. He completed his PhD in human-robot interaction at Heriot-Watt University in Edinburgh. His PhD work addressed key challenges in long-term human-robot interaction, focusing on the recharge behaviour of autonomous social mobile robots and an approach based on social verbal behaviour to manage user expectations during recharge. Dr Deshmukh has worked on multiple European Union projects involving social robots in workplaces, education and public spaces. His current research involves social signal processing for social robots in public spaces for the EU-funded project MuMMER (MultiModal Mall Entertainment Robot).


Thermal Interaction during Driving; Interpersonal Augmented Perception for Foster Parents (15 March, 2018)

Speaker: Patrizia Di Campli San Vito; Alberto Gonzáles Olmos

Talk 1: Thermal Interaction during Driving

 

Abstract:

Modern cars rely heavily on screens to convey information to the driver, which takes their visual attention away from the street. This distraction can be minimised by engaging other sensory channels, such as the auditory and haptic ones. While audio feedback can be disruptive, for example when listening to music or conversing with passengers, haptic feedback can be conveyed unobtrusively and aimed at the driver alone. Most research into haptic in-car interaction focuses on vibrotactile or shear feedback. Even though many cars have heated steering wheels and seats, thermal feedback has not been investigated in the past. This talk will report findings of several experiments looking into thermal interaction during driving, highlighting some challenges and features.

 

Biography:

I am a second year PhD student in the Multimodal Interaction Group at the University of Glasgow. My research focusses on in-car haptic interaction, and I hope to investigate thermal interaction in different locations and in combination with other modalities. I finished my bachelor’s and master’s degrees in Media Informatics at the University of Ulm in Germany, where I already worked in human-computer interaction, focussing on spoken dialogue systems and a communication aid for deaf participants.

 

Talk 2: Interpersonal Augmented Perception for Foster Parents

 

Abstract:
The rapidly growing field of quantified-self technologies is developing useful tools to help us know more about our own physiological condition. We propose to use these devices to help with evidence-based therapies. Foster parents often care for children with attachment problems who have had traumatic experiences and, in many cases, have problems communicating their emotions or regulating their behaviour. Attachment problems are intimately related to the development of anxiety disorders later in life. To help mediate their relationship, there are therapies such as video interaction guidance, where families learn to interact with each other in a positive way.
In our research we are investigating how quantified-self devices can be used in connection with multimodal interfaces in order to augment the perception of the internal state of a child with attachment problems. We want to enhance the throughput of real-time information that foster parents receive from their child during therapy, so that over time the adult can acquire a better understanding of the internal state of the person they care for, which they would not be able to perceive otherwise.

Bio:
Alberto is a PhD student in the Multimodal Interaction Group at the University of Glasgow. His research is part of a European project (TEAM-ITN, technology-enabled mental health) which aims to develop technologies for early diagnosis, prevention and treatment of mental health problems. Alberto is studying how multimodal interfaces could display physiological information of children with anxiety disorders to raise awareness of their condition in their social circle. He has a background in Electrical Engineering and a Master's in Biomedical Engineering, and has worked mainly as a biomedical engineer applying machine learning techniques to IVUS images and image/signal-processing techniques to fMRI data.


Conscious Control of Behaviour with Technology for Health Benefits (by Alpha Health) (08 March, 2018)

Speaker: Oliver Harrison (CEO, Alpha Health Moonshot) and Aleksandar Matic (Head of Research, Alpha Health Moonshot)

Abstract

Worldwide, healthcare costs have been increasing faster than GDP for 50 years, whilst key indicators such as life expectancy have plateaued in many countries. The root cause is a change in disease burden – across the world, today’s biggest diseases and biggest killers are chronic diseases such as cancer, heart attack, stroke, and mental health disorders. It is now well-established that the main cause of chronic diseases is everyday behaviours such as poor diet, lack of exercise, using tobacco, excessive alcohol, or lack of sleep, yet healthcare systems today are simply not designed to help people change these root causes of chronic disease. To make tackling everyday behaviours even harder, pioneers in neuroscience and behavioural economics have begun to reveal that people are not even really in conscious control of their behaviour. 

 

We will talk about our work at Alpha Health (part of Telefonica’s Moonshot facility) focused on helping people take more conscious control of their actions. Our objective is to help people manage a range of diseases, and help prevent a range of health conditions. To do this, we are using digital insights and the intelligent interpretation of data to develop targeted tools, built around the user. We believe that we live in a historic moment, in which advances in neuroscience, mobile computing, and machine learning can help people to take control of their own behaviour, optimise their life, and limit the effects of unhealthy behaviour on their bodies and their minds. We have the opportunity to work systematically through the scientific, engineering, design, and commercial challenges to build a breakthrough solution.  To support our work, we are partnering with the best academics, developers, companies, and professionals around the world.


Motion Matching: A New Interaction Paradigm for the IoT (22 February, 2018)

Speaker: Augusto Esteves

Abstract:

Motion matching is a novel interaction technique designed for the Internet of Things (IoT) that is device-independent, scalable, and consistent. Unlike traditional interfaces where targets are static (to facilitate pointing), motion matching interfaces display targets that move continuously along specific trajectories, with users selecting a target by simply tracking that target's movement in real time. This seminar will introduce different case studies of the technique, covering varied IoT domains such as wearables, smart rooms, smart TVs, and even augmented reality.
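(To illustrate the selection principle, the sketch below, which is not code from the talk, selects whichever target's trajectory best correlates with the user's recent pointer or gaze movement; all names and thresholds are invented for the example.)

import numpy as np

def motion_match(user_xy, targets_xy, threshold=0.85):
    """user_xy: (T, 2) array of recent user input positions.
    targets_xy: dict mapping target name -> (T, 2) positions over the
    same time window. Returns the best-matching target, or None."""
    best, best_r = None, threshold
    for name, t_xy in targets_xy.items():
        # Correlate x and y components separately, then average.
        rx = np.corrcoef(user_xy[:, 0], t_xy[:, 0])[0, 1]
        ry = np.corrcoef(user_xy[:, 1], t_xy[:, 1])[0, 1]
        r = (rx + ry) / 2
        if r > best_r:
            best, best_r = name, r
    return best

# Toy example: two targets move on circles with different phases; the
# user tracks target "a", so "a" should be selected.
t = np.linspace(0, 2 * np.pi, 60)
targets = {"a": np.stack([np.cos(t), np.sin(t)], axis=1),
           "b": np.stack([np.cos(t + 2), np.sin(t + 2)], axis=1)}
user = targets["a"] + np.random.default_rng(0).normal(0, 0.05, (60, 2))
print(motion_match(user, targets))  # expected output: a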

 

Bio:

Augusto Esteves is a Lecturer at Edinburgh Napier University with a PhD in Informatics Engineering (Human-Computer Interaction) from the University of Madeira. His research interests lie in lowering the physical and cognitive demands of computer interfaces through the design of multimodal interaction, and he has explored these concepts as a Visiting Researcher at several institutes, including the Siemens Healthcare Technology Center, Lancaster University, the Ulsan National Institute of Science and Technology, the Eindhoven University of Technology, and the Korea Advanced Institute of Science and Technology. Finally, he leads the HCI Lab at Edinburgh Napier University, an interdisciplinary research group that designs, develops and studies novel interaction techniques for the computers of the future (http://hci.soc.napier.ac.uk).


Promoting mental health with technology (15 February, 2018)

Speaker: Petr Slovak

Abstract:

Technologies have already found an important role in detecting and helping treat mental health difficulties. However, much less is known about applying technology within prevention approaches, with the aim of promoting the resilience of those at risk and mitigating the occurrence of mental illness later in life. In this talk, I will speak about our work on supporting the development of specific aspects of resilience---such as self-regulation, coping with stress, or conflict resolution---in real-world contexts. The emphasis will be on two ongoing case studies done in collaboration with Committee for Children, developers of a prevention program used in 30% of US schools. The first project explores the potential of physical computing and smart textiles, as a proof-of-concept example of physical interventions that can be situated directly within children's everyday practices to support self-regulation. The second examines the opportunities for helping children develop constructive conflict-resolution strategies in digital multiplayer worlds, where children spend more and more time while encountering many interpersonal challenges arising from gameplay. I hope to discuss with the audience what the future of technology-enabled prevention interventions may be.

Bio:

Petr is a Visiting Research Fellow at the UCL Interaction Centre and the Evidence Based Practice Unit at UCL, funded by a Schroedinger Fellowship from the Austrian Science Fund; he also holds a Visiting Researcher position at Oxford University. His research interests are positioned at the intersection of HCI, mental health promotion, and the learning sciences, with the main focus on understanding how technology can meaningfully help in supporting the development of social-emotional competencies 'in the wild'.

 


Future Assistive Technologies for Cognition (18 January, 2018)

Speaker: Matthew Jamieson

Abstract:

In this talk I will discuss the potential areas where state-of-the-art computing research could have an impact on rehabilitation. Augmented reality technologies have the potential to provide guidance and support during everyday activities that people with brain injury can find challenging. Virtual reality can provide a platform to assess cognition, and provide training in problematic situations or scenarios (e.g. when in a busy supermarket or on public transport). Embedded, situation sensitive technologies can help provide people with optimal support in a personalized manner. I will outline key research questions and challenges to address when undertaking work in this exciting area. 

Bio:

My research has focused on the rehabilitation of cognitive difficulties resulting from neurological disorders, human-computer interaction, and the use of assistive technology in neuropsychological rehabilitation. My work has combined applied neuropsychology and HCI research methods with a range of psychological theories and approaches, including theories of human cognition and decision making.

 


GIST Seminar: Sensory Maps (11 January, 2018)

Speaker: Daniele Quercia

Abstract: 

Quercia’s work blends urban computing with social media to create maps that improve our lives and answer fundamental research questions.  Can we rethink existing mapping tools [happy-maps]? Is it possible to capture smellscapes of entire cities and celebrate good odors [smelly-maps]? And soundscapes [chatty-maps]?

[happy-maps] http://www.ted.com/talks/daniele_quercia_happy_maps 

[smelly-maps] http://goodcitylife.org/smellymaps/index.html 

[chatty-maps] http://goodcitylife.org/chattymaps/index.html

Bio: 

Daniele Quercia leads the Social Dynamics group at Bell Labs in Cambridge (UK). He has been named one of Fortune magazine's 2014 Data All-Stars, and spoke about “happy maps” at TED. His research focuses on urban informatics and has received best paper awards from Ubicomp 2014 and ICWSM 2015, and an honourable mention from ICWSM 2013. He was a Research Scientist at Yahoo Labs, a Horizon senior researcher at the University of Cambridge, and a Postdoctoral Associate in the Department of Urban Studies and Planning at MIT. He received his PhD from University College London. His thesis was sponsored by Microsoft Research and was nominated for the BCS Best British PhD Dissertation in Computer Science.


GIST Seminar: Sensing, Modelling and Understanding Human Behaviour from Mobile Data (14 December, 2017)

Speaker: Mirco Musolesi

Abstract: In the recent years, the emergence and widespread adoption of new technologies from social media to smartphones are rapidly changing the social sciences, since they allow researchers to analyse, study and model human behavior at a scale and at a granularity that were unthinkable just a few years ago. These developments can be seen as the emergence of a new data-driven and computation-based approach to social science research, usually referred to as "computational social science”.

In this talk I will discuss the work of my lab in a key area of this emerging discipline, namely the analysis and modelling of human behavioural patterns from mobile and sensor data. I will also give an overview of our work on mobile sensing for human behaviour modelling and prediction. I will present our ongoing projects in the area of mobile systems for mental health. In particular, I will show how mobile phones can be used to collect and analyse mobility patterns of individuals in order to quantitatively understand how mental health problems affect their daily routines and behaviour and how potential changes can be automatically detected. More generally, I will discuss our research directions in the area of anticipatory mobile computing, outlining open questions and opportunities for cross-disciplinary collaboration.
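(A hypothetical sketch of the kind of analysis this enables, not the lab's actual pipeline: derive a simple daily mobility feature from location samples and flag days that deviate from an individual's recent norm.)

import numpy as np

def location_entropy(place_ids):
    """Shannon entropy of the distribution of places visited in one day."""
    _, counts = np.unique(place_ids, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def flag_changes(daily_entropies, window=14, z_thresh=2.0):
    """Flag days whose entropy deviates from the trailing window's mean
    by more than z_thresh standard deviations (a crude change detector)."""
    flags = []
    for i in range(window, len(daily_entropies)):
        hist = np.asarray(daily_entropies[i - window:i])
        z = abs(daily_entropies[i] - hist.mean()) / (hist.std() + 1e-9)
        flags.append(z > z_thresh)
    return flags

# Toy data: 25 routine days concentrated on a few places, then 5 erratic
# days; the later days should be flagged.
rng = np.random.default_rng(1)
days = [rng.choice(5, size=40, p=[.6, .2, .1, .05, .05]) for _ in range(25)]
days += [rng.choice(5, size=40) for _ in range(5)]
entropies = [location_entropy(d) for d in days]
print(flag_changes(entropies))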

 

Bio: Mirco Musolesi is a Reader (equivalent to an Associate Professor in the North American system) in Data Science at University College London and a Turing Fellow at the Alan Turing Institute, the UK national institute for data science. At UCL he leads the Intelligent Social Systems Lab. He has held research and teaching positions at Dartmouth, Cambridge, St Andrews and Birmingham. He is a computer scientist with a strong interest in sensing, modelling, understanding and predicting human behaviour and social dynamics in space and time, at different scales, using the “digital traces” we generate daily in our online and offline lives. He is interested in developing mathematical and computational models as well as implementing real-world systems based on them. This work has applications in a variety of domains, such as intelligent systems design, ubiquitous computing, digital health, security & privacy, and data science for social good. More details about his research profile can be found at: https://forum.databoxproject.uk/


Bio:

Richard Mortier is a University Lecturer in the Cambridge University Computer Lab Systems Research Group, and contracts as an engineer for Docker. Past work includes Internet routing, distributed system performance analysis, network management, aesthetically designable machine-readable codes and home networking. He works at the intersection of systems and HCI, building user-centric systems infrastructure to enable Human-Data Interaction in our ubiquitous computing world. For more see http://mort.io


GIST Seminar: Why camels and self-driving cars make us sick (09 November, 2017)

Speaker: Dr. Cyriel Diels

Abstract: 

Motion sickness has been around since the moment humans stopped using their feet and travelled from A to B using passive forms of transport. As such, motion sickness has to be understood as a normal response to abnormal motion environments. Conditions that lead to motion sickness are characterised by sensory rearrangements that we are not accustomed to. Coincidentally, two of today’s major technology trends, Virtual Reality and self-driving cars, share a common fate: they can make us feel unwell... This talk will introduce the basic underlying causes of motion sickness and discuss different approaches to reduce or eliminate the occurrence of motion sickness in these new motion environments.


Short Bio:

Dr Cyriel Diels is a psychologist focussing on transport human factors and design. Following his PhD at Loughborough University into visually induced motion sickness, he worked as a research scientist at the Transport Research Laboratory (TRL) in the areas of driver behaviour, simulation technology, and Human Machine Interactions. He subsequently joined the research department at Jaguar Land Rover (JLR) developing novel HMI concepts before returning to academia in 2012. As a Human Factors lecturer and researcher in the Centre for Mobility and Transport at Coventry University, his work focusses on the human-centred design of future vehicles, in particular the passenger experience and design implications for future vehicles. In 2017 he was appointed as the Academic Director of the National Transport Design Centre (NTDC), a cross-disciplinary centre exploring influences on future vehicle design and the articulation of design through improved physical and virtual tools.


GIST Seminar: Interaction with Levitating Objects (26 October, 2017)

Speaker: Dr. Euan Freeman

Abstract:

The Levitate project is developing new types of user interface based on levitating objects, which users interact with in mid-air. I'm going to talk about acoustic levitation and two ways we're using it to create new interaction techniques. I'll also talk about ultrasound haptic feedback, another novel use of acoustics in HCI. I'll finish with some demos of our work, so you can try everything yourself.

Short bio:

Euan Freeman is a Research Associate in the Levitate project, which is developing a new form of human-computer interface based on un-instrumented mid-air interactions with levitating physical particles.

He received his PhD from the University of Glasgow, supervised by Stephen Brewster and Vuokko Lantz from Nokia, Finland. His PhD thesis describes interaction techniques for addressing in-air gesture systems. 

His research interests include multimodal human-computer interaction, novel around-device interaction and gesture interaction with small devices.


GIST Seminar: Better than life? How long-term virtual & augmented reality use could help or harm (19 October, 2017)

Speaker: Graham Wilson

Abstract:

Decades of short, lab-based studies have shown that experiences in virtual reality (VR) have strong and often subconscious effects on users, providing psychological or cognitive benefits such as improved learning, increased self-esteem or treating mental illness, but also harm, through manipulation, escalated gambling or experiences of realistic violence. It had not been possible to measure the long-term effects of ‘real world’ VR use until the recent availability of affordable commercial devices. The fidelity of experiences will only increase over the near future, as technologists and interface designers seek greater realism, and these advances could lead to both measurable benefit and harm. Augmented reality (AR) also has huge potential, but through a different mechanism to VR: altering perception of physical reality rather than creating a virtual world. It is therefore imperative to understand what media, technological and personality factors lead to these benefits and detriments, so that we can change the nature of the content or computer interaction to amplify the benefits, mitigate the detriments and inform the public. This area of research is the topic of my upcoming fellowship application, so this talk will be speculative, discussing informative past research, including a recent study of our own, and what it says about possible futures. During the talk I invite attendees to offer their own opinions on the topic, the research and the future.

Short bio:

I am a research associate in Human-Computer Interaction in the Multimodal Interaction Group at the University of Glasgow, researching the areas of virtual reality and affective feedback in digital communication. My interests are predominantly the perceptual, cognitive and psychophysical aspects of interaction with computing devices, and designing interfaces to suit the range and limits of human ability in terms of both input and output.



Children with Autism: Moving Towards Fully Inclusive Design Practices (15 June, 2017)

Speaker: Cara Wilson (Queensland University of Technology)

Involving children with autism in the design of technologies can produce truly effective tools. However, much of the content used in such technologies is predefined by neuro-typical adult designers, and little research in HCI focuses on leveraging the child’s own strengths, interests and capabilities in order to support important competencies such as social interaction and self-expression. Further, much current research focuses on co-design with children with autism who are verbal, while their non-verbal peers are often excluded from the design process. We begin to investigate how practices from disciplines such as Speech and Language Therapy, and Education may be applicable to the co-design process with non-verbal children with autism. I will discuss our aims to make our co-design sessions truly child-led, moving towards design beyond words. I will present MyPortfolio, a suite of simple apps which aim to provide holistic, interest-based support for child, teacher and parent in autism-specific school settings. Following this brief talk, I’d like to open the topic to discussion, encouraging your opinions on design with children.

Cara Wilson is a PhD candidate at Queensland University of Technology.


GIST Seminar: To See What Isn’t There – Visualization of Missing Data (20 April, 2017)

Speaker: Dr. Sara J. Fernstad

Abstract:

Missing data are records that are absent from a data set: data that were intended to be recorded but, for some reason, were not. Missing data occur in almost any domain and are a common data analysis challenge, causing problems such as biased results and reduced statistical rigour. Although data visualization has great potential to provide invaluable support for the investigation of missing data, missing data challenges are rarely addressed by the visualization community. This talk will cover various concepts and aspects of missing data analysis, suggest patterns of relevance for gaining further understanding of ‘missingness’ in datasets, and present the results of an evaluation of different visual representations of missing data. It will also suggest some directions for designing visualization to support the understanding of ‘missingness’ in data.
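
One simple representation of this kind, a 'missingness matrix' in which each dark cell marks an absent value, can be sketched in a few lines (an illustration using pandas and matplotlib, not Dr Fernstad's own designs):

    # Plot a missingness matrix: rows are records, columns are variables,
    # dark cells mark values that are absent from the data set.
    import matplotlib.pyplot as plt
    import pandas as pd

    def plot_missingness(df: pd.DataFrame) -> None:
        mask = df.isna().to_numpy()   # True wherever a value is missing
        plt.imshow(mask, aspect="auto", cmap="gray_r", interpolation="nearest")
        plt.xticks(range(len(df.columns)), df.columns, rotation=90)
        plt.xlabel("variable")
        plt.ylabel("record")
        plt.title("Pattern of missing values")
        plt.tight_layout()
        plt.show()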

Biography: Dr Sara Johansson Fernstad

Sara received a PhD in Visualization and Interaction from Linköping University (Sweden) in 2011. She will take up a lectureship at the School of Computing Science at Newcastle University in May 2017, and has held a lectureship at the Department of Computer and Information Sciences at Northumbria University since 2014. Between 2011 and 2014 she carried out post-doctoral research at Unilever R&D Port Sunlight and at Cambridge University. Her main research focuses on Information Visualization, with particular interest in visualization of high-dimensional data, heterogeneous data and incomplete data, and the application of visualization approaches in the analysis of ‘omics-type data.


GIST Seminar: Co-Designed, Collocated & Playful Mobile Interactions (13 April, 2017)

Speaker: Dr. Andrés Lucero

Abstract: Mobile devices such as smartphones and tablets were originally conceived for individual use, and have traditionally been utilized that way. Research on mobile collocated interactions has explored situations in which collocated users engage in collaborative activities using their mobile devices, thus moving from personal/individual toward shared/multiuser experiences and interactions. The Social and Spatial Interactions (SSI) platform extends the current individual use of these devices to support shared collocated interactions with mobile phones, using the phone as a physical interface and a sensor network built into the phone to track its position on a flat surface. The question the platform addresses is whether people are willing to share their devices and engage in collaborative interactions. In this talk I will discuss the different methods used to create playful and engaging interactions in the context of the SSI project.

Bio: Andrés Lucero is Associate Professor of Interaction Design at Aalto University. His work focuses on the design and evaluation of novel interaction techniques for mobile devices and other interactive surfaces. He received his MA degree in Visual Communication Design from Universidad Tecnológica Metropolitana (1999), PDEng in User-System Interaction from Eindhoven University of Technology (2004), and PhD in Human-Computer Interaction from Eindhoven University of Technology (2009). His research interests include human-computer interaction, design, and play.



GIST Seminar: Experiments in Positive Technology: the positives and negatives of meddling online (16 March, 2017)

Speaker: Dr. Lisa Tweedie


This talk is going to report on a few informal action research experiments I have conducted over a period of seven years using social media. Some have been more successful than others. The focus behind each is "How do we use technology/social media to make positive change?"

I will briefly discuss four interventions and what I have learnt from them.

A) Chile earthquake emergency response via Twitter and WordPress 

B) Make Malmesbury Even Better - Community Facebook page

C) Langtang lost and found - Facebook support group for families involved in the Langtang earthquake, Nepal

D) I am Amira - educational resources for British schools about the refugee crisis downloaded by 4000+ schools from Times Educational Supplement Resources online (TES)

www.iamamira.wordpress.co.uk

Three of these are still ongoing projects. I will make the case that these projects have all initiated positive change, but that each also has its darker side. I will discuss how each has affected me personally.

I will conclude with how I plan to carry forward my findings into the education arena. My current research thoughts are around education, play and outdoor learning.



Lisa started her academic life as a psychologist (via engineering product design at South Bank Poly), gaining a BSc (Hons) in Human Psychology from Aston University. She was then Phil Barnard's RA at the Applied Psychology Unit in Cambridge (MRC APU), researching low-level cognitive models for icon search. She soon realised she wanted to look at the world in a more pragmatic way.

Professor Bob Spence invited her to do a PhD in the visualisation of data at Imperial College, London (Dept of EEE). This was the start of a successful collaboration that continues to this day. She presented her work internationally at CHI, PARC (Palo Alto) and Apple (Cupertino), amongst other places. Lisa's visualisation work is still taught in computer science courses worldwide. She did a couple of years of postdoctoral work at Imperial, developing visual tools to help problem holders create advanced statistical models (generalised linear models - Nelder - EPSRC), but felt industry calling. She then spent six happy years working for Nortel and Oracle as part of development teams. She worked on telephone network fault visualisations, on managing vast quantities of live telephone fraud data generated by genetic matching algorithms (SuperSleuth), and on interactive UML models of code (Oracle JDeveloper). She is named on two patents from this work.

Once Lisa had her second child she chose to leave corporate life. She had a teaching fellowship at Bath University in 2005. In 2007 she started a consultancy based around "positive technology". She worked as a UX mentor with over 50 companies remotely via Skype from her kitchen; many of these were start-ups in Silicon Valley. In 2011 she was awarded an honorary research fellowship at Imperial College.

Four years ago she trained as a secondary maths teacher and has a huge interest in special needs. She tutors students of all abilities and age groups in maths, English and reading each week. Most recently she returned to the corporate world, working as a Senior User Experience Architect for St. James's Place. On 5th January 2017 she became self-employed and is looking to return to the academic research arena with a focus on education, play and outdoor learning. Action research is where she wants to be.

Lisa is also a community activist, a hands-on parent to three lively children and a disability rights campaigner. She has lived with Ehlers-Danlos Syndrome, a rare genetic connective tissue disorder, her whole life. She is also a keen photographer, iPad artist (www.tweepics.wordpress.co.uk), writer and maker, and has run numerous book clubs.

https://www.linkedin.com/in/lisatweedie/

lisa@wheatridge.co.uk



GIST Seminar: Success and failure in ubiquitous computing, 30 years on. (23 February, 2017)

Speaker: Prof. Lars Erik Holmquist

It is almost three decades since Mark Weiser coined the term "ubiquitous computing" at Xerox PARC around 1988. The paper "The Computer for the 21st Century" was published in 1991, and the first Ubiquitous and Handheld Computing (now UBICOMP) conference was organized in 1999. It is clear that some of the ubicomp vision has come to pass (e.g. ubiquitous handheld computing terminals) whereas other parts have failed (arguably, any notion of "calm technology" and "computers that get out of the way of the work"!). I'd like to take this seminar to discuss some of my top picks for success and failure in ubicomp, and I invite participants to do the same!
Homework: Think of at least one ubicomp success and one ubicomp failure, as they relate to the various visions of ubiquitous/pervasive/invisible/etc. computing!
 
Lars Erik Holmquist is newly appointed Professor of Innovation at Northumbria University, Department of Design. He has worked in ubicomp and design research for 20 years, including as co-founder of The Mobile Life Centre in Sweden and Principal Scientist at Yahoo! Research in Silicon Valley. His book on how research can lead to useful results, "Grounded Innovation: Strategies for Developing Digital Products", was published by Morgan Kaufmann in 2012. Before joining Northumbria, he spent two years in Japan where he was a Guest Researcher at the University of Tokyo, learned Japanese, wrote a novel about augmented reality and played in the garage punk band Fuzz Things.


GIST Seminar: Understanding the usage of onscreen widgets and exploring ways to design better widgets for different contexts (16 February, 2017)

Speaker: Dr. Christian Frisson

Interaction designers and HCI researchers are expected to have skills for both creating and evaluating systems and interaction techniques. For evaluation phases, they often need to collect information regarding usage of applications and devices, to interpret quantitative and behavioural aspects of users or to provide design guidelines. Unfortunately, it is often difficult to collect users' behaviours in real-world scenarios from existing applications, due to the unavailability of scripting support and of access to the source code. For creation phases, they often have to comply with constraints imposed by the interdisciplinary team they are working with and by the diversity of the contexts of usage. For instance, the car industry may decide that dashboards are easier to manufacture and service with controls printed flat or curved rather than mounted as physical controls, even though the body of research shows that physical controls are more efficient and safer for drivers.

This talk will first present InspectorWidget, an open-source suite which tracks and analyses users' behaviours with existing software and programs. InspectorWidget covers the whole pipeline of software analysis, from logging input events to visual statistics, through browsing and programmable annotation. To achieve this, InspectorWidget combines low-level event logging (e.g. mouse and keyboard events) and high-level screen features (e.g. interface widgets) captured through computer vision techniques. The goal is to provide a tool for designers and researchers to understand users and develop more useful interfaces for different devices.
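
The low-level half of such a pipeline can be illustrated in a few lines (a toy sketch using the pynput library; InspectorWidget itself pairs its event logs with screen recording and computer vision rather than this exact code):

    # Toy sketch of low-level input logging: timestamped mouse clicks
    # and key presses collected for ten seconds.
    import time
    from pynput import keyboard, mouse

    log = []

    def on_click(x, y, button, pressed):
        log.append((time.time(), "click", x, y, str(button), pressed))

    def on_press(key):
        log.append((time.time(), "key", str(key)))

    with mouse.Listener(on_click=on_click), keyboard.Listener(on_press=on_press):
        time.sleep(10)   # record ten seconds of interaction

    print(f"captured {len(log)} events")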

The talk will then discuss an ongoing project which explores ways to design haptic widgets, such as buttons, sliders and dials, for touchscreens and touch-sensitive surfaces on in-car centre consoles. Touchscreens are now commonly found in cars, replacing the need for physical buttons and switchgear, but there are safety concerns regarding driver distraction due to the loss of haptic feedback. We propose the use of interactive sound synthesis techniques to design and develop effective widgets with haptic feedback capabilities for in-car touchscreens, reducing visual distraction for the driver.


Christian Frisson graduated with an MSc in "Art, Science, Technology (AST)" from Institut National Polytechnique de Grenoble (INPG) and the Association for the Creation and Research on Expression Tools (ACROE), France, including a visiting research internship in the MusicTech group at McGill University, Montreal, Québec, Canada, in 2006. In February 2015, he obtained his PhD degree with Professor Thierry Dutoit at the University of Mons (UMONS), numediart Institute, Belgium, on designing interaction for browsing media collections (by similarity). Since June 2016, he has been a postdoc at Inria Lille, Mjolnir team, working on designing vibrotactile feedback for dashboard widgets within the H2020 EU project HAPPINESS, whose partners include Alexander Ng and Stephen Brewster from the Multimodal Interaction Group of the University of Glasgow.


GIST Seminar: Sharing emotions in collaborative virtual environments (19 January, 2017)

Speaker: Arindam Dey

Interfaces for collaborative tasks, such as multiplayer games, can enable effective remote collaboration and enjoyable gameplay. However, in these systems the emotional states of the users are often not communicated properly due to the remoteness. In this talk, I will present two recent pieces of work from the Empathic Computing Lab (UniSA).
In the first work, we investigated, for the first time, the effects of sharing the emotional state of one collaborator with the other during an immersive Virtual Reality (VR) gameplay experience. We created two collaborative immersive VR games that display the real-time heart rate of one player to the other. The two games elicited different emotions, one joyous and the other scary. We tested the effects of visualizing heart-rate feedback in comparison with conditions where such feedback was absent. Based on subjective feedback, we noticed clear indications of higher positive affect, collaborative communication, and subjective preference when the heart-rate feedback was shown. The games had significant main effects on the overall emotional experience.
In the second work, we explore the effect of different VR games on human emotional responses, measured physiologically and subjectively in a within-subjects user study. In the study, six different types of VR experience were experienced by 11 participants, and nine emotions were elicited and analyzed from physiological signals. The results indicate that there are primarily three emotions that are dominant when experiencing VR, and the same emotions are elicited in all the experiences we tested. Subjective and objective measurements of emotion showed similar results, but participants reported experiencing emotions more strongly subjectively than was measured objectively.


Health technologies for all: designing for use "in the wild" (23 November, 2016)

Speaker: Prof. Ann Blandford

Abstract: There is a plethora of technologies for helping people manage their health and wellbeing: from self-care of chronic conditions (e.g. renal disease, diabetes) and palliative care at end of life through to supporting people in developing mindfulness practices or managing weight or exercise. In some cases, digital health technologies are becoming consumer products; in others, they remain under the oversight of healthcare professionals but are increasingly managed by lay people. How (and whether) these technologies are used depends on how they fit into people’s lives and address people’s values. In this talk, I will present studies on how and why people adopt digital health technologies, the challenges they face, how they fit them into their lives, and how to identify design requirements for future systems. There is no one-size-fits-all design solution for any condition: people have different lifestyles, motivations and needs. Appropriate use depends on fitness for purpose. This requires either customisable solutions or solutions that are tailored to different user populations.

Biography: Ann Blandford is Professor of Human–Computer Interaction at University College London and Director of the UCL Institute of Digital Health. Her expertise is in human factors for health technologies, and particularly how to design systems that fit well in their context of use. She is involved in several research projects studying health technology design, patient safety and user experience. She has published widely on the design and use of interactive health technologies, and on how technology can be designed to better support people’s needs.


Implementing Ethics for a Mobile App Deployment (17 November, 2016)

Speaker: John Rooksby

In this talk I’ll discuss a paper I’ll be presenting at OzCHI 2016.

Abstract: "This paper discusses the ethical dimensions of a research project in which we deployed a personal tracking app on the Apple App Store and collected data from users with whom we had little or no direct contact. We describe the in-app functionality we created for supporting consent and withdrawal, our approach to privacy, our navigation of a formal ethical review, and navigation of the Apple approval process. We highlight two key issues for deployment-based research. Firstly, that it involves addressing multiple, sometimes conflicting ethical principles and guidelines. Secondly, that research ethics are not readily separable from design, but the two are enmeshed. As such, we argue that in-action and situational perspectives on research ethics are relevant to deployment-based research, even where the technology is relatively mundane. We also argue that it is desirable to produce and share relevant design knowledge and embed in-action and situational approaches in design activities.”

Authors: John Rooksby, Parvin Asadzadeh, Alistair Morrison, Claire McCallum, Cindy Gray, Matthew Chalmers. 


Towards a Better Integration of Information Visualisation and Graph Mining (22 September, 2016)

Speaker: Daniel Archambault

As we enter the big data age, the fields of information visualisation and data mining need to work together to tackle problems at scale. The two areas provide complementary techniques for big data: machine learning provides automatic methods that quickly summarise very large data sets which would otherwise be incomprehensible, while information visualisation provides interfaces that leverage human creativity and can facilitate the discovery of unanticipated patterns. This talk presents an overview of some of the work conducted in graph mining, an area of data mining that deals specifically with network data. Subsequently, the talk considers synergies between these two areas in order to scale to larger data sets, and presents examples of projects. We conclude with a discussion of how information visualisation and data mining can collaborate effectively in the future.
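
As a toy illustration of that synergy, the sketch below mines community structure automatically and then hands the result to a visualisation a human can inspect (using networkx and matplotlib; the example is ours, not the speaker's):

    # Graph mining + visualisation: detect communities automatically,
    # then draw the network coloured by community for visual inspection.
    import matplotlib.pyplot as plt
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.karate_club_graph()                      # classic small social network
    communities = greedy_modularity_communities(G)  # the "mining" step
    colour = {n: i for i, c in enumerate(communities) for n in c}

    pos = nx.spring_layout(G, seed=42)              # force-directed layout
    nx.draw(G, pos, node_color=[colour[n] for n in G],
            cmap=plt.cm.Set1, with_labels=True)
    plt.show()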


Logitech presentation (22 August, 2016)

Speaker: Logitech staff

Logitech are visiting the school on Monday. As part of the visit they are going to talk about the company and their research interests. If you want to come along, it will be at 11:00 in F121 and will last about 30-40 minutes.



Human-Pokemon Interaction (and other challenges for designing mixed-reality entertainment) (28 July, 2016)

Speaker: Prof Steve Benford

It’s terrifically exciting to see the arrival of Pokémon Go as the first example of a mixed reality game to reach a mass audience. Maybe we are witnessing the birth of a new game format? As someone who has been involved in developing and studying mixed reality entertainment for over fifteen years, it’s also unsurprising to see people getting hot and bothered about how such games impact the public settings in which they are played: is Pokémon Go engaging, healthy and social on the one hand, or inappropriate, annoying and even dangerous on the other?

 My talk will draw on diverse examples of mixed reality entertainment – from artistic performances and games to museum visits and amusement rides (and occasionally on Pokémon Go too) to reveal the opportunities and challenges that arise when combining digital content with physical experience. In response, I will introduce an approach to creating engaging, coherent and appropriate mixed reality experiences based on designing different kinds of trajectory through hybrid structures of digital and physical content.

 Steve Benford is Professor of Collaborative Computing in the Mixed Reality Laboratory at the University of Nottingham where he also directs the ‘Horizon: My Life in Data’ Centre for Doctoral Training. He was previously an EPSRC Dream Fellow, Visiting Professor at the BBC and Visiting Researcher at Microsoft Research. He has received best paper awards at the ACM’s annual Computer-Human Interaction (CHI) conference in 2005, 2009, 2011 and 2012. He also won the 2003 Prix Ars Electronica for Interactive Art, the 2007 Nokia Mindtrek award for Innovative Applications of Ubiquitous Computing, and has received four BAFTA nominations. He was elected to the CHI Academy in 2012. His book Performing Mixed Reality was published by MIT Press in 2011.


Formal Analysis meets HCI: Probabilistic formal analysis of app usage to inform redesign (30 June, 2016)

Speaker: Muffy Calder (University of Glasgow)

Evaluation of how users engage with applications is part of software engineering, informing redesign and/or the design of future apps. Good evaluation is based on good analysis, but users are difficult to analyse: they adopt different styles at different times! What characterises the usage style of a user, and of populations of users? How should we characterise the different styles? How do characterisations evolve, e.g. over an individual user trace and/or over a number of sessions spanning days and months? And how do characteristics of usage inform evaluation for redesign and future design?

I try to answer these questions in 30 minutes by outlining a formal, probabilistic approach based on discrete-time Markov chains and stochastic temporal logic properties, applying it to a mobile app developed right here in Glasgow and used by tens of thousands of users worldwide. A new version of the app, based on our analysis and evaluation, has just been deployed. This is experimental design and formal analysis in the wild. You will be surprised how accessible I can make the formal material.
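
For a flavour of the modelling step, the sketch below estimates a discrete-time Markov chain over app screens from logged traces (a toy example with made-up screen names; the full approach goes on to check stochastic temporal logic properties against such chains with a probabilistic model checker):

    # Estimate a discrete-time Markov chain from logged usage traces.
    from collections import Counter, defaultdict

    traces = [                      # hypothetical screen-visit logs
        ["home", "search", "results", "home"],
        ["home", "settings", "home", "search", "results"],
    ]

    counts = defaultdict(Counter)
    for trace in traces:
        for src, dst in zip(trace, trace[1:]):
            counts[src][dst] += 1

    dtmc = {src: {dst: n / sum(nexts.values()) for dst, n in nexts.items()}
            for src, nexts in counts.items()}
    print(dtmc["home"])             # P(next screen | currently on 'home')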


Perspectives on 'Crowdsourcing' (16 June, 2016)

Speaker: Helen Purchase

It is now commonplace to collect data from ‘the crowd’. This seminar will summarise discussions that took place during a recent Dagstuhl seminar entitled “Evaluation in the Crowd: Crowdsourcing and Human-Centred Experiments” – with contributions from psychology, sociology, information visualisation and technology researchers. Bring your favourite definition of ‘Crowdsourcing’ with you!


Articulatory directness and exploring audio-tactile maps (09 June, 2016)

Speaker: Alistair Edwards (University of York)

Articulatory directness is a property of interaction first described by Don Norman. The favourite examples are steering a car or scrolling a window. However, I suggest (with examples) that these are arbitrary, learned mappings. This has become important in work which we have been doing on interactive audio-tactile maps for blind people. Unlike conventional tactile maps, ours can be rotated, maintaining an ego-centric frame of reference for the user. Early experiments suggest that this helps the user to build a more accurate internal representation of the real world, and that a steering wheel does not show articulatory directness.


Making for Madagascar (02 June, 2016)

Speaker: Janet Read (University of Central Lancashire)

It is commonly touted in HCI that engagement with users is essential for great product design. Published research reports only successes in participatory design with children, but in reality there is much to be concerned about, and there is not any great case to be made for children's engagement in these endeavours. This talk will situate the work of the ChiCI group in designing with children for children by exploring how two games were designed and built for children in rural Madagascar. There is something in the talk for anyone doing research in HCI, and for anyone doing research with human participants.


Emotion Recognition On the Move (28 April, 2016)

Speaker: Juan Ye (University of St Andrews)

Past research in pervasive computing focuses on location-, context-, activity- and behaviour-awareness; that is, systems provide personalised services to users, adapting to their current locations, environmental context, tasks at hand, and ongoing activities. With the rising requirements of new types of applications, emotion recognition is becoming more and more desirable; for example, from adjusting the response or interaction of a system to the emotional states of users in the HCI community, to detecting early symptoms of depression in the health domain, and to better understanding the environmental impact on users’ mood in wider-scale city engineering. However, recognising different emotional types is a non-trivial task, in terms of computational complexity and user study design; that is, how we inspire and capture natural expressions of users in real-world tasks. In this talk, I will introduce two emotion recognition systems recently developed by our senior honours students in St Andrews, and share our experiences in conducting real-world user studies. One system is a smartphone-based application that unobtrusively and continuously monitors and collects users’ acceleration data and infers their emotional states, such as neutral, happy, sad, angry, and scared. The other system infers the social cues of a conversation (such as positive and negative emotions, agreement and disagreement) through streaming video captured by imaging glasses.
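
The shape of the smartphone pipeline described (window the accelerometer stream, extract features, train a classifier) might be sketched as follows, with placeholder data and features; the students' actual systems may well differ:

    # Toy emotion classifier over accelerometer windows (placeholder data).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def features(window: np.ndarray) -> np.ndarray:
        # window: (n_samples, 3) of x/y/z acceleration
        mag = np.linalg.norm(window, axis=1)
        return np.array([mag.mean(), mag.std(), mag.max() - mag.min()])

    rng = np.random.default_rng(0)
    windows = [rng.normal(size=(128, 3)) for _ in range(200)]  # stand-in data
    labels = rng.choice(["neutral", "happy", "sad", "angry", "scared"], 200)

    X = np.array([features(w) for w in windows])
    clf = RandomForestClassifier().fit(X, labels)
    print(clf.predict(X[:3]))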


Why don't SMEs take Cyber Security seriously? (21 April, 2016)

Speaker: Karen Renaud

I have been seconded to the Scottish Business Resilience Centre this year, trying to answer the question in the title. I will explain how I went about carrying out my study and what my findings were.


EulerSmooth: Smoothing of Euler Diagrams (14 April, 2016)

Speaker: Dan Archambault (Swansea University)

Drawing sets of elements and their intersections is important for many applications in the sciences and social sciences. This talk presents a method for improving the appearance of Euler diagrams. The approach works on any diagram drawn with closed curves represented as polygons, and is based on a force system derived from curve shortening flow. We describe the method and discuss its use on practical data sets.
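
The underlying idea can be sketched compactly: a discrete curve-shortening step moves each polygon vertex toward the midpoint of its neighbours, smoothing the closed curve (the published method adds further forces that preserve the diagram's set intersections, which this toy version omits):

    # One discrete curve-shortening step for a closed polygon.
    import numpy as np

    def smooth_step(poly: np.ndarray, step: float = 0.1) -> np.ndarray:
        # poly: (n, 2) array of vertices of a closed polygonal curve
        prev = np.roll(poly, 1, axis=0)
        nxt = np.roll(poly, -1, axis=0)
        return poly + step * ((prev + nxt) / 2 - poly)  # pull toward midpoints

    curve = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], dtype=float)
    for _ in range(50):
        curve = smooth_step(curve)   # the square progressively rounds and shrinks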


Personal Tracking and Behaviour Change (07 April, 2016)

Speaker: John Rooksby

In this talk I’ll give a brief overview of the personal tracking applications we have been working on at Glasgow, and then describe our work-in-progress on the EuroFIT programme (this is a men’s health intervention being delivered via European football clubs). I’ll conclude with some considerations of the role of Human Computer Interaction in researching behaviour change and developing lifestyle interventions - particularly the role of innovation, user experience design and field trials.



Blast Off: Performance, design, and HCI at the Mixed Reality Lab (17 March, 2016)

Speaker: Dr Jocelyn Spence (University of Nottingham)

The University of Nottingham's Mixed Reality Lab is renowned for its work at the forefront of experience design using artistic performance to drive public interactions with technology. However, there is far more going on at the MRL than its inspiring collaborations with Blast Theory. Jocelyn Spence has worked at the intersection of performance and HCI by focusing on more private, intimate groupings involving storytelling. She is now a visiting researcher at the MRL, leading and contributing to projects that take a similarly personal approach to public performance with digital technologies. This talk will cover her current and previous work in Performative Experience Design.


Kinesthetic Communication of Emotions in Human-Computer Interaction (21 January, 2016)

Speaker: Yoren Gaffary (INRIA)

The communication of emotions uses several modalities of expression, such as facial expressions or touch. Even though touch is an effective vector of emotions, it remains little explored. This talk concerns the kinesthetic expression and perception of emotions in a human-computer interaction setting. It discusses the kinesthetic expression of some semantically close, acted emotions, and its role in the perception of these emotions. Finally, the talk will go beyond acted emotions by exploring the expression and perception of a spontaneous state of stress. The results have multiple applications, such as better integration of the kinesthetic modality in virtual environments and in human-human remote communication.


Multidisciplinary Madness in the Wild (29 October, 2015)

Speaker: Prof Jon Whittle (Lancaster University)

This talk will reflect on a major 3 year project, called Catalyst, that carried out 13 multidisciplinary, rapid innovation digital technology research projects in collaboration with community organisations “in the wild”. These projects covered a wide range of application domains including quantified self, behaviour change, and bio-feedback, but were all aimed at developing innovative digital solutions that could promote social change. Over the 3 year project, Catalyst worked in collaboration with around 90 community groups, charities, local councils and other organisations to co-develop research questions, co-design solutions, and co-produce and co-evaluate them. The talk will reflect on what worked well and badly in this kind of highly multidisciplinary research ‘in the wild’ project. www.catalystproject.org.uk

Bio: Jon Whittle is Professor of Computer Science and Head of School at Lancaster’s School of Computing and Communications. His background is in software engineering and human-computer interaction research but in the last six years, he has taken a keen interest in interdisciplinary research. During this time, he has led five major interdisciplinary research projects funded to around £6M. Through these, he has learned a lot about what works — and what doesn’t — when trying to bring researchers from different disciplinary backgrounds together.


How do I Look in This? Embodiment and Social Robotics (16 October, 2015)

Speaker: Ruth Aylett
Glasgow Social Robotics Seminar Series

Robots have been produced with a wild variety of embodiments, from plastic-skinned dinosaurs to human lookalikes, via any number of different machine-like robots. Why is embodiment important? What do we know about the impact of embodiment on the human interaction partners of a social robot? How naturalistic should we try to be? Can one robot have multiple embodiments? How do we engineer expressive behaviour across embodiments? I will discuss some of these issues in relation to work in the field.


Intent aware Interactive Displays: Recent Research and its Antecedents at Cambridge Engineering (15 October, 2015)

Speaker: Pat Langdon and Bashar Ahmad (University of Cambridge)

Current work at CUED aimed at stabilising pointing for moving touchscreen displays has met recent success in the automotive sector, including funding and patents. This talk will establish the antecedents of the approach in studies aimed at improving access to computers for people with impairments of movement and vision.

One theme in the EDC has been computer-assisted interaction for people with movement impairment, using haptic feedback devices. This early approach showed some promise in mitigating extremes of movement but was dependent on hardware implementations such as the Logitech haptic mouse. Other major studies since have examined more general issues behind the development of multimodal interfaces: for interactive digital TV (EU GUIDE), and for adaptive mobile interfaces exploiting new developments in wireless communication, in the India-UK Advanced Technology Centre (IU-ATC).
Most recently, Pat Langdon’s collaboration with the department’s signal processing group has led to the realisation that predicting a user's pointing intention from extremely perturbed cursor movement is a similar problem to predicting a moving object's future position based on irregularly timed and cluttered trajectory data points from multiple sources. This raised an opportunity in the automotive domain, and Bashar Ahmad will describe in detail recent research on using software filtering as a way of improving interactions with touchscreens in a moving vehicle.
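
For a flavour of the filtering idea, a textbook constant-velocity Kalman filter over noisy cursor samples looks like this (an illustration only; the CUED work uses considerably richer Bayesian intent models):

    # Constant-velocity Kalman filter for a jittery touchscreen cursor.
    import numpy as np

    dt = 1 / 60                                   # display frame period (s)
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]])   # state: x, y, vx, vy
    H = np.array([[1., 0, 0, 0], [0, 1, 0, 0]])  # only position is observed
    Q = np.eye(4) * 1e-3                          # process noise
    R = np.eye(2) * 5e-2                          # measurement (vibration) noise

    def kalman_step(x, P, z):
        # One predict/update cycle for a noisy sample z = [x_pix, y_pix].
        x, P = F @ x, F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P                               # x[:2] is the stabilised position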

BIO

Dr Pat Langdon is a Principal Research Associate in the Cambridge University Engineering Department and lead researcher in inclusive design within the Engineering Design Centre. He has originated numerous research projects in design for inclusion and HMI since joining the department in 1997. He is currently PI of two projects, one of them a commercial collaboration in automotive, and Co-I of a four-year EPSRC research collaboration.

Dr Bashar Ahmad is a Senior Research Associate in the Signal Processing and Communications (SigProC) Laboratory, Engineering Department, Cambridge University. Prior to joining SigProC, Bashar was a postdoctoral researcher at Imperial College London. His research interests include statistical signal processing, Bayesian inference, multi-modal human computer interactions, sub-Nyquist sampling and cognitive radio.


GlobalFestival: Evaluating Real World Interaction on a Spherical Display (03 September, 2015)

Speaker: Julie Williamson (University of Glasgow)

Spherical displays present compelling opportunities for interaction in public spaces. However, there is little research into how touch interaction should control a spherical surface or how these displays are used in real-world settings. This paper presents an in-the-wild deployment of an application for a spherical display called GlobalFestival, which utilises two different touch interaction techniques. The first version of the application allows users to spin and tilt content on the display, while the second version only allows spinning the content. During the 4-day deployment, we collected overhead video data and on-display interaction logs. The analysis brings together quantitative and qualitative methods to understand how users approach and move around the display, how on-screen interaction compares in the two versions of the application, and how the display supports social interaction given its novel form factor.


Breaching the Smart Home (26 June, 2015)

Speaker: Chris Speed (University of Edinburgh)

This talk reflects upon the work of the Centre for Design Informatics across the Internet of Things. From toilet roll holders that operate as burglar alarms, to designing across the blockchain, the talk will use design case studies to explore both the opportunities that interoperability offers for designing new products, practices and markets, and also the dangers. In order to really explore the potential of an Internet of Things, ethical boundaries are stressed and sometimes breached. This talk will trace the line between imaginative designing with data and the exploitation of personal identities.

Prof Chris Speed is Chair of Design Informatics at the University of Edinburgh where his research focuses upon the Network Society, Digital Art and Technology, and The Internet of Things. 


Intro to the Singapore Institute of Technology & Interactive Computing Research Initiatives at SIT (25 June, 2015)

Speaker: Jeannie Lee

Established in 2009, the Singapore Institute of Technology (SIT) is Singapore's fifth and newest autonomous university. We will start with some background and information about the university, and then give an overview of potential HCI-related research initiatives and collaborations in the context of Singapore's healthcare, hospitality, creative and technology industries. Ideas and discussions are welcome!


Recruitment to research trials: Linking action with outcome (11 June, 2015)

Speaker: Graham Brennan (University of Glasgow)

Bio: Dr Graham Brennan is a Research Associate and Project Manager in the Institute of Health and Wellbeing with a specialisation in recruitment to behaviour change programmes at the University of Glasgow. He is interested in the impact of health behaviour change programmes on the health of the individual and society as well as the process of engagement and participation. More specifically, his work examines the process and mechanisms of engagement that affect recruitment.



FeedFinder: A Location-Mapping Mobile Application for Breastfeeding Women (04 June, 2015)

Speaker: Madeline Balaam (University of Newcastle)

Breastfeeding is positively encouraged across many countries as a public health endeavour. The World Health Organisation recommends breastfeeding exclusively for the first six months of an infant’s life. However, women can struggle to breastfeed, and to persist with breastfeeding, for a number of reasons, from technique to social acceptance. This paper reports on four phases of a design and research project, from sensitising user engagement and user-centred design to the development and in-the-wild deployment of a mobile phone application called FeedFinder. FeedFinder has been developed with breastfeeding women to support them in finding, reviewing and sharing public breastfeeding places with other breastfeeding women. We discuss how mobile technologies can be designed to support public health endeavours, and suggest that public health technologies are better aimed at communities and societies rather than individuals.

Dr Madeline Balaam is a lecturer in the School of Computing Science within Newcastle University. 



Analyzing online interaction using conversation analysis: Affordances and practices (14 May, 2015)

Speaker: Dr Joanne Meredith (University of Salford)

The aim of this paper is to show how conversation analysis – a method devised for spoken interaction – can be used to analyze online interaction. The specific focus of this presentation will be on demonstrating how the impact of the design features, or affordances, of an online medium can be analyzed using conversation analysis. I will use examples from a corpus of 75 one-to-one Facebook ‘chats’, collected using screen capture software, which I argue can provide us with additional information about participants’ real-time, lived experiences of online interaction. Through examining a number of interactional practices found in my data corpus, I will show how the analysis of real-life examples of online interaction can provide insights into how participants adapt their interactional practices to suit the affordances of the medium.

Jo Meredith is a Lecturer in Psychology at the University of Salford. Before joining the University of Salford, Jo was a Lecturer at the University of Manchester and completed her doctoral thesis at Loughborough University. She is interested in developing the use of conversation analysis for online interaction, as well as investigating innovative methods for collecting online data.  


Trainable Interaction Models for Embodied Conversational Agents (30 April, 2015)

Speaker: Mary Ellen Foster

Human communication is inherently multimodal: when we communicate with one another, we use a wide variety of channels, including speech, facial expressions, body postures, and gestures. An embodied conversational agent (ECA) is an interactive character -- virtual or physically embodied -- with a human-like appearance, which uses its face and body to communicate in a natural way. Giving such an agent the ability to understand and produce natural, multimodal communicative behaviour will allow humans to interact with such agents as naturally and freely as they interact with one another, enabling the agents to be used in applications as diverse as service robots, manufacturing, personal companions, automated customer support, and therapy.

To develop an agent capable of such natural, multimodal communication, we must first record and analyse how humans communicate with one another. Based on that analysis, we then develop models of human multimodal interaction and integrate those models into the reasoning process of an ECA. Finally, the models are tested and validated through human-agent interactions in a range of contexts.

In this talk, I will give three examples where the above steps have been followed to create interaction models for ECAs. First, I will describe how human-like referring expressions improve user satisfaction with a collaborative robot; then I will show how data-driven generation of facial displays affects interactions with an animated virtual agent; finally, I will describe how trained classifiers can be used to estimate engagement for customers of a robot bartender.

Bio: Mary Ellen Foster will join the GIST group as a Lecturer in July 2015. Her main research interest is embodied communication: understanding human face-to-face conversation by implementing and evaluating embodied conversational agents (such as animated virtual characters and humanoid robots) that are able to engage in natural, face-to-face conversation with human users. She is currently a Research Fellow in the Interaction Lab at the School of Mathematical and Computer Sciences at Heriot-Watt University in Edinburgh, and has previously worked in the Robotics and Embedded Systems Group at the Technical University of Munich and in the School of Informatics at the University of Edinburgh.  She received her Ph.D. in Informatics from the University of Edinburgh in 2007.


To Beep or Not to Beep? Comparing Abstract versus Language-Based Multimodal Driver Displays (02 April, 2015)

Speaker: Ioannis Politis

Abstract: Multimodal displays are increasingly being utilized as driver warnings. Abstract warnings, without any semantic association to the signified event, and language-based warnings are examples of such displays. This paper presents a first comparison between these two types, across all combinations of audio, visual and tactile modalities. Speech, text and Speech Tactons (a novel form of tactile warnings synchronous to speech) were compared to abstract pulses in two experiments. Results showed that recognition times of warning urgency during a non-critical driving situation were shorter for abstract warnings, highly urgent warnings and warnings including visual feedback. Response times during a critical situation were shorter for warnings including audio. We therefore suggest abstract visual feedback when informing drivers during a non-critical situation and audio in a highly critical one. Language-based warnings during a critical situation performed equally well as abstract ones, so they are suggested as less annoying vehicle alerts.


Situated Social Media Use: A Methodological Approach to Locating Social Media Practices and Trajectories (24 March, 2015)

Speaker: Alexandra Weilenmann (University of Gothenburg)

In this talk, I will present a few examples of methodological explorations of social media activities, trying to capture and understand them as located, situated practices. This methodological endeavour spans from analyzing patterns in big data feeds (here Instagram) to small-scale video-based ethnographic studies of user activities. A situated social media perspective involves examining how the production and consumption of social media are intertwined. Drawing upon our studies of social media use in cultural institutions, we show how visitors orient to their social media presence while attending to physical space during the visit, and how editing and sharing processes are shaped by the trajectory through the space. I will discuss the application and relevance of this approach for understanding social media and social photography in situ. I am happy to take comments and feedback on this approach, as we are currently working to develop it.

Alexandra Weilenmann holds a PhD in informatics and currently works at the Department of Applied IT, University of Gothenburg, Sweden. She has over 15 years' experience researching the use of mobile technologies, with a particular focus on adapting traditional ethnographic and sociological methods to enable the study of new practices. Previous studies include mobile technology use among hunters, journalists, airport personnel, professional drivers, museum visitors, teenagers and the elderly. Weilenmann has experience working in projects in close collaboration with stakeholders, both in IT development projects (e.g. Ricoh Japan) and with Swedish special interest organizations (e.g. the Swedish Institute of Assistive Technology). She has served on several boards dealing with the integration of IT in society, for example the Swedish Government’s Use Forum and the Swedish Governmental Agency for Innovation Systems (Vinnova), and as an expert for telephone company DORO.


Mobile interactions from the wild (19 March, 2015)

Speaker: Kyle Montague (Dundee)

Laboratory-based evaluations allow researchers to control for external factors that can influence participant interaction performance. Typically, these studies tailor situations to remove distraction and interruption, thus ensuring users’ attention on the task and relative precision in interaction accuracy. While highly controlled laboratory experiments provide clean measurements with minimal errors, interaction behaviors captured within natural settings differ from those captured within the laboratory. Additionally, laboratory-based evaluations impose time restrictions on user studies. Characteristically lasting no more than an hour at a time, they restrict the potential for capturing the performance changes that naturally occur throughout daily usage as a result of fatigue or situational constraints. These changes are particularly interesting when designing for mobile interactions where the environmental factors can pose significant constraints and complications on the users interaction abilities.

This talk will discuss recent work exploring mobile touchscreen interactions in the wild, involving participants with motor and visual impairments, sharing the successes and pitfalls of these approaches, and the creation of a new data collection framework to support future mobile interaction studies in the wild.


HCI in cars: Designing and evaluating user-experiences for vehicles (12 March, 2015)

Speaker: Gary Burnett (University of Nottingham)

Driving is an everyday task which is fundamentally changing, largely as a result of the rapid increase in the number of computing and communications-based technologies within/connecting vehicles. Whilst there is considerable potential for different systems (e.g. on safety, efficiency, comfort, productivity, entertainment etc.), one must always adopt a human-centred perspective.  This talk will raise the key HCI issues involved in the driving context and the effects on the design of the user-interface – initially aiming to minimise the likelihood of distraction. In addition, the advantages and disadvantages of different evaluation methods commonly employed in the area will be discussed. In the final part of the talk, issues will be raised for future vehicles, particularly considering the impact of increasing amounts of automation functionality, fundamentally changing the role of the human “driver” - potentially from that of vehicle controller periodically to one of system status monitor. Such a paradigm shift raises profound issues concerning the design of the vehicle HMI which must allow a user to understand the “system" and also to seamlessly forgo and regain control in an intuitive manner. 

Gary Burnett is Associate Professor in Human Factors in the Faculty of Engineering at the University of Nottingham. 


Generating Implications for Design (05 March, 2015)

Speaker: Corina Sas (Lancaster University)

A central tenet of HCI is that technology should be user-centric, with designs being based on social science findings about users. Nevertheless, a key challenge in interaction design is translating empirical findings into actionable ideas that inform design. Despite various design methods aiming to bridge this gap, such implications for design are still seen as problematic. However, there has been little exploration of what practitioners understand by implications for design, the functions of such implications, and the principles behind their creation. We report on interviews with twelve expert HCI design researchers, probing the roles and types of implications, their intended beneficiaries, and the process of generating and evaluating them. We synthesize different types of implications into a framework to guide the generation of implications. Our findings identify a broader range of implications than those described in ethnographic studies, capturing technologically implementable knowledge that generalizes to different settings. We conclude with suggestions about how we might reliably generate more actionable implications.

Dr. Sas is a Senior Lecturer in HCI, School of Computing and Communications, Lancaster University. Her research interests include human-computer interaction, interaction design, user experience, designing tools and interactive systems to support high level skill acquisition and training such as creative and reflective thinking in design, autobiographical reasoning, emotional processing and spatial cognition. Her work explores and integrates wearable bio sensors, lifelogging and memory technologies, and virtual reality.


Apache Cordova Tutorial (26 February, 2015)

Speaker: Mattias Rost

Mattias Rost will lead a two-hour, hands-on tutorial on Apache Cordova (http://cordova.apache.org/). Apache Cordova is a platform for building native mobile applications using HTML, CSS and JavaScript. Everyone welcome. Bring a laptop!
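
For those who want a head start, the standard Cordova CLI workflow looks roughly like this (assuming Node.js and the relevant platform SDKs are already installed):

    npm install -g cordova           # install the Cordova CLI
    cordova create hello com.example.hello HelloApp
    cd hello
    cordova platform add android     # or ios, browser, ...
    cordova run android              # build and launch on a device/emulator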


Blocks: A Tool Supporting Code-based Exploratory Data Analysis (12 February, 2015)

Speaker: Mattias Rost

Large-scale trials of mobile apps can generate a lot of log data. Logs contain information about the use of the apps. Existing support for analysing such log data includes mobile logging frameworks such as Flurry and Mixpanel, and more general visualisation tools such as Tableau and Spotfire. While these tools are great for giving a first glimpse of the content of the data and producing generic descriptive statistics, they are not great for drilling down into the details of the app at hand. In our own work we end up writing custom interactive visualisation tools for the application at hand, to get a deeper understanding of the use of the particular app. We have therefore developed a new type of tool that supports the practice of writing custom data analysis and visualisation code. We call it Blocks. In this talk I will describe what Blocks is, how Blocks encourages code writing, and how it supports the craft of log data analysis.
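
As an example of the kind of custom, code-based analysis Blocks is built around, the sketch below loads JSON event logs with pandas and splits each user's events into sessions separated by 30-minute gaps (an illustration of the practice, not Blocks itself; field names are hypothetical):

    # Sessionise app event logs: a new session starts after a 30-minute gap.
    import pandas as pd

    logs = pd.read_json("events.json")   # columns: user, event, time (hypothetical)
    logs["time"] = pd.to_datetime(logs["time"])
    logs = logs.sort_values(["user", "time"])

    gap = logs.groupby("user")["time"].diff() > pd.Timedelta(minutes=30)
    logs["session"] = gap.groupby(logs["user"]).cumsum()

    print(logs.groupby("user")["session"].nunique())   # sessions per user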

Mattias Rost is a researcher in Computing Science at the University of Glasgow. He is currently working on the EPSRC-funded Populations Programme.


The DeepTree Exhibit: Visualizing the Tree of Life to Facilitate Informal Learning (05 February, 2015)

Speaker: Florian Block (Harvard University)

More than 40% of Americans still reject the theory of evolution. This talk focuses on the DeepTree exhibit, a multi-user, multi-touch interactive visualization of the Tree of Life, developed to facilitate collaborative visual learning of evolutionary concepts. The talk will outline an iterative process in which a multi-disciplinary team of computer scientists, learning scientists, biologists, and museum curators worked together throughout design, development, and evaluation. The outcome of this process is a fractal-based tree layout that reduces visual complexity while being able to capture all life on earth; a custom rendering and navigation engine that prioritizes visual appeal and smooth fly-through; and a multi-user interface that encourages collaborative exploration while offering guided discovery. The talk will present initial evaluation outcomes illustrating that the large dataset encouraged free exploration, triggered emotional responses, and supported self-selected, multi-level engagement and learning.

Bio: Florian earned his PhD in 2010 at Lancaster University, UK (thesis titled “Reimagining Graphical User Interface Ecologies”). Florian’s work at SDR Lab has focused on using multi-touch technology and information visualization to facilitate discovery and learning in museums. He has worked on designing user interfaces for crowd interaction, developed the DeepTree exhibit, an interactive visualization of the tree of life (tolweb.org), and introduced methodological tools to quantify the engagement of fluid group configurations around multi-touch tabletops in museums. Ultimately, Florian is interested in how interactive technology can provide unique new opportunities for learning, in understanding which aspects of interactivity and collaboration contribute to learning, and in how to use large datasets to engage the general public in scientific discovery and learning.


Supporting text entry review mode and other lessons from studying older adult text entry (29 January, 2015)

Speaker: Emma Nicol and Mark Dunlop (Strathclyde)

As part of an EPSRC project on Text Entry for Older Adults we have run several workshops. A theme of supporting a "write then review" style of entry has emerged from these workshops. In this talk we will present the lessons from our workshops, along with our experimental keyboard that supports review mode by highlighting various elements of the text you have entered. An Android demo will be available for download during the talk.


Addressing the Fundamental Attribution Error of Design Using the ABCS (11 December, 2014)

Speaker: Gordon Baxter

Why is it that designers continue to be irritated when users struggle to make their apparently intuitive systems work? I will explain how we believe this perception is related to the fundamental attribution error, a concept from social psychology. The problem of understanding users is hard, though, because there is so much to learn and understand. I will go on to talk about the ABCS framework, a concept we developed to help organise and understand the information we know about users, and will use examples to illustrate how it can affect system design.

Gordon Baxter is a co-author of the book Foundations for Designing User-Centred Systems.


Augmenting and Evaluating Communication with Multimodal Flexible Interfaces (04 December, 2014)

Speaker: Eve Hoggan

This talk will detail an exploratory study of remote interpersonal communication using our ForcePhone prototype. This research focuses on the types of information that can be expressed between two people using the haptic modality, and the impact of different feedback designs. Based on the results of this study and my current work, I will briefly discuss the potential of deformable interfaces and multimodal interaction techniques to enrich communication for users with impairments. Then I will finish with an introduction to neurophysiological measurements of such interfaces.

Bio: Eve Hoggan is a Research Fellow at the Aalto Science Institute and the Helsinki Institute for Information Technology HIIT in Finland, where she is vice-leader of the Ubiquitous Interaction research group. Her current research focuses on the creation of novel interaction techniques, interpersonal communication and non-visual multimodal feedback. The aim of her research is to use multimodal interaction and varying form factors to create more natural and effortless methods of interaction between humans and technology, regardless of any situational or physical impairment. More information can be found at www.evehoggan.com.


Blocks: A Tool Supporting Code-based Exploratory Data Analysis (20 November, 2014)

Speaker: Mattias Rost

Large-scale trials of mobile apps can generate a lot of log data. Logs contain information about the use of the apps. Existing support for analysing such log data includes mobile logging frameworks such as Flurry and Mixpanel, and more general visualisation tools such as Tableau and Spotfire. While these tools are great for giving a first glimpse of the content of the data and producing generic descriptive statistics, they are not great for drilling down into the details of a particular app. In our own work we end up writing custom interactive visualisation tools for the application at hand, to get a deeper understanding of how that particular app is used. We have therefore developed a new type of tool that supports the practice of writing custom data analysis and visualisation code. We call it Blocks. In this talk I will describe what Blocks is, how Blocks encourages code writing, and how it supports the craft of log data analysis.

Mattias Rost is a researcher in Computing Science at the University of Glasgow. He is currently working on the EPSRC-funded Populations Programme. He was awarded his PhD by the University of Stockholm in 2013.


MyCity: Glasgow 2014 (13 November, 2014)

Speaker: Marilyn Lennon

During the summer of 2014, we (a small team of researchers at the University of Glasgow) designed, developed and deployed a smartphone-based game for the Commonwealth Games in Glasgow. The overall aim of our game was to get people to engage with Glasgow, find out more about the Commonwealth Games and, above all, walk more through 'gamification'. In reality, though, we had no time or money for a well-designed research study and a proper exploration of gamification and engagement; in fact, a huge amount of our effort was focused instead on testing in-app advertising models, understanding business models for 'wellness' apps, dealing with research and enterprise, and considering routes to commercialisation of our underlying platform and game. Come along and hear what we learned (good and bad) about deploying a health and wellness app in the 'real world'.

Dr Marilyn Lennon is a senior lecturer in Computer and Information Sciences at the University of Strathclyde.


Ms. Male Character - Tropes vs Women (23 October, 2014)

Speaker: YouTube Video - Anita Sarkeesian

In this session we will view and discuss a video from the Feminist Frequency website (http://www.feministfrequency.com). The video is outlined as follows: "In this episode we examine the Ms. Male Character trope and briefly discuss a related pattern called the Smurfette Principle. We’ve defined the Ms. Male Character Trope as: The female version of an already established or default male character. Ms. Male Characters are defined primarily by their relationship to their male counterparts via visual properties, narrative connection or occasionally through promotional materials."


Use of Eye Tracking to Rethink Display Blindness (16 October, 2014)

Speaker: Sheep Dalton

Public and situated display technologies are an increasingly common part of many urban spaces, including advertising displays on bus stops, interactive screens providing information to tourists or visitors to a shopping centre, and large screens in transport hubs showing travel information as well as news and advertising content. Situated display research has also been prominent in HCI, ranging from studies of community displays in cafes and village shops to large interactive games in public spaces and techniques to allow users to interact with different configurations of display and personal technologies.

Observational studies of situated displays have suggested that they are rarely looked at. Using a mobile eye tracker during a realistic shopping task in a shopping centre, we show that people look at displays more than might be expected given observational studies, but for very short times (a third of a second on average) and from quite far away. We characterize the patterns of eye movements that precede looking at a display and discuss some implications for the design of situated display technologies deployed in public spaces.


Economic Models of Search (02 October, 2014)

Speaker: Leif Azzopardi

Understanding how people interact when searching is central to the study of Interactive Information Retrieval (IIR). Most of the prior work has been either conceptual, observational or empirical. While this has led to numerous insights and findings regarding the interaction between users and systems, the theory has lagged behind. In this talk, I will first provide an overview of the typical IIR process. Then I will introduce an economic model of search based on production theory. This initial model is then extended to incorporate other variables that affect the interaction between the user and the search engine. The refined model is more realistic, provides a better description of the IIR process and enables us to generate eight interaction-based hypotheses regarding search behavior. To validate the model, I will show how the observed search behaviors from an empirical study with thirty-six participants were consistent with the theory. This work not only describes a concise and compact representation of search behavior, but also provides a strong theoretical basis for future IIR research. The modeling techniques used are also more generally applicable to other situations involving Human Computer Interaction, and could be helpful in understanding many other scenarios.

This talk is based on the paper "Modeling Interaction with Economic Models of Search", which received an Honorable Mention at ACM SIGIR 2014; see: http://dl.acm.org/citation.cfm?id=2609574
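
As a hedged illustration of what a production-theoretic model of search can look like (the functional forms below are illustrative assumptions, e.g. a Cobb-Douglas production function, and not necessarily the exact model in the paper), gain can be modelled as a function of the number of queries Q and the number of assessments per query A, with the searcher assumed to minimise interaction cost for a desired level of gain:

g(Q, A) = k \, Q^{\alpha} A^{\beta}   % gain produced by Q queries with A assessments each
C(Q, A) = c_q Q + c_a Q A             % cost: c_q per query, c_a per assessment
\min_{Q, A} \; C(Q, A) \quad \text{subject to} \quad g(Q, A) = g^{*}

Interaction-based hypotheses then follow from how the cost-minimising mix of querying and assessing shifts as the cost and gain parameters change.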


CANCELLED Instrumental Interaction in Multisurface Environments (25 September, 2014)

Speaker: Michel Beaudouin-Lafon
This talk will illustrate the principles and applications of instrumental interaction, in particular in the context of the WILD multi-surface environment.

Unfortunately this talk has been cancelled.


Using degraded MP3 quality to encourage a health improving walking pace: BeatClearWalker (18 September, 2014)

Speaker: Andreas Komninos

Promotion of walking is integral to improving public health for many sectors of the population. National governments and health authorities now widely recommend a total daily step target (typically 7,000–10,000 steps/day). Meeting this target can provide considerable physical and mental health benefits and is seen as a key target for reducing national obesity levels and improving public health. However, to optimise the health benefits, walking should be performed at a “moderate” intensity – often defined as 3 times resting metabolic rate, or 3 METs. While there are numerous mobile fitness applications that monitor distance walked, none directly target the pace, or cadence, of walkers.

BeatClearWalker is a fitness application for smartphones, designed to help users learn to walk at a moderate pace (monitored via walking cadence, in steps/min) and to encourage maintenance of that cadence. The application features a music player with a linked pedometer. If the user’s target walking cadence is not being reached, BeatClearWalker applies real-time audio effects to the music. This provides an immersive and intuitive application that can easily be integrated into everyday life, as it allows users to walk while listening to their own music and encourages eyes-free interaction with the device.

This talk introduces the application, its design and evaluation. Results show that using our degraded music decreases the number of below-cadence steps and, furthermore, that the effect can persist when the degradation is stopped.
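
As a hedged sketch of the core logic described above (the function names, sliding window and cadence threshold are assumptions, not the app's actual code), cadence can be estimated from recent pedometer steps and the music degraded whenever the walker falls below target:

const TARGET_CADENCE = 115;      // steps/min for a "moderate" pace; assumed value
const stepTimes: number[] = [];  // timestamps (ms) of recent steps from the pedometer

// Called once per detected step.
function onStep(nowMs: number): void {
  stepTimes.push(nowMs);
  // Keep a 10-second sliding window of steps.
  while (stepTimes.length > 0 && nowMs - stepTimes[0] > 10_000) stepTimes.shift();
  const cadence = stepTimes.length * 6;  // steps per 10 s -> steps per minute
  setAudioDegradation(cadence < TARGET_CADENCE);
}

// Placeholder: the real app applies real-time audio effects to the user's own
// music; here we only toggle a flag.
function setAudioDegradation(enabled: boolean): void {
  console.log(enabled ? "degrading playback" : "full quality");
}

onStep(Date.now());  // example call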


GIST Seminar (Automotive UI / Mobile HCI) (11 September, 2014)

Speaker: Alex Ng and Ioannis Politis
Ioannis and Alex will present their papers from Automotive UI and MobileHCI.

Speaker: Ioannis Politis
Title: Speech Tactons Improve Speech Warnings for Drivers

This paper describes two experiments evaluating a set of speech and tactile driver warnings. Six speech messages of three urgency levels were designed, along with their tactile equivalents, Speech Tactons. These new tactile warnings retained the rhythm of speech and used different levels of roughness and intensity to convey urgency. The perceived urgency, annoyance and alerting effectiveness of these warnings were evaluated. Results showed that bimodal (audio and tactile) warnings were rated as more urgent, more annoying and more effective compared to unimodal ones (audio or tactile). Perceived urgency and alerting effectiveness decreased along with the designed urgency, while perceived annoyance was lowest for warnings of medium designed urgency. In the tactile modality, ratings varied less as compared to the audio and audiotactile modalities. Roughness decreased and intensity increased ratings for Speech Tactons in all the measures used. Finally, Speech Tactons produced acceptable recognition accuracy when tested without their speech counterparts. These results demonstrate the utility of Speech Tactons as a new form of tactile alert while driving, especially when synchronized with speech.

Speaker: Alex Ng
Title: Comparing Evaluation Methods for Encumbrance and Walking on Interaction with Touchscreen Mobile Devices

In this talk, I will be presenting our accepted paper at this year’s MobileHCI. The paper compares two mobile evaluation methods, walking on a treadmill and walking on the ground, to evaluate the effects of encumbrance (holding objects during interaction with mobile devices) while the preferred walking speed (PWS) is controlled. We will discuss the advantages and limitations of each evaluation method when examining the impact of encumbrance.


GIST Talk - Accent the Positive (10 April, 2014)

Speaker: Alistair Edwards

The way people speak tells us a lot about their origins – geographical and social – but when someone can only speak with the aid of an artificial voice (such as Stephen Hawking), conventional expectations are subverted. The ultimate aim of most speech synthesis research is more human-sounding voices, yet the most commonly used voice, DECtalk, is quite robotic. Why is this – and is a human voice always appropriate?

This seminar will explore some of the limitations and possibilities of speech technology.


GIST Talk - Socially Intelligent Sensing Systems (04 February, 2014)

Speaker: Dr Hayley Hung

One of the fundamental questions of computer science is how machines can best serve people. In this talk, I will focus on how automated systems can achieve this by being aware of people as social beings. So much of our lives revolves around face-to-face communication. It affects our relationships with others, the influence they have over us, and how this can ultimately transform into decisions that affect a single person or many more. However, we understand relatively little about how to automate the perception of social behaviour, and recent research findings only touch the tip of the iceberg.

In this talk, I will describe some of the research I have carried out to address this gap, presenting my work on devising models to automatically interpret face-to-face human social behaviour using cameras, microphones, and wearable sensors. This includes problems such as automatically estimating who is dominating a conversation, or whether two people are attracted to each other. I will highlight the challenges facing this fascinating research problem and the open research questions that remain.

Bio: Hayley Hung is an Assistant Professor and Delft Technology Fellow in the Pattern Recognition and Bioinformatics group at the Delft University of Technology in the Netherlands. Before that, she held a Marie Curie Intra-European Fellowship at the Intelligent Systems Lab at the University of Amsterdam, working on devising models to estimate various aspects of human behaviour in large social gatherings. Between 2007 and 2010, she was a post-doctoral researcher at the Idiap Research Institute in Switzerland, working on methods to automatically estimate human interactive behaviour in meetings, such as dominance, cohesion and deception. She obtained her PhD in Computer Vision from Queen Mary University of London, UK in 2007 and her first degree from Imperial College, UK in Electrical and Electronic Engineering.


GIST Talk - Passive Brain-Computer Interfaces for Automated Adaptation and Implicit Control in Human-Computer Interaction (31 January, 2014)

Speaker: Dr Thorsten Zander

Over the last three decades, Brain-Computer Interfaces (BCIs) have been investigated extensively as a means of interaction. While most research has aimed at the design of supportive systems for severely disabled persons, the last decade has shown a trend towards applications for the general population. For users without disabilities, a specific type of BCI, the passive Brain-Computer Interface (pBCI), has shown high potential for improving Human-Machine and Human-Computer Interaction. In this seminar I will discuss the categorization of BCI research in which we introduced the idea of pBCIs in 2008, and potential areas of application. Specifically, I will present several studies providing evidence that pBCIs can have a significant effect on the usability and efficiency of given systems. I will show that the user's situational interpretation, intention and strategy can be detected by pBCIs. This information can be used to adapt the technical system automatically during interaction and to enhance the performance of the human-machine system. From the perspective of pBCIs a new type of interaction emerges, based on implicit control. Implicit Interaction aims at controlling a computer system through behavioral or psychophysiological aspects of user state, independently of any intentionally communicated commands. This introduces a new type of Human-Computer Interaction which, in contrast to most forms of interaction implemented nowadays, does not require the user to explicitly communicate with the machine. Users can focus on understanding the current state of the system and developing strategies for optimally reaching the goal of the given interaction. Based on information extracted by a pBCI and the given context, the system can adapt automatically to the current strategies of the user. In a first study, a proof of principle is given by implementing implicit control to guide simple cursor movements in a 2D grid to a target. The results of this study clearly indicate the high potential of Implicit Interaction and introduce a new range of applications for passive Brain-Computer Interfaces.
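
As a purely hypothetical sketch of implicit control in the spirit of the proof-of-principle above (the signal names and update rule are assumptions, not the study's implementation), a passive BCI that detects error-related EEG responses can steer a cursor without any explicit command:

type Dir = "up" | "down" | "left" | "right";
const dirs: Dir[] = ["up", "down", "left", "right"];
const weights = new Map<Dir, number>(dirs.map(d => [d, 1] as [Dir, number]));

// Stand-in for the pBCI classifier: a real system would report whether an
// error-related EEG response followed the last cursor move.
function pbciDetectedError(): boolean {
  return Math.random() < 0.3;
}

// Sample a movement direction proportionally to its current weight.
function sampleDirection(): Dir {
  const total = [...weights.values()].reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (const d of dirs) {
    r -= weights.get(d)!;
    if (r <= 0) return d;
  }
  return dirs[dirs.length - 1];
}

// One interaction cycle: move the cursor, then use the implicit error signal
// to penalise directions the user's brain "rejected".
function step(): void {
  const d = sampleDirection();  // grid/cursor update omitted
  if (pbciDetectedError()) {
    weights.set(d, Math.max(0.1, weights.get(d)! * 0.5));
  }
}

step();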


GIST Talk - Mindless Versus Mindful Interaction (30 January, 2014)

Speaker: Yvonne Rogers

We are increasingly living in our digital bubbles. Even when physically together – as families and friends in our living rooms, outdoors and public places - we have our eyes glued to our own phones, tablets and laptops. The new generation of ‘all about me’ health and fitness gadgets, wallpapered in gamification, is making it worse. Do we really need smart shoes that tell us when we are being lazy and glasses that tell us what we can and cannot eat? Is this what we want from technology – ever more forms of digital narcissism, virtual nagging and data addiction? In contrast, I argue for a radical rethink of our relationship with future digital technologies. One that inspires us, through shared devices, tools and data, to be more creative, playful and thoughtful of each other and our surrounding environments.


GIST Talk - Designing Hybrid Input Paradigms (16 January, 2014)

Speaker: Abigail Sellen

Visions of multimodal interaction with computers are as old as the field of HCI itself: by adding voice, gesture, gaze and other forms of input, the hope is that engaging with computers might be more efficient, expressive and natural. Yet it is only in the last decade that the dominance of multi-touch and the rise of gesture-based interaction are radically altering the ways we interact with computers. On the one hand these changes are inspirational and open up the design space; on the other hand, they have caused fragmentation in interface design and added complexity for users. Many of these complexities are caused by layering new forms of input on top of existing systems and practices. I will discuss our own recent adventures in trying to design and implement these hybrid forms of input, and highlight the challenges and opportunities for future input paradigms. In particular, I conclude that the acid test for any of these new techniques is testing in the wild. Only then can we really design for the diversity of people and of experiences.


GIST Seminar (28 November, 2013)

Speaker: Graham Wilson/Ioannis Politis
Perception of Ultrasonic Haptic Feedback / Evaluating Multimodal Driver Displays under Varying Situational Urgency

Two talks this week from members of the GIST group. 

Graham Wilson: Perception of Ultrasonic Haptic Feedback

Abstract: Ultrasonic haptic feedback produces tactile sensations in mid-air through acoustic radiation pressure. It is a promising means of providing 3D tactile sensations in open space without the user having to hold an actuator. However, research is needed to understand the basic characteristics of perception of this new feedback medium, and so how best to utilize ultrasonic haptics in an interface. This talk describes the technology behind producing ultrasonic haptic feedback and reports two experiments on fundamental aspects of tactile perception: 1) localisation of a static point and 2) the perception of motion. Traditional ultrasonic haptic devices are large and fixed to a horizontal surface, limiting the interaction and feedback space. To expand the interaction possibilities, the talk also discusses the feasibility of a mobile, wrist-mounted device for gestural interaction throughout a larger space. 

Ioannis Politis: Evaluating Multimodal Driver Displays under Varying Situational Urgency

Abstract: Previous studies have investigated audio, visual and tactile driver warnings, indicating the importance of conveying the appropriate level of urgency to the drivers. However, these modalities have never been combined exhaustively and tested under conditions of varying situational urgency, to assess their effectiveness both in the presence and absence of critical driving events. This talk will describe an experiment evaluating all multimodal combinations of such warnings under two contexts of situational urgency: a lead car braking and not braking. The results showed that responses were quicker when more urgent warnings were used, especially in the presence of a car braking. Participants also responded faster to the multimodal as opposed to unimodal signals. Driving behaviour improved in the presence of the warnings and the absence of a car braking. These results highlight the utility of multimodal displays to rapidly and effectively alert drivers and demonstrate how driving behaviour can be improved by such signals.


[GIST] Talk -- The Value of Visualization for Exploring and Understanding Data (11 July, 2013)

Speaker: Prof John Stasko

Investigators have an ever-growing suite of tools available for analyzing and understanding their data. While techniques such as statistical analysis, machine learning, and data mining all have benefits, visualization provides an additional unique set of capabilities. In this talk I will identify the particular advantages that visualization brings to data analysis beyond other techniques, and I will describe the situations when it can be most beneficial. To help support these arguments, I'll present a number of provocative examples from my own work and others'. One particular system will demonstrate how visualization can facilitate exploration and knowledge acquisition from a collection of thousands of narrative text documents, in this case, reviews of wines from Tuscany.


Information Visualization for Knowledge Discovery (13 June, 2013)

Speaker: Professor Ben Shneiderman, University of Maryland - College Park
This talk reviews growing commercial success stories such as www.spotfire.com and www.smartmoney.com/marketmap, as well as emerging products such as www.hivegroup.com.


[GIST] Talk -- Shape-changing Displays: The next revolution in display technology? (28 March, 2013)

Speaker: Dr Jason Alexander

Shape-changing interfaces physically mutate their visual display surface to better represent on-screen content, provide an additional information channel, and facilitate tangible interaction with digital content. This talk will preview the current state of the art in shape-changing displays, discuss our current work in this area, and explore the grand challenges in this field. The talk will include a hardware demonstration of one such shape-changing device, a Tilt Display.

Bio: Jason is a lecturer in the School of Computing and Communications at Lancaster University. His primary research interests are in Human-Computer Interaction, with a particular interest in developing the next generation of interaction techniques. His recent research is hardware-driven, combining tangible interaction and future display technologies. He was previously a post-doctoral researcher in the Bristol Interaction and Graphics (BIG) group at the University of Bristol. Before that he was a Ph.D. student in the HCI and Multimedia Lab at the University of Canterbury, New Zealand. More information can be found at http://www.scc.lancs.ac.uk/~jason/.


GIST Seminar: A Study of Information Management Processes across the Patient Surgical Pathway in NHS Scotland (14 March, 2013)

Speaker: Matt-Mouley Bouamrane

Preoperative assessment is a routine medical screening process to assess a patient's fitness for surgery. Systematic reviews of the evidence have suggested that existing practices are not underpinned by a strong evidence base and may be sub-optimal.

We conducted a study of information management processes across the patient surgical pathway in NHS Scotland, using the Medical Research Council Complex Intervention Framework and mixed-methods.

Most preoperative services were created in the last 10 years to reduce late theatre cancellations and increase the ratio of day-case surgery. Two health boards have set up electronic preoperative information systems, and stakeholders at these services reported overall improvements in processes. General Practitioner (GP) referrals are now made electronically, and GPs considered electronic referrals a substantial improvement. GPs reported minimal interaction with preoperative services. Post-operative discharge information was often considered unsatisfactory.

Conclusion: Although substantial progress has been made in recent years towards improving information transfer and sharing among care providers within the NHS surgical pathway, there remains considerable scope for improvement at the interfaces between services.


MultiMemoHome Project Showcase (19 February, 2013)

Speaker: various

This event is the final showcase of research and prototypes developed during the MultiMemoHome Project (funded by EPSRC). 


GIST Seminar: Understanding Visualization: A Formal Approach using Category Theory and Semiotics (31 January, 2013)

Speaker: Dr Paul Vickers

We combine the vocabulary of semiotics and category theory to provide a general framework for understanding visualization in practice, including: relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. Further work will explore how the framework may be used to compare visualizations, especially those of different modalities. This may offer predictive potential before expensive user studies are carried out.



RESEARCH GROUPS IN GIST