Human Computer Interaction (GIST)

The Human Computer Interaction research section is also known as GIST (Glasgow Interactive SysTems). In our research we create and use novel, interactive systems to better understand, entertain, protect and support humans in their everyday lives. GIST is a research section made up of several research groups.

A lot of the research we undertake is collaborative and interdisciplinary. We work closely with other groups in Computing Science as well as other schools, including Psychology and the Institute of Health and Wellbeing. We also work closely with other world-leading universities and many private and public sector organisations (recently: Aldebaran Robotics, Pufferfish, Bang and Olufsen, Freescale Semiconductor Inc., Glasgow City Council, Dynamically Loaded and Cisco Systems).

GIST seminars are usually held on Thursdays. Everyone from the University of Glasgow and beyond is welcome to attend these talks - see the Events tab for more details. We are happy to hear from anyone that would like to visit us to give a talk.

The GIST coordinators are Dong-Bach Vo and John Rooksby.

Academic Staff:

Prof Stephen A Brewster [staff page] [personal page] [wikipedia entry]

Prof Matthew J Chalmers [staff page] [personal page]

Dr Helen C Purchase [staff page] [personal page]

Dr Karen Renaud [staff page] [personal page]

Dr Alessandro Vinciarelli [staff page] [personal page]

Dr Julie Williamson [staff page] [personal page]

Dr Mary Ellen Foster [staff page] [personal page]

Dr Ron Poet [staff page] [personal page]

 

Research Staff:

Dr Parvin Asadzadeh [staff page] [personal page]

Dr Euan Freeman [staff page] [personal page]

Dr Alistair Morrison [staff page] [personal page]

Dr John Rooksby [staff page] [personal page]

Dr Mattias Rost [staff page] [personal page]

Alexander Ng [staff page] [personal page]

Dr Dong-Bach Vo [staff page] [personal page]

Dr Graham Wilson [staff page] [personal page]

Dr Doudou Tang [staff page] [personal page]

Dr Yolanda Vazquez-Alvarez [staff page] [personal page]

 

Research Students:

Mark McGill [staff page] [personal page]

Matthew Jamieson [staff page] [personal page]

Gözel Shakeri [staff page] [personal page]

Rui Huan [staff page] [personal page]

Claire McCallum [staff page] [personal page]

Nora Alkaldi [staff page] [personal page]

Research Projects (Current):

MuMMER: MultiModal Mall Entertainment Robot (2016 – 2020) – Dr Mary Ellen Foster. Funded by EU Horizon 2020. 

Populations: A Software Populations Approach to UbiComp Systems Design (2011 – 2016) – Prof Matthew Chalmers. Funded by EPSRC.

EuroFIT: Social innovation to improve physical activity and sedentary behaviour through elite European football (2013-2017) – Prof Matthew Chalmers. Funded by EU FP7.

ABBI: Audio Bracelet for Blind Interaction (2014 – 2017) – Prof Stephen Brewster. Funded by EU FP7.

HAPPINESS: Haptic Printed Patterned Interfaces for Sensitive Surfaces (2015 – 2018) – Prof Stephen Brewster. Funded by EU Horizon 2020. 

SAM: Automated Attachment Analysis Using the School Attachment Monitor (2015 - 2018) - Prof Stephen Brewster. Funded by EPSRC.

 

 


Past Events

GIST Seminar: Experiments in Positive Technology: the positives and negatives of meddling online (16 March, 2017)

Speaker: Dr. Lisa Tweedie

Experiments in Positive Technology: The positives and negatives of meddling online 

This talk is going to report on a few informal action research experiments I have conducted over a period of seven years using social media. Some have been more successful than others. The focus behind each is "How do we use technology/social media to make positive change?"

I will briefly discuss four interventions and what I have learnt from them.

A) Chile earthquake emergency response via Twitter and WordPress 

B) Make Malmesbury Even Better - Community Facebook page

C) Langtang lost and found - Facebook support group for families involved in the Langtang earthquake, Nepal

D) I am Amira - educational resources for British schools about the refugee crisis downloaded by 4000+ schools from Times Educational Supplement Resources online (TES)

www.iamamira.wordpress.co.uk

Three of these are still ongoing projects. I will make the case that these projects have all initiated positive change, but that each also has its darker side. I will discuss how each has affected me personally.

I will conclude with how I plan to carry forward my findings into the education arena. My current research thoughts are around education, play and outdoor learning.

 

 

Lisa started her academic life as a psychologist (via engineering product design at South Bank Poly), gaining a BSc (Hons) in Human Psychology from Aston University. She was then Phil Barnard's RA at the Applied Psychology Unit in Cambridge (MRC APU), researching low-level cognitive models for icon search. She soon realised she wanted to look at the world in a more pragmatic way.

Professor Bob Spence invited her to do a PhD in the visualisation of data at Imperial College, London (Dept of EEE). This was the start of a successful collaboration that continues to this day. She presented her work internationally at CHI, PARC (Palo Alto) and Apple (Cupertino), amongst other places. Lisa's visualisation work is still taught in computer science courses worldwide. She did a couple of years of postdoctoral work at Imperial, developing visual tools to help problem holders create advanced statistical models (generalised linear models - Nelder - EPSRC), but felt industry calling. She then spent six happy years working for Nortel and Oracle as part of development teams. She worked on telephone network fault visualisations, managing vast quantities of live telephone fraud data generated by genetic matching algorithms (SuperSleuth), and interactive UML models of code (Oracle JDeveloper). She is named on two patents from this work.

Once Lisa had her second child she chose to leave corporate life. She had a teaching fellowship at Bath University in 2005. In 2007 she started a consultancy based around "positive technology". She worked as a UX mentor with over 50 companies remotely via Skype from her kitchen. Many of these were start-ups in Silicon Valley. In 2011 she was awarded an honorary research fellowship at Imperial College.

Four years ago she trained as a secondary maths teacher and has a huge interest in special needs. She tutors students of all abilities and age groups in maths, English and reading each week. Most recently she returned to the corporate world, working as a Senior User Experience Architect for St James Place. On 5th January 2017 she became self-employed and is looking to return to the academic research arena with a focus on education, play and outdoor learning. Action research is where she wants to be.

Lisa is also a community activist, a hands-on parent to three lively children and a disability rights campaigner. She has lived with Ehlers-Danlos Syndrome, a rare genetic connective tissue disorder, her whole life. She is also a keen photographer, iPad artist (www.tweepics.wordpress.co.uk), writer and maker, and has run numerous book clubs.

https://www.linkedin.com/in/lisatweedie/

lisa@wheatridge.co.uk

 

GIST Seminar: Success and failure in ubiquitous computing, 30 years on. (23 February, 2017)

Speaker: Prof. Lars Erik Holmquist

Success and failure in ubiquitous computing, 30 years on.
 
It is almost three decades since Mark Weiser coined the term "ubiquitous computing" at Xerox PARC around 1988. The paper The Computer for the 21st Century was published in 1991, and the first Ubiquitous and Handheld Computing (now UBICOMP) conference was organized in 1999. It is clear that some of the ubicomp vision has come to pass (e.g. ubiquitous handheld computing terminals) whereas others have failed (arguably, any notion of “calm technology” and “computers that get out of the way of the work”!). I’d like to take this seminar to discuss some of my top picks for success and failure in ubicomp, and I invite participants to come do the same!
Homework: Think of at least one ubicomp success and one ubicomp failure, as they relate to the various visions of ubiquitous/pervasive/invisible/etc. computing!
 
Lars Erik Holmquist is newly appointed Professor of Innovation at Northumbria University, Department of Design. He has worked in ubicomp and design research for 20 years, including as co-founder of The Mobile Life Centre in Sweden and Principal Scientist at Yahoo! Research in Silicon Valley. His book on how research can lead to useful results, "Grounded Innovation: Strategies for Developing Digital Products", was published by Morgan Kaufmann in 2012. Before joining Northumbria, he spent two years in Japan where he was a Guest Researcher at the University of Tokyo, learned Japanese, wrote a novel about augmented reality and played in the garage punk band Fuzz Things.

GIST Seminar: Understanding the usage of onscreen widgets and exploring ways to design better widgets for different contexts (16 February, 2017)

Speaker: Dr. Christian Frisson

Interaction designers and HCI researchers are expected to have skills for both creating and evaluating systems and interaction techniques. For evaluation phases, they often need to collect information regarding usage of applications and devices to interpret quantitative and behavioural aspects of users or to provide design guidelines. Unfortunately, it is often difficult to collect users' behaviours in real-world scenarios from existing applications due to the unavailability of scripting support and access to the source code. For creation phases, they often have to comply with constraints imposed by the interdisciplinary team they are working with and by the diversity of the contexts of usage. For instance, the car industry may decide that dashboards are easier to manufacture and service with controls printed on flat or curved surfaces rather than mounted as physical controls, even though the body of research has shown that the latter are more efficient and safer for drivers.

This talk will first present InspectorWidget, an open-source suite which tracks and analyses users' behaviours with existing software and programs. InspectorWidget covers the whole pipeline of software analysis, from logging input events to visual statistics, through browsing and programmable annotation. To achieve this, InspectorWidget combines low-level event logging (e.g. mouse and keyboard events) and high-level screen features (e.g. interface widgets) captured through computer vision techniques. The goal is to provide a tool for designers and researchers to understand users and develop more useful interfaces for different devices.
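To give a concrete flavour of the low-level logging half of such a pipeline (the computer-vision half is beyond a short sketch), here is a minimal input-event logger in Python. It uses the third-party pynput library and a JSON-lines log format; both are illustrative choices, not InspectorWidget's actual implementation.

    # Minimal input-event logger: one JSON object per event, timestamped.
    # pynput and the log format are illustrative; InspectorWidget itself
    # also captures the screen and extracts widgets via computer vision.
    import json
    import time
    from pynput import mouse, keyboard

    log = open("events.jsonl", "a")

    def record(kind, **data):
        log.write(json.dumps({"t": time.time(), "kind": kind, **data}) + "\n")
        log.flush()

    def on_click(x, y, button, pressed):
        record("click", x=x, y=y, button=str(button), pressed=pressed)

    def on_press(key):
        record("key", key=str(key))

    mouse.Listener(on_click=on_click).start()
    with keyboard.Listener(on_press=on_press) as listener:
        listener.join()  # block until interrupted (e.g. Ctrl+C)

A log in this form can then be replayed against screen recordings so that higher-level widget events can be recovered by annotation.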

The talk will then discuss an ongoing project which explores ways to design haptic widgets, such as buttons, sliders and dials, for touchscreens and touch-sensitive surfaces on in-car centre consoles.  Touchscreens are now commonly found in cars, replacing the need for physical buttons and switchgear but there are safety concerns regarding driver distraction due to the loss of haptic feedback.  We propose the use of interactive sound synthesis techniques to design and develop effective widgets with haptic feedback capabilities for in-car touchscreens to reduce visual distractions on the driver. 

 

Christian Frisson graduated with an MSc in "Art, Science, Technology (AST)" from Institut National Polytechnique de Grenoble (INPG) and the Association for the Creation and Research on Expression Tools (ACROE), France, including a visiting research internship at the MusicTech group, McGill University, Montreal, Québec, Canada, in 2006. In February 2015, he obtained his PhD degree with Professor Thierry Dutoit at the University of Mons (UMONS), numediart Institute, Belgium, on designing interaction for browsing media collections (by similarity). Since June 2016, he has been a postdoc at Inria Lille, Mjolnir team, designing vibrotactile feedback for dashboard widgets within the H2020 EU project HAPPINESS, whose partners include Alexander Ng and Stephen Brewster from the Multimodal Interaction Group of the University of Glasgow.

GIST Seminar: Sharing emotions in collaborative virtual environments (19 January, 2017)

Speaker: Arindam Dey

Interfaces for collaborative tasks, such as multiplayer games, can enable effective remote collaboration and enjoyable gameplay. However, in these systems the emotional states of the users are often not communicated properly due to the remoteness. In this talk, I will present two recent projects from the Empathic Computing Lab (UniSA).
In the first project, we investigated, for the first time, the effects of sharing the emotional state of one collaborator with the other during an immersive Virtual Reality (VR) gameplay experience. We created two collaborative immersive VR games that display the real-time heart rate of one player to the other. The two games elicited different emotions, one joyous and the other scary. We tested the effects of visualizing heart-rate feedback in comparison with conditions where such feedback was absent. Based on subjective feedback, we noticed clear indications of higher positive affect, collaborative communication, and subjective preference when the heart-rate feedback was shown. The games had significant main effects on the overall emotional experience.
In the second project, we explored the effect of different VR games on human emotional responses measured physiologically and subjectively in a within-subjects user study. In the study, six different types of VR experiences were experienced by 11 participants, and nine emotions were elicited and analyzed from physiological signals. The results indicate that there are primarily three emotions that are dominant when experiencing VR, and the same emotions were elicited in all experiences we tested. Both subjective and objective measurements of emotions showed similar results, but subjectively, participants reported experiencing emotions more strongly than they did objectively.

Health technologies for all: designing for use "in the wild" (23 November, 2016)

Speaker: Prof. Ann Blandford

Abstract: There is a plethora of technologies for helping people manage their health and wellbeing: from self-care of chronic conditions (e.g. renal disease, diabetes) and palliative care at end of life through to supporting people in developing mindfulness practices or managing weight or exercise. In some cases, digital health technologies are becoming consumer products; in others, they remain under the oversight of healthcare professionals but are increasingly managed by lay people. How (and whether) these technologies are used depends on how they fit into people’s lives and address people’s values. In this talk, I will present studies on how and why people adopt digital health technologies, the challenges they face, how they fit them into their lives, and how to identify design requirements for future systems. There is no one-size-fits-all design solution for any condition: people have different lifestyles, motivations and needs. Appropriate use depends on fitness for purpose. This requires either customisable solutions or solutions that are tailored to different user populations.

Biography: Ann Blandford is Professor of Human–Computer Interaction at University College London and Director of the UCL Institute of Digital Health. Her expertise is in human factors for health technologies, and particularly how to design systems that fit well in their context of use. She is involved in several research projects studying health technology design, patient safety and user experience. She has published widely on the design and use of interactive health technologies, and on how technology can be designed to better support people’s needs.

Implementing Ethics for a Mobile App Deployment (17 November, 2016)

Speaker: John Rooksby

In this talk I’ll discuss a paper I’ll be presenting at OzCHI 2016.

Abstract: "This paper discusses the ethical dimensions of a research project in which we deployed a personal tracking app on the Apple App Store and collected data from users with whom we had little or no direct contact. We describe the in-app functionality we created for supporting consent and withdrawal, our approach to privacy, our navigation of a formal ethical review, and navigation of the Apple approval process. We highlight two key issues for deployment-based research. Firstly, that it involves addressing multiple, sometimes conflicting ethical principles and guidelines. Secondly, that research ethics are not readily separable from design, but the two are enmeshed. As such, we argue that in-action and situational perspectives on research ethics are relevant to deployment-based research, even where the technology is relatively mundane. We also argue that it is desirable to produce and share relevant design knowledge and embed in-action and situational approaches in design activities.”

Authors: John Rooksby, Parvin Asadzadeh, Alistair Morrison, Claire McCallum, Cindy Gray, Matthew Chalmers. 

Towards a Better Integration of Information Visualisation and Graph Mining (22 September, 2016)

Speaker: Daniel Archambault

As we enter the big data age, the fields of information visualisation and data mining need to work together to tackle problems at scale. Both of these areas provide complementary techniques for big data. Machine learning provides automatic methods that quickly summarise very large data sets which would otherwise be incomprehensible. Information visualisation provides interfaces that leverage human creativity and can facilitate the discovery of unanticipated patterns. This talk presents an overview of some of the work conducted in graph mining - an area of data mining that deals specifically with network data. Subsequently, the talk considers synergies between these two areas in order to scale to larger data sets, and examples of projects are presented. We conclude with a discussion of how information visualisation and data mining can collaborate effectively in the future.

Logitech presentation (22 August, 2016)

Speaker: Logitech staff

Logitech are visiting the school on Monday. As part of the visit they are going to talk about the company and their research interests. If you want to come along, it will be at 11:00 in F121 and will last about 30-40 mins.

 

Human-Pokemon Interaction (and other challenges for designing mixed-reality entertainment) (28 July, 2016)

Speaker: Prof Steve Benford

It’s terrifically exciting to see the arrival of Pokémon Go as the first example of a mixed reality game to reach a mass audience. Maybe we are witnessing the birth of a new game format? As someone who has been involved in developing and studying mixed reality entertainment for over fifteen years now, it’s also unsurprising to see people getting hot and bothered about how such games impact on the public settings in which they are played – is Pokémon Go engaging, healthy and social on the one hand, or inappropriate, annoying and even dangerous on the other?

My talk will draw on diverse examples of mixed reality entertainment – from artistic performances and games to museum visits and amusement rides (and occasionally Pokémon Go too) – to reveal the opportunities and challenges that arise when combining digital content with physical experience. In response, I will introduce an approach to creating engaging, coherent and appropriate mixed reality experiences based on designing different kinds of trajectory through hybrid structures of digital and physical content.

 Steve Benford is Professor of Collaborative Computing in the Mixed Reality Laboratory at the University of Nottingham where he also directs the ‘Horizon: My Life in Data’ Centre for Doctoral Training. He was previously an EPSRC Dream Fellow, Visiting Professor at the BBC and Visiting Researcher at Microsoft Research. He has received best paper awards at the ACM’s annual Computer-Human Interaction (CHI) conference in 2005, 2009, 2011 and 2012. He also won the 2003 Prix Ars Electronica for Interactive Art, the 2007 Nokia Mindtrek award for Innovative Applications of Ubiquitous Computing, and has received four BAFTA nominations. He was elected to the CHI Academy in 2012. His book Performing Mixed Reality was published by MIT Press in 2011.

Formal Analysis meets HCI: Probabilistic formal analysis of app usage to inform redesign (30 June, 2016)

Speaker: Muffy Calder (University of Glasgow)

Evaluation of how users engage with applications is part of software engineering, informing redesign and/or design of future apps. Good evaluation is based on good analysis – but users are difficult to analyse – they adopt different styles at different times! What characterises the usage style of a user and of populations of users? How should we characterise the different styles? How do characterisations evolve, e.g. over an individual user trace and/or over a number of sessions spanning days and months? And how do characteristics of usage inform evaluation for redesign and future design?

I try to answer these questions in 30 minutes by outlining a formal, probabilistic approach based on discrete time Markov chains and stochastic temporal logic properties, applying it to a mobile app developed right here in Glasgow and used by tens of thousands of users worldwide. A new version of the app, based on our analysis and evaluation, has just been deployed. This is experimental design and formal analysis in the wild. You will be surprised how accessible I can make the formal material.
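As a toy illustration of the kind of model involved, the sketch below builds a small discrete time Markov chain over hypothetical app-usage states, with an absorbing "exit" state, and propagates the state distribution forward; the printed value at step t corresponds to a stochastic temporal logic query like "what is the probability of having exited within t steps?". The states and probabilities are invented, not taken from the talk.

    # Toy DTMC over invented app-usage states; "exit" is absorbing, so the
    # probability mass on "exit" at step t is the probability of having
    # exited by step t (the flavour of a property like P=?[F<=t exit]).
    import numpy as np

    states = ["browse", "search", "log", "exit"]
    P = np.array([
        [0.60, 0.20, 0.10, 0.10],  # from browse
        [0.30, 0.40, 0.20, 0.10],  # from search
        [0.50, 0.10, 0.20, 0.20],  # from log
        [0.00, 0.00, 0.00, 1.00],  # exit is absorbing
    ])
    assert np.allclose(P.sum(axis=1), 1.0)  # each row is a distribution

    pi = np.array([1.0, 0.0, 0.0, 0.0])  # sessions start in "browse"
    for t in range(1, 11):
        pi = pi @ P
        print(f"P(exited by step {t:2d}) = {pi[states.index('exit')]:.3f}")

In the work described, chains like this are inferred from real usage logs, and properties are typically checked with probabilistic model-checking tools rather than by hand.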

Perspectives on 'Crowdsourcing' (16 June, 2016)

Speaker: Helen Purchase

It is now commonplace to collect data from ‘the crowd’. This seminar will summarise discussions that took place during a recent Dagstuhl seminar entitled “Evaluation in the Crowd: Crowdsourcing and Human-Centred Experiments” – with contributions from psychology, sociology, information visualisation and technology researchers. Bring your favourite definition of ‘Crowdsourcing’ with you!

Articulatory directness and exploring audio-tactile maps (09 June, 2016)

Speaker: Alistair Edwards (University of York)

Articulatory directness is a property of interaction first described by Don Norman. The favourite examples are steering a car or scrolling a window. However, I suggest (with examples) that these are arbitrary, learned mappings. This has become important in work which we have been doing on interactive audio-tactile maps for blind people. Unlike conventional tactile maps, ours can be rotated, maintaining an ego-centric frame of reference for the user. Early experiments suggest that this helps the user to build a more accurate internal representation of the real world - and that a steering wheel does not exhibit articulatory directness.

Making for Madagascar (02 June, 2016)

Speaker: Janet Read (University of Central Lancashire)

It is commonly touted in HCI that engagement with users is essential for great product design. Research reports only successes in participatory design with children, but in reality there is much to be concerned about, and there is not any great case to be made for children's engagement in these endeavors. This talk will situate the work of the ChiCI group in designing with children for children by exploring how two games were designed and built for children in rural Madagascar. There is something in the talk for anyone doing research in HCI... and for anyone doing research with human participants.

Emotion Recognition On the Move (28 April, 2016)

Speaker: Juan Ye (University of St Andrews)

Past research in pervasive computing focuses on location-, context-, activity-, and behaviour-awareness; that is, systems provide personalised services to users adapting to their current locations, environmental context, tasks at hand, and ongoing activities. With the rise of new types of applications, emotion recognition is becoming more and more desirable; for example, from adjusting the response or interaction of the system to the emotional states of users in the HCI community, to detecting early symptoms of depression in the health domain, and to better understanding the environmental impact on users' mood in a wider-scale city engineering area. However, recognising different emotional types is a non-trivial task, in terms of computational complexity and user study design; that is, how we inspire and capture natural expressions of users in real-world tasks. In this talk, I will introduce two emotion recognition systems that were recently developed by our senior honours students in St Andrews, and share our experiences in conducting real-world user studies. One system is a smartphone-based application that unobtrusively and continuously monitors and collects users' acceleration data and infers their emotional states, such as neutral, happy, sad, angry, and scared. The other system infers social cues of a conversation (such as positive and negative emotions, agreement and disagreement) through streaming video captured by imaging glasses.
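As a sketch of what the first system's pipeline might look like, the following Python fragment extracts simple statistical features from windows of (x, y, z) acceleration and trains an off-the-shelf classifier. The features, window size and random placeholder data are assumptions for illustration, not the students' actual design.

    # Illustrative accelerometer-to-emotion pipeline: window -> features ->
    # classifier. Random arrays stand in for labelled sensor windows.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    EMOTIONS = ["neutral", "happy", "sad", "angry", "scared"]
    rng = np.random.default_rng(0)

    def window_features(w):
        # w: (samples, 3) acceleration window -> simple summary statistics.
        mag = np.linalg.norm(w, axis=1)
        return [mag.mean(), mag.std(), mag.max() - mag.min(),
                *w.mean(axis=0), *w.std(axis=0)]

    X = np.array([window_features(rng.normal(size=(100, 3)))
                  for _ in range(500)])
    y = rng.integers(0, len(EMOTIONS), size=500)  # placeholder labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))  # ~chance: random data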

Why don't SMEs take Cyber Security seriously? (21 April, 2016)

Speaker: Karen Renaud

I have been seconded to the Scottish Business Resilience Centre this year, trying to answer the question in the title. I will explain how I went about carrying out my study and what my findings were.

EulerSmooth: Smoothing of Euler Diagrams (14 April, 2016)

Speaker: Dan Archambault (Swansea University)

Drawing sets of elements and their intersections is important for many applications in the sciences and social sciences. In this talk, we present a method for improving the appearance of Euler diagrams. The approach works on any diagram drawn with closed curves using polygons. It is based on a force system derived from curve shortening flow. We present this method and discuss its use on practical data sets.

Personal Tracking and Behaviour Change (07 April, 2016)

Speaker: John Rooksby

In this talk I’ll give a brief overview of the personal tracking applications we have been working on at Glasgow, and then describe our work-in-progress on the EuroFIT programme (this is a men’s health intervention being delivered via European football clubs). I’ll conclude with some considerations of the role of Human Computer Interaction in researching behaviour change and developing lifestyle interventions - particularly the role of innovation, user experience design and field trials.

 

Blast Off: Performance, design, and HCI at the Mixed Reality Lab (17 March, 2016)

Speaker: Dr Jocelyn Spence (University of Nottingham)

The University of Nottingham's Mixed Reality Lab is renowned for its work at the forefront of experience design using artistic performance to drive public interactions with technology. However, there is far more going on at the MRL than its inspiring collaborations with Blast Theory. Jocelyn Spence has worked at the intersection of performance and HCI by focusing on more private, intimate groupings involving storytelling. She is now a visiting researcher at the MRL, leading and contributing to projects that take a similarly personal approach to public performance with digital technologies. This talk will cover her current and previous work in Performative Experience Design.

Kinesthetic Communication of Emotions in Human-Computer Interaction (21 January, 2016)

Speaker: Yoren Gaffary (INRIA)

The communication of emotions uses several modalities of expression, such as facial expressions or touch. Even though touch is an effective vector of emotions, it remains little explored. This talk concerns the exploration of the kinesthetic expression and perception of emotions in a human-computer interaction setting. It discusses the kinesthetic expression of some semantically close, acted emotions, and its role in the perception of these emotions. Finally, the talk will go beyond acted emotions by exploring the expression and perception of a spontaneous state of stress. The results have multiple applications, such as better integration of the kinesthetic modality in virtual environments and in remote human-human communication.

Multidisciplinary Madness in the Wild (29 October, 2015)

Speaker: Prof Jon Whittle (Lancaster University)

This talk will reflect on a major 3-year project, called Catalyst, that carried out 13 multidisciplinary, rapid-innovation digital technology research projects in collaboration with community organisations "in the wild". These projects covered a wide range of application domains, including the quantified self, behaviour change, and bio-feedback, but were all aimed at developing innovative digital solutions that could promote social change. Over the 3 years, Catalyst worked in collaboration with around 90 community groups, charities, local councils and other organisations to co-develop research questions, co-design solutions, and co-produce and co-evaluate them. The talk will reflect on what worked well and badly in this kind of highly multidisciplinary 'in the wild' research project. www.catalystproject.org.uk

Bio: Jon Whittle is Professor of Computer Science and Head of School at Lancaster’s School of Computing and Communications. His background is in software engineering and human-computer interaction research but in the last six years, he has taken a keen interest in interdisciplinary research. During this time, he has led five major interdisciplinary research projects funded to around £6M. Through these, he has learned a lot about what works — and what doesn’t — when trying to bring researchers from different disciplinary backgrounds together.

How do I Look in This? Embodiment and Social Robotics (16 October, 2015)

Speaker: Ruth Aylett
Glasgow Social Robotics Seminar Series

Robots have been produced with a wide variety of embodiments, from plastic-skinned dinosaurs to human lookalikes, via any number of different machine-like robots. Why is embodiment important? What do we know about the impact of embodiment on the human interaction partners of a social robot? How naturalistic should we try to be? Can one robot have multiple embodiments? How do we engineer expressive behaviour across embodiments? I will discuss some of these issues in relation to work in the field.

Intent aware Interactive Displays: Recent Research and its Antecedents at Cambridge Engineering (15 October, 2015)

Speaker: Pat Langdon and Bashar Ahmad (University of Cambridge)

Current work at CUED aimed at stabilising pointing for moving touchscreen displays has met recent success in the automotive sector, including funding and patents. This talk will establish the antecedents of the approach in studies aimed at improving access to computers for people with impairments of movement and vision.

One theme in the EDC has been computer-assisted interaction for movement impairment using haptic feedback devices. This early approach showed some promise in mitigating extremes of movement but was dependent on hardware implementations such as the Logitech haptic mouse. Other major studies since have examined more general issues behind the development of multimodal interfaces: for interactive digital TV (EU GUIDE), and for use in adaptive mobile interfaces for new developments in wireless communication, in the India UK Advanced Technology Centre (IU-ATC).
Most recently, Pat Langdon’s collaboration with the department’s signal processing group has led to the realisation that predicting the user’s pointing intentions from extremely perturbed cursor movement is a similar problem to predicting a moving object’s future position based on irregularly timed and cluttered trajectory data points from multiple sources. This raised an opportunity in the automotive domain, and Bashar Ahmad will describe in detail recent research on using software filtering as a way of improving interactions with touchscreens in a moving vehicle.
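A minimal sketch of the underlying idea: treat the cursor like a tracked object and filter its perturbed trajectory with a constant-velocity Kalman filter. The real work handles irregular sampling and clutter with more sophisticated Bayesian filters; this textbook version, with invented noise parameters, just shows the predict/update cycle.

    # Constant-velocity Kalman filter on a noisy synthetic cursor trace.
    # All parameters are illustrative, not those of the CUED system.
    import numpy as np

    dt = 1 / 60  # nominal frame interval
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1.0]])  # state: x, y, vx, vy
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])  # we observe position only
    Q = np.eye(4) * 1e-4   # process noise
    R = np.eye(2) * 4.0    # measurement noise (tremor/vibration)

    x, P = np.zeros(4), np.eye(4)
    rng = np.random.default_rng(1)
    truth = np.column_stack([np.linspace(0, 300, 60),
                             np.linspace(0, 200, 60)])

    for z in truth + rng.normal(scale=2.0, size=truth.shape):
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (z - H @ x)                        # update
        P = (np.eye(4) - K @ H) @ P

    print("final estimate:", x[:2], "true endpoint:", truth[-1])

Extrapolating the filtered state forward along its velocity is what lets an interface stabilise or expand the target the user appears to be heading for.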

BIO

Dr Pat Langdon is a Principal Research Associate in the Cambridge University Engineering Department and lead researcher in inclusive design within the Engineering Design Centre. He has originated numerous research projects in design for inclusion and HMI since joining the department in 1997. Currently he is PI of two projects, one a commercial collaboration in automotive, and Co-I of a 4-year EPSRC research collaboration.

Dr Bashar Ahmad is a Senior Research Associate in the Signal Processing and Communications (SigProC) Laboratory, Engineering Department, Cambridge University. Prior to joining SigProC, Bashar was a postdoctoral researcher at Imperial College London. His research interests include statistical signal processing, Bayesian inference, multi-modal human computer interactions, sub-Nyquist sampling and cognitive radio.

GlobalFestival: Evaluating Real World Interaction on a Spherical Display (03 September, 2015)

Speaker: Julie Williamson (University of Glasgow)

Spherical displays present compelling opportunities for interaction in public spaces. However, there is little research into how touch interaction should control a spherical surface or how these displays are used in real-world settings. This paper presents an in-the-wild deployment of an application for a spherical display called GlobalFestival that utilises two different touch interaction techniques. The first version of the application allows users to spin and tilt content on the display, while the second version only allows spinning the content. During the 4-day deployment, we collected overhead video data and on-display interaction logs. The analysis brings together quantitative and qualitative methods to understand how users approach and move around the display, how on-screen interaction compares in the two versions of the application, and how the display supports social interaction given its novel form factor.
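As an illustration of the difference between the two versions, here is a hypothetical controller sketch: drag input maps to spin (yaw) in both builds, while tilt (pitch) is only applied when enabled. The class, gain and clamp values are invented; the paper describes the interaction techniques, not this code.

    # Hypothetical spin/tilt controller for a spherical display.
    class SphereController:
        def __init__(self, allow_tilt=True, gain=0.25):
            self.yaw = 0.0          # spin around the vertical axis
            self.pitch = 0.0        # tilt toward/away from the viewer
            self.allow_tilt = allow_tilt
            self.gain = gain        # degrees of rotation per pixel dragged

        def on_drag(self, dx, dy):
            self.yaw = (self.yaw + dx * self.gain) % 360
            if self.allow_tilt:
                # Clamp so content cannot flip over the poles.
                self.pitch = max(-60.0, min(60.0, self.pitch + dy * self.gain))

    spin_tilt = SphereController(allow_tilt=True)   # version 1: spin + tilt
    spin_only = SphereController(allow_tilt=False)  # version 2: spin only
    for c in (spin_tilt, spin_only):
        c.on_drag(dx=120, dy=-80)
        print(c.yaw, c.pitch)  # spin-only ignores the vertical component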

Breaching the Smart Home (26 June, 2015)

Speaker: Chris Speed (University of Edinburgh)
Breaching the Smart Home

This talk reflects upon the work of the Centre for Design Informatics across the Internet of Things. From toilet roll holders that operate as burglar alarms to designing across the Block Chain, the talk will use design case studies to explore both the opportunities that interoperability offers for designing new products, practices and markets, and also the dangers. In order to really explore the potential of the Internet of Things, ethical boundaries are stressed and sometimes breached. This talk will trace the line between imaginative designing with data and the exploitation of personal identities.

Prof Chris Speed is Chair of Design Informatics at the University of Edinburgh where his research focuses upon the Network Society, Digital Art and Technology, and The Internet of Things. 

Intro to the Singapore Institute of Technology & Interactive Computing Research Initiatives at SIT (25 June, 2015)

Speaker: Jeannie Lee

Established in 2009, the Singapore Institute of Technology (SIT) is Singapore's fifth and newest autonomous university. We will start with some background and information about the university, and then an overview of potential HCI-related research initiatives and collaborations in the context of Singapore's healthcare, hospitality, creative and technology industries. Ideas and discussions are welcome!

Recruitment to research trials: Linking action with outcome (11 June, 2015)

Speaker: Graham Brennan (University of Glasgow)

Bio: Dr Graham Brennan is a Research Associate and Project Manager in the Institute of Health and Wellbeing with a specialisation in recruitment to behaviour change programmes at the University of Glasgow. He is interested in the impact of health behaviour change programmes on the health of the individual and society as well as the process of engagement and participation. More specifically, his work examines the process and mechanisms of engagement that affect recruitment.

 

FeedFinder: A Location-Mapping Mobile Application for Breastfeeding Women (04 June, 2015)

Speaker: Madeline Balaam (University of Newcastle)

Breastfeeding is positively encouraged across many countries as a public health endeavour. The World Health Organisation recommends breastfeeding exclusively for the first six months of an infant’s life. However, women can struggle to breastfeed, and to persist with breastfeeding, for a number of reasons, from technique to social acceptance. This paper reports on four phases of a design and research project, from sensitising user-engagement and user-centred design to the development and in-the-wild deployment of a mobile phone application called FeedFinder. FeedFinder has been developed with breastfeeding women to support them in finding, reviewing and sharing public breastfeeding places with other breastfeeding women. We discuss how mobile technologies can be designed to support public health endeavours, and suggest that public health technologies are better aimed at communities and societies rather than individuals.

Dr Madeline Balaam is a lecturer in the School of Computing Science within Newcastle University. 

 

Analyzing online interaction using conversation analysis: Affordances and practices (14 May, 2015)

Speaker: Dr Joanne Meredith (University of Salford)

The aim of this paper is to show how conversation analysis – a method devised for spoken interaction – can be used to analyze online interaction. The specific focus of this presentation will be on demonstrating how the impact of the design features, or affordances, of an online medium can be analyzed using conversation analysis. I will use examples from a corpus of 75 one-to-one Facebook ‘chats’, collected using screen capture software, which I argue can provide us with additional information about participants’ real-time, lived experiences of online interaction. Through examining a number of interactional practices found in my data corpus, I will show how the analysis of real-life examples of online interaction can provide us with insights into how participants adapt their interactional practices to suit the affordances of the medium.

Jo Meredith is a Lecturer in Psychology at the University of Salford. Before joining the University of Salford, Jo was a Lecturer at the University of Manchester and completed her doctoral thesis at Loughborough University. She is interested in developing the use of conversation analysis for online interaction, as well as investigating innovative methods for collecting online data.  

Trainable Interaction Models for Embodied Conversational Agents (30 April, 2015)

Speaker: Mary Ellen Foster

Human communication is inherently multimodal: when we communicate with one another, we use a wide variety of channels, including speech, facial expressions, body postures, and gestures. An embodied conversational agent (ECA) is an interactive character -- virtual or physically embodied -- with a human-like appearance, which uses its face and body to communicate in a natural way. Giving such an agent the ability to understand and produce natural, multimodal communicative behaviour will allow humans to interact with such agents as naturally and freely as they interact with one another, enabling the agents to be used in applications as diverse as service robots, manufacturing, personal companions, automated customer support, and therapy.

To develop an agent capable of such natural, multimodal communication, we must first record and analyse how humans communicate with one another. Based on that analysis, we then develop models of human multimodal interaction and integrate those models into the reasoning process of an ECA. Finally, the models are tested and validated through human-agent interactions in a range of contexts.

In this talk, I will give three examples where the above steps have been followed to create interaction models for ECAs. First, I will describe how human-like referring expressions improve user satisfaction with a collaborative robot; then I will show how data-driven generation of facial displays affects interactions with an animated virtual agent; finally, I will describe how trained classifiers can be used to estimate engagement for customers of a robot bartender.

Bio: Mary Ellen Foster will join the GIST group as a Lecturer in July 2015. Her main research interest is embodied communication: understanding human face-to-face conversation by implementing and evaluating embodied conversational agents (such as animated virtual characters and humanoid robots) that are able to engage in natural, face-to-face conversation with human users. She is currently a Research Fellow in the Interaction Lab at the School of Mathematical and Computer Sciences at Heriot-Watt University in Edinburgh, and has previously worked in the Robotics and Embedded Systems Group at the Technical University of Munich and in the School of Informatics at the University of Edinburgh.  She received her Ph.D. in Informatics from the University of Edinburgh in 2007.

To Beep or Not to Beep? Comparing Abstract versus Language-Based Multimodal Driver Displays (02 April, 2015)

Speaker: Ioannis Politis

Abstract: Multimodal displays are increasingly being utilized as driver warnings. Abstract warnings, without any semantic association to the signified event, and language-based warnings are examples of such displays. This paper presents a first comparison between these two types, across all combinations of audio, visual and tactile modalities. Speech, text and Speech Tactons (a novel form of tactile warnings synchronous to speech) were compared to abstract pulses in two experiments. Results showed that recognition times of warning urgency during a non-critical driving situation were shorter for abstract warnings, highly urgent warnings and warnings including visual feedback. Response times during a critical situation were shorter for warnings including audio. We therefore suggest abstract visual feedback when informing drivers during a non-critical situation and audio in a highly critical one. Language-based warnings during a critical situation performed equally well as abstract ones, so they are suggested as less annoying vehicle alerts.

Situated Social Media Use: A Methodological Approach to Locating Social Media Practices and Trajectories (24 March, 2015)

Speaker: Alexandra Weilenmann (University of Gothenburg)

In this talk, I will present a few examples of methodological explorations of social media activities, trying to capture and understand them as located, situated practices. This methodological endeavor spans analyzing patterns in big data feeds (here Instagram) as well as small-scale video-based ethnographic studies of user activities. A situated social media perspective involves examining how production and consumption of social media are intertwined. Drawing upon our studies of social media use in cultural institutions, we show how visitors orient to their social media presence while attending to physical space during the visit, and how editing and sharing processes are shaped by the trajectory through the space. I will discuss the application and relevance of this approach for understanding social media and social photography in situ. I am happy to take comments and feedback on this approach, as we are currently working to develop it.

Alexandra Weilenmann holds a PhD in informatics and currently works at the Department of Applied IT, University of Gothenburg, Sweden. She has over 15 years’ experience researching the use of mobile technologies, with a particular focus on adapting traditional ethnographic and sociological methods to enable the study of new practices. Previous studies include mobile technology use among hunters, journalists, airport personnel, professional drivers, museum visitors, teenagers and the elderly. Weilenmann has experience working in projects in close collaboration with stakeholders, both in IT development projects (e.g. Ricoh Japan) and with Swedish special interest organizations (e.g. Swedish Institute of Assistive Technology). She has served on several boards dealing with issues of the integration of IT in society, for example the Swedish Government’s Use Forum and the Swedish Governmental Agency for Innovation Systems (Vinnova), and as an expert for telephone company DORO.

Mobile interactions from the wild (19 March, 2015)

Speaker: Kyle Montague (Dundee)

Laboratory-based evaluations allow researchers to control for external factors that can influence participant interaction performance. Typically, these studies tailor situations to remove distraction and interruption, thus ensuring users’ attention on the task and relative precision in interaction accuracy. While highly controlled laboratory experiments provide clean measurements with minimal errors, interaction behaviors captured within natural settings differ from those captured within the laboratory. Additionally, laboratory-based evaluations impose time restrictions on user studies. Characteristically lasting no more than an hour at a time, they restrict the potential for capturing the performance changes that naturally occur throughout daily usage as a result of fatigue or situational constraints. These changes are particularly interesting when designing for mobile interactions, where environmental factors can impose significant constraints and complications on users’ interaction abilities.

This talk will discuss recent works exploring mobile touchscreen interactions from the wild involving participants with motor and visual impairments - sharing the successes and pitfalls of these approaches, and the creation of a new data collection framework to support future mobile interaction studies in-the-wild.

HCI in cars: Designing and evaluating user-experiences for vehicles (12 March, 2015)

Speaker: Gary Burnett (University of Nottingham)

Driving is an everyday task which is fundamentally changing, largely as a result of the rapid increase in the number of computing and communications-based technologies within/connecting vehicles. Whilst there is considerable potential for different systems (e.g. on safety, efficiency, comfort, productivity, entertainment etc.), one must always adopt a human-centred perspective. This talk will raise the key HCI issues involved in the driving context and their effects on the design of the user interface – initially aiming to minimise the likelihood of distraction. In addition, the advantages and disadvantages of different evaluation methods commonly employed in the area will be discussed. In the final part of the talk, issues will be raised for future vehicles, particularly considering the impact of increasing amounts of automation functionality, which fundamentally changes the role of the human “driver” - potentially from that of vehicle controller to, periodically, one of system status monitor. Such a paradigm shift raises profound issues concerning the design of the vehicle HMI, which must allow a user to understand the “system” and also to seamlessly hand over and regain control in an intuitive manner.

Gary Burnett is Associate Professor in Human Factors in the Faculty of Engineering at the University of Nottingham. 

Generating Implications for Design (05 March, 2015)

Speaker: Corina Sas (Lancaster University)

A central tenet of HCI is that technology should be user-centric, with designs being based around social science findings about users. Nevertheless a key challenge in interaction design is translating empirical findings into actionable ideas that inform design. Despite various design methods aiming to bridge this gap, such implications for informing design are still seen as problematic. However there has been little exploration into what practitioners understand by implications for design, the functions of such implications and the principles behind their creation. We report on interviews with twelve expert HCI design researchers probing: the roles and types of implications, their intended beneficiaries, and the process of generating and evaluating them. We synthesize different types of implications into a framework to guide the generation of implications. Our findings identify a broader range of implications than those described in ethnographical studies, capturing technologically implementable knowledge that generalizes to different settings. We conclude with suggestions about how we might reliably generate more actionable implications.

Dr. Sas is a Senior Lecturer in HCI, School of Computing and Communications, Lancaster University. Her research interests include human-computer interaction, interaction design, user experience, designing tools and interactive systems to support high level skill acquisition and training such as creative and reflective thinking in design, autobiographical reasoning, emotional processing and spatial cognition. Her work explores and integrates wearable bio sensors, lifelogging and memory technologies, and virtual reality.

Apache Cordova Tutorial (26 February, 2015)

Speaker: Mattias Rost

Mattias Rost will lead a two hour, hands-on tutorial on Apache Cordova (http://cordova.apache.org/). Apache Cordova is a platform for building native mobile applications using HTML, CSS and JavaScript. Everyone welcome. Bring a laptop!

Blocks: A Tool Supporting Code-based Exploratory Data Analysis (12 February, 2015)

Speaker: Mattias Rost

Large scale trials of mobile apps can generate a lot of log data. Logs contain information about the use of the apps. Existing support for analysing such log data includes mobile logging frameworks such as Flurry and Mixpanel, and more general visualisation tools such as Tableau and Spotfire. While these tools are great for giving a first glimpse at the content of the data and producing generic descriptive statistics, they are not great for drilling down into the details of the app at hand. In our own work we end up writing custom interactive visualisation tools for the application at hand, to get a deeper understanding of the use of the particular app. We have therefore developed a new type of tool that supports the practice of writing custom data analysis and visualisation code. We call it Blocks. In this talk I will describe what Blocks is, how Blocks encourages code writing, and how it supports the craft of log data analysis.
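The kind of small, bespoke analysis Blocks is designed to support might look like the sketch below: a script that reads a hypothetical JSON-lines event log and counts sessions per user, splitting sessions on a gap threshold. The log format, field names and threshold are all invented for illustration.

    # Example of a custom log analysis: per-user session counts from a
    # JSON-lines log. Format and 30-minute session gap are assumptions.
    import json
    from collections import defaultdict

    SESSION_GAP = 30 * 60  # seconds of inactivity that start a new session

    def sessions_per_user(path):
        times_by_user = defaultdict(list)
        with open(path) as f:
            for line in f:
                e = json.loads(line)  # e.g. {"user": "u1", "t": 1415000000}
                times_by_user[e["user"]].append(e["t"])
        counts = {}
        for user, times in times_by_user.items():
            times.sort()
            counts[user] = 1 + sum(1 for prev, cur in zip(times, times[1:])
                                   if cur - prev > SESSION_GAP)
        return counts

    # print(sessions_per_user("app_log.jsonl"))

The point of Blocks is that ad hoc analyses like this are written, run and visualised within the tool rather than as throwaway scripts.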

Mattias Rost is a researcher in Computing Science at the University of Glasgow. He is currently working on the EPSRC funded Populations Programme.

The DeepTree Exhibit: Visualizing the Tree of Life to Facilitate Informal Learning (05 February, 2015)

Speaker: Florian Block (Harvard University)

More than 40% of Americans still reject the theory of evolution. This talk focuses on the DeepTree exhibit, a multi-user, multi-touch interactive visualization of the Tree of Life. The DeepTree has been developed to facilitate collaborative visual learning of evolutionary concepts. The talk will outline an iterative process in which a multi-disciplinary team of computer scientists, learning scientists, biologists, and museum curators worked together throughout design, development, and evaluation. The outcome of this process is a fractal-based tree layout that reduces visual complexity while being able to capture all life on earth; a custom rendering and navigation engine that prioritizes visual appeal and smooth fly-through; and a multi-user interface that encourages collaborative exploration while offering guided discovery. The talk will present initial evaluation outcomes illustrating that the large dataset encourages free exploration, triggers emotional responses, and supports self-selected, multi-level engagement and learning.
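To make "fractal-based layout" concrete, here is a toy recursive layout in which each subtree is a smaller copy of the same branching pattern, so branch lengths shrink geometrically and arbitrarily deep trees stay bounded on screen. This only illustrates the principle; it is not DeepTree's actual algorithm.

    # Toy fractal tree layout: children fan out around the parent's heading,
    # with geometrically shrinking branch length. Illustration only.
    import math

    def layout(tree, x=0.0, y=0.0, angle=90.0, length=1.0, spread=60.0,
               pos=None):
        name, children = tree
        pos = {} if pos is None else pos
        pos[name] = (round(x, 3), round(y, 3))
        if children:
            step = spread / max(len(children) - 1, 1)
            a = angle - spread / 2
            for child in children:
                cx = x + length * math.cos(math.radians(a))
                cy = y + length * math.sin(math.radians(a))
                layout(child, cx, cy, a, length * 0.5, spread * 0.8, pos)
                a += step
        return pos

    tree = ("life", [("bacteria", []),
                     ("eukaryotes", [("plants", []), ("animals", [])])])
    print(layout(tree))

Because the branch lengths form a geometric series, even a very deep tree occupies a bounded area, which is what makes smooth fly-through of a dataset covering all life feasible.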

Bio: Florian earned his PhD in 2010 at Lancaster University, UK (thesis titled “Reimagining Graphical User Interface Ecologies”). Florian’s work at SDR Lab has focused on using multi-touch technology and information visualization to facilitate discovery and learning in museums. He has worked on designing user interfaces for crowd interaction, developed the DeepTree exhibit, an interactive visualization of the tree of life (tolweb.org), as well as introduced methodological tools to quantify engagement of fluid group configurations around multi-touch tabletops in museums. Ultimately, Florian is interested in how interactive technology can provide unique new opportunities for learning, to understand which aspects of interactivity and collaboration contributes to learning, and how to use large datasets to engage the general public in scientific discovery and learning.

Supporting text entry review mode and other lessons from studying older adult text entry (29 January, 2015)

Speaker: Emma Nicol and Mark Dunlop (Strathclyde)

As part of an EPSRC project on text entry for older adults we have run several workshops. A theme of supporting a "write then review" style of entry has emerged from these workshops. In this talk we will present the lessons from our workshops along with our experimental keyboard, which supports review mode by highlighting various elements of the text you have entered. An Android demo will be available for download during the talk.

Addressing the Fundamental Attribution Error of Design Using the ABCS (11 December, 2014)

Speaker: Gordon Baxter

Why is it that designers continue to be irritated when users struggle to make their apparently intuitive systems work? I will explain how we believe that this perception is related to the fundamental attribution error concept from social psychology. The problem of understanding users is hard, though, because there is so much to learn and understand. I will go on to talk about the ABCS framework, a concept we developed to help organise and understand the information we know about users, and using examples will illustrate how it can affect system design.

Gordon Baxter is a co-author of the book Foundations For Designing User Centred Systems

Augmenting and Evaluating Communication with Multimodal Flexible Interfaces (04 December, 2014)

Speaker: Eve Hoggan

This talk will detail an exploratory study of remote interpersonal communication using our ForcePhone prototype. This research focuses on the types of information that can be expressed between two people using the haptic modality, and the impact of different feedback designs. Based on the results of this study and my current work, I will briefly discuss the potential of deformable interfaces and multimodal interaction techniques to enrich communication for users with impairments. Then I will finish with an introduction to neurophysiological measurements of such interfaces.

Bio
Eve Hoggan is a Research Fellow at the Aalto Science Institute and the Helsinki Institute for Information Technology HIIT in Finland, where she is vice-leader of the Ubiquitous Interaction research group. Her current research focuses on the creation of novel interaction techniques, interpersonal communication and non-visual multimodal feedback. The aim of her research is to use multimodal interaction and varying form factors to create more natural and effortless methods of interaction between humans and technology regardless of any situational or physical impairment. More information can be found at www.evehoggan.com

Blocks: A Tool Supporting Code-based Exploratory Data Analysis (20 November, 2014)

Speaker: Mattias Rost

Large scale trials of mobile apps can generate a lot of log data. Logs contain information about the use of the apps. Existing support for analysing such log data includes mobile logging frameworks such as Flurry and Mixpanel, and more general visualisation tools such as Tableau and Spotfire. While these tools are great for giving a first glimpse at the content of the data and producing generic descriptive statistics, they are not great for drilling down into the details of the app at hand. In our own work we end up writing custom interactive visualisation tools for the application at hand, to get a deeper understanding of the use of the particular app. We have therefore developed a new type of tool that supports the practice of writing custom data analysis and visualisation code. We call it Blocks. In this talk I will describe what Blocks is, how Blocks encourages code writing, and how it supports the craft of log data analysis.

Mattias Rost is a researcher in Computing Science at the University of Glasgow. He is currently working on the EPSRC funded Populations Programme. He was awarded his PhD by the University of Stockholm in 2013. 

MyCity: Glasgow 2014 (13 November, 2014)

Speaker: Marilyn Lennon

During the summer of 2014, we (a small team of researchers at Glasgow University) designed, developed and deployed a smartphone app-based game for the Commonwealth Games in Glasgow. The aim of our game overall was to try to get people to engage with Glasgow, find out more about the Commonwealth Games, and above all to get people to walk more through 'gamification'. In reality, though, we had no time or money for a well-designed research study and proper exploration of gamification and engagement, and in fact a huge amount of our effort was focused instead on testing in-app advertising models, understanding business models for 'wellness' apps, dealing with research and enterprise, and considering routes for commercialisation of our underlying platform and game. Come along and hear what we learned (good and bad) about deploying a health and wellness app in the 'real world'.

Dr Marilyn Lennon is a senior lecturer in Computer and Information Sciences at the University of Strathclyde.

Ms. Male Character - Tropes vs Women (23 October, 2014)

Speaker: YouTube Video - Anita Sarkeesian

In this session we will view and discuss a video from the Feminist Frequency website (http://www.feministfrequency.com). The video is outlined as follows: "In this episode we examine the Ms. Male Character trope and briefly discuss a related pattern called the Smurfette Principle. We’ve defined the Ms. Male Character Trope as: The female version of an already established or default male character. Ms. Male Characters are defined primarily by their relationship to their male counterparts via visual properties, narrative connection or occasionally through promotional materials."

Use of Eye Tracking to Rethink Display Blindness (16 October, 2014)

Speaker: Sheep Dalton

Public and situated display technologies are an increasingly common part of many urban spaces, including advertising displays on bus stops, interactive screens providing information to tourists or visitors to a shopping centre, and large screens in transport hubs showing travel information as well as news and advertising content. Situated display research has also been prominent in HCI, ranging from studies of community displays in cafes and village shops to large interactive games in public spaces and techniques to allow users to interact with different configurations of display and personal technologies.

Observational studies of situated displays have suggested that they are rarely looked at. Using a mobile eye tracker during a realistic shopping task in a shopping centre, we show that people look at displays more than observational studies might suggest, but for very short times (a third of a second on average) and from quite far away. We characterise the patterns of eye movements that precede looking at a display and discuss some of the implications for the design of situated display technologies deployed in public space.

Economic Models of Search (02 October, 2014)

Speaker: Leif Azzopardi

Understanding how people interact when searching is central to the study of Interactive Information Retrieval (IIR). Most prior work has been conceptual, observational or empirical. While this has led to numerous insights and findings regarding the interaction between users and systems, the theory has lagged behind. In this talk I will first provide an overview of the typical IIR process. I will then introduce an economic model of search based on production theory. This initial model is then extended to incorporate other variables that affect the interaction between the user and the search engine. The refined model is more realistic, provides a better description of the IIR process and enables us to generate eight interaction-based hypotheses about search behaviour. To validate the model, I will show how the observed search behaviours from an empirical study with thirty-six participants were consistent with the theory. This work not only gives a concise and compact representation of search behaviour, but also provides a strong theoretical basis for future IIR research. The modelling techniques used are also more generally applicable to other situations involving human-computer interaction, and could be helpful in understanding many other scenarios.

This talk is based on the paper "Modeling Interaction with Economic Models of Search", which received an Honorable Mention at ACM SIGIR 2014; see http://dl.acm.org/citation.cfm?id=2609574
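
As a rough sketch of the kind of model production theory gives here (illustrative notation only; the paper's exact formulation and parameter names may differ), session gain can be written as a Cobb-Douglas production function of the number of queries Q and the number of assessments per query A, with a linear cost of interaction:

    \[
      g(Q, A) = k\,Q^{\alpha} A^{\beta},
      \qquad
      c(Q, A) = c_q Q + c_a Q A,
      \qquad
      \min_{Q,\,A} \; c(Q, A) \ \text{subject to}\ g(Q, A) \ge g^{*}
    \]

Solving the cost-minimisation problem yields testable interaction hypotheses of the sort the talk describes; for example, as the relative cost of assessing (c_a) rises, the optimal strategy shifts towards posing more queries and assessing fewer documents per query.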

CANCELLED Instrumental Interaction in Multisurface Environments (25 September, 2014)

Speaker: Michel Beaudouin-Lafon
This talk will illustrate the principles and applications of instrumental interaction, in particular in the context of the WILD multi-surface environment.

Unfortunately this talk has been cancelled.

Using degraded MP3 quality to encourage a health-improving walking pace: BeatClearWalker (18 September, 2014)

Speaker: Andreas Komninos

Promotion of walking is integral to improving public health for many sectors of the population. National governments and health authorities now widely recommend a total daily step target (typically 7,000-10,000 steps/day). Meeting this target can provide considerable physical and mental health benefits and is seen as key to reducing national obesity levels and improving public health. However, to optimise the health benefits, walking should be performed at a “moderate” intensity, often defined as 3 times resting metabolic rate, or 3 METs. While there are numerous mobile fitness applications that monitor distance walked, none directly target the pace, or cadence, of walkers.

BeatClearWalker is a fitness application for smartphones, designed to help users learn how to walk at a moderate pace (monitored via walking cadence, in steps/min) and to encourage maintenance of that cadence. The application features a music player with a linked pedometer. If the user’s target walking cadence is not being reached, BeatClearWalker applies real-time audio effects to the music. This provides an immersive and intuitive application that can easily be integrated into everyday life, as it allows users to walk while listening to their own music and encourages eyes-free interaction with the device.

This talk introduces the application, its design and evaluation. Results show that using our degraded music decreases the number of below-cadence steps and, furthermore, that the effect can persist when the degradation is stopped.
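
A minimal sketch of the feedback rule the abstract describes (hypothetical names and thresholds; the app's actual signal processing is not documented here): estimate cadence over a sliding window of pedometer step events, then map any shortfall below the target cadence to a degradation level that drives the audio effect.

    import time
    from collections import deque

    class CadenceMonitor:
        """Estimate walking cadence (steps/min) from step timestamps and map
        a shortfall below the target cadence to a music degradation level."""

        def __init__(self, target_spm=120.0, window_s=10.0):
            self.target_spm = target_spm   # target cadence, steps per minute
            self.window_s = window_s       # sliding window for the estimate
            self.steps = deque()           # recent step timestamps (seconds)

        def on_step(self, t=None):
            t = time.monotonic() if t is None else t
            self.steps.append(t)
            while self.steps and t - self.steps[0] > self.window_s:
                self.steps.popleft()

        def cadence_spm(self):
            return 60.0 * len(self.steps) / self.window_s

        def degradation(self):
            """0.0 = clean playback; 1.0 = maximally degraded audio."""
            shortfall = (self.target_spm - self.cadence_spm()) / self.target_spm
            return max(0.0, min(1.0, shortfall))

The returned level would control the wet/dry mix of a real-time effect (e.g. a low-pass filter) in the music player, falling back to zero - clean music - once the walker reaches the target cadence.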

GIST Seminar (Automotive UI / Mobile HCI) (11 September, 2014)

Speaker: Alex Ng and Ioannis Politis
Ioannis and Alex will present their papers from Automotive UI and Mobile HCI.

Speaker: Ioannis Politis
Title: Speech Tactons Improve Speech Warnings for Drivers

This paper describes two experiments evaluating a set of speech and tactile driver warnings. Six speech messages of three urgency levels were designed, along with their tactile equivalents, Speech Tactons. These new tactile warnings retained the rhythm of speech and used different levels of roughness and intensity to convey urgency. The perceived urgency, annoyance and alerting effectiveness of these warnings were evaluated. Results showed that bimodal (audio and tactile) warnings were rated as more urgent, more annoying and more effective compared to unimodal ones (audio or tactile). Perceived urgency and alerting effectiveness decreased along with the designed urgency, while perceived annoyance was lowest for warnings of medium designed urgency. In the tactile modality, ratings varied less as compared to the audio and audiotactile modalities. Roughness decreased and intensity increased ratings for Speech Tactons in all the measures used. Finally, Speech Tactons produced acceptable recognition accuracy when tested without their speech counterparts. These results demonstrate the utility of Speech Tactons as a new form of tactile alert while driving, especially when synchronized with speech.
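
As a rough illustration of how such warnings can be synthesised (an assumption-laden sketch, not the authors' code: vibrotactile roughness is commonly produced by amplitude-modulating a carrier near the skin's peak sensitivity of roughly 250 Hz, and the rhythm here is an invented three-beat pattern):

    import numpy as np

    FS = 8000  # output sample rate (Hz) for the actuator signal

    def tacton_pulse(duration_s, intensity=1.0, roughness_hz=0.0, carrier_hz=250.0):
        """One vibrotactile pulse: a sine carrier, optionally amplitude-
        modulated to feel rough; intensity and roughness convey urgency."""
        t = np.arange(int(FS * duration_s)) / FS
        carrier = np.sin(2 * np.pi * carrier_hz * t)
        if roughness_hz > 0:
            carrier *= 0.5 * (1.0 + np.sin(2 * np.pi * roughness_hz * t))
        return intensity * carrier

    def speech_tacton(rhythm, **kwargs):
        """Concatenate (on, off) durations taken from a speech message's rhythm."""
        parts = []
        for on_s, off_s in rhythm:
            parts.append(tacton_pulse(on_s, **kwargs))
            parts.append(np.zeros(int(FS * off_s)))
        return np.concatenate(parts)

    # e.g. an urgent three-beat warning: rough, full-intensity pulses
    signal = speech_tacton([(0.15, 0.05), (0.15, 0.05), (0.3, 0.0)],
                           intensity=1.0, roughness_hz=30.0)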

Speaker: Alex Ng
Title: Comparing Evaluation Methods for Encumbrance and Walking on Interaction with Touchscreen Mobile Devices

In this talk, I will be presenting our accepted paper at this year’s MobileHCI. The paper compares two mobile evaluation methods, walking on a treadmill and walking on the ground, to evaluate the effects of encumbrance (holding objects during interaction with mobile devices) while the preferred walking speed (PWS) is controlled. We will discuss the advantages and limitations of each evaluation method when examining the impact of encumbrance.

GIST Talk - Accent the Positive (10 April, 2014)

Speaker: Alistair Edwards

The way people speak tells a lot about their origins – geographical and social – but when someone can only speak with the aid of an artificial voice (such as Stephen Hawking), conventional expectations are subverted. The ultimate aim of most speech synthesis research is more human-sounding voices, yet the most commonly used synthesiser, DECtalk, is quite robotic. Why is this - and is a human voice always appropriate?

This seminar will explore some of the limitations and possibilities of speech technology.

GIST Talk - Socially Intelligent Sensing Systems (04 February, 2014)

Speaker: Dr Hayley Hung

One of the fundamental questions of computer science is how machines can best serve people. In this talk, I will focus on how automated systems can achieve this by being aware of people as social beings. So much of our lives revolves around face-to-face communication. It affects our relationships with others, the influence they have over us, and how this can ultimately transform into decisions that affect a single person or many more. However, we understand relatively little about how to automate the perception of social behaviour, and recent research findings only touch the tip of the iceberg.

In this talk, I will describe some of the research I have carried out to address this gap, presenting my work on devising models to automatically interpret face-to-face human social behaviour using cameras, microphones and wearable sensors. This includes problems such as automatically estimating who is dominating a conversation, or whether two people are attracted to each other. I will highlight the challenges facing this fascinating research problem and the open research questions that remain.

Bio: Hayley Hung is an Assistant Professor and Delft Technology Fellow in the Pattern Recognition and Bioinformatics group at Delft University of Technology in the Netherlands. Before that, she held a Marie Curie Intra-European Fellowship at the Intelligent Systems Lab at the University of Amsterdam, working on devising models to estimate various aspects of human behaviour in large social gatherings. Between 2007 and 2010, she was a post-doctoral researcher at Idiap Research Institute in Switzerland, working on methods to automatically estimate human interactive behaviour in meetings, such as dominance, cohesion and deception. She obtained her PhD in Computer Vision from Queen Mary University of London in 2007, and her first degree, in Electrical and Electronic Engineering, from Imperial College London.

GIST Talk - Passive Brain-Computer Interfaces for Automated Adaptation and Implicit Control in Human-Computer Interaction (31 January, 2014)

Speaker: Dr Thorsten Zander

Over the last three decades, Brain-Computer Interfaces (BCIs) have been investigated extensively as a means of interaction. While most research has aimed at supportive systems for severely disabled persons, the last decade has shown a trend towards applications for the general population. For users without disabilities, a specific type of BCI, the passive Brain-Computer Interface (pBCI), has shown high potential for improving Human-Machine and Human-Computer Interaction.

In this seminar I will discuss the categorisation of BCI research, in which we introduced the idea of pBCIs in 2008, and potential areas of application. Specifically, I will present several studies providing evidence that pBCIs can have a significant effect on the usability and efficiency of given systems. I will show that the user's situational interpretation, intention and strategy can be detected by pBCIs, and that this information can be used to adapt the technical system automatically during interaction and enhance the performance of the human-machine system.

From the perspective of pBCIs, a new type of interaction based on implicit control emerges. Implicit interaction aims at controlling a computer system through behavioural or psychophysiological aspects of user state, independently of any intentionally communicated commands. In contrast to most forms of interaction implemented today, it does not require the user to explicitly communicate with the machine: users can focus on understanding the current state of the system and on developing strategies for optimally reaching the goal of the given interaction, while the system adapts automatically to their current strategies based on information extracted by a pBCI and the given context. In a first study, a proof of principle is given by implementing implicit control to guide simple cursor movements in a 2D grid to a target. The results clearly indicate the high potential of implicit interaction and introduce a new bandwidth of applications for passive Brain-Computer Interfaces.

GIST Talk - Mindless Versus Mindful Interaction (30 January, 2014)

Speaker: Yvonne Rogers

We are increasingly living in our digital bubbles. Even when physically together – as families and friends in our living rooms, outdoors and in public places – we have our eyes glued to our own phones, tablets and laptops. The new generation of ‘all about me’ health and fitness gadgets, wallpapered in gamification, is making it worse. Do we really need smart shoes that tell us when we are being lazy and glasses that tell us what we can and cannot eat? Is this what we want from technology – ever more forms of digital narcissism, virtual nagging and data addiction? In contrast, I argue for a radical rethink of our relationship with future digital technologies: one that inspires us, through shared devices, tools and data, to be more creative, playful and thoughtful of each other and our surrounding environments.

GIST Talk - Designing Hybrid Input Paradigms (16 January, 2014)

Speaker: Abigail Sellen

Visions of multimodal interaction with computers are as old as the field of HCI itself: by adding voice, gesture, gaze and other forms of input, the hope is that engaging with computers might be more efficient, expressive and natural. Yet it is only in the last decade that the dominance of multi-touch and the rise of gesture-based interaction have radically altered the ways we interact with computers. On the one hand these changes are inspirational and open up the design space; on the other, they have caused fractionation in interface design and added complexity for users. Many of these complexities are caused by layering new forms of input on top of existing systems and practices. I will discuss our own recent adventures in trying to design and implement these hybrid forms of input, and highlight the challenges and opportunities for future input paradigms. In particular, I conclude that the acid test for any of these new techniques is testing in the wild. Only then can we really design for diversity of people and of experiences.

GIST Seminar (28 November, 2013)

Speaker: Graham Wilson/Ioannis Politis
Perception of Ultrasonic Haptic Feedback / Evaluating Multimodal Driver Displays under Varying Situational Urgency

Two talks this week from members of the GIST group. 

Graham Wilson: Perception of Ultrasonic Haptic Feedback

Abstract: Ultrasonic haptic feedback produces tactile sensations in mid-air through acoustic radiation pressure. It is a promising means of providing 3D tactile sensations in open space without the user having to hold an actuator. However, research is needed to understand the basic characteristics of perception of this new feedback medium, and so how best to utilise ultrasonic haptics in an interface. This talk describes the technology behind ultrasonic haptic feedback and reports two experiments on fundamental aspects of tactile perception: 1) localisation of a static point and 2) the perception of motion. Traditional ultrasonic haptic devices are large and fixed to a horizontal surface, limiting the interaction and feedback space. To expand the interaction possibilities, the talk also discusses the feasibility of a mobile, wrist-mounted device for gestural interaction throughout a larger space.
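
The physical principle is standard phased-array focusing (a sketch of the general technique, not of any particular device: typical boards drive 40 kHz transducers, and each element is phase-shifted so its wave arrives at the focal point in phase, concentrating acoustic radiation pressure there):

    import math

    SPEED_OF_SOUND = 343.0  # m/s in air
    FREQ = 40_000.0         # typical ultrasonic transducer frequency (Hz)

    def focal_phases(transducer_positions, focus):
        """Phase offset (radians) for each transducer so that all waves
        arrive at `focus` in phase, creating a mid-air pressure focus."""
        phases = []
        for pos in transducer_positions:
            d = math.dist(pos, focus)  # path length from element to focus
            phases.append((-2 * math.pi * FREQ * d / SPEED_OF_SOUND) % (2 * math.pi))
        return phases

    # e.g. a 2x2 patch of emitters (1 cm pitch) focusing 20 cm above the array
    array = [(x * 0.01, y * 0.01, 0.0) for x in range(2) for y in range(2)]
    print(focal_phases(array, (0.005, 0.005, 0.2)))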

Ioannis Politis: Evaluating Multimodal Driver Displays under Varying Situational Urgency

Abstract: Previous studies have investigated audio, visual and tactile driver warnings, indicating the importance of conveying the appropriate level of urgency to the drivers. However, these modalities have never been combined exhaustively and tested under conditions of varying situational urgency, to assess their effectiveness both in the presence and absence of critical driving events. This talk will describe an experiment evaluating all multimodal combinations of such warnings under two contexts of situational urgency: a lead car braking and not braking. The results showed that responses were quicker when more urgent warnings were used, especially in the presence of a car braking. Participants also responded faster to the multimodal as opposed to unimodal signals. Driving behaviour improved in the presence of the warnings and the absence of a car braking. These results highlight the utility of multimodal displays to rapidly and effectively alert drivers and demonstrate how driving behaviour can be improved by such signals.

[GIST] Talk -- The Value of Visualization for Exploring and Understanding Data (11 July, 2013)

Speaker: Prof John Stasko

Investigators have an ever-growing suite of tools available for analyzing and understanding their data. While techniques such as statistical analysis, machine learning, and data mining all have benefits, visualization provides an additional unique set of capabilities. In this talk I will identify the particular advantages that visualization brings to data analysis beyond other techniques, and I will describe the situations when it can be most beneficial. To help support these arguments, I'll present a number of provocative examples from my own work and others'. One particular system will demonstrate how visualization can facilitate exploration and knowledge acquisition from a collection of thousands of narrative text documents, in this case, reviews of wines from Tuscany.

Information Visualization for Knowledge Discovery (13 June, 2013)

Speaker: Professor Ben Shneiderman, University of Maryland - College Park

This talk reviews growing commercial success stories such as www.spotfire.com and www.smartmoney.com/marketmap, plus emerging products such as www.hivegroup.com.

Full information on the talk is available on the University events listings.

[GIST] Talk -- Shape-changing Displays: The next revolution in display technology? (28 March, 2013)

Speaker: Dr Jason Alexander

Shape-changing interfaces physically mutate their visual display surface to better represent on-screen content, provide an additional information channel, and facilitate tangible interaction with digital content. This talk will preview the current state of the art in shape-changing displays, discuss our current work in this area, and explore the grand challenges in the field. The talk will include a hardware demonstration of one such shape-changing device, a Tilt Display.

Bio:

Jason is a lecturer in the School of Computing and Communications at Lancaster University. His primary research interests are in Human-Computer Interaction, with a particular interest in developing the next generation of interaction techniques. His recent research is hardware-driven, combining tangible interaction and future display technologies. He was previously a post-doctoral researcher in the Bristol Interaction and Graphics (BIG) group at the University of Bristol. Before that he was a PhD student in the HCI and Multimedia Lab at the University of Canterbury, New Zealand. More information can be found at http://www.scc.lancs.ac.uk/~jason/.

GIST Seminar: A Study of Information Management Processes across the Patient Surgical Pathway in NHS Scotland (14 March, 2013)

Speaker: Matt-Mouley Bouamrane

Preoperative assessment is a routine medical screening process to assess a patient's fitness for surgery. Systematic reviews have suggested that existing practices are not underpinned by a strong evidence base and may be sub-optimal.

We conducted a study of information management processes across the patient surgical pathway in NHS Scotland, using the Medical Research Council Complex Intervention Framework and mixed-methods.

Most preoperative services were created in the last 10 years to reduce late theatre cancellations and increase the proportion of day-case surgery. Two health boards have set up electronic preoperative information systems, and stakeholders at these services reported overall improvements in processes. Referrals from General Practitioners (GPs) are now made electronically, and GPs considered electronic referral a substantial improvement. GPs reported minimal interaction with preoperative services. Post-operative discharge information was often considered unsatisfactory.

Conclusion: Although substantial progress has been made in recent years towards improving information transfer and sharing among care providers within the NHS surgical pathway, considerable scope remains for improvement at the interfaces between services.

MultiMemoHome Project Showcase (19 February, 2013)

Speaker: various

This event is the final showcase of research and prototypes developed during the MultiMemoHome Project (funded by EPSRC). 

GIST Seminar: Understanding Visualization: A Formal Approach using Category Theory and Semiotics (31 January, 2013)

Speaker: Dr Paul Vickers

We combine the vocabulary of semiotics and category theory to provide a general framework for understanding visualization in practice, including: relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability and chart junk) arise naturally from it and can be defined formally and precisely. Further work will explore how the framework may be used to compare visualizations, especially those of different modalities; this may offer predictive potential before expensive user studies are carried out.
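
Read informally as a chain of mappings (a sketch of the framework as the abstract describes it; the paper's categorical treatment is richer), the pipeline is:

    \[
      S \xrightarrow{\ \text{measure}\ } D
        \xrightarrow{\ \text{render}\ } R
        \xrightarrow{\ \text{read}\ } V
        \xrightarrow{\ \text{interpret}\ } K
    \]

where S is the system under inspection, D the data collected from it, R the representation, V the visualization formed by the reader, and K the resulting knowledge; properties such as sensitivity or ambiguity then become statements about how well structure in S is preserved along the composite mapping.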
