Inference, Dynamics and Interaction Group

Overview

The Inference, Dynamics and Interaction group is a research group within the Information, Data and Analytics Section. It brings together three fundamental research areas: modern inference techniques, dynamical systems and control theory, and interaction design. These are applied in a wide range of situations:

- Computational Interaction
- Active Inference
- Closed-loop data science
- Mobile Interaction and novel sensors
- Machine learning in Science
- Computational Biology
- Computational methods in quantum imaging
- Entertainment systems
- Vision systems

The group's strength lies in its unusual combination of theoretical backgrounds, from machine learning to HCI, and its focus on building innovative working systems that achieve performance previously thought impossible, using the latest algorithms, sensors and devices. The group's skills in combining software engineering and mathematical inference allow us to attack complex systems problems with large, high-dimensional data spaces, and to do so in real time.

Research Projects

Current Research Projects:

Previous Research Projects:

  • QuantIC - Quantum Technology Hub in Quantum Imaging
  • MoreGrasp – EC Horizon 2020 project, 2015-2018
  • CoSound – A Cognitive Systems Approach to Enriched and Actionable Information from Audio Streams, supported by the Danish Strategic Research Council, Jan. 2012 – Dec. 2016
  • Information Theory approach for measuring & optimising computer-human interaction, Nokia-funded Ph.D. studentship.
  • Bang & Olufsen funded Ph.D. studentship.
  • Stomatal-based systems analysis of water use efficiency, BBSRC funded project, Prof. Michael Blatt (PI), Dr. Simon Rogers (coI) (BB/L001276/1)
  • In-silico integration of primary CML stem cell polyomic datasets to identify kinase-independent networks and novel prognostic biomarkers, Leukaemia and Lymphoma Research funded project, Prof. Tessa Holyoake (PI), Dr. Simon Rogers (coI)
  • Computational inference of biopathway dynamics and structures, EPSRC funded (EP/L020319/1), Prof. Dirk Husmeier (PI), Dr. Simon Rogers (coI), Dr. Maurizio Filippone (coI)
  • Unifying metabolome and proteome informatics, BBSRC funded (BB/L018616/2), Dr. Andrew Dowsey (PI), Dr. Simon Rogers (coI)
  • Human Emotional Communication in the field of Quality and Rapport, Nokia-funded Ph.D. studentship.
  • EC-COST action IC0601 on Sonic Interaction Design.
  • TOBI: Tools for Brain-Computer Interaction, EC-funded project. Roderick Murray-Smith (Glasgow PI), John Williamson; project coordinator: Prof. José del R. Millán, 2008-2013.
  • Multimodal, Negotiated Interaction in Mobile Scenarios, EPSRC funded project (£638k), Roderick Murray-Smith (PI), with Matt Jones (Swansea), Stephen Brewster, 2007-2010.
  • PASCAL network member, EC-funded network in Pattern Analysis, Statistical Modelling and Computational Learning.
  • Social Interaction: A Cognitive-Neurosciences Approach, ESRC funded project (£3.7 million), Simon Garrod (PI), 2008-2012.

Keywords

  • Machine Learning
  • Statistical Pattern Recognition
  • Human Computer Interaction
  • Mobile HCI
  • Brain Computer Interaction
  • Sensor systems
  • Urban Interactions/Smart Cities

Events


This Week’s Events

There are no events scheduled for this week

Upcoming Events

There are no upcoming events

Past Events

Halting Climate Change

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Professor Carl Rasmussen, University of Cambridge
Date: 19 February, 2024
Time: 16:00 - 17:00
Location: SAWB 423, Sir Alwyn Williams Building

Addressing climate change is essentially a problem of international cooperation. The necessary properties of successful cooperative schemes are well understood, but our main current international approaches, such as the Paris Agreement, have none of these properties, and are consequently extremely unlikely to succeed. You may think that the whole problem is simply completely intractable, but I think not. I’ll discuss a simple proposal eliminating the main shortcomings of the Paris Agreement, and aspects of how it might be implemented in practice.

This is a very informal talk, which will hopefully generate a lot of discussion. Some related ideas are discussed here: https://mlg.eng.cam.ac.uk/carl/climate/ 

Exploring Medical Image Segmentation with Fully Convolutional Vision Transformers

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Dr Chaitanya Kaul, University of Glasgow
Date: 17 November, 2023
Time: 14:00 - 15:00
Location: SAWB 423, Sir Alwyn Williams Building

Vision Transformers have been applied across many domains of computer vision. The challenges posed by the fine-grained nature of medical image analysis mean that the adaptation of transformers to it is still at a nascent stage. The overwhelming success of encoder-decoder architectures such as UNet lies in their ability to appreciate the fine-grained nature of the segmentation task, an ability which most existing transformer-based models do not currently possess. In this talk, I will go through our recent works [1] [2] [3] that address this shortcoming of transformer models for medical image segmentation, showing how an inductive bias towards images can be introduced into transformers so that they learn long-range semantic dependencies, and how such feature dependencies can be processed for effective, faster segmentation of CT, MRI and RGB modalities.

References

[1] Tragakis, A., Kaul, C., Murray-Smith, R. and Husmeier, D., 2023. The fully convolutional transformer for medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 3660-3669).

[2] Liu, Q., Kaul, C., Wang, J., Anagnostopoulos, C., Murray-Smith, R. and Deligianni, F., 2023, June. Optimizing Vision Transformers for Medical Image Segmentation. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1-5). IEEE.

[3] GLFNet: Global-Local (Frequency) Filter Networks for efficient Medical Image Segmentation. (Under Review, ISBI 2024)

Speaker: Dr. Chaitanya Kaul is a Research Associate in the Inference, Dynamics and Interaction Group at the School of Computing Science, University of Glasgow, working under Prof. Roderick Murray-Smith. He is currently funded by Google and QuantIC, working on 3D computational imaging problems, where he investigates how unconventional imaging sensors such as radars and SPADs can be used for 3D scene understanding and 3D scene interaction. He was previously funded by iCAIRD, where he investigated adversarial testing of machine learning algorithms to understand feature leakage in medical imaging applications. His research interests are in Computational Imaging, Medical Image Segmentation and 3D Shape Analysis.

DiffInfinite: Large Mask-Image Synthesis via Parallel Random Patch Diffusion in Histopathology

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Marco Aversa, University of Glasgow
Date: 10 November, 2023
Time: 14:00 - 15:00
Location: SAWB 423, Sir Alwyn Williams Building

We present DiffInfinite, a hierarchical diffusion model that generates arbitrarily large histological images while preserving long-range correlation structural information. Our approach first generates synthetic segmentation masks, subsequently used as conditions for the high-fidelity generative diffusion process. The proposed sampling method can be scaled up to any desired image size while only requiring small patches for fast training. Moreover, it can be parallelized more efficiently than previous large-content generation methods while avoiding tiling artifacts. The training leverages classifier-free guidance to augment a small, sparsely annotated dataset with unlabelled data. Our method alleviates unique challenges in histopathological imaging practice: large-scale information, costly manual annotation, and protective data handling. The biological plausibility of DiffInfinite data is evaluated in a survey by ten experienced pathologists as well as a downstream classification and segmentation task. Samples from the model score strongly on anti-copying metrics, which is relevant for the protection of patient data.
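The classifier-free guidance used here shapes sampling as well as training: one noise-prediction network is trained with the conditioning randomly dropped, and at sampling time the conditional and unconditional predictions are mixed. A minimal sketch of that sampling-time combination, assuming a hypothetical trained model eps_model(x, t, cond) rather than the DiffInfinite code:

    import torch

    # One classifier-free-guidance step (Ho & Salimans, 2022): combine the
    # conditional and unconditional noise predictions of a single network.
    def cfg_denoise(eps_model, x, t, mask_cond, guidance_w=2.0):
        eps_cond = eps_model(x, t, mask_cond)   # conditioned on the segmentation mask
        eps_uncond = eps_model(x, t, None)      # conditioning dropped
        # Larger guidance_w trades sample diversity for mask agreement.
        return (1 + guidance_w) * eps_cond - guidance_w * eps_uncond

Here guidance_w, eps_model and the None convention for a dropped condition are illustrative assumptions.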

From Brain Waves to Pixels: EEG-Driven GANs for Semantic Image Editing and Visual Cognition

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Carlos de la Torre-Ortiz, University of Helsinki
Date: 03 November, 2023
Time: 14:00 - 15:00
Location: SAWB 423, Sir Alwyn Williams Building

Carlos de la Torre is a 3rd-year PhD student at the University of Helsinki focusing on brain-computer interfacing who is visiting our group for two months (mid-October to mid-December).

His talk will explore the applications of brain-computer interfaces (BCIs) and generative adversarial networks (GANs) in the domains of semantic image editing and visual cognition research. First, he will introduce a novel approach that employs electroencephalography (EEG) as implicit feedback for training GANs in semantic feature representation. Second, he will show how to use EEG-based feedback to guide the latent representation within GANs, enabling nuanced image editing. Lastly, he will investigate the relationship between EEG and image perception, quantifying the distance between a perceived and a target image in the GAN's latent space. He will conclude by arguing that this graded response mechanism sets the stage for future BCI research that moves beyond binary classifications (e.g., P3 spellers) to leverage graded relevance based on proximity to a target.

Breaking Boundaries of Human-in-the-Loop Design Optimization

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Yi-Chi Liao, Aalto University
Date: 20 October, 2023
Time: 14:00 - 15:00
Location: SAWB 423, Sir Alwyn Williams Building

Human-in-the-loop optimization (HILO) has emerged as a principled solution for design optimization, utilizing computational optimization to intelligently select designs for user testing. While HILO has demonstrated success within the human-computer interaction (HCI) domain, its application has faced various constraints. This talk explores computational augmentations that push the boundaries of HILO, enabling its deployment in diverse and realistic design tasks. The talk explores several enhancements for HILO, addressing its limitations and expanding its scope; it includes extensions of HILO to multi-objective design tasks, population-level optimization within HILO, and the application of HILO in designing physical interfaces. Additionally, the talk investigates the future potential of HILO, empowered by advanced user models and simulations. Overall, this talk aims to showcase HILO's progress, its capacity to tackle real-world design problems, and its role in shaping the future of design optimization.
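The core HILO loop pairs a surrogate model of user feedback with an acquisition function that chooses the next design to test on a user. Below is a generic sketch of such a loop, with a small Gaussian-process surrogate and expected improvement; ask_user and the two-parameter design space are stand-ins for a real user trial, and none of this is the speaker's code:

    import numpy as np
    from scipy.stats import norm

    def rbf(a, b, ls=0.2):
        d = a[:, None, :] - b[None, :, :]
        return np.exp(-0.5 * (d ** 2).sum(-1) / ls ** 2)

    def gp_posterior(X, y, Xs, noise=1e-2):
        # Exact GP regression posterior at candidate points Xs.
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(X, Xs)
        mu = Ks.T @ np.linalg.solve(K, y)
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
        return mu, np.sqrt(np.maximum(var, 1e-9))

    def ask_user(design):
        # Stand-in for a real user trial with a hypothetical latent preference.
        return -np.sum((design - 0.3) ** 2)

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(3, 2))                    # initial designs (2 parameters)
    y = np.array([ask_user(x) for x in X])
    for _ in range(10):
        ys = (y - y.mean()) / y.std()
        cand = rng.uniform(size=(256, 2))
        mu, sd = gp_posterior(X, ys, cand)
        z = (mu - ys.max()) / sd
        ei = (mu - ys.max()) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
        x_next = cand[np.argmax(ei)]                # the design tested on the user next
        X, y = np.vstack([X, x_next]), np.append(y, ask_user(x_next))
    print("best design found:", X[np.argmax(y)])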

Simulating Interaction Movements via Optimal Feedback Control and Deep Reinforcement Learning

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Markus Klar, University of Bayreuth
Date: 06 October, 2023
Time: 14:00 - 15:00
Location: SAWB 302 (Rod's office) to watch an online talk

Extensive user studies are required for the development of interaction techniques, which can be both time-consuming and expensive. To keep pace with the growing market for VR/AR applications, the ability to predict user behaviour using in silico methods and apply this knowledge during the development process is crucial.

We formulate the interaction of humans with computers as an optimal control problem and explore how different Optimal Feedback Control (OFC) methods can predict user behaviour. In particular, we combine Model Predictive Control with a state-of-the-art biomechanical model, implemented in the fast physics engine MuJoCo. Compared to real users performing mid-air pointing movements, our approach can produce end-effector trajectories as well as joint movements that are within the between-user variance.

In addition, we train agents to solve different interaction tasks, e.g., tracking or choice reaction, using Deep Reinforcement Learning (DRL). Unlike most OFC methods, DRL approaches can cope well with larger control/state spaces and therefore allow the integration of direct muscle control as well as visual and proprioceptive perception. The resulting simulations can help designers of interaction techniques to learn about possible impacts of design choices and to optimise interfaces in terms of ergonomics or efficiency.

In the future, it is possible that real-time predictions may enhance both the speed and precision of interactions, ultimately leading to seamless interactions with the virtual world.
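To make the Model Predictive Control formulation concrete, here is a toy random-shooting MPC loop on a one-dimensional point mass. The talk's actual setup couples MPC with a biomechanical model in MuJoCo, which this sketch does not attempt to reproduce:

    import numpy as np

    def dynamics(state, u, dt=0.05):
        # Toy 1-D point mass: state = (position, velocity), control = force.
        pos, vel = state
        return np.array([pos + dt * vel, vel + dt * u])

    def mpc_action(state, target, horizon=15, n_samples=200, rng=np.random.default_rng(0)):
        # Random-shooting MPC: sample control sequences, roll the model
        # forward, return the first action of the cheapest sequence.
        seqs = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
        costs = np.zeros(n_samples)
        for i, seq in enumerate(seqs):
            s = state.copy()
            for u in seq:
                s = dynamics(s, u)
                costs[i] += (s[0] - target) ** 2 + 1e-3 * u ** 2   # tracking + effort
        return seqs[np.argmin(costs), 0]

    state = np.array([0.0, 0.0])
    for _ in range(50):                 # closed loop: re-plan at every step
        state = dynamics(state, mpc_action(state, target=0.5))
    print("final position:", state[0])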

Detecting and Countering Untrustworthy Artificial Intelligence (AI)

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Nikola Banovic, University of Michigan, USA
Date: 04 May, 2023
Time: 15:00 - 16:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

The ability to distinguish trustworthy from untrustworthy Artificial Intelligence (AI) is critical for broader societal adoption of AI. Yet, existing Explainable AI (XAI) methods attempt to persuade end-users that an AI is trustworthy by justifying its decisions. Here, we first show how untrustworthy AI can misuse such explanations to exaggerate its competence under the guise of transparency to deceive end-users, particularly those who are not savvy computer scientists. Then, we present findings from the design and evaluation of two alternative XAI mechanisms that help end-users form their own explanations about the trustworthiness of AI. We use our findings to propose an alternative framing of XAI that helps end-users develop the AI literacy they require to critically reflect on AI and assess its trustworthiness. We conclude with implications for future AI development and testing, public education and investigative journalism about AI, and end-user advocacy to increase access to AI for a broader audience of end-users.
Additional Key Words and Phrases: Artificial Intelligence (AI); Explainable AI; Trustworthy AI; Responsible AI.

Design Engineering for AI Engineering

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Per Ola Kristensson, University of Cambridge
Date: 19 January, 2023
Time: 11:00 - 12:00
Location: Room 422, SAWB

In this talk I will give an overview of some of our recent work on designing AI-infused interactive systems for a variety of applications, including efficient communication systems for augmentative and alternative communication and gesture-based systems for virtual and augmented reality. I will then discuss the challenges in the AI engineering of these and other systems, and propose design engineering approaches that can help ensure AI-infused systems are designed to be effective, efficient, and safe.

From Gambles to User Interfaces: Simulating Decision-Making in the Real World

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Aini Putkonen, Aalto University
Date: 08 December, 2022
Time: 13:00 - 14:00
Location: Room 422, SAWB

Classical models of decision-making offer valuable insights about people's decision-making tendencies, for example, how they manage risk and uncertainty. This behaviour is often studied in tasks where individuals choose between uncertain outcomes, or gambles. Such tasks are also common when using interactive systems. However, applying models of decision-making in naturalistic settings can be a challenge, as they were largely developed in controlled experiments. Experimental settings allow controlling the task design, whereas real-world user interfaces often lack this level of control. In this talk, I hypothesise that considering aspects of human cognition is key in moving from modelling gambles to similar tasks on real information-rich user interfaces. Such aspects include the visual system, memory and cognitive capacity. I address how to model real-world user behaviour by combining understanding of cognition with reinforcement learning. In particular, theories of human decision-making and psychology are used to process information on displays, producing human-like observations for the learning problem. This problem is then solved using reinforcement learning. The advantages of this approach will be discussed, including construction of simulation models of users for applications like prototyping, recommender systems, and decision support.
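As a concrete example of the classical models referred to above, a gamble can be scored with a subjective value and the choice made probabilistically. A minimal sketch with a prospect-theory-style valuation and softmax choice (illustrative parameter values, not the speaker's model):

    import numpy as np

    def subjective_value(outcome, prob, alpha=0.88, gamma=0.61):
        # Concave value function with inverse-S probability weighting
        # (in the spirit of Tversky & Kahneman, 1992).
        w = prob ** gamma / (prob ** gamma + (1 - prob) ** gamma) ** (1 / gamma)
        return w * np.sign(outcome) * np.abs(outcome) ** alpha

    def choice_probs(gambles, temperature=1.0):
        # Softmax choice over gambles, each given as (outcome, probability) pairs.
        v = np.array([sum(subjective_value(o, p) for o, p in g) for g in gambles])
        e = np.exp((v - v.max()) / temperature)
        return e / e.sum()

    # A sure £3 versus an 80% chance of £4: a classic risk-attitude probe.
    print(choice_probs([[(3.0, 1.0)], [(4.0, 0.8), (0.0, 0.2)]]))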

NeurIPS warm up talks: Bessel Equivariant Networks for Inversion of Transmission Effects in Multi-Mode Optical Fibres and Physical Data Models in Machine Learning Imaging Pipelines

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Marco Aversa and Josh Mitton , University of Glasgow
Date: 24 November, 2022
Time: 15:00 - 16:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

Josh and Marco will give practice talks for their upcoming NeurIPS papers:

J. Mitton, S.P. Mekhail, M. Padgett, D. Faccio, M. Aversa, and R. Murray-Smith, Bessel Equivariant Networks for Inversion of Transmission Effects in Multi-Mode Optical Fibres, NeurIPS 2022.

We develop a new type of model for solving the task of inverting the transmission effects of multi-mode optical fibres through the construction of an SO+(2, 1)-equivariant neural network. This model takes advantage of the azimuthal correlations known to exist in fibre speckle patterns and naturally accounts for the difference in spatial arrangement between input and speckle patterns. In addition, we use a second post-processing network to remove circular artifacts, fill gaps, and sharpen the images, which is required due to the nature of optical fibre transmission. This two-stage approach allows for the inspection of the predicted images produced by the more robust, physically motivated equivariant model, which could be useful in a safety-critical application, or of the output of both models, which produces high quality images. Further, this model can scale to previously unachievable resolutions of imaging with multi-mode optical fibres and is demonstrated on 256 × 256 pixel images. This is a result of improving the trainable parameter requirement from O(N^4) to O(m), where N is the pixel size and m is the number of fibre modes. Finally, this model generalises to new images, outside of the set of training data classes, better than previous models.

Aversa, Marco*; Oala, Luis; Clausen, Christoph; Murray-Smith, Roderick; Sanguinetti, Bruno, Physical Data Models in Machine Learning Imaging Pipelines https://ml4physicalsciences.github.io/2022/files/NeurIPS_ML4PS_2022_136.pdf   https://ml4physicalsciences.github.io/2022/ 
Light propagates from the object through the optics up to the sensor to create an image. Once the raw data is collected, it is processed through a complex image signal processing (ISP) pipeline to produce an image compatible with human perception. However, this processing is rarely considered in machine learning modelling because available benchmark data sets are generally not in raw format. This study shows how to embed the forward acquisition process into the machine learning model. We consider the optical system and the ISP separately. Following the acquisition process, we start from a drone and airship image dataset to emulate realistic satellite raw images with on-demand parameters. The end-to-end process is built to resemble the optics and sensor of the satellite setup. These parameters are satellite mirror size, focal length, pixel size and pattern, exposure time and atmospheric haze. After raw data collection, the ISP plays a crucial role in neural network robustness. We jointly optimize a parameterized, differentiable image processing pipeline with a neural network model. This can lead to a speed-up and stabilization of classifier training, with a gain of up to 20% in validation accuracy.
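The joint optimisation of a parameterized ISP with a downstream network can be illustrated in a few lines of PyTorch: the ISP's parameters receive gradients from the task loss exactly like the network's. The two-parameter ISP below (per-channel gain and a gamma curve) is a made-up stand-in for the authors' pipeline:

    import torch
    import torch.nn as nn

    class DifferentiableISP(nn.Module):
        # A tiny parameterised ISP: learnable per-channel gain and gamma.
        def __init__(self):
            super().__init__()
            self.gain = nn.Parameter(torch.ones(3))
            self.log_gamma = nn.Parameter(torch.zeros(1))

        def forward(self, raw):                     # raw: (B, 3, H, W), non-negative
            x = raw * self.gain.view(1, 3, 1, 1)
            return x.clamp(min=1e-6) ** torch.exp(self.log_gamma)

    isp = DifferentiableISP()
    classifier = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                               nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    opt = torch.optim.Adam(list(isp.parameters()) + list(classifier.parameters()), lr=1e-3)

    raw = torch.rand(4, 3, 64, 64)                  # stand-in for raw sensor frames
    labels = torch.randint(0, 10, (4,))
    loss = nn.functional.cross_entropy(classifier(isp(raw)), labels)
    opt.zero_grad(); loss.backward(); opt.step()    # gradients flow through the ISP too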

Generating music in the raw audio domain

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Sander Dieleman, Deep Mind
Date: 18 February, 2021
Time: 12:00 - 13:00
Location: Zoom

Realistic music generation is a challenging task. When machine learning is used to build generative models of music, typically high-level representations such as scores, piano rolls or MIDI sequences are used that abstract away the idiosyncrasies of a particular performance. But these nuances are very important for our perception of musicality and realism, so we embark on modelling music in the raw audio domain. I will discuss some of the advantages and disadvantages of this approach, and the challenges it entails.

Closing the Dequantization Gap: PixelCNN as a Single-Layer Flow

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Ole Winther, Technical University of Denmark
Date: 21 January, 2021
Time: 12:00 - 13:00
Location: Zoom

 https://uofglasgow.zoom.us/j/91703102253?pwd=QnlYYWJyVWVoanFJTk5nNFJ4Tms4UT09

Flow models have recently made great progress at modeling ordinal discrete data such as images and audio. Due to the continuous nature of flow models, dequantization is typically applied when using them for such discrete data, resulting in lower bound estimates of the likelihood. In this paper, we introduce subset flows, a class of flows that can tractably transform finite volumes and thus allow exact computation of likelihoods for discrete data. Based on subset flows, we identify ordinal discrete autoregressive models, including WaveNets, PixelCNNs and Transformers, as single-layer flows. We use the flow formulation to compare models trained and evaluated with either the exact likelihood or its dequantization lower bound. Finally, we study multilayer flows composed of PixelCNNs and non-autoregressive coupling layers and demonstrate state-of-the-art results on CIFAR-10 for flow models trained with dequantization.
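The dequantization gap is easiest to see in one dimension: for a model with a tractable CDF F, the exact probability of an integer value x is the unit-interval volume F(x+1) - F(x), the kind of quantity subset flows make tractable, while uniform dequantization only yields the Jensen lower bound E_u[log f(x+u)]. A numpy sketch with a toy logistic density (illustrative, not the paper's construction):

    import numpy as np

    # A 1-D model with logistic CDF F: exact discrete likelihood vs the
    # uniform-dequantization lower bound.
    mu, s = 120.0, 25.0
    F = lambda x: 1.0 / (1.0 + np.exp(-(x - mu) / s))
    f = lambda x: np.exp(-(x - mu) / s) / (s * (1.0 + np.exp(-(x - mu) / s)) ** 2)

    x = np.arange(256)
    exact = np.log(F(x + 1) - F(x))                  # exact discrete log-likelihood

    rng = np.random.default_rng(0)
    u = rng.uniform(size=(1000, 256))
    dequant = np.log(f(x[None, :] + u)).mean(axis=0) # dequantization lower bound

    print("mean dequantization gap (nats):", np.mean(exact - dequant))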

Using geometry to form identifiable latent variable models and Isometric Gaussian Process Latent Variable Model

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Søren Hauberg & Martin Jørgensen, Technical University of Denmark
Date: 03 December, 2020
Time: 12:00 - 13:30
Location: Zoom

Please note that the timeslot has changed to 12:00-13:30.


There will be two talks in this session:

-----------------------------------------------------------

12:00-13:00 Using geometry to form identifiable latent variable models - Prof Søren Hauberg

Generative models learn a compressed representation of data that is often used for downstream tasks such as interpretation, visualization and prediction via transfer learning. Unfortunately, the learned representations are generally not statistically identifiable, leading to a high risk of arbitrariness in the downstream tasks. We propose to use differential geometry to construct representations that are invariant to reparametrizations, thereby solving the bulk of the identifiability problem. We demonstrate that the approach is deeply tied to the uncertainty of the representation and that practical applications require high-quality uncertainty quantification. With the identifiability problem solved, we show how to construct better priors for generative models, and that the identifiable representations reveal signals in the data that were otherwise hidden.
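The geometric construction rests on the pullback metric of the generator: quantities computed from M(z) = J(z)^T J(z), such as curve lengths, are invariant to reparametrisations of the latent space. A minimal PyTorch sketch with a toy decoder (not the speaker's models):

    import torch

    # Pullback metric M(z) = J(z)^T J(z) of a toy decoder g: curve lengths
    # measured under M are invariant to reparametrisation of the latent space.
    decoder = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                                  torch.nn.Linear(64, 10))

    def metric(z):
        J = torch.autograd.functional.jacobian(decoder, z)   # shape (10, 2)
        return J.T @ J                                       # shape (2, 2)

    print(metric(torch.zeros(2)))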

----------------------------------------------------------

13:00-13:30: Isometric Gaussian Process Latent Variable Model - Martin Jørgensen, Postdoc

I present a generative unsupervised model where the latent variable respects both the distances and the topology of the modeled data. The model leverages the Riemannian geometry of the generated manifold to endow the latent space with a well-defined stochastic distance measure, which is modeled as Nakagami distributions. These stochastic distances are sought to be as similar as possible to observed distances along a neighborhood graph through a censoring process. The model is inferred by variational inference. I demonstrate how the model can encode invariances in the learned manifolds.

-----------------------------------------------------------

Zoom link:
https://uofglasgow.zoom.us/j/95874104571?pwd=Rjc0VERQR25ReHRSSzRweUtEUlYvUT09

Soft Squishy Electronic Skin

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Ravinder Dahiya, University of Glasgow
Date: 23 November, 2020
Time: 13:00 - 14:00
Location: Zoom

https://uofglasgow.zoom.us/j/92128593434?pwd=enhTc2xvKyt5Njd5MDU3K1p0ZkFDdz09

The miniaturization-led advances in microelectronics over the past 50 years have revolutionized our lives through fast computing and communication. Recent advances in the field are propelled by applications such as electronic skin in robotics, wearable systems, and healthcare technologies. Often these applications require electronics to be soft and squishy so as to conform to 3D surfaces. These requirements call for new methods to realize sensors, actuators, electronic devices and circuits on unconventional substrates such as plastics, papers and elastomers. This lecture will present various approaches (over different time and dimension scales) for obtaining distributed electronic, sensing and actuation devices on soft and flexible substrates, especially in the context of tactile or electronic skin (eSkin). These approaches range from distributed off-the-shelf electronics integrated on flexible printed circuit boards to novel alternatives such as eSkin constituents obtained by printed nanowires, graphene and ultra-thin chips. The technology behind such sensitive, flexible and squishy electronic systems is also a key enabler for numerous emerging fields such as the internet of things, smart cities and mobile health. This lecture will also discuss how flexible electronics research may unfold in the future.

Bayesian model-based clustering in high dimensions

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Paul Kirk, University of Cambridge
Date: 19 November, 2020
Time: 12:00 - 13:00
Location: Zoom

https://uofglasgow.zoom.us/j/92184721880?pwd=dExzeDhxU3h6RnplYlg1UkoxY3RjZz09

Although the challenges presented by high dimensional data in the context of regression are well-known and the subject of much current research, comparatively little work has been done on this in the context of clustering. In this setting, the key challenge is that often only a small subset of the features provides a relevant stratification of the population. Identifying relevant strata can be particularly challenging when dealing with high-dimensional datasets, in which there may be many features that provide no information whatsoever about population structure, or -- perhaps worse -- in which there may be (potentially large) feature subsets that define irrelevant stratifications. For example, when dealing with genetic data, there may be some genetic variants that allow us to group patients in terms of disease risk, but others that would provide completely irrelevant stratifications (e.g. which would group patients together on the basis of eye or hair colour). Bayesian profile regression is an outcome-guided model-based clustering approach that makes use of a response in order to guide the clustering toward relevant stratifications. Here we consider how this approach can be extended to the “multiview” setting, in which different groups of features (“views”) define different stratifications. We present some results in the context of breast cancer subtyping to illustrate how the approach can be used to perform integrative clustering of multiple ‘omics datasets.
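The failure mode motivating this line of work is easy to reproduce: adding enough uninformative features typically degrades an unguided model-based clustering, which is what outcome guidance is meant to counteract. A small sketch of the problem with a Gaussian mixture (Bayesian profile regression itself is not implemented here):

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=300)                          # the relevant stratification
    relevant = labels[:, None] * 2.5 + rng.normal(size=(300, 2))   # 2 informative features
    noise = rng.normal(size=(300, 100))                            # 100 uninformative features
    X = np.hstack([relevant, noise])

    for name, data in [("all features", X), ("relevant only", relevant)]:
        pred = GaussianMixture(2, random_state=0).fit_predict(data)
        print(name, adjusted_rand_score(labels, pred))   # agreement typically drops with noise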

Multiresolution Multitask Gaussian Processes: Air quality in London

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Theo Damoulas, University of Warwick
Date: 27 February, 2020
Time: 12:00 - 13:00
Location: Sir Alwyn Williams Building, 422 Seminar Room


We consider evidence integration from potentially dependent observation processes under varying spatio-temporal sampling resolutions and noise levels. We offer a multi-resolution multi-task framework, termed MRGPs, while allowing for both inter-task and intra-task multi-resolution and multi-fidelity. We develop shallow Gaussian Process (GP) mixtures that approximate the difficult to estimate joint likelihood with a composite one and deep GP constructions that naturally handle scaling issues and biases. By doing so, we generalize and outperform state of the art GP compositions and offer information-theoretic corrections and efficient variational approximations for inference. We demonstrate the competitiveness of MRGPs on synthetic settings and on the challenging problem of hyper-local estimation of air pollution levels across London from multiple sensing modalities operating at disparate spatio-temporal resolutions.

Artificial Intelligence for Data Analytics

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Chris Williams, University of Edinburgh
Date: 23 January, 2020
Time: 12:00 - 13:00
Location: Sir Alwyn Williams Building, 422+423 Seminar Room

 

The practical work of deploying a machine learning system is dominated by issues outside of training a model: data preparation, data cleaning, understanding the data set, debugging models, and so on. The goal of the Artificial Intelligence for Data Analytics project at the Alan Turing Institute is to help to automate the whole data analytics process by drawing on advances in AI and machine learning. We will describe tools to address such tasks, including identifying syntactic and semantic data types, data integration, and identifying and repairing missing and anomalous data.

Joint work with the AIDA team: Taha Ceritli, James Geddes, Ernesto Jimenez-Ruiz, Ian Horrocks, Alfredo Nazabal, Tomas Petricek, Charles Sutton, Gerrit Van Den Burg.

Disentangled representation learning in healthcare applications

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Sotirios A Tsaftaris, University of Edinburgh
Date: 20 January, 2020
Time: 12:00 - 13:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

Prof. Sotirios A Tsaftaris

Canon Medical/Royal Academy of Engineering Research Chair in Healthcare AI and Chair in Machine Learning and Computer Vision at the University of Edinburgh (UK)

Turing Fellow, Alan Turing Institute

 

Abstract: The detection of disease, segmentation of anatomy and other classical image analysis tasks, have seen incredible improvements due to deep learning. Yet these advances need lots of data: for every new task, new imaging scan, new hospital, more training data are needed.  In this talk, I will show how deep neural networks can learn latent and disentangled embeddings suitable for several analysis tasks. Within a multi-task learning setting I will show that the same framework can learn embeddings drawing supervision from self-supervised tasks that use reconstruction and also temporal dynamics, and weakly supervised tasks obtaining supervision from health records [1,2]. I will then present an extension of this framework on multi-modal (multi-view) learning and inference [3]. I will then discuss how different architectural choices affect disentanglement [3] and highlight issues that raise the need for (new) metrics for assessing disentanglement in content/style disentanglement settings. Time permitting, I will present a challenging auto-regressive task: learning to age the human brain [4].  I will conclude by highlighting challenges for deep learning in healthcare in general.

 

Papers that will be discussed (in approximate order):

  1. A. Chartsias, T. Joyce, G. Papanastasiou, S. Semple, M. Williams, D. Newby, R. Dharmakumar, S.A. Tsaftaris, 'Disentangled Representation Learning in Cardiac Image Analysis,' Medical Image Analysis, Vol 58, Dec 2019 https://arxiv.org/abs/1903.09467
  2. G. Valvano, A. Chartsias, A. Leo, S.A. Tsaftaris, 'Temporal Consistency Objectives Regularize the Learning of Disentangled Representations,' First MICCAI Workshop, DART 2019, in Conjunction with MICCAI 2019, Shenzhen, China, October 13 and 17, 2019. https://arxiv.org/abs/1908.11330
  3. A. Chartsias, G. Papanastasiou, C. Wang, S. Semple, D. Newby, R. Dharmakumar, S.A. Tsaftaris, 'Disentangle, align and fuse for multimodal and zero-shot image segmentation,' https://arxiv.org/abs/1911.04417
  4. T. Xia, A. Chartsias, S.A. Tsaftaris, 'Consistent Brain Ageing Synthesis,' MICCAI 2019. http://tsaftaris.com/preprints/Tian_MICCAI_2019.pdf

Bio [Long]: Prof. Sotirios A. Tsaftaris obtained his PhD and MSc degrees in Electrical Engineering and Computer Science (EECS) from Northwestern University, USA, in 2006 and 2003 respectively. He obtained his Diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki, Greece.

Currently, he is Canon Medical/Royal Academy of Engineering Research Chair in Healthcare AI, and Chair in Machine Learning and Computer Vision at the University of Edinburgh (UK). He is also a Turing Fellow with the Alan Turing Institute. Previously he was an Assistant Professor with IMT Institute for Advanced Studies, Lucca, Italy and Director of the Pattern Recognition and Image Analysis Unit at IMT (2011-2015). Prior to that, he held a joint Research Assistant Professor appointment at Northwestern University with the Departments of Electrical Engineering and Computer Science (EECS) and Radiology, Feinberg School of Medicine (2006-2011).

Machine learning for healthcare applications: Becoming the expert

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Alison O'Neil, Canon Medical Research (Edinburgh)
Date: 05 December, 2019
Time: 12:00 - 13:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

Machine learning has shown great promise for healthcare applications, matching human performance for some classes of problem. Meantime, the use of electronic medical records is becoming more common and healthcare technologies and infrastructure are advancing, whilst radiology and many other medical specialties are under-resourced. As a result, there are huge opportunities to use automation and AI to improve workflow and to assist the doctor to make complex decisions faster and more accurately. However, data is often sensitive and difficult to access (especially for rare pathologies), expert annotators are a scarce resource, and high stakes means stringent accuracy requirements. This talk will discuss the challenges - and ways to solve them! - of training real-world expert AI systems for healthcare applications, illustrated through Canon Medical’s AI Research projects in image analysis, natural language processing, and risk stratification from clinical data.

A reinforcement learning based traffic signal control in a connected vehicle environment

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Sebastian Stein and Saeed Maadi, University of Glasgow
Date: 29 November, 2019
Time: 14:00 - 15:00
Location: Lilybank Gardens, F121 Conference Room

Understanding where cells move using microscopes, computers and equations.

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Robert Insall, University of Glasgow
Date: 29 November, 2019
Time: 10:00 - 11:30
Location: Lilybank Gardens, F121 Conference Room

Throughout our life cycles, the cells in our bodies need to move around. If they need to get anywhere they need to be steered; random migration is ineffective over longer distances.

Recent research has taught us a great deal about how cells respond to steering cues, but surprisingly little about where those cues come from. Recently, a combination of mathematical modelling, studies in amoebas, and analysis of cancer cells shows that cells frequently make their own gradients, often from sources with no positional information at all, at the same time as they respond to them. 
Because this is based around positive feedback loops powered by signalling and diffusion, the results are often unpredictable and frequently fascinating and beautiful. Cells may move in waves, streams, or repel one another into carefully-delineated territories. Furthermore, the process of doing so can make them remarkably better at interpreting their environments than we have ever expected was possible. 

I will show examples of cells moving collectively at a distance from one another, solving mazes of different shapes, and the mechanisms that enable cancer cells to spread from tumours into the bloodstream.  I will also describe uses of transfer learning to identify mutations in the relevant pathways and propose a difficult inverse problem that Computing Science experts may be able to solve.

Statistical emulation of cardiac mechanics

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Dirk Husmeier, University of Glasgow
Date: 08 November, 2019
Time: 14:00 - 15:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

In recent years, we have witnessed impressive developments in the mathematical modelling of complex physiological systems. This provides unprecedented opportunities for improved disease diagnosis based on an enhanced quantitative physiological understanding. In a recent proof of concept study, we have shown that the biomechanical parameters of a state-of-the-art cardiac mechanics model have encouraging diagnostic power for early diagnosis of the risk of myocardial infarction (heart attack) and decision making related to alternative treatment options. However, estimating the biomechanical parameters non-invasively from magnetic resonance imaging (MRI) is computationally expensive and can take several weeks of high-performance computing time. This constitutes a severe obstacle for translational research, preventing uptake in the clinic and thwarting any pathway to genuine impact in healthcare. The problem is that state-of-the-art mathematical models of complex physiological systems are typically based on systems of nonlinear coupled partial differential equations (PDEs), which have no closed-form solution and have to be integrated numerically, e.g. using finite element simulations. This is not an issue for the so-called forward problem, where the objective is to understand a system’s behaviour for given physiological parameters. However, many physiological parameters cannot be measured noninvasively, and hence have to be estimated indirectly based on a quantitative measure of the discrepancy between model predictions and non-invasive measurements. This calls for thousands of numerical integrations as part of an iterative optimization or sampling routine, incurring computational run times in the order of days or weeks.

A potential way to deal with the high computational complexity and make progress towards a clinical decision support system that can make disease prognostications and risk assessments in real time, is statistical emulation. The idea is to approximate the computationally expensive mathematical model (the simulator) with a computationally cheap statistical surrogate model (the emulator) by a combination of massive parallelization and nonlinear regression. Starting from a space-filling design in parameter space, the underlying partial differential equations are solved numerically on a parallel computer cluster, and methods from nonparametric Bayesian statistics based on Gaussian Processes (GPs) are applied to multivariate smooth interpolation. When new data become available (e.g. myocardial strains from MRI scans) the resulting proxy objective function can be maximized (for maximum likelihood estimation) or sampled from (using Markov chain Monte Carlo) at low computational costs, without further computationally expensive simulations of the original mathematical model.
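A stripped-down version of that emulation recipe, with a cheap analytic function standing in for the expensive PDE-based simulator and a grid search standing in for the optimisation or MCMC step (a sketch of the general workflow, not the group's emulator):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def simulator(theta):
        # Stand-in for an expensive PDE-based forward model (here: cheap, analytic).
        return np.sin(3 * theta[0]) * np.exp(-theta[1])

    # 1. Space-filling design over a 2-D parameter space; in practice the runs
    #    are done once, in parallel, on a cluster.
    rng = np.random.default_rng(0)
    design = rng.uniform(size=(200, 2))
    runs = np.array([simulator(t) for t in design])

    # 2. Fit the emulator (GP regression as the cheap statistical surrogate).
    emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(design, runs)

    # 3. Cheap inverse problem: match an observation with no further simulation.
    observed = simulator(np.array([0.4, 0.7]))
    grid = rng.uniform(size=(10000, 2))
    mismatch = (emulator.predict(grid) - observed) ** 2
    print("estimated parameters:", grid[np.argmin(mismatch)])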

In my talk, I will compare different emulation strategies and loss functions, and assess the reduction in computational complexity. For large data sets, it is not computationally feasible to train a GP, as the computational complexity is of the order of the third power of the data set size, and I will compare various alternative paradigms for dealing with this issue. I will describe a proof-of-concept study, with encouraging results: While conventional parameter estimation based on numerical simulations from the cardiac mechanics model leads to computational costs in the order of weeks, our emulation method reduces the computational complexity to the order of a quarter of an hour, while effectively maintaining the same level of accuracy. However, there are still substantial hurdles to overcome in our endeavour to move this work forward towards personalised medicine and to develop a decision support system that can be used by clinical practitioners, which I will discuss.

If time permits, I will discuss an extension of this framework to uncertainty quantification in the fluid dynamics of the pulmonary blood circulation system, with applications to the diagnosis of pulmonary hypertension (high blood pressure in the lungs).

Machine Learning Models for Inference from Outliers

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Mahesan Niranjan, University of Southampton
Date: 26 September, 2019
Time: 12:00 - 13:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

Abstract:

While much of the recent literature on machine learning addresses regression and classification problems, several problems of interest relate to detecting a relatively small number of outliers from large collections of data. Such problems have been addressed in the context of target tracking, condition monitoring of complex engines and patient health monitoring in an intensive care setting, for example. The popular approach, in these settings, of estimating a probability density over normal data and comparing the likelihood of a test observation against a threshold set from this suffers from the well-known problem of the curse of dimensionality. Circumventing this involves modelling – data driven or otherwise – to capture known relationships in the data and looking for novelty in the residuals. This talk will describe several problems taken from the Computational Biology, Chemistry and Fraud Detection domains to illustrate this. We will discuss structured matrix approximation and tensor methods for multi-view data and suitable algorithms for their estimation.
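The residual-based recipe, model the known relationships and look for novelty in what remains, can be sketched in a few lines; here a linear model and a robust threshold stand in for the structured matrix and tensor methods of the talk:

    import numpy as np

    # Fit the "normal" relationship between two correlated channels, then
    # flag observations whose residual exceeds a robust threshold.
    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 2.0 * x + rng.normal(scale=0.1, size=500)
    y[::50] += 3.0                                  # inject a few outliers

    w = np.polyfit(x, y, 1)                         # captures the known relationship
    residuals = y - np.polyval(w, x)
    mad = np.median(np.abs(residuals - np.median(residuals)))
    flags = np.abs(residuals) > 5 * 1.4826 * mad    # robust z-score threshold
    print("flagged indices:", np.flatnonzero(flags))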

 

Speaker:

Mahesan Niranjan is Professor of Electronics and Computer Science at the University of Southampton. Prior to this appointment in 2008, he held academic positions at the University of Cambridge as Lecturer in Information Engineering and at the University of Sheffield as Professor of Computer Science. At Sheffield, he also served as Head of Computer Science and Dean of Engineering. His research is in the area of Machine Learning, and he has worked on both the algorithmic and applied aspects of the subject. Some of his work has been fairly influential in the field – e.g. the SARSA algorithm widely used in the Reinforcement Learning literature. More recently, his focus of research is on data-driven inference problems in computational biology. More from: https://tinyurl.com/y5fnymel

Deep Residual Learning for Everyday Computer Vision Tasks

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Chaitanya Kaul, University of York
Date: 06 September, 2019
Time: 14:30 - 15:30
Location: Sir Alwyn Williams Building, 423 Seminar Room

Residual learning is a concept in neural networks that exploits feature reuse from intermediate layers of a neural network to create more robust feature embeddings. In this talk, I will present three deep learning architectures that deal with the processing of 2D images and 3D point clouds, exploiting residual learning. I will present the evaluation of these models on benchmark medical image segmentation datasets as well as benchmark 3D point cloud classification and segmentation datasets. The results show high performance gains compared to the benchmarks, as well as highly competitive performance with respect to state-of-the-art techniques.

Machine learning in optics: from solving inverse problems in imaging to high-speed hardware implementations

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Alejandro Turpin, University of Glasgow
Date: 07 June, 2019
Time: 13:00 - 14:00
Location: Sir Alwyn Williams Building, 303 Meeting Room

Advanced computational algorithms such as machine learning and Bayesian inference have left their traditional space within computing science and are impacting multiple areas, such as biomedical imaging, artificial vision, and neuroscience. In this talk I will discuss two different works where machine learning, in particular artificial neural networks, have been used in inverse problems in imaging to overcome the limitations from hardware: imaging through complex media and 3D imaging with single point detectors.

Trained to Fuzz!

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Martin Sablotny, University of Glasgow
Date: 13 May, 2019
Time: 15:00 - 16:00
Location: Sir Alwyn Williams Building, 303 Meeting Room

Software testing is used to ensure the correct functionality of a program and to discover flaws in the software which can introduce security issues. A prominent software testing technique is so-called fuzz testing. Here, a test case generator creates input data for a program under test, and its execution is monitored to discover unintended behaviour. However, developing test case generators for fuzz testing is a labour-intensive task, mainly because it is necessary to study the format specifications and reimplement them before even starting to generate any test cases. In this talk, I’ll outline a novel machine learning based approach which can significantly speed up the development of fuzz testers. First, I’ll show that it is possible to improve an existing fuzzer by utilising generative deep learning methods, and provide guidance on how to select a well-performing model without actually executing any test cases. Secondly, readily available real-world data is used to train a test generator from the ground up. Finally, I will outline how deep reinforcement learning can be applied to fuzz testing and teach the fuzzer how to generate test cases that maximise code coverage in a closed-loop manner.
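For orientation, the closed loop that such learned generators plug into resembles a coverage-guided fuzzing loop. A toy sketch with stand-in coverage instrumentation and a byte-level mutator in place of a trained model (not the speaker's system):

    import random

    def coverage(data):
        # Stand-in for real instrumentation: return the set of code blocks hit.
        return {len(data) % 7, data[:1]}

    def mutate(data):
        # Byte-level mutation; in the talk's approach a trained generative
        # model proposes test cases instead.
        buf = bytearray(data)
        buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    seen, corpus = set(), [b"seed"]
    for _ in range(1000):
        case = mutate(random.choice(corpus))
        hits = coverage(case)
        if not hits <= seen:           # new coverage: keep the case for further mutation
            seen |= hits
            corpus.append(case)
    print("corpus size:", len(corpus), "blocks covered:", len(seen))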

SICSA DVF Masterclass - Predicting multi-view and structured data with kernel methods

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Prof. Juho Rousu (SICSA DVF), Aalto University
Date: 10 May, 2019
Time: 11:00 - 13:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

During the last two decades, kernel methods - including, but not limited to, the celebrated support vector machine - have been extremely successful in many walks of life. They continue to be a good alternative to deep neural networks in many real-world applications where data is complex and high-dimensional, and the amount of training data is medium-scale - from hundreds to a few tens of thousands of training examples.

In this masterclass I will focus on how kernel methods can be used for applications where the prediction setup involves heterogeneous or structured data, in particular learning with multiple data sources and predicting structured output.

 

Bibliography

Bhadra, S., Kaski, S. and Rousu, J., 2017. Multi-view kernel completion. Machine Learning, 106(5), pp.713-739.

Cichonska, A., Pahikkala, T., Szedmak, S., Julkunen, H., Airola, A., Heinonen, M., Aittokallio, T. and Rousu, J., 2018. Learning with multiple pairwise kernels for drug bioactivity prediction. Bioinformatics, 34(13), pp.i509-i518.

Hue, M. and Vert, J.P., 2010, June. On learning with kernels for unordered pairs. In ICML (pp. 463-470).

Marchand, M., Su, H., Morvant, E., Rousu, J. and Shawe-Taylor, J.S., 2014. Multilabel structured output learning with random spanning trees of max-margin markov networks. In Advances in Neural Information Processing Systems (pp. 873-881).

Schölkopf, B. and Smola, A.J., 2001. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press.

Shawe-Taylor, J. and Cristianini, N., 2004. Kernel methods for pattern analysis. Cambridge university press.

Su, H., Gionis, A. and Rousu, J., 2014, January. Structured prediction of network response. In International Conference on Machine Learning (pp. 442-450).

Su, H. and Rousu, J., 2015. Multilabel classification through random graph ensembles. Machine Learning, 99(2).

Taskar, B., Guestrin, C. and Koller, D., 2004. Max-margin Markov networks. In Advances in neural information processing systems (pp. 25-32).

Tsochantaridis, I., Joachims, T., Hofmann, T. and Altun, Y., 2005. Large margin methods for structured and interdependent output variables. Journal of machine learning research, 6(Sep), pp.1453-1484.

Small Molecule Identification through Machine Learning: CSI:FingerID and beyond

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Prof. Juho Rousu (SICSA DVF), Aalto University
Date: 17 April, 2019
Time: 12:00 - 13:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

Abstract
Identification of small molecules from biological samples remains a major bottleneck in understanding the inner workings of biological cells and their environment. Machine learning on data from large public databases of tandem mass spectrometric data has transformed this field in recent years, with tools like CSI:FingerID and CSI:IOKR demonstrating a step-change improvement in identification rates compared to previous approaches. In this presentation, I will give an overview of the technology inside these tools and review some recent developments in making use of additional information sources for improving identification rates, in particular learning to predict the order of molecules eluting from a liquid-chromatography system.

 
References:
Bach, E., Szedmak, S., Brouard, C., Böcker, S. and Rousu, J., 2018. Liquid-chromatography retention order prediction for metabolite identification. Bioinformatics, 34(17), pp.i875-i883.
Brouard, C., Bach, E., Böcker, S. and Rousu, J., 2017, November. Magnitude-preserving ranking for structured outputs. In Asian Conference on Machine Learning (pp. 407-422).
Brouard, C., Shen, H., Dührkop, K., d'Alché-Buc, F., Böcker, S. and Rousu, J., 2016. Fast metabolite identification with input output kernel regression. Bioinformatics, 32(12), pp.i28-i36.
Dührkop, K., Fleischauer, M., Ludwig, M., Aksenov, A.A., Melnik, A.V., Meusel, M., Dorrestein, P.C., Rousu, J. and Böcker, S., 2019. SIRIUS 4: a rapid tool for turning tandem mass spectra into metabolite structure information. Nature Methods, 16, pp.299-302.
Dührkop, K., Shen, H., Meusel, M., Rousu, J. and Böcker, S., 2015. Searching molecular structure databases with tandem mass spectra using CSI: FingerID. Proceedings of the National Academy of Sciences, 112(41), pp.12580-12585.

=====
Short Bio:
Juho Rousu is a Professor of Computer Science at Aalto University, Finland. Rousu obtained his PhD in 2001 from the University of Helsinki, while working at VTT Technical Research Centre of Finland. In 2003-2005 he was a Marie Curie Fellow at Royal Holloway, University of London. In 2005-2011 he held Lecturer and Professor positions at the University of Helsinki, before moving to Aalto University in 2012, where he leads a research group on Kernel Methods, Pattern Analysis and Computational Metabolomics (KEPACO). Rousu’s main research interest is in learning with multiple and structured targets, multiple views and ensembles, with methodological emphasis on regularised learning, kernels and sparsity, as well as efficient convex/non-convex optimisation methods. His applications of interest include metabolomics, biomedicine, pharmacology and synthetic biology.

Joint Variational Uncertain Input Gaussian Processes

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Carl Edward Rasmussen & Adrià Garriga-Alonso, University of Cambridge & Prowler.io
Date: 20 February, 2019
Time: 14:00 - 15:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

Standard mean-field variational inference in Gaussian Processes with uncertain inputs systematically underestimates posterior uncertainty. In particular, the factorisation assumption employed in the approximating distribution severely limits the framework’s accuracy. We lift this assumption, and show that the resulting scheme gives much more realistic predictive uncertainties, and can be implemented in a sparse and practical way. The algorithm has implications for latent variable models generally, including stacked (Deep) GPs and time series models.

IDI Journal Club: Graph Attention Networks

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Joshua Mitton
Date: 31 January, 2019
Time: 14:00 - 15:00
Location: Sir Alwyn Williams Building, 203 Meeting Room

In this journal club meeting, Josh will lead the discussion of the paper "Graph Attention Networks".

Abstract:

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighbourhoods’ features, we enable (implicitly) specifying different weights to different nodes in a neighbourhood, without requiring any kind of computationally intensive matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).

Paper:

https://mila.quebec/wp-content/uploads/2018/07/d1ac95b60310f43bb5a0b8024522fbe08fb2a482.pdf
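A single-head GAT layer is compact enough to sketch directly from the abstract: score each neighbour pair with a shared attention mechanism, softmax over neighbourhoods, and aggregate the transformed features. A dense-adjacency PyTorch sketch, written for clarity rather than efficiency:

    import torch
    import torch.nn as nn

    class GATLayer(nn.Module):
        # Single-head graph attention layer over a dense adjacency matrix.
        def __init__(self, f_in, f_out):
            super().__init__()
            self.W = nn.Linear(f_in, f_out, bias=False)
            self.a = nn.Linear(2 * f_out, 1, bias=False)

        def forward(self, h, adj):                 # h: (N, f_in), adj: (N, N) in {0, 1}
            z = self.W(h)
            n = z.size(0)
            pairs = torch.cat([z.repeat_interleave(n, 0), z.repeat(n, 1)], dim=1)
            e = nn.functional.leaky_relu(self.a(pairs), 0.2).view(n, n)
            e = e.masked_fill(adj == 0, float("-inf"))   # attend only over neighbours
            alpha = torch.softmax(e, dim=1)              # attention coefficients
            return nn.functional.elu(alpha @ z)

    layer = GATLayer(5, 8)
    h = torch.randn(4, 5)
    adj = torch.eye(4) + torch.diag(torch.ones(3), 1) + torch.diag(torch.ones(3), -1)
    print(layer(h, adj).shape)                     # torch.Size([4, 8])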

Quantum inspired image compression.

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Bruno Sanguinetti, Dotphoton
Date: 11 December, 2018
Time: 14:00 - 15:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

Pushing image sensors and algorithms to the quantum limit

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Bruno Sanguinetti, Dotphoton
Date: 11 December, 2018
Time: 14:00 - 15:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

Towards data-driven hearing aid solutions

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Widex staff, Widex A/S
Date: 04 October, 2018
Time: 11:00 - 12:00
Location: Sir Alwyn Williams Building, 404 Meeting Room

Widex will give an informal overview of the company and current challenges in the hearing aid domain. We will discuss challenges related to data collection, machine learning and real-time optimisation with humans in the loop.

Variational Sparse Coding

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Francesco Tonolini, University of Glasgow
Date: 13 June, 2018
Time: 12:00 - 13:00
Location: Sir Alwyn Williams Building, 303 Meeting Room

We propose a new method for sparse coding based on the variational auto-encoder architecture, which allows sparse representations with generally intractable probabilistic models. We assume data to be generated from a sparse distribution prior in the latent space of a generative model and aim to maximise the observed data likelihood with a variational auto-encoding approach. We consider both the Laplace and the spike and slab priors and in each case derive an analytic approximation to the regularisation term in the variational lower bound, making posterior inference as efficient as in the standard variational auto-encoder case. By inducing sparsity in the prior, training results in a recognition function that generates sparse representations of observed data. Such representations can then be used as information-rich inputs to further learning tasks. 
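For intuition, a variational auto-encoder with a Laplace prior on the latents can be trained with a single-sample Monte Carlo estimate of the prior term; note that the paper's contribution is an analytic approximation of this term, which the sketch below deliberately does not reproduce:

    import torch
    import torch.nn as nn

    # Minimal VAE with a Laplace (sparsity-inducing) prior on the latents;
    # the KL term here is a single-sample Monte Carlo estimate, not the
    # analytic approximation derived in the work presented in this talk.
    enc = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 2 * 32))
    dec = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 784))
    prior = torch.distributions.Laplace(0.0, 1.0)

    def neg_elbo(x):
        mu, log_var = enc(x).chunk(2, dim=1)
        q = torch.distributions.Normal(mu, (0.5 * log_var).exp())
        z = q.rsample()                                  # reparameterised sample
        recon = nn.functional.mse_loss(dec(z), x, reduction="sum")
        kl = (q.log_prob(z) - prior.log_prob(z)).sum()   # single-sample KL estimate
        return recon + kl

    print(neg_elbo(torch.rand(8, 784)))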

Deep, complex networks for inversion of transmission effects in multimode optical fibres

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Oisin Moran, University of Glasgow
Date: 30 May, 2018
Time: 12:00 - 13:00
Location: Sir Alwyn Williams Building, 303 Meeting Room

We use complex-weighted, deep convolutional networks to invert the effects of multimode optical fibre distortion of a coherent input image. We generated experimental data based on collections of optical fibre responses to greyscale input images generated with coherent light, measuring only image amplitude (not amplitude and phase, as is typical) at the output of the 10 m long, 105 µm diameter multimode fibre. This data is made available as the Optical Fibre Inverse Problem benchmark collection. The experimental data is used to train complex-weighted models with a range of regularisation approaches and subsequent denoising autoencoders. A new unitary regularisation approach for complex-weighted networks is proposed, which performs best in robustly inverting the fibre transmission matrix and fits well with the physical theory.
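One plausible reading of the unitary regularisation idea is a penalty driving a complex weight matrix towards the unitary manifold, ||W^H W - I||_F^2. The PyTorch sketch below implements that penalty; the form is an assumption for illustration, not necessarily the paper's exact formulation:

    import torch

    # Penalise the Frobenius distance between W^H W and the identity, pushing
    # a complex weight matrix towards the unitary manifold.
    W = torch.randn(64, 64, dtype=torch.cfloat, requires_grad=True)

    def unitary_penalty(W):
        gram = W.conj().transpose(-2, -1) @ W
        eye = torch.eye(W.shape[-1], dtype=W.dtype)
        return torch.linalg.norm(gram - eye) ** 2    # real-valued scalar

    opt = torch.optim.Adam([W], lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        unitary_penalty(W).backward()
        opt.step()
    print("penalty after optimisation:", unitary_penalty(W).item())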

Modelling the creative process through black-box optimisation

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Anders Kirk Uhrenholt, University of Glasgow
Date: 23 May, 2018
Time: 12:00 - 13:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

The creative process from getting an idea to having that idea materialise as an image or a piece of music can often be framed as an optimisation task in which the artist makes incremental changes until a local optimum is reached. This raises the question of whether machine learning has a role to play in automating the tedious parts of this process, thereby freeing up time and energy for the user to be creative.
 
In a typical optimisation setting the cost function can be objectively evaluated with some measurable degree of certainty. But what if the target of the optimisation is something inherently subjective such as a person's perception of sound or image? This is a central question in the intersection between predictive modelling and creative software where the aim is to support the artist throughout the creative process in an intelligent way.
 
This talk focuses on this problem specifically for the task of tuning a music synthesizer. The task can be framed as optimising a black-box system (the synthesizer) with respect to an unknown cost function (the user's opinion of the synthesised sound). In the proposed approach, metric learning is included as part of the optimisation loop to simultaneously learn a mapping from synthesizer configuration to sound while inferring from user feedback what the artist will think of the produced result.
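
As a rough sketch of the outer loop (not the proposed method itself, which additionally learns a perceptual metric), black-box optimisation against user feedback might look as follows; the rate() stand-in and the GP-UCB acquisition are our illustrative assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def rate(params):                       # stand-in for the user's rating
        return -np.sum((params - 0.3) ** 2)

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(5, 2))            # initial synthesizer configurations
    y = np.array([rate(x) for x in X])      # user feedback for each

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(20):
        gp.fit(X, y)
        cand = rng.uniform(size=(256, 2))          # random candidate configs
        mu, sd = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + 2.0 * sd)]    # upper confidence bound
        X = np.vstack([X, x_next])
        y = np.append(y, rate(x_next))             # ask the user again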

Quantitative Evaluation of Canine Pelvic Limb Ataxia Using a Wireless Accelerometer System

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Rodrigo Gutierrez-Quintana, School of Veterinary Medicine, University of Glasgow
Date: 15 February, 2018
Time: 12:00 - 12:45
Location: Sir Alwyn Williams Building, 423 Seminar Room

R. Gutierrez-Quintana, K.L. Holmes, Z. Hatfield, P. Amengual Batle, J. Brocal, K. Lazzerini, R. José-López. Small Animal Hospital, School of Veterinary Medicine, University of Glasgow, UK.

An inexpensive and easily available method for objectively identifying and grading pelvic limb ataxia in dogs in the clinical setting is urgently needed. An alternative approach to conventional gait analysis techniques is the use of accelerometers attached to the body. They have the advantages of being low cost and allowing non-restrictive evaluation in a normal environment.

The purpose of this prospective study was to perform gait analysis using a lumbar accelerometer in dogs with pelvic limb ataxia and healthy controls, and to assess whether the data obtained could be used to differentiate these two groups.

Fifty-three dogs (21 healthy controls and 32 dogs with pelvic limb ataxia) of different-sized breeds were included. All dogs were walked in a straight line, on a non-slippery surface, at a slow walking pace for 50 meters using a short lead. Acceleration signals were measured using a wireless tri-axial accelerometer secured with an elastic band at the level of the fifth lumbar vertebra. The average and coefficient of variation of the peak-to-peak amplitude were calculated for each acceleration component (x: cranio-caudal, y: latero-lateral, z: dorso-ventral). A Mann-Whitney test was used to compare groups (p<0.05).

A significant difference between affected and control dogs was identified in the coefficient of variation of the x axis (p<0.0001).

The results of the present study suggest that the coefficient of variation of the cranio-caudal axis could represent an objective measure of pelvic limb ataxia in dogs. Further longitudinal studies in a larger number of cases are indicated.
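
For illustration only, the summary statistic used in the study can be sketched as follows; the peak/trough pairing is naive and the study's exact cycle segmentation is not described in the abstract.

    import numpy as np
    from scipy.signal import find_peaks

    def cv_peak_to_peak(acc):
        """Coefficient of variation of peak-to-peak amplitude per axis,
        for a (n_samples, 3) tri-axial accelerometer signal."""
        out = {}
        for i, name in enumerate(("x", "y", "z")):
            sig = acc[:, i]
            peaks, _ = find_peaks(sig)
            troughs, _ = find_peaks(-sig)
            n = min(len(peaks), len(troughs))
            p2p = sig[peaks[:n]] - sig[troughs[:n]]   # per-cycle amplitude
            out[name] = np.std(p2p) / np.mean(p2p)    # CV = sd / mean
        return out

    t = np.linspace(0, 60, 6000)                      # toy 1.2 Hz gait rhythm
    gait = np.sin(2 * np.pi * 1.2 * t)
    acc = np.column_stack([gait, 0.5 * gait, 0.8 * gait])
    acc += 0.05 * np.random.default_rng(0).normal(size=acc.shape)
    print(cv_peak_to_peak(acc))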

Approaches to analysis of genomic data

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Thomas Otto, University of Glasgow
Date: 17 January, 2018
Time: 12:00 - 13:00
Location: Lilybank Gardens, F121 Conference Room

A huge amount of data is generated in the biological sciences in the hope of answering biological questions, made possible by the decreasing cost of high-throughput methods. Although many analysis tools exist, there is a need to improve many of them. Further, there are many opportunities to develop new methods by combining existing datasets.

In this talk, I will present some of the datasets and the methods we used or developed to analyse genomic data, including genomic and transcriptional data from malaria. I will also describe anticipated data, such as single-cell RNA-Seq, and the detection of biomarkers.

Optimal input for low reliability assistive technology

Group: Inference, Dynamics and Interaction (IDI)
Speaker: John Williamson, University of Glasgow
Date: 19 October, 2017
Time: 13:00 - 14:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

Most devices used for human input are reliable, in the sense that errors are small in proportion to the information which passes through the interface channel. There are, however, a few important and relevant human interface channels which have both very low communication rates and very low reliability.
 
We present a practical and general method for optimal human interaction using binary input devices with very high noise levels, where a reliable feedback channel is available. In particular, we show that efficient navigation and selection techniques are viable even with a binary channel (symmetric or asymmetric) whose reliability may be below 75%, with provably optimal performance. This mechanism can automatically adapt to changing channel statistics with no overhead, and does not need precise calibration. A range of visualisations are used to implicitly code for these channels in a way that is transparent to users. We validate our results through a considered process of evaluation, from theoretical analysis to automated simulation and live interaction simulators.
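
The core idea can be illustrated with a Bayesian update over selection targets under a noisy binary channel. This sketch assumes a known, fixed error rate; the method in the talk additionally adapts to unknown and changing channel statistics.

    import numpy as np

    def select(n_targets=8, p_correct=0.7, true_target=3, max_steps=60):
        """Each step asks 'is your target in this half?'; the answer flips
        with probability 1 - p_correct, and the posterior is updated."""
        rng = np.random.default_rng(0)
        post = np.full(n_targets, 1.0 / n_targets)
        for _ in range(max_steps):
            half = set(np.argsort(-post)[: n_targets // 2])   # crude mass split
            intended = true_target in half
            answer = intended if rng.random() < p_correct else not intended
            for k in range(n_targets):
                agree = (k in half) == answer
                post[k] *= p_correct if agree else 1 - p_correct
            post /= post.sum()
            if post.max() > 0.99:                             # confident selection
                break
        return int(np.argmax(post))

    print(select())   # recovers target 3 despite a 30% error rate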

Leveraging Ontologies in Machine Learning

Group: Inference, Dynamics and Interaction (IDI)
Speaker: David Stirling, University of Wollongong, Australia
Date: 05 October, 2017
Time: 13:00 - 14:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

This presentation considers a number of successful cases that have significantly benefited from the inclusion of an ontology framework. Firstly, a bespoke, human-authored ontology describing cyclic temporal control states has enabled successful multi-objective control (an intelligent autopilot) of a simulated aircraft. Secondly, an empirically learnt ontology was derived to identify several industrial process modalities, which was exploited to reveal underlying causal factors for a set of undesirable modes (states) of high heat loads in a blast furnace. The first case reviews a novel approach for learning and building computational models of human skills that are typically utilized in complex control situations. Such skills are often internalized as sub-cognitive and automatic responses, such as those routinely used in driving a car. Previously, a degree of success in modelling these was reported via behavioural cloning. However, skills obtained by this technique often exhibit a lack of generality and robustness when applied to different control tasks. This is now mitigated in the alternative approach presented here, by segmenting and compressing a universal set of reaction plans with symbolic induction methods. This approach is termed Compressed Heuristic Universal Reaction Planners, or CHURPs. The substantially improved robustness and control performance arises from synergistic interactions and collaborations between the different CHURPs entities, including surrogate control and goal sharing. In the latter case, an abstracted ontology containing nine major heat load modalities was initially learnt as a 38-state Gaussian Mixture Model from several years of blast furnace heat load data, and subsequently utilized to diagnose the causal influences determining these states. Such methodologies are now being pursued in a number of kinematic rehabilitation motion studies, as well as oncology and radiotherapy aspects of cancer care.

Bio:
Dr Stirling obtained his BEng degree from the Tasmanian College of Advanced Education (1976), an MSc (Digital Techniques) from Heriot-Watt University, Scotland, UK (1980), and his PhD from the University of Sydney (1995). He has worked for over 20 years in a wide range of industries, including as a Principal Research Scientist with BHP Steel. More recently he joined the University of Wollongong as a Senior Lecturer. David has developed a wide range of expertise in data analysis and knowledge management, with skills in problem solving, statistical methods, visualization, pattern recognition, data fusion and reduction. He has applied machine learning and data mining techniques in specialized classifier designs for noisy multivariate data to medical research, exploration geo-science and financial markets, as well as to industrial primary operations.

Gesture Typing on Virtual Tabletop: Effect of Input Dimensions on Performance

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Antoine Loriette, University of Glasgow
Date: 28 September, 2017
Time: 13:00 - 14:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

The association of tabletop interaction with gesture typing presents interaction potential for situationally or physically impaired users. In this work, we use depth cameras to create touch surfaces on regular tabletops. We describe our prototype system and report on a supervised learning approach to fingertip touch classification. We follow with a gesture typing study that compares our system with a control tablet scenario and explores the influence of the input size and aspect ratio of the virtual surface on text input performance. We show that novice users perform with the same error rate at half the input rate with our system compared to the control condition, that an input size between A5 and A4 ensures the best tradeoff between performance and user preference, and that users' indirect tracking ability seems to be the overall performance-limiting factor.

A Theory of How People Make Decisions Through Interaction

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Andrew Howes, Birmingham University
Date: 14 September, 2017
Time: 13:00 - 14:00
Location: Lilybank Gardens, F121 Conference Room

In this talk I will discuss current thinking concerning how people make decisions through interaction. The talk offers evidence for the adaptive, embodied and context-sensitive nature of human decision making. It also offers a computational theory, inspired by machine learning, of how the constraints imposed by the human visual system and by the visualisation design lead to emergent strategies for interaction. These strategies focus user attention on certain kinds of information and ignore others; they determine apparent risk preferences and, ultimately, the quality of decisions made.

Amplifying Human Abilities: Digital Technologies to Enhance Perception and Cognition

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Albrecht Schmidt, Univ. Stuttgart (soon to be LMU, Munich)
Date: 12 September, 2017
Time: 14:00 - 15:00
Location: Lilybank Gardens, F121 Conference Room

Historically, the use and development of tools is strongly linked to human evolution and intelligence. The last 10,000 years show stunning progress in physical tools that have transformed what people can do and how people live. Currently, we are at the beginning of an even more fundamental transformation: the use of digital tools to amplify the mind. Digital technologies provide us with entirely new opportunities to enhance the perceptual and cognitive abilities of humans. Many ideas, ranging from mobile access to search engines to wearable devices for lifelogging and augmented reality applications, give us first indications of this transition. In our research we create novel digital technologies that systematically explore how to enhance human cognition and perception. Our experimental approach is, first, to understand users in their context as well as the potential for enhancement; second, to create innovative interventions that provide functionality which amplifies human capabilities; and third, to empirically evaluate and quantify the enhancement gained by these developments. It is exciting to see how, ultimately, these new ubiquitous computing technologies have the potential to overcome fundamental limitations in human perception and cognition.

Data-Efficient Learning for Autonomous Robots

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Marc Deisenroth, Department of Computing, Imperial College London
Date: 23 August, 2017
Time: 12:00 - 13:00
Location: Sir Alwyn Williams Building, 303 Meeting Room

Fully autonomous systems and robots have been a vision for many decades, but we are still far from practical realization. One of the fundamental challenges in fully autonomous systems and robots is learning from data directly, without relying on any kind of intricate human knowledge. This requires data-driven statistical methods for modeling, predicting and decision making, while taking uncertainty into account, e.g., due to measurement noise, sparse data or stochasticity in the environment. In my talk I will focus on machine learning methods for controlling autonomous robots, which pose an additional practical challenge: data efficiency, i.e., we need to be able to learn controllers in a few experiments, since performing millions of experiments with robots is time consuming and wears out the hardware. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, pre-shaped policies, or the underlying dynamics. In the first part of the talk, I follow a different approach and speed up learning by efficiently extracting information from sparse data. In particular, I propose to learn a probabilistic, non-parametric Gaussian process dynamics model. By explicitly incorporating model uncertainty in long-term planning and controller learning, my approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art reinforcement learning, our model-based policy search method achieves an unprecedented speed of learning, which makes it most promising for application to real systems. I demonstrate its applicability to autonomous learning from scratch on real robot and control tasks. In the second part of my talk, I will discuss an alternative method for learning controllers for bipedal locomotion based on Bayesian optimization, where it is hard to learn models of the underlying dynamics due to ground contacts. Using Bayesian optimization, we sidestep this modeling issue and directly optimize the controller parameters without the need to model the robot's dynamics.

NOTE MEETING ROOM CHANGE - NOW IN SAWB 303 DUE TO DELAYS IN BUILDING WORK COMPLETION
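
As a tiny illustration of the first ingredient of the talk above, a probabilistic dynamics model can be learned from sparse transition data with a GP regressor (scikit-learn here; the moment-based long-term planning of the full method is not shown, and the toy dynamics are invented).

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(30, 2))       # columns: state, action
    y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] \
        + 0.05 * rng.normal(size=30)           # observed next states

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    gp.fit(X, y)                               # learn x_{t+1} = f(x_t, u_t)
    mu, sd = gp.predict([[0.2, -0.4]], return_std=True)   # prediction with sd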

Spatial Smoothing in Mass Spectrometry Imaging

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Arijus Pleska, University of Glasgow
Date: 08 May, 2017
Time: 14:30 - 15:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

In this paper, we target a data modelling approach used in computational metabolomics; to be specific, we assess whether spatial smoothing improves topic term and noise identification. By assessing mass spectrometry imaging data, we design an enhancement for latent Dirichlet allocation-based topic models. For both data pre-processing and topic model design, we survey relevant research. Further, we present the proposed methodology in detail, providing the preliminaries and guiding through the performed topic model enhancements. To assess the performance, we evaluate the spatial smoothing application on a number

Investigation of users' affective and physiological traits in a multi-modal interaction context

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Iulia Popescu
Date: 04 May, 2017
Time: 15:30 - 16:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

In this talk, I will present my Level 5 (MSci) project, which explored how users react and what they feel when they are exposed to different types of stimuli (visual, auditory). This study aimed to understand how short-term stressors impact individuals' behaviour when they need to complete a task in a multi-modal interaction context (e.g. searching for a flight using graphical and spoken dialogue interfaces). Additionally, I will give an overview of the data set delivered as part of this project and how it can be used for further research.

Real-time Mobile Object Removal using Google Project Tango

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Rhys Simpson, University of Glasgow
Date: 04 May, 2017
Time: 15:00 - 15:30
Location: Sir Alwyn Williams Building, 422 Seminar Room

Visually removing objects from a video feed is difficult to perform in real-time, as existing solutions rely on expensive patch lookups and specific environment conditions to produce meaningful results. Results are also guessed from the image surrounding the object, usually making them physically inaccurate and visually displeasing. Recent advances in hardware and software are pushing businesses to make large investments into Augmented Reality, including furniture catalogue applications, which could greatly benefit if existing objects could be visually removed from the video feed in real-time. This paper demonstrates a novel approach where instead of painting frames in an entirely 2D context, a 3D room mesh is captured, tracked and selectively rendered to paint geometry that was behind the object over it. The object's mask, and filled textures covering the planes the object was in contact with are also sourced and tracked from this mesh. Our approach works for a broad range of objects in typical indoors scenes, where target objects are separate and against large wall and floor planes. We show that our algorithm produces much better results per frame than object removal using traditional 2D inpainting, at an interactive framerate, and demonstrate that temporal incoherence between subsequent video frames is also eliminated.

ProbUI: Generalising Touch Target Representations to Enable Declarative Gesture Definition for Probabilistic GUIs

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Daniel Buschek, LMU Munich (visitor at Glasgow University Mar-May 2017)
Date: 20 April, 2017
Time: 14:00 - 15:00
Location: Lilybank Gardens, F121 Conference Room

We present ProbUI, a mobile touch GUI framework that merges ease of use of declarative gesture definition with the benefits of probabilistic reasoning. It helps developers to handle uncertain input and implement feedback and GUI adaptations. ProbUI replaces today's static target models (bounding boxes) with probabilistic gestures ("bounding behaviours"). It is the first touch GUI framework to unite concepts from three areas of related work: 1) Developers declaratively define touch behaviours for GUI targets. As a key insight, the declarations imply simple probabilistic models (HMMs with 2D Gaussian emissions). 2) ProbUI derives these models automatically to evaluate users' touch sequences. 3) It then infers intended behaviour and target. Developers bind callbacks to gesture progress, completion, and other conditions. We show ProbUI's value by implementing existing and novel widgets, and report developer feedback from a survey and a lab study.
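
A small numpy sketch of the model class the declarations imply: scoring a touch trajectory under an HMM with 2D Gaussian emissions. The state layout, parameters and "slide right" behaviour below are invented for illustration.

    import numpy as np
    from scipy.stats import multivariate_normal

    def sequence_loglik(pts, means, cov, trans, init):
        """Forward algorithm, rescaled at each step to avoid underflow."""
        def emis(p):
            return np.array([multivariate_normal.pdf(p, m, cov) for m in means])
        alpha = init * emis(pts[0])
        c = alpha.sum(); loglik = np.log(c); alpha = alpha / c
        for p in pts[1:]:
            alpha = (alpha @ trans) * emis(p)
            c = alpha.sum(); loglik += np.log(c); alpha = alpha / c
        return loglik

    means = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]   # left, right of target
    trans = np.array([[0.8, 0.2], [0.0, 1.0]])             # left-to-right HMM
    swipe = [np.array([s, 0.05]) for s in np.linspace(-0.1, 1.1, 8)]
    print(sequence_loglik(swipe, means, np.eye(2) * 0.1, trans, np.array([1.0, 0.0])))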

A stochastic formulation of a dynamical singly constrained spatial interaction model

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Mark Girolami, Chair of Statistics, Department of Mathematics, Imperial College London
Date: 02 March, 2017
Time: 14:00 - 15:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

One of the challenges of 21st-century science is to model the evolution of complex systems.  One example of practical importance is urban structure, for which the dynamics may be described by a series of non-linear first-order ordinary differential equations.  Whilst this approach provides a reasonable model of spatial interactions relevant in areas as diverse as public health and urban retail structure, it is somewhat restrictive owing to uncertainties arising in the modelling process.

We address these shortcomings by developing a dynamical singly constrained spatial interaction model based on a system of stochastic differential equations. Our model is ergodic and the invariant distribution encodes our prior knowledge of spatio-temporal interactions. We proceed by performing inference and prediction in a Bayesian setting, and explore the resulting probability distributions with a position-specific Metropolis-adjusted Langevin algorithm. Insights from studies of retail structure within the city of London are used as illustration.
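
For readers unfamiliar with the sampler, a minimal sketch of the Metropolis-adjusted Langevin algorithm on a toy target (the position-specific preconditioning used in the talk is omitted):

    import numpy as np

    def mala(log_p, grad_log_p, x0, eps=0.1, n=5000):
        """Gradient-informed proposal plus accept/reject correction."""
        rng = np.random.default_rng(0)
        x, samples = np.asarray(x0, float), []
        for _ in range(n):
            mean_fwd = x + 0.5 * eps**2 * grad_log_p(x)
            prop = mean_fwd + eps * rng.normal(size=x.shape)
            mean_rev = prop + 0.5 * eps**2 * grad_log_p(prop)
            log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2 * eps**2)
            log_q_rev = -np.sum((x - mean_rev) ** 2) / (2 * eps**2)
            if np.log(rng.random()) < (log_p(prop) - log_p(x)
                                       + log_q_rev - log_q_fwd):
                x = prop                                   # accept the move
            samples.append(x.copy())
        return np.array(samples)

    draws = mala(lambda x: -0.5 * x @ x, lambda x: -x, np.zeros(2))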

Rethinking eye gaze for human-computer interaction

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Hans Gellersen, Lancaster University
Date: 19 January, 2017
Time: 14:00 - 15:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

Eye movements are central to most of our interactions. We use our eyes to see and guide our actions and they are a natural interface that is reflective of our goals and interests. At the same time, our eyes afford fast and accurate control for directing our attention, selecting targets for interaction, and expressing intent. Even though our eyes play such a central part to interaction, we rarely think about the movement of our eyes and have limited awareness of the diverse ways in which we use our eyes --- for instance, to examine visual scenes, follow movement, guide our hands, communicate non-verbally, and establish shared attention. 

This talk will reflect on the use of eye movement as input in human-computer interaction. Jacob's seminal work showed over 25 years ago that eye gaze is natural for pointing, albeit marred by problems of Midas Touch and limited accuracy. I will discuss new work on eye gaze as input that looks beyond conventional gaze pointing. This includes work on: gaze and touch, where we use gaze to naturally modulate manual input; gaze and motion, where we introduce a new form of gaze input based on the smooth pursuit movement our eyes perform when they follow a moving object; and gaze and games, where we explore social gaze in interaction with avatars and joint attention as multi-user input.

Hans Gellersen is Professor of Interactive Systems at Lancaster University. Hans' research interest is in sensors and devices for ubiquitous computing and human-computer interaction. He has worked on systems that blend physical and digital interaction, methods that infer context and human activity, and techniques that facilitate spontaneous interaction across devices. In recent work he is focussing on eye movement as a source of context information and modality for interaction. 

Working toward computer generated music traditions

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Bob Sturm, QM University London
Date: 12 January, 2017
Time: 14:00 - 15:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

I will discuss research aimed at making computers intelligent and sensitive enough to work with music data, whether acoustic or symbolic. Invariably, this includes a lot of work in applying machine learning to music collections in order to divine distinguishing and identifiable characteristics of practices that defy strict definition. Many of the resulting machine music listening systems appear to be musically sensitive and intelligent, but their fraudulent ways can be revealed when they are used to create music in the styles they have been taught to identify. Such "evaluation by generation" is a powerful way to gauge the generality of what a machine has learned to do. I will present several examples, focusing in particular on our work applying deep LSTM networks to modelling folk music transcriptions, and ultimately generating new music traditions.

References:

https://github.com/IraKorshunova/folk-rnn

https://highnoongmt.wordpress.com/2015/05/22/lisls-stis-recurrent-neural-networks-for-folk-music-generation/ 

https://highnoongmt.wordpress.com/?s=%22Deep+learning+for+assisting+the+process%22&submit=Search

https://youtu.be/YMbWwU2JdLg

https://youtu.be/RaO4HpM07hE 

https://soundcloud.com/sturmen-1

SHIP: The Single-handed Interaction Problem in Mobile and Wearable Computing

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Hui-Shyong Yeo, University of St Andrews
Date: 24 November, 2016
Time: 15:00 - 16:00
Location: Lilybank Gardens, F121 Conference Room

Screen sizes on devices are becoming smaller (e.g. smartwatches and music players) and larger (e.g. phablets and tablets) at the same time. Each of these trends can make devices difficult to use with only one hand (e.g. the fat-finger or reachability problems). This Single-Handed Interaction Problem (SHIP) is not new, but it has been evolving along with the growth of larger and smaller interaction surfaces. The problem is exacerbated when the other hand is occupied (encumbered) or not available (missing fingers/limbs). The use of voice commands or wrist gestures can be less robust, or perceived as awkward in public.

This talk will discuss several projects (RadarCat UIST 2016, WatchMI MobileHCI 2016, SWIM and WatchMouse) in which we are working towards achieving and supporting effective single-handed interaction for mobile and wearable computing. The work focusses on novel interaction techniques that have not been explored thoroughly for interaction purposes, using widely available sensors such as IMUs, optical sensors and radar (e.g. Google Soli, soon to be available).

Biography:

Hui-Shyong Yeo is a second-year PhD student in SACHI, University of St Andrews, advised by Prof. Aaron Quigley. Before that he worked as a researcher at KAIST for one year. Yeo has a wide range of interests within the field of HCI, including topics such as wearables, gestures, mixed reality and text entry. Currently he is focusing on single-handed interaction for his dissertation topic. He has published in conferences such as CHI, UIST, MobileHCI (honourable mention) and SIGGRAPH, and in journals such as MTAP and JNCA.

Visit his homepage http://hsyeo.com or twitter @hci_research

Demo of Google Soli Radar and Single Handed Smartwatch interaction

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Hui-Shyong Yeo, University of St Andrews
Date: 24 November, 2016
Time: 14:30 - 15:00
Location: Sir Alwyn Williams Building, 303 Meeting Room

This demo session will present the Google Soli radar and single-handed smartwatch interaction systems.

Biography:

Hui-Shyong Yeo is a second-year PhD student in SACHI, University of St Andrews, advised by Prof. Aaron Quigley. Before that he worked as a researcher at KAIST for one year. Yeo has a wide range of interests within the field of HCI, including topics such as wearables, gestures, mixed reality and text entry. Currently he is focusing on single-handed interaction for his dissertation topic. He has published in conferences such as CHI, UIST, MobileHCI (honourable mention) and SIGGRAPH, and in journals such as MTAP and JNCA.

Visit his homepage http://hsyeo.com or twitter @hci_research

Control Theoretical Models of Pointing

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Rod Murray-Smith, University of Glasgow
Date: 11 November, 2016
Time: 14:00 - 15:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

I will present an empirical comparison of four models from manual control theory on their ability to model targeting behaviour by human users using a mouse: McRuer's Crossover, Costello's Surge, second-order lag (2OL), and the Bang-bang model. Such dynamic models are generative, estimating not only movement time, but also pointer position, velocity, and acceleration on a moment-to-moment basis. We describe an experimental framework for acquiring pointing actions and automatically fitting the parameters of mathematical models to the empirical data. We present the use of time-series, phase-space and Hooke plot visualisations of the experimental data to gain insight into human pointing dynamics. We find that the identified control models can generate a range of dynamic behaviours that capture aspects of human pointing behaviour to varying degrees. Conditions with a low index of difficulty (ID) showed poorer fit because their unconstrained nature leads naturally to more dynamic variability. We report on characteristics of human surge behaviour in pointing.

We report differences in a number of controller performance measures, including Overshoot, Settling time, Peak time, and Rise time. We describe trade-offs among the models. We conclude that control theory offers a promising complement to Fitts' law based approaches in HCI, with models providing representations and predictions of human pointing dynamics which can improve our understanding of pointing and inform design.
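
To make the generative character of these models concrete, a second-order lag (2OL) step response can be simulated in a few lines; the parameter values here are invented, not fitted ones from the study.

    import numpy as np

    def second_order_lag(target=1.0, wn=8.0, zeta=0.7, dt=0.002, T=1.5):
        """Simulate x'' = wn^2 (target - x) - 2 zeta wn x' by Euler steps;
        returns position, velocity and acceleration traces, e.g. for
        time-series, phase-space or Hooke plots."""
        n = int(T / dt)
        x, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
        for t in range(1, n):
            a[t] = wn**2 * (target - x[t-1]) - 2 * zeta * wn * v[t-1]
            v[t] = v[t-1] + dt * a[t]
            x[t] = x[t-1] + dt * v[t]
        return x, v, a

    pos, vel, acc = second_order_lag()
    overshoot = pos.max() - 1.0          # one of the reported measures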

Improvising minds: Improvisational interaction and cognitive engagement

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Adam Linson, University of Edinburgh
Date: 29 August, 2016
Time: 13:00 - 14:00
Location: Sir Alwyn Williams Building, 303 Meeting Room

In this talk, I present my research on improvisation as a general form of adaptive expertise. My interdisciplinary approach takes music as a tractable domain for empirical studies, which I have used to ground theoretical insights from HCI, AI/robotics, psychology, and embodied cognitive science. I will discuss interconnected aspects of digital musical instrument (DMI) interface design, a musical robotic AI system, and a music psychology study of sensorimotor influences on perceptual ambiguity. I will also show how I integrate this work with an inference-based model of neural functioning, to underscore implications beyond music. On this basis, I indicate how studies of musical improvisation can shed light on a domain-general capacity: our flexible, context-sensitive responsiveness to rapidly-changing environmental conditions.

Recognizing manipulation actions through visual accelerometer tracking, relational histograms, and user adaptation

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Sebastian Stein, University of Dundee
Date: 26 August, 2016
Time: 13:00 - 14:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

Activity recognition research in computer vision and pervasive computing has made a remarkable trajectory from distinguishing full-body motion patterns to recognizing complex activities. Manipulation activities as occurring in food preparation are particularly challenging to recognize, as they involve many different objects, non-unique task orders and are subject to personal idiosyncrasies. Video data and data from embedded accelerometers provide complementary information, which motivates an investigation of effective methods for fusing these sensor modalities.

In this talk I present a method for multi-modal recognition of manipulation activities that combines accelerometer data and video at multiple stages of the recognition pipeline. A method for accelerometer tracking is introduced that provides, for each accelerometer-equipped object, a location estimate in the camera view by identifying a point trajectory that matches the accelerometer data well. It is argued that associating accelerometer data with locations in the video provides a key link for modelling interactions between accelerometer-equipped objects and other visual entities in the scene. Estimates of accelerometer locations and their visual displacements are used to extract two new types of features: (i) Reference Tracklet Statistics, which characterizes statistical properties of an accelerometer's visual trajectory, and (ii) RETLETS, a feature representation that encodes relative motion, using an accelerometer's visual trajectory as a reference frame for dense tracklets. In comparison to a traditional sensor fusion approach, where features are extracted from each sensor type independently and concatenated for classification, it is shown that combining RETLETS and Reference Tracklet Statistics with those sensor-specific features performs considerably better. Specifically addressing scenarios in which a recognition system would be primarily used by a single person (e.g., cognitive situational support), this work investigates three methods for adapting activity models to a target user based on user-specific training data. Via randomized control trials, it is shown that these methods indeed learn user idiosyncrasies.

Skin Reading: Encoding Text in a 6-Channel Haptic Display

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Granit Luzhnica, Know Center, Graz, Austria
Date: 11 August, 2016
Time: 16:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

In this talk I will present a study we performed to investigate the communication of natural language messages using a wearable haptic display. Our experiments investigated both the design of the haptic display and the methods for communication that use it. First, three wearable configurations are proposed, based on haptic perception fundamentals, and evaluated in a first study. To encode symbols, we use an overlapping spatiotemporal stimulation (OST) method that distributes stimuli spatially and temporally with a minimal gap. Second, we propose an encoding for the entire English alphabet and a training method for letters, words and phrases. A second study investigates communication accuracy. It puts four participants through five sessions, for an overall training time of approximately 5 hours per participant.

Casual Interaction for Smartwatch Feedback and Communication

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Henning Pohl, Univ Hannover
Date: 01 July, 2016
Time: 14:00 - 15:00
Location: University of Glasgow

Casual interaction strives to enable people to scale back their engagement with interactive systems, while retaining the level of control they desire. In this talk, we will take a look at two recent developments in casual interaction systems. The first project to be presented is an indirect visual feedback system for smartwatches. Embedding LEDs into the back of a watch case enabled us to create a form of feedback that is less disruptive than vibration feedback and blends in with the body. We investigated how well such subtle feedback works in an in-the-wild study, which we will take a closer look at in this talk. Where the first project is a more casual form of feedback, the second project supports a more casual form of communication: emoji. Over the last years these characters have become more and more popular, yet entering them can take quite some effort. We have developed a novel emoji keyboard based on zooming interaction, which makes it easier and faster to enter emoji.

An electroencephalography (EEG)-based real-time feedback training system for cognitive brain-machine interface (cBMI)

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Kyuwan Choi, University of Glasgow
Date: 04 November, 2015
Time: 14:00
Location: University of Glasgow

In this presentation, I will present a new cognitive brain-machine interface (cBMI) using cortical activities in the prefrontal cortex. In the cBMI system, subjects perform directional imagination, which is more intuitive than existing motor imagery. Subjects freely control a bar on the monitor via directional information extracted from the prefrontal cortex, and their prefrontal cortex is activated by giving them the movement of the bar as feedback. Furthermore, I will introduce an EEG-based wheelchair system using the cBMI concept. With the cBMI, it is possible to build a more intuitive BMI system. It could help improve the cognitive function of healthy people, and could help activate the region around damaged areas in patients with prefrontal damage, such as patients with dementia or autism, by consistently activating their prefrontal cortex.

Adapting biomechanical simulation for physical ergonomics evaluation of new input methods

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Myroslav Bachynskyi, University of Glasgow/Max Planck Institute
Date: 28 October, 2015
Time: 15:00
Location: University of Glasgow

Recent advances in sensor technology and computer vision have allowed new computer input methods to emerge rapidly. These methods are often considered more intuitive and easier to learn than the conventional keyboard or mouse; however, most of them are poorly assessed with respect to their physical ergonomics and the health impact of their usage. The main reasons for this are the large input spaces provided by these interfaces, the absence of a reliable, cheap and easy-to-apply physical ergonomics assessment method, and the absence of biomechanics expertise among user interface designers. The goal of my research is to develop a physical ergonomics assessment method which provides support to interface designers at all stages of the design process, at low cost and without requiring specialized knowledge. Our approach is to extend biomechanical simulation tools developed for medical and rehabilitation purposes and adapt them to the Human-Computer Interaction setting. The talk gives an overview of problems related to the development of the method and shows answers to some of the fundamental questions.

Detecting Swipe Errors on Touchscreens using Grip Modulation

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Faizuddin Mohd Noor, University of Glasgow
Date: 22 October, 2015
Time: 14:00
Location: University of Glasgow

We show that when users make errors on mobile devices, they make immediate and distinct physical responses that can be observed with standard sensors. We used three standard cognitive tasks (Flanker, Stroop and SART) to induce errors from 20 participants. Using simple low-resolution capacitive touch sensors placed around a standard device and a built-in accelerometer, we demonstrate that errors can be predicted using micro-adjustments to hand grip and movement in the period after swiping the touchscreen. In a per-user model, our technique predicted error with a mean AUC of 0.71 in Flanker and 0.60 in Stroop and SART using hand grip, while with the accelerometer the mean AUC in all tasks was ≥ 0.90. Using a pooled, non-user-specific model, our technique achieved a mean AUC of 0.75 in Flanker and 0.80 in Stroop and SART using hand grip, and an AUC above 0.90 for all tasks with the accelerometer. Combining these features, we achieved an AUC of 0.96 (with false accept and reject rates both below 10%). These results suggest that hand grip and movement provide strong and very low latency evidence for mistakes, and could be a valuable component in interaction error detection and correction systems.

A conceptual model of the future of input devices

Group: Inference, Dynamics and Interaction (IDI)
Speaker: John Williamson, Computing Science
Date: 14 October, 2015
Time: 15:00 - 16:00
Location: University of Glasgow

Turning sensor engineering into advances in human-computer interaction is slow, ad hoc and unsystematic. I'll discuss a fundamental approach to input device engineering, and illustrate how machine learning could have the exponentially-accelerating impact in HCI that it has had in other fields.

[caveat: This is a proposal: there are only words, not results!]

Haptic Gaze Interaction - EVENT CANCELLED

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Poika Isokoski, University of Tampere, Finland
Date: 05 October, 2015
Time: 16:00 - 17:00
Location: University of Glasgow

Eye trackers that can be (somewhat) comfortably worn for long periods are now available. Thus, computing systems can track the gaze vector, and it can be used in interactions with mobile and embedded computing systems together with other input and output modalities. However, interaction techniques for these activities are largely missing. Furthermore, it is unclear how feedback on eye movements should be given to best support users' goals. This talk will give an overview of the results of our recent work in exploring haptic feedback on eye movements and building multimodal interaction techniques that utilize gaze data. I will also discuss some possible future directions in this line of research.

Challenges in Metabolomics, and some Machine Learning Solutions

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Simon Rogers, University of Glasgow
Date: 30 September, 2015
Time: 15:00 - 16:00
Location: University of Glasgow

Large scale measurement of the metabolites present in an organism is very challenging, but potentially highly rewarding in the understanding of disease and the development of drugs. In this talk I will describe some of the challenges in analysis of data from Liquid Chromatography - Mass Spectrometry, one of the most popular platforms for metabolomics. I will present Statistical Machine Learning solutions to several of these challenges, including the alignment of spectra across experimental runs, the identification of metabolites within the spectra, and finish with some recent work on using text processing techniques to discover conserved metabolite substructures.

Engaging with Music Retrieval

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Daniel Boland, University of Glasgow
Date: 09 September, 2015
Time: 15:00 - 16:00
Location: University of Glasgow

Music collections available to listeners have grown at a dramatic pace, now spanning tens of millions of tracks. Interacting with a music retrieval system can thus be overwhelming, with users offered ‘too-much-choice’. The level of engagement required for such retrieval interactions can be inappropriate, such as in mobile or multitasking contexts. Using listening histories and work from music psychology, a set of engagement-stratified profiles of listening behaviour are developed. The challenge of designing music retrieval for different levels of user engagement is explored with a system allowing users to denote their level of engagement and thus the specificity of their music queries. The resulting interaction has since been adopted as a component in a commercial music system.

Deep non-parametric learning with Gaussian processes

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Andreas Damianou, Sheffield University
Date: 10 June, 2015
Time: 15:00 - 16:00
Location: University of Glasgow

http://staffwww.dcs.sheffield.ac.uk/people/A.Damianou/research/index.html#DeepGPs

This talk will discuss deep Gaussian process models, a recent approach to combining deep probabilistic structures with Bayesian nonparametrics. The obtained deep belief networks are constructed using continuous variables connected with Gaussian process mappings; therefore, the methodology used for training and inference deviates from traditional deep learning paradigms. The first part of the talk will thus outline the associated computational tools, revolving around variational inference. In the second part, we will discuss models obtained as special cases of the deep Gaussian process, namely dynamical / multi-view / dimensionality reduction models and nonparametric autoencoders. The above concepts and algorithms will be demonstrated with examples from computer vision (e.g. high-dimensional video, images) and robotics (motion capture data, humanoid robotics).

Intermittent Control in Man and Machine

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Henrik Gollee, School of Engineering, Glasgow
Date: 30 April, 2015
Time: 15:00 - 16:00
Location: University of Glasgow

An intermittent controller generates a sequence of (continuous-time) parametrised trajectories whose parameters are adjusted intermittently, based on continuous observation. This concept is related to "ballistic" control and differs from i) discrete-time control in that the control is not constant between samples, and ii) continuous-time control in that the trajectories are reset intermittently.  The Intermittent Control paradigm evolved separately in the physiological and engineering literature. The talk will give details on the experimental verification of intermittency in biological systems and its applications in engineering.

Advantages of intermittent control compared to the continuous paradigm in the context of adaptation and learning will be discussed.

Get A Grip: Predicting User Identity From Back-of-Device Sensing

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Mohammad Faizuddin Md Noor, University of Glasgow, Inference, Dynamics and Interaction group
Date: 19 March, 2015
Time: 14:00 - 15:00
Location: University of Glasgow

We demonstrate that users can be identified using back-of-device handgrip changes during the course of interaction with a mobile phone, using simple, low-resolution capacitive touch sensors placed around a standard device. As a baseline, we replicated the front-of-screen experiments of Touchalytics and compared them with our results. We show that classifiers trained using back-of-device sensing could match or exceed the performance of classifiers trained using the Touchalytics approach. Our technique achieved a mean AUC, false accept rate and false reject rate of 0.9481, 3.52% and 20.66% for a vertical scrolling reading task, and 0.9974, 0.85% and 2.62% for a horizontal swiping game task. These results suggest that handgrip provides substantial evidence of user identity, and can be a valuable component of continuous authentication systems.

Towards Effective Non-Invasive Brain-Computer Interfaces Dedicated to Ambulatory Applications

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Matthieu Duvinage,
Date: 19 March, 2015
Time: 11:00 - 11:30
Location: University of Glasgow

Disabilities affecting mobility, in particular, often lead to exacerbated isolation and thus fewer communication opportunities, resulting in limited participation in social life. Additionally, as costs for the health-care system can be huge, rehabilitation-related devices and lower-limb prostheses (or orthoses) have been intensively studied. However, although many devices are now available, they rarely integrate the direct will of the patient. Indeed, they basically use motion sensors or residual muscle activities to track the next move.

Therefore, to integrate a more direct control from the patient, Brain-Computer Interfaces (BCIs) are here proposed and studied under ambulatory conditions. Basically, a BCI allows one to control any electric device without the need to activate muscles. In this work, the conversion of brain signals into a prosthesis kinematic control is studied following two approaches. First, the subject transmits his desired walking speed to the BCI. Then, this high-level command is converted into a kinematics signal thanks to a Central Pattern Generator (CPG)-based gait model, which is able to produce automatic gait patterns. Our work thus focuses on how BCIs behave in ambulatory conditions. The second strategy is based on the assumption that the brain is continuously controlling the lower limb; thus a direct interpretation, i.e. decoding, of the brain signals is performed. Here, our work consists of determining which parts of the brain signals can be used.

Gait analysis from a single ear-worn sensor

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Delaram Jarchi, Imperial College London
Date: 17 March, 2015
Time: 12:00 - 12:30
Location: University of Glasgow

Objective assessment of detailed gait patterns is important for clinical applications. One common approach to clinical gait analysis is to use multiple optical or inertial sensors affixed to the patient's body for detailed bio-motion and gait analysis. The complexity of sensor placement, and issues related to consistent sensor placement, have limited these methods to dedicated laboratory settings requiring the support of a highly trained technical team. The use of a single sensor for gait assessment has many advantages, particularly in terms of patient compliance and the possibility of remote monitoring of patients in the home environment. In this talk we look into the assessment of a single ear-worn sensor (e-AR sensor) for gait analysis, developing signal processing techniques and using a number of reference platforms inside and outside the gait laboratory. Results are presented for two clinical applications: post-surgical follow-up and rehabilitation of orthopaedic patients, and investigation of gait changes in Parkinson's Disease (PD) patients.

Imaging without cameras

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Matthew Edgar, School of Physics, Glasgow
Date: 05 March, 2015
Time: 14:00
Location: University of Glasgow

Conventional cameras rely upon a pixelated sensor to provide spatial resolution. An alternative approach replaces the sensor with a pixelated transmission mask encoded with a series of binary patterns. Combining knowledge of the series of patterns and the associated filtered intensities, measured by single-pixel detectors, allows an image to be deduced through data inversion. At Glasgow we have been extending the concept of a `single-pixel camera' to provide continuous real-time video in excess of 10 Hz, at non-visible wavelengths, using efficient computer algorithms. We have so far demonstrated some applications for our camera such as imaging through smoke, through tinted screens, and detecting gas leaks, whilst performing sub-Nyquist sampling. We are currently investigating the most effective image processing strategies and basis scanning procedures for increasing the image resolution and frame rates for single-pixel video systems.
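
The measure-then-invert idea can be sketched in a few lines for the fully sampled case; random binary masks stand in for the structured patterns, and the real system uses compressive (sub-Nyquist) reconstruction instead of a direct solve.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 16 * 16                                # 256-pixel image, flattened
    scene = rng.random(n)                      # unknown image
    P = rng.integers(0, 2, size=(n, n)).astype(float)   # binary mask patterns
    y = P @ scene                              # single-pixel detector readings
    recovered = np.linalg.solve(P, y)          # data inversion (fully sampled)
    # with m < n patterns, replace solve() with a regularised or sparse
    # reconstruction (e.g. np.linalg.lstsq, or compressed sensing solvers)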

Interactive Visualisation of Big Music Data.

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Beatrix Vad, LMU University, Munich
Date: 22 August, 2014
Time: 16:30 - 17:00
Location: University of Glasgow

Musical content can be described by a variety of features that are measured or inferred through the analysis of audio data. For a large music collection this establishes the possibility to retrieve information about its structure and underlying patterns. Dimensionality reduction techniques can be used to gain insight into such a high-dimensional dataset and to enable visualisation on two-dimensional screens. In this talk we investigate the usability of these techniques with respect to an interactive, mood-based exploration interface for large music collections. A method employing Gaussian Processes to extend the visualisation with additional information about its composition is presented and evaluated.

Behavioural Biometrics for Mobile Touchscreen Devices

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Daniel Buschek, LMU University, Munich
Date: 22 August, 2014
Time: 16:00 - 16:30
Location: University of Glasgow

Inference in non-linear dynamical systems – a machine learning perspective

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Carl Rasmussen, Cambridge University
Date: 08 July, 2014
Time: 15:00
Location: University of Glasgow

Inference in discrete-time non-linear dynamical systems is often done using the Extended Kalman Filtering and Smoothing (EKF) algorithm, which provides a Gaussian approximation to the posterior based on local linearisation of the dynamics. In challenging problems, when the non-linearities are significant and the signal-to-noise ratio is poor, the EKF performs poorly. In this talk we will discuss an alternative algorithm developed in the machine learning community which is based on message passing in factor graphs and the Expectation Propagation (EP) approximation. We will show that this method provides a consistent and accurate Gaussian approximation to the posterior, enabling system identification using Expectation Maximisation (EM) even in cases where the EKF fails.

Gaussian Processes for Big Data

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Dr James Hensman, University of Sheffield
Date: 03 April, 2014
Time: 15:00 - 16:00
Location: University of Glasgow

Gaussian Process (GP) models are widely applicable models of functions, and are used extensively in statistics and machine learning for regression, classification and as components of more complex models. Inference in a Gaussian process model usually costs O(n^3) operations, where n is the number of data. In the Big Data (tm) world, it would initially seem unlikely that GPs might contribute due to this computational requirement.

Parametric models have been successfully applied to Big Data (tm) using the Robbins-Monro gradient method, which allows data to be processed individually or in small batches. In this talk, I'll show how these ideas can be applied to Gaussian Processes. To do this, I'll form a variational bound on the marginal likelihood: we discuss the properties of this bound, including the conditions where we recover exact GP behaviour.

Our methods have allowed GP regression on hundreds of thousands of data points using a standard desktop machine. For more details, see http://auai.org/uai2013/prints/papers/244.pdf.
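
The Robbins-Monro scaffolding the talk builds on can be illustrated on a toy model; this is only the stochastic-gradient idea, not the variational GP bound itself.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000
    X = rng.normal(size=N)
    y = 2.5 * X + 0.1 * rng.normal(size=N)        # toy regression stream

    theta = 0.0
    for t in range(1, 2001):
        idx = rng.integers(0, N, size=256)        # minibatch of the data
        grad = -2 * np.mean(X[idx] * (y[idx] - theta * X[idx]))
        theta -= (1.0 / t) * grad                 # steps a_t = 1/t satisfy
                                                  # sum a_t = inf, sum a_t^2 < inf
    # theta -> 2.5 without ever touching all N points at once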

Machine Learning for Back-of-the-Device Multitouch Typing

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Daniel Buschek, LMU Munich
Date: 17 December, 2013
Time: 14:00
Location: University of Glasgow

IDI Seminar: Machine Learning for Back-of-the-Device Multitouch Typing

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Daniel Buschek, LMU University, Germany
Date: 17 December, 2013
Time: 11:00 - 12:00
Location: University of Glasgow

IDI Seminar: Uncertain Text Entry on Mobile Devices

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Daryl Weir, University of Glasgow
Date: 21 November, 2013
Time: 14:00 - 15:00
Location: University of Glasgow

Modern mobile devices typically rely on touchscreen keyboards for input. Unfortunately, users often struggle to enter text accurately on virtual keyboards. We undertook a systematic investigation into how to best utilize probabilistic information to improve these keyboards. We incorporate a state-of-the-art touch model that can learn the tap idiosyncrasies of a particular user, and show in an evaluation that character error rate can be reduced by up to 7% over a baseline, and by up to 1.3% over a leading commercial keyboard. We furthermore investigate how users can explicitly control autocorrection via how hard they touch.
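
A toy sketch of what "utilizing probabilistic information" can mean here: a per-key 2D Gaussian touch likelihood combined with a prior over characters. Key positions, the covariance and the uniform prior are invented for illustration.

    import numpy as np
    from scipy.stats import multivariate_normal

    keys = {"q": (0.0, 0.0), "w": (1.0, 0.0), "e": (2.0, 0.0)}  # toy key centres
    cov = np.array([[0.20, 0.05], [0.05, 0.30]])   # learned per-user tap scatter

    def key_posterior(tap, prior=None):
        names = list(keys)
        prior = prior or {k: 1.0 / len(names) for k in names}
        post = np.array([multivariate_normal.pdf(tap, keys[k], cov) * prior[k]
                         for k in names])
        return dict(zip(names, post / post.sum()))

    print(key_posterior((0.8, 0.2)))   # ambiguous tap between 'q' and 'w'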

IDI Seminar: Predicting Screen Touches From Back-of-Device Grip Changes

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Faizuddin Mohd Noor, University of Glasgow
Date: 14 November, 2013
Time: 14:00 - 15:00
Location: University of Glasgow

We demonstrate that front-of-screen targeting on mobile phones can be predicted from back-of-device grip manipulations. Using simple, low-resolution capacitive touch sensors placed around a standard phone, we outline a machine learning approach to modelling the grip modulation and inferring front-of-screen touch targets. We experimentally demonstrate that grip is a remarkably good predictor of touch, and we can predict touch position 200 ms before contact with an accuracy of 18 mm.

IDI Seminar: Around-device devices: utilizing space and objects around the phone

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Henning Pohl, University of Hannover
Date: 07 October, 2013
Time: 16:00 - 17:00
Location: University of Glasgow

For many people their phones have become their main everyday tool. While phones can fulfill many different roles, they also require users to (1) make do with affordances not specialized for the specific task, and (2) closely engage with the device itself. In this talk, I propose utilizing the space and objects around the phone to offer better task affordances and to create opportunities for casual interactions. Around-device devices are a class of interactors that do not require the user to bring special tangibles, but instead repurpose items already found in the user's surroundings. I'll present a survey study in which we determined which places and objects are available to around-device devices. I'll also talk about a prototype implementation of hand interactions and object tracking for future mobiles with built-in depth sensing.

IDI Seminar: Extracting meaning from audio – a machine learning approach

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Jan Larsen, Technical University of Denmark
Date: 03 October, 2013
Time: 15:00 - 16:00
Location: University of Glasgow

Interdependence and Predictability of Human Mobility and Social Interactions

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Mirco Musolesi, University of Birmingham
Date: 23 May, 2013
Time: 15:00 - 16:00
Location: University of Glasgow

The study of the interdependence of human movement and the social ties of individuals is one of the most interesting research areas in computational social science. Previous studies have shown that human movement is predictable to a certain extent at different geographic scales. One of the open problems is how to improve prediction by exploiting additional available information. In particular, a key question is how to characterise and exploit the correlation between the movements of friends and acquaintances to increase the accuracy of forecasting algorithms.

In this talk I will discuss the results of our analysis of the Nokia Mobile Data Challenge dataset, showing that, by means of multivariate nonlinear predictors, it is possible to exploit the mobility data of friends to improve user movement forecasting. This can be seen as a process of discovering correlation patterns in networks of linked social and geographic data. I will also show how mutual information can be used to quantify this correlation, and I will demonstrate how to use this quantity to select individuals with correlated mobility patterns in order to improve movement prediction. Finally, I will show how exploiting data about friends dramatically improves prediction compared with using information about people who have no social ties with the user.
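
The mutual-information idea can be illustrated with a toy computation on discretised location traces. The data, co-location rate, and estimator below are assumptions for illustration only, not the talk's dataset or method.

```python
# Toy sketch: mutual information between two users' discretised location
# traces (cell IDs per hour). A "friend" who co-locates often shares more
# information with the user than a stranger does. Synthetic data throughout.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(2)
user = rng.integers(0, 10, size=5000)               # hourly cell IDs
friend = np.where(rng.random(5000) < 0.6, user,     # assumed 60% co-location
                  rng.integers(0, 10, size=5000))
stranger = rng.integers(0, 10, size=5000)

print(mutual_info_score(user, friend))     # high -> useful for prediction
print(mutual_info_score(user, stranger))   # near zero -> little help
```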

Flexible models for high-dimensional probability distributions

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Iain Murray, University of Edinburgh
Date: 04 April, 2013
Time: 15:00 - 16:00
Location: University of Glasgow

Statistical modelling often involves representing high-dimensional probability distributions. The textbook baseline methods, such as mixture models (non-parametric Bayesian or not), often don't use data efficiently, whereas the more flexible methods proposed in the machine learning literature, such as Gaussian process density models and undirected neural network models, are often too computationally expensive to use. Using a few case studies, I will argue for increased use of flexible autoregressive models as a strong baseline for general use.
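
The appeal of autoregressive models is that the chain rule, p(x) = p(x1) p(x2|x1) ... p(xD|x1..xD-1), holds for any ordering, so each one-dimensional conditional can be fitted with whatever model is convenient. A minimal sketch with linear-Gaussian conditionals follows; this deliberately simple choice is a stand-in for the flexible conditionals discussed in the talk.

```python
# Minimal sketch of an autoregressive density model: fit each conditional
# p(x_d | x_<d) separately, here as a linear-Gaussian regression.
# Illustrative only; richer conditionals give models such as NADE.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.multivariate_normal([0, 0, 0],
                            [[1.0, 0.8, 0.3], [0.8, 1.0, 0.5], [0.3, 0.5, 1.0]],
                            size=2000)

conditionals = []
for d in range(X.shape[1]):
    if d == 0:                                   # p(x_1): a plain Gaussian
        conditionals.append(("marginal", X[:, 0].mean(), X[:, 0].var()))
    else:                                        # p(x_d | x_<d): linear-Gaussian
        reg = LinearRegression().fit(X[:, :d], X[:, d])
        resid_var = np.var(X[:, d] - reg.predict(X[:, :d]))
        conditionals.append(("conditional", reg, resid_var))

def log_density(x):
    """Sum the log of each fitted conditional at point x (chain rule)."""
    lp = 0.0
    for d, spec in enumerate(conditionals):
        if spec[0] == "marginal":
            _, mu, var = spec
        else:
            _, reg, var = spec
            mu = reg.predict(x[:d].reshape(1, -1))[0]
        lp += -0.5 * (np.log(2 * np.pi * var) + (x[d] - mu) ** 2 / var)
    return lp

print(log_density(np.zeros(3)))
```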

Pre-interaction Identification By Dynamic Grip Classification

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Faizuddin Mohd Noor, University of Glasgow
Date: 28 February, 2013
Time: 14:00 - 15:00
Location: University of Glasgow

We present a novel authentication method that identifies users as they pick up a mobile device. We use a combination of back-of-device capacitive sensing and accelerometer measurements to perform classification, and obtain increased performance compared to previous accelerometer-only approaches. Our initial results suggest that users can be reliably identified during the pick-up movement, before interaction commences.
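
A hypothetical sketch of the classification step: concatenate capacitive and accelerometer features from the pick-up window and classify the user. The synthetic data and the SVM classifier are assumptions for illustration, not the study's setup.

```python
# Hypothetical sketch of pick-up authentication as user classification over
# concatenated capacitive + accelerometer features. Synthetic data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_users, trials, cap_dim, acc_dim = 5, 40, 24, 30   # dimensions assumed
X, y = [], []
for user in range(n_users):
    style = rng.normal(size=cap_dim + acc_dim)      # user-specific grip style
    for _ in range(trials):
        X.append(style + 0.5 * rng.normal(size=cap_dim + acc_dim))
        y.append(user)
X, y = np.array(X), np.array(y)

print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())  # identification accuracy
```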

Evaluating Bad Query Abandonment in an Iterative SMS-Based FAQ Retrieval System

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Edwin Thuma, University of Glasgow
Date: 14 February, 2013
Time: 14:00 - 15:00
Location: University of Glasgow

We investigate how many iterations users are willing to tolerate in an iterative Frequently Asked Question (FAQ) system that provides information on HIV/AIDS. This is part of work in progress that aims to develop an automated Frequently Asked Question system that can be used to provide answers on HIV/AIDS related queries to users in Botswana. Our system engages the user in the question answering process by following an iterative interaction approach in order to avoid giving inappropriate answers to the user. Our findings provide us with an indication of how long users are willing to engage with the system. We subsequently use this to develop a novel evaluation metric to use in future developments of the system. As an additional finding, we show that the previous search experience of the users has a significant effect on their future behaviour.

IDI Seminar

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Konstantinos Georgatzis, University of Edinburgh
Date: 29 November, 2012
Time: 14:00 - 15:00
Location: University of Glasgow

Efficient Optimisation for Data Visualisation as an Information Retrieval Task

Visualisation of multivariate data sets is often done by mapping the data onto a low-dimensional display with nonlinear dimensionality reduction (NLDR) methods. We have introduced a formalism in which NLDR for visualisation is treated as an information retrieval task, and a novel NLDR method called the Neighbor Retrieval Visualiser (NeRV) which outperforms previous methods. The remaining concern is that NeRV has quadratic computational complexity with respect to the number of data points. We introduce an efficient learning algorithm for NeRV in which relationships between data are approximated through mixture modelling, yielding near-linear computational complexity in the number of data points. The method is much faster to optimise as the number of data points grows, and it maintains good visualisation performance.
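
The retrieval formalism scores an embedding with KL divergences between input-space and display-space neighbourhood distributions, trading off retrieval precision and recall. A minimal sketch of that cost follows; the sigma and lambda values are illustrative assumptions, and the mixture-model speed-up from the talk is not shown.

```python
# Sketch of the NeRV-style objective: compare neighbourhood distributions
# p (input space) and q (display space) with KL divergences in both
# directions, weighted by lambda. Sigma and lambda are illustrative.
import numpy as np
from scipy.spatial.distance import cdist

def neighbour_probs(points, sigma=1.0):
    """Row-wise softmax over negative squared distances (self excluded)."""
    d2 = cdist(points, points, "sqeuclidean")
    np.fill_diagonal(d2, np.inf)                  # exclude self-neighbourhood
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

def nerv_cost(X, Y, lam=0.5, eps=1e-12):
    p, q = neighbour_probs(X), neighbour_probs(Y)
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)), axis=1)   # recall-like term
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)), axis=1)   # precision-like term
    return lam * kl_pq.sum() + (1 - lam) * kl_qp.sum()

X = np.random.default_rng(5).normal(size=(200, 10))   # high-dimensional data
Y = X[:, :2].copy()                                   # a candidate 2D embedding
print(nerv_cost(X, Y))
```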
