Information, Data and Analysis (IDA)

Overview

Technological advances in sensing, data acquisition and mobile devices, together with the impact of the Internet, are producing ever-larger volumes of data, sampled more rapidly and comprehensively than ever before. If we are to acquire novel insights and knowledge from this data, its growth must be matched by innovations in data management, storage and retrieval, and ultimately in data analytics. The many forms of data, and their complexity and variation, present challenges that range from information and data systems, through algorithms and model-based inference about patterns, to visualisation, communication and human-computer interaction.

The Information, Data and Analysis Section is led by Professor Roderick Murray-Smith and has 13 academics and 35 Post-Doctoral Fellows, Research Assistants and Ph.D. students active in this area. Our research is organised in four world-leading groups: data systems, human-computer interaction & machine learning, information retrieval, and computer vision & autonomous systems.


Section members

 

Professor Roderick Murray-Smith

Professor (Computing Science)

Research interests: Mobile Human Computer Interaction; Machine Learning; Brain Computer Interaction; Dynamic Systems; Probabilistic Inference


Professor Iadh Ounis

Professor of Information Retrieval (Computing Science)

Research interests: Web and enterprise search engines; Large-scale information retrieval systems; Opinion finding and results diversification; Searching and mining within electronic health records; Social media retrieval (blog, twitter, news, etc).


Professor Joemon Jose

Professor of Information Retrieval (Computing Science)

Research interests: Adaptive and personalized search systems; Multimodal interaction for information retrieval; Emotion based Search and browsing; Temporal Information Retrieval; Search result diversification and aggregation; Recommendation and collaborative filtering


Dr Paul Siebert

Reader (Computing Science)


Dr Simon Rogers

Senior Lecturer (Computing Science)


Dr John Williamson

Lecturer (School of Computing Science)


Dr Ke Yuan

Lecturer in Computing Science (Machine Learning in Computational Biology) (Computing Science)


Dr Craig MacDonald

Senior Lecturer (School of Computing Science)


Dr Jeff Dalton

Lecturer (School of Computing Science)


Dr Richard McCreadie

Lecturer (School of Computing Science)


Dr Nikos Ntarmos

Lecturer (School of Computing Science)

Research interests: data management, distributed systems, big data, indexing, query processing, NoSQL, graph data, scale out, scale up, databases, systems


Dr Christos Anagnostopoulos


Lecturer (Computing Science)

Research interests: Large-scale Mobile and Distributed Computing Systems, Machine and Statistical Learning, Stochastic Optimization


Dr Bjorn Jensen

Lecturer in Computing Science (Applied Machine Learning) (Computing Science)


Dr Gerardo Aragon Camarasa


Lecturer in Computing Science (Autonomous and Socially Intelligent Robotics) (Computing Science)

Research interests: My research is in the multidisciplinary areas of robotics, socially aware machines, robot-robot interaction, chemical robotics, machine perception/vision and geometric algebras.


Events this week

There are currently no events scheduled this week


Upcoming events



Past events

Trained to Fuzz! (13 May, 2019)

Speaker: Martin Sablotny

Software testing is used to ensure the correct functionality of a program and to discover flaws which can introduce security issues. A prominent software testing technique is so-called fuzz testing, in which a test case generator creates input data for a program under test and its execution is monitored to discover unintended behaviour. However, developing test case generators for fuzz testing is a labour-intensive task, mainly because the format specifications must be studied and reimplemented before any test cases can be generated. In this talk, I'll outline a novel machine-learning-based approach that can significantly speed up the development of fuzz testers. First, I'll show that it is possible to improve an existing fuzzer by utilising generative deep learning methods, and provide guidance on how to select a well-performing model without actually executing any test cases. Secondly, readily available real-world data is used to train a test generator from the ground up. Finally, I will outline how deep reinforcement learning can be applied to fuzz testing to teach the fuzzer to generate test cases that maximise code coverage in a closed-loop manner.

SICSA DVF Masterclass - Predicting multi-view and structured data with kernel methods (10 May, 2019)

Speaker: Prof. Juho Rousu (SICSA DVF)

During the last two decades, kernel methods - including, but not limited to, the celebrated support vector machine - have been extremely successful in many walks of life. They continue to be a good alternative to deep neural networks in many real-world applications where data is complex and high-dimensional, and the amount of training data is medium-scale - from hundreds to a few tens of thousands of training examples.

In this masterclass I will focus on how kernel methods can be used for applications where the prediction setup involves heterogeneous or structured data, in particular learning with multiple data sources and predicting structured output.
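As a concrete, deliberately simple illustration of combining kernels from several data "views", the sketch below builds per-view RBF kernels, mixes them with fixed weights and fits kernel ridge regression on the combined kernel. It is a toy baseline under assumed uniform view weights and random data, not the masterclass material itself.

```python
# Toy multi-view kernel combination + kernel ridge regression (illustrative only;
# the uniform view weights, RBF kernels and random toy data are assumptions).
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def combined_kernel(views_a, views_b, weights):
    """Convex combination of per-view kernels: K = sum_v w_v * K_v."""
    return sum(w * rbf_kernel(Xa, Xb) for w, Xa, Xb in zip(weights, views_a, views_b))

def fit_kernel_ridge(K, y, lam=1e-2):
    """Closed-form kernel ridge solution: alpha = (K + lam*I)^{-1} y."""
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(100, 5)), rng.normal(size=(100, 3))   # two views of 100 points
y = rng.normal(size=100)
w = [0.5, 0.5]                                                   # fixed, uniform view weights
alpha = fit_kernel_ridge(combined_kernel([X1, X2], [X1, X2], w), y)
K_test = combined_kernel([X1[:10], X2[:10]], [X1, X2], w)
predictions = K_test @ alpha                                     # predictions for 10 points
```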

 

Bibliography

Bhadra, S., Kaski, S. and Rousu, J., 2017. Multi-view kernel completion. Machine Learning, 106(5), pp.713-739.

Cichonska, A., Pahikkala, T., Szedmak, S., Julkunen, H., Airola, A., Heinonen, M., Aittokallio, T. and Rousu, J., 2018. Learning with multiple pairwise kernels for drug bioactivity prediction. Bioinformatics, 34(13), pp.i509-i518.

Hue, M. and Vert, J.P., 2010, June. On learning with kernels for unordered pairs. In ICML (pp. 463-470).

Marchand, M., Su, H., Morvant, E., Rousu, J. and Shawe-Taylor, J.S., 2014. Multilabel structured output learning with random spanning trees of max-margin Markov networks. In Advances in Neural Information Processing Systems (pp. 873-881).

Schölkopf, B. and Smola, A.J., 2001. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press.

Shawe-Taylor, J. and Cristianini, N., 2004. Kernel methods for pattern analysis. Cambridge University Press.

Su, H., Gionis, A. and Rousu, J., 2014, January. Structured prediction of network response. In International Conference on Machine Learning (pp. 442-450).

Su, H. and Rousu, J., 2015. Multilabel classification through random graph ensembles. Machine Learning, 99(2).

Taskar, B., Guestrin, C. and Koller, D., 2004. Max-margin Markov networks. In Advances in neural information processing systems (pp. 25-32).

Tsochantaridis, I., Joachims, T., Hofmann, T. and Altun, Y., 2005. Large margin methods for structured and interdependent output variables. Journal of machine learning research, 6(Sep), pp.1453-1484.

Machine Learning for Energy Disaggregation (30 April, 2019)

Speaker: Mingjun Zhong

The speaker is a candidate for a Lectureship in the School.

Energy disaggregation, i.e., non-intrusive load monitoring, is a technique for separating the consumption of individual home appliances from only the mains electricity meter readings. Energy disaggregation is a single-channel blind source separation problem and is thus unidentifiable. In this talk, I will present how machine learning methods can be devised to tackle this unidentifiable problem. Firstly, energy disaggregation was represented as a factorial hidden Markov model (FHMM). Bayesian methods were then developed to infer the appliance sources from the mains readings. I will present how domain knowledge can be integrated into the FHMM to alleviate the unidentifiability problem. Secondly, energy disaggregation was represented as a supervised learning problem, and we proposed sequence-to-point (seq2point) learning with neural networks for energy disaggregation. Interestingly, we showed that interpretable fingerprints for electrical appliances could be extracted from the mains readings, and these are essentially what the model uses for disaggregation.
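To make the seq2point idea more concrete, here is a minimal PyTorch sketch: a window of mains readings is mapped by 1-D convolutions to a single appliance reading at the window midpoint. The layer sizes and window length are illustrative assumptions, not the architecture from the talk.

```python
# A minimal sketch of a sequence-to-point style network: a window of mains
# readings goes in, one appliance reading (for the window midpoint) comes out.
# Layer sizes are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class Seq2Point(nn.Module):
    def __init__(self, window_size=99):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * window_size, 128), nn.ReLU(),
            nn.Linear(128, 1),                # appliance power at the midpoint
        )

    def forward(self, mains_window):          # shape: (batch, 1, window_size)
        return self.head(self.features(mains_window))

model = Seq2Point()
mains = torch.randn(8, 1, 99)                 # a batch of mains windows
midpoint_power = model(mains)                 # shape: (8, 1)
```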

Multimodal Deep Learning with High Generalisation across Mobile Recognition Tasks (23 April, 2019)

Speaker: Valentin Radu

Lectureship candidate.

A growing number of devices around us embed a variety of sensors and enough computation power to make them intelligent (e.g., smartphones, smart-watches, smart-toothbrushes). Despite the many sensors available, applications often use just one sensor for a task, e.g., the accelerometer to count steps, or the barometer to detect changes in elevation. They thus miss the opportunity to capture complementary sensing perspectives from multiple sensors, which could increase robustness and enable more advanced context recognition. Combining many sensing modalities is not easy. In this presentation I will show that deep learning can gracefully and efficiently integrate diverse sensing modalities across many recognition tasks. In our proposed solution, we dedicate neural network structures to extracting features specific to each sensing modality, followed by additional bridging layers that perform the classification across the distilled features. We show this approach generalises well across a number of recognition tasks specific to mobile and wearable devices, while operating within suitable energy budgets.
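The architecture described above (modality-specific feature extractors followed by bridging layers) can be sketched in a few lines of PyTorch; the modalities, dimensions and layer sizes below are placeholders, not the speaker's actual model.

```python
# Sketch of modality-specific encoders plus bridging layers (illustrative sizes).
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    def __init__(self, accel_dim=60, gyro_dim=60, n_classes=6):
        super().__init__()
        # one small encoder per sensing modality
        self.accel_enc = nn.Sequential(nn.Linear(accel_dim, 32), nn.ReLU())
        self.gyro_enc = nn.Sequential(nn.Linear(gyro_dim, 32), nn.ReLU())
        # bridging layers classify across the concatenated, distilled features
        self.bridge = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, accel, gyro):
        fused = torch.cat([self.accel_enc(accel), self.gyro_enc(gyro)], dim=1)
        return self.bridge(fused)

logits = MultimodalNet()(torch.randn(4, 60), torch.randn(4, 60))  # shape (4, 6)
```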

Small Molecule Identification through Machine Learning: CSI:FingerID and beyond (17 April, 2019)

Speaker: Prof. Juho Rousu (SICSA DVF)

Abstract
Identification of small molecules from biological samples remains a major bottleneck in understanding the inner workings of biological cells and their environment. Machine learning on data from large public databases of tandem mass spectrometric data has transformed this field in recent years, with tools like CSI:FingerID and CSI:IOKR demonstrating a step-change improvement in identification rates compared to previous approaches. In this presentation, I will give an overview of the technology inside these tools and review some recent developments in making use of additional information sources for improving the identification rates, in particular learning to predict the order of molecules eluting from a liquid-chromatography system.

 
References:
Bach, E., Szedmak, S., Brouard, C., Böcker, S. and Rousu, J., 2018. Liquid-chromatography retention order prediction for metabolite identification. Bioinformatics, 34(17), pp.i875-i883.
Brouard, C., Bach, E., Böcker, S. and Rousu, J., 2017, November. Magnitude-preserving ranking for structured outputs. In Asian Conference on Machine Learning (pp. 407-422).
Brouard, C., Shen, H., Dührkop, K., d'Alché-Buc, F., Böcker, S. and Rousu, J., 2016. Fast metabolite identification with input output kernel regression. Bioinformatics, 32(12), pp.i28-i36.
Dührkop, K., Fleischauer, M., Ludwig, M., Aksenov, A.A., Melnik, A.V., Meusel, M., Dorrestein, P.C., Rousu, J. and Böcker, S., 2019. SIRIUS 4: a rapid tool for turning tandem mass spectra into metabolite structure information. Nature Methods, 16, pp.299-302.
Dührkop, K., Shen, H., Meusel, M., Rousu, J. and Böcker, S., 2015. Searching molecular structure databases with tandem mass spectra using CSI: FingerID. Proceedings of the National Academy of Sciences, 112(41), pp.12580-12585.

=====
Short Bio:
Juho Rousu is a Professor of Computer Science at Aalto University, Finland. Rousu obtained his PhD in 2001 from the University of Helsinki, while working at VTT Technical Research Centre of Finland. In 2003-2005 he was a Marie Curie Fellow at Royal Holloway, University of London. In 2005-2011 he held Lecturer and Professor positions at the University of Helsinki, before moving to Aalto University in 2012, where he leads a research group on Kernel Methods, Pattern Analysis and Computational Metabolomics (KEPACO). Rousu's main research interest is learning with multiple and structured targets, multiple views and ensembles, with a methodological emphasis on regularised learning, kernels and sparsity, as well as efficient convex/non-convex optimisation methods. His applications of interest include metabolomics, biomedicine, pharmacology and synthetic biology.

IR Seminar: Recommendations in a Marketplace: Personalizing Explainable Recommendations with Multi-objective Contextual Bandits (08 April, 2019)

Speaker: Rishabh Mehrotra

In recent years, two-sided marketplaces have emerged as viable business models in many real-world applications (e.g. Amazon, Airbnb, Spotify, YouTube), wherein the platforms have customers not only on the demand side (e.g. users) but also on the supply side (e.g. retailers, artists). Such multi-sided marketplaces involve interactions between multiple stakeholders: different individuals with assorted needs. While traditional recommender systems focus specifically on increasing consumer satisfaction by providing relevant content to consumers, two-sided marketplaces face the interesting problem of also optimizing their models for supplier preferences and visibility.

In this talk, we begin by describing a contextual bandit model developed for serving explainable music recommendations to users and showcase the need for explicitly considering supplier-centric objectives during optimization. To jointly optimize the objectives of the different marketplace constituents, we present a multi-objective contextual bandit model aimed at maximizing long-term vectorial rewards across different competing objectives. Finally, we discuss theoretical performance guarantees as well as experimental results with historical log data and tests with live production traffic in a large-scale music recommendation service.

 
Bio:
Rishabh Mehrotra is a Research Scientist at Spotify Research in London. He obtained his PhD in the field of Machine Learning and Information Retrieval from University College London, where he was partially supported by a Google Research Award. His PhD research focused on the inference of search tasks from query logs and their applications. His current research focuses on bandit-based recommendations, counterfactual analysis and experimentation. Some of his recent work has been published at top conferences including WWW, SIGIR, NAACL, CIKM, RecSys and WSDM. He has co-taught a number of tutorials at leading conferences (WWW and CIKM) and was recently invited to teach a course on "Learning from User Interactions" at a number of summer schools, including the Russian Summer School on Information Retrieval and the ACM SIGKDD Africa Summer School on Machine Learning for Search.

IR seminar: Unbiased Learning to Rank from User Interactions (01 April, 2019)

Speaker: Harrie Oosterhuis

Learning to rank provides methods for optimizing ranking systems, enabling effective search and recommendation. Traditionally, these methods relied on annotated datasets, i.e. relevance labels for query-document pairs provided by human judges. Over the years, the limitations of such datasets have become apparent. Recently, attention has mostly shifted to methods that learn from user interactions, as they more closely indicate user preferences. However, user interactions contain large amounts of noise and bias, and learning from them while naively ignoring these biases can lead to detrimental results. Consequently, the current focus is on unbiased methods that can reliably learn from user interactions. In this talk I will contrast the two main approaches to unbiased learning to rank: counterfactual learning and online learning, and discuss the most recent methods from the field.
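For readers unfamiliar with the counterfactual branch mentioned above, the toy sketch below shows the core inverse-propensity-scoring idea: each logged click contributes to the loss with weight 1 / P(the clicked position was examined), so that position bias cancels in expectation. The softmax surrogate loss and the hand-set propensities are illustrative assumptions, not the speaker's method.

```python
# Toy inverse-propensity-scored (counterfactual) learning-to-rank objective.
import numpy as np

def ips_loss(scores, clicked, propensities):
    """Softmax cross-entropy over clicked documents, reweighted by examination
    propensities so that position bias cancels in expectation (lower is better)."""
    exp = np.exp(scores - scores.max())
    p = exp / exp.sum()                               # listwise softmax surrogate
    loss = 0.0
    for i, was_clicked in enumerate(clicked):
        if was_clicked:
            loss += -np.log(p[i]) / propensities[i]   # IPS reweighting of the click
    return loss

scores = np.array([2.0, 1.0, 0.5, 0.1])               # model scores for 4 documents
clicked = np.array([0, 1, 0, 0], dtype=bool)          # logged click at rank 2
propensities = np.array([1.0, 0.7, 0.4, 0.2])         # assumed P(examined | rank)
print(ips_loss(scores, clicked, propensities))
```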
 
Bio:
Harrie Oosterhuis (https://staff.fnwi.uva.nl/h.r.oosterhuis) is a third-year PhD student under the supervision of Prof. Maarten de Rijke at the University of Amsterdam. His main topic is learning to rank from user behaviour, and he has publications at major IR conferences including CIKM, SIGIR, ECIR and WSDM. In addition, he has completed multiple internships at Google Research & Brain in California, and worked as a visiting student at RMIT University in Melbourne during his PhD.

On the Road to a Transfer Learning Paradigm based on Interpretable Factors of Variation (29 March, 2019)

Speaker: Tameem Adel

For the last two years, I have been working on addressing challenges and limitations of deep models, most notably challenges that relate to the integration of such models within real-world applications, e.g. interpretability and fairness.

I will show an example of an algorithm, referred to as prediction difference analysis, providing (local) explanations of classification decisions taken by deep models. On the other hand, developing global explanations by learning interpretable data representations is also becoming ever more important as machine learning models grow in size and complexity. In our ICML-2018 paper, we proposed two rather contrasting interpretability frameworks. The first aims at controlling the accuracy vs. interpretability tradeoff by providing an interpretable lens for an existing model (which has already been optimized for accuracy). We developed an interpretable latent variable model whose data are the representation in an existing (generative or discriminative) model, weakly supervised by limited side information. We extended the approach using an active learning strategy to choose the most useful side information to obtain, allowing a human to guide what "interpretable" means. The second framework relies on joint optimization for a representation which is both maximally informative about the interpretable information and maximally compressive about the non-interpretable data factors. This leads to a novel perspective on the relationship between compression and regularization. An intriguing, related perspective is that of developing a quantified interpretability paradigm where learning can be transferred among tasks, based on (partially) interpretable factors of variation.

I will also briefly speak about other topics I have been working on prior to that, e.g. learning and approximate inference on probabilistic graphical models (PGMs) and transfer learning.

Post-CHIIR IR Seminar (15 March, 2019)

Speaker: Jaime Arguello and Adam Roegiest

This week we have a special post-CHIIR edition of the IR seminar, with two speakers from North America speaking this Friday afternoon.
 
Talk 1: Understanding How Cognitive Abilities Influence Search Behaviors and Outcomes, by Jaime Arguello (University of North Carolina at Chapel Hill)
Talk 2: Total Recall and Beyond: Real-world experience in the legal domain, by Adam Roegiest (Research Scientist at Kira Systems)
When: 3-4pm, Friday 15 March
Where: SAWB 422
 
 
Details of both talks are below
 
Title: Understanding How Cognitive Abilities Influence Search Behaviors and Outcomes
Traditionally, personalization in IR has meant predicting which results to return based on a user's query and interest profile.  However, personalization in IR should also consider how to display results based on a user's cognitive abilities.  In this talk, I will summarize several studies that have investigated the effects of different cognitive abilities on search behaviors (e.g., how easily users find relevant results on a SERP) and search outcomes (e.g., users' perceptions of workload and engagement).  In these studies, we have considered cognitive abilities such as perceptual speed, working memory, and inhibitory attention control.  Additionally, we have considered how cognitive abilities interact with other factors such as different SERP layouts and search task types.  For example, does perceptual speed have a stronger influence for SERPs that are more visually complex? I will discuss challenges faced in conducting these studies and implications for designing systems that are well-suited to users' individual cognitive abilities.
 
Bio: Jaime Arguello is an Associate Professor at the School of Information and Library Science at the University of North Carolina at Chapel Hill.  Jaime received his Ph.D. from the Language Technologies Institute at Carnegie Mellon University in 2011.  Since then, his research has focused on a wide range of areas, including aggregated/federated search, voice query reformulation, understanding search behaviors during complex tasks, developing search assistance tools for complex tasks, and (more closely related to this talk) understanding the effects of different cognitive abilities on search behaviors and outcomes.  He has received Best Paper Awards at ECIR 2017, IIiX 2014, ECIR 2011, and SIGIR 2009.  His current research is supported by two NSF grants.  Since 2015, Jaime has chaired the SIGIR Travel Awards Program, which helps support about 160 students per year to attend SIGIR-sponsored conferences.
 
Title: Total Recall and Beyond: Real-world experience in the legal domain
In this talk, I will discuss the benefits and drawbacks of working on real-world research problems. This begins with a discussion of my work coordinating the TREC Total Recall track and subsequent investigations. Following this, I will discuss my experiences developing features to aid in performing due diligence. Tying these experiences together is a focus on the legal domain and the need to make results accessible to non-experts; the work covers both system evaluations and several user studies.
 
Bio: Adam Roegiest is a Research Scientist at Kira Systems, where he spends time developing machine learning algorithms to help lawyers perform due diligence. As part of this work, he collaborates with designers and legal professionals to ensure that these algorithms and their results are accessible to non-experts. Prior to working at Kira Systems, Adam completed his PhD at the University of Waterloo, where he studied the design and evaluation of high-recall systems for technology-assisted review and helped coordinate the TREC Total Recall and Real-Time Summarization tracks.

IR Seminar: Topic-centric sentiment analysis of UK parliamentary debate transcripts (25 February, 2019)

Speaker: Gavin Abercrombie

Debate transcripts from the UK House of Commons provide access to a wealth of information concerning the opinions and attitudes of politicians and their parties towards arguably the most important topics facing societies and their citizens, as well as potential insights into the democratic processes that take place within Parliament.


By applying natural language processing and machine learning methods to debate speeches, it is possible to automatically determine the attitudes and positions expressed by speakers towards the topics they discuss.


This talk will focus on research on speech-level sentiment analysis and opinion-topic/policy detection, as well as discussing the challenges of working in this domain.

 

Bio
Gavin Abercrombie holds a Master's degree in IT & Cognition from the University of Copenhagen, and is currently a second-year PhD student at the School of Computer Science, University of Manchester. His research interests include natural language understanding and computational social science.

Challenges and Opportunities at the Intersection of the Computing and Social Sciences (21 February, 2019)

Speaker: Multiple speakers

The workshop aims to bring together social, political and computer scientists to discuss the challenges and opportunities when studying political events and campaigns especially on & through social media. Speakers include UoG's Assistant VP Des McNulty, Philip Habel (USA), Zac Green (Strathclyde) and our own Anjie Fang, who will be defending his PhD this week.

Joint Variational Uncertain Input Gaussian Processes (20 February, 2019)

Speaker: Carl Edward Rasmussen & Adrià Garriga-Alonso

Standard mean-field variational inference in Gaussian Processes with uncertain inputs systematically underestimates posterior uncertainty. In particular, the factorisation assumption employed in the approximating distribution severely limits the framework’s accuracy. We lift this assumption, and show that the resulting scheme gives much more realistic predictive uncertainties, and can be implemented in a sparse and practical way. The algorithm has implications for latent variable models generally, including stacked (Deep) GPs and time series models.

IDI Journal Club: Graph Attention Networks (31 January, 2019)

Speaker: Joshua Mitton

In this journal club meeting, Josh will lead the discussion of the paper "Graph Attention Networks".

Abstract:

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods’ features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of computationally intensive matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).

Paper:

https://mila.quebec/wp-content/uploads/2018/07/d1ac95b60310f43bb5a0b8024522fbe08fb2a482.pdf
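As a rough, single-head illustration of the masked self-attention described in the abstract, the numpy sketch below computes attention coefficients only over each node's neighbourhood and uses them to aggregate neighbour features. The parameter shapes, LeakyReLU slope and toy graph are assumptions, not the paper's exact configuration (which uses multiple heads).

```python
# Minimal single-head graph-attention layer over an adjacency mask (illustrative).
import numpy as np

def gat_layer(H, A, W, a):
    """H: (n, f) node features; A: (n, n) adjacency with self-loops;
    W: (f, f2) projection; a: (2*f2,) attention vector."""
    Z = H @ W                                             # projected node features
    n = Z.shape[0]
    logits = np.full((n, n), -np.inf)
    for i in range(n):
        for j in range(n):
            if A[i, j]:                                   # mask: neighbours only
                e = np.concatenate([Z[i], Z[j]]) @ a
                logits[i, j] = np.maximum(0.2 * e, e)     # LeakyReLU
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha = np.where(A > 0, alpha, 0.0)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)      # softmax over each neighbourhood
    return alpha @ Z                                      # attention-weighted aggregation

n, f = 5, 8
H = np.random.randn(n, f)
A = (np.random.rand(n, n) > 0.5).astype(float) + np.eye(n)    # random graph + self-loops
out = gat_layer(H, A, np.random.randn(f, 4), np.random.randn(8))   # (5, 4) output features
```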

Big Hypotheses: a generic tool for fast Bayesian Machine Learning (18 January, 2019)

Speaker: Prof. Simon Maskell

There are many machine learning tasks that would ideally involve global optimisation across some parameter space. Researchers often pose such problems in terms of sampling from the distribution and favour Markov Chain Monte Carlo (MCMC) or its derivatives (e.g., Gibbs sampling, Hamiltonian Monte Carlo (HMC) and simulated annealing). While these techniques can offer good results, they are slow. We describe an alternative numerical Bayesian algorithm, the Sequential Monte Carlo (SMC) sampler. SMC samplers are closely related to particle filters and are reminiscent of genetic algorithms. More specifically, an SMC sampler replaces the single Markov chain considered by MCMC with a population of samples. The inherent parallelism makes the SMC sampler a promising starting point for developing a scalable Bayesian global optimiser, e.g., one that runs 86,400 times faster than MCMC and might be able to be 86,400 times more computationally efficient. The University of Liverpool and STFC's Hartree Centre have recently started working on a £2.5M EPSRC-funded project (with significant support from IBM, NVidia, Intel and Atos) to develop SMC samplers into a general-purpose, scalable numerical Bayesian optimiser and embody them as a back-end in the software package Stan. This talk will summarise recent developments, initial results (in a subset of problems posed by AstraZeneca, AWE, Dstl, Unilever, physicists, chemists, biologists and psychologists) and planned work over the next 5 years towards developing a high-performance parallel Bayesian inference implementation that can be used for a wide range of problems relevant to researchers working in a range of application domains.
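For readers new to SMC samplers, the sketch below shows the basic mechanics on a toy one-dimensional problem: a population of samples is moved through a sequence of tempered targets by importance reweighting, resampling and a random-walk Metropolis move. It is a minimal illustration, not the Big Hypotheses implementation; the tempering schedule, move kernel and toy prior/likelihood are all assumptions.

```python
# Minimal SMC sampler with likelihood tempering on a toy 1-D posterior.
import numpy as np

rng = np.random.default_rng(1)

def log_prior(x):                 # broad Gaussian prior, N(0, 5^2)
    return -0.5 * (x / 5.0) ** 2

def log_like(x):                  # toy likelihood, standard normal
    return -0.5 * x ** 2

def smc_sampler(n=1000, steps=20):
    x = rng.normal(0.0, 5.0, n)                        # population drawn from the prior
    logw = np.zeros(n)
    betas = np.linspace(0.0, 1.0, steps + 1)           # tempering schedule
    for b_prev, b in zip(betas[:-1], betas[1:]):
        logw += (b - b_prev) * log_like(x)             # incremental importance reweighting
        w = np.exp(logw - logw.max()); w /= w.sum()
        x = x[rng.choice(n, n, p=w)]                   # resample, then reset weights
        logw = np.zeros(n)
        prop = x + rng.normal(0.0, 1.0, n)             # random-walk Metropolis move
        log_a = (log_prior(prop) + b * log_like(prop)) - (log_prior(x) + b * log_like(x))
        accept = np.log(rng.uniform(size=n)) < log_a
        x = np.where(accept, prop, x)
    return x

posterior_samples = smc_sampler()                      # approximate posterior draws
```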

Quantum-inspired image compression (11 December, 2018)

Speaker: Bruno Sanguinetti

Pushing image sensors and algorithms to the quantum limit (11 December, 2018)

Speaker: Bruno Sanguinetti

IR Seminar: Measuring User Satisfaction and Engagement (10 December, 2018)

Speaker: Adam Zhou

Abstract:
In the online world, it is important to design user-centric applications that can engage the users and make them satisfied. In order to improve user satisfaction and engagement, one prerequisite is to find effective ways to measure them.

In this talk, I will present several efforts to measure satisfaction and engagement in the context of search and mobile app usage. Firstly, I will present my work that aims to find the best metrics (either offline or online) for various search scenarios: organic, aggregated and image search. Secondly, I will talk about the various ways in which mobile users engage with apps and how to exploit this to predict their next engagement. Finally, I will very briefly cover some of our current work on conversational search.

Bio:
Ke (Adam) Zhou holds dual academic and industrial appointments as an Assistant Professor in the School of Computer Science at the University of Nottingham and a Senior Research Scientist at Nokia Bell Labs. His research interests and expertise lie in web search and analytics, evaluation metrics, text mining and human-computer interaction. He has published over 50 papers in reputable conferences and journals. His past research has won best paper awards at ECIR'15 and CHIIR'16, and a best paper honorable mention at SIGIR'15.

IR Seminar: Alana: Entertaining and Informative Open-domain Social Dialogue using Ontologies and Entity Linking (03 December, 2018)

Speaker: Ioannis Konstas

Abstract:
In this talk I will present our 2018 Alexa prize system (called ‘Alana’), an open-domain spoken dialogue system aimed at maintaining a fun, engaging and informative discussion with users. Alana consists of an ensemble of bots, combining rule-based and machine learning systems. The main highlights are (1) a neural Natural Language Understanding (NLU) pipeline; (2) a family of retrieval bots that store and deliver content interactively from heterogeneous sources (e.g., News, Wikipedia, Reddit), using traditional as well as graph-based datastores; (3) an ensemble of rule-based bots aimed at laying out a certain persona for Alana, while at the same time maintaining a coherent dialogue; (4) a profanity & abuse detection model with rule-based mitigation strategies. In the second part of the talk, I will describe an ongoing project on neural conversational agents aiming to produce coherent dialogues in human-to-human interactions. I will also illustrate our efforts on a more traditional task-based dialogue setup in the e-commerce domain exploiting several modalities (vision, knowledge base) on top of the textual input.
 
Bio:
Yannis Konstas is a lecturer in the department of Mathematical and Computer Sciences at Heriot-Watt University, Edinburgh. His main research interests focus on the area of Natural Language Generation (NLG) with an emphasis on data-driven deep learning methods. Before that he was a postdoctoral researcher at the University of Washington (2015-17) working with Luke Zettlemoyer. He has received a BSc in Computer Science from AUEB (Greece) in 2007, and an MSc in Artificial Intelligence from the University of Edinburgh (2008). He continued his study at the University of Edinburgh and received a Ph.D. degree in 2014, under the supervision of Mirella Lapata. He has previously worked as a research assistant at the University of Glasgow (2008), and as a postdoctoral researcher at the University of Edinburgh (2014). 

Performance Tuning with Structured Bayesian Optimisation and Reinforcement Learning (30 November, 2018)

Speaker: Dr Eiko Yoneki

Managing efficient configurations is a central challenge for computer systems. I will introduce two recent projects to tune systems using Machine Learning: 1) Structured Bayesian Optimisation (SBO) to optimise systems in complex and high-dimensional parameter space, and 2) our framework for Reinforcement Learning (RL) to bring performance improvements to dynamically evolving tasks such as scheduling or resource management. Our work aims at filling a gap between current research and practical deployments, and it provides a software stack for RL in systems research.

IR Seminar: The Quantified Self as Testbed for Multimodal Information Retrieval (19 November, 2018)

Speaker: Frank Hopfgartner

Title: The Quantified Self as Testbed for Multimodal Information Retrieval
 
Abstract: Thanks to recent advances in the field of ubiquitous computing, an increasing number of people now rely on tools and apps that allow them to track specific aspects of their lives. The result of this development is that people are now able to unobtrusively create records of their daily experiences, captured multimodally through digital sensors and stored permanently as a personal lifelog archive. From an information retrieval perspective, these personal archives are rather challenging due to the multimodal nature of the data created. In this talk, I will provide an overview of NTCIR Lifelog, an evaluation campaign that focuses on promoting research on multimodal information retrieval.
 
Bio: Frank Hopfgartner is Senior Lecturer in Data Science and Head of the Information Retrieval Research Group at University of Sheffield. His research interest is in the intersection of information and data analytics. In particular, he focuses on novel approaches to personalise information access, especially in the fields of recommender and information retrieval systems. Due to the content-rich nature of data created, he increasingly concentrates on lifelogging as a challenging use case to improve multimedia access methods.

Investigating How Conversational Search Agents Affect User’s Behaviour, Performance and Search Experience (05 November, 2018)

Speaker: Mateusz Dubiel

Voice-based search systems currently do not support natural conversational interaction. Consequently, people tend to limit their use of voice search to simple navigational tasks, as more complex search tasks require more sophisticated dialogue modelling. Previous research has demonstrated that a voice-based search system's inability to preserve contextual information leads to user dissatisfaction and discourages further usage. In my talk I will explore how people's search behaviour, performance and perception of usability change when interacting with a conversational search system which supports natural language interaction, as opposed to a voice-based search system which does not.

Short bio:
Mateusz Dubiel is a PhD candidate in the Department of Computer and Information Sciences at the University of Strathclyde in Glasgow. His research is focused on the development and evaluation of conversational search agents. Mateusz holds an MSc in Speech and Language Processing from the University of Edinburgh.

IR Seminar: Measuring the Utility of Search Engine Result Pages (08 October, 2018)

Speaker: Dr. Leif Azzopardi

Web Search Engine Result Pages (SERPs) are complex responses to queries, containing many heterogeneous result elements (web results, advertisements, and specialised "answers") positioned in a variety of layouts. This poses numerous challenges when trying to measure the quality of a SERP because standard measures were designed for homogeneous ranked lists.

In this talk, I will explain how we developed a means to measure the utility and cost of SERPs. 
To ground this work we adopted the C/W/L framework by Moffat et al., which enables a direct comparison between different measures in the same units of measurement, i.e. expected (total) utility and cost. I argue that the extended C/W/L framework provides a clearer and more interpretable framework for measurement, i.e. utility, cost (in time), and also predicted stopping rank - the latter two are both directly observable - and so the quality of the metric can be assessed by how well it predicts these observables.

Within this framework, we proposed a new measure based on Information Foraging Theory, which can account for the heterogeneity of elements through different costs, and which naturally motivates the development of a user stopping model that adapts behaviour depending on the rate of gain. This directly connects models of how people search with how we measure search, providing a number of new dimensions in which to investigate and evaluate user behaviour and performance. We perform an analysis over 1000 popular queries issued to a major search engine, and report the aggregate utility experienced by users over time. Then, in a comparison against common measures, we show that the proposed foraging-based measure provides a more accurate reflection of the utility and of observed behaviours (stopping rank and time spent).
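To give a flavour of the C/W/L bookkeeping referred to above, the sketch below computes expected total utility, expected utility per item and the expected number of items viewed from a vector of per-rank gains and continuation probabilities. The fixed continuation probability and toy gains are illustrative assumptions, not the foraging-based measure proposed in the talk.

```python
# Toy C/W/L-style summary for a ranked list (illustrative continuation model).
import numpy as np

def cwl_summary(gains, continuation):
    """gains: per-rank gain g(i); continuation: per-rank continuation probability C(i)."""
    # V(i): probability the user views rank i; V(1) = 1, V(i) = prod_{j<i} C(j)
    V = np.concatenate(([1.0], np.cumprod(continuation[:-1])))
    W = V / V.sum()                                     # per-item weights, summing to 1
    expected_total_utility = float(np.sum(V * gains))
    expected_utility_per_item = float(np.sum(W * gains))
    expected_items_viewed = float(V.sum())              # truncated at the list length
    return expected_total_utility, expected_utility_per_item, expected_items_viewed

gains = np.array([1.0, 0.0, 1.0, 0.0, 0.0])             # toy relevance gains for a SERP
C = np.full(5, 0.7)                                      # assume C(i) = 0.7 at every rank
print(cwl_summary(gains, C))
```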

Talk: Performance-oriented management in the large-scale cluster (08 October, 2018)

Speaker: Dr Chao Chen

To support a number of complex data analysis frameworks in different areas, a maintainable large-scale cluster with the required QoS is necessary. Cluster management is the core element that can not only orchestrate various data analysis frameworks and services so that they harmoniously coexist, but also maximise the performance and utilisation of the cluster's physical machines. This presentation will focus on resource provision, allocation and job scheduling for cluster management.

Talk: Resource management in Grid and Cloud Infrastructures (08 October, 2018)

Speaker: Dr Hamid Arabnejad

The increasing availability of different types of resources in Grid and Cloud platforms, combined with today's fast-changing and unpredictable submitted workloads, has propelled interest towards self-adaptive management systems that dynamically detect and reallocate system resources to users' applications in order to optimize a given quality of service (e.g. performance, energy, reliability, resource utilization) for the target platform. However, finding an effective resource management solution to support diverse application performance objectives in heterogeneous computing environments is a difficult challenge.

Resource Management (RM) is the collective term for the best practices, processes, procedures, and technology tools used to manage the available resources of a target platform. RM spans multiple aspects, such as applications, servers, networking, and storage, and aims at efficient usage of available resources to meet user application requirements while addressing performance, availability, capacity, and energy constraints in a cost-effective manner.

This talk will discuss issues and challenges of resource allocation and scheduling in Grid and Cloud systems. We will first provide a characterization of workload and resource management, and then describe our recent work to address this challenge.

Towards data-driven hearing aid solutions (04 October, 2018)

Speaker: Widex staff

Widex will give an informal overview of the company and current challenges in the hearing aid domain. We will discuss challenges related to data collection, machine learning and real-time optimisation with humans in the loop.

The world of 'big' graphs: storage and query optimization (20 September, 2018)

Speaker: Dr Medha Atre

Graph-structured data is ubiquitous even if not conspicuously visible. More often than not these graphs are of the order of a few billion edges and hundreds of millions of nodes, and so over the past decade there has been a proliferation of commercial and community graph databases. For example, BitMat, RDF-3X, gStore, Triplebit, S2RDF and TriAD emerged from academic research, while Neo4j, Pregel, Apache Giraph, Oracle Spatial and Graph store, IBM Graph and others have come from commercial and large community efforts.

In this talk, the speaker will focus on the BitMat system, which she developed single-handedly from scratch to handle RDF graph data. She designed BitMat to target "low-selectivity" pattern queries, i.e., pattern queries which require access to a large amount of graph data and cannot always benefit from heuristic cost-based optimization. She will also discuss in brief her ongoing work: (1) using modern hardware advances such as multi-core CPUs and GPUs for massively parallel processing of graphs, (2) optimizing "path pattern queries", and (3) interdisciplinary work on the combination of machine learning, computer vision, and large-scale data management.

Variational Sparse Coding (13 June, 2018)

Speaker: Francesco Tonolini

We propose a new method for sparse coding based on the variational auto-encoder architecture, which allows sparse representations with generally intractable probabilistic models. We assume data to be generated from a sparse distribution prior in the latent space of a generative model and aim to maximise the observed data likelihood with a variational auto-encoding approach. We consider both the Laplace and the spike and slab priors and in each case derive an analytic approximation to the regularisation term in the variational lower bound, making posterior inference as efficient as in the standard variational auto-encoder case. By inducing sparsity in the prior, training results in a recognition function that generates sparse representations of observed data. Such representations can then be used as information-rich inputs to further learning tasks. 

Deep, complex networks for inversion of transmission effects in multimode optical fibres (30 May, 2018)

Speaker: Oisin Moran

We use complex-weighted, deep convolutional networks to invert the effects of multimode optical fibre distortion of a coherent input image. We generated experimental data based on collections of optical fibre responses to greyscale input images generated with coherent light, measuring only image amplitude (not amplitude and phase, as is typical) at the output of the 10 m long, 105 µm diameter multimode fibre. This data is made available as the Optical fibre inverse problem benchmark collection. The experimental data is used to train complex-weighted models with a range of regularisation approaches and subsequent denoising autoencoders. A new unitary regularisation approach for complex-weighted networks is proposed, which performs best in robustly inverting the fibre transmission matrix and fits well with the physical theory.

Modelling the creative process through black-box optimisation (23 May, 2018)

Speaker: Anders Kirk Uhrenholt

The creative process from getting an idea to having that idea materialise as an image or a piece of music can often be framed as an optimisation task where the artist makes incremental changes until a local optimum is reached. This begs the question of whether machine learning has a role to play in automating the tedious parts of this process, thereby freeing up time and energy for the user to be creative.
 
In a typical optimisation setting the cost function can be objectively evaluated with some measurable degree of certainty. But what if the target of the optimisation is something inherently subjective such as a person's perception of sound or image? This is a central question in the intersection between predictive modelling and creative software where the aim is to support the artist throughout the creative process in an intelligent way.
 
This talk focuses on this problem specifically for the task of tuning a music synthesizer. The task can be framed as optimising a black-box system (the synthesizer) with respect to an unknown cost function (the user's opinion of the synthesised sound). In the proposed approach, metric learning is included as part of the optimisation loop to simultaneously learn a mapping from synthesizer configuration to sound while inferring from user feedback what the artist will think of the produced result.
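A minimal black-box optimisation loop of the kind alluded to above can be written with a Gaussian-process surrogate and an expected-improvement acquisition function, as sketched below. The simulated "user rating" function, the two-knob parameter space and the hyperparameters are stand-ins, and the metric-learning component from the talk is not included.

```python
# Toy Bayesian-optimisation loop: GP surrogate + expected improvement (illustrative).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def user_rating(params):                         # stand-in for the real user's opinion
    return -np.sum((params - 0.3) ** 2)

def expected_improvement(mu, sigma, best):
    z = (mu - best) / np.maximum(sigma, 1e-9)
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

X = rng.uniform(0, 1, size=(3, 2))               # 3 random synth settings, 2 knobs each
y = np.array([user_rating(x) for x in X])
for _ in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    candidates = rng.uniform(0, 1, size=(500, 2))
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, x_next])                   # audition the chosen setting
    y = np.append(y, user_rating(x_next))
print("best setting found:", X[np.argmax(y)])
```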

IR Seminar: Controversy Analysis and Detection (21 May, 2018)

Speaker: Shiri Dori-Hacohen

Controversy Analysis and Detection
Seeking information on a controversial topic is often a complex task. Alerting users about controversial search results can encourage critical literacy, promote healthy civic discourse and counteract the "filter bubble" effect, and therefore would be a useful feature in a search engine or browser extension. Additionally, presenting information to the user about the different stances or sides of the debate can help her navigate the landscape of search results beyond a simple "list of 10 links". Our existing work has made strides in the emerging niche of controversy analysis and detection. In our work, we've made a few conceptual and technical contributions, including: (1) Offering a computational definition of controversy and its components; (2) Improving the current state-of-the-art controversy detection in Wikipedia by employing a stacked model using a combination of link structure and similarity; and (3) the first automated approach to detecting controversy on the web, using a KNN classifier that maps from the web to similar Wikipedia articles. I also recently founded a startup aiming to bring this research & technology to practical uses. This talk will largely focus on contribution (2) above, and touch on the other aspects briefly as time allows.

 

This talk is based on joint work with James Allan, John Foley, Myung-ha Jang, David Jensen and Elad Yom-Tov.
 
Bio:
Dr. Shiri Dori-Hacohen is the CEO & founder of AuCoDe. She has fifteen years of academic and industry experience, including Google and Facebook. She received her M.Sc. and B.Sc. (cum laude) at the University of Haifa in Israel and her M.S. and Ph.D. from the University of Massachusetts Amherst where she researched computational models of controversy. Dr. Dori-Hacohen is the recipient of several prestigious awards, including the 2011 Google Lime Scholarship and first place at the 2016 UMass Amherst’s Innovation Challenge. She has one daughter; identifies as a person with disabilities; and has taken an active leadership role in broadening participation in Computer Science on a local and global scale.
 

IR Seminar: Understanding and Leveraging the Impact of Response Latency on User Behaviour in Web Search (18 May, 2018)

Speaker: Ioannis Arapakis

Summary:
The interplay between the response latency of web search systems and users' search experience has only recently started to attract research attention, despite the important implications of response latency on monetisation of such systems. In this work, we carry out two complementary studies to investigate the impact of response latency on users' searching behaviour in web search engines. We first conduct a controlled user study to investigate the sensitivity of users to increasing delays in response latency. This study shows that the users of a fast search system are more sensitive to delays than the users of a slow search system. Moreover, the study finds that users are more likely to notice the response latency delays beyond a certain latency threshold, their search experience potentially being affected. We then analyse a large number of search queries obtained from Yahoo Web Search to investigate the impact of response latency on users' click behaviour. This analysis demonstrates the significant change in click behaviour as the response latency increases. We also find that certain user, context, and query attributes play a role in the way increasing response latency affects the click behaviour. To demonstrate a possible use case for our findings, we devise a machine learning framework that leverages the latency impact, together with other features, to predict whether a user will issue any clicks on web search results. As a further extension of this use case, we investigate whether this machine learning framework can be exploited to help search engines reduce their energy consumption during query processing.

Understanding Capsule Networks (16 May, 2018)

Speaker: Piotr Ozimek

Abstract:

In recent years convolutional neural networks (CNNs) have revolutionized the fields of computer vision and machine learning. On multiple occasions they have achieved state-of-the-art performance on a variety of vision tasks, such as object detection, classification and segmentation. In spite of this, CNNs suffer from a variety of problems: they require large and diverse datasets that may be expensive to obtain, they do not have an explicit and easy-to-interpret internal object representation, and they are easy to fool by manipulating spatial relationships between visual features in the input image. To address these issues, Hinton et al. have devised a new neural network architecture called the Capsule Network (CapsNet), which consists of explicit and encapsulated neural structures whose output represents the detected object or feature in a richer and more interpretable format. CapsNets are a new concept that is still being researched and developed, but they have already achieved state-of-the-art performance on the MNIST dataset without any data augmentation. In this talk, I will give a brief overview of the current state of CapsNets and explain the motivation behind them as well as their architecture.
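One small, concrete piece of the CapsNet architecture mentioned above is the "squash" non-linearity, which scales each capsule's output vector so that its length lies in (0, 1) and can be read as the probability that the represented entity is present. A brief numpy sketch with an arbitrary toy input follows.

```python
# The capsule "squash" non-linearity applied to a batch of capsule output vectors.
import numpy as np

def squash(s, eps=1e-9):
    """s: (..., capsule_dim) raw capsule outputs; returns vectors with length in (0, 1)."""
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

capsule_outputs = np.random.randn(10, 8)       # 10 capsules, 8-dimensional each (toy input)
v = squash(capsule_outputs)
print(np.linalg.norm(v, axis=-1))              # all lengths strictly below 1
```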

Closed-loop, bio-inspired and cloud elasticity (10 May, 2018)

Speaker: Amjad Ullah

I will mainly present the research work I carried out during my PhD. This includes the development of a new intelligent cloud elasticity framework for systems that operate in time-varying operating conditions. This talk will present the motivation behind the research objectives and will briefly discuss the architecture of the proposed method, which consists of the use of feedback control, fuzzy logic, bio-inspired computational models and multi-objective optimization.

Surviving the Flood of Big Data Streams (30 April, 2018)

Speaker: Richard McCreadie

Research talk abstract: The way big data is being processed is evolving from predominantly batch-based analysis of static datasets towards microservice-driven architectures designed to analyse big data streams. This change raises new challenges both for data systems engineers examining how to build efficient and scalable architectures/platforms, and for researchers and developers looking to extract value from emerging real-time streams. In this talk, I will discuss how real-time streaming data is altering the research landscape from the perspective of real-time event detection and modelling. In particular, I will cover my past and present research in this area, focusing on challenges in data systems development, event detection from real-time streams, as well as how to model information from event streams over time. I will conclude the talk with a discussion of some promising new research directions to examine in this area in the future.

Lectureship abstract: We are asking all IDA Lectureship candidates to give a 15-minute lecture, as if they were teaching Level 4 undergraduates. The topic is “Explaining the matrix factorisation (MF) approach for collaborative filtering”.
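For reference, the core of such a lecture can be compressed into a few lines: user and item factor matrices are learned by stochastic gradient descent so that their dot products reconstruct the observed ratings. The sketch below uses arbitrary toy ratings and hyperparameters, purely as an illustration of the technique named in the topic.

```python
# Toy matrix factorisation for collaborative filtering, trained by SGD.
import numpy as np

rng = np.random.default_rng(0)
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]   # (user, item, rating) triples
n_users, n_items, k = 3, 2, 4
P = 0.1 * rng.standard_normal((n_users, k))      # user factor matrix
Q = 0.1 * rng.standard_normal((n_items, k))      # item factor matrix
lr, reg = 0.05, 0.02                             # learning rate and L2 regularisation

for epoch in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]                    # prediction error for this rating
        pu, qi = P[u].copy(), Q[i].copy()
        P[u] += lr * (err * qi - reg * pu)       # gradient steps on both factors
        Q[i] += lr * (err * pu - reg * qi)

print(P @ Q.T)                                    # reconstructed user-item rating matrix
```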

Scaling Entity Linking with Crowdsourcing (23 April, 2018)

Speaker: Dyaa Albakour

In this presentation, we first review the current state-of-the-art for the EL task and make the case for using supervised learning approaches to tackle EL. These approaches require large amounts of labelled data, which represents a bottleneck for scaling them out to cover large numbers of entities. To mitigate this, we have developed a production-ready solution to efficiently collect high-quality labelled data at scale using Active Learning and Crowdsourcing. In particular, we will discuss the different steps and the challenges in tuning the design parameters of the crowdsourcing task. The design parameters include the qualification of the workers and UI features that help them complete the task. The tuning aims to limit noise, reduce cost and maximise the throughput of labelling whilst maintaining the quality of the resulting models for EL.

Signal Media is a research-led company that uses text analytics and machine learning to turn streams of unstructured text, e.g. news articles, into useful information for professional users. One of the core components of Signal’s text analytics pipeline is entity linking (EL).

 

Shard Effects on Effectiveness (18 April, 2018)

Speaker: Mark Sanderson

Title
Shard Effects on Effectiveness

Abstract
Studying the experimental factors that impact IR measures is often overlooked when comparing IR systems. In particular, the effects of splitting the document collection into shards have not been examined in detail. I will talk about our use of the general linear mixed model framework and present a model that encompasses the experimental factors of system, topic, shard, and their interaction effects. The model allows us to more accurately estimate significant differences between the effects of various factors. We study shards created by various methods used in prior work, better explain observations noted in prior work in a principled setting, and offer new insights. Notably, I describe how we discovered that the topic*shard interaction effect is large, almost globally across all datasets, an observation that, to our knowledge, has not been recognised or measured before.

Prototyping Deep Learning Applications Through Knowledge Transfer (16 April, 2018)

Speaker: Nina S Dethlefs

Deep learning plays an ever-increasing role in artificial intelligence, and a growing number of libraries facilitate the fast development of new applications. For each new learning task, some trial and error is normally required to tune hyperparameters or find an adequate learning representation before a suitable prediction model can be learnt. In this talk, I explore the possibility of transferring hyperparameters (and learning representations) from one task to another based on the tasks' similarity. The idea is to reuse previously acquired knowledge and in this way reduce time and development costs and speed up prototyping of new deep learning applications. I present a number of case studies from natural language processing and other AI tasks that show how knowledge transfer can - in some cases - lead to state-of-the-art performance on unseen tasks while substantially reducing computation time. Embedding important operations into a generalised abstract framework, e.g. a domain-specific programming language, facilitates prototyping even further.

Bio 
I am a Lecturer in Computer Science at the University of Hull, UK. I lead the Big Data Analytics groups and I am a member of the Computational Science group. Previously, I was a Research Fellow at the Interaction Lab at Heriot-Watt University, Edinburgh. I have a PhD in Computational Linguistics from the University of Bremen, Germany. 
My research interests are in computational intelligence and machine learning - particularly deep learning and optimisation - as well as natural language processing. I investigate how machine learning algorithms themselves can be equipped with intelligence so as to enable transfer learning across domains and learning tasks. Most of my work has been in natural language processing but I have also worked in other areas, including health informatics and human-robot interaction.

Simulating Interaction for Evaluation (09 April, 2018)

Speaker: Leif Azzopardi

Search is an inherently interactive, non-deterministic and user-dependent process. This means that there are many different possible sequences of interactions which could be taken (some ending in success and others ending in failure). Simulation provides a powerful tool for low-cost, repeatable and reproducible evaluations which explore a large range of different possibilities - and enables the analysis of IR systems, interfaces, user behaviour and user strategies. To run a simulation, a model of the user is formalised, and then used, for example, as the basis of a metric, to create a test collection, or generate interaction data. In this talk, I will give an overview of various methods that we have developed in order to: (1) create simulated test collections which enable more extensive evaluations, as well as enable the evaluation on new collections without the expense of costly user judgements, and (2) create user interaction data, which enables a range of different user strategies/behaviours to be compared and contrasted in a systematic manner.

Bio: Dr. Leif Azzopardi is a Chancellor's Fellow in Data Science and Associate Professor at the University of Strathclyde, Glasgow, within the Department of Computer and Information Sciences. He leads the Interactive Information Retrieval group within Strathclyde's iSchool. His research focuses on examining the influence and impact of search technology on people and society and is heavily underpinned by theory. He has made numerous contributions in: (i) the development of statistical language models for document, sentence and expert retrieval, (ii) the simulation and evaluation of users and their interactions, (iii) the analysis of systems and retrieval bias using retrievability theory, and (iv) the formalisation of search and search behaviour using economic theory. He has given numerous keynotes, invited talks and tutorials throughout the world on retrievability, search economics, and simulation. He is co-author of Tango with Django (www.tangowithdjango.com), which has seen over 1.5 million visitors. More recently he has been co-developing resources for IR research with Lucene (www.github.com/lucene4ir/), while co-creating evaluation resources for Technology Assisted Reviews as part of the CLEF eHealth Track 2017. He is an honorary lecturer at the University of Glasgow (where he was previously a Senior Lecturer) and an honorary Adjunct Associate Professor at Queensland University of Technology. He received his Ph.D. in Computing Science from the University of Paisley in 2006, under the supervision of Prof. Mark Girolami and Prof. Keith van Rijsbergen. Prior to that, he received a First Class Honours Degree in Information Science from the University of Newcastle, Australia, in 2001.

Analyzing and Using Large-scale Web Graphs(29 March, 2018)

Speaker: Ansgar Scherp

The talk first provides an overview of my research in Data Science, namely text and data mining. Subsequently, I focus on graph data mining on the Web. I have developed a schema-level index called SchemEX in order to be able to search in large-scale web graphs. The SchemEX index can be efficiently computed in a stream-based fashion with reasonable accuracy over graphs of billions of edges. The data search engine LODatio+ (see: http://lodatio.informatik.uni-kiel.de/) uses the SchemEX index to find relevant data sources. In order to quickly develop, tailor, and compare schema-level indices, I provide a novel formal, parameterized model for schema-level indices. A grand challenge is to deal with the evolution of web graphs, specifically their schema in terms of the types and properties used to describe entities. I have investigated the dynamics of entities in order to find, e.g., periodicities in the schema changes, and to use this information to predict future changes. This is important for various future data-driven applications that aim at using graph data on the web.

 

IR Seminar: Using Synthetic Text for Developing Content Coordination Metrics and Semantic Verification(12 March, 2018)

Speaker: Dmitri Roussinov

Recurrent neural language models have made it possible to generate realistic-looking synthetic texts, but the use of such texts for scientific purposes has been largely unexplored. I will present my work in progress and some forthcoming results on using simulated text to develop metrics for catching coordinated content in microblogs (e.g. Twitter trolling attacks) and for verifying the semantic classes of words (e.g. France is a country, Gladiator is a movie but not a country) for question answering applications. My simulation results support the conjecture that a metric can separate organic from coordinated content only when it takes the context and the properties of the repeated sequence into consideration. I will also demonstrate how those context-specific adjustments can be obtained using existing resources.

 

Bio:
Dr. Roussinov is a Senior Lecturer in Computer and Information Sciences at the University of Strathclyde. He has contributed to the fields of information systems, information retrieval, natural language processing, search engines, security informatics, medical informatics, human-computer interaction, databases and others. He received his doctoral degree in Information Systems from the University of Arizona (advisor H. Chen), his Master’s in Economics from Indiana University, and his undergraduate degree in physics and computer science from the Moscow Institute of Physics and Technology.

Learning from samples of variable quality(26 February, 2018)

Speaker: Mostafa Dehghani

The success of deep neural networks to date depends strongly on the availability of labeled data, which is costly and not always easy to obtain. Usually, it is much easier to obtain small quantities of high-quality labeled data and large quantities of unlabeled, weak or noisy data. The problem of how best to integrate these two different sources of information during training, and how to get the best out of samples of variable quality, is an active pursuit in the field of semi-supervised learning. In this talk, we are going to discuss some methods for training neural networks with labels of varying quality.

Bio:
Mostafa Dehghani is a PhD student at the University of Amsterdam working with Jaap Kamps and Maarten de Rijke. His doctoral research lies at the intersection of machine learning and information retrieval, in particular employing weak supervision signals for training neural models for IR problems. He has contributed to top-tier ML and IR conferences like NIPS, ICLR, SIGIR, CIKM, WSDM, and ICTIR by publishing papers and giving tutorials, and has received awards at SIGIR, ICTIR, ECIR, and CLEF for some of his work. He has done internships at Google Research on search conversationalization and is currently interning at Google Brain.

Social & Cross-Domain Recommendations(19 February, 2018)

Speaker: Dimitrios Rafailidis

How can the selection of social friends influence user preferences in recommender systems? How can we exploit distrust relationships when generating product, movie or song recommendations? In the first part of my talk I will present my recent research in social recommender systems, and how these questions are answered to produce accurate recommendations by considering both trust and distrust relationships.

While Amazon users can rate products from different domains, such as books, toys and clothes, they do not necessarily have the same behavior when different types of products are recommended, making the widely used collaborative filtering strategy underperform. So, the main challenge is to carefully transfer the knowledge of user preferences from one domain to another by handling their different behaviors accordingly. In the second part of my talk, I will demonstrate my recently proposed algorithm for generating cross-domain recommendations and how the different user behaviors are weighted across multiple domains.

BIO: "Dimitrios Rafailidis is a postdoctoral research fellow at the Department of Computer Science at UMons in Belgium. His research interests are recommender systems and social media mining. His primary research goal is to generate personalized recommendations of massive, multimodal and streaming user data from different social media platforms, or any source that can capture user preferences. His main focus is on capturing user preference dynamics, and producing social and cross-domain recommendations. The results from this research have been published in leading peer reviewed journals, like TBD, TiiS, TOMCCAP, TMM, TCBB, TSMC, TASLP and SNAM, and highly selective conference proceedings such as RecSys, ECML/PKDD, CIKM, SIGIR, WWW and ASONAM."

Quantitative Evaluation of Canine Pelvic Limb Ataxia Using a Wireless Accelerometer System(15 February, 2018)

Speaker: Rodrigo Gutierrez-Quintana

R. Gutierrez-Quintana, K.L. Holmes, Z. Hatfield, P. Amengual Batle, J. Brocal, K. Lazzerini, R. José-López. Small Animal Hospital, School of Veterinary Medicine, University of Glasgow, UK.

An inexpensive and easily available method for objectively identifying and grading pelvic limb ataxia in dogs in the clinical setting is urgently needed. An alternative approach to conventional gait analysis techniques is the use of accelerometers attached to the body. They have the advantages of being low cost and allowing non-restrictive evaluation in a normal environment.

The purpose of this prospective study was to perform gait analysis using a lumbar accelerometer in dogs with pelvic limb ataxia and in healthy controls, and to assess whether the data obtained could be used to differentiate these two groups.

Fifty-three dogs (21 healthy controls and 32 dogs with pelvic limb ataxia) of breeds of different sizes were included. All dogs were walked in a straight line, on a non-slippery surface, at a slow walking pace for 50 meters using a short lead. Acceleration signals were measured using a wireless tri-axial accelerometer that was secured with an elastic band at the level of the fifth lumbar vertebra. The average and coefficient of variation of the peak-to-peak amplitude were calculated for each acceleration component (x: cranio-caudal, y: latero-lateral and z: dorso-ventral). A Mann-Whitney test was used to compare groups (p<0.05).

A significant difference between affected and control dogs was identified in the coefficient of variation of the x axis (p<0.0001).

The results of the present study suggest that the coefficient of variation of the cranio-caudal axis could represent an objective measure of pelvic limb ataxia in dogs. Further longitudinal studies in a larger number of cases are indicated.
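As a rough illustration of the per-axis statistics described above, the sketch below computes the mean and coefficient of variation of peak-to-peak amplitudes for a single acceleration axis and compares two groups with a Mann-Whitney test. It is not the authors' analysis code: the peak detection, the pairing of peaks with troughs and the synthetic signals are all simplifying assumptions.

import numpy as np
from scipy.signal import find_peaks
from scipy.stats import mannwhitneyu

def peak_to_peak_stats(signal):
    """Mean and coefficient of variation of peak-to-peak amplitudes (simplified)."""
    peaks, _ = find_peaks(signal)      # local maxima
    troughs, _ = find_peaks(-signal)   # local minima
    n = min(len(peaks), len(troughs))
    amplitudes = np.abs(signal[peaks[:n]] - signal[troughs[:n]])
    mean = amplitudes.mean()
    cv = amplitudes.std(ddof=1) / mean  # coefficient of variation
    return mean, cv

# Hypothetical per-dog CV values for the cranio-caudal (x) axis in each group.
rng = np.random.default_rng(0)
control_x = [peak_to_peak_stats(rng.normal(size=500))[1] for _ in range(21)]
ataxic_x = [peak_to_peak_stats(np.cumsum(rng.normal(size=500)))[1] for _ in range(32)]
stat, p = mannwhitneyu(control_x, ataxic_x)
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")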

IR Seminar: A Survey of Information Retrieval Approaches with Embedded Word Vectors(05 February, 2018)

Speaker: Debasis Ganguly

Standard information retrieval (IR) models are designed to work with categorical features, i.e., discrete terms. Generally speaking, documents are represented as vectors in a discrete term space facilitating the computation of pair-wise document similarities by standard vector space similarity (inverse distance) measures, such as the inner product between the vectors.
 
The limitations of these approaches are that: i) they assume that terms are independent; ii) they have no way of incorporating the notion of semantic distances between terms; iii) they have no way to address ‘concepts’ (the combined meaning of multiple terms). To address the above limitations (and thereby the age-old problem of vocabulary mismatch for discrete terms), there has been an increasing trend in the IR research community to utilize semantic relationships between terms by embedding them within a continuous vector space over the reals. The semantic relationships between terms are then predicted by computing the distances between the words embedded as real-valued vectors. These semantic relationships are then applied to improve various IR tasks such as document ranking, query formulation, relevance feedback, end-to-end deep neural ranking models, and session modeling.
 
This talk will focus on describing ways to incorporate term semantic information into standard retrieval models through the application of embedded word vectors. More specifically, we will analyze the key ideas of some recent papers on applications of word vectors for improving the effectiveness of various IR tasks, such as ad hoc ranking, query modeling and session modeling.
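As a toy illustration of the central idea above (semantic relatedness measured as a distance between embedded word vectors), the snippet below computes cosine similarities between word vectors. The vectors here are random stand-ins rather than trained embeddings, so the numbers are meaningless; with real embeddings, related words would score higher.

import numpy as np

rng = np.random.default_rng(0)
# Random stand-ins for trained word embeddings (e.g. word2vec or GloVe vectors).
embeddings = {w: rng.normal(size=100) for w in ["car", "automobile", "banana"]}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["car"], embeddings["automobile"]))
print(cosine(embeddings["car"], embeddings["banana"]))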

IR Seminar: Natural Language Understanding in Virtual Agents for Airline Pilots.(22 January, 2018)

Speaker: Sylvain Daronnat

Abstract:
This presentation summarizes a six-month master's internship that took place at Airbus (Toulouse, France) on a virtual agent research theme for airline pilots. Our initial hypothesis was that an intent categorization system could benefit from using synthetic “natural-like” data. To test this hypothesis we first created a methodology to help us collect natural questions from end-users. We then used the “natural” data we collected, along with a synthetic question generator we designed, to output synthetic questions that are as close as possible to the original ones. Lastly, we experimented on the synthetic datasets using various tools to put our initial hypothesis to the test. The results we obtained allowed us to open new perspectives on the natural language understanding part of the virtual agent system for airline pilots.

Short bio:
My name is Sylvain Daronnat, I'm a PhD student in computer and information sciences at Strathclyde University working on implementing new human-agent collaboration systems aboard submarines. For this project I'm also funded by Thales, a company designing electrical systems for various industries. Before my PhD, I was studying Natural Language Processing at the Grenoble Alpes University in France.

Approaches to analysis of genomic data(17 January, 2018)

Speaker: Thomas Otto

A huge amount of data in the biological sciences is generated in the hope of answering biological questions. This is possible due to the decreasing cost of high-throughput methods. Although many analysis tools exist, there is a need to improve many of them. Further, there are many opportunities to develop new methods by combining existing data sets.

In this talk, I will present some of the datasets and the methods we used/developed to analyse genomic data, including genomic and transcriptional data from malaria. I will also describe anticipated data, such as single cell RNA-Seq or detection of biomarkers. 

Automated Clinical Patient Health Surveillance(15 January, 2018)

Speaker: Stewart Whiting

Part1: Calibration Brain-Computer Interfaces. Part2: The need for more flexible robotics tools(14 December, 2017)

Speaker: Jonathan Grizou

Abstract: Recent works have explored the use of brain signals to directly control virtual and robotic agents in sequential tasks. So far in such brain-computer interfaces (BCI), an explicit calibration phase was required to build a decoder that translates raw electroencephalography (EEG) signals from the brain of each user into meaningful instructions. In this talk, I will explain how we removed the need for a calibration phase. In practice, this means being able to interactively teach an agent to perform a task without it knowing beforehand how to associate the human communicative signals with their meanings. In a second part, I will talk about the open source robotic project Poppy and how it was used in art, education and research. This will bring us to the need for more flexible and modular tools to accelerate the design of robotics products.

Bio: Jonathan is currently a PostDoc within the Cronin group in charge of the Chemobot Team. The team explores how robots and algorithms can become tools for the exploration and discovery of complex physicochemical systems. Jonathan pursued his PhD at the INRIA and Ensta-ParisTech Flowers Team, where he investigated how to create calibration-free interactive systems. He was advised by Manuel Lopes and Pierre-Yves Oudeyer and received the "Prix Le Monde de la Recherche Universitaire" 2015 for his thesis work. Jonathan is also a long-time maker and an active member of the Poppy project, an open-source project providing tools to enable the creative exploration of interactive robots for science, education, and art. Recently, and together with three robotics specialists, he co-founded Pollen Robotics, a young start-up aiming to make robotic product development much simpler.

Websites:
- http://jgrizou.com/ 
- https://www.poppy-project.org/en/ 
- https://www.pollen-robotics.com/ 

Going beyond relevance: Incorporating effort into Information retrieval(04 December, 2017)

Speaker: Manisha Verma

Abstract:
Relevance lies at the core of the evaluation of information retrieval systems. However, with rapid development in search algorithms, a myriad of search devices and the increasing complexity of user information needs, we argue that relevance can no longer be the primary criterion for the design and evaluation of IR systems. In this talk, I shall provide a brief overview of our work on characterizing, measuring and incorporating effort in IR.

The first half of the talk shall highlight our work on characterizing and measuring document specific effort. I shall provide a brief overview of how effort can be incorporated in information retrieval. I shall outline one important source of the mismatch between search log based evaluation and offline relevance judgments: the high degree of effort required to identify and consume relevant information in a document. I shall describe how to incorporate effort into existing learning to rank algorithms and their performance on publicly available datasets.

The second half of the talk shall focus on device-specific effort. Users have access to the same information on several devices today. Our work attempts to analyze in depth the differences between mobile and desktop. I shall give a brief overview of how judgments on both devices may differ significantly for different documents. I shall touch on the features that are useful in predicting effort across devices. Finally, I shall close the talk with some unresolved research questions and some failed attempts.

Bio:
Manisha Verma is a final-year Ph.D. student in the Media Futures Group at University College London. Her primary area of research is characterizing user effort and incorporating it in retrieval and evaluation. Some of her recent work has been published at conferences such as CIKM, WSDM, ECIR, and SIGIR. Over the past few years, Manisha has worked with researchers at Google, Microsoft, and Yahoo on understanding the role of user effort in retrieval. She has served as an Ambassador for postgraduate women at UCL, as a co-coordinator of the Tasks Track in TREC 2015-2016, and of the TREC CAR Track in 2017.

IR seminar: Summarizing the Situation with Social Media Streams(27 November, 2017)

Speaker: Richard McCreadie

When a crisis hits, it is important for response agencies to quickly determine the situation on the ground, such that they can deploy the limited resources at their disposal as quickly and effectively as possible. However, during an emergency, information is difficult to come by, as response units often need to arrive on the scene before the severity of the situation can be estimated. On the other hand, during emergencies, the general public is gravitating to social media platforms to ask for assistance and to show what they see to their friends. As such, emergency services are increasingly interested in technologies that can extract relevant information from social media during an emergency, to aid situational awareness. Meanwhile, real-time summarization is an emerging field that aims to build timeline summaries of events that are happening in the world, using news and social media streams as sensors. In this talk, I will provide an overview of what emergency services want to extract from social media, and how real-time summarization systems can help achieve this. Furthermore, I will discuss current technologies and techniques for real-time summarization that are relevant to the crisis domain, along with the challenges that are yet to be solved.

 

Bio: 
Richard McCreadie is a Research Associate at the University of Glasgow, UK. He is an information retrieval specialist, as well as developer and manager for the Terrier open source IR platform, which has been downloaded over 40,000 times since 2004. His research is focused on the interface between streaming IR and social media, tackling topics such as information retrieval architectures for real-time stream processing; leveraging social media for event sensing (detecting events, extracting knowledge and summarizing those events); evaluation methodologies for streaming IR; and social media analytics, particularly when applied to security-related use-cases such as disaster management.


Richard received his Ph.D. on the topic of News Vertical Search using User Generated Content in 2012 and is currently a senior researcher within the Terrier Team IR research group in Glasgow. Furthermore, he works with researchers and industry partners around the world to advance the IR field as co-chair of the streaming summarization evaluation initiatives (2014-Present) and the Incident Streams emergency informatics initiative (2018) at the Text Retrieval Conference (TREC). He is active in the research community with 26 published conference papers in the areas of IR and social media, in addition to articles in longer formats, such as a book on Search in Social Media published in the highly-cited FnTIR series. Richard is also a current PC member for the top-tier conferences in the IR field (ACM CIKM, ACM SIGIR, AAAI ICWSM and ACM WSDM).

IR Seminar: A Study of Snippet Length and Informativeness: Behaviour, Performance and User Experience(20 November, 2017)

Speaker: David Maxwell

The design and presentation of a Search Engine Results Page (SERP) has been subject to much research. With many contemporary aspects of the SERP now under scrutiny, work still remains in investigating more traditional SERP components, such as the result summary. Prior studies have examined a variety of different aspects of result summaries, but in this paper we investigate the influence of result summary length on search behaviour, performance and user experience. To this end, we designed and conducted a within-subjects experiment using the TREC AQUAINT news collection with 53 participants. Using Kullback-Leibler distance as a measure of information gain, we examined result summaries of different lengths and selected four conditions where the change in information gain was the greatest: (i) title only; (ii) title plus one snippet; (iii) title plus two snippets; and (iv) title plus four snippets. Findings show that participants broadly preferred longer result summaries, as they were perceived to be more informative. However, their performance in terms of correctly identifying relevant documents was similar across all four conditions. Furthermore, while the participants felt that longer summaries were more informative, empirical observations suggest otherwise; while participants were more likely to click on relevant items given longer summaries, they were also more likely to click on non-relevant items. These observations show, first, that longer is not necessarily better, though participants perceived that to be the case, and, second, they reveal a positive relationship between the length and informativeness of summaries and their attractiveness (i.e. clickthrough rates). These findings show that there are tensions between perception and performance that need to be taken into account when designing result summaries.
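As a hedged sketch of the information-gain measure mentioned above, the snippet below computes a Kullback-Leibler divergence between the unigram distribution of a result summary and that of its source document. The toy texts, the shared vocabulary and the smoothing constant are assumptions made for illustration, not the study's implementation.

import numpy as np
from collections import Counter

def unigram_dist(text, vocab, alpha=0.1):
    """Smoothed unigram distribution of `text` over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    freqs = np.array([counts[w] + alpha for w in vocab], dtype=float)
    return freqs / freqs.sum()

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

doc = "the minister announced a new policy on energy prices today"
summary = "minister announces energy policy"
vocab = sorted(set(doc.split()) | set(summary.split()))
p, q = unigram_dist(summary, vocab), unigram_dist(doc, vocab)
print(f"KL(summary || document) = {kl_divergence(p, q):.3f}")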

Neural Models for Information Retrieval(06 November, 2017)

Speaker: Bhaskar Mitra

Abstract: In the last few years, neural representation learning approaches have achieved very good performance on many natural language processing (NLP) tasks, such as language modelling and machine translation. This suggests that neural models may also yield significant performance improvements on information retrieval (IR) tasks, such as relevance ranking, addressing the query-document vocabulary mismatch problem by using semantic rather than lexical matching. IR tasks, however, are fundamentally different from NLP tasks leading to new challenges and opportunities for existing neural representation learning approaches for text.

In this talk, I will present my recent work on neural IR models. We begin with a discussion on learning good representations of text for retrieval. I will present visual intuitions about how different embedding spaces capture different relationships between items, and their usefulness to different types of IR tasks. The second part of this talk is focused on the applications of deep neural architectures to the document ranking task.
 
Bio: Bhaskar Mitra is a Principal Applied Scientist at Microsoft AI & Research, Cambridge. He started at Bing in 2007 (then called Live Search) working on several problems related to document ranking, query formulation, entity ranking, and evaluation. His current research interests include representation learning and neural networks, and their applications to information retrieval. He co-organized multiple workshops (at SIGIR 2016 and 2017) and tutorials (at WSDM 2017 and SIGIR 2017) on neural IR, and served as a guest editor for the special issue of the Information Retrieval Journal on the same topic. He is currently pursuing a doctorate at University College London under the supervision of Dr. Emine Yilmaz and Dr. David Barber.

IR Seminar: Jarana Manotumruksa(30 October, 2017)

Speaker: Jarana Manotumruksa

IR Seminar: Incorporating Positional Information and Other Domain Knowledge into a Neural IR Model(23 October, 2017)

Speaker: Andrew Yates

Retrieval models consider query-document interactions to produce a document relevance score for a given query. Traditionally, such interactions have been modelled using handcrafted statistics that generally compare term frequencies within a document and across a collection. Recently, neural models have demonstrated that they provide the instruments necessary to consider query-document interactions directly, without the need for such statistics.
 
In this talk, I will describe how positional term information can be incorporated into a neural IR model. The resulting model, called PACRR, performs substantially better on TREC benchmarks than previous neural approaches. This improvement can be attributed to the fact that PACRR can learn to match both ordered and unordered sequences of query terms in addition to the unigram matches considered by prior work. Using PACRR's approach to modeling query-document interactions as a foundation, I will describe how several well-known IR problems can be addressed within a neural framework; the resulting model substantially outperforms the original PACRR model. Finally, I will provide a brief look inside the PACRR model to highlight the types of positional information it uses and to investigate how such information is combined to produce a relevance score.
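The sketch below is not the PACRR implementation; it only illustrates the kind of positional query-document similarity matrix that such convolutional models operate on, using random vectors in place of trained embeddings. Ordered query-term matches show up as diagonal runs of high similarity that small convolutional filters can detect.

import numpy as np

rng = np.random.default_rng(1)
query_terms = ["neural", "ranking"]
doc_terms = ["a", "neural", "ranking", "model", "for", "search"]
emb = {w: rng.normal(size=50) for w in set(query_terms + doc_terms)}  # stand-in embeddings

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rows: query terms; columns: document positions (the matrix a ConvNet would consume).
sim = np.array([[cos(emb[q], emb[d]) for d in doc_terms] for q in query_terms])
print(np.round(sim, 2))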

Optimal input for low reliability assistive technology(19 October, 2017)

Speaker: John Williamson

Most devices used for human input are reliable, in the sense that errors are small in proportion to the information which passes through the interface channel. There are, however, a few important and relevant human interface channels which have both very low communication rates and very low reliability.

We present a practical and general method for optimal human interaction using binary input devices having very high noise levels, where a reliable feedback channel is available. In particular, we show that efficient navigation and selection techniques are viable even with a binary channel (symmetric or asymmetric) where reliability may be below 75%, with provably optimal performance. This mechanism can automatically adapt to changing channel statistics with no overhead, and does not need precise calibration. A range of visualisations are used to implicitly code for these channels in a way that is transparent to users. We validate our results through a considered process of evaluation from theoretical analysis, automated simulation, and live interaction simulators.
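A minimal sketch of the underlying idea, maintaining a Bayesian posterior over selection targets when every binary input may be flipped with some known error probability, is given below. The fixed binary coding of targets, the error rate and the input sequence are illustrative assumptions, not the presented mechanism (which also adapts to unknown and changing channel statistics).

import numpy as np

def update(posterior, observed_bit, intended_bits, eps=0.3):
    """One Bayes update: intended_bits[i] is the bit a user aiming at target i would send."""
    likelihood = np.where(intended_bits == observed_bit, 1 - eps, eps)
    posterior = posterior * likelihood
    return posterior / posterior.sum()

n_targets = 8
posterior = np.full(n_targets, 1.0 / n_targets)
intended = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # a (fixed) binary split of the targets
for bit in [1, 1, 0, 1]:                       # noisy inputs from the user
    posterior = update(posterior, bit, intended)
print(np.round(posterior, 3))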

Recognition of Grasp Points for Clothes Manipulation under unconstrained Conditions(12 October, 2017)

Speaker: Luz Martinez

Abstract: I will talk about a system for recognizing grasp points in RGB-D images. This system is intended to be used by domestic robots when handling clothes lying at random positions on a table, and it takes into consideration that grasp points are usually near key parts of clothing, such as the waist of pants or the neck of a shirt. I will also cover my recent work on clothing simulators, which I use to obtain images for training deep learning networks.

Short-bio: Luz is a PhD student in Electrical Engineering at the University of Chile and is currently a visiting research student in the Computer Vision and Autonomous Systems group. Luz has worked with service robots for 4 years and has expertise in computer vision, computational intelligence, voice recognition and high-level behaviour design. She is currently working on her PhD thesis, which focuses on clothing recognition using active vision.

Leveraging Ontologies in machine learning(05 October, 2017)

Speaker: David Stirling

This presentation considers a number of successful cases that have significantly benefited from the inclusion of an ontology framework. Firstly, a bespoke human ontology describing cyclic temporal control states has enabled successful multi-objective control (an intelligent autopilot) of a simulated aircraft. Secondly, an empirically learnt ontology was derived to identify several industrial process modalities, which were exploited to reveal underlying causal factors for a set of undesirable modes (states) of high heat loads in a Blast Furnace. The first case reviews a novel approach for learning and building computational models of human skills that are typically utilized in complex control situations. Such skills are often internalized as sub-cognitive and automatic responses, such as those routinely used in driving a car. Previously, a degree of success in modelling these was reported via behavioural cloning. However, skills obtained by this technique often exhibit a lack of generality and robustness when applied to different control tasks. This is mitigated in the alternative approach presented here by segmenting and compressing a universal set of reaction plans with symbolic induction methods. This approach is termed Compressed Heuristic Universal Reaction Planners, or CHURPs. The substantially improved robustness and control performance arises from synergistic interactions and collaborations between the different CHURPs entities, including surrogate control and goal sharing. In the latter case, an abstracted ontology containing nine major heat load modalities was initially learnt as a 38-state Gaussian Mixture Model from several years of Blast Furnace heat load data, and subsequently utilized to diagnose the causal influences determining these states. Such methodologies are now being pursued in a number of kinematic rehabilitation motion studies, as well as in oncology and radiotherapy aspects of cancer care.

 

Bio:
Dr Stirling obtained his BEng degree from the Tasmanian College of Advanced Education (1976), an MSc (Digital Techniques) from Heriot-Watt University, Scotland, UK (1980), and his PhD from the University of Sydney (1995). He has worked for over 20 years in a wide range of industries, including as a Principal Research Scientist with BHP Steel. More recently he joined the University of Wollongong as a Senior Lecturer. David has developed a wide range of expertise in data analysis and knowledge management with skills in problem solving, statistical methods, visualization, pattern recognition, data fusion and reduction. He has applied machine learning and data mining techniques in specialized classifier designs for noisy multivariate data to medical research, exploration geo-science, and financial markets, as well as to industrial primary operations.

 

 

Gesture Typing on Virtual Tabletop: Effect of Input Dimensions on Performance(28 September, 2017)

Speaker: Antoine Loriette

The association of tabletop interaction with gesture typing presents interaction potential for situationally or physically impaired users. In this work, we use depth cameras to create touch surfaces on regular tabletops. We describe our prototype system and report on a supervised learning approach to fingertips touch classification. We follow with a gesture typing study that compares our system with a control tablet scenario and explore the influence of input size and aspect ratio of the virtual surface on the text input performance. We show that novice users perform with the same error rate at half the input rate with our system as compared to the control condition, that an input size between A5 and A4 ensures the best tradeoff between performance and user preference and that users’ indirect tracking ability seems to be the overall performance limiting factor. 

A Theory of How People Make Decisions Through Interaction(14 September, 2017)

Speaker: Andrew Howes

In this talk I will discuss current thinking concerning how people make decisions through interaction. The talk offers evidence for the adaptive, embodied and context-sensitive nature of human decision making. It also offers a computational theory, inspired by machine learning, of how the constraints imposed by the human visual system, and by the visualisation design, lead to emergent strategies for interaction. These strategies focus user attention on certain kinds of information and ignore others; they determine apparent risk preferences and, ultimately, the quality of decisions made.

Amplifying Human Abilities: Digital Technologies to Enhance Perception and Cognition(12 September, 2017)

Speaker: Albrecht Schmidt

Historically the use and development of tools is strongly linked to human evolution and intelligence. The last 10,000 years show stunning progress in physical tools that have transformed what people can do and how people live. Currently, we are at the beginning of an even more fundamental transformation: the use of digital tools to amplify the mind. Digital technologies provide us with entirely new opportunities to enhance the perceptual and cognitive abilities of humans. Many ideas, ranging from mobile access to search engines to wearable devices for lifelogging and augmented reality applications, give us first indications of this transition. In our research we create novel digital technologies that systematically explore how to enhance human cognition and perception. Our experimental approach is to: first, understand the users in their context as well as the potential for enhancement. Second, we create innovative interventions that provide functionality that amplifies human capabilities. And third, we empirically evaluate and quantify the enhancement that is gained by these developments. It is exciting to see how ultimately these new ubiquitous computing technologies have the potential for overcoming fundamental limitations in human perception and cognition.

Data-Efficient Learning for Autonomous Robots(23 August, 2017)

Speaker: Marc Deisenroth

Fully autonomous systems and robots have been a vision for many decades, but we are still far from practical realization. One of the fundamental challenges in fully autonomous systems and robots is learning from data directly without relying on any kind of intricate human knowledge. This requires data-driven statistical methods for modeling, predicting, and decision making, while taking uncertainty into account, e.g., due to measurement noise, sparse data or stochasticity in the environment. In my talk I will focus on machine learning methods for controlling autonomous robots, which pose an additional practical challenge: data-efficiency, i.e., we need to be able to learn controllers in a few experiments since performing millions of experiments with robots is time consuming and wears out the hardware. To address this problem, current learning approaches typically require task-specific knowledge in form of expert demonstrations, pre-shaped policies, or the underlying dynamics. In the first part of the talk, I follow a different approach and speed up learning by efficiently extracting information from sparse data. In particular, I propose to learn a probabilistic, non-parametric Gaussian process dynamics model. By explicitly incorporating model uncertainty in long-term planning and controller learning my approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art reinforcement learning our model-based policy search method achieves an unprecedented speed of learning, which makes it most promising for application to real systems. I demonstrate its applicability to autonomous learning from scratch on real robot and control tasks. In the second part of my talk, I will discuss an alternative method for learning controllers for bipedal locomotion based on Bayesian Optimization, where it is hard to learn models of the underlying dynamics due to ground contacts. Using Bayesian optimization, we sidestep this modeling issue and directly optimize the controller parameters without the need of modeling the robot's dynamics.
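As a toy sketch of the model-learning ingredient described above, the snippet below fits a Gaussian process to one-step dynamics data and returns a predictive mean and uncertainty for a candidate state-control pair. scikit-learn is used here as a stand-in, and the synthetic dynamics are an assumption; this is not the speaker's implementation, which propagates such uncertainty through long-term planning and policy search.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))                                  # columns: state x_t, control u_t
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=40)  # observed next state

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

# Predictive mean and standard deviation for a candidate (state, control) pair.
mean, std = gp.predict(np.array([[0.3, -0.2]]), return_std=True)
print(f"predicted next state: {mean[0]:.3f} +/- {std[0]:.3f}")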

NOTE MEETING ROOM CHANGE - NOW IN SAWB 303 DUE TO DELAYS IN BUILDING WORK COMPLETION

Context-aware and Context-Driven Applications on the Web(07 July, 2017)

Speaker: Yong Zheng

Context-awareness has been explored and applied in multiple areas, including ubiquitous computing, information retrieval and recommender systems. We may need to collect contexts in advance, so that the system can make changes by adapting to these dynamic situations. Obviously, it is much easier to collect this information from sensors in ubiquitous computing, but the process of context acquisition becomes one of the challenges in Web applications. In this talk, we introduce context-aware applications on the Web, especially those based on information retrieval and recommender systems. In addition, we highlight and discuss the context-driven applications that may influence the process of context collection, user interfaces and interactions, as well as relevant algorithms to support these novel applications.

Bio:

Dr. Yong Zheng obtained his PhD degree in Computer and Information Sciences from DePaul University, USA. Currently, he is a full-time senior lecturer at the Illinois Institute of Technology, USA. His research lies in user modeling, behavior analysis, human factors (user emotions and personalities), context-awareness, multi-criteria decision making, educational learning, and recommender systems. In particular, he is one of the experts in context-aware recommender systems, and he served as a data science consultant at NPAW (Nice People At Work), Barcelona, Spain to help them build context-aware recommendation engines. He has published more than two dozen academic papers related to his research topics. He served as publicity chair at ACM RecSys 2018 and ACM IUI 2018, has organized multiple workshops related to recommender systems, and has been invited as a PC member for a number of academic conferences, such as WWW, ACM RecSys, ACM UMAP, and ACM IUI.

Building relevance judgments automatically for a test collection.(19 June, 2017)

Speaker: Mireille Makary

In this talk, I will present my ongoing research on two different approaches for building relevance judgments (qrels) for TREC test collections without any human intervention: one based on keyphrase extraction and another based on supervised machine learning using Naïve Bayes and support vector machine classifiers.

Bio: I am a PhD student at the University of Wolverhampton, in the Research Group in Computational Linguistics. My research area is information retrieval. I am also a lecturer in the Computer Science Department at the International University - Lebanon.

 

Effectively and Efficiently Searching Among Sensitive Content(08 June, 2017)

Speaker: Professor Douglas W. Oard

In Europe today, people have a “right to be forgotten.” Exercising that right requires identifying each Web page that a person wishes to have removed from a search engine’s index. In Maryland today, people have no right to record what they hear in the course of a day without the consent of every person whom they hear. The law provides that the penalty for doing so could be as much as a year in jail for the first offence. In many jurisdictions today, citizens have a right to request information held by their government. Government officials who seek to sift through that information to determine which parts are releasable sometimes take so long to do so that the public purpose for which the request was originally made simply cannot be served. In this talk I will argue that each of these problems arises from the same cause: an almost complete lack of attention to building language technologies that can proactively protect sensitive content. I will further claim that the language technology for performing these tasks is well within the present state of the art, but that we will need to co-evolve the design of our information systems with the legislative, regulatory and normative public policy frameworks within which those new capabilities would be employed. Finally, I will illustrate the considerations that arise by describing a new project in which we are seeking to integrate protection for sensitive content into a search engine that is designed to provide public access to collections in which sensitive and non-sensitive content are intermixed and unlabelled.

About the Speaker:

Douglas Oard is a Professor at the University of Maryland, College Park (USA), with joint appointments there in the College of Information Studies (Maryland’s iSchool) and the University of Maryland Institute for Advanced Computer Studies (UMIACS).  Dr. Oard earned his Ph.D. in Electrical Engineering from the University of Maryland.  His research interests center around the use of emerging technologies to support information seeking by end users.  Additional information is available at http://terpconnect.umd.edu/~oard/.

Simple Rules from Chaos: Towards Socially Aware Robotics using Agent-Local Cellular Automata(08 May, 2017)

Speaker: Alexander Hallgren

Controlling robotic agents requires complex control methods. This study aims to take advantage of emergent behaviours to reduce the amount of complexity. Cellular automata (CA) are employed as a means to generate emergent behaviour at low computational cost. A novel architecture is developed based on subsumption architecture, which uses an agent-local CA to influence the selection of a behaviour. The architecture is tested by measuring the time it takes the robot to navigate through a maze. Two different models are used to evaluate the system. The results indicate that the current configuration is ineffective, but a number of tasks are proposed as future work.

Spatial Smoothing in Mass Spectrometry Imaging(08 May, 2017)

Speaker: Arijus Pleska

In this paper, we target a data modelling approach used in computational metabolomics; to be specific, we assess whether spatial smoothing improves topic term and noise identification. By assessing mass spectrometry imaging data, we design an enhancement for latent Dirichlet allocation-based topic models. For both data pre-processing and topic model design, we survey relevant research. Further, we present the proposed methodology in detail, providing the preliminaries and guiding the reader through the performed topic model enhancements. To assess the performance, we evaluate the spatial smoothing application on a number

Integrating a Biologically Inspired Software Retina with Convolutional Neural Networks(08 May, 2017)

Speaker: Piotr Ozimek

 

Convolutional neural networks are the state-of-the-art machine learning model for a wide range of computer vision tasks; however, a major drawback of the method is that there rarely is enough memory or computational power for ConvNets to operate directly on large, high-resolution images. We present a biologically inspired method for pre-processing images provided to ConvNets, the benefits of which are:
1) a visual attention mechanism that preserves high frequency information around the foveal focal point by the use of space-variant subsampling
2) a conforming and inherently scale and rotation invariant mapping for presenting images to the ConvNet
3) a highly parameterizable image compression process
The method is based on the mammalian retino-cortical transform. This is the first attempt at integrating such a process with ConvNets. To evaluate the method, a dataset was built from ImageNet and a set of ConvNets with identical architectures was trained on raw, partially pre-processed and fully pre-processed images. The ConvNets achieved comparable results, suggesting an untapped potential in drawing inspiration from natural vision systems.
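A rough sketch of space-variant subsampling around a focal point is shown below: pixels are sampled on exponentially spaced rings, so resolution is high near the fovea and coarse in the periphery. This is only in the spirit of a retino-cortical transform; the ring and wedge counts and the nearest-pixel sampling are assumptions, not the authors' software retina.

import numpy as np

def log_polar_sample(image, center, n_rings=32, n_wedges=64):
    """Sample `image` on exponentially spaced rings around `center` (nearest pixel)."""
    h, w = image.shape
    cy, cx = center
    max_r = min(cy, cx, h - cy, w - cx)
    radii = np.exp(np.linspace(0, np.log(max_r), n_rings))      # dense near the centre
    angles = np.linspace(0, 2 * np.pi, n_wedges, endpoint=False)
    ys = (cy + np.outer(radii, np.sin(angles))).astype(int)
    xs = (cx + np.outer(radii, np.cos(angles))).astype(int)
    return image[ys.clip(0, h - 1), xs.clip(0, w - 1)]          # shape (n_rings, n_wedges)

img = np.random.rand(256, 256)                                  # stand-in for an input image
cortical = log_polar_sample(img, center=(128, 128))
print(cortical.shape)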
 

Investigation of users' affective and physiological traits in a multi-modal interaction context(04 May, 2017)

Speaker: Iulia Popescu

In this talk, I will present my Level 5 (MSci) project, which explored how users react and what they feel when they are exposed to different types of stimuli (visual, auditory). This study aimed to understand how short-term stressors impact individuals’ behaviour when they need to complete a task in a multi-modal interaction context (e.g. searching for a flight using graphical and spoken dialogue interfaces). Additionally, I will give an overview of the data set that has been delivered as part of this project and how it can be used for further research.

Real-time Mobile Object Removal using Google Project Tango(04 May, 2017)

Speaker: Rhys Simpson

Visually removing objects from a video feed is difficult to perform in real-time, as existing solutions rely on expensive patch lookups and specific environment conditions to produce meaningful results. Results are also guessed from the image surrounding the object, usually making them physically inaccurate and visually displeasing. Recent advances in hardware and software are pushing businesses to make large investments into Augmented Reality, including furniture catalogue applications, which could greatly benefit if existing objects could be visually removed from the video feed in real-time. This paper demonstrates a novel approach where instead of painting frames in an entirely 2D context, a 3D room mesh is captured, tracked and selectively rendered to paint geometry that was behind the object over it. The object's mask, and filled textures covering the planes the object was in contact with are also sourced and tracked from this mesh. Our approach works for a broad range of objects in typical indoors scenes, where target objects are separate and against large wall and floor planes. We show that our algorithm produces much better results per frame than object removal using traditional 2D inpainting, at an interactive framerate, and demonstrate that temporal incoherence between subsequent video frames is also eliminated.

IDA Seminar: Probabilistic Deep Learning: Models for Unsupervised Representation Learning(04 May, 2017)

Speaker: Dr Sebastian Nowozin

An important problem in achieving general artificial intelligence is the data-efficient learning of representations suitable for causal reasoning, planning, and decision making.  Learning such representations from unsupervised data is challenging and requires flexible models to discover the underlying manifold of high-dimensional data.  Recently three new classes of unsupervised learning approaches based on deep learning have enabled major progress towards large-scale unsupervised learning: generative adversarial networks (GAN), variational autoencoders (VAE), and approaches based on integral probability metrics (IPM).

I will provide an overview of these methods, research contributions by my group, and the main open research questions around this new class of learning methods.

 

Big Crisis Data - an exciting frontier for applied computing.(24 April, 2017)

Speaker: Carlos Castillo

Social media is an invaluable source of time-critical information during a crisis. However, emergency response and humanitarian relief organizations that would like to use this information struggle with an avalanche of social media messages, exceeding their capacity to process them. In this talk, we will look at how interdisciplinary research has enabled the creation of new tools for emergency managers, decision makers, and affected communities. These tools typically incorporate a combination of automatic processing and crowdsourcing. The talk will also look at ethical issues of this line of research.

http://bigcrisisdata.org/

ProbUI: Generalising Touch Target Representations to Enable Declarative Gesture Definition for Probabilistic GUIs (20 April, 2017)

Speaker: Daniel Buschek

We present ProbUI, a mobile touch GUI framework that merges ease of use of declarative gesture definition with the benefits of probabilistic reasoning. It helps developers to handle uncertain input and implement feedback and GUI adaptations. ProbUI replaces today's static target models (bounding boxes) with probabilistic gestures ("bounding behaviours"). It is the first touch GUI framework to unite concepts from three areas of related work: 1) Developers declaratively define touch behaviours for GUI targets. As a key insight, the declarations imply simple probabilistic models (HMMs with 2D Gaussian emissions). 2) ProbUI derives these models automatically to evaluate users' touch sequences. 3) It then infers intended behaviour and target. Developers bind callbacks to gesture progress, completion, and other conditions. We show ProbUI's value by implementing existing and novel widgets, and report developer feedback from a survey and a lab study.
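The snippet below illustrates, in a heavily simplified form, the probabilistic target idea: a touch point is scored against two targets whose spatial behaviour is modelled as a 2D Gaussian, and the likelihoods are normalised into a posterior over intended targets. It is not the ProbUI framework (which models whole touch sequences with HMMs); the target positions, covariances and uniform prior are assumptions.

import numpy as np
from scipy.stats import multivariate_normal

# Two hypothetical GUI targets with 2D Gaussian "bounding behaviours" (pixel units).
targets = {
    "button_a": multivariate_normal(mean=[50, 100], cov=[[120, 0], [0, 80]]),
    "button_b": multivariate_normal(mean=[200, 100], cov=[[120, 0], [0, 80]]),
}

touch = np.array([70, 95])  # an ambiguous touch point
likelihoods = {name: dist.pdf(touch) for name, dist in targets.items()}
total = sum(likelihoods.values())
for name, lik in likelihoods.items():
    print(f"P({name} | touch) = {lik / total:.2f}")  # assumes a uniform prior over targets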

Information Foraging in Environments(31 March, 2017)

Speaker: Kevin Ong

Kevin is a PhD student from the ISAR Research Group at RMIT University, Australia. Kevin has previously worked on logs from the National Archives UK, the Peter MacCallum Cancer Institute, the Westfield Group and Listcorp.

In this talk, he will talk about his work on information foraging in physical and virtual environments. The first part of his talk will be on "Understanding information foraging in physical environment - a log analysis" and the second part of his talk will be on "information foraging in virtual environments - an observational study".

Semantic Search at Bloomberg.(27 March, 2017)

Speaker: Edgar Meij

Abstract:

Large-scale knowledge graphs (KGs) store relationships between entities that are increasingly being used to improve the user experience in search applications. At Bloomberg we are currently in the process of rolling out our own knowledge graph and in this talk I will describe some of the semantic search applications that we aim to support. In particular, I will be discussing some of our recent papers on context-specific entity recommendations and automatically generating textual descriptions for arbitrary KG relationships.

Bio:

Dr. Edgar Meij is a senior scientist at Bloomberg. Before this, he was a research scientist at Yahoo Labs and a postdoc at the University of Amsterdam, where he also obtained his PhD. His research focuses on advancing the state of the art in semantic search at Web scale, by designing entity-oriented search systems that employ knowledge graphs, entity linking, NLP, and machine learning techniques to improve the user experience, search, and recommendations. He has co-authored 50+ peer-reviewed papers and regularly teaches at the post-graduate level, including university courses, summer schools, and conference tutorials.

Assessing User Engagement in Information Retrieval Systems(20 March, 2017)

Speaker: Mengdie Zhuang

Abstract:

In this study, we investigated both user actions from log files and the results of the User Engagement Scale, both of which came from a study of people interacting with a retrieval interface containing an image collection, but with a non-purposeful task. Our results suggest that selected behaviour measures are associated with selected user perceptions of engagement (i.e., focused attention, felt involvement, and novelty), while typical search and browse measures have no association with aesthetics and perceived usability. This finding can lead towards a more systematic user-centered evaluation model.

Bio:

Mengdie Zhuang is a PhD student from the University of Sheffield, UK. Her research focuses on evaluation metrics of Information Retrieval Systems.

Access, Search and Enrichment in Temporal Collections(06 March, 2017)

Speaker: Avishek Anand

There have been numerous efforts recently to digitize previously published content and to preserve born-digital content, leading to the widespread growth of large temporal text repositories. Temporal collections are continuously growing text collections which contain versions of documents spanning long time periods and present many opportunities for historical, cultural and political analyses. Consequently there is a growing need for methods that can efficiently access, search and mine them. In this talk we deal with approaches to each of these aspects -- access, search and enrichment. First, I describe some of the access methods for searching temporal collections: specifically, how do we index text to support temporal workloads? Secondly, I will describe retrieval models which exploit historical information, essential in searching such collections: that is, how do we rank documents given temporal query intents? Finally, I will present some of the ongoing efforts to mine such collections for enriching knowledge sources like Wikipedia.

A stochastic formulation of a dynamical singly constrained spatial interaction model (02 March, 2017)

Speaker: Mark Girolami

One of the challenges of 21st-century science is to model the evolution of complex systems. One example of practical importance is urban structure, for which the dynamics may be described by a series of non-linear first-order ordinary differential equations. Whilst this approach provides a reasonable model of spatial interaction relevant in areas as diverse as public health and urban retail structure, it is somewhat restrictive owing to uncertainties arising in the modelling process.

We address these shortcomings by developing a dynamical singly constrained spatial interaction model, based on a system of stochastic differential equations. Our model is ergodic and the invariant distribution encodes our prior knowledge of spatio-temporal interactions. We proceed by performing inference and prediction in a Bayesian setting, and explore the resulting probability distributions with a position-specific metropolis-adjusted Langevin algorithm. Insights from studies of retail structure interactions within the city of London are used as an illustration.
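For readers unfamiliar with the sampler mentioned above, the sketch below shows a generic Metropolis-adjusted Langevin (MALA) step applied to a simple Gaussian target. It is illustrative only: the position-specific variant and the spatial interaction model from the talk are not reproduced here.

import numpy as np

def mala_step(x, log_p, grad_log_p, step=0.1, rng=np.random.default_rng()):
    """One Metropolis-adjusted Langevin proposal and accept/reject step."""
    prop = x + 0.5 * step * grad_log_p(x) + np.sqrt(step) * rng.normal(size=x.shape)
    def log_q(a, b):  # log density of proposing a from b
        diff = a - (b + 0.5 * step * grad_log_p(b))
        return -np.sum(diff ** 2) / (2 * step)
    log_alpha = log_p(prop) - log_p(x) + log_q(x, prop) - log_q(prop, x)
    return prop if np.log(rng.uniform()) < log_alpha else x

# Example target: a 2D standard Gaussian.
log_p = lambda x: -0.5 * np.sum(x ** 2)
grad_log_p = lambda x: -x
x, samples = np.zeros(2), []
for _ in range(1000):
    x = mala_step(x, log_p, grad_log_p)
    samples.append(x)
print(np.mean(samples, axis=0))  # should be close to the origin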

Collaborative Information Retrieval.(27 February, 2017)

Speaker: Nyi Nyi Htun

Presentation of 2 papers to appear at CHIIR 2017.

Paper 1:

Title: How Can We Better Support Users with Non-Uniform Information Access in Collaborative Information Retrieval?

Abstract: The majority of research in Collaborative Information Retrieval (CIR) has assumed that collaborating team members have uniform information access. However, practice and research have shown that there may not always be uniform information access among team members, e.g. in healthcare, government, etc. To the best of our knowledge, there has not been a controlled user evaluation to measure the impact of non-uniform information access on CIR outcomes. To address this shortcoming, we conducted a controlled user evaluation using 2 non-uniform access scenarios (document removal and term blacklisting) and 1 full and uniform access scenario. Following this, a design interview was undertaken to provide interface design suggestions. Evaluation results show that neither of the 2 non-uniform access scenarios had a significant negative impact on collaborative and individual search outcomes. Design interview results suggested that awareness of the team’s query history and intersecting viewed/judged documents could potentially help users share their expertise without disclosing sensitive information.

Paper 2:

Title: An Interface for Supporting Asynchronous Multi-Level Collaborative Information Retrieval

Abstract: Case studies and observations from different domains including government, healthcare and legal, have suggested that Collaborative Information Retrieval (CIR) sometimes involves people with unequal access to information. This type of scenario has been referred to as Multi-Level CIR (MLCIR). In addition to supporting collaboration, MLCIR systems must ensure that there is no unintended disclosure of sensitive information, this is an under investigated area of research. We present results of an evaluation of an interface we have designed for MLCIR scenarios. Pairs of participants used the interface under 3 different information access scenarios for a variety of search tasks. These scenarios included one CIR and two MLCIR scenarios, namely: full access (FA), document removal (DR) and term blacklisting (TR). Design interviews were conducted post evaluation to obtain qualitative feedback from participants. Evaluation results showed that our interface performed well for both DR and FA scenarios but for TR, team members with less access had a negative influence on their partner’s search performance, demonstrating insights into how different MLCIR scenarios should be supported. Design interview results showed that our interface helped the participants to reformulate their queries, understand their partner’s performance, reduce duplicated work and review their team’s search history without disclosing sensitive information.

A Comparison of Document-at-a-Time and Score-at-a-Time Query Evaluation(14 February, 2017)

Speaker: Joel Mackenzie

We present an empirical comparison between document-at-a-time (DaaT) and score-at-a-time (SaaT) document ranking strategies within a common framework. Although both strategies have been extensively explored, the literature lacks a fair, direct comparison: such a study has been difficult due to vastly different query evaluation mechanics and index organizations. Our work controls for score quantization, document processing, compression, implementation language, implementation effort, and a number of details, arriving at an empirical evaluation that fairly characterizes the performance of three specific techniques: WAND (DaaT), BMW (DaaT), and JASS (SaaT). Experiments reveal a number of interesting findings. The performance gap between WAND and BMW is not as clear as the literature suggests, and both methods are susceptible to tail queries that may take orders of magnitude longer than the median query to execute. Surprisingly, approximate query evaluation in WAND and BMW does not significantly reduce the risk of these tail queries. Overall, JASS is slightly slower than either WAND or BMW, but exhibits much lower variance in query latencies and is much less susceptible to tail query effects. Furthermore, JASS query latency is not particularly sensitive to the retrieval depth, making it an appealing solution for performance-sensitive applications where bounds on query latencies are desirable.
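As background to the comparison above, the sketch below shows a bare-bones document-at-a-time traversal over doc-ordered posting lists with a toy term-frequency scorer. It omits the dynamic pruning that WAND and BMW add and the impact-ordered, score-at-a-time layout used by JASS; the postings and scoring are illustrative assumptions only.

import heapq
from collections import defaultdict

# postings: term -> list of (doc_id, term_frequency), sorted by doc_id
postings = {
    "neural":  [(1, 2), (3, 1), (7, 4)],
    "ranking": [(1, 1), (2, 3), (7, 1)],
}

def daat_topk(query_terms, postings, k=2):
    scores, cursors = defaultdict(float), {t: 0 for t in query_terms}
    while True:
        # The next document to score is the smallest doc_id under any cursor.
        heads = [postings[t][cursors[t]][0]
                 for t in query_terms if cursors[t] < len(postings[t])]
        if not heads:
            break
        doc = min(heads)
        for t in query_terms:                       # fully score `doc` across all terms
            c = cursors[t]
            if c < len(postings[t]) and postings[t][c][0] == doc:
                scores[doc] += postings[t][c][1]    # toy scorer: raw term frequency
                cursors[t] += 1
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])

print(daat_topk(["neural", "ranking"], postings))   # e.g. [(7, 5.0), (1, 3.0)]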

Bio:

Joel is a PhD candidate at RMIT University, Melbourne, Australia. He works with Dr J. Shane Culpepper and Assoc Prof. Falk Scholer on efficient and effective candidate generation for multi-stage retrieval. His research interests include index efficiency, multi-stage retrieval and distributed IR.

Unsupervised Event Extraction and Storyline Generation from Text(13 February, 2017)

Speaker: Dr. Yulan He

This talk consists of two parts. In the first part, I will present our proposed Latent Event and Categorisation Model (LECM), which is an unsupervised Bayesian model for the extraction of structured representations of events from Twitter without the use of any labelled data. The extracted events are automatically clustered into coherent event type groups. The proposed framework has been evaluated on over 60 million tweets and has achieved a precision of 70%, outperforming the state-of-the-art open event extraction system by nearly 6%. The LECM model has been extended to jointly model event extraction and visualisation, which performs remarkably better than both the state-of-the-art event extraction method and a pipeline approach for event extraction and visualisation.

In the second part of my talk, I will present a non-parametric generative model to extract structured representations and evolution patterns of storylines simultaneously. In the model, each storyline is modelled as a joint distribution over some locations, organisations, persons, keywords and a set of topics. We further combine this model with the Chinese restaurant process so that the number of storylines can be determined automatically without human intervention. The proposed model is able to generate coherent storylines from news articles.

Bio:
 
Yulan He is a Reader and Director of the Systems Analytics Research Institute at Aston University. She obtained her PhD degree in Spoken Language Understanding in 2004 from the University of Cambridge, UK. Prior to joining Aston, she was a Senior Lecturer at the Open University, Lecturer at the University of Exeter and Lecturer at the University of Reading. Her current research interests lie in the integration of machine learning and natural language processing for text mining and social media analysis. Yulan has published over 140 papers, most of which appeared in high-impact journals and at top conferences such as IEEE Transactions on Knowledge and Data Engineering, IEEE Intelligent Systems, KDD, CIKM, ACL, etc. She served as an Area Chair for NAACL 2016, EMNLP 2015, CCL 2015 and NLPCC 2015, and co-organised ECIR 2010 and IAPR 2007.

Applying Machine Learning to Data Exploration.(23 January, 2017)

Speaker: Charles Sutton

One of the first and most fundamental tasks in data mining is what we might call data understanding. Given a dump of data, what's in it? If modern machine learning methods are effective at finding patterns in data, then they should be effective at summarizing data sets so as to help data analysts develop a high-level understanding of them.

I'll describe several different approaches to this problem. First I'll describe a new approach to classic data mining problems, such as frequent itemset mining and frequent sequence mining, using a new principled model from probabilistic machine learning. Essentially, this casts the problem of pattern mining as one of structure learning in a probabilistic model. I'll describe an application to summarizing the usage of software libraries on Github.

A second attack on this general problem is based on cluster analysis. A good clustering can help a data analyst to explore and understand a data set, but what constitutes a good clustering may depend on domain-specific and application-specific criteria. I'll describe a new framework for interactive clustering that allows the analyst to examine a clustering and guide it in a way that is more useful for their information need.

Finally, topic modelling has proven to be a highly useful family of methods for data exploration, but it still requires a large amount of specialized effort to develop a new topic model for a specific data analysis scenario. I'll present new results on highly scalable inference for latent Dirichlet allocation based on recently proposed deep learning methods for probabilistic models.

Slides and relevant papers will be available at http://homepages.inf.ed.ac.uk/csutton/talks/

Rethinking eye gaze for human-computer interaction(19 January, 2017)

Speaker: Hans Gellersen

Eye movements are central to most of our interactions. We use our eyes to see and guide our actions and they are a natural interface that is reflective of our goals and interests. At the same time, our eyes afford fast and accurate control for directing our attention, selecting targets for interaction, and expressing intent. Even though our eyes play such a central part to interaction, we rarely think about the movement of our eyes and have limited awareness of the diverse ways in which we use our eyes --- for instance, to examine visual scenes, follow movement, guide our hands, communicate non-verbally, and establish shared attention. 

This talk will reflect on use of eye movement as input in human-computer interaction. Jacob's seminal work showed over 25 years ago that eye gaze is natural for pointing, albeit marred by problems of Midas Touch and limited accuracy. I will discuss new work on eye gaze as input that looks beyond conventional gaze pointing. This includes work on: gaze and touch, where we use gaze to naturally modulate manual input; gaze and motion, where we introduce a new form of gaze input based on the smooth pursuit movement our eyes perform when they follow a moving object; and gaze and games, where we explore social gaze in interaction with avatars and joint attention as multi-user input.
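
As a rough illustration of how smooth-pursuit input of the kind mentioned above is often realised (matching the gaze trajectory against each on-screen target, for example by correlation), here is a minimal numpy sketch. The window length, threshold and targets are invented for the example, and this is not the speaker's system.

    import numpy as np

    def pearson(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def select_target(gaze_xy, targets_xy, threshold=0.8):
        # gaze_xy: (T, 2) gaze samples; targets_xy: dict name -> (T, 2) target trajectories
        best, best_r = None, threshold
        for name, traj in targets_xy.items():
            r = 0.5 * (pearson(gaze_xy[:, 0], traj[:, 0]) +
                       pearson(gaze_xy[:, 1], traj[:, 1]))
            if r > best_r:
                best, best_r = name, r
        return best

    # toy demo: the gaze follows a circling target and ignores a linearly moving one
    t = np.linspace(0, 2 * np.pi, 100)
    circle = np.stack([np.cos(t), np.sin(t)], axis=1)
    line = np.stack([t, 0.3 * t], axis=1)
    gaze = circle + 0.05 * np.random.randn(100, 2)   # noisy pursuit of the circle
    print(select_target(gaze, {"circle": circle, "line": line}))   # -> "circle"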

Hans Gellersen is Professor of Interactive Systems at Lancaster University. Hans' research interest is in sensors and devices for ubiquitous computing and human-computer interaction. He has worked on systems that blend physical and digital interaction, methods that infer context and human activity, and techniques that facilitate spontaneous interaction across devices. In recent work he is focussing on eye movement as a source of context information and modality for interaction. 

The Role of Relevance in Sponsored Search.(16 January, 2017)

Speaker: Fabrizio Silvestri

Sponsored search aims at retrieving the advertisements that, on the one hand, meet users’ intent reflected in their search queries and, on the other hand, attract user clicks to generate revenue. Advertisements are typically ranked based on their expected revenue, which is computed as the product of their predicted probability of being clicked (i.e., clickability) and their advertiser-provided bid. The relevance of an advertisement to a user query is implicitly captured by the predicted clickability of the advertisement, assuming that relevant advertisements are more likely to attract user clicks. However, this approach easily biases the ranking toward advertisements having rich click history. This may incorrectly lead to showing irrelevant advertisements whose clickability is not accurately predicted due to lack of click history. Another side effect consists of never giving a chance to new advertisements that may be highly relevant, due to their lack of click history. To address this problem, we explicitly measure the relevance between an advertisement and a query without relying on the advertisement’s click history, and present different ways of leveraging this relevance to improve user search experience without reducing search engine revenue. Specifically, we propose a machine learning approach that solely relies on text-based features to measure the relevance between an advertisement and a query. We discuss how the introduced relevance can be used in four important use cases: pre-filtering of irrelevant advertisements, recovering advertisements with little history, improving clickability prediction, and re-ranking of the advertisements on the final search result page. Offline experiments using large-scale query logs and online A/B tests demonstrate the superiority of the proposed click-oblivious relevance model and the important roles that relevance plays in sponsored search.
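
To make the ranking rule described above concrete, the toy sketch below ranks ads by expected revenue (predicted clickability times advertiser bid) after a click-oblivious relevance pre-filter. The ads, probabilities and threshold are made up, and the learned models behind the real system are not shown.

    # (ad_id, predicted_click_probability, bid, click_oblivious_relevance)
    ads = [
        ("ad_a", 0.050, 1.20, 0.9),
        ("ad_b", 0.120, 0.40, 0.2),   # high clickability but low relevance
        ("ad_c", 0.015, 3.00, 0.8),
    ]

    def rank_ads(ads, relevance_threshold=0.5):
        eligible = [a for a in ads if a[3] >= relevance_threshold]   # pre-filtering
        return sorted(eligible, key=lambda a: a[1] * a[2], reverse=True)

    for ad_id, p_click, bid, rel in rank_ads(ads):
        print(ad_id, "expected revenue:", round(p_click * bid, 4))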

Working toward computer generated music traditions(12 January, 2017)

Speaker: Bob Sturm

I will discuss research aimed at making computers intelligent and sensitive enough to work with music data, whether acoustic or symbolic. Invariably, this includes a lot of work in applying machine learning to music collections in order to divine distinguishing and identifiable characteristics of practices that defy strict definition. Many of the resulting machine music listening systems appear to be musically sensitive and intelligent, but their fraudulent ways can be revealed when they are used to create music in the styles they have been taught to identify. Such “evaluation by generation” is a powerful way to gauge the generality of what a machine has learned to do. I will present several examples, focusing in particular on our work applying deep LSTM networks to modelling folk music transcriptions, and ultimately generating new music traditions.

 

References:

https://github.com/IraKorshunova/folk-rnn

https://highnoongmt.wordpress.com/2015/05/22/lisls-stis-recurrent-neural-networks-for-folk-music-generation/ 

https://highnoongmt.wordpress.com/?s=%22Deep+learning+for+assisting+the+process%22&submit=Search

 

https://youtu.be/YMbWwU2JdLg

https://youtu.be/RaO4HpM07hE 

https://soundcloud.com/sturmen-1

Studies of Disputed Authorship(09 January, 2017)

Speaker: Michael P. Oakes

Automatic author identification is a branch of computational stylometry, which is the computer analysis of writing style. It is based on the idea that an author’s style can be described by a unique set of textual features, typically the frequency of use of individual words, but sometimes considering the use of higher level linguistic features. Disputed authorship studies assume that some of these features are outside the author’s conscious control, and thus provide a reliable means of discriminating between individual authors. Many studies have successfully made use of high frequency function words like “the”, “of” and “and”, which tend to have grammatical functions rather than reveal the topic of the text. Their usage is unlikely to be consciously regulated by authors, but varies substantially between authors, texts, and even individual characters in Jane Austen’s novels. Using stylometric techniques, Oakes and Pichler (2013) were able to show that the writing style of the document “Diktat für Schlick” was much more similar to that of Wittgenstein than that of other philosophers of the Vienna Circle. Michael Oakes is currently researching the authorship of “The Dark Tower”, normally attributed to C. S. Lewis.
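
As a flavour of how function-word stylometry can be computed (not the specific analysis used by Oakes and Pichler), the sketch below builds relative-frequency profiles over a handful of function words and compares a disputed text to candidate authors with a Burrows'-Delta-style distance; the word list, example texts and normalisation are illustrative.

    import numpy as np

    FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that", "it"]

    def profile(text):
        tokens = text.lower().split()
        counts = np.array([tokens.count(w) for w in FUNCTION_WORDS], dtype=float)
        return counts / max(len(tokens), 1)            # relative frequencies

    def delta(disputed, candidates):
        # candidates: dict author -> list of texts; smaller delta = closer style
        profiles = {a: np.mean([profile(t) for t in texts], axis=0)
                    for a, texts in candidates.items()}
        all_p = np.array(list(profiles.values()))
        mu, sigma = all_p.mean(axis=0), all_p.std(axis=0) + 1e-9
        z_disp = (profile(disputed) - mu) / sigma
        return {a: float(np.mean(np.abs(z_disp - (p - mu) / sigma)))
                for a, p in profiles.items()}

    candidates = {"author_x": ["the cat sat on the mat and it was the best of days"],
                  "author_y": ["to be or not to be that is a question of great import"]}
    print(delta("the dog lay on the rug and it was the best of times", candidates))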

Satisfying User Needs or Beating Baselines? Not always the same.(12 December, 2016)

Speaker: Walid Magdy

Information retrieval (IR) is mainly concerned with retrieving relevant documents to satisfy the information needs of users. Many IR tasks involving different genres and search scenarios have been studied for decades. Typically, researchers aim to improve retrieval effectiveness beyond the current “state-of-the-art”. However, revisiting the modeling of the IR task itself is often essential before seeking improvement of results. This includes reconsidering the assumed search scenario, the approach used to solve the problem, or even the conducted evaluation methodology. In this talk, some well-known IR tasks are explored to demonstrate that beating the state-of-the-art baseline is not always sufficient. Novel modeling, understanding, or approach to IR tasks could lead to significant improvements in user satisfaction compared to just improving “objective” retrieval effectiveness. The talk includes example IR tasks, such as printed document search, patent search, speech search, and social media search.

Supporting Evidence-based Medicine with Natural Language Processing(28 November, 2016)

Speaker: Dr. Mark Stevenson

The modern evidence-based approach to medicine is designed to ensure that patients are given the best possible care by basing treatment decisions on robust evidence. But the huge volume of information available to medical and health policy decision makers can make it difficult for them to decide on the best approach. Much of the current medical knowledge is stored in textual format and providing tools to help access it represents a significant opportunity for Natural Language Processing and Information Retrieval. However, automatically processing documents in this domain is not straightforward and doing so successfully requires a range of challenges to be overcome, including dealing with volume, ambiguity, complexity and inconsistency.  This talk will present a range of approaches from Natural Language Processing that support access to medical information. It will focus on three tasks: Word Sense Disambiguation, Relation Extraction and Contradiction Identification. The talk will outline the challenges faced when developing approaches for accessing information contained in medical documents, including the lack of available gold standard data to train systems. It will show how existing resources can help alleviate this problem by providing information that allows training data to be created automatically.

SHIP: The Single-handed Interaction Problem in Mobile and Wearable Computing(24 November, 2016)

Speaker: Hui-Shyong Yeo

Screen sizes on devices are becoming smaller (e.g. smartwatches and music players) and larger (e.g. phablets and tablets) at the same time. Each of these trends can make devices difficult to use with only one hand (e.g. the fat-finger or reachability problem). This Single-Handed Interaction Problem (SHIP) is not new, but it has been evolving along with the growth of larger and smaller interaction surfaces. The problem is exacerbated when the other hand is occupied (encumbered) or not available (missing fingers/limbs). The use of voice commands or wrist gestures can be less robust or perceived as awkward in public.

This talk will discuss several projects (RadarCat UIST 2016, WatchMI MobileHCI 2016, SWIM and WatchMouse) in which we are working towards achieving/supporting effective single-handed interaction for mobile and wearable computing. The work focusses on novel interaction techniques that have not been explored thoroughly for interaction purposes, using widely available sensors such as IMUs, optical sensors and radar (e.g. Google Soli, soon to be available).

Biography:

Hui-Shyong Yeo is a second year PhD student in SACHI, University of St Andrews, advised by Prof. Aaron Quigley. Before that he worked as a researcher in KAIST for one year. Yeo has a wide range of interests within the field of HCI, including topics such as wearables, gestures, mixed reality and text entry. Currently he is focusing on single-handed interaction for his dissertation topic. He has published in conferences such as CHI, UIST, MobileHCI (honourable mention), SIGGRAPH and journals such as MTAP and JNCA.

Visit his homepage http://hsyeo.com or twitter @hci_research

Demo of Google Soli Radar and Single Handed Smartwatch interaction(24 November, 2016)

Speaker: Hui-Shyong Yeo

This demo session will present the Google Soli radar and single-handed smartwatch interaction systems.

Biography:

Hui-Shyong Yeo is a second year PhD student in SACHI, University of St Andrews, advised by Prof. Aaron Quigley. Before that he worked as a researcher in KAIST for one year. Yeo has a wide range of interests within the field of HCI, including topics such as wearables, gestures, mixed reality and text entry. Currently he is focusing on single-handed interaction for his dissertation topic. He has published in conferences such as CHI, UIST, MobileHCI (honourable mention), SIGGRAPH and journals such as MTAP and JNCA.

Visit his homepage http://hsyeo.com or twitter @hci_research

IDA coffee breaks(22 November, 2016)

Speaker: everyone

A chance to catch up informally with members of the IDA section in the Computing Science Common Room.

Human Computation for Entity-Centric Information Access(21 November, 2016)

Speaker: Dr. Gianluca Demartini

Human Computation is a novel approach used to obtain manual data processing at scale by means of crowdsourcing. In this talk we will start by introducing the dynamics of crowdsourcing platforms and provide examples of their use to build hybrid human-machine information systems. We will then present ZenCrowd: a hybrid system for entity linking and data integration problems over linked data, showing how the use of human intelligence at scale in combination with machine-based algorithms outperforms traditional systems. In this context, we will discuss efficiency and effectiveness challenges of micro-task crowdsourcing platforms, including spam, quality control, and job scheduling in crowdsourcing.

IDA coffee breaks(15 November, 2016)

Speaker: everyone

A chance to catch up informally with members of the IDA section in the Computing Science Common Room.

Control Theoretical Models of Pointing(11 November, 2016)

Speaker: Rod Murray-Smith

I will present an empirical comparison of four models from manual control theory on their ability to model targeting behaviour by human users using a mouse: McRuer's Crossover, Costello's Surge, second-order lag (2OL), and the Bang-bang model. Such dynamic models are generative, estimating not only movement time, but also pointer position, velocity, and acceleration on a moment-to-moment basis. We describe an experimental framework for acquiring pointing actions and automatically fitting the parameters of mathematical models to the empirical data. We present the use of time-series, phase space and Hooke plot visualisations of the experimental data, to gain insight into human pointing dynamics. We find that the identified control models can generate a range of dynamic behaviours that captures aspects of human pointing behaviour to varying degrees. Conditions with a low index of difficulty (ID) showed poorer fit because their unconstrained nature leads naturally to more dynamic variability. We report on characteristics of human surge behaviour in pointing.

We report differences in a number of controller performance measures, including Overshoot, Settling time, Peak time, and Rise time. We describe trade-offs among the models. We conclude that control theory offers a promising complement to Fitts' law based approaches in HCI, with models providing representations and predictions of human pointing dynamics which can improve our understanding of pointing and inform design.
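
To give a feel for what "generative" means here, the sketch below simulates one of the named models, the second-order lag (2OL), as a spring-damper pulling the pointer towards a target, and reports overshoot. The gains, time step and simulation length are illustrative choices, not parameters fitted in the study.

    import numpy as np

    def simulate_2ol(target, k=30.0, d=9.0, dt=0.01, steps=200):
        x, v = 0.0, 0.0                      # pointer position and velocity
        positions = []
        for _ in range(steps):
            a = k * (target - x) - d * v     # spring towards the target, damping on velocity
            v += a * dt
            x += v * dt
            positions.append(x)
        return np.array(positions)

    trajectory = simulate_2ol(target=1.0)
    print("final position:", round(float(trajectory[-1]), 3))
    print("peak overshoot:", round(float(trajectory.max()) - 1.0, 3))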

IDA coffee breaks(08 November, 2016)

Speaker: everyone

A chance to catch up informally with members of the IDA section in the Computing Science Common Room.

Analysis of the Cost and Benefits of Search Interactions(07 November, 2016)

Speaker: Dr. Leif Azzopardi

Interactive Information Retrieval (IR) systems often provide various features and functions, such as query suggestions and relevance feedback, that a user may or may not decide to use. The decision to take such an option has associated costs and may lead to some benefit. Thus, a savvy user would take decisions that maximises their net benefit. In this talk, we will go through a number of formal models which examine the costs and benefits of various decisions that users, implicitly or explicitly, make when searching. We consider and analyse the following scenarios: (i) how long a user's query should be? (ii) should the user pose a specific or vague query? (iii) should the user take a suggestion or re-formulate? (iv) when should a user employ relevance feedback? and (v) when would the "find similar" functionality be worthwhile to the user? To this end, we analyse a series of cost-benefit models exploring a variety of parameters that affect the decisions at play. Through the analyses, we are able to draw a number of insights into different decisions, provide explanations for observed behaviours and generate numerous testable hypotheses. This work not only serves as a basis for future empirical work, but also as a template for developing other cost-benefit models involving human-computer interaction.

This talk is based on the recent ICTIR 2016 paper with Guido Zuccon: http://dl.acm.org/citation.cfm?id=2970412

IDA coffee breaks(01 November, 2016)

Speaker: everyone

A chance to catch up informally with members of the IDA section in the Computing Science Common Room.

I'm an information scientist - let me in!(31 October, 2016)

Speaker: Martin White

For the last 46 years Martin has been a professional information scientist, though often in secret. Since founding Intranet Focus Ltd he has found that awareness among his clients of research into topics such as information behaviour, information quality and information seeking is close to zero. This is especially true in information retrieval. In his presentation Martin will consider why this is the case, what the impact might be and what (if anything) should and could be done to change this situation.

IDA coffee breaks(25 October, 2016)

Speaker: everyone

A chance to catch up informally with members of the IDA section in the Computing Science Common Room.

The problem of quantification in Information Retrieval and on Social Networks.(17 October, 2016)

Speaker: Gianni Amati

There is growing interest in knowing how fast information spreads on social networks, how many unique users are participating in an event, and what the leading opinion polarity in a stream is. Quantifying distinct elements in a flow of information is thus becoming a crucial problem in many real-time information retrieval or streaming applications. We discuss the state of the art of quantification and show that many problems can be interpreted within a common framework. We introduce a new probabilistic framework for quantification and show, as examples, how to count opinions in a stream and how to compute the degrees of separation of a network.
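
For readers new to the problem, counting distinct elements in a stream is usually done probabilistically in constant memory. The sketch below is a crude LogLog-style estimator, included only to make that idea concrete; it is not the probabilistic framework introduced in the talk.

    import hashlib

    def _rank(value, bits=32):
        # position of the first 1-bit from the left of a fixed-width word (1-indexed)
        return bits - value.bit_length() + 1 if value else bits

    def estimate_distinct(stream, m=64):
        registers = [0] * m
        for item in stream:
            h = int(hashlib.md5(str(item).encode()).hexdigest(), 16)
            bucket = h % m                              # low bits pick the register
            registers[bucket] = max(registers[bucket], _rank((h // m) & 0xFFFFFFFF))
        # LogLog-style combination; 0.397 is the usual asymptotic bias-correction constant
        return 0.397 * m * 2 ** (sum(registers) / m)

    stream = (f"user{i % 5000}" for i in range(100000))   # 5000 distinct users, seen repeatedly
    print("estimated distinct:", int(estimate_distinct(stream)), "(true: 5000)")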

Analytics over Parallel Multi-view Data(03 October, 2016)

Speaker: Dr. Deepak Padmanabhan

Conventional unsupervised data analytics techniques have largely focused on processing datasets of single-type data, e.g. text, ECG, sensor readings or image data. With increasing digitization, it has become common to have data objects with representations that encompass different "kinds" of information. For example, the same disease condition may be identified through EEG or fMRI data. Thus, a dataset of EEG-fMRI pairs would be considered a parallel two-view dataset. Datasets of text-image pairs (e.g., a description of a seashore, and an image of it) and text-text pairs (e.g., problem-solution text, or multi-language text from machine translation scenarios) are other common instances of multi-view data. The challenge in multi-view data analytics is to effectively leverage such parallel multi-view data to perform analytics tasks such as clustering, retrieval and anomaly detection. This talk will cover some emerging trends in processing multi-view parallel data, and different paradigms for the same. In addition to looking at the different schools of techniques, and some specific techniques from each school, this talk will also be used to present some possibilities for future work in this area.
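
One standard paradigm for parallel two-view data, sketched below with scikit-learn on synthetic data, is to project both views into a shared correlated subspace (here with CCA) and then run a conventional task such as clustering in that space. This illustrates the general idea only, not any specific method surveyed in the talk.

    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.cluster import KMeans

    rng = np.random.RandomState(0)
    latent = np.repeat([0, 1, 2], 50)                        # three hidden groups
    view_a = latent[:, None] + 0.3 * rng.randn(150, 5)       # e.g. "EEG-like" features
    view_b = 2 * latent[:, None] + 0.3 * rng.randn(150, 8)   # e.g. "fMRI-like" features

    cca = CCA(n_components=2)
    za, zb = cca.fit_transform(view_a, view_b)               # paired projections
    shared = np.hstack([za, zb])                             # shared representation
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(shared)
    print("cluster sizes:", np.bincount(labels))             # roughly 50 / 50 / 50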

 

Dr. Deepak Padmanabhan is a lecturer with the Centre for Data Sciences and Scalable Computing at Queen's University Belfast. He obtained his B.Tech in Comp. Sc. and Engg. from Cochin University (Kerala, India), followed by his M.Tech and PhD, all in computer science, from Indian Institute of Technology Madras. Prior to joining Queen's, he was a researcher at IBM Research - India. He has over 40 publications across top venues in Data Mining, NLP, Databases and Information Retrieval. He co-authored a book on Operators for Similarity Search, published by Springer in 2015. He is the author of ~15 patent applications to the USPTO, including 4 granted patents. He is a recipient of the INAE Young Engineer Award 2015, and is a Senior Member of the ACM and the IEEE. His research interests include Machine Learning, Data Mining, NLP, Databases and Information Retrieval. Email: deepaksp@acm.org  URL: http://member.acm.org/~deepaksp

Improvising minds: Improvisational interaction and cognitive engagement(29 August, 2016)

Speaker: Adam Linson

In this talk, I present my research on improvisation as a general form of adaptive expertise. My interdisciplinary approach takes music as a tractable domain for empirical studies, which I have used to ground theoretical insights from HCI, AI/robotics, psychology, and embodied cognitive science. I will discuss interconnected aspects of digital musical instrument (DMI) interface design, a musical robotic AI system, and a music psychology study of sensorimotor influences on perceptual ambiguity. I will also show how I integrate this work with an inference-based model of neural functioning, to underscore implications beyond music. On this basis, I indicate how studies of musical improvisation can shed light on a domain-general capacity: our flexible, context-sensitive responsiveness to rapidly-changing environmental conditions.

 

Recognizing manipulation actions through visual accelerometer tracking, relational histograms, and user adaptation(26 August, 2016)

Speaker: Sebastian Stein

Activity recognition research in computer vision and pervasive computing has made a remarkable trajectory from distinguishing full-body motion patterns to recognizing complex activities. Manipulation activities, such as those occurring in food preparation, are particularly challenging to recognize, as they involve many different objects, non-unique task orders and are subject to personal idiosyncrasies. Video data and data from embedded accelerometers provide complementary information, which motivates an investigation of effective methods for fusing these sensor modalities.

In this talk I present a method for multi-modal recognition of manipulation activities that combines accelerometer data and video at multiple stages of the recognition pipeline. A method for accelerometer tracking is introduced that provides, for each accelerometer-equipped object, a location estimate in the camera view by identifying a point trajectory that matches the accelerometer data well. It is argued that associating accelerometer data with locations in the video provides a key link for modelling interactions between accelerometer-equipped objects and other visual entities in the scene. Estimates of accelerometer locations and their visual displacements are used to extract two new types of features: (i) Reference Tracklet Statistics, which characterises statistical properties of an accelerometer’s visual trajectory, and (ii) RETLETS, a feature representation that encodes relative motion by using an accelerometer’s visual trajectory as a reference frame for dense tracklets. In comparison to a traditional sensor fusion approach, where features are extracted from each sensor type independently and concatenated for classification, it is shown that combining RETLETS and Reference Tracklet Statistics with those sensor-specific features performs considerably better.

Specifically addressing scenarios in which a recognition system would be primarily used by a single person (e.g., cognitive situational support), this work also investigates three methods for adapting activity models to a target user based on user-specific training data. Via randomized control trials it is shown that these methods indeed learn user idiosyncrasies.

The whole is greater than the sum of its parts: how semantic trajectories and recommendations may help tourism.(22 August, 2016)

Speaker: Dr. Chiara Renso

During the first part of this talk I will give an overview of my recent activity in the field of mobility data mining, with particular interest in the study of semantics in trajectory data and the experience with the recently concluded SEEK Marie Curie project [1].  Then I will present two highlights of tourism recommendation work based on the idea of semantic trajectories: TripBuilder [2] and GroupFinder [3].  TripBuilder is based on the analysis of enriched tourist trajectories extracted from Flickr photos to suggest itineraries constrained by a temporal budget and based on the travellers' preferences.  The GroupFinder framework recommends a group of friends with whom to enjoy a visit to a place, balancing the friendship relations of the group members with the user's individual interests in the destination location.

[1] http://www.seek-project.eu
[2] Igo Ramalho Brilhante, José Antônio Fernandes de Macêdo, Franco Maria Nardini, Raffaele Perego, Chiara Renso. On planning sightseeing tours with TripBuilder. Inf. Process. Manage. 51(2): 1-15 (2015)
[3] Igo Ramalho Brilhante, José Antônio Fernandes de Macêdo, Franco Maria Nardini, Raffaele Perego, Chiara Renso. Group Finder: An Item-Driven Group Formation Framework. MDM 2016: 8-17

Bio:

Dr. Chiara Renso holds M.Sc. and PhD degrees in Computer Science from the University of Pisa (1992, 1997). She is a permanent researcher at the ISTI Institute of CNR, Italy. Her research interests are related to spatio-temporal data mining, reasoning, data mining query languages, semantic data mining and trajectory data mining. She has been involved in several EU projects about mobility data mining. She has been the scientific coordinator of an FP7 Marie Curie project on semantic trajectory knowledge discovery called SEEK (www.seek-project.eu). She was also coordinator of a bilateral CNR-CNPQ Italy-Brazil project on mobility data mining with the Federal University of Ceará. She is author of more than 90 peer-reviewed publications. She is co-editor of the book "Mobility Data: Modelling, Management, and Understanding" published by Cambridge University Press in 2013; co-editor of a special issue of the journal Knowledge and Information Systems (KAIS) on context-aware data mining; and co-editor of a special issue of the International Journal of Knowledge and Systems Science (IJKSS) on modelling tools for extracting useful knowledge and decision making. She has been co-chair of three editions of the Workshop on Semantic Aspects of Data Mining in conjunction with the IEEE ICDM conference. She is a regular reviewer for ACM CIKM, ACM KDD, ACM SIGSPATIAL and many journals on these topics.

Skin Reading: Encoding Text in a 6-Channel Haptic Display(11 August, 2016)

Speaker: Granit Luzhnica

In this talk I will present a study we performed to investigate the communication of natural language messages using a wearable haptic display. Our experiments investigated both the design of the haptic display and the methods for communication that use it. First, three wearable configurations are proposed based on haptic perception fundamentals and evaluated in a first study. To encode symbols, we use an overlapping spatiotemporal stimulation (OST) method, which distributes stimuli spatially and temporally with a minimal gap. Second, we propose an encoding for the entire English alphabet and a training method for letters, words and phrases. A second study investigates communication accuracy: it puts four participants through five sessions, for an overall training time of approximately 5 hours per participant.

Casual Interaction for Smartwatch Feedback and Communication(01 July, 2016)

Speaker: Henning Pohl

Casual interaction strives to enable people to scale back their engagement with interactive systems, while retaining the level of control they desire. In this talk, we will take a look at two recent developments in casual interaction systems. The first project to be presented is an indirect visual feedback system for smartwatches. Embedding LEDs into the back of a watch case enabled us to create a form of feedback that is less disruptive than vibration feedback and blends in with the body. We investigated how well such subtle feedback works in an in-the-wild study, which we will take a closer look at in this talk. Where the first project is a more casual form of feedback, the second project tries to support a more casual form of communication: emoji. Over the last years these characters have become more and more popular, yet entering them can take quite some effort. We have developed a novel emoji keyboard around zooming interaction that makes it easier and faster to enter emoji.

Predicting Ad Quality for Native Advertisements(06 June, 2016)

Speaker: Dr Ke Zhou,

Native advertising is a specific form of online advertising where ads replicate the look-and-feel of their serving platform. In such context, providing a good user experience with the served ads is crucial to ensure long-term user engagement. 

 

In this talk, I will explore the notion of ad quality, namely the effectiveness of advertising from a user experience perspective. I will cover both the pre-click and post-click perspectives on predicting quality for native ads. With respect to pre-click ad quality, we design a learning framework to detect offensive native ads, showing that, to quantify ad quality, offensive user feedback rates are more reliable than the commonly used click-through rate metrics. We translate a set of user preference criteria into a set of ad quality features that we extract from the ad text, image and advertiser, and then use them to train a model able to identify offensive ads. In terms of post-click quality, we use ad landing page dwell time as our proxy and exploit various ad landing page features to predict landing pages with high dwell time.
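
As a minimal illustration of the kind of pre-click learning framework described (and not the production model), the sketch below trains a text-only classifier to flag ads likely to attract offensive feedback. The tiny example ads, labels and plain tf-idf features are stand-ins for the richer text, image and advertiser features mentioned in the talk.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    ad_texts = [
        "lose 20 pounds overnight with this one weird trick",
        "doctors hate him shocking miracle cure",
        "compare fixed-rate savings accounts from trusted banks",
        "book direct flights to Glasgow from 39 pounds",
    ]
    offensive = [1, 1, 0, 0]   # 1 = high offensive-feedback rate, 0 = acceptable

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(ad_texts, offensive)
    print(model.predict_proba(["one weird trick to shrink belly fat overnight"])[:, 1])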

Efficient Web Search Diversification via Approximate Graph Coverage(25 April, 2016)

Speaker: Carsten Eickhoff


In the case of general or ambiguous Web search queries, retrieval systems rely on result set diversification techniques in order to ensure an adequate coverage of underlying topics such that the average user will find at least one of the returned documents useful. Previous attempts at result set diversification employed computationally expensive analyses of document content and query intent. In this paper, we instead rely on the inherent structure of the Web graph. Drawing from the locally dense distribution of similar topics across the hyperlink graph, we cast the diversification problem as optimizing coverage of the Web graph. In order to reduce the computational burden, we rely on modern sketching techniques to obtain highly efficient yet accurate approximate solutions. Our experiments on a snapshot of Wikipedia as well as the ClueWeb'12 dataset show ranking performance and execution times competitive with the state of the art at dramatically reduced memory requirements.
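
The core idea of casting diversification as covering the Web graph can be illustrated with a simple greedy maximum-coverage selection, sketched below. The candidate documents and their covered neighbourhoods are invented, and the paper's sketching-based approximation of the set sizes is omitted. Greedy selection is the usual choice because maximum coverage is NP-hard, while the greedy algorithm guarantees a (1 - 1/e) approximation.

    def diversify(candidates, k):
        # candidates: dict doc_id -> set of hyperlink-graph nodes that the document "covers"
        selected, covered = [], set()
        for _ in range(k):
            doc = max(candidates, key=lambda d: len(candidates[d] - covered))
            if not candidates[doc] - covered:
                break                            # nothing new left to cover
            selected.append(doc)
            covered |= candidates[doc]
            candidates = {d: s for d, s in candidates.items() if d != doc}
        return selected

    candidates = {
        "doc1": {1, 2, 3, 4},
        "doc2": {3, 4, 5},        # largely redundant with doc1
        "doc3": {6, 7},           # a different topic neighbourhood
    }
    print(diversify(candidates, k=2))   # -> ['doc1', 'doc3']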
 

Searching for better health: challenges and implications for IR(04 April, 2016)

Speaker: Dr. Guido Zuccon
A talk about why IR researchers should care about health search

In this talk I will discuss research problems and possible solutions related to helping the general public searching for health information online. I will show that although in the first instance this appears to be a domain-specific search task, research problems associated with this task have more general implications for IR and offer opportunities to develop advances that are applicable to the whole research field. In particular, in the talk I will focus on two aspects related to evaluation: (1) the inclusion of multiple dimensions of relevance in the evaluation of IR systems and (2) the modelling of query variations within the evaluation framework.

A Comparison of Primary and Secondary Relevance Judgements for Real-Life Topics(07 March, 2016)

Speaker: Dr Martin Halvey

The notion of relevance is fundamental to the field of Information Retrieval. Within the field a generally accepted conception of relevance as inherently subjective has emerged, with an individual's assessment of relevance influenced by numerous contextual factors. In this talk I present a user study that examines in detail the differences between primary and secondary assessors on a set of "real-world" topics which were gathered specifically for the work. By gathering topics which are representative of the staff and students at a major university, at a particular point in time, we aim to explore differences between primary and secondary relevance judgements for real-life search tasks. Findings suggest that while secondary assessors may find the assessment task challenging in various ways (they generally possess less interest and knowledge in secondary topics and take longer to assess documents), agreement between primary and secondary assessors is high.  

Steps towards Profile-Based Web Site Search and Navigation(29 February, 2016)

Speaker: Prof. Udo Kruschwitz

Web search in all its flavours has been the focus of research for decades with thousands of highly paid researchers competing for fame. Web site search has however attracted much less attention but is equally challenging. In fact, what makes site search (as well as intranet and enterprise search) even more interesting is that it shares some common problems with general Web search but also offers a good number of additional problems that need to be addressed in order to make search on a Web site no longer a waste of time. At previous visits to Glasgow I talked about turning the log files collected on a Web site into usable, adaptive data structures that can be used in search applications (and which we call user or cohort profiles). This time I will focus on applying these profiles to a navigation scenario and illustrate how the automatically acquired profiles provide a practical use case for combining natural language processing and information retrieval techniques (as that is what we really do at Essex).

Sentiment and Preference Guided Social Recommendation.(22 February, 2016)

Speaker: Yoke Yie Chen

Social recommender systems harness knowledge from social media to generate recommendations. Previous work in social recommender systems has used social knowledge such as social tags, social relationships (social networks) and microblogs.  In this talk, I will focus on two knowledge sources for product recommendation: product reviews and user purchase trails. In particular, I will present how we exploit the sentiment expressed in product reviews and the user preferences that are implicitly contained in user purchase trails as the basis for recommendation.

Recent Advances in Search Result Diversification for the Web and Social Media(17 February, 2016)

Speaker: Ismail Sengor Altingovde
I will focus on the web search result diversification problem and present our novel contributions in the field.

In this talk, I will start with a short potpourri of our most recent research, the emphasis being on topics related to web search engines and the social Web. Then, I will focus on the web search result diversification problem and present our novel contributions in three directions. Firstly, I will present how the normalization of query relevance scores can boost the performance of the state-of-the-art explicit diversification strategies. Secondly, I will introduce a set of new explicit diversification strategies based on score(-based) and rank(-based) aggregation methods. As a third contribution, I will present how query performance prediction (QPP) can be utilized to weight query aspects during diversification. Finally, I will discuss how these diversification methods perform in the context of Tweet search, and how we improve them using word embeddings.

Practical and theoretical problems on the frontiers of multilingual natural language processing(16 February, 2016)

Speaker: Dr Adam Lopez

Multilingual natural language processing (NLP) has been enormously successful over the last decade, highlighted by offerings like Google translate. What is left to do? I'll focus on two quite different, very basic problems that we don't yet know how to solve. The first is motivated by the development of new, massively-parallel hardware architectures like GPUs, which are especially tantalizing for computation-bound NLP problems, and may open up new possibilities for the application and scale of NLP. The problem is that classical NLP algorithms are inherently sequential, so harnessing the power of such processors requires complete rethinking the fundamentals of the field. The second is motivated by the fact that NLP systems often fail to correctly understand, translate, extract, or generate meaning. We're poised to make serious progress in this area using the reliable method of applying machine learning to large datasets—in this case, large quantities of natural language text annotated with explicit meaning representations, which take the form of directed acyclic graphs. The problem is that probabilities on graphs are surprisingly hard to define. I'll discuss work on both of these fronts.

Information retrieval challenges in conducting systematic reviews(08 February, 2016)

Speaker: Julie Glanville

Systematic review (SR) is a research method that seeks to provide an assessment of the state of the research evidence on a specific question. Systematic reviews (SRs) aim to be objective, transparent and replicable, and seek to minimise bias by means of extensive searches.

 

The challenges of extensive searching will be summarised. As software tools and internet interconnectivity increase, we are seeing increasing use of a range of tools within the SR process (not only for information retrieval). This presentation will cover some of the tools we are currently using within the Cochrane SR community and in UK SRs, and the challenges which remain for efficient information retrieval. The presentation will also describe other areas where software, such as text mining and machine learning tools, has the potential to contribute to the SR process.

Learning to Hash for Large Scale Image Retrieval(14 December, 2015)

Speaker: Sean Moran

In this talk I will introduce two novel data-driven models that significantly improve the retrieval effectiveness of locality sensitive hashing (LSH), a popular randomised algorithm for nearest neighbour search that permits relevant data-points to be retrieved in constant time, independent of the database size.

To cut down the search space LSH generates similar binary hashcodes for similar data-points and uses the hashcodes to index database data-points into the buckets of a set of hashtables. At query time only those data-points that collide in the same hashtable buckets as the query are returned as candidate nearest neighbours. LSH has been successfully used for event detection in large scale streaming data such as Twitter [1] and for detecting 100,000 object classes on a single CPU [2].

 

The generation of similarity-preserving binary hashcodes comprises two steps: projection of the data-points onto the normal vectors of a set of hyperplanes partitioning the input feature space, followed by a quantisation step that uses a single threshold to binarise the resulting projections to obtain the hashcodes. In this talk I will argue that the retrieval effectiveness of LSH can be significantly improved by learning the thresholds and hyperplanes based on the distribution of the input data.
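
For readers who have not met LSH before, the minimal numpy sketch below shows the two steps just described for sign-random-projection hashing (projection onto random hyperplane normals, then binarisation at a single threshold of zero) and the constant-time bucket lookup at query time. The learned thresholds and hyperplanes that are the subject of this talk are not shown.

    import numpy as np
    from collections import defaultdict

    rng = np.random.RandomState(0)
    data = rng.randn(1000, 64)                 # database of 64-d feature vectors
    hyperplanes = rng.randn(64, 16)            # normals of 16 random hyperplanes

    def hashcode(x):
        # step 1: project onto the hyperplane normals; step 2: binarise at threshold 0
        return tuple((x @ hyperplanes > 0).astype(int))

    table = defaultdict(list)                  # hashtable: code -> bucket of item ids
    for i, x in enumerate(data):
        table[hashcode(x)].append(i)

    query = data[42] + 0.001 * rng.randn(64)   # a near neighbour of item 42
    candidates = table[hashcode(query)]        # only this bucket is searched
    print(42 in candidates, "bucket size:", len(candidates))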

 

In the first part of my talk I will provide a high level introduction of LSH. I will then argue that LSH makes a set of limiting assumptions arising from its data-independence that hamper its retrieval effectiveness. This motivates the second and third parts of my talk in which I introduce two new models that address these limiting assumptions. 

 

Firstly, I will discuss a scalar quantisation model that can learn multiple thresholds per LSH hyperplane using a novel semi-supervised objective function [3]. Optimising this objective function results in thresholds that reduce information loss inherent in converting the real-valued projections to binary. Secondly, I will introduce a new two-step iterative model for learning the hashing hyperplanes [4]. In the first step the hashcodes of training data-points are regularised over an adjacency graph which encourages similar data-points to be assigned similar hashcodes. In the second step a set of binary classifiers are learnt so as to separate opposing bits (0,1) with maximum margin. Repeating both steps iteratively encourages the hyperplanes to evolve into positions that provide a much better bucketing of the input feature space compared to LSH.

 

For both algorithms I will present a set of query-by-example image retrieval results on standard image collections, demonstrating significantly improved retrieval effectiveness versus state-of-the-art hash functions, in addition to a set of interesting and previously unexpected results.

[1] Sasa Petrovic, Miles Osborne and Victor Lavrenko, Streaming First Story Detection with Application to Twitter, In NAACL'10.

 

[2] Thomas Dean, Mark Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan,  and Jay Yagnik, Fast, Accurate Detection of 100,000 Object Classes on a Single Machine, In CVPR'13.

 

[3] Sean Moran, Victor Lavrenko and Miles Osborne. Neighbourhood Preserving Quantisation for LSH, In SIGIR'13.

 

[4] Sean Moran and Victor Lavrenko. Graph Regularised Hashing. In ECIR'15.

 

 

 

An electroencephalograpy (EEG)-based real-time feedback training system for cognitive brain-machine interface (cBMI)(04 November, 2015)

Speaker: Kyuwan Choi

In this presentation, I will present a new cognitive brain-machine interface (cBMI) using cortical activities in the prefrontal cortex. In the cBMI system, subjects perform directional imagination, which is more intuitive than the existing motor imagery. The subjects freely control a bar on the monitor through information about direction extracted from the prefrontal cortex, and the subject’s prefrontal cortex is activated by giving them the movement of the bar as feedback. Furthermore, I will introduce an EEG-based wheelchair system using the cBMI concept. Using the cBMI, it is possible to build a more intuitive BMI system. It could help improve the cognitive function of healthy people and, by consistently activating the prefrontal cortex, help activate the regions around damaged areas in patients with prefrontal damage, such as patients with dementia or autism.

Adapting biomechanical simulation for physical ergonomics evaluation of new input methods(28 October, 2015)

Speaker: Myroslav Bachynskyi

Recent advances in sensor technology and computer vision have allowed new computer input methods to emerge rapidly. These methods are often considered more intuitive and easier to learn compared to the conventional keyboard or mouse; however, most of them are poorly assessed with respect to their physical ergonomics and the health impact of their usage. The main reasons for this are the large input spaces provided by these interfaces, the absence of a reliable, cheap and easy-to-apply physical ergonomics assessment method, and the absence of biomechanics expertise among user interface designers. The goal of my research is to develop a physical ergonomics assessment method which provides support to interface designers at all stages of the design process, at low cost and without specialized knowledge. Our approach is to extend biomechanical simulation tools developed for medical and rehabilitation purposes and adapt them to the Human-Computer Interaction setting. The talk gives an overview of problems related to the development of the method and shows answers to some of the fundamental questions.

Detecting Swipe Errors on Touchscreens using Grip Modulation(22 October, 2015)

Speaker: Faizuddin Mohd Noor

We show that when users make errors on mobile devices, they make immediate and distinct physical responses that can be observed with standard sensors. We used three standard cognitive tasks (Flanker, Stroop and SART) to induce errors from 20 participants. Using simple low-resolution capacitive touch sensors placed around a standard device and a built-in accelerometer, we demonstrate that errors can be predicted using micro-adjustments to hand grip and movement in the period after swiping the touchscreen. In a per-user model, our technique predicted error with a mean AUC of 0.71 in Flanker and 0.60 in Stroop and SART using hand grip, while with the accelerometer the mean AUC in all tasks was ≥ 0.90. Using a pooled, non-user-specific model, our technique achieved a mean AUC of 0.75 in Flanker and 0.80 in Stroop and SART using hand grip, and an AUC for all tasks > 0.90 for the accelerometer. When combining these features we achieved an AUC of 0.96 (with false accept and reject rates both below 10%). These results suggest that hand grip and movement provide strong and very low latency evidence for mistakes, and could be a valuable component in interaction error detection and correction systems.

A conceptual model of the future of input devices(14 October, 2015)

Speaker: John Williamson

Turning sensor engineering into advances in human-computer interaction is slow, ad hoc and unsystematic. I'll discuss a fundamental approach to input device engineering, and illustrate how machine learning could have the exponentially-accelerating impact in HCI that it has had in other fields.

[caveat: This is a proposal: there are only words, not results!]

Haptic Gaze Interaction - EVENT CANCELLED(05 October, 2015)

Speaker: Poika Isokoski

Eye trackers that can be (somewhat) comfortably worn for long periods are now available. Thus, computing systems can track the gaze vector and it can be used in interactions with mobile and embedded computing systems together with other input and output modalities. However, interaction techniques for these activities are largely missing. Furthermore, it is unclear how feedback from eye movements should be given to best support user's goals. This talk will give an overview of the results of our recent work in exploring haptic feedback on eye movements and building multimodal interaction techniques that utilize the gaze data. I will also discuss some possible future directions in this line of research.

Challenges in Metabolomics, and some Machine Learning Solutions(30 September, 2015)

Speaker: Simon Rogers

Large scale measurement of the metabolites present in an organism is very challenging, but potentially highly rewarding in the understanding of disease and the development of drugs. In this talk I will describe some of the challenges in analysis of data from Liquid Chromatography - Mass Spectrometry, one of the most popular platforms for metabolomics. I will present Statistical Machine Learning solutions to several of these challenges, including the alignment of spectra across experimental runs, the identification of metabolites within the spectra, and finish with some recent work on using text processing techniques to discover conserved metabolite substructures.

Engaging with Music Retrieval(09 September, 2015)

Speaker: Daniel Boland

Music collections available to listeners have grown at a dramatic pace, now spanning tens of millions of tracks. Interacting with a music retrieval system can thus be overwhelming, with users offered ‘too-much-choice’. The level of engagement required for such retrieval interactions can be inappropriate, such as in mobile or multitasking contexts. Using listening histories and work from music psychology, a set of engagement-stratified profiles of listening behaviour are developed. The challenge of designing music retrieval for different levels of user engagement is explored with a system allowing users to denote their level of engagement and thus the specificity of their music queries. The resulting interaction has since been adopted as a component in a commercial music system.

Building Effective and Efficient Information Retrieval Systems(26 June, 2015)

Speaker: Jimmy Lin

Machine learning has become the tool of choice for tackling challenges in a variety of domains, including information retrieval. However, most approaches focus exclusively on effectiveness---that is, the quality of system output. Yet, real-world production systems need to search billions of documents in tens of milliseconds, which means that techniques also need to be efficient (i.e., fast).  In this talk, I will discuss two approaches to building more effective and efficient information retrieval systems. The first is to directly learn ranking functions that are inherently more efficient---a thread of research dubbed "learning to efficiently rank". The second is through architectural optimizations that take advantage of modern processor architectures---by paying attention to low-level details such as cache misses and branch mispredicts. The combination of both approaches, in essence, allow us to "have our cake and eat it too" in building systems that are both fast and good.

Deep non-parametric learning with Gaussian processes(10 June, 2015)

Speaker: Andreas Damianou
http://staffwww.dcs.sheffield.ac.uk/people/A.Damianou/research/index.html#DeepGPs

This talk will discuss deep Gaussian process models, a recent approach to combining deep probabilistic structures with Bayesian nonparametrics. The obtained deep belief networks are constructed using continuous variables connected with Gaussian process mappings; therefore, the methodology used for training and inference deviates from traditional deep learning paradigms. The first part of the talk will thus outline the associated computational tools, revolving around variational inference. In the second part, we will discuss models obtained as special cases of the deep Gaussian process, namely dynamical / multi-view / dimensionality reduction models and nonparametric autoencoders. The above concepts and algorithms will be demonstrated with examples from computer vision (e.g. high-dimensional video, images) and robotics (motion capture data, humanoid robotics).
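
To make the compositional structure concrete, the toy numpy sketch below draws a sample from a two-layer deep GP prior by feeding one GP sample through another. The kernels and lengthscales are arbitrary, and the variational training and inference discussed in the talk are not shown.

    import numpy as np

    def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
        d = a[:, None] - b[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    def sample_gp(inputs, lengthscale, rng):
        K = rbf_kernel(inputs, inputs, lengthscale) + 1e-6 * np.eye(len(inputs))
        return rng.multivariate_normal(np.zeros(len(inputs)), K)

    rng = np.random.RandomState(1)
    x = np.linspace(-3, 3, 200)
    hidden = sample_gp(x, lengthscale=1.0, rng=rng)       # layer 1: a function of the inputs
    output = sample_gp(hidden, lengthscale=0.5, rng=rng)  # layer 2: a function of layer 1
    print(output[:5])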

Intermittent Control in Man and Machine(30 April, 2015)

Speaker: Henrik Gollee

An intermittent controller generates a sequence of (continuous-time) parametrised trajectories whose parameters are adjusted intermittently, based on continuous observation. This concept is related to "ballistic" control and differs from i) discrete-time control in that the control is not constant between samples, and ii) continuous-time control in that the trajectories are reset intermittently.  The Intermittent Control paradigm evolved separately in the physiological and engineering literature. The talk will give details on the experimental verification of intermittency in biological systems and its applications in engineering.

Advantages of intermittent control compared to the continuous paradigm in the context of adaptation and learning will be discussed.

Get A Grip: Predicting User Identity From Back-of-Device Sensing(19 March, 2015)

Speaker: Mohammad Faizuddin Md Noor

We demonstrate that users can be identified using back-of-device handgrip changes during the course of interaction with a mobile phone, using simple, low-resolution capacitive touch sensors placed around a standard device. As a baseline, we replicated the front-of-screen experiments of Touchalytics and compared them with our results. We show that classifiers trained using back-of-device sensing could match or exceed the performance of classifiers trained using the Touchalytics approach. Our technique achieved mean AUC, false accept rate and false reject rate of 0.9481, 3.52% and 20.66% for a vertical scrolling reading task and 0.9974, 0.85% and 2.62% for a horizontal swiping game task. These results suggest that handgrip provides substantial evidence of user identity, and can be a valuable component of continuous authentication systems.

Towards Effective Non-Invasive Brain-Computer Interfaces Dedicated to Ambulatory Applications (19 March, 2015)

Speaker: Matthieu Duvinage

Disabilities affecting mobility, in particular, often lead to exacerbated isolation and thus fewer communication opportunities, resulting in a limited participation in social life. Additionally, as costs for the health-care system can be huge, rehabilitation-related devices and lower-limb prostheses (or orthoses) have been intensively studied so far. However, although many devices are now available, they rarely integrate the direct will of the patient. Indeed, they basically use motion sensors or the residual muscle activities to track the next move.

Therefore, to integrate a more direct control from the patient, Brain-Computer Interfaces (BCIs) are here proposed and studied under ambulatory conditions. Basically, a BCI allows you to control any electric device without the need to activate muscles. In this work, the conversion of brain signals into a prosthesis kinematic control is studied following two approaches. First, the subject transmits his desired walking speed to the BCI. Then, this high-level command is converted into a kinematics signal thanks to a Central Pattern Generator (CPG)-based gait model, which is able to produce automatic gait patterns. Our work thus focuses on how BCIs behave in ambulatory conditions. The second strategy is based on the assumption that the brain is continuously controlling the lower limb. Thus, a direct interpretation, i.e. decoding, of the brain signals is performed. Here, our work consists in determining which part of the brain signals can be used.

Gait analysis from a single ear-worn sensor(17 March, 2015)

Speaker: Delaram Jarchi

Objective assessment of detailed gait patterns is important for clinical applications. One common approach to clinical gait analysis is to use multiple optical or inertial sensors affixed to the patient's body for detailed bio-motion and gait analysis. The complexity of sensor placement and issues related to consistent sensor placement have limited these methods to dedicated laboratory settings, requiring the support of a highly trained technical team. The use of a single sensor for gait assessment has many advantages, particularly in terms of patient compliance and the possibility of remote monitoring of patients in the home environment. In this talk we look into the assessment of a single ear-worn sensor (e-AR sensor) for gait analysis, by developing signal processing techniques and using a number of reference platforms inside and outside the gait laboratory. Results are presented for two clinical applications: post-surgical follow-up and rehabilitation of orthopaedic patients, and investigation of gait changes in Parkinson's Disease (PD) patients.

Imaging without cameras(05 March, 2015)

Speaker: Matthew Edgar

Conventional cameras rely upon a pixelated sensor to provide spatial resolution. An alternative approach replaces the sensor with a pixelated transmission mask encoded with a series of binary patterns. Combining knowledge of the series of patterns and the associated filtered intensities, measured by single-pixel detectors, allows an image to be deduced through data inversion. At Glasgow we have been extending the concept of a `single-pixel camera' to provide continuous real-time video in excess of 10 Hz, at non-visible wavelengths, using efficient computer algorithms. We have so far demonstrated some applications for our camera such as imaging through smoke, through tinted screens, and detecting gas leaks, whilst performing sub-Nyquist sampling. We are currently investigating the most effective image processing strategies and basis scanning procedures for increasing the image resolution and frame rates for single-pixel video systems.
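A toy sketch of the underlying inversion step, with assumed sizes: each binary mask gives one single-pixel measurement of y = P x, and with enough patterns the scene can be recovered by least squares. Compressive, sub-Nyquist variants would use fewer patterns together with a sparsity prior; none of this is the Glasgow implementation itself.

import numpy as np

rng = np.random.default_rng(0)
n = 16 * 16                               # a small 16x16 "scene"
scene = rng.random(n)

m = n                                     # fully sampled here; sub-Nyquist would use m < n
patterns = rng.integers(0, 2, size=(m, n)).astype(float)   # binary transmission masks
measurements = patterns @ scene           # one detector intensity per displayed pattern

recovered, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
print("max reconstruction error:", np.abs(recovered - scene).max())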

Analysing UK Annual Report Narratives using Text Analysis and Natural Language Processing(23 February, 2015)

Speaker: Mahmoud El-Haj
In this presentation I will show the work we’ve done in our Corporate Financial Information Environment (CFIE) project.

In this presentation I will show the work we’ve done in our Corporate Financial Information Environment (CFIE) project.  The project, funded by ESRC and ICAEW, seeks to analyse UK financial narratives, their association with financial statement information, and their informativeness for investors using Computational Linguistics, heuristic Information Extraction (IE) and Natural Language Processing (NLP).  We automatically collected and analysed 14,000 UK annual reports covering the period between 2002 and 2014 for the largest UK firms listed on the London Stock Exchange. We developed software for this purpose which is available online for general use by academics.  The talk includes a demo of the software that we developed and used in our analysis: Wmatrix-import and Wmatrix.  Wmatrix-import is a web-based tool to automatically detect and parse the structure of UK annual reports; the tool provides sectioning, word frequency and readability metrics.  The output from Wmatrix-import goes as input for further NLP and corpus linguistic analysis by Wmatrix, a web-based corpus annotation and retrieval tool which currently supports the analysis of small to medium sized English corpora.
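As a small, generic illustration of the kind of readability metric such tools report (not Wmatrix-import's actual implementation), a Flesch reading ease sketch with a crude vowel-group syllable counter:

import re

def syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * len(words) / sentences - 84.6 * syl / len(words)

print(flesch_reading_ease("The group reported a strong increase in annual revenue."))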

Links:

Wmatrix-import
https://cfie.lancaster.ac.uk:8443/

Wmatrix
http://ucrel.lancs.ac.uk/wmatrix/

CFIE Project
http://ucrel.lancs.ac.uk/cfie/

Compositional Data Analysis (CoDA) approaches to distance in information retrieval (20 February, 2015)

Speaker: Dr Paul Thomas
Many techniques in information retrieval produce counts from a sample

Many techniques in information retrieval produce counts from a sample, and it is common to analyse these counts as proportions of the whole—term frequencies are a familiar example.  Proportions carry only relative information and are not free to vary independently of one another: for the proportion of one term to increase, one or more others must decrease.  These constraints are hallmarks of compositional data.  While there has long been discussion in other fields of how such data should be analysed, to our knowledge, Compositional Data Analysis (CoDA) has not been considered in IR. In this work we explore compositional data in IR through the lens of distance measures, and demonstrate that common measures, naïve to compositions, have some undesirable properties which can be avoided with composition-aware measures.  As a practical example, these measures are shown to improve clustering.
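A minimal sketch of the composition-aware idea: the Aitchison distance compares centred log-ratio transforms of term proportions, in contrast to a naive Euclidean distance on the raw proportions (toy values, purely illustrative).

import numpy as np

def clr(p, eps=1e-12):
    # centred log-ratio transform of a composition (proportions summing to 1)
    logp = np.log(p + eps)
    return logp - logp.mean()

def aitchison_distance(p, q):
    return np.linalg.norm(clr(p) - clr(q))

doc_a = np.array([0.70, 0.20, 0.10])      # term proportions in document A
doc_b = np.array([0.60, 0.30, 0.10])

print("Euclidean:", np.linalg.norm(doc_a - doc_b))
print("Aitchison:", aitchison_distance(doc_a, doc_b))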

Users versus Models: What observation tells us about effectiveness metrics(16 February, 2015)

Speaker: Dr. Paul Thomas
This work explores the link between users and models by analysing the assumptions and implications of a number of effectiveness metrics, and exploring how these relate to observable user behaviours

Retrieval system effectiveness can be measured in two quite different ways: by monitoring the behaviour of users and gathering data about the ease and accuracy with which they accomplish certain specified information-seeking tasks; or by using numeric effectiveness metrics to score system runs in reference to a set of relevance judgements.  In the second approach, the effectiveness metric is chosen in the belief that it predicts ease or accuracy.

This work explores that link, by analysing the assumptions and implications of a number of effectiveness metrics, and exploring how these relate to observable user behaviours.  Data recorded as part of a user study included user self-assessment of search task difficulty; gaze position; and click activity.  Our results show that user behaviour is influenced by a blend of many factors, including the extent to which relevant documents are encountered, the stage of the search process, and task difficulty.  These insights can be used to guide development of batch effectiveness metrics.

Towards Effective Retrieval of Spontaneous Conversational Spoken Content(08 January, 2015)

Speaker: Gareth J. F. Jones
Spoken content retrieval (SCR) has been the focus of various research initiatives for more than 20 years.

Spoken content retrieval (SCR) has been the focus of various research initiatives for more than 20 years. Early research focused on retrieval of clearly defined spoken documents, principally from the broadcast news domain. The main focus of this work was the spoken document retrieval (SDR) task at TREC 6-9, the end of which saw SDR declared a largely solved problem. However, this was soon found to be a premature conclusion, relating only to controlled recordings of professional news content and overlooking many of the potential challenges of searching more complex spoken content. Subsequent research has focused on more challenging tasks such as search of interview recordings and semi-professional internet content.  This talk will begin by reviewing early work in SDR, explaining its successes and limitations. It will then outline work exploring SCR for more challenging tasks, such as identifying relevant elements in long spoken recordings such as meetings and presentations; provide a detailed analysis of the characteristics of retrieval behaviour of spoken content elements when indexed using manual and automatic transcripts; and conclude with a summary of the challenges of delivering effective SCR for complex spoken content and initial attempts to address these challenges.

On Inverted Index Compression for Search Engine Efficiency(01 September, 2014)

Speaker: Matteo Catena

Efficient access to the inverted index data structure is a key aspect for a search engine to achieve fast response times to users’ queries. While the performance of an information retrieval (IR) system can be enhanced through the compression of its posting lists, there is little recent work in the literature that thoroughly compares and analyses the performance of modern integer compression schemes across different types of posting information (document ids, frequencies, positions). In this talk, we show the benefit of compression for different types of posting information to the space- and time-efficiency of the search engine. Comprehensive experiments have been conducted on two large, widely used document corpora and large query sets; using different modern integer compression algorithms, integrated into a modern IR system, the Terrier IR platform. While reporting the compression scheme which results in the best query response times, the presented analysis will also show the impact of compression on frequency and position posting information in Web corpora that have large volumes of anchor text.
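For context, a sketch of one classic posting-list codec of the kind compared in such studies: variable-byte coding of document-id gaps. This is illustrative only, not the Terrier implementation or one of the more modern schemes evaluated in the talk.

def vbyte_encode(numbers):
    out = bytearray()
    for n in numbers:
        while n >= 128:
            out.append(n & 0x7F)
            n >>= 7
        out.append(n | 0x80)          # high bit marks the final byte of each number
    return bytes(out)

def vbyte_decode(data):
    numbers, n, shift = [], 0, 0
    for byte in data:
        if byte & 0x80:
            numbers.append(n | ((byte & 0x7F) << shift))
            n, shift = 0, 0
        else:
            n |= byte << shift
            shift += 7
    return numbers

doc_ids = [3, 7, 21, 150, 151, 4000]
gaps = [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]   # delta-gaps
encoded = vbyte_encode(gaps)
print(len(encoded), "bytes;", vbyte_decode(encoded) == gaps)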

Interactive Visualisation of Big Music Data.(22 August, 2014)

Speaker: Beatrix Vad

Musical content can be described by a variety of features that are measured or inferred through the analysis of audio data. For a large music collection this establishes the possibility to retrieve information about its structure and underlying patterns. Dimensionality reduction techniques can be used to gain insight into such a high-dimensional dataset and to enable visualisation on two-dimensional screens. In this talk we investigate the usability of these techniques with respect to an interactive exploration interface for large music collections based on moods. A method employing Gaussian Processes to extend the visualisation with additional information about its composition is presented and evaluated.

Behavioural Biometrics for Mobile Touchscreen Devices(22 August, 2014)

Speaker: Daniel Buschek

Inference in non‐linear dynamical systems – a machine learning perspective, (08 July, 2014)

Speaker: Carl Rasmussen

Inference in discrete-time non-linear dynamical systems is often done using the Extended Kalman Filtering and Smoothing (EKF) algorithm, which provides a Gaussian approximation to the posterior based on local linearisation of the dynamics. In challenging problems, when the non-linearities are significant and the signal to noise ratio is poor, the EKF performs poorly. In this talk we will discuss an alternative algorithm developed in the machine learning community which is based on message passing in Factor Graphs and the Expectation Propagation (EP) approximation. We will show that this method provides a consistent and accurate Gaussian approximation to the posterior, enabling system identification using Expectation Maximisation (EM) even in cases where the EKF fails.

Adaptive Interaction(02 June, 2014)

Speaker: Professor Andrew Howes
A utility maximization approach to understanding human interaction with technology

This lecture describes a theoretical framework for the behavioural sciences that holds high promise for theory-driven research and design in Human-Computer Interaction. The framework is designed to tackle the adaptive, ecological, and bounded nature of human behaviour. It is designed to help scientists and practitioners reason about why people choose to behave as they do and to explain which strategies people choose in response to utility, ecology, and cognitive information processing mechanisms. A key idea is that people choose strategies so as to maximise utility given constraints. The framework is illustrated with a number of examples including pointing, multitasking, skim-reading, online purchasing, Signal-Detection Theory and diagnosis, and the influence of reputation on purchasing decisions. Importantly, these examples span from perceptual/motor coordination, through cognition to social interaction. Finally, the lecture discusses the challenging idea that people seek to find optimal strategies and also discusses the implications for behavioral investigation in HCI.

Web-scale Semantic Ranking(16 May, 2014)

Speaker: Dr Nick Craswell
Bing Ranking Techniques

Semantic ranking models score documents based on closeness in meaning to the query rather than by just matching keywords. To implement semantic ranking at Web-scale, we have designed and deployed a new multi-level ranking system that combines the best of inverted index and forward index technologies. I will describe this infrastructure, which is currently serving many millions of users, and explore several types of semantic models: translation models, syntactic pattern matching and topical matching models. Our experiments demonstrate that these semantic ranking models significantly improve relevance over our existing baseline system. This is a repeat of a WWW 2014 industry track talk.

Optimized Interleaving for Retrieval Evaluation(28 April, 2014)

Speaker: Filip Radlinski

Interleaving is an online evaluation technique for comparing the relative quality of information retrieval functions by combining their result lists and tracking clicks. A sequence of such algorithms has been proposed, each being shown to address problems in earlier algorithms. In this talk, I will formalize and generalize this process, while introducing a formal model: after identifying a set of desirable properties for interleaving, I will show that an interleaving algorithm can be obtained as the solution to an optimization problem within those constraints. This approach makes explicit the parameters of the algorithm, as well as assumptions about user behavior. Further, this approach leads to an unbiased and more efficient interleaving algorithm than any previous approach, as I will show through a novel log-based analysis of user search behaviour.
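For background, a sketch of the widely used Team-Draft interleaving baseline that this line of work generalises; the optimized algorithm in the talk is instead derived as the solution to a constrained optimization problem, so the code below is only the starting point, not the talk's method.

import random

def team_draft_interleave(a, b, length=10, seed=0):
    rng = random.Random(seed)
    shown, team = [], {}
    counts = {"A": 0, "B": 0}
    lists = {"A": list(a), "B": list(b)}
    while len(shown) < length and (lists["A"] or lists["B"]):
        # the team with fewer picks goes next; ties are broken by a coin flip
        if counts["A"] < counts["B"]:
            turn = "A"
        elif counts["B"] < counts["A"]:
            turn = "B"
        else:
            turn = rng.choice(["A", "B"])
        pool = lists[turn]
        while pool and pool[0] in team:       # skip documents already placed
            pool.pop(0)
        if not pool:
            other = "B" if turn == "A" else "A"
            turn, pool = other, lists[other]
            while pool and pool[0] in team:
                pool.pop(0)
            if not pool:
                break
        doc = pool.pop(0)
        shown.append(doc)
        team[doc] = turn
        counts[turn] += 1
    return shown, team

shown, team = team_draft_interleave(["d1", "d2", "d3", "d4"], ["d3", "d1", "d5", "d6"])
print(shown, team)   # clicks are later credited to the team that contributed the document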

Gaussian Processes for Big Data(03 April, 2014)

Speaker: Dr James Hensman

Gaussian Process (GP) models are widely applicable models of functions, and are used extensively in statistics and machine learning for regression, classification and as components of more complex models. Inference in a Gaussian process model usually costs O(n^3) operations, where n is the number of data. In the Big Data (tm) world, it would initially seem unlikely that GPs might contribute due to this computational requirement.

Parametric models have been successfully applied to Big Data (tm) using the Robbins-Monro gradient method, which allows data to be processed individually or in small batches. In this talk, I'll show how these ideas can be applied to Gaussian Processes. To do this, I'll form a variational bound on the marginal likelihood: we discuss the properties of this bound, including the conditions where we recover exact GP behaviour.

Our methods have allowed GP regression on hundreds of thousands of data points, using a standard desktop machine. For more details, see http://auai.org/uai2013/prints/papers/244.pdf .
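Not the stochastic variational method of the talk itself, but a sketch of the low-rank inducing-point idea that such approximations build on: the n x n kernel matrix is approximated through m << n inducing inputs, bringing costs down from O(n^3) towards O(n m^2). Sizes and lengthscales here are assumed.

import numpy as np

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 2000)              # n data points
z = np.linspace(-3, 3, 30)                # m inducing inputs

Knm = rbf(x, z)
Kmm = rbf(z, z) + 1e-8 * np.eye(len(z))
K_approx = Knm @ np.linalg.solve(Kmm, Knm.T)   # rank-m approximation of K_nn

K_exact = rbf(x, x)
print("max abs error:", np.abs(K_exact - K_approx).max())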

Composite retrieval of heterogeneous web search (24 March, 2014)

Speaker: Horatiu Bota

Traditional search systems generally present a ranked list of documents as answers to user queries. In aggregated search systems, results from different and increasingly diverse verticals (image, video, news, etc.) are returned to users. For instance, many such search engines return to users both images and web documents as answers to the query "flower". Aggregated search has become a very popular paradigm. In this paper, we go one step further and study a different search paradigm: composite retrieval. Rather than returning and merging results from different verticals, as is the case with aggregated search, we propose to return to users a set of "bundles", where a bundle is composed of "cohesive" results from several verticals. For example, for the query "London Olympic", one bundle per sport could be returned, each containing results extracted from news, videos, images, or Wikipedia. Composite retrieval can promote exploratory search in a way that helps users understand the diversity of results available for a specific query and decide what to explore in more detail.

 

We proposed and evaluated a variety of approaches to construct bundles that are relevant, cohesive and diverse. We also utilize both entities and terms as surrogates to represent items, and demonstrate their effectiveness in bridging the "mismatch" gap among heterogeneous sources. Compared with three baselines (traditional "general web only" ranking, federated search ranking and aggregated search), our evaluation results demonstrate significant performance improvement for a highly heterogeneous web collection.

Query Auto-completion & Composite retrieval(17 March, 2014)

Speaker: Stewart Whiting and Horatiu Bota

=Recent and Robust Query Auto-Completion by Stewart Whiting=

Query auto-completion (QAC) is a common interactive feature that assists users in formulating queries by providing completion suggestions as they type. In order for QAC to minimise the user’s cognitive and physical effort, it must: (i) suggest the user’s intended query after minimal input keystrokes, and (ii) rank the user’s intended query highly in completion suggestions. QAC must be both robust and time-sensitive – that is, able to sufficiently rank both consistently and recently popular queries in completion suggestions. Addressing this trade-off, we propose several practical completion suggestion ranking approaches, including: (i) a sliding window of query popularity evidence from the past 2-28 days, (ii) the query popularity distribution in the last N queries observed with a given prefix, and (iii) short-range query popularity prediction based on recently observed trends. Through real-time simulation experiments, we extensively investigated the parameters necessary to maximise QAC effectiveness for three openly available query log datasets with prefixes of 2-5 characters: MSN and AOL (both English), and Sogou 2008 (Chinese). Results demonstrate consistent and language-independent improvements of up to 9.2% over a non-temporal QAC baseline for all query logs with prefix lengths of 2-3 characters. Hence, this work is an important step towards more effective QAC approaches.
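A toy sketch of approach (i): rank completions for a prefix by their popularity within a sliding window over the most recent days of the query log (log contents and window length here are assumed for illustration).

from collections import Counter
from datetime import date, timedelta

log = [  # (day, query) pairs -- toy stand-in for a real query log
    (date(2013, 3, 1), "facebook"), (date(2013, 3, 1), "face transplant"),
    (date(2013, 3, 5), "facebook login"), (date(2013, 3, 6), "facebook"),
    (date(2013, 2, 1), "face transplant"),
]

def suggest(prefix, today, window_days=14, k=3):
    cutoff = today - timedelta(days=window_days)
    counts = Counter(q for day, q in log
                     if day >= cutoff and q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(suggest("fa", today=date(2013, 3, 7)))   # recent popularity only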

 

=Composite retrieval of heterogeneous web search by Horatiu Bota=

Traditional search systems generally present a ranked list of documents as answers to user queries. In aggregated search systems, results from different and increasingly diverse verticals (image, video, news, etc.) are returned to users. For instance, many such search engines return to users both images and web documents as answers to the query "flower". Aggregated search has become a very popular paradigm. In this paper, we go one step further and study a different search paradigm: composite retrieval. Rather than returning and merging results from different verticals, as is the case with aggregated search, we propose to return to users a set of "bundles", where a bundle is composed of "cohesive" results from several verticals. For example, for the query "London Olympic", one bundle per sport could be returned, each containing results extracted from news, videos, images, or Wikipedia. Composite retrieval can promote exploratory search in a way that helps users understand the diversity of results available for a specific query and decide what to explore in more detail.

 

We proposed and evaluated a variety of approaches to construct bundles that are relevant, cohesive and diverse. We also utilize both entities and terms as surrogates to represent items, and demonstrate their effectiveness in bridging the "mismatch" gap among heterogeneous sources. Compared with three baselines (traditional "general web only" ranking, federated search ranking and aggregated search), our evaluation results demonstrate significant performance improvement for a highly heterogeneous web collection.

Studying the performance of semi-structured p2p information retrieval(10 March, 2014)

Speaker: Rami Alkhawaldeh

In recent decades, retrieval systems deployed over peer-to-peer (P2P) overlay networks have been investigated as an alternative to centralised search engines. Although modern search engines provide efficient document retrieval, there are several drawbacks, including: a single point of failure, maintenance costs, privacy risks, information monopolies held by search engine companies, and difficulty retrieving hidden documents in the web (i.e. the deep web). P2P information retrieval (P2PIR) systems promise an alternative distributed system to the traditional centralised search engine architecture. Users and creators of web content in such networks have full control over what information they wish to share as well as how they share it.

 

 

 

Researchers have been tackling several challenges to build effective P2PIR systems: (i) collection (peer) representation during indexing, (ii) peer selection during search to route queries to relevant peers and (iii) final peer result merging. Semi-structured P2P networks (i.e., partially decentralised unstructured overlay networks) offer an intermediate design that minimises the weaknesses of both centralised and completely decentralised overlay networks and combines the advantages of those two topologies. So, an evaluation framework for this kind of network is necessary to compare the performance of different P2P approaches and to be a guide for developing new and more powerful approaches. In this work, we study the performance of three cluster-based semi-structured P2PIR models and explain the effect of several important design considerations and parameters on retrieval performance, as well as the robustness of these types of network.

 

4pm @ Level 4

Inside The World’s Playlist(23 February, 2014)

Speaker: Manos Tsagkias

 

We describe the algorithms behind Streamwatchr, a real-time system for analyzing the music listening behavior of people around the world. Streamwatchr collects music-related tweets, extracts artists and songs, and visualises the results in two ways: (i) currently trending songs and artists, and (ii) newly discovered songs.

 

Machine Learning for Back-of-the-Device Multitouch Typing (17 December, 2013)

Speaker: Daniel Buschek

Dublin City Search: An evolution of search to incorporate city data (24 November, 2013)

Speaker: Dr Veli Bicer, IBM Research Dublin
In such a diversity of information, answering specific information needs of city inhabitants requires holistic information retrieval techniques.

Dr Veli Bicer is a researcher at the Smarter Cities Technology Center of IBM Research in Dublin. His research interests include semantic data management, semantic search, software engineering and statistical relational learning. He obtained his PhD from the Karlsruhe Institute of Technology, Karlsruhe, Germany, and B.Sc. and M.Sc. degrees in computer engineering from Middle East Technical University, Ankara, Turkey.

IDI Seminar: Uncertain Text Entry on Mobile Devices(21 November, 2013)

Speaker: Daryl Weir

Modern mobile devices typically rely on touchscreen keyboards for input. Unfortunately, users often struggle to enter text accurately on virtual keyboards. We undertook a systematic investigation into how to best utilize probabilistic information to improve these keyboards. We incorporate a state-of-the-art touch model that can learn the tap idiosyncrasies of a particular user, and show in an evaluation that character error rate can be reduced by up to 7% over a baseline, and by up to 1.3% over a leading commercial keyboard. We furthermore investigate how users can explicitly control autocorrection via how hard they touch.
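A hedged sketch of the general decoding idea behind such keyboards (not the authors' learned touch model): a per-key touch likelihood, here an isotropic Gaussian around each key centre, is combined with a language-model prior over keys.

import numpy as np

key_centres = {"q": (0, 0), "w": (1, 0), "e": (2, 0)}   # toy one-row keyboard
prior = {"q": 0.1, "w": 0.3, "e": 0.6}                   # e.g. from a character language model

def decode(touch, sigma=0.4):
    posteriors = {}
    for key, (cx, cy) in key_centres.items():
        dist2 = (touch[0] - cx) ** 2 + (touch[1] - cy) ** 2
        likelihood = np.exp(-dist2 / (2 * sigma ** 2))   # Gaussian touch likelihood
        posteriors[key] = likelihood * prior[key]
    z = sum(posteriors.values())
    return {k: v / z for k, v in posteriors.items()}

print(decode((1.4, 0.1)))   # an ambiguous tap between "w" and "e"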

Economic Models of Search(18 November, 2013)

Speaker: Leif Azzopardi

TBA

Predicting Screen Touches From Back-of-Device Grip Changes(14 November, 2013)

Speaker: Faizuddin Mohd Noor

We demonstrate that front-of-screen targeting on mobile phones can be predicted from back-of-device grip manipulations. Using simple, low-resolution capacitive touch sensors placed around a standard phone, we outline a machine learning approach to modelling the grip modulation and inferring front-of-screen touch targets. We experimentally demonstrate that grip is a remarkably good predictor of touch, and we can predict touch position 200ms before contact with an accuracy of 18mm.
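A toy sketch of the modelling step with entirely synthetic data: regress the 2D front-of-screen touch position on a snapshot of back-of-device capacitive readings taken shortly before contact (the sensor count, regression model and noise level are assumptions, not the study's setup).

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
grip = rng.random((2000, 24))                    # 24 capacitive sensor values per sample
true_map = rng.normal(size=(24, 2))
touch_xy = grip @ true_map + 0.1 * rng.normal(size=(2000, 2))   # synthetic touch targets

Xtr, Xte, ytr, yte = train_test_split(grip, touch_xy, random_state=0)
model = Ridge(alpha=1.0).fit(Xtr, ytr)           # multi-output ridge regression
pred = model.predict(Xte)
print("mean error:", np.linalg.norm(pred - yte, axis=1).mean())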


Online Learning in Explorative Multi Period Information Retrieval(11 November, 2013)

Speaker: Marc Sloan

 

In Multi Period Information Retrieval we consider retrieval as a stochastic yet controllable process: the ranking action continuously controls the retrieval system's dynamics, and an optimal ranking policy is found in order to maximise overall user satisfaction. Different aspects of this process can be fixed, giving rise to different search scenarios. One such application is to fix the search intent and learn from a population of users over time. Here we use a multi-armed bandit algorithm and apply techniques from finance to learn optimally diverse and explorative search results for a query. We can also fix the user and dynamically model the search over multiple pages of results using relevance feedback. Likewise, we are currently investigating using the same technique for session search using a Markov Decision Process.
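A minimal sketch of the multi-armed bandit view (illustrative only, not the finance-inspired method of the talk): each arm is a candidate result ranking, the reward is a click, and UCB1 trades off exploiting the best-known ranking against exploring the others.

import math, random

random.seed(0)
click_prob = [0.10, 0.25, 0.15]          # unknown true quality of three candidate rankings
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]

for t in range(1, 5001):
    if 0 in counts:
        arm = counts.index(0)            # play every arm once first
    else:
        ucb = [rewards[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
               for i in range(3)]        # mean reward plus exploration bonus
        arm = ucb.index(max(ucb))
    reward = 1.0 if random.random() < click_prob[arm] else 0.0
    counts[arm] += 1
    rewards[arm] += reward

print("times each ranking was shown:", counts)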

Stopping Information Search: An fMRI Investigation (04 November, 2013)

Speaker: Eric Walden

Information search has become an increasingly important factor in people's use of information systems.  In both personal and workplace environments, advances in information technology and the availability of information have enabled people to perform far more search and access much more information for decision making than in the very recent past.  One consequence of this abundance of information has been an increasing need for people to develop better heuristic methods for stopping search, since information available for most decisions now overwhelms people's cognitive processing capabilities and in some cases is almost infinite.  Information search has been studied in much past research, and cognitive stopping rules have also been investigated.  The present research extends and expands on previous behavioral research by investigating brain activation during searching and stopping behavior using functional Magnetic Resonance Imaging (fMRI) techniques.  We asked subjects to search for information about consumer products and to stop when they believed they had enough information to make a subsequent decision about whether to purchase that product.  They performed these tasks while in an MRI machine.  Brain scans were taken that measured brain activity throughout task performance.  Results showed that different areas of the brain were active for searching and stopping, that different brain regions were used for several different self-reported stopping rules, that stopping is a neural correlate of inhibition, suggesting a generalized stopping mechanism in the brain, and that certain individual difference variables make no difference in brain regions active for stopping.  The findings extend our knowledge of information search, stopping behavior, and inhibition, contributing to both the information systems and neuroscience literatures.  Implications of our findings for theory and practice are discussed.

Towards Technically assisted Sensitivity Review of UK Digital Public Records(21 October, 2013)

Speaker: Tim Gollins

There are major difficulties involved in identifying sensitive information in digital public records. These difficulties, if not addressed, will, together with the challenge of managing the risks of failing to identify sensitive documents, force government departments into the precautionary closure of large swaths of digital records. Such closures will inhibit timely, open and transparent access by citizens and others in civic society. Precautionary closures will also prevent social scientists’ and contemporary historians’ access to valuable qualitative information, and limit their ability to contextualise studies of emerging large-scale quantitative data. Closely analogous problems exist in UK local authorities, the third sector, and in other countries which are covered by the same or similar legislation and regulation. In 2012, having conducted investigations and earlier research into this problem, and with new evidence of immediate need emerging from the 20-year rule transition process, The UK National Archives (TNA) highlighted this serious issue facing government departments in the UK Public Records system; the Abaca project is the response.

 

The talk will outline the role of TNA, the background to sensitivity review, the impact of the move to born digital records, the nature of the particular challenge of reviewing them for sensitivity, and the broad approach that the Abaca Project is taking.

 

 

Next Monday, 4pm at 423

Accelerating research on big datasets with Stratosphere(14 October, 2013)

Speaker: Moritz Schubotz
Stratosphere is a research project investigating new paradigms for scalable, complex analytics on massively-parallel data sets.

Stratosphere is a research project investigating new paradigms for scalable, complex analytics on massively-parallel data sets. The core concept of Stratosphere is the PACT programming model, which extends MapReduce with second-order functions like Match, CoGroup and Cross and allows researchers to describe complex analytics tasks naturally. The result is a directed acyclic graph that is optimized for parallel execution by a cost-based optimizer incorporating user code properties, and executed by the Nephele Data Flow Engine. Nephele is a massively parallel data flow engine dealing with resource management, work scheduling, communication, and fault tolerance.

In the seminar session we introduce Stratosphere and showcase how researchers can set up their working environment quickly and start doing research right away. As a proof of concept, we present how a simple Java program, parallelized and optimized by Stratosphere, obtained top results in the "exotic" math search task at NTCIR-10. While other research groups optimized index structures and data formats and waited several hours for their indices to be built on high-end hardware, we could focus on the essential program logic, use basic data types, and run the experiments on a heterogeneous desktop cluster in several minutes.

IDI Seminar: Around-device devices: utilizing space and objects around the phone(07 October, 2013)

Speaker: Henning Pohl

For many people their phones have become their main everyday tool. While phones can fulfill many different roles, they also require users to (1) make do with affordances not specialized for the specific task, and (2) closely engage with the device itself. In this talk, I propose utilizing the space and objects around the phone to offer better task affordances and to create an opportunity for casual interactions. Around-device devices are a class of interactors that do not require the user to bring special tangibles, but repurpose items already found in the user’s surroundings. I'll present a survey study, where we determined which places and objects are available to around-device devices. I'll also talk about a prototype implementation of hand interactions and object tracking for future mobiles with built-in depth sensing.

IDI Seminar: Extracting meaning from audio – a machine learning approach(03 October, 2013)

Speaker: Jan Larsen

Validity and Reliability in Cranfield-like Evaluation in Information Retrieval(23 September, 2013)

Speaker: Julián Urbano

The Cranfield paradigm to Information Retrieval evaluation has been used for half a century now as the means to compare retrieval techniques and advance the state of the art accordingly. However, this paradigm makes certain assumptions that remain a research problem in Information Retrieval and that may invalidate our experimental results.

In this talk I will approach the Cranfield paradigm as a statistical estimator of certain probability distributions that describe the final user experience. These distributions are estimated with a test collection, which actually computes system-related distributions that are assumed to be correlated with the target user-related distributions. From the point of view of validity, I will discuss the strength of that correlation and how it affects the conclusions we draw from an evaluation experiment. From the point of view of reliability, I will discuss past and current practice in measuring the reliability of test collections, and review several of them accordingly.

Exploration and contextualization: towards reusable tools for the humanities.(16 September, 2013)

Speaker: Marc Bron

The introduction of new technologies, access to large electronic cultural heritage repositories, and the availability of new information channels continues to change the way humanities researchers work and the questions they seek to answer. In this talk I will discuss how the research cycle of humanities researchers has been affected by these changes and argue for the continued development of interactive information retrieval tools to support the research practices of humanities researchers. Specifically, I will focus on two phases in the humanities research cycle: the exploration phase and the contextualization phase. In the first part of the talk I discuss work on the development and evaluation of search interfaces aimed at supporting exploration. In the second part of the talk I will focus on how information retrieval technology focused on identifying relations between concepts may be used to develop applications that support contextualization.

Quantum Language Models(19 August, 2013)

Speaker: Alessandro Sordoni

A joint analysis of both Vector Space and Language Models for IR using the mathematical framework of Quantum Theory revealed how both models allocate the space of density matrices. A density matrix is shown to be a general representational tool capable of leveraging capabilities of both VSM and LM representations, thus paving the way for a new generation of retrieval models. The new approach is called Quantum Language Modeling (QLM) and has shown its efficiency and effectiveness in modeling term dependencies for Information Retrieval.

Toward Models and Measures of Findability(21 July, 2013)

Speaker: Colin Wilkie
A summary of the work being undertaken on Findability

In this 10 minute talk, I will provide an overview of the project I am working on, which is about Findability, and review some of the existing models and measures of findability, before outlining the models that I have been working on.

How cost affects search behaviour(21 July, 2013)

Speaker: Leif Azzopardi
Find out about how microeconomic theory predicts user behaviour...

In this talk, I will run through the work I will be presenting at SIGIR on "How cost affects search behavior". The empirical analysis is motivated and underpinned using the Search Economic Theory that I proposed at SIGIR 2011. 

[SICSA DVF] Language variation and influence in social media(15 July, 2013)

Speaker: Dr. Jacob Eisenstein
Dr. Eisenstein works on statistical natural language processing, focusing on social media analysis, discourse, and latent variable models

Languages vary by speaker and situation, and change over time.  While variation and change are inhibited in written corpora such as news text, they are endemic to social media, enabling large-scale investigation of language's social and temporal dimensions. The first part of this talk will describe a method for characterizing group-level language differences, using the Sparse Additive Generative Model (SAGE). SAGE is based on a re-parametrization of the multinomial distribution that is amenable to sparsity-inducing regularization and facilitates joint modeling across many author characteristics. The second part of the talk concerns change and influence. Using a novel dataset of geotagged word counts, we induce a network of linguistic influence between cities, aggregating across thousands of words. We then explore the demographic and geographic factors that drive spread of new words between cities. This work is in collaboration with Amr Ahmed, Brendan O'Connor, Noah A. Smith, and Eric P. Xing.

Biography
Jacob Eisenstein is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He works on statistical natural language processing, focusing on social media analysis, discourse, and latent variable models. Jacob was a Postdoctoral researcher at Carnegie Mellon and the University of Illinois. He completed his Ph.D. at MIT in 2008, winning the George M. Sprowls dissertation award.

 

The Use of Correspondence Analysis in Information Retrieval(11 July, 2013)

Speaker: Dr Taner Dincer
This presentation will introduce the application of Correspondence Analysis in Information Retrieval

This presentation will introduce the application of Correspondence Analysis (CA) to Information Retrieval. CA is a well-established multivariate, statistical, exploratory data analysis technique. Multivariate data analysis techniques usually operate on a rectangular array of real numbers called a data matrix, whose rows represent r observations (for example, r terms/words in documents) and whose columns represent c variables (for example, c documents, resulting in an r×c term-by-document matrix). Multivariate data analysis refers to analysing the data in a manner that takes into account the relationships among observations and also among variables. In contrast to univariate statistics, it is concerned with the joint nature of measurements. The objective of exploratory data analysis is to explore the relationships among objects and among variables over measurements for the purpose of visual inspection. In particular, by using CA one can visually study the “Divergence From Independence” (DFI) among observations and among variables.


For Information Retrieval (IR), CA can serve three different uses: 1) As an analysis tool to visually inspect the results of information retrieval experiments, 2) As a basis to unify the probabilistic approaches to term weighting problem such as Divergence From Randomness and Language Models, and 3) As a term weighting model itself, "term weighting based on measuring divergence from independence". In this presentation, the uses of CA for these three purposes are exemplified.
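A small NumPy sketch of CA on a toy term-by-document count matrix: the SVD of the standardised residuals from independence yields coordinates that place terms and documents in a common low-dimensional space for visual inspection (toy counts, purely illustrative).

import numpy as np

N = np.array([[10, 2, 0],      # rows: terms, columns: documents
              [3, 8, 1],
              [0, 4, 9]], dtype=float)

P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)            # row and column masses
S = np.diag(r ** -0.5) @ (P - np.outer(r, c)) @ np.diag(c ** -0.5)   # residuals from independence

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
term_coords = np.diag(r ** -0.5) @ U * sv      # principal coordinates of terms
doc_coords = np.diag(c ** -0.5) @ Vt.T * sv    # principal coordinates of documents
print(term_coords[:, :2])
print(doc_coords[:, :2])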

A study of Information Management in the Patient Surgical Pathway in NHS Scotland(03 June, 2013)

Speaker: Matt-Mouley Bouamrane

We conducted a study of information management processes across the patient surgical pathway in NHS Scotland. While the majority of General Practitioners (GPs) consider electronic information systems as an essential and integral part of their work during the patient consultation, many were not fully satisfied with the functionalities of these systems. A majority of GPs considered that the national eReferral system streamlined referral processes. Almost all GPs reported marked variability in the quality of discharge information. Preoperative processes vary significantly across Scotland, with most services using paper based systems. There is insufficient use made of information provided through the patient electronic referral and a considerable duplication of effort with the work already performed in primary care. Three health-boards have implemented electronic preoperative information systems. These have transformed clinical practices and facilitated communication and information-sharing among the multi-disciplinary team and within the health boards. Substantial progress has been made towards improving information transfer and sharing within the surgical pathway in recent years but there remains scope for further improvements at the interface between services.

Interdependence and Predictability of Human Mobility and Social Interactions(23 May, 2013)

Speaker: Mirco Musolesi

The study of the interdependence of human movement and social ties of individuals is one of the most interesting research areas in computational social science. Previous studies have shown that human movement is predictable to a certain extent at different geographic scales. One of the open problems is how to improve the prediction by exploiting additional available information. In particular, one of the key questions is how to characterise and exploit the correlation between the movements of friends and acquaintances to increase the accuracy of the forecasting algorithms.

In this talk I will discuss the results of our analysis of the Nokia Mobile Data Challenge dataset showing that, by means of multivariate nonlinear predictors, it is possible to exploit mobility data of friends in order to improve user movement forecasting. This can be seen as a process of discovering correlation patterns in networks of linked social and geographic data. I will also show how mutual information can be used to quantify this correlation; I will demonstrate how to use this quantity to select individuals with correlated mobility patterns in order to improve movement prediction. Finally, I will show how the exploitation of data related to friends improves dramatically the prediction with respect to the case of information of people that do not have social ties with the user.
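A toy sketch of the quantification step: mutual information between the discretised location sequences of two users, estimated here with scikit-learn on synthetic data (the correlation structure below is assumed, not taken from the Nokia dataset).

import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
user_a = rng.integers(0, 5, 1000)                  # visited cell ids over time
user_b = np.where(rng.random(1000) < 0.7, user_a,  # a "friend" who often co-locates
                  rng.integers(0, 5, 1000))

print("MI (correlated):", mutual_info_score(user_a, user_b))
print("MI (independent):", mutual_info_score(user_a, rng.integers(0, 5, 1000)))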

Discovering, Modeling, and Predicting Task-by-Task Behaviour of Search Engine Users (20 May, 2013)

Speaker: Salvatore Orlando

Users of web search engines are increasingly issuing queries to accomplish their daily tasks (e.g., “finding a recipe”, “booking a flight”, “reading online news”, etc.). In this work, we propose a two-step methodology for discovering latent tasks that users try to perform through search engines. Firstly, we identify user tasks from individual user sessions stored in query logs. In our vision, a user task is a set of possibly non-contiguous queries (within a user search session) which refer to the same need. Secondly, we discover collective tasks by aggregating similar user tasks, possibly performed by distinct users. To discover tasks, we propose to adopt clustering algorithms based on novel query similarity functions, in turn obtained by exploiting specific features, and both unsupervised and supervised learning approaches.  All the proposed solutions were evaluated on a manually-built ground truth.

Furthermore, we introduce the Task Relation Graph (TGR) as a representation of users' search behaviour from a task-by-task perspective, by exploiting the collective tasks obtained so far. The task-by-task behaviour is captured by weighting the edges of the TGR with a relatedness score computed between pairs of tasks, as mined from the query log.  We validated our approach on a concrete application, namely a task recommender system, which suggests related tasks to users on the basis of the task predictions derived from the TGR. Finally, we showed that the task recommendations generated by our technique are beyond the reach of existing query suggestion schemes, and that our solution is able to recommend tasks that users will likely perform in the near future.

 

Work in collaboration with Claudio Lucchese, Gabriele Tolomei, Raffaele Perego, and Fabrizio Silvestri.

 

References:

[1] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Identifying Task-based Sessions in Search Engine Query Logs". Forth ACM Int.l Conference on Web Search and Data Mining (WSDM 2011), Hong Kong, February 9-12, 2011

[2] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Discovering Tasks from Search Engine Query Logs", To appear on ACM Transactions on Information Systems (TOIS). 

[3] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Modeling and Predicting the Task-by-Task Behavior of Search Engine Users". To appear in Proc. OAIR 2013, Int.l Conference in the RIAO series.

Personality Computing(13 May, 2013)

Speaker: Alessandro Vinciarelli

 

 

Personality is one of the driving factors behind everything we do and experience in life. During the last decade, the computing community has been showing an ever increasing interest in this psychological construct, especially when it comes to efforts aimed at making machines socially intelligent, i.e. capable of interacting with people in the same way as people do. This talk will show the work being done in this area at the School of Computing Science. After an introduction to the concept of personality and its main applications, the presentation will illustrate experiments on speech-based automatic perception and recognition. Furthermore, the talk will outline the main issues and challenges still open in the domain.

Fast and Reliable Online Learning to Rank for Information Retrieval(06 May, 2013)

Speaker: Katja Hoffman

Online learning to rank for information retrieval (IR) holds promise for allowing the development of "self-learning search engines" that can automatically adjust to their users. With the large amounts of data (e.g., clicks) that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge.

 

In this talk I will present my recent work, which addresses the challenges posed by learning from natural user interactions. First, I will detail a new method, called Probabilistic Interleave, for inferring user preferences from users' clicks on search results. I show that this method allows unbiased and fine-grained ranker comparison using noisy click data, and that this is the first such method that allows the effective reuse of historical data (i.e., collected for previous comparisons) to infer information about new rankers. Second, I show that Probabilistic Interleave enables new online learning to rank approaches that can reuse historical interaction data to speed up learning by several orders of magnitude, especially under high levels of noise in user feedback. I conclude with an outlook on research directions in online learning to rank for IR, that are opened up by our results.

Entity Linking for Semantic Search(29 April, 2013)

Speaker: Edgar Meij



Semantic annotations have recently received renewed interest with the explosive increase in the amount of textual data being produced, the advent of advanced NLP techniques, and the maturing of the web of data. Such annotations hold the promise for improving information retrieval algorithms and applications by providing means to automatically understand the meaning of a piece of text. Indeed, when we look at the level of understanding that is involved in modern-day search engines (on the web or otherwise), we come to the obvious conclusion that there is still a lot of room for improvement. Although some recent advances are pushing the boundaries already, information items are still retrieved and ordered mainly using their textual representation, with little or no knowledge of what they actually mean. In this talk I will present my recent and ongoing work, which addresses the challenges associated with leveraging semantic annotations for intelligent information access. I will introduce a recently proposed method for entity linking and show how it can be applied to several tasks related to semantic search on collections of different types, genres, and origins. 

Flexible models for high-dimensional probability distributions(04 April, 2013)

Speaker: Iain Murray

Statistical modelling often involves representing high-dimensional probability distributions. The textbook baseline methods, such as mixture models (non-parametric Bayesian or not), often don’t use data efficiently, whereas the methods proposed in the machine learning literature, such as Gaussian process density models and undirected neural network models, are often too computationally expensive to use. Using a few case-studies, I will argue for increased use of flexible autoregressive models as a strong baseline for general use.
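A hedged sketch of the autoregressive idea: factorise p(x) into one-dimensional conditionals p(x_d | x_{<d}), each fitted here with a simple linear-Gaussian regression. Real autoregressive density models (e.g. NADE and relatives) use far richer conditionals; the data and model below are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0, 0],
                            [[1.0, 0.8, 0.2],
                             [0.8, 1.0, 0.5],
                             [0.2, 0.5, 1.0]], size=5000)

def fit_conditionals(X):
    models = []
    for d in range(X.shape[1]):
        A = np.hstack([X[:, :d], np.ones((len(X), 1))])   # regress x_d on x_{<d} plus a bias
        w, *_ = np.linalg.lstsq(A, X[:, d], rcond=None)
        resid = X[:, d] - A @ w
        models.append((w, resid.var()))
    return models

def log_density(x, models):
    logp = 0.0
    for d, (w, var) in enumerate(models):   # chain rule: sum of conditional log densities
        mean = np.append(x[:d], 1.0) @ w
        logp += -0.5 * (np.log(2 * np.pi * var) + (x[d] - mean) ** 2 / var)
    return logp

models = fit_conditionals(X)
print(log_density(np.array([0.1, 0.2, -0.3]), models))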

Query Classification for a Digital Library(18 March, 2013)

Speaker: Deirdre Lungley

The motivation for our query classification is the insight it gives the digital content provider into what his users are searching for and hence how his collection could be extended. This talk details two query classification methodologies we have implemented as part of the GALATEAS project (http://www.galateas.eu/): one log-based, the other using wikified queries to learn a Labelled LDA model. An analysis of their respective classification errors indicates the method best suited to particular category groups. 

Reusing Historical Interaction Data for Faster Online Learning to Rank for IR(12 March, 2013)

Speaker: Anne Schuth

 

Online learning to rank for information retrieval (IR) holds promise for allowing the development of "self-learning" search engines that can automatically adjust to their users. With the large amounts of data (e.g., clicks) that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge.

 

In this paper we investigate whether and how previously collected (historical) interaction data can be used to speed up learning in online learning to rank for IR. We devise the first two methods that can utilize historical data (1) to make feedback available during learning more reliable and (2) to preselect candidate ranking functions to be evaluated in interactions with users of the retrieval system. We evaluate both approaches on 9 learning to rank data sets and find that historical data can speed up learning, leading to substantially and significantly higher online performance. In particular, our preselection method proves highly effective at compensating for noise in user feedback. Our results show that historical data can be used to make online learning to rank for IR much more effective than previously possible, especially when feedback is noisy.

Scientific Lenses over Linked Data: Identity Management in the Open PHACTS project(11 March, 2013)

Speaker: Alasdair Gray,

Scientific Lenses over Linked Data: Identity Management in the Open PHACTS project

Alasdair Gray, University of Manchester

 

The discovery of new medicines requires pharmacologists to interact with a number of information sources ranging from tabular data to scientific papers, and other specialized formats. The Open PHACTS project, a collaboration of research institutions and major pharmaceutical companies, has developed a linked data platform for integrating multiple pharmacology datasets that form the basis for several drug discovery applications. The functionality offered by the platform has been drawn from a collection of prioritised drug discovery business questions created as part of the Open PHACTS project. Key features of the linked data platform are:

1) Domain specific API making drug discovery linked data available for a diverse range of applications without requiring the application developers to become knowledgeable of semantic web standards such as SPARQL;

2) Just-in-time identity resolution and alignment across datasets enabling a variety of entry points to the data and ultimately to support different integrated views of the data;

3) Centrally cached copies of public datasets to support interactive response times for user-facing applications.

 

Within complex scientific domains such as pharmacology, operational equivalence between two concepts is often context-, user- and task-specific. Existing linked data integration procedures and equivalence services do not take the context and task of the user into account. We enable users of the Open PHACTS platform to control the notion of operational equivalence by applying scientific lenses over linked data. The scientific lenses vary the links that are activated between the datasets, which affects the data returned to the user.

 

Bio

Alasdair is a researcher in the MyGrid team at the University of Manchester. He is currently working on the Open PHACTS project which is building an Open Pharmacological Space to integrate drug discovery data. Alasdair gained his PhD from Heriot-Watt University, Edinburgh, and then worked as a post-doctoral researcher in the Information Retrieval Group at the University of Glasgow. He has spent the last 10 years working on novel knowledge management projects investigating issues of relating data sets.

http://www.cs.man.ac.uk/~graya/

Modelling Time & Demographics in Search Logs(01 March, 2013)

Speaker: Milad Shokouhi

Knowing users' context offers great potential for personalizing web search results or related services such as query suggestion and query completion. Contextual features cover a wide range of signals: query time, user’s location, search history and demographics can all be regarded as contextual features that can be used for search personalization.

In this talk, we’ll focus on two main questions:

1)      How can we use the existing contextual features, in particular time, for improving search results (Shokouhi & Radinsky, SIGIR’12).

2)      How can we infer missing contextual features, in particular user-demographics, based on search history (Bi et al., WWW2013).

 

Our results confirm that (1) contextual features matter and (2) that many of them can be inferred from search history.

Pre-interaction Identification By Dynamic Grip Classification(28 February, 2013)

Speaker: Faizuddin Mohd Noor

We present a novel authentication method to identify users as they pick up a mobile device. We use a combination of back-of-device capacitive sensing and accelerometer measurements to perform classification, and obtain increased performance compared to previous accelerometer-only approaches. Our initial results suggest that users can be reliably identified during the pick-up movement before interaction commences.

Time-Biased Gain(21 February, 2013)

Speaker: Charlie Clark
Time-biased gain provides a unifying framework for information retrieval evaluation

Time-biased gain provides a unifying framework for information retrieval evaluation, generalizing many traditional effectiveness measures while accommodating aspects of user behavior not captured by these measures. By using time as a basis for calibration against actual user data, time-biased gain can reflect aspects of the search process that directly impact user experience, including document length, near-duplicate documents, and summaries. Unlike traditional measures, which must be arbitrarily normalized for averaging purposes, time-biased gain is reported in meaningful units, such as the total number of relevant documents seen by the user. In work reported at SIGIR 2012, we proposed and validated a closed-form equation for estimating time-biased gain, explored its properties, and compared it to standard approaches. In work reported at CIKM 2012, we used stochastic simulation to numerically approximate time-biased gain, an approach that provides greater flexibility, allowing us to accommodate different types of user behavior and increases the realism of the effectiveness measure. In work reported at HCIR 2012, we extended our stochastic simulation to model the variation between users. In this talk, I will provide an overview of time-biased gain, and outline our ongoing and future work, including extensions to evaluate query suggestion, diversity, and whole-page relevance. This is joint work with Mark Smucker.
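A minimal sketch of the closed-form idea (the half-life and per-document reading times below are assumed for illustration, not the calibrated values from the papers): gain from each relevant document is discounted by an exponentially decaying function of the time needed to reach it.

import math

def time_biased_gain(docs, half_life=224.0):
    """docs: list of (relevant, seconds_to_read) pairs in rank order."""
    tbg, elapsed = 0.0, 0.0
    for relevant, seconds in docs:
        decay = math.exp(-elapsed * math.log(2) / half_life)  # chance the user is still going
        tbg += (1.0 if relevant else 0.0) * decay
        elapsed += seconds
    return tbg  # interpretable as the expected number of relevant documents seen

ranked = [(True, 30), (False, 20), (True, 60), (True, 120)]
print(time_biased_gain(ranked))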

Evaluating Bad Query Abandonment in an Iterative SMS-Based FAQ Retrieval System(14 February, 2013)

Speaker: Edwin Thuma

We investigate how many iterations users are willing to tolerate in an iterative Frequently Asked Question (FAQ) system that provides information on HIV/AIDS. This is part of work in progress that aims to develop an automated Frequently Asked Question system that can be used to provide answers on HIV/AIDS related queries to users in Botswana. Our system engages the user in the question answering process by following an iterative interaction approach in order to avoid giving inappropriate answers to the user. Our findings provide us with an indication of how long users are willing to engage with the system. We subsequently use this to develop a novel evaluation metric to use in future developments of the system. As an additional finding, we show that the previous search experience of the users has a significant effect on their future behaviour.

[IR] Searching the Temporal Web: Challenges and Current Approaches(04 February, 2013)

Speaker: Nattiya Kanhabua

In this talk, we will give a survey of current approaches to searching the temporal web. In such a web collection, the contents are created and/or edited over time; examples are web archives, news archives, blogs, micro-blogs, personal emails and enterprise documents. Unfortunately, traditional IR approaches based only on term matching can give unsatisfactory results when searching the temporal web. The reason for this is multifold: 1) the collection is strongly time-dependent, i.e., with multiple versions of documents, 2) the contents of documents are about events that happened at particular time periods, 3) the meanings of semantic annotations can change over time, and 4) a query representing an information need can be time-sensitive, a so-called temporal query.

Several major challenges in searching the temporal web will be discussed, namely: 1) How to understand the temporal search intent represented by time-sensitive queries? 2) How to handle the temporal dynamics of queries and documents? and 3) How to explicitly model temporal information in retrieval and ranking models? To this end, we will present current approaches to the addressed problems as well as outline directions for future research.
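
A recurring building block in this literature is a ranking score that mixes textual relevance with how well a document's timestamp matches the query's time of interest; the sketch below shows the simplest such mixture, with an arbitrary weight and decay rate rather than values from any particular model surveyed in the talk.

    # Sketch of a simple time-aware ranking score: a linear mixture of textual
    # relevance and temporal closeness to the query's time of interest.
    # The mixing weight and decay rate are illustrative assumptions.
    import math

    def temporal_score(text_score, doc_time, query_time, alpha=0.3, decay_per_day=0.05):
        """text_score: normalised textual relevance; doc_time, query_time: POSIX timestamps."""
        days_apart = abs(doc_time - query_time) / 86400.0
        time_score = math.exp(-decay_per_day * days_apart)
        return (1 - alpha) * text_score + alpha * time_score

    # Two documents with equal textual relevance; the one closer in time wins.
    print(temporal_score(0.8, doc_time=1_357_000_000, query_time=1_357_100_000))
    print(temporal_score(0.8, doc_time=1_340_000_000, query_time=1_357_100_000))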

Probabilistic rule-based argumentation for norm-governed learning agents(28 January, 2013)

Speaker: Sebastian Riedel

There is a vast and ever-increasing amount of unstructured textual data at our disposal. The ambiguity, variability and expressivity of language make this data difficult to analyse, mine, search, visualise and, ultimately, base decisions on. These challenges have motivated efforts to enable machine reading: computers that can read text and convert it into semantic representations, such as the Google Knowledge Graph for general facts, or pathway databases in the biomedical domain. These representations can then be harnessed by machines and humans alike. At the heart of machine reading is relation extraction: reading text to create a semantic network of entities and their relations, such as employeeOf(Person,Company), regulates(Protein,Protein) or causes(Event,Event).

In this talk I will present a series of graphical models and matrix factorisation techniques that can learn to extract relations. I will start by contrasting a fully supervised approach with one that leverages pre-existing semantic knowledge (for example, in the Freebase database) to reduce annotation costs. I will then present ways to extract additional relations that are not yet part of the schema, and for which no pre-existing semantic knowledge is available. I will show that by doing so we can not only extract richer knowledge, but also improve the extraction quality of relations within the original schema, improving over the previous state of the art by more than 10 percentage points in mean average precision.
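
The following toy sketch shows the matrix-factorisation view in miniature: rows are entity pairs, columns are relations and textual patterns, observed cells come from text and an existing knowledge base, and low-rank embeddings score the unobserved cells. The data, dimensionality and plain SGD loop are deliberate simplifications, not the models presented in the talk.

    # Toy sketch of relation extraction as matrix factorisation:
    # score(entity pair, relation) = sigmoid(pair_embedding . relation_embedding).
    # Entity pairs, relations and observed cells are all invented.
    import numpy as np

    pairs = ["(Smith, Google)", "(Jones, Acme)", "(Brown, Megacorp)"]
    relations = ["employeeOf (KB)", "'X works at Y' (text pattern)"]

    # Labelled cells: 1 = observed in text or knowledge base, 0 = known negative.
    # The cell (Jones, Acme) x employeeOf is deliberately left out and predicted below.
    cells = {(0, 0): 1, (0, 1): 1, (1, 1): 1, (2, 0): 0, (2, 1): 0}

    rng = np.random.default_rng(0)
    dim = 4
    P = rng.normal(scale=0.1, size=(len(pairs), dim))       # entity-pair embeddings
    R = rng.normal(scale=0.1, size=(len(relations), dim))   # relation/pattern embeddings

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for _ in range(2000):                      # logistic loss on the labelled cells only
        for (i, j), y in cells.items():
            err = y - sigmoid(P[i] @ R[j])
            P[i], R[j] = P[i] + 0.1 * err * R[j], R[j] + 0.1 * err * P[i]

    # Score for the held-out cell employeeOf(Jones, Acme), compared with the
    # known-negative pair (Brown, Megacorp) for the same relation.
    print(sigmoid(P[1] @ R[0]), sigmoid(P[2] @ R[0]))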

IDI Seminar(29 November, 2012)

Speaker: Konstantinos Georgatzis
Efficient Optimisation for Data Visualisation as an Information Retrieval Task

Visualisation of multivariate data sets is often done by mapping data onto a low-dimensional display with nonlinear dimensionality reduction (NLDR) methods. We have introduced a formalism where NLDR for visualisation is treated as an information retrieval task, and a novel NLDR method called the Neighbor Retrieval Visualiser (NeRV) which outperforms previous methods. The remaining concern is that NeRV has quadratic computational complexity with respect to the number of data points. We introduce an efficient learning algorithm for NeRV in which relationships between data are approximated through mixture modelling, yielding near-linear computational complexity with respect to the number of data points. The method is much faster to optimise as the number of data points grows, and it maintains good visualisation performance.
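
For reference, the retrieval reading of the cost can be sketched directly: misses (true neighbours placed far apart on the display) are penalised by one Kullback-Leibler divergence and false neighbours by the other, and NeRV minimises a weighted sum of the two. The Gaussian neighbourhoods, the value of lambda and the random toy data below are illustrative; the mixture-model approximation discussed in the talk is not shown.

    # Sketch of the NeRV cost: a weighted sum of two KL divergences between
    # neighbourhood distributions in the data space (P) and on the display (Q).
    import numpy as np

    def soft_neighbourhoods(X, sigma=1.0):
        """Row-stochastic matrix: row i is point i's soft neighbourhood over the others."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)                 # a point is not its own neighbour
        return W / W.sum(axis=1, keepdims=True)

    def nerv_cost(X_high, X_low, lam=0.5, sigma=1.0, eps=1e-12):
        P = soft_neighbourhoods(X_high, sigma)   # neighbourhoods in the original space
        Q = soft_neighbourhoods(X_low, sigma)    # neighbourhoods on the 2-D display
        kl_pq = (P * np.log((P + eps) / (Q + eps))).sum()   # misses -> recall side
        kl_qp = (Q * np.log((Q + eps) / (P + eps))).sum()   # false neighbours -> precision side
        return lam * kl_pq + (1 - lam) * kl_qp

    rng = np.random.default_rng(0)
    data = rng.normal(size=(50, 10))       # toy high-dimensional data
    display = rng.normal(size=(50, 2))     # a random (hence poor) 2-D embedding
    print(nerv_cost(data, display))

The quadratic cost the talk addresses is visible here: both P and Q are full pairwise matrices, which is exactly what the mixture-model approximation avoids.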

Context data in lifelog retrieval(19 November, 2012)

Speaker: Liadh Kelly

Advances in digital technologies for information capture combined with massive increases in the capacity of digital storage media mean that it is now possible to capture and store much of one's life experiences in a personal lifelog. Information can be captured from a myriad of personal information devices including desktop computers, mobile phones, digital cameras, and various sensors, including GPS, Bluetooth, and biometric devices. This talk centers on the investigation of the challenges of retrieval in this emerging domain and on the examination of the utility of several implicitly recorded and derived context types in meeting these challenges. For these investigations, unique rich multimodal personal lifelog collections of 20 months' duration are used. These collections contain all items accessed on subjects' PCs and laptops (email, web pages, word documents, etc), passively captured images depicting subjects' lives using the SenseCam device (http://research.microsoft.com/sensecam), and mobile text messages sent and received. Items are annotated with several rich sources of automatically derived context data types including biometric data (galvanic skin response, heart rate, etc), geo-location (captured using GPS data), people present (captured using Bluetooth data), weather conditions, light status, and several context types related to the dates and times of accesses to items.

 

From Search to Adaptive Search(12 November, 2012)

Speaker: Udo Kruschwitz
Generating good query modification suggestions or alternative queries to assist a searcher, however, remains a challenging issue

Modern search engines have been moving away from very simplistic interfaces that aimed at satisfying a user's need with a single-shot query. Interactive features such as query suggestions and faceted search are now integral parts of Web search engines. Generating good query modification suggestions or alternative queries to assist a searcher, however, remains a challenging issue. Query log analysis is one of the major strands of work in this direction. While much research has been performed on query logs collected on the Web as a whole, query log analysis to enhance search on smaller and more focused collections (such as intranets, digital libraries and local Web sites) has attracted less attention. The talk will look at a number of directions we have explored at the University of Essex in addressing this problem by automatically acquiring continuously updated domain models using query and click logs (as well as other sources).
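
As a flavour of the log-based direction, the sketch below derives query modification suggestions purely from which queries follow which within the same session of a small, synthetic query log; the domain models built in this line of work are considerably richer, so this is only the simplest possible baseline.

    # Toy sketch: suggest query refinements from follow-up queries observed
    # within the same search sessions of a (synthetic) intranet query log.
    from collections import Counter, defaultdict

    sessions = [
        ["library opening hours", "library opening hours easter", "interlibrary loan"],
        ["exam timetable", "exam timetable resits"],
        ["library opening hours", "library fines"],
    ]

    followers = defaultdict(Counter)
    for session in sessions:
        for query, next_query in zip(session, session[1:]):
            followers[query][next_query] += 1

    def suggest(query, k=3):
        """Most frequent follow-up queries seen after `query` in the log."""
        return [q for q, _ in followers[query].most_common(k)]

    print(suggest("library opening hours"))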