Information Retrieval

The Information Retrieval group is a research group within the Information, Data and Analysis Section. The group has a long and strong research history spanning the whole of the information retrieval process, from theoretical modelling of retrieval, to building large-scale text retrieval systems, to the interactive evaluation of multimedia information retrieval systems. The research interests of the group include:

- Theoretical modelling of IR systems
- Probabilistic retrieval
- Web information retrieval
- Implementation of large-scale IR systems
- Multimedia (Image, Video, Audio) information retrieval
- Intranet/Enterprise and Blog search
- Distributed and Peer-to-Peer retrieval
- User Modelling and the development of novel adaptive interaction techniques
- Evaluation of IR systems
- Text mining and knowledge discovery
- Multilingual information retrieval
- Semantic Web and information retrieval

The group maintains strong links with researchers in Machine Learning and Human-Computer Interaction, as well as with industry through knowledge and technology transfer.

Current Projects:

  • SMART (Search engine for MultimediA enviRonment geneRated contenT) is a research project funded by the European Commission's FP7 programme (grant number 287583).
  • COSS is an EPSRC-funded project (number EP/J020664/1). It deals with finding novel events in streams such as Twitter, Wikipedia or Newswire, in real time.

Past Projects

  • RAENG Fellowship
    Foundational research in information retrieval inspired by quantum theory.
    Prof C.J. van Rijsbergen, 2007-2012.

Academic Staff: Prof Joemon Jose, Prof Iadh Ounis, Dr Craig Macdonald

Research Assistants and Research Students: Eugene Kharitonov, Richard McCreadie, Graham McDonald, Philip McParlane, Fajie Yuan, Colin Wilkie, David Maxwell, Long Chen, Rami Alkawaldeh, Andrew McMinn, Haitao Yu, Jorge Gonzalez-Paule, Xiao Yang, Anjie Fang, Stuart Mackie, Jarana Manotumruksa.

The group's research themes include:

  • Theoretical development of probabilistic and logic-based models
  • Multimedia IR systems
  • Information analysis and access across media
  • Evaluation and usability of IR systems
  • Data mining of large data sets
  • Web information retrieval
  • Citation/link analysis
  • Implementation and evaluation of large-scale IR systems
  • Performance prediction and optimisation
  • Information retrieval in context
  • Multilingual retrieval
  • Interaction techniques based on implicit relevance feedback and summarisation
  • Adaptive information retrieval
  • Intranet/Enterprise and Blog search

This Week’s Events

Satisfying User Needs or Beating Baselines? Not always the same.

Group: Information Retrieval (IR)
Speaker: Walid Magdy, University of Edinburgh
Date: 12 December, 2016
Time: 15:00
Location: Sir Alwyn Williams Building, 422 Seminar Room

Information retrieval (IR) is mainly concerned with retrieving relevant documents to satisfy the information needs of users. Many IR tasks involving different genres and search scenarios have been studied for decades. Typically, researchers aim to improve retrieval effectiveness beyond the current “state of the art”. However, revisiting the modeling of the IR task itself is often essential before seeking improvement of results. This includes reconsidering the assumed search scenario, the approach used to solve the problem, or even the evaluation methodology. In this talk, some well-known IR tasks are explored to demonstrate that beating the state-of-the-art baseline is not always sufficient. Novel modeling of, understanding of, or approaches to IR tasks can lead to significant improvements in user satisfaction compared to just improving “objective” retrieval effectiveness. The talk includes example IR tasks, such as printed document search, patent search, speech search, and social media search.

Past Events

Supporting Evidence-based Medicine with Natural Language Processing (28 November, 2016)

Speaker: Dr. Mark Stevenson

The modern evidence-based approach to medicine is designed to ensure that patients are given the best possible care by basing treatment decisions on robust evidence. But the huge volume of information available to medical and health policy decision makers can make it difficult for them to decide on the best approach. Much of the current medical knowledge is stored in textual format and providing tools to help access it represents a significant opportunity for Natural Language Processing and Information Retrieval. However, automatically processing documents in this domain is not straightforward and doing so successfully requires a range of challenges to be overcome, including dealing with volume, ambiguity, complexity and inconsistency.  This talk will present a range of approaches from Natural Language Processing that support access to medical information. It will focus on three tasks: Word Sense Disambiguation, Relation Extraction and Contradiction Identification. The talk will outline the challenges faced when developing approaches for accessing information contained in medical documents, including the lack of available gold standard data to train systems. It will show how existing resources can help alleviate this problem by providing information that allows training data to be created automatically.

Human Computation for Entity-Centric Information Access (21 November, 2016)

Speaker: Dr. Gianluca Demartini

Human Computation is a novel approach used to obtain manual data processing at scale by means of crowdsourcing. In this talk we will start by introducing the dynamics of crowdsourcing platforms and provide examples of their use to build hybrid human-machine information systems. We will then present ZenCrowd, a hybrid system for entity linking and data integration problems over linked data, showing how the use of human intelligence at scale, in combination with machine-based algorithms, outperforms traditional systems. In this context, we will then discuss efficiency and effectiveness challenges of micro-task crowdsourcing platforms, including spam, quality control, and job scheduling.

Analysis of the Cost and Benefits of Search Interactions (07 November, 2016)

Speaker: Dr. Leif Azzopardi

Interactive Information Retrieval (IR) systems often provide various features and functions, such as query suggestions and relevance feedback, that a user may or may not decide to use. The decision to take such an option has associated costs and may lead to some benefit. Thus, a savvy user would take decisions that maximise their net benefit. In this talk, we will go through a number of formal models which examine the costs and benefits of various decisions that users, implicitly or explicitly, make when searching. We consider and analyse the following scenarios: (i) how long should a user's query be? (ii) should the user pose a specific or vague query? (iii) should the user take a suggestion or re-formulate? (iv) when should a user employ relevance feedback? and (v) when would the "find similar" functionality be worthwhile to the user? To this end, we analyse a series of cost-benefit models exploring a variety of parameters that affect the decisions at play. Through the analyses, we are able to draw a number of insights into different decisions, provide explanations for observed behaviours and generate numerous testable hypotheses. This work not only serves as a basis for future empirical work, but also as a template for developing other cost-benefit models involving human-computer interaction.

This talk is based on the recent ICTIR 2016 paper with Guido Zuccon: http://dl.acm.org/citation.cfm?id=2970412
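
As a rough illustration of the kind of reasoning these models formalise, here is a minimal sketch comparing two search actions by expected net benefit. The actions, gains and costs are hypothetical numbers, not values from the paper.

```python
def net_benefit(expected_gain, cost):
    """Net benefit of an action: expected gain minus interaction cost."""
    return expected_gain - cost

# Hypothetical estimates for a user deciding how to continue a search:
# reformulating by hand may target the need better but costs more effort;
# accepting a query suggestion is cheap but may be less precise.
actions = {
    "reformulate":     net_benefit(expected_gain=8.0, cost=5.0),
    "take_suggestion": net_benefit(expected_gain=6.0, cost=1.5),
}

# A utility-maximising user picks the action with the highest net benefit.
best = max(actions, key=actions.get)
print(best, actions[best])  # take_suggestion 4.5
```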

I'm an information scientist - let me in! (31 October, 2016)

Speaker: Martin White

For the last 46 years Martin has been a professional information scientist, though often in secret. Since founding Intranet Focus Ltd he has found that awareness among his clients of research into topics such as information behaviour, information quality and information seeking is close to zero. This is especially true in information retrieval. In his presentation Martin will consider why this is the case, what the impact might be, and what (if anything) should and could be done to change this situation.

The problem of quantification in Information Retrieval and on Social Networks. (17 October, 2016)

Speaker: Gianni Amati

There is growing interest in knowing how fast information spreads on social networks, how many unique users are participating in an event, and the leading opinion polarity in a stream. Quantifying distinct elements in a flow of information is thus becoming a crucial problem in many real-time information retrieval and streaming applications. We discuss the state of the art of quantification and show that many problems can be interpreted within a common framework. We introduce a new probabilistic framework for quantification and show as examples how to count opinions in a stream and how to compute the degrees of separation of a network.
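
A classical building block for this kind of counting is probabilistic distinct-count estimation. Below is a minimal sketch of a k-minimum-values (KMV) estimator; it is a generic textbook technique, not the probabilistic framework introduced in the talk.

```python
import hashlib

def kmv_estimate(stream, k=256):
    """Estimate the number of distinct items from the k smallest hash values."""
    def h(item):
        # hash each item to a pseudo-uniform float in [0, 1)
        digest = hashlib.sha1(str(item).encode()).hexdigest()
        return int(digest[:15], 16) / 16**15

    # for brevity this materialises all distinct hashes; a real KMV keeps
    # only the k smallest values as the stream goes by
    smallest = sorted({h(x) for x in stream})[:k]
    if len(smallest) < k:          # fewer than k distinct items: exact count
        return len(smallest)
    return (k - 1) / smallest[-1]  # standard KMV estimate

# ~50,000 distinct users mentioned across a million stream items
stream = (f"user{i % 50000}" for i in range(1_000_000))
print(kmv_estimate(stream))        # close to 50000
```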

Analytics over Parallel Multi-view Data (03 October, 2016)

Speaker: Dr. Deepak Padmanabhan

Conventional unsupervised data analytics techniques have largely focused on processing datasets of single-type data, e.g., one of text, ECG, sensor readings or image data. With increasing digitization, it has become common to have data objects with representations that encompass different "kinds" of information. For example, the same disease condition may be identified through EEG or fMRI data. Thus, a dataset of EEG-fMRI pairs would be considered a parallel two-view dataset. Datasets of text-image pairs (e.g., a description of a seashore, and an image of it) and text-text pairs (e.g., problem-solution text, or multi-language text from machine translation scenarios) are other common instances of multi-view data. The challenge in multi-view data analytics is to effectively leverage such parallel multi-view data to perform analytics tasks such as clustering, retrieval and anomaly detection. This talk will cover some emerging trends in processing parallel multi-view data and different paradigms for doing so. In addition to looking at the different schools of techniques, and some specific techniques from each school, the talk will also present some possibilities for future work in this area.

Dr. Deepak Padmanabhan is a lecturer with the Centre for Data Sciences and Scalable Computing at Queen's University Belfast. He obtained his B.Tech in Comp. Sc. and Engg. from Cochin University (Kerala, India), followed by his M.Tech and PhD, all in computer science, from the Indian Institute of Technology Madras. Prior to joining Queen's, he was a researcher at IBM Research - India. He has over 40 publications across top venues in Data Mining, NLP, Databases and Information Retrieval. He co-authored a book on Operators for Similarity Search, published by Springer in 2015. He is the author of ~15 patent applications to the USPTO, including 4 granted patents. He is a recipient of the INAE Young Engineer Award 2015, and is a Senior Member of the ACM and the IEEE. His research interests include Machine Learning, Data Mining, NLP, Databases and Information Retrieval. Email: deepaksp@acm.org  URL: http://member.acm.org/~deepaksp

The whole is greater than the sum of its parts: how semantic trajectories and recommendations may help tourism. (22 August, 2016)

Speaker: Dr. Chiara Renso

During the first part of this talk I will overview my recent activity in the field of mobility data mining, with particular interest in the study of semantics in trajectory data and the experience with the recently concluded SEEK Marie Curie project [1]. Then I will present two highlights of tourism recommendation work based on the idea of semantic trajectories: TripBuilder [2] and GroupFinder [3]. TripBuilder is based on the analysis of enriched tourist trajectories extracted from Flickr photos to suggest itineraries constrained by a temporal budget and based on the traveller's preferences. The GroupFinder framework recommends a group of friends with whom to enjoy a visit to a place, balancing the friendship relations of the group members with the user's individual interests in the destination location.

[1] http://www.seek-project.eu
[2] Igo Ramalho Brilhante, José Antônio Fernandes de Macêdo, Franco Maria Nardini, Raffaele Perego, Chiara Renso. On planning sightseeing tours with TripBuilder. Inf. Process. Manage. 51(2): 1-15 (2015)
[3] Igo Ramalho Brilhante, José Antônio Fernandes de Macêdo, Franco Maria Nardini, Raffaele Perego, Chiara Renso. Group Finder: An Item-Driven Group Formation Framework. MDM 2016: 8-17

Bio:

Dr. Chiara Renso holds a PhD and M.Sc. degree in Computer Science from the University of Pisa (1992, 1997). She is a permanent researcher at the ISTI Institute of CNR, Italy. Her research interests are related to spatio-temporal data mining, reasoning, data mining query languages, semantic data mining and trajectory data mining. She has been involved in several EU projects about mobility data mining. She was the scientific coordinator of an FP7 Marie Curie project on semantic trajectory knowledge discovery called SEEK (www.seek-project.eu). She was also coordinator of a bilateral CNR-CNPq Italy-Brazil project on mobility data mining with the Federal University of Ceará. She is author of more than 90 peer-reviewed publications. She is co-editor of the book "Mobility Data: Modelling, Management, and Understanding" published by Cambridge University Press in 2013; co-editor of a special issue of the journal Knowledge and Information Systems (KAIS) on context-aware data mining; and co-editor of a special issue of the International Journal of Knowledge and Systems Science (IJKSS) on modelling tools for extracting useful knowledge and decision making. She has been co-chair of three editions of the Workshop on Semantic Aspects of Data Mining in conjunction with the IEEE ICDM conference. She is a regular reviewer for ACM CIKM, ACM KDD, ACM SIGSPATIAL and many journals on these topics.

Predicting Ad Quality for Native Advertisements (06 June, 2016)

Speaker: Dr Ke Zhou

Native advertising is a specific form of online advertising where ads replicate the look-and-feel of their serving platform. In such a context, providing a good user experience with the served ads is crucial to ensure long-term user engagement.

In this talk, I will explore the notion of ad quality, namely the effectiveness of advertising from a user experience perspective. I will cover both the pre-click and post-click perspectives on predicting quality for native ads. With respect to pre-click ad quality, we design a learning framework to detect offensive native ads, showing that, to quantify ad quality, ad offensive user feedback rates are more reliable than the commonly used click-through rate metrics. We translate a set of user preference criteria into a set of ad quality features that we extract from the ad text, image and advertiser, and then use them to train a model able to identify offensive ads. In terms of post-click quality, we use ad landing page dwell time as our proxy and exploit various ad landing page features to predict ad landing pages with high dwell time.
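
As a purely hypothetical sketch of the pre-click setting, the snippet below represents each ad by a few invented quality features and trains a classifier to flag offensive ads. The feature names and data are illustrative assumptions; the talk's actual features are derived from user preference criteria over the ad text, image and advertiser.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns (hypothetical): [excessive_punctuation, shocking_image_score, advertiser_reputation]
X = np.array([[0.9, 0.7, 0.1],
              [0.1, 0.0, 0.9],
              [0.8, 0.5, 0.2],
              [0.0, 0.1, 0.8]])
y = np.array([1, 0, 1, 0])  # 1 = flagged offensive via user feedback

model = LogisticRegression().fit(X, y)
new_ad = np.array([[0.7, 0.6, 0.3]])
print(model.predict_proba(new_ad)[0, 1])  # estimated probability the ad is offensive
```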

Efficient Web Search Diversification via Approximate Graph Coverage (25 April, 2016)

Speaker: Carsten Eickhoff

In the case of general or ambiguous Web search queries, retrieval systems rely on result set diversification techniques in order to ensure an adequate coverage of underlying topics such that the average user will find at least one of the returned documents useful. Previous attempts at result set diversification employed computationally expensive analyses of document content and query intent. In this paper, we instead rely on the inherent structure of the Web graph. Drawing from the locally dense distribution of similar topics across the hyperlink graph, we cast the diversification problem as optimizing coverage of the Web graph. In order to reduce the computational burden, we rely on modern sketching techniques to obtain highly efficient yet accurate approximate solutions. Our experiments on a snapshot of Wikipedia as well as the ClueWeb'12 dataset show ranking performance and execution times competitive with the state of the art at dramatically reduced memory requirements.
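
To convey the coverage objective, here is a minimal sketch of diversification as greedy maximum coverage: repeatedly pick the document whose graph neighbourhood covers the most not-yet-covered nodes. The paper's contribution is approximating this efficiently with sketches; the exact greedy version below only illustrates the objective.

```python
def greedy_diverse_ranking(neighbourhoods, k):
    """neighbourhoods: {doc_id: set of graph nodes the document covers}."""
    remaining = dict(neighbourhoods)  # avoid mutating the caller's dict
    covered, ranking = set(), []
    for _ in range(min(k, len(remaining))):
        # marginal gain of a document = number of newly covered nodes
        doc = max(remaining, key=lambda d: len(remaining[d] - covered))
        ranking.append(doc)
        covered |= remaining.pop(doc)
    return ranking

docs = {"d1": {1, 2, 3}, "d2": {3, 4}, "d3": {5, 6, 7, 8}, "d4": {1, 5}}
print(greedy_diverse_ranking(docs, k=2))  # ['d3', 'd1']
```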

Searching for better health: challenges and implications for IR (04 April, 2016)

Speaker: Dr. Guido Zuccon
A talk about why IR researchers should care about health search

In this talk I will discuss research problems and possible solutions related to helping the general public searching for health information online. I will show that although in the first instance this appears to be a domain-specific search task, research problems associated with this task have more general implications for IR and offer opportunities to develop advances that are applicable to the whole research field. In particular, in the talk I will focus on two aspects related to evaluation: (1) the inclusion of multiple dimensions of relevance in the evaluation of IR systems and (2) the modelling of query variations within the evaluation framework.

A Comparison of Primary and Secondary Relevance Judgements for Real-Life Topics (07 March, 2016)

Speaker: Dr Martin Halvey
In this talk I present a user study that examines in detail the differences between primary and secondary assessors on a set of "real-world" topics.

The notion of relevance is fundamental to the field of Information Retrieval. Within the field a generally accepted conception of relevance as inherently subjective has emerged, with an individual's assessment of relevance influenced by numerous contextual factors. In this talk I present a user study that examines in detail the differences between primary and secondary assessors on a set of "real-world" topics which were gathered specifically for the work. By gathering topics which are representative of the staff and students at a major university, at a particular point in time, we aim to explore differences between primary and secondary relevance judgements for real-life search tasks. Findings suggest that while secondary assessors may find the assessment task challenging in various ways (they generally possess less interest and knowledge in secondary topics and take longer to assess documents), agreement between primary and secondary assessors is high.  
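
Agreement between assessors of this kind is commonly quantified with Cohen's kappa, which corrects raw agreement for chance; the sketch below is one standard choice, not necessarily the exact analysis used in the study.

```python
def cohens_kappa(primary, secondary):
    """primary, secondary: equal-length lists of binary judgements (1 = relevant)."""
    n = len(primary)
    observed = sum(p == s for p, s in zip(primary, secondary)) / n
    # chance agreement from each assessor's marginal relevance rate
    p1, p2 = sum(primary) / n, sum(secondary) / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

primary   = [1, 1, 0, 1, 0, 0, 1, 1]
secondary = [1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(primary, secondary), 3))  # 0.467, moderate agreement
```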

Steps towards Profile-Based Web Site Search and Navigation (29 February, 2016)

Speaker: Prof. Udo Kruschwitz

Web search in all its flavours has been the focus of research for decades with thousands of highly paid researchers competing for fame. Web site search has however attracted much less attention but is equally challenging. In fact, what makes site search (as well as intranet and enterprise search) even more interesting is that it shares some common problems with general Web search but also offers a good number of additional problems that need to be addressed in order to make search on a Web site no longer a waste of time. At previous visits to Glasgow I talked about turning the log files collected on a Web site into usable, adaptive data structures that can be used in search applications (and which we call user or cohort profiles). This time I will focus on applying these profiles to a navigation scenario and illustrate how the automatically acquired profiles provide a practical use case for combining natural language processing and information retrieval techniques (as that is what we really do at Essex).

Sentiment and Preference Guided Social Recommendation. (22 February, 2016)

Speaker: Yoke Yie Chen
In this talk, I will focus on two knowledge sources for product recommendation: product reviews and user purchase trails.

Social recommender systems harness knowledge from social media to generate recommendations. Previous work on social recommender systems uses social knowledge such as social tags, social relationships (social networks) and microblogs. In this talk, I will focus on two knowledge sources for product recommendation: product reviews and user purchase trails. In particular, I will present how we exploit the sentiment expressed in product reviews, and the user preferences implicitly contained in user purchase trails, as the basis for recommendation.

Recent Advances in Search Result Diversification for the Web and Social Media (17 February, 2016)

Speaker: Ismail Sengor Altingovde
I will focus on the web search result diversification problem and present our novel contributions in the field.

In this talk, I will start with a short potpourri of our most recent research, the emphasis being on topics related to web search engines and the social Web. Then, I will focus on the web search result diversification problem and present our novel contributions in three directions. Firstly, I will present how the normalization of query relevance scores can boost the performance of the state-of-the-art explicit diversification strategies. Secondly, I will introduce a set of new explicit diversification strategies based on score-based and rank-based aggregation methods. As a third contribution, I will present how query performance prediction (QPP) can be utilized to weight query aspects during diversification. Finally, I will discuss how these diversification methods perform in the context of Tweet search, and how we improve them using word embeddings.

Practical and theoretical problems on the frontiers of multilingual natural language processing (16 February, 2016)

Speaker: Dr Adam Lopez
Multilingual natural language processing (NLP) has been enormously successful over the last decade, highlighted by offerings like Google Translate. What is left to do?

Multilingual natural language processing (NLP) has been enormously successful over the last decade, highlighted by offerings like Google Translate. What is left to do? I'll focus on two quite different, very basic problems that we don't yet know how to solve. The first is motivated by the development of new, massively-parallel hardware architectures like GPUs, which are especially tantalizing for computation-bound NLP problems, and may open up new possibilities for the application and scale of NLP. The problem is that classical NLP algorithms are inherently sequential, so harnessing the power of such processors requires completely rethinking the fundamentals of the field. The second is motivated by the fact that NLP systems often fail to correctly understand, translate, extract, or generate meaning. We're poised to make serious progress in this area using the reliable method of applying machine learning to large datasets, in this case large quantities of natural language text annotated with explicit meaning representations, which take the form of directed acyclic graphs. The problem is that probabilities on graphs are surprisingly hard to define. I'll discuss work on both of these fronts.

Information retrieval challenges in conducting systematic reviews (08 February, 2016)

Speaker: Julie Glanville
The presentation will also describe other areas where software such as text mining and machine learning have potential to contribute to the Systematic Review process

Systematic review (SR) is a research method that seeks to provide an assessment of the state of the research evidence on a specific question. Systematic reviews (SRs) aim to be objective, transparent and replicable, and seek to minimise bias by means of extensive searches.

The challenges of extensive searching will be summarised. As software tools and internet interconnectivity increase, we are seeing increasing use of a range of tools within the SR process (not only for information retrieval). This presentation will present some of the tools we are currently using within the Cochrane SR community and UK SRs, and the challenges which remain for efficient information retrieval. It will also describe other areas where software such as text mining and machine learning has the potential to contribute to the SR process.

Learning to Hash for Large Scale Image Retrieval (14 December, 2015)

Speaker: Sean Moran
In this talk I will introduce two novel data-driven models that significantly improve the retrieval effectiveness of locality sensitive hashing (LSH), a popular randomised algorithm for nearest neighbour search that permits relevant data-points to be retrieved in constant time, independent of the database size.

In this talk I will introduce two novel data-driven models that significantly improve the retrieval effectiveness of locality sensitive hashing (LSH), a popular randomised algorithm for nearest neighbour search that permits relevant data-points to be retrieved in constant time, independent of the database size.

To cut down the search space LSH generates similar binary hashcodes for similar data-points and uses the hashcodes to index database data-points into the buckets of a set of hashtables. At query time only those data-points that collide in the same hashtable buckets as the query are returned as candidate nearest neighbours. LSH has been successfully used for event detection in large scale streaming data such as Twitter [1] and for detecting 100,000 object classes on a single CPU [2].

The generation of similarity preserving binary hashcodes comprises two steps: projection of the data-points onto the normal vectors of a set of hyperplanes partitioning the input feature space followed by a quantisation step that uses a single threshold to binarise the resulting projections to obtain the hashcodes. In this talk I will argue that the retrieval effectiveness of LSH can be significantly improved by learning the thresholds and hyperplanes based on the distribution of the input data.
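
For concreteness, here is a minimal sketch of the vanilla (data-independent) LSH scheme described above: project data-points onto random hyperplane normals, then binarise each projection at zero. The talk's models replace these random, fixed choices with learned hyperplanes and thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_hashcodes(X, n_bits=16):
    """X: (n_points, n_dims) array -> (n_points, n_bits) binary hashcodes."""
    hyperplanes = rng.standard_normal((X.shape[1], n_bits))  # random normals
    projections = X @ hyperplanes            # step 1: projection
    return (projections > 0).astype(int)     # step 2: quantisation at zero

X = rng.standard_normal((5, 64))             # five 64-dimensional data-points
codes = lsh_hashcodes(X)
buckets = ["".join(map(str, row)) for row in codes]  # hashtable bucket keys
print(buckets)
```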

In the first part of my talk I will provide a high-level introduction to LSH. I will then argue that LSH makes a set of limiting assumptions arising from its data-independence that hamper its retrieval effectiveness. This motivates the second and third parts of my talk, in which I introduce two new models that address these limiting assumptions.

Firstly, I will discuss a scalar quantisation model that can learn multiple thresholds per LSH hyperplane using a novel semi-supervised objective function [3]. Optimising this objective function results in thresholds that reduce information loss inherent in converting the real-valued projections to binary. Secondly, I will introduce a new two-step iterative model for learning the hashing hyperplanes [4]. In the first step the hashcodes of training data-points are regularised over an adjacency graph which encourages similar data-points to be assigned similar hashcodes. In the second step a set of binary classifiers are learnt so as to separate opposing bits (0,1) with maximum margin. Repeating both steps iteratively encourages the hyperplanes to evolve into positions that provide a much better bucketing of the input feature space compared to LSH.

For both algorithms I will present a set of query-by-example image retrieval results on standard image collections, demonstrating significantly improved retrieval effectiveness versus state-of-the-art hash functions, in addition to a set of interesting and previously unexpected results.

[1] Sasa Petrovic, Miles Osborne and Victor Lavrenko. Streaming First Story Detection with Application to Twitter. In NAACL'10.

[2] Thomas Dean, Mark Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan and Jay Yagnik. Fast, Accurate Detection of 100,000 Object Classes on a Single Machine. In CVPR'13.

[3] Sean Moran, Victor Lavrenko and Miles Osborne. Neighbourhood Preserving Quantisation for LSH. In SIGIR'13.

[4] Sean Moran and Victor Lavrenko. Graph Regularised Hashing. In ECIR'15.

Building Effective and Efficient Information Retrieval Systems (26 June, 2015)

Speaker: Jimmy Lin
Machine learning has become the tool of choice for tackling challenges in a variety of domains, including information retrieval

Machine learning has become the tool of choice for tackling challenges in a variety of domains, including information retrieval. However, most approaches focus exclusively on effectiveness, that is, the quality of system output. Yet real-world production systems need to search billions of documents in tens of milliseconds, which means that techniques also need to be efficient (i.e., fast). In this talk, I will discuss two approaches to building more effective and efficient information retrieval systems. The first is to directly learn ranking functions that are inherently more efficient, a thread of research dubbed "learning to efficiently rank". The second is through architectural optimizations that take advantage of modern processor architectures, by paying attention to low-level details such as cache misses and branch mispredicts. The combination of both approaches, in essence, allows us to "have our cake and eat it too" in building systems that are both fast and good.

Analysing UK Annual Report Narratives using Text Analysis and Natural Language Processing (23 February, 2015)

Speaker: Mahmoud El-Haj
In this presentation I will show the work we’ve done in our Corporate Financial Information Environment (CFIE) project.

In this presentation I will show the work we have done in our Corporate Financial Information Environment (CFIE) project. The project, funded by ESRC and ICAEW, seeks to analyse UK financial narratives, their association with financial statement information, and their informativeness for investors using computational linguistics, heuristic Information Extraction (IE) and Natural Language Processing (NLP). We automatically collected and analysed 14,000 UK annual reports covering the period between 2002 and 2014 for the largest UK firms listed on the London Stock Exchange. We developed software for this purpose which is available online for general use by academics. The talk includes a demo of the software that we developed and used in our analysis: Wmatrix-import and Wmatrix. Wmatrix-import is a web-based tool to automatically detect and parse the structure of UK annual reports; the tool provides sectioning, word frequency and readability metrics. The output from Wmatrix-import serves as input for further NLP and corpus linguistic analysis by Wmatrix, a web-based corpus annotation and retrieval tool which currently supports the analysis of small to medium-sized English corpora.

Links:

Wmatrix-import
https://cfie.lancaster.ac.uk:8443/

Wmatrix
http://ucrel.lancs.ac.uk/wmatrix/

CFIE Project
http://ucrel.lancs.ac.uk/cfie/

Compositional Data Analysis (CoDA) approaches to distance in information retrieval (20 February, 2015)

Speaker: Dr Paul Thomas
Many techniques in information retrieval produce counts from a sample

Many techniques in information retrieval produce counts from a sample, and it is common to analyse these counts as proportions of the whole—term frequencies are a familiar example.  Proportions carry only relative information and are not free to vary independently of one another: for the proportion of one term to increase, one or more others must decrease.  These constraints are hallmarks of compositional data.  While there has long been discussion in other fields of how such data should be analysed, to our knowledge, Compositional Data Analysis (CoDA) has not been considered in IR. In this work we explore compositional data in IR through the lens of distance measures, and demonstrate that common measures, naïve to compositions, have some undesirable properties which can be avoided with composition-aware measures.  As a practical example, these measures are shown to improve clustering.
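
Below is a minimal sketch of one composition-aware measure, the Aitchison distance (Euclidean distance after a centred log-ratio transform); treating this as the talk's exact choice is an assumption, but it is the canonical CoDA distance.

```python
import numpy as np

def clr(p, eps=1e-9):
    """Centred log-ratio transform of a composition (proportions summing to 1)."""
    logp = np.log(np.asarray(p, dtype=float) + eps)  # eps guards zero proportions
    return logp - logp.mean()

def aitchison_distance(p, q):
    return float(np.linalg.norm(clr(p) - clr(q)))

# two term-frequency distributions over a four-term vocabulary
p = [0.70, 0.10, 0.10, 0.10]
q = [0.40, 0.40, 0.10, 0.10]
print(aitchison_distance(p, q))
```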

Users versus Models: What observation tells us about effectiveness metrics (16 February, 2015)

Speaker: Dr. Paul Thomas
This work explores the link between users and models by analysing the assumptions and implications of a number of effectiveness metrics, and exploring how these relate to observable user behaviours

Retrieval system effectiveness can be measured in two quite different ways: by monitoring the behaviour of users and gathering data about the ease and accuracy with which they accomplish certain specified information-seeking tasks; or by using numeric effectiveness metrics to score system runs in reference to a set of relevance judgements.  In the second approach, the effectiveness metric is chosen in the belief that it predicts ease or accuracy.

This work explores that link, by analysing the assumptions and implications of a number of effectiveness metrics, and exploring how these relate to observable user behaviours.  Data recorded as part of a user study included user self-assessment of search task difficulty; gaze position; and click activity.  Our results show that user behaviour is influenced by a blend of many factors, including the extent to which relevant documents are encountered, the stage of the search process, and task difficulty.  These insights can be used to guide development of batch effectiveness metrics.
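
Rank-biased precision is one concrete example of a metric with an explicit user model: after each result the user continues with persistence probability p. A minimal sketch (illustrative of the metric-assumption point, not code from the study):

```python
def rbp(relevances, p=0.8):
    """Rank-biased precision; relevances: 0/1 judgements down the ranking."""
    return (1 - p) * sum(rel * p**i for i, rel in enumerate(relevances))

run = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
for p in (0.5, 0.8, 0.95):   # impatient through to persistent user
    print(p, round(rbp(run, p), 3))
```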

Towards Effective Retrieval of Spontaneous Conversational Spoken Content (08 January, 2015)

Speaker: Gareth J. F. Jones
Spoken content retrieval (SCR) has been the focus of various research initiatives for more than 20 years.

Spoken content retrieval (SCR) has been the focus of various research initiatives for more than 20 years. Early research focused on retrieval of clearly defined spoken documents, principally from the broadcast news domain. The main focus of this work was the spoken document retrieval (SDR) task at TREC-6 to TREC-9, at the end of which SDR was declared a largely solved problem. However, this soon proved a premature conclusion, relating as it did to controlled recordings of professional news content, and overlooking many of the potential challenges of searching more complex spoken content. Subsequent research has focused on more challenging tasks such as search of interview recordings and semi-professional internet content. This talk will begin by reviewing early work in SDR, explaining its successes and limitations. It will then outline work exploring SCR for more challenging tasks, such as identifying relevant elements in long spoken recordings such as meetings and presentations, provide a detailed analysis of the characteristics of retrieval behaviour of spoken content elements when indexed using manual and automatic transcripts, and conclude with a summary of the challenges of delivering effective SCR for complex spoken content and initial attempts to address these challenges.

On Inverted Index Compression for Search Engine Efficiency (01 September, 2014)

Speaker: Matteo Catena

Efficient access to the inverted index data structure is a key aspect for a search engine to achieve fast response times to users’ queries. While the performance of an information retrieval (IR) system can be enhanced through the compression of its posting lists, there is little recent work in the literature that thoroughly compares and analyses the performance of modern integer compression schemes across different types of posting information (document ids, frequencies, positions). In this talk, we show the benefit of compression for different types of posting information to the space- and time-efficiency of the search engine. Comprehensive experiments have been conducted on two large, widely used document corpora and large query sets; using different modern integer compression algorithms, integrated into a modern IR system, the Terrier IR platform. While reporting the compression scheme which results in the best query response times, the presented analysis will also show the impact of compression on frequency and position posting information in Web corpora that have large volumes of anchor text.
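
The core idea of posting-list compression can be seen in a minimal sketch: store document ids as gaps and variable-byte encode each gap so that small gaps cost one byte. The codecs compared in the talk are faster and tighter, but rest on the same principle.

```python
def vbyte_encode(numbers):
    """Variable-byte encoding: 7 data bits per byte, stop bit set on the last byte."""
    out = bytearray()
    for n in numbers:
        while n >= 128:
            out.append(n & 0x7F)   # low 7 bits, continuation implied
            n >>= 7
        out.append(n | 0x80)       # final byte carries the stop bit
    return bytes(out)

def compress_postings(doc_ids):
    # gaps between sorted doc ids are much smaller than the ids themselves
    gaps = [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]
    return vbyte_encode(gaps)

postings = [3, 7, 11, 120, 150, 100000]
blob = compress_postings(postings)
print(len(blob), "bytes instead of", 4 * len(postings))  # 8 bytes vs 24
```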

Adaptive Interaction (02 June, 2014)

Speaker: Professor Andrew Howes
A utility maximization approach to understanding human interaction with technology

This lecture describes a theoretical framework for the behavioural sciences that holds high promise for theory-driven research and design in Human-Computer Interaction. The framework is designed to tackle the adaptive, ecological, and bounded nature of human behaviour. It is designed to help scientists and practitioners reason about why people choose to behave as they do and to explain which strategies people choose in response to utility, ecology, and cognitive information processing mechanisms. A key idea is that people choose strategies so as to maximise utility given constraints. The framework is illustrated with a number of examples including pointing, multitasking, skim-reading, online purchasing, Signal Detection Theory and diagnosis, and the influence of reputation on purchasing decisions. Importantly, these examples span from perceptual/motor coordination, through cognition, to social interaction. Finally, the lecture discusses the challenging idea that people seek to find optimal strategies, and also discusses the implications for behavioural investigation in HCI.

Web-scale Semantic Ranking (16 May, 2014)

Speaker: Dr Nick Craswell
Bing Ranking Techniques

Semantic ranking models score documents based on closeness in meaning to the query rather than by just matching keywords. To implement semantic ranking at Web scale, we have designed and deployed a new multi-level ranking system that combines the best of inverted index and forward index technologies. I will describe this infrastructure, which is currently serving many millions of users, and explore several types of semantic models: translation models, syntactic pattern matching, and topical matching models. Our experiments demonstrate that these semantic ranking models significantly improve relevance over our existing baseline system. This is a repeat of a WWW 2014 industry track talk.

Optimized Interleaving for Retrieval Evaluation (28 April, 2014)

Speaker: Filip Radlinski

Interleaving is an online evaluation technique for comparing the relative quality of information retrieval functions by combining their result lists and tracking clicks. A sequence of such algorithms has been proposed, each shown to address problems in earlier algorithms. In this talk, I will formalize and generalize this process while introducing a formal model: after identifying a set of desirable properties for interleaving, I will show that an interleaving algorithm can be obtained as the solution to an optimization problem within those constraints. This approach makes explicit the parameters of the algorithm, as well as assumptions about user behavior. Further, this approach leads to an unbiased and more efficient interleaving algorithm than any previous approach, as I will show with a novel log-based analysis of user search behaviour.
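
For background, here is a minimal sketch of team-draft interleaving, one of the earlier algorithms that this work generalises (the optimised algorithm is instead derived as the solution to a constrained optimisation problem):

```python
import random

def team_draft(ranking_a, ranking_b, k=10):
    """Interleave two rankings; assumes they jointly contain at least k documents."""
    interleaved, teams = [], {}
    while len(interleaved) < k:
        for team in random.sample(["A", "B"], 2):  # random pick order per round
            source = ranking_a if team == "A" else ranking_b
            doc = next(d for d in source if d not in teams)
            teams[doc] = team                      # clicks credit the owning team
            interleaved.append(doc)
            if len(interleaved) == k:
                break
    return interleaved, teams

a = ["d1", "d2", "d3", "d4"]
b = ["d3", "d1", "d5", "d6"]
print(team_draft(a, b, k=4))
```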

Composite retrieval of heterogeneous web search (24 March, 2014)

Speaker: Horatiu Bota

Traditional search systems generally present a ranked list of documents as answers to user queries. In aggregated search systems, results from different and increasingly diverse verticals (image, video, news, etc.) are returned to users. For instance, many such search engines return to users both images and web documents as answers to the query "flower". Aggregated search has become a very popular paradigm. In this work, we go one step further and study a different search paradigm: composite retrieval. Rather than returning and merging results from different verticals, as is the case with aggregated search, we propose to return to users a set of "bundles", where a bundle is composed of "cohesive" results from several verticals. For example, for the query "London Olympics", one bundle per sport could be returned, each containing results extracted from news, videos, images, or Wikipedia. Composite retrieval can promote exploratory search in a way that helps users understand the diversity of results available for a specific query and decide what to explore in more detail.

We proposed and evaluated a variety of approaches to construct bundles that are relevant, cohesive and diverse. We also utilize both entities and terms as surrogates to represent items, and demonstrate their effectiveness in bridging the "mismatch" gap among heterogeneous sources. Compared with three baselines (traditional "general web only" ranking, federated search ranking, and aggregated search), our evaluation results demonstrate significant performance improvement for a highly heterogeneous web collection.

Query Auto-completion & Composite retrieval (17 March, 2014)

Speaker: Stewart Whiting and Horatiu Bota

=Recent and Robust Query Auto-Completion by Stewart Whiting=

Query auto-completion (QAC) is a common interactive feature that assists users in formulating queries by providing completion suggestions as they type. In order for QAC to minimise the user’s cognitive and physical effort, it must: (i) suggest the user’s intended query after minimal input keystrokes, and (ii) rank the user’s intended query highly in completion suggestions. QAC must be both robust and time-sensitive – that is, able to sufficiently rank both consistently and recently popular queries in completion suggestions. Addressing this trade-off, we propose several practical completion suggestion ranking approaches, including: (i) a sliding window of query popularity evidence from the past 2-28 days, (ii) the query popularity distribution in the last N queries observed with a given prefix, and (iii) short-range query popularity prediction based on recently observed trends. Through real-time simulation experiments, we extensively investigated the parameters necessary to maximise QAC effectiveness for three openly available query log datasets with prefixes of 2-5 characters: MSN and AOL (both English), and Sogou 2008 (Chinese). Results demonstrate consistent and language-independent improvements of up to 9.2% over a non-temporal QAC baseline for all query logs with prefix lengths of 2-3 characters. Hence, this work is an important step towards more effective QAC approaches.
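
As a rough illustration of the first approach above, here is a minimal sketch that ranks completions for a prefix by their popularity within a sliding window of recent days (the window length trades robustness against recency):

```python
from collections import Counter

def qac_rank(query_log, prefix, today, window_days=7, n=5):
    """query_log: iterable of (day, query) pairs; returns the top-n completions."""
    counts = Counter(q for day, q in query_log
                     if today - day < window_days and q.startswith(prefix))
    return [q for q, _ in counts.most_common(n)]

log = [(1, "facebook"), (2, "facebook"), (3, "face transplant"),
       (9, "face transplant"), (9, "facebook login"), (10, "facebook login")]
print(qac_rank(log, "face", today=10))
# ['facebook login', 'face transplant'] -- older 'facebook' hits fall outside the window
```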

=Composite retrieval of heterogeneous web search by Horatiu Bota=

Traditional search systems generally present a ranked list of documents as answers to user queries. In aggregated search systems, results from different and increasingly diverse verticals (image, video, news, etc.) are returned to users. For instance, many such search engines return to users both images and web documents as answers to the query "flower". Aggregated search has become a very popular paradigm. In this work, we go one step further and study a different search paradigm: composite retrieval. Rather than returning and merging results from different verticals, as is the case with aggregated search, we propose to return to users a set of "bundles", where a bundle is composed of "cohesive" results from several verticals. For example, for the query "London Olympics", one bundle per sport could be returned, each containing results extracted from news, videos, images, or Wikipedia. Composite retrieval can promote exploratory search in a way that helps users understand the diversity of results available for a specific query and decide what to explore in more detail.

We proposed and evaluated a variety of approaches to construct bundles that are relevant, cohesive and diverse. We also utilize both entities and terms as surrogates to represent items, and demonstrate their effectiveness in bridging the "mismatch" gap among heterogeneous sources. Compared with three baselines (traditional "general web only" ranking, federated search ranking, and aggregated search), our evaluation results demonstrate significant performance improvement for a highly heterogeneous web collection.

Studying the performance of semi-structured p2p information retrieval (10 March, 2014)

Speaker: Rami Alkhawaldeh

In recent decades, retrieval systems deployed over peer-to-peer (P2P) overlay networks have been investigated as an alternative to centralised search engines. Although modern search engines provide efficient document retrieval, there are several drawbacks, including: a single point of failure, maintenance costs, privacy risks, information monopolies from search engines companies, and difficulty retrieving hidden documents in the web (i.e. the deep web). P2P information retrieval (P2PIR) systems promise an alternative distributed system to the traditional centralised search engine architecture. Users and creators of web content in such networks have full control over what information they wish to share as well as how they share it.

Researchers have been tackling several challenges to build effective P2PIR systems: (i) collection (peer) representation during indexing, (ii) peer selection during search, to route queries to relevant peers, and (iii) final merging of peer results. Semi-structured P2P networks (i.e., partially decentralised unstructured overlay networks) offer an intermediate design that minimises the weaknesses of both centralised and completely decentralised overlay networks and combines the advantages of the two topologies. An evaluation framework for this kind of network is therefore necessary to compare the performance of different P2P approaches and to guide the development of new and more powerful approaches. In this work, we study the performance of three cluster-based semi-structured P2PIR models and explain the effect of several important design considerations and parameters on retrieval performance, as well as the robustness of these types of network.

Inside The World’s Playlist (23 February, 2014)

Speaker: Manos Tsagkias

We describe the algorithms behind Streamwatchr, a real-time system for analyzing the music listening behavior of people around the world. Streamwatchr collects music-related tweets, extracts artists and songs, and visualises the results in two ways: (i) currently trending songs and artists, and (ii) newly discovered songs.

Dublin City Search: An evolution of search to incorporate city data (24 November, 2013)

Speaker: Dr Veli Bicer, IBM Research Dublin
Cities generate a diversity of information from sensors, devices, social networks, governmental applications, and service networks. In such a diversity of information, answering specific information needs of city inhabitants requires holistic information retrieval techniques, capable of harnessing different sources of information.

Dr Veli Bicer is a researcher at the Smarter Cities Technology Center of IBM Research in Dublin. His research interests include semantic data management, semantic search, software engineering and statistical relational learning. He obtained his PhD from the Karlsruhe Institute of Technology, Karlsruhe, Germany, and his B.Sc. and M.Sc. degrees in computer engineering from Middle East Technical University, Ankara, Turkey.

Economic Models of Search (18 November, 2013)

Speaker: Leif Azzopardi

TBA

Predicting Screen Touches From Back-of-Device Grip Changes (14 November, 2013)

Speaker: Faizuddin Mohd Noor

We demonstrate that front-of-screen targeting on mobile phones can be predicted from back-of-device grip manipulations. Using simple, low-resolution capacitive touch sensors placed around a standard phone, we outline a machine learning approach to modelling the grip modulation and inferring front-of-screen touch targets. We experimentally demonstrate that grip is a remarkably good predictor of touch, and we can predict touch position 200ms before contact with an accuracy of 18mm.

Online Learning in Explorative Multi Period Information Retrieval (11 November, 2013)

Speaker: Marc Sloan

In Multi Period Information Retrieval we consider retrieval as a stochastic yet controllable process: the ranking action continuously controls the retrieval system's dynamics, and an optimal ranking policy is found in order to maximise overall user satisfaction. Different aspects of this process can be fixed, giving rise to different search scenarios. One such application is to fix the search intent and learn from a population of users over time. Here we use a multi-armed bandit algorithm and apply techniques from finance to learn optimally diverse and explorative search results for a query. We can also fix the user and dynamically model the search over multiple pages of results using relevance feedback. Likewise, we are currently investigating the same technique for session search using a Markov Decision Process.
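
As a rough illustration of the bandit view (each arm a candidate result, clicks as reward), here is a minimal epsilon-greedy sketch; the talk's approach is more refined, so this only conveys the explore/exploit trade-off.

```python
import random

class EpsilonGreedy:
    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}  # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:            # explore
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = EpsilonGreedy(["doc_a", "doc_b", "doc_c"])
true_ctr = {"doc_a": 0.1, "doc_b": 0.4, "doc_c": 0.2}  # unknown to the policy
for _ in range(5000):                                  # simulated user population
    arm = bandit.select()
    bandit.update(arm, 1 if random.random() < true_ctr[arm] else 0)
print(max(bandit.values, key=bandit.values.get))       # almost surely doc_b
```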

Stopping Information Search: An fMRI Investigation (04 November, 2013)

Speaker: Eric Walden

Information search has become an increasingly important factor in people's use of information systems.  In both personal and workplace environments, advances in information technology and the availability of information have enabled people to perform far more search and access much more information for decision making than in the very recent past.  One consequence of this abundance of information has been an increasing need for people to develop better heuristic methods for stopping search, since information available for most decisions now overwhelms people's cognitive processing capabilities and in some cases is almost infinite.  Information search has been studied in much past research, and cognitive stopping rules have also been investigated.  The present research extends and expands on previous behavioral research by investigating brain activation during searching and stopping behavior using functional Magnetic Resonance Imaging (fMRI) techniques.  We asked subjects to search for information about consumer products and to stop when they believed they had enough information to make a subsequent decision about whether to purchase that product.  They performed these tasks while in an MRI machine.  Brain scans were taken that measured brain activity throughout task performance.  Results showed that different areas of the brain were active for searching and stopping, that different brain regions were used for several different self-reported stopping rules, that stopping is a neural correlate of inhibition, suggesting a generalized stopping mechanism in the brain, and that certain individual difference variables make no difference in brain regions active for stopping.  The findings extend our knowledge of information search, stopping behavior, and inhibition, contributing to both the information systems and neuroscience literatures.  Implications of our findings for theory and practice are discussed.

Towards Technically assisted Sensitivity Review of UK Digital Public Records (21 October, 2013)

Speaker: Tim Gollins

There are major difficulties involved in identifying sensitive information in digital public records. These difficulties, if not addressed, will, together with the challenge of managing the risks of failing to identify sensitive documents, force government departments into the precautionary closure of large swaths of digital records. Such closures would inhibit timely, open and transparent access by citizens and others in civic society. Precautionary closures would also prevent social scientists' and contemporary historians' access to valuable qualitative information, and their ability to contextualise studies of emerging large-scale quantitative data. Closely analogous problems exist in UK local authorities, the third sector, and in other countries covered by the same or similar legislation and regulation. In 2012, having conducted investigations and earlier research into this problem, and with new evidence of immediate need emerging from the 20-year rule transition process, The UK National Archives (TNA) highlighted this serious issue facing government departments in the UK Public Records system; the Abaca project is the response.

The talk will outline the role of TNA, the background to sensitivity review, the impact of the move to born digital records, the nature of the particular challenge of reviewing them for sensitivity, and the broad approach that the Abaca Project is taking.

Accelerating research on big datasets with Stratosphere (14 October, 2013)

Speaker: Moritz Schubotz
Stratosphere is a research project investigating new paradigms for scalable, complex analytics on massively-parallel data sets.

Stratosphere is a research project investigating new paradigms for scalable, complex analytics on massively-parallel data sets. The core concept of Stratosphere is the PACT programming model, which extends MapReduce with second-order functions like Match, CoGroup and Cross, allowing researchers to describe complex analytics tasks naturally. The result is a directed acyclic graph of operators that is optimized for parallel execution by a cost-based optimizer incorporating user code properties, and executed by the Nephele data flow engine. Nephele is a massively parallel data flow engine dealing with resource management, work scheduling, communication, and fault tolerance.

In the seminar session we introduce and showcase how researchers can set up their working environment quickly and start doing research right away. As a proof of concept, we present how a simple Java program, parallelized and optimized by Stratosphere, obtained top results at the "exotic" Math search task at NTCIR-10. While other research groups optimized index structures and data formats and waited several hours for their indices to be built on high-end hardware, we could focus on the essential program logic, use basic data types, and run the experiments on a heterogeneous desktop cluster in several minutes.
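
To give a feel for the second-order functions named above, here is a pure-Python illustration of their semantics only; this is not the Stratosphere/PACT API. Match applies a user function to same-key record pairs, Cross to every pair.

```python
from collections import defaultdict

def match(left, right, udf):
    """left, right: lists of (key, value) records; udf runs on same-key pairs."""
    index = defaultdict(list)
    for k, v in right:
        index[k].append(v)
    return [udf(lv, rv) for k, lv in left for rv in index[k]]

def cross(left, right, udf):
    """udf runs on the full cartesian product of the two inputs."""
    return [udf(l, r) for l in left for r in right]

pages = [("p1", "intro"), ("p2", "maths")]
links = [("p1", "p2")]
print(match(pages, links, lambda text, target: (text, target)))  # [('intro', 'p2')]
print(cross([1, 2], [10, 20], lambda a, b: a * b))               # [10, 20, 20, 40]
```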

Validity and Reliability in Cranfield-like Evaluation in Information Retrieval (23 September, 2013)

Speaker: Julián Urbano

The Cranfield paradigm to Information Retrieval evaluation has been used for half a century now as the means to compare retrieval techniques and advance the state of the art accordingly. However, this paradigm makes certain assumptions that remain a research problem in Information Retrieval and that may invalidate our experimental results.

In this talk I will approach the Cranfield paradigm as a statistical estimator of certain probability distributions that describe the final user experience. These distributions are estimated with a test collection, which actually computes system-related distributions that are assumed to be correlated with the target user-related distributions. From the point of view of validity, I will discuss the strength of that correlation and how it affects the conclusions we draw from an evaluation experiment. From the point of view of reliability, I will discuss past and current practice in measuring the reliability of test collections, and review several collections accordingly.

Exploration and contextualization: towards reusable tools for the humanities. (16 September, 2013)

Speaker: Marc Bron

The introduction of new technologies, access to large electronic cultural heritage repositories, and the availability of new information channels continues to change the way humanities researchers work and the questions they seek to answer. In this talk I will discuss how the research cycle of humanities researchers has been affected by these changes and argue for the continued development of interactive information retrieval tools to support the research practices of humanities researchers. Specifically, I will focus on two phases in the humanities research cycle: the exploration phase and the contextualization phase. In the first part of the talk I discuss work on the development and evaluation of search interfaces aimed at supporting exploration. In the second part of the talk I will focus on how information retrieval technology focused on identifying relations between concepts may be used to develop applications that support contextualization.

Quantum Language Models (19 August, 2013)

Speaker: Alessandro Sordoni

A joint analysis of both Vector Space and Language Models for IR using the mathematical framework of Quantum Theory revealed how both models allocate the space of density matrices. A density matrix is shown to be a general representational tool capable of leveraging capabilities of both VSM and LM representations, thus paving the way for a new generation of retrieval models. The new approach is called Quantum Language Modeling (QLM) and has shown its efficiency and effectiveness in modeling term dependencies for Information Retrieval.
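
A minimal numeric sketch of the density-matrix representation the abstract refers to, assuming a toy three-term vocabulary and one scoring rule used in the QLM literature (maximising tr(rho_q log rho_d), i.e. minimising the von Neumann cross-entropy); this is not the speaker's implementation.

```python
# Density matrices as mixtures of rank-one projectors over term vectors.
import numpy as np

def density_matrix(vectors, weights):
    """rho = sum_i w_i |v_i><v_i|, with the weights normalised to sum to 1."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return sum(wi * np.outer(v, v) for wi, v in zip(w, vectors))

# Toy 3-term vocabulary: one-hot vectors for single terms, plus a
# superposition standing in for a dependency between terms 0 and 1.
e = np.eye(3)
dep = (e[0] + e[1]) / np.sqrt(2)
rho_q = density_matrix([e[0], dep], [0.5, 0.5])              # "query"
rho_d = density_matrix([e[0], e[1], e[2]], [0.6, 0.3, 0.1])  # "document"

# Score by tr(rho_q log rho_d); higher (closer to zero) is better.
vals, vecs = np.linalg.eigh(rho_d)
log_rho_d = vecs @ np.diag(np.log(np.maximum(vals, 1e-12))) @ vecs.T
print("query-document score:", np.trace(rho_q @ log_rho_d))
```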

Toward Models and Measures of Findability (21 July, 2013)

Speaker: Colin Wilkie
A summary of the work being undertaken on Findability

In this 10 minute talk, I will provide an overview of the project I am working on, which is about findability, and review some of the existing models and measures of findability, before outlining the models that I have been working on.
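
For readers unfamiliar with findability measures, here is a toy sketch of the general form such measures take: a document is findable to the extent that queries aimed at it actually retrieve it within a rank cutoff. The retrieval function, queries and cutoff below are invented for illustration and are not the models discussed in the talk.

```python
# Toy findability: fraction of a document's queries that retrieve it in the top c.
from collections import Counter

docs = {
    "d1": "glasgow information retrieval group seminar",
    "d2": "retrieval of multimedia information",
    "d3": "glasgow weather forecast",
}

def rank_of(doc_id, query, c=2):
    """Rank documents by simple term overlap; return the rank if it is <= c."""
    def score(text):
        terms = Counter(text.split())
        return sum(terms[t] for t in query.split())
    ranked = sorted(docs, key=lambda d: score(docs[d]), reverse=True)
    pos = ranked.index(doc_id) + 1
    return pos if pos <= c else None

def findability(doc_id, queries, c=2):
    hits = sum(1 for q in queries if rank_of(doc_id, q, c) is not None)
    return hits / len(queries)

print(findability("d1", ["glasgow retrieval", "information seminar"]))
```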

How cost affects search behaviour (21 July, 2013)

Speaker: Leif Azzopardi
Find out about how microeconomic theory predicts user behaviour...

In this talk, I will run through the work I will be presenting at SIGIR on "How cost affects search behavior". The empirical analysis is motivated and underpinned using the Search Economic Theory that I proposed at SIGIR 2011. 
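
To make the economic framing concrete, the sketch below works through a toy cost model in the spirit of that theory: search output follows a Cobb-Douglas production function over queries issued and documents assessed, and we search numerically for the cheapest mix that reaches a target gain. All constants are invented; this is not the calibrated model from the papers.

```python
# Toy search-economics sketch: cheapest (queries, assessments) mix for a target gain.
k, alpha, beta = 1.0, 0.6, 0.4          # production parameters (assumed)
c_q, c_a = 10.0, 2.0                    # seconds per query / per assessment (assumed)
target = 20.0                           # required gain, e.g. relevant documents found

best = None
for Q in range(1, 200):
    # Solve k * Q^alpha * A^beta = target for A, the assessments per query:
    A = (target / (k * Q ** alpha)) ** (1.0 / beta)
    cost = Q * c_q + Q * A * c_a
    if best is None or cost < best[0]:
        best = (cost, Q, A)

cost, Q, A = best
print(f"cheapest strategy: {Q} queries, {A:.1f} assessments each, {cost:.0f}s")
```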

[SICSA DVF] Language variation and influence in social media (15 July, 2013)

Speaker: Dr. Jacob Eisenstein
Dr. Eisenstein works on statistical natural language processing, focusing on social media analysis, discourse, and latent variable models

Languages vary by speaker and situation, and change over time.  While variation and change are inhibited in written corpora such as news text, they are endemic to social media, enabling large-scale investigation of language's social and temporal dimensions. The first part of this talk will describe a method for characterizing group-level language differences, using the Sparse Additive Generative Model (SAGE). SAGE is based on a re-parametrization of the multinomial distribution that is amenable to sparsity-inducing regularization and facilitates joint modeling across many author characteristics. The second part of the talk concerns change and influence. Using a novel dataset of geotagged word counts, we induce a network of linguistic influence between cities, aggregating across thousands of words. We then explore the demographic and geographic factors that drive the spread of new words between cities. This work is in collaboration with Amr Ahmed, Brendan O'Connor, Noah A. Smith, and Eric P. Xing.
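
A minimal sketch of the SAGE re-parametrization mentioned above, with an invented four-word vocabulary: a group's word distribution is the background log-frequency plus a sparse additive deviation (hand-set here; in SAGE it is learned under a sparsity-inducing prior).

```python
# SAGE-style additive model: log p(w | group) proportional to m_w + eta_w.
import numpy as np

vocab = ["the", "hella", "cab", "taxi"]
background_counts = np.array([1000.0, 1.0, 50.0, 50.0])
m = np.log(background_counts / background_counts.sum())  # background log-probs

# Sparse deviation for a hypothetical regional group: most entries are zero.
eta = np.array([0.0, 3.0, 0.5, 0.0])

log_p = m + eta
p = np.exp(log_p - log_p.max())
p = p / p.sum()                 # softmax: the group-specific word distribution
for w, pw in zip(vocab, p):
    print(f"{w:6s} {pw:.3f}")
```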

Biography
Jacob Eisenstein is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He works on statistical natural language processing, focusing on social media analysis, discourse, and latent variable models. Jacob was a Postdoctoral researcher at Carnegie Mellon and the University of Illinois. He completed his Ph.D. at MIT in 2008, winning the George M. Sprowls dissertation award.

 

The Use of Correspondence Analysis in Information Retrieval (11 July, 2013)

Speaker: Dr Taner Dincer
This presentation will introduce the application of Correspondence Analysis in Information Retrieval

This presentation will introduce the application of Correspondence Analysis (CA) to Information Retrieval. CA is a well-established multivariate, statistical, exploratory data analysis technique. Multivariate data analysis techniques usually operate on a rectangular array of real numbers called a data matrix, whose rows represent r observations (for example, r terms/words in documents) and whose columns represent c variables (for example, c documents, resulting in an r×c term-by-document matrix). Multivariate data analysis refers to analyzing the data in a manner that takes into account the relationships among observations and also among variables. In contrast to univariate statistics, it is concerned with the joint nature of measurements. The objective of exploratory data analysis is to explore the relationships among objects and among variables over measurements for the purpose of visual inspection. In particular, by using CA one can visually study the “Divergence From Independence” (DFI) among observations and among variables.


For Information Retrieval (IR), CA can serve three different uses: 1) As an analysis tool to visually inspect the results of information retrieval experiments, 2) As a basis to unify the probabilistic approaches to term weighting problem such as Divergence From Randomness and Language Models, and 3) As a term weighting model itself, "term weighting based on measuring divergence from independence". In this presentation, the uses of CA for these three purposes are exemplified.
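
To make the mechanics concrete, the sketch below runs the textbook CA computation on a toy term-by-document table: form the standardized residuals from independence (the DFI view above), then take an SVD whose leading dimensions give coordinates for visual inspection. The table values are invented.

```python
# Correspondence analysis on a toy term-by-document contingency table.
import numpy as np

N = np.array([[10., 2., 0.],
              [ 3., 8., 1.],
              [ 0., 1., 9.]])
P = N / N.sum()
r = P.sum(axis=1)                 # row masses (terms)
c = P.sum(axis=0)                 # column masses (documents)

# Standardized residuals measure divergence from independence (DFI).
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

row_coords = (U * sv) / np.sqrt(r)[:, None]      # principal coords of terms
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]   # principal coords of documents
print("inertia explained:", (sv**2 / (sv**2).sum())[:2])
print("term coords (dim 1):", np.round(row_coords[:, 0], 2))
print("doc coords  (dim 1):", np.round(col_coords[:, 0], 2))
```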

A study of Information Management in the Patient Surgical Pathway in NHS Scotland (03 June, 2013)

Speaker: Matt-Mouley Bouamrane

We conducted a study of information management processes across the patient surgical pathway in NHS Scotland. While the majority of General Practitioners (GPs) consider electronic information systems an essential and integral part of their work during the patient consultation, many were not fully satisfied with the functionalities of these systems. A majority of GPs considered that the national eReferral system streamlined referral processes. Almost all GPs reported marked variability in the quality of discharge information. Preoperative processes vary significantly across Scotland, with most services using paper-based systems. There is insufficient use made of information provided through the patient electronic referral and considerable duplication of the work already performed in primary care. Three health boards have implemented electronic preoperative information systems. These have transformed clinical practices and facilitated communication and information-sharing among the multi-disciplinary team and within the health boards. Substantial progress has been made towards improving information transfer and sharing within the surgical pathway in recent years, but there remains scope for further improvements at the interface between services.

Discovering, Modeling, and Predicting Task-by-Task Behaviour of Search Engine Users (20 May, 2013)

Speaker: Salvatore Orlando

Users of web search engines are increasingly issuing queries to accomplish their daily tasks (e.g., “finding a recipe”, “booking a flight”, “reading online news”, etc.). In this work, we propose a two-step methodology for discovering latent tasks that users try to perform through search engines. Firstly, we identify user tasks from individual user sessions stored in query logs. In our vision, a user task is a set of possibly non-contiguous queries (within a user search session) which refer to the same need. Secondly, we discover collective tasks by aggregating similar user tasks, possibly performed by distinct users. To discover tasks, we propose to adopt clustering algorithms based on novel query similarity functions, in turn obtained by exploiting specific features, and both unsupervised and supervised learning approaches. All the proposed solutions were evaluated on a manually-built ground truth.
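
As a toy illustration of the first step, the sketch below groups a session's queries into candidate tasks with a greedy single-link clustering over plain term Jaccard similarity; the actual work combines several richer similarity features and both unsupervised and supervised learning.

```python
# Toy task discovery: cluster a session's queries by term Jaccard similarity.
def jaccard(q1, q2):
    a, b = set(q1.split()), set(q2.split())
    return len(a & b) / len(a | b)

def cluster_session(queries, threshold=0.3):
    """Greedy single-link clustering: join a query to the first task
    containing a sufficiently similar query; otherwise start a new task."""
    tasks = []
    for q in queries:
        for task in tasks:
            if any(jaccard(q, other) >= threshold for other in task):
                task.append(q)
                break
        else:
            tasks.append([q])
    return tasks

session = ["cheap flights rome", "rome flights deals",
           "pasta carbonara recipe", "carbonara recipe cream"]
print(cluster_session(session))   # two tasks: travel vs. cooking
```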

Furthermore, we introduce the Task Relation Graph (TGR) as a representation of users' search behaviors from a task-by-task perspective, exploiting the collective tasks obtained so far. The task-by-task behavior is captured by weighting the edges of the TGR with a relatedness score computed between pairs of tasks, as mined from the query log. We validated our approach on a concrete application, namely a task recommender system, which suggests related tasks to users on the basis of the task predictions derived from the TGR. Finally, we showed that the task recommendations generated by our technique are beyond the reach of existing query suggestion schemes, and that our solution is able to recommend tasks that users will likely perform in the near future.

 

Work in collaboration with Claudio Lucchese, Gabriele Tolomei, Raffaele Perego, and Fabrizio Silvestri.

 

References:

[1] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Identifying Task-based Sessions in Search Engine Query Logs". Fourth ACM Int'l Conference on Web Search and Data Mining (WSDM 2011), Hong Kong, February 9-12, 2011.

[2] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Discovering Tasks from Search Engine Query Logs". To appear in ACM Transactions on Information Systems (TOIS).

[3] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Modeling and Predicting the Task-by-Task Behavior of Search Engine Users". To appear in Proc. OAIR 2013, Int'l Conference in the RIAO series.

Personality Computing (13 May, 2013)

Speaker: Alessandro Vinciarelli

 

 

Personality is one of the driving factors behind everything we do and experience in life. During the last decade, the computing community has been showing an ever increasing interest in such a psychological construct, especially when it comes to efforts aimed at making machines socially intelligent, i.e. capable of interacting with people in the same way as people do. This talk will show the work being done in this area at the School of Computing Science. After an introduction to the concept of personality and its main applications, the presentation will illustrate experiments on speech-based automatic perception and recognition. Furthermore, the talk will outline the main issues and challenges still open in the domain.

Fast and Reliable Online Learning to Rank for Information Retrieval (06 May, 2013)

Speaker: Katja Hoffman

Online learning to rank for information retrieval (IR) holds promise for allowing the development of "self-learning search engines" that can automatically adjust to their users. With the large amount of e.g., click data that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge.

 

In this talk I will present my recent work, which addresses the challenges posed by learning from natural user interactions. First, I will detail a new method, called Probabilistic Interleave, for inferring user preferences from users' clicks on search results. I show that this method allows unbiased and fine-grained ranker comparison using noisy click data, and that this is the first such method that allows the effective reuse of historical data (i.e., collected for previous comparisons) to infer information about new rankers. Second, I show that Probabilistic Interleave enables new online learning to rank approaches that can reuse historical interaction data to speed up learning by several orders of magnitude, especially under high levels of noise in user feedback. I conclude with an outlook on research directions in online learning to rank for IR that are opened up by our results.
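
The sketch below illustrates the interleaving step of Probabilistic Interleave: each ranker is softened into a probability distribution that decays with rank, and the shown list is built by sampling from the two distributions. The decay exponent and toy rankings are assumptions, and the credit-assignment step that makes comparisons unbiased (and enables reuse of historical data) is omitted for brevity.

```python
# The interleaving step of Probabilistic Interleave (credit assignment omitted).
import random
random.seed(42)

def rank_distribution(ranking, tau=3.0):
    """P(d) proportional to 1 / rank^tau: a softmax over document ranks."""
    weights = {d: 1.0 / (i + 1) ** tau for i, d in enumerate(ranking)}
    z = sum(weights.values())
    return {d: w / z for d, w in weights.items()}

def sample_without(dist, exclude):
    """Sample a document from dist, renormalised over unseen documents."""
    rest = {d: p for d, p in dist.items() if d not in exclude}
    z = sum(rest.values())
    x, acc = random.random() * z, 0.0
    for d, p in rest.items():
        acc += p
        if x <= acc:
            return d
    return d  # float-rounding fallback

def interleave(r1, r2, k=4):
    d1, d2 = rank_distribution(r1), rank_distribution(r2)
    shown, origin = [], []
    while len(shown) < k:
        use_first = random.random() < 0.5
        shown.append(sample_without(d1 if use_first else d2, shown))
        origin.append(1 if use_first else 2)
    return shown, origin

print(interleave(list("abcd"), list("badc")))
```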

Entity Linking for Semantic Search (29 April, 2013)

Speaker: Edgar Meij



Semantic annotations have recently received renewed interest with the explosive increase in the amount of textual data being produced, the advent of advanced NLP techniques, and the maturing of the web of data. Such annotations hold the promise for improving information retrieval algorithms and applications by providing means to automatically understand the meaning of a piece of text. Indeed, when we look at the level of understanding that is involved in modern-day search engines (on the web or otherwise), we come to the obvious conclusion that there is still a lot of room for improvement. Although some recent advances are pushing the boundaries already, information items are still retrieved and ordered mainly using their textual representation, with little or no knowledge of what they actually mean. In this talk I will present my recent and ongoing work, which addresses the challenges associated with leveraging semantic annotations for intelligent information access. I will introduce a recently proposed method for entity linking and show how it can be applied to several tasks related to semantic search on collections of different types, genres, and origins. 
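
As background, here is a toy sketch of the dictionary-based core that many entity-linking methods build on: candidate entities for a surface form are ranked by "commonness", i.e. how often anchors with that text point at each entity. The counts and threshold below are invented; the talk's method adds considerably more machinery.

```python
# Toy commonness-based entity linking from an (invented) anchor dictionary.
anchor_counts = {
    "glasgow": {"Glasgow": 950, "University_of_Glasgow": 50},
    "ir": {"Information_retrieval": 600, "Infrared": 400},
}

def commonness(surface):
    """P(entity | surface form), estimated from anchor counts."""
    cands = anchor_counts.get(surface.lower(), {})
    total = sum(cands.values())
    return {e: n / total for e, n in cands.items()} if total else {}

def link(surface, threshold=0.5):
    """Link to the most common candidate if it is confident enough."""
    scores = commonness(surface)
    if not scores:
        return None
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

print(link("Glasgow"))   # -> Glasgow
print(link("IR"))        # -> Information_retrieval
```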

Query Classification for a Digital Library (18 March, 2013)

Speaker: Deirdre Lungley

The motivation for our query classification is the insight it gives the digital content provider into what his users are searching for and hence how his collection could be extended. This talk details two query classification methodologies we have implemented as part of the GALATEAS project (http://www.galateas.eu/): one log-based, the other using wikified queries to learn a Labelled LDA model. An analysis of their respective classification errors indicates the method best suited to particular category groups. 

Reusing Historical Interaction Data for Faster Online Learning to Rank for IR (12 March, 2013)

Speaker: Anne Schuth

 

Online learning to rank for information retrieval (IR) holds promise for allowing the development of "self-learning" search engines that can automatically adjust to their users. With the large amount of e.g., click data that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge.

 

In this paper we investigate whether and how previously collected (historical) interaction data can be used to speed up learning in online learning to rank for IR. We devise the first two methods that can utilize historical data (1) to make feedback available during learning more reliable and (2) to preselect candidate ranking functions to be evaluated in interactions with users of the retrieval system. We evaluate both approaches on 9 learning to rank data sets and find that historical data can speed up learning, leading to substantially and significantly higher online performance. In particular, our preselection method proves highly effective at compensating for noise in user feedback. Our results show that historical data can be used to make online learning to rank for IR much more effective than previously possible, especially when feedback is noisy.

Scientific Lenses over Linked Data: Identity Management in the Open PHACTS project (11 March, 2013)

Speaker: Alasdair Gray, University of Manchester


 

The discovery of new medicines requires pharmacologists to interact with a number of information sources ranging from tabular data to scientific papers, and other specialized formats. The Open PHACTS project, a collaboration of research institutions and major pharmaceutical companies, has developed a linked data platform for integrating multiple pharmacology datasets that form the basis for several drug discovery applications. The functionality offered by the platform has been drawn from a collection of prioritised drug discovery business questions created as part of the Open PHACTS project. Key features of the linked data platform are:

1) A domain-specific API making drug discovery linked data available for a diverse range of applications without requiring the application developers to become knowledgeable of semantic web standards such as SPARQL;

2) Just-in-time identity resolution and alignment across datasets enabling a variety of entry points to the data and ultimately to support different integrated views of the data;

3) Centrally cached copies of public datasets to support interactive response times for user-facing applications.

 

Within complex scientific domains such as pharmacology, operational equivalence between two concepts is often context-, user- and task-specific. Existing linked data integration procedures and equivalence services do not take the context and task of the user into account. We enable users of the Open PHACTS platform to control the notion of operational equivalence by applying scientific lenses over linked data. The scientific lenses vary the links that are activated between the datasets, which affects the data returned to the user.
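
A toy sketch of the scientific-lens idea just described: which cross-dataset equivalence links are active depends on the lens the user selects, so the same identifier can resolve to different equivalence sets. The identifiers and lens names below are illustrative only, not the Open PHACTS data.

```python
# Toy "scientific lenses": lens choice controls which equivalence links fire.
links = {
    "default": {
        "chembl:aspirin": {"drugbank:DB00945"},
    },
    "broad-chemistry": {
        "chembl:aspirin": {"drugbank:DB00945", "chebi:15365"},
    },
}

def equivalents(uri, lens="default"):
    """Return the identifier plus whatever the chosen lens links it to."""
    return {uri} | links.get(lens, {}).get(uri, set())

print(equivalents("chembl:aspirin"))                         # strict view
print(equivalents("chembl:aspirin", lens="broad-chemistry")) # wider view
```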

 

Bio

Alasdair is a researcher in the MyGrid team at the University of Manchester. He is currently working on the Open PHACTS project which is building an Open Pharmacological Space to integrate drug discovery data. Alasdair gained his PhD from Heriot-Watt University, Edinburgh, and then worked as a post-doctoral researcher in the Information Retrieval Group at the University of Glasgow. He has spent the last 10 years working on novel knowledge management projects investigating issues of relating data sets.

http://www.cs.man.ac.uk/~graya/

Modelling Time & Demographics in Search Logs (01 March, 2013)

Speaker: Milad Shokouhi

Knowing users' context offers great potential for personalizing web search results or related services such as query suggestion and query completion. Contextual features cover a wide range of signals; query time, user's location, search history and demographics can all be regarded as contextual features that can be used for search personalization.

In this talk, we’ll focus on two main questions:

1) How can we use existing contextual features, in particular time, to improve search results? (Shokouhi & Radinsky, SIGIR '12)

2) How can we infer missing contextual features, in particular user demographics, from search history? (Bi et al., WWW 2013)

 

Our results confirm that (1) contextual features matter and (2) many of them can be inferred from search history.

Time-Biased Gain (21 February, 2013)

Speaker: Charlie Clarke
Time-biased gain provides a unifying framework for information retrieval evaluation

Time-biased gain provides a unifying framework for information retrieval evaluation, generalizing many traditional effectiveness measures while accommodating aspects of user behavior not captured by these measures. By using time as a basis for calibration against actual user data, time-biased gain can reflect aspects of the search process that directly impact user experience, including document length, near-duplicate documents, and summaries. Unlike traditional measures, which must be arbitrarily normalized for averaging purposes, time-biased gain is reported in meaningful units, such as the total number of relevant documents seen by the user. In work reported at SIGIR 2012, we proposed and validated a closed-form equation for estimating time-biased gain, explored its properties, and compared it to standard approaches. In work reported at CIKM 2012, we used stochastic simulation to numerically approximate time-biased gain, an approach that provides greater flexibility, allowing us to accommodate different types of user behavior and increases the realism of the effectiveness measure. In work reported at HCIR 2012, we extended our stochastic simulation to model the variation between users. In this talk, I will provide an overview of time-biased gain, and outline our ongoing and future work, including extensions to evaluate query suggestion, diversity, and whole-page relevance. This is joint work with Mark Smucker.
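
To make the closed-form version concrete, here is a hedged sketch: gain at each rank is discounted by D(t) = exp(-t ln 2 / h), where t is the expected time to reach that rank and h is a half-life calibrated on user data (224 seconds in the SIGIR 2012 paper). The simple time model below, a fixed summary-reading time plus per-document reading times, is a simplification of the calibrated model.

```python
# Simplified closed-form time-biased gain.
import math

def time_biased_gain(rels, doc_times, t_summary=4.4, h=224.0):
    """rels[k]: 1 if the document at rank k is relevant, else 0.
    doc_times[k]: expected seconds spent reading document k."""
    tbg, t = 0.0, 0.0
    for rel, d_time in zip(rels, doc_times):
        decay = math.exp(-t * math.log(2) / h)   # D(t) = exp(-t ln2 / h)
        tbg += rel * decay                       # expected relevant docs seen
        t += t_summary + d_time                  # advance the expected clock
    return tbg

print(time_biased_gain(rels=[1, 0, 1], doc_times=[30.0, 0.0, 60.0]))
```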

[IR] Searching the Temporal Web: Challenges and Current Approaches (04 February, 2013)

Speaker: Nattiya Kanhabua

In this talk, we will give a survey of current approaches to searching the temporal web. In such a web collection, the contents are created and/or edited over time; examples are web archives, news archives, blogs, micro-blogs, personal emails and enterprise documents. Unfortunately, traditional IR approaches based on term-matching only can give unsatisfactory results when searching the temporal web. The reason for this is multifold: 1) the collection is strongly time-dependent, i.e., with multiple versions of documents, 2) the contents of documents are about events that happened at particular time periods, 3) the meanings of semantic annotations can change over time, and 4) a query representing an information need can be time-sensitive, a so-called temporal query.

Several major challenges in searching the temporal web will be discussed, namely: 1) How can we understand the temporal search intent represented by time-sensitive queries? 2) How can we handle the temporal dynamics of queries and documents? 3) How can we explicitly model temporal information in retrieval and ranking models? To this end, we will present current approaches to the addressed problems as well as outline directions for future research.
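
As one concrete example of modelling time in ranking, the sketch below mixes a lexical score with an exponential recency prior, a standard device in the time-aware retrieval literature for recency-sensitive queries; the decay rate and scores are invented for illustration.

```python
# Recency-aware ranking: lexical score combined with an exponential time prior.
import math

def temporal_score(term_score, doc_age_days, lam=0.01):
    """Combine a lexical score with P(d | recent) ~ exp(-lam * age), in log space."""
    return math.log(term_score) - lam * doc_age_days

docs = [("old_report", 0.9, 900),   # (id, lexical score, age in days)
        ("fresh_news", 0.7, 2)]
ranked = sorted(docs, key=lambda d: temporal_score(d[1], d[2]), reverse=True)
print([d[0] for d in ranked])       # the fresh document outranks the older one
```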

Probabilistic rule-based argumentation for norm-governed learning agents (28 January, 2013)

Speaker: Sebastian Riedel

There is a vast and ever-increasing amount of unstructured textual data at our disposal. The ambiguity, variability and expressivity of language makes this data difficult to analyse, mine, search, visualise, and, ultimately, base decisions on. These challenges have motivated efforts to enable machine reading: computers that can read text and convert it into semantic representations, such as the Google Knowledge Graph for general facts, or pathway databases in the biomedical domain. These representations can then be harnessed by machines and humans alike. At the heart of machine reading is relation extraction: reading text to create a semantic network of entities and their relations, such as employeeOf(Person,Company), regulates(Protein,Protein) or causes(Event,Event).

In this talk I will present a series of graphical models and matrix factorisation techniques that can learn to extract relations. I will start by contrasting a fully supervised approach with one that leverages pre-existing semantic knowledge (for example, in the Freebase database) to reduce annotation costs. I will then present ways to extract additional relations that are not yet part of the schema, and for which no pre-existing semantic knowledge is available. I will show that by doing so we can not only extract richer knowledge, but also improve extraction quality of relations within the original schema. This helps to improve over the previous state of the art by more than 10 percentage points in mean average precision.
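
The sketch below illustrates the matrix-factorisation view with toy data: rows are entity pairs, columns mix textual patterns and schema relations, and a BPR-style ranking loss learns embeddings so that observed cells outrank random ones. The names, sizes, and exact objective are assumptions for illustration, not the speaker's models.

```python
# Toy relation-extraction matrix factorisation over (entity pair, relation) cells.
import numpy as np

rng = np.random.default_rng(1)
pairs = ["(A,X)", "(B,Y)", "(C,Z)", "(D,W)"]
relations = ["'works at' pattern", "employeeOf"]
# Facts seen in text or in a KB, as (pair, relation) cells:
observed = [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1)]

k = 8
P = rng.normal(0, 0.1, (len(pairs), k))      # entity-pair embeddings
R = rng.normal(0, 0.1, (len(relations), k))  # relation embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# BPR-style SGD: for a relation, an observed pair should outrank a random pair.
for _ in range(5000):
    i_pos, j = observed[rng.integers(len(observed))]
    i_neg = rng.integers(len(pairs))
    g = sigmoid(P[i_neg] @ R[j] - P[i_pos] @ R[j])   # gradient weight
    P[i_pos] += 0.05 * g * R[j]
    P[i_neg] -= 0.05 * g * R[j]
    R[j]     += 0.05 * g * (P[i_pos] - P[i_neg])

# (A,X) was seen only with the textual pattern, never with employeeOf, yet it
# should now outscore the unrelated pair (D,W) for employeeOf:
print(P[0] @ R[1], ">", P[3] @ R[1])
```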

Context data in lifelog retrieval (19 November, 2012)

Speaker: Liadh Kelly
Context data in lifelog retrieval

Advances in digital technologies for information capture combined with massive increases in the capacity of digital storage media mean that it is now possible to capture and store much of one's life experiences in a personal lifelog. Information can be captured from a myriad of personal information devices including desktop computers, mobile phones, digital cameras, and various sensors, including GPS, Bluetooth, and biometric devices. This talk centers on the investigation of the challenges of retrieval in this emerging domain and on the examination of the utility of several implicitly recorded and derived context types in meeting these challenges. For these investigations unique rich multimodal personal lifelog collections of 20 months duration are used. These collections contain all items accessed on subjects' PCs and laptops (email, web pages, word documents, etc), passively captured images depicting subjects' lives using the SenseCam device (http://research.microsoft.com/sensecam), and mobile text messages sent and received. Items are annotated with several rich sources of automatically derived context data types including biometric data (galvanic skin response, heart rate, etc), geo-location (captured using GPS data), people present (captured using Bluetooth data), weather conditions, light status, and several context types related to the dates and times of accesses to items.

 

From Search to Adaptive Search (12 November, 2012)

Speaker: Udo Kruschwitz
Generating good query modification suggestions or alternative queries to assist a searcher remains, however, a challenging issue

Modern search engines have been moving away from very simplistic interfaces that aimed at satisfying a user's need with a single-shot query. Interactive features such as query suggestions and faceted search are now integral parts of Web search engines. Generating good query modification suggestions or alternative queries to assist a searcher remains, however, a challenging issue. Query log analysis is one of the major strands of work in this direction. While much research has been performed on query logs collected on the Web as a whole, query log analysis to enhance search on smaller and more focused collections (such as intranets, digital libraries and local Web sites) has attracted less attention. The talk will look at a number of directions we have explored at the University of Essex in addressing this problem by automatically acquiring continuously updated domain models using query and click logs (as well as other sources).
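
As a minimal illustration of the log-based direction, the sketch below mines query-to-query refinements from session logs and uses the counts as a continuously updated suggestion model; real domain models of the kind discussed in the talk are considerably richer.

```python
# Toy log-based domain model: suggestions from within-session query refinements.
from collections import defaultdict, Counter

refinements = defaultdict(Counter)

def observe_session(queries):
    """Count q -> q' transitions between consecutive queries in a session."""
    for q, q_next in zip(queries, queries[1:]):
        refinements[q][q_next] += 1

def suggest(query, n=3):
    """Most frequent follow-up queries seen after this query."""
    return [q for q, _ in refinements[query].most_common(n)]

observe_session(["timetable", "exam timetable", "exam timetable 2012"])
observe_session(["timetable", "lecture timetable"])
print(suggest("timetable"))   # e.g. ['exam timetable', 'lecture timetable']
```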
