This Week’s Events
There are no events scheduled for this week
Upcoming Events
There are no upcoming events
Past Events
Of bison and bigraphs: modelling interactions in physical/virtual spaces (15 May, 2012)
Speaker: Muffy Calder
Mixed reality systems present a wide range of challenges for formal modelling -- how can we model interactions in both physical and virtual spaces? We start to explore this question through a specific application: modelling Steve Benford's Savannah game using Bigraphical reactive systems. The Savannah game is a collaborative, location-based game in which groups of 'lions' (i.e. children with devices) hunt together on a virtual savannah that is overlaid on a (physical) open playing field. This work is in the preliminary stages and so, unusually for a formal methods talk, we will not give the details of *any* formal models! Instead we will focus on which aspects of the game we can formalise and reason about, and on the assumptions about the level of detail required for the physical space and for the virtual space.
Empirical Computer Science: how not to do it (09 October, 2012)
Speaker: Patrick Prosser
Empirical Computer Science is hard. To do it well you have to be ruthlessly honest and more than a little bit paranoid. I will present two examples of "How Not to do Empirical Computer Science". NOTE: "All persons, places, and events in this presentation are real. Certain speeches and thoughts are necessarily constructions by the presenter. No names have been changed to protect the innocent, since God Almighty protects the innocent as a matter of Heavenly routine." (quote plagiarized from Kurt Vonnegut's The Sirens of Titan)
Pi-Cost and a brief introduction to DR-PI-OMEGA and DR-PI - Towards Formalizing the Cost of Computation in a Distributed Computer Network (16 October, 2012)
Speaker: Manish Gaur
The pi-calculus is a basic abstract language for describing communicating processes and has a well-developed behavioural theory expressed as equivalence relations between process descriptions: a process P being equivalent to a process Q signifies that although P and Q may be intensionally very different, they offer essentially the same behaviour to their users. The basic language and its related theory have been extended in myriad ways in order to incorporate many different aspects of concurrent behaviour. In this talk, we present a new variation on the pi-calculus, picost, in which the use of channels must be paid for. Processes operate relative to a cost environment, and communication can only happen if principals have provided sufficient funds for the channels associated with the communications. We define a bisimulation-based behavioural preorder in which processes are related if, intuitively, they exhibit the same behaviour but one may be more efficient than the other. We justify our choice of preorder by proving that it is characterised by three intuitive properties which behavioural preorders should satisfy in a framework in which the use of resources must be funded.
This development, apart from other applications, is useful in formalising a distributed network with routers acting as active components in determining the quality of service of the network. We developed two formal languages for distributed networks where computations are described explicitly in the presence of routers. Our model may be considered an extension of the asynchronous distributed pi-calculus (ADpi). We believe that such models help in prototyping routing algorithms in the context of large networks and reasoning about them while abstracting away excessive details. Being general, the model may also be applied to demonstrate the role of routers in determining the quality of service of the network. Further in this talk, we intend to briefly describe the framework and the results obtained for such descriptions.
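To give a rough intuition for the cost environment, here is a minimal sketch (hypothetical names and values only; the talk's development is in terms of the calculus and its preorder, not code) of communication that is only permitted when a principal's funds cover the channel's cost:

```python
# Hypothetical illustration (not the picost calculus itself): a send on a channel
# succeeds only if the principal owning the process has deposited enough funds.

class CostEnvironment:
    def __init__(self, channel_costs):
        self.channel_costs = channel_costs    # cost charged per use of each channel
        self.funds = {}                       # funds deposited by each principal

    def deposit(self, principal, amount):
        self.funds[principal] = self.funds.get(principal, 0) + amount

    def communicate(self, principal, channel):
        """Permit a communication on `channel` only if `principal` can pay for it."""
        cost = self.channel_costs[channel]
        if self.funds.get(principal, 0) >= cost:
            self.funds[principal] -= cost
            return True                       # communication happens, funds consumed
        return False                          # insufficient funds: the action is blocked

env = CostEnvironment({"a": 2, "b": 5})
env.deposit("P", 3)
print(env.communicate("P", "a"))   # True:  P can afford one use of channel a
print(env.communicate("P", "b"))   # False: P cannot afford channel b
```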
Building Brains (19 October, 2012)
Speaker: Professor Steve Furber
When his concept of the universal computing machine finally became an engineering reality, Alan Turing speculated on the prospects for such machines to emulate human thinking. Although computers now routinely perform impressive feats of logic and analysis, such as searching the vast complexities of the global internet for information in a second or two, they have progressed much more slowly than Turing anticipated towards achieving normal human levels of intelligent behaviour, or perhaps “common sense”. Why is this?
Perhaps the answer lies in the fact that the principles of information processing in the brain are still far from understood. But progress in computer technology means that we can now realistically contemplate building computer models of the brain that can be used to probe these principles much more readily than is feasible, or ethical, with a living biological brain.
To register your attendance visit: www.gla.ac.uk/schools/computing/buildingbrains/
An Integer Programming Approach to the Hospitals/Residents Problem with Ties (06 November, 2012)
Speaker: Augustine Kwanashie
Matching problems generally involve the assignment of agents of one set to those of another. Often some or all of the agents have preferences over one another. An example of such a problem is the Hospitals/Residents problem with Ties (HRT), which models the problem of assigning graduating medical students to hospitals, where both sets of agents have preferences over one another that may involve ties. Finding a maximum stable matching given an HRT instance is known to be NP-hard. We investigate integer programming techniques for producing optimal stable matchings that perform reasonably well in practice. Gains made in the size of these matchings can deliver considerable benefits in some real-life applications. We describe various techniques used to improve the performance of these integer programs and present some empirical results.
The impact of the new Computer Science standards in NZ schools (12 November, 2012)
Speaker: Professor Tim Bell
New achievement standards for Computer Science are being phased in for high
schools from 2011 to 2013. This talk will describe the process that led to
the changes, and report on two studies on the impact of the new standards on
teachers and students respectively. It will conclude with speculation on the
future impact on universities once students arrive who have access to the
standards.
The first study is a survey of teachers, looking at the existing base of
teacher expertise, their motivations for changing, and how they have found
the transition. The second study is a detailed analysis of 151 student
reports submitted at the end of 2011 for a standard on algorithms,
programming languages and HCI. We will look at the kinds of learning that
students reported on, misconceptions, and the importance of students being
asked the right questions and using good examples to produce quality work.
Tim Bell is a professor in the Department of Computer Science and
Software Engineering at the University of Canterbury in Christchurch, New
Zealand. His main current research interest is computer science education;
in the past he has also worked on computers and music, and data
compression. His "Computer Science Unplugged" project is widely used
internationally, and its books and videos have been translated into about 17
languages. He is also a qualified musician, and performs regularly on
instruments that have black-and-white keyboards.
From Search to Adaptive Search (12 November, 2012)
Speaker: Udo Kruschwitz
Modern search engines have been moving away from very simplistic interfaces that aimed at satisfying a user's need with a single-shot query. Interactive features such as query suggestions and faceted search are now integral parts of Web search engines. Generating good query modification suggestions or alternative queries to assist a searcher remains, however, a challenging issue. Query log analysis is one of the major strands of work in this direction. While much research has been performed on query logs collected on the Web as a whole, query log analysis to enhance search on smaller and more focused collections (such as intranets, digital libraries and local Web sites) has attracted less attention. The talk will look at a number of directions we have explored at the University of Essex in addressing this problem by automatically acquiring continuously updated domain models using query and click logs (as well as other sources).
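As a flavour of what log-based model acquisition can look like, here is a minimal sketch (hypothetical log format and toy data) of mining query suggestions from reformulations that co-occur within the same search session:

```python
# A minimal sketch of one common log-based technique: suggest queries that
# frequently follow the current query within the same session.
from collections import defaultdict, Counter

def build_suggestions(log):
    """log: iterable of (session_id, timestamp, query), assumed sorted by time."""
    by_session = defaultdict(list)
    for session_id, _, query in log:
        by_session[session_id].append(query)

    followers = defaultdict(Counter)
    for queries in by_session.values():
        for current, nxt in zip(queries, queries[1:]):
            if current != nxt:
                followers[current][nxt] += 1     # count query reformulations
    return followers

log = [
    ("s1", 1, "jaguar"), ("s1", 2, "jaguar car"),
    ("s2", 1, "jaguar"), ("s2", 2, "jaguar animal"),
    ("s3", 1, "jaguar"), ("s3", 2, "jaguar car"),
]
suggestions = build_suggestions(log)
print(suggestions["jaguar"].most_common(2))   # [('jaguar car', 2), ('jaguar animal', 1)]
```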
The Trials and Tribulations of Typestate Inference (13 November, 2012)
Speaker: Iain McGinniss
Typestate is the combination of traditional object-oriented type theory with finite state machines that represent allowable sequences of method calls. A textual definition of a typestate, as required in specifying the type of a function parameter, is verbose to the point of being impractical. Therefore it is desirable to be able to omit such definitions where they can be accurately inferred. In this talk, I shall discuss my attempts to formally define and prove a typestate inference algorithm for a simple calculus, TS1.
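For readers unfamiliar with typestate, a minimal sketch of the underlying idea (illustrative only; this is not the TS1 calculus or the inference algorithm discussed in the talk) is a finite state machine over method names that decides which call sequences are allowed:

```python
# A typestate as a finite state machine over method names, used to check
# whether a sequence of calls is permitted. The file-handle protocol is hypothetical.

FILE_TYPESTATE = {
    ("closed", "open"): "opened",
    ("opened", "read"): "opened",
    ("opened", "close"): "closed",
}

def check_trace(typestate, start, calls):
    state = start
    for call in calls:
        if (state, call) not in typestate:
            return False, f"'{call}' not allowed in state '{state}'"
        state = typestate[(state, call)]
    return True, state

print(check_trace(FILE_TYPESTATE, "closed", ["open", "read", "close"]))  # (True, 'closed')
print(check_trace(FILE_TYPESTATE, "closed", ["read"]))                   # (False, ...)
```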
Open Problems in 2-level Compact Routing (14 November, 2012)
Speaker: Paul Jakma
"A quick talk on the subject of my PhD, on some of the open problems in compact routing. In particular, issues around selecting landmark nodes in 2-level compact routing schemes, and their influence on other problems such as policy in routing."
Detector Development for Hadron Physics and Applications (15 November, 2012)
Speaker: Bjoern Seitz
Nuclear physics research requires the detection and characterisation of ionising radiation over a large range of energies. This is especially true for fundamental science, but also for the manifold applications generated by nuclear physics research. Hence, nuclear physics groups develop a wide range of expertise in radiation detection and the development of associated instrumentation across the full data chain, from signal generation to interpretation, from simulation to hardware deployment. The presentation will give a brief overview of the instrumentation development activities for fundamental and applied nuclear physics undertaken at the University of Glasgow and will highlight some particular strengths of the group.
Personification Using Affective Speech Synthesis (16 November, 2012)
Speaker: Matthew P. Aylett
For many applications, multimodal systems require the ability to convey personality. This requirement varies from the explicit, where a virtual agent is mimicking a human in an immersive training application [1], to the implicit, where effective interaction in many systems is improved due to the enhanced involvement and pleasure of using a multimodal system that creates a sense of personality [2].
If this multimodal system is required to communicate with a human using audio, then speech synthesis is a critical element in rendering the sense of personality. In this paper we discuss the issues raised by the difficulty of assessing personality in artificial systems, as well as possible strategies for enhancing speech synthesis in order to create a deeper sense of personality rather than the more traditional objective of naturalness.
We examine work in expressive speech synthesis and present work carried out at CereProc in order to create emotional synthesis. The expressive functionality of CereVoice[3] is available in a
downloadable application and has been used to produce some novel character-based android voices. We will examine the elements of this system in terms of accent, voice adaptation, and expressive synthesis.
We will then present preliminary results from an emotional assessment of the CereProc system which will form part of a baseline of evaluation for further assessment of personality. To conclude we will present our proposed personality assessment of this speech material.
The work presented forms part of the 'personify' project supported by the Royal Society which commenced on the 1st of January 2012.
[1] L. Hall, S. Jones, A. Paiva, and R. S. Aylett, “Fearnot!: providing children with strategies to cope with bullying,” in 8th International Conference on Interaction Design and Children, 2009.
[2] T. Bickmore, L. Pfeifer, and B. Jack, “Taking the time to care: Empowering low health literacy hospital patients with virtual nurse agents,” in SIGCHI Conference on Human Factors in Computing Systems, 2009.
[3] M. P. Aylett and C. P. Pidcock, “The CereVoice Characterful Speech Synthesiser SDK,” AISB, Newcastle, pp. 174-178, 2007.
Dr Matthew Aylett has over 10 years experience in commercial speech synthesis and speech synthesis research. He is a founder of CereProc, which offers unique emotional and characterful synthesis solutions and has recently been awarded a Royal Society Industrial Fellowship to explore the role of speech synthesis in the perception of character in artificial agents.
Context data in lifelog retrieval (19 November, 2012)
Speaker: Liadh Kelly
Advances in digital technologies for information capture combined with
massive increases in the capacity of digital storage media mean that it is
now possible to capture and store much of one's life experiences in a
personal lifelog. Information can be captured from a myriad of personal
information devices including desktop computers, mobile phones, digital
cameras, and various sensors, including GPS, Bluetooth, and biometric
devices. This talk centers on the investigation of the challenges of
retrieval in this emerging domain and on the examination of the utility of
several implicitly recorded and derived context types in meeting these
challenges. For these investigations unique rich multimodal personal
lifelog collections of 20 months duration are used. These collections
contain all items accessed on subjects' PCs and laptops (email, web pages,
word documents, etc), passively captured images depicting subjects' lives
using the SenseCam device (http://research.microsoft.com/sensecam), and
mobile text messages sent and received. Items are annotated with several
rich sources of automatically derived context data types including
biometric data (galvanic skin response, heart rate, etc), geo-location
(captured using GPS data), people present (captured using Bluetooth data),
weather conditions, light status, and several context types related to the
dates and times of accesses to items.
PEPA and the Diffusion Problem (20 November, 2012)
Speaker: Michael Jamieson
PEPA, a formalism for keeping track of events during interacting stochastic processes, has been advocated in this School for use in biological investigations. An example is the study of nitric oxide diffusing in a blood vessel. In this talk I will suggest that PEPA be regarded as complementary to another method, which I will describe, of accounting for this diffusion.
VM Migration: Juggling the Data Center. (21 November, 2012)
Speaker: Gregg Hamilton
One major goal of data center operators is to give predictable, bounded performance guarantees (or SLAs) across their network. However, with the majority of traffic flows being highly dynamic and short-lived, achieving balanced network performance is somewhat problematic. Current virtual machine (VM) migration techniques balance server workloads using CPU and memory resources as migration indicators, with few considering the effects on network performance. This talk will look at the topic of my PhD work: combining server-side and network performance indicators to achieve a stable and predictable network through VM migration.
Turing's Universal Computing Machine (23 November, 2012)
Speaker: Paul Cockshott
The talk will cover the background to Turing's Machine: why universality was important, and the philosophical confusion at the heart of the belief that Turing Machines and the Lambda calculus are equivalent. The talk is one that I gave to the British Mathematical Colloquium in May on the Turing Centenary.
ERMMM - Economic Resource Modelling for Memory Management (28 November, 2012)
Speaker: Jeremy Singer
How do we share resources equitably between competing individuals? In this particular case, how do we share main memory between concurrent JVM processes? Can micro-economic theory provide inspiration to software systems architects? In this week's ENDS talk I aim to address these questions in a pragmatic way.
IDI Seminar (29 November, 2012)
Speaker: Konstantinos Georgatzis
Visualisation of multivariate data sets is often done by mapping data onto a low-dimensional display with nonlinear dimensionality reduction (NLDR) methods. We have introduced a formalism where NLDR for visualisation is treated as an information retrieval task, and a novel NLDR method called the Neighbor Retrieval
Visualiser (NeRV), which outperforms previous methods. The remaining concern is that NeRV has quadratic computational complexity with respect to the number of data points. We introduce an efficient learning algorithm for NeRV where relationships between data are approximated through mixture modelling, yielding near-linear computational complexity with respect to the number of data points. The method is much faster to optimise as the number of data points grows, and it maintains good visualisation performance.
Space-time modelling of climatic trends (29 November, 2012)
Speaker: Peter F. Craigmile
Classical assessments of climatic trends are based on the analysis of
a small number of time series. Considering trend to be only smooth
changes of the mean value of a stochastic process through time is
limiting, because it does not provide a mechanism to study changes of
the mean that could also occur over space. Thus, in studies of climate
there is a substantial interest in being able to jointly characterize
trends over time and space. In this talk we discuss the salient
features of climate data that must be incorporated in statistical models
that characterize trend. We build wavelet-based space-time
hierarchical Bayesian models that can be used to simultaneously model
trend, seasonality, and error, allowing for the possibility that the
error process may exhibit space-time long-range dependence. We
demonstrate how these statistical models can be used to assess the
significance of trend over time and space. We motivate and apply our
methods to the analysis of space-time temperature trends.
Paired and altruistic kidney donation in the UK: how algorithms can help (30 November, 2012)
Speaker: David Manlove
A patient who requires a kidney transplant, and who has a willing but incompatible donor, may be able to "swap" their donor with that of another patient in a similar position. This creates a "kidney exchange", involving two or more pairs swapping kidneys in a cyclic manner. Altruistic donors can also trigger "domino paired donation chains" involving incompatible patient-donor pairs, with the final donor donating a kidney to the deceased donor waiting list.
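To make the cycle structure concrete, here is a minimal sketch (hypothetical data, ignoring chains, altruistic donors and the scheme's real optimisation criteria) of enumerating short exchange cycles in a compatibility digraph whose edge (i, j) means the donor of pair i is compatible with the patient of pair j:

```python
from itertools import permutations

def canonical(cycle):
    i = cycle.index(min(cycle))
    return cycle[i:] + cycle[:i]          # fix the rotation, keep the orientation

def find_cycles(compatible, pairs, max_len=3):
    """compatible: set of directed edges (i, j); returns exchange cycles up to max_len."""
    cycles = set()
    for length in range(2, max_len + 1):
        for combo in permutations(pairs, length):
            edges = zip(combo, combo[1:] + combo[:1])            # close the cycle
            if all(e in compatible for e in edges):
                cycles.add(canonical(combo))
    return cycles

pairs = ["P1", "P2", "P3"]
compatible = {("P1", "P2"), ("P2", "P1"), ("P2", "P3"), ("P3", "P1")}
print(find_cycles(compatible, pairs))     # {('P1', 'P2'), ('P1', 'P2', 'P3')}
```

A solution to the underlying optimisation problem is then a set of disjoint cycles (and chains) chosen to best meet the scheme's objectives.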
NHS Blood and Transplant operate a UK-wide matching scheme, as part of the National Living Donor Kidney Sharing Schemes, which identifies potential kidney exchanges and domino paired donation chains involving incompatible patient-donor pairs and altruistic donors on their database every three months.
Since July 2008, NHSBT have used software produced by the School of Computing Science in order to construct an optimal solution to the underlying optimisation problem at each quarterly matching run. This has led to at least 165 actual transplants to date.
In this talk I will describe the application in more detail and outline briefly the computational problems involved. I will then give an overview of the results obtained to date, illustrating a couple of web applications that have been developed to assist with this task. This is joint work with Gregg O'Malley, who has been solely responsible for the implementation of the software currently in use.
A model checking approach for air traffic control requirement analysis (04 December, 2012)
Speaker: Michele Sevegnani
NATS provides air traffic navigation services to over 6,000 aircraft flying through UK controlled airspace every day. The huge challenge faced by the engineering team at the NATS control centre in Prestwick is to constantly monitor the status of the equipment required to provide safe and efficient en route services. This involves interpreting an unstructured data feed generated by thousands of diverse sensors, such as communication link monitors but also intrusion sensors.
In this talk we will describe how stochastic modelling and checking can be employed to help in this task. Our models allow us to quantify the overall performance of the monitoring system and the quality of the service provided, to predict future behaviours, to react to external events and to plan future upgrades by identifying weaknesses of the system and optimising assets.
Severe Weather in Kyoto (07 December, 2012)
Speaker: Wim Vanderbauwhede
This summer I spent two months on a research visit to the Disaster Prevention Research Institute of Kyoto University. I went there to work on GPU acceleration of numerical weather simulations. I will discuss the motivation for this work, briefly explain the actual research and the outcomes.
The Hospitals Residents Problem with Couples (11 December, 2012)
Speaker: Iain McBride
The Hospitals Residents Problem (HR) is a familiar problem which seeks a stable bipartite matching between two sets: one containing residents and one containing hospitals. Each agent expresses a strict linear preference over some subset of the members of the other set. The problem is well understood and an efficient algorithm due to Gale and Shapley exists which is guaranteed to find a stable matching in an instance of HR.
However, the problem becomes intractable when the residents are able to form linked pairs, or couples. This problem is known as the Hospitals Residents Problem with Couples (HRC). In this case Ronn has previously shown that the problem of deciding whether a stable matching exists in an instance of HRC is NP-complete. We show that this NP-completeness result holds for very restricted versions of HRC. Further, we provide an IP formulation for finding a stable matching in an instance of HRC, or reporting that the instance supports no stable matching, and provide early empirical results derived from this IP model.
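For reference, here is a minimal sketch of resident-proposing deferred acceptance for plain HR (no ties, no couples), the Gale-Shapley approach mentioned above; the data and names are illustrative, and the couples case discussed in the talk cannot be solved this way:

```python
def hospitals_residents(res_pref, hosp_pref, capacity):
    """Resident-proposing deferred acceptance for the plain Hospitals/Residents problem."""
    rank = {h: {r: i for i, r in enumerate(prefs)} for h, prefs in hosp_pref.items()}
    assigned = {h: [] for h in hosp_pref}
    next_choice = {r: 0 for r in res_pref}
    free = list(res_pref)

    while free:
        r = free.pop()
        if next_choice[r] >= len(res_pref[r]):
            continue                              # r has exhausted their list: unmatched
        h = res_pref[r][next_choice[r]]
        next_choice[r] += 1
        if r not in rank[h]:
            free.append(r)                        # h finds r unacceptable; r tries again
            continue
        assigned[h].append(r)
        assigned[h].sort(key=lambda x: rank[h][x])
        if len(assigned[h]) > capacity[h]:
            free.append(assigned[h].pop())        # worst resident over capacity is bumped
    return assigned

res_pref = {"r1": ["h1", "h2"], "r2": ["h1"], "r3": ["h1", "h2"]}
hosp_pref = {"h1": ["r2", "r1", "r3"], "h2": ["r1", "r3"]}
capacity = {"h1": 1, "h2": 2}
print(hospitals_residents(res_pref, hosp_pref, capacity))   # {'h1': ['r2'], 'h2': ['r1', 'r3']}
```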
Technology Systems in the Retail and Investment Banking Industry (11 January, 2013)
Speaker: Abyd Adhami
IT forms a strategic and integral part of every major banking and financial institution worldwide. These firms spend hundreds of millions (and for many even billions) on technology systems each year. This presentation will provide an overview of technology and its use in the banking and financial industry, touching on a number of areas, both across retail and investment banking technology.
It will look at some of the major integration architecture challenges that banks face as they continue to develop and enhance their portfolio of technology systems, striving for that commercial edge over the competition. It will also explore a couple of “cutting edge” and interesting areas of how technology is shaping the world of electronic trading and order routing systems.
NOTE: This presentation is based on my 10 years of IT experience working across several banking and financial institutions. Whilst preserving client confidentiality, it will include some examples of real banking systems/diagrams. It is not intended to be very technical, and I promise to include lots of stories and interesting facts hopefully appealing to a wider community.
Monitoring Crowd Movement using Social Media Platforms (18 January, 2013)
Speaker: Stefan Raue
Twenty years ago, the first smartphone prototypes were presented to the public. Since then, continuous technological improvements have enabled the production of devices for the mass market, allowing the general public to adopt this technology. Globally, there are now an estimated 1.038 billion smartphones in use. This has led to changes in the way information is exchanged amongst the general public. In my talk I will focus on the use of social media platforms from mobile devices as an example of a shift in communication behaviour during events (festivals, sports, and incidents).
In this talk I will cover some of my recent work on monitoring crowd movement for large-scale public events based on social media data, alongside stories of recent social media use (e.g. during Hurricane Sandy) from around the world. The presentation will contain a brief demo visualising crowd movement during the London Olympics and the T-in-the-Park festival in 2012.
I will finish the talk by going all the way back to 1994 when Hewlett Packard released a vision video (“Synergies”) covering the technology required for future emergency management. Many of the devices and technologies shown in the video are now part of our everyday life...some require much more research.
Test Automation to Perform Regression Testing using an Action-Based Paradigm (25 January, 2013)
Speaker: Paul Mullen
Using manual testing to perform regression testing can prove expensive, time-consuming and potentially inaccurate. With software failures costing an estimated 300 billion USD, there needs to be a cheap and reliable way to perform testing within a short period of time.
Test automation allows a team to perform testing without utilising manpower and can be used as an on demand service. Integrating test automation into a test plan can be a large project and traditional methods suffer from maintenance and extension costs, over the life of a product. To counter this, a new action-based paradigm was designed which relates the test script to the use cases being tested rather than the user interface.
This talk dispels common misconceptions about testing and discusses how test automation can provide a robust and effective tool in the hunt for defects over the lifetime of a product.
Probabilistic rule-based argumentation for norm-governed learning agents (28 January, 2013)
Speaker: Sebastian Riedel
There is a vast and ever-increasing amount of unstructured textual data at our disposal. The ambiguity, variability and expressivity of language make this data difficult to analyse, mine, search, visualise, and, ultimately, base decisions on. These challenges have motivated efforts to enable machine reading: computers that can read text and convert it into semantic representations, such as the Google Knowledge Graph for general facts, or pathway databases in the biomedical domain. These representations can then be harnessed by machines and humans alike. At the heart of machine reading is relation extraction: reading text to create a semantic network of entities and their relations, such as employeeOf(Person,Company), regulates(Protein,Protein) or causes(Event,Event).
In this talk I will present a series of graphical models and matrix factorisation techniques that can learn to extract relations. I will start by contrasting a fully supervised approach with one that leverages pre-existing semantic knowledge (for example, in the Freebase database) to reduce annotation costs. I will then present ways to extract additional relations that are not yet part of the schema, and for which no pre-existing semantic knowledge is available. I will show that by doing so we can not only extract richer knowledge, but also improve extraction quality for relations within the original schema. This helps to improve over the previous state of the art by more than 10 percentage points in mean average precision.
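As a rough illustration of the factorisation idea (toy data, with a plain low-rank SVD standing in for the much richer probabilistic models in the talk), one can view extraction as completing a matrix of entity pairs against textual patterns and knowledge-base relations:

```python
import numpy as np

pairs = ["(Jobs, Apple)", "(Gates, Microsoft)", "(Page, Google)"]
relations = ["X-works-for-Y", "employeeOf(X,Y)", "X-founded-Y"]

# 1 = observed in text or in the knowledge base, 0 = unobserved (possibly true).
M = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0]], dtype=float)

# Low-rank reconstruction: keep the top-k singular vectors and read the
# reconstructed scores as confidences for the unobserved cells.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
M_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(round(M_hat[0, 1], 2))   # confidence that employeeOf(Jobs, Apple) holds
print(round(M_hat[2, 2], 2))   # confidence that "Page founded Google" holds
```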
On Al Roth's Nobel Prize-winning lecture (29 January, 2013)
Speaker: David Manlove
The 2012 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (commonly known as the Nobel Prize in Economics) was awarded jointly to Professors Alvin E Roth and Lloyd S Shapley (see http://www.nobelprize.org/nobel_prizes/economics/laureates/2012/) "for the theory of stable allocations and the practice of market design".
Lloyd Shapley is the co-author of the famous Gale-Shapley algorithm (with David Gale, who sadly died in 2008). Al Roth has been instrumental in turning theory into practice through his involvement with centralised clearinghouses in many application domains, including junior doctor allocation and kidney exchange, in addition to contributing many important theoretical results himself.
The Nobel Prize announcement was made on 15 October, and the two laureates gave their award lectures on 8 December before receiving the awards on 10 December. We will watch Al Roth’s lecture, entitled “The Theory and Practice of Market Design” (43 mins). This is highly relevant to FATA research, as well as being very accessible to anyone who is interested in knowing “who gets what” when it comes to sharing around scarce resources.
Who is old - and why should we care? (29 January, 2013)
Speaker: Dr Alistair Edwards
GIST Seminar: Understanding Visualization: A Formal Approach using Category Theory and Semiotics (31 January, 2013)
Speaker: Dr Paul Vickers
We combine the vocabulary of semiotics and category theory to provide a general framework for understanding visualization in practice, including: relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. Further work will explore how the framework may be used to compare visualizations, especially those of different modalities. This may offer predictive potential before expensive user studies are carried out.
[IR] Searching the Temporal Web: Challenges and Current Approaches (04 February, 2013)
Speaker: Nattiya Kanhabua
In this talk, we will give a survey of current approaches to searching the
temporal web. In such a web collection, the contents are created and/or
edited over time, and examples are web archives, news archives, blogs,
micro-blogs, personal emails and enterprise documents. Unfortunately,
traditional IR approaches based on term-matching only can give
unsatisfactory results when searching the temporal web. The reason for this
is multifold: 1) the collection is strongly time-dependent, i.e., with
multiple versions of documents, 2) the contents of documents are about
events that happened at particular time periods, 3) the meanings of semantic
annotations can change over time, and 4) a query representing an information
need can be time-sensitive, a so-called temporal query.
Several major challenges in searching the temporal web will be discussed,
namely, 1) How to understand temporal search intent represented by
time-sensitive queries? 2) How to handle the temporal dynamics of queries
and documents? and 3) How to explicitly model temporal information in
retrieval and ranking models? To this end, we will present current
approaches to the addressed problems as well as outline the directions for
future research.
Ethical Challenges in Large Scale Mobile HCI (04 February, 2013)
Speaker: Alistair Morrison
The launch of 'app stores' on several mobile software platforms is a relatively recent phenomenon, and many HCI researchers have begun to take advantage of these distribution platforms to run human trials and gather data from hundreds of thousands of users. However, this new methodology radically changes participant-researcher relationships and has moved current researcher practice beyond available ethical guidelines. In this talk I will outline the ethical challenges specific to running mass participation mobile software trials. I present a classification scheme for categorising mobile software trials, along with a complementary set of recommended guidelines for each identified category. I encourage feedback and debate, as this work is intended to stimulate discussion towards the creation of a community consensus on ethical practice.
Multicriteria Optimization Approach to Select Images as Passwords in Recognition Based Graphical Authentication Systems (05 February, 2013)
Speaker: Soumyadeb Chowdhury
Recognition-based graphical authentication systems (RBGSs) use images as passwords. The major goal of our research is to investigate the usability and guessability (i.e. vulnerability to written and verbal descriptions) of the different image types, Mikon, doodle, art and object (sports, food, sculptures etc.), when used as passwords in RBGSs. We conducted two longitudinal user studies over a period of 4 months to evaluate the usability (100 users) and guessability based on verbal descriptions (70 users) of these image types when used as passwords in RBGSs. After deriving conclusions based on a statistical analysis of the data, the research question was how to rank the image types on both criteria. Usability and guessability are in conflict when assessing the suitability of an image for use as a password. Since statistical analysis alone does not unambiguously identify the most suitable image type to be used as a password, we present here a new approach which effectively integrates a series of techniques to rank the images, taking into account the conflicting criteria.
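As a toy illustration of why the two criteria cannot simply be collapsed into one, here is a sketch (entirely made-up scores) of ranking image types by Pareto dominance on usability (higher is better) and guessability (lower is better):

```python
scores = {                      # (usability, guessability): hypothetical values
    "Mikon":  (0.80, 0.30),
    "doodle": (0.60, 0.20),
    "art":    (0.55, 0.45),
    "object": (0.90, 0.60),
}

def dominates(a, b):
    """a dominates b if it is at least as usable, no more guessable, and strictly better in one."""
    ua, ga = scores[a]
    ub, gb = scores[b]
    return ua >= ub and ga <= gb and (ua > ub or ga < gb)

for img in scores:
    dominated_by = [other for other in scores if other != img and dominates(other, img)]
    print(img, "dominated by", dominated_by or "nothing")
# With these made-up scores, "art" is dominated; the rest are incomparable trade-offs,
# which is exactly why an explicit multicriteria method is needed to rank them.
```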
ITECH: Web Startup Pitches (06 February, 2013)
Speaker: Leif Azzopardi
ITech Students will be presenting the designs of their web applications. Each team has five minutes to describe their application and its objectives, along with discussing the user personas the app caters for and a walkthrough of the application using wireframes.
Information processing in emergency management environments (12 February, 2013)
Speaker: Stefan Raue
In this talk I will discuss some of my work on information processing in emergency management environments. In particular, I will focus on crowdsourcing techniques to improve the response to adverse events resulting from natural or man-made hazards. I will talk about the information needs of emergency services during the early stages of response, and discuss the information processing activities to which crowdsourcing activities could be beneficial. There are multiple technical, social and ethical challenges arising from the prospect of involving the crowd in large-scale information processing tasks in this time- and safety-critical environment.
Evaluating Bad Query Abandonment in an Iterative SMS-Based FAQ Retrieval System (14 February, 2013)
Speaker: Edwin Thuma
We investigate how many iterations users are willing to tolerate in an iterative Frequently Asked Question (FAQ) system that provides information on HIV/AIDS. This is part of work in progress that aims to develop an automated Frequently Asked Question system that can be used to provide answers on HIV/AIDS related queries to users in Botswana. Our system engages the user in the question answering process by following an iterative interaction approach in order to avoid giving inappropriate answers to the user. Our findings provide us with an indication of how long users are willing to engage with the system. We subsequently use this to develop a novel evaluation metric to use in future developments of the system. As an additional finding, we show that the previous search experience of the users has a significant effect on their future behaviour.
Big Data and how it's influencing the modern computing landscape (18 February, 2013)
Speaker: Prof Triantafillou
Big data is arguably the biggest buzzword to have hit the CS community at large in the last few years.
In this talk I will strive to explain what the big fuss is all about, providing answers to the following questions.
What does "big data" mean?
Why is it important to society and to computing scientists?
What are the essential tools/technologies?
Why does it necessitate a new suite of related technologies?
What are the key open challenges?
Which fields of CS does it cover?
Time permitting, I will overview some of our latest research results.
MultiMemoHome Project Showcase (19 February, 2013)
Speaker: various
This event is the final showcase of research and prototypes developed during the MultiMemoHome Project (funded by EPSRC).
The Black Hole Methodology (19 February, 2013)
Speaker: Wendy Goucher
A Parallel Task Composition Approach to Manycore Programming (20 February, 2013)
Speaker: Ashkan Tousimojarad
Many-core processors have emerged to change the parallel computation world. Efficient utilization of these platforms is a great challenge. The Glasgow Parallel Reduction Machine (GPRM) is a novel, flexible framework for parallel task-composition based manycore programming. We structure programs into task code, written as C++ classes, and communication code, written in a restricted subset of C++ with pure functional semantics and parallel evaluation. Therefore, our approach views programs as parallel compositions of (sequential) tasks.
In this talk I will discuss the GPRM, the virtual machine underlying our framework. I demonstrate the potential using an implementation of a merge sort algorithm on a 64-core Tilera processor, as well as on a conventional Intel quad-core processor. The results show that our approach actually outperforms the OpenMP code, while facilitating the writing of parallel programs.
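This is not GPRM itself, but a small sketch of the task-composition flavour of the benchmark (in Python rather than the C++ framework): the input is split into independent sort tasks, which run in parallel, and their results are then merged.

```python
from concurrent.futures import ProcessPoolExecutor
from heapq import merge
import random

def parallel_mergesort(data, workers=4):
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        sorted_parts = list(pool.map(sorted, parts))     # independent sort tasks
    return list(merge(*sorted_parts))                    # merge the sorted results

if __name__ == "__main__":
    data = [random.randint(0, 1000) for _ in range(100_000)]
    assert parallel_mergesort(data) == sorted(data)
```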
BCS/IET Turing Lecture (20 February, 2013)
Speaker: Suranga Chandratillake
Annual BCS/IET Turing Lecture: see http://conferences.theiet.org/turing/ for full details.
Free registration at: http://events.bcs.org/book/485/
Synopsis:
Armed with a good degree and interested in relatively esoteric extremes of Computer Science, Suranga Chandratillake was all set for an academic career. A combination of events conspired to take him down the industry route instead, and he found himself starting and running his own successful company.
In going through this process he realised just how little his (otherwise excellent) education had prepared him for the challenges of starting a company, building and marketing a product and growing an organisation.
Given the economic and social impact that such endeavour can have, Suranga asks what could be done to better equip those starting down this path today?
During this year’s Turing Lecture, our speaker will cover the background to this 2000 decision and his experience of going into industry versus academia, beginning with Autonomy plc and later the founding and path to growth of his company, blinkx plc.
He will cover the technology developed at both companies including efforts to reduce complexity, increase customer-centricity and the unique challenges of building for consumers.
Suranga will also cover ‘the rest’: the importance of marketing and PR in the technology industry, raising capital, running an IPO and managing the human element (hiring, firing and cultivating people and a culture).
Turning from personal experiences, Suranga will reflect on why this route is important (including the significance of industry on technology progress and its impact on employment and national wealth) as well as how he learnt about things he didn't know before and touch on comparisons between UK and US university degrees.
He will briefly refer to ‘Turing's World: the incredible, pervasive influence of computers on our lives’ and conclude by sharing his thoughts on what more might be done to help create successful technology companies.
Time-Biased Gain (21 February, 2013)
Speaker: Charlie Clark
Time-biased gain provides a unifying framework for information retrieval evaluation, generalizing many traditional effectiveness measures while accommodating aspects of user behavior not captured by these measures. By using time as a basis for calibration against actual user data, time-biased gain can reflect aspects of the search process that directly impact user experience, including document length, near-duplicate documents, and summaries. Unlike traditional measures, which must be arbitrarily normalized for averaging purposes, time-biased gain is reported in meaningful units, such as the total number of relevant documents seen by the user. In work reported at SIGIR 2012, we proposed and validated a closed-form equation for estimating time-biased gain, explored its properties, and compared it to standard approaches. In work reported at CIKM 2012, we used stochastic simulation to numerically approximate time-biased gain, an approach that provides greater flexibility, allowing us to accommodate different types of user behavior and increases the realism of the effectiveness measure. In work reported at HCIR 2012, we extended our stochastic simulation to model the variation between users. In this talk, I will provide an overview of time-biased gain, and outline our ongoing and future work, including extensions to evaluate query suggestion, diversity, and whole-page relevance. This is joint work with Mark Smucker.
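As a rough sketch of the closed-form idea (the parameter values below are illustrative; the actual decay function is calibrated against user data, as described in the papers), the gain from each relevant document is discounted by the probability that the user is still searching when they reach it:

```python
import math

def time_biased_gain(ranked_relevance, time_to_rank, half_life=224.0):
    """ranked_relevance: 1/0 per rank; time_to_rank: expected seconds to reach each rank."""
    tbg = 0.0
    for rel, t in zip(ranked_relevance, time_to_rank):
        decay = math.exp(-t * math.log(2) / half_life)   # P(user still searching at time t)
        tbg += rel * decay       # result is in expected relevant documents actually seen
    return tbg

relevance = [1, 0, 1, 1]         # relevant documents at ranks 1, 3 and 4
times = [10, 40, 75, 120]        # estimated cumulative time (seconds) to reach each rank
print(round(time_biased_gain(relevance, times), 3))
```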
Learn Physics by Programming (22 February, 2013)
Speaker: Scott Walck
I will describe a course for second-year physics students designed to
deepen understanding of basic physics by using a precise, expressive
language to expose the structure of a physical theory. With the
functional programming language Haskell, we use types, higher-order
functions, and referential transparency to encourage clear thinking and
to build data structures appropriate for problems in physics. The
results can be plotted or animated as appropriate.
Model Checking Port-Based Network Access Control for Wireless Networks (26 February, 2013)
Speaker: Yu Lu
With the rapid development of the Internet, the security of network protocols has become a focus of research. The 802.1X standard is the IEEE standard for port-based network access control. The 802.1X standard delivers powerful authentication and data privacy as part of its robust, extensible security framework. It is this strong security, assured authentication, and dependable data protection that has made the 802.1X standard the core ingredient in today's most successful network access control (NAC) solutions. As the central access authentication mechanism, the importance of the IEEE 802.1X protocol's security properties is obvious. Formal methods are crucial tools for software and protocol analysis and verification; they include model checking, logic inference, theorem proving, and so on.
We could use model checking to help analyse security protocols by exhaustively inspecting reachable composite system states in a finite state machine representation of the system. The IEEE 802.1X standard provides port-based network access control for hybrid networking technologies. We describe how the current IEEE 802.1X mechanism for 802.11 wireless networks can be modelled in the PROMELA modelling language and verified using the SPIN model checker. We aim to verify a set of essential security properties of the 802.1X, and also to find out whether the current combination of the IEEE 802.1X and 802.11 standards provide a sufficient level of security.
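The code below is not the PROMELA/SPIN model, just a toy Python illustration of the underlying technique: exhaustively exploring the reachable states of a small finite-state abstraction (the port/verification states here are hypothetical) and checking a safety property in every one of them.

```python
from collections import deque

# Hypothetical supplicant/authenticator abstraction: state = (port, credentials_verified)
transitions = {
    ("blocked", False): [("blocked", True)],                   # credentials verified
    ("blocked", True):  [("open", True), ("blocked", False)],  # authorise / timeout
    ("open", True):     [("blocked", False)],                  # logoff resets the port
}

def violates(state):
    port, verified = state
    return port == "open" and not verified      # open port without verification is unsafe

def check(initial):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if violates(state):
            return f"property violated in {state}"
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return f"property holds in all {len(seen)} reachable states"

print(check(("blocked", False)))
```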
Why am I not running the world? (26 February, 2013)
Speaker: Dave McKay
Inspired by Suranga Chandratillake’s Turing lecture, I want to develop his theme of the “The Boffin Phallacy”. Using wild assertions and examples from my own career, and with no humility whatsoever, I will point out some things that Suranga missed. I will put aside fears of losing my academic friends and alienating academic researchers everywhere, and try to show that a business life is exciting and sexy. Along the way, I hope to suggest some ways that we can turn out computing graduates who will one day run the globe.
NPL Sensors Presentation (27 February, 2013)
Speaker: Carlos Huggins
The National Physical Laboratory provides much of the UK's outward-facing support to science and commerce in the field of metrology, viz. the science and practice of measurement. This covers anything from international work on the definition of fundamental standards, through the realisation of practical equipment that can transfer knowledge along the supply chain, to training and best practice support. Typical and topical examples of these roles will be discussed, covering fields as varied as nuclear power, energy harvesting, climate and science, and the audience will be challenged to answer a question which may be key in achieving impact from their own research work: “do I have a way of convincing a series of strangers to believe and adopt my results?”. The role of the Knowledge Networks team in supporting the Measurement Network, and other networks, in facilitating progress in this type of challenge will be discussed.
Wireless sensor networks for real time particle tracking in inaccessible environments (27 February, 2013)
Speaker: George Maniatis
One of the most difficult problems of contemporary geophysics is the description and prediction of the movement of riverbeds. According to the Lagrangian description of the system, the whole movement can be resolved into the combined result of the movement of individual grains across several time and space scales. Verifying this type of model demands the acquisition of data that a) express the synergistic effect of hydrological and topographical circumstances, b) describe the movement of each grain as a continuous process, especially during events of special interest (such as floods), and c) give representative macroscopic information for the riverbed (synchronous monitoring of many grains). Although many contemporary technologies have been applied (advanced RFID techniques, specialised piezoelectric sensors, sonar, etc.), none of the existing datasets meets all three requirements. The first stage of this project is the development of a wireless sensor able to monitor robustly all the phases of individual grain movement (entrainment, transition, deposition) by correlating measures of both causal and result factors (experienced accelerations and travel-path length/position, respectively). The second stage will be the deployment of a number of sensors installed into artificial and/or natural stones, forming a wireless network of smart-pebble motes that addresses the need for representative macroscopic information. The final stage will be the deployment of this WSN within a monitoring system that will, along with the data concerning the movement of the grains, provide synchronous information about the state of the river (stage discharge, flow velocity, local topography, etc.). This is a challenging application, with constraints posed on all the "aspects" of the WSN (from the motes and the physical layer to the network and finally the application layer). Those constraints are driven by the special characteristics of the system (difficult initial sensor calibration, the demand for robust under-water RF communication, harsh environmental conditions, etc.) and the stochasticity of the process under study (the need for robust event detection algorithms, decision making based on highly variable thresholds, real-time reprogramming for recalibration, etc.).
Pre-interaction Identification By Dynamic Grip Classification (28 February, 2013)
Speaker: Faizuddin Mohd Noor
We present a novel authentication method to identify users as they pick up a mobile device. We use a combination of back-of-device capacitive sensing and accelerometer measurements to perform classification, and obtain increased performance compared to previous accelerometer-only approaches. Our initial results suggest that users can be reliably identified during the pick-up movement before interaction commences.
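A rough sketch of the classification step (synthetic features and an off-the-shelf classifier; the actual sensing pipeline and model in the talk may differ) is to concatenate the back-of-device capacitive readings with accelerometer statistics and train a standard supervised classifier over labelled pick-up samples:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def pickup_sample(user_offset):
    """Synthetic pick-up sample: 8 capacitive pad readings + mean x/y/z acceleration."""
    capacitive = rng.normal(loc=user_offset, scale=0.3, size=8)
    accel = rng.normal(loc=user_offset / 2, scale=0.5, size=3)
    return np.concatenate([capacitive, accel])

X = np.array([pickup_sample(u) for u in (0.0, 1.0, 2.0) for _ in range(50)])
y = np.repeat(["alice", "bob", "carol"], 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([pickup_sample(1.0)]))    # most likely 'bob'
```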
Modelling Time & Demographics in Search Logs (01 March, 2013)
Speaker: Milad Shokouhi
Knowing users' context offers a great potential for personalizing web search results or related services such as query suggestion and query completion. Contextual features cover a wide range of signals; query time, user’s location, search history and demographics can all be regarded as contextual features that can be used for search personalization.
In this talk, we’ll focus on two main questions:
1) How can we use the existing contextual features, in particular time, for improving search results (Shokouhi & Radinsky, SIGIR’12).
2) How can we infer missing contextual features, in particular user-demographics, based on search history (Bi et al., WWW2013).
Our results confirm that (1) contextual features matter and (2) that many of them can be inferred from search history.
Formal Models for Populations of User Activity Patterns and Varieties of Software Structures (05 March, 2013)
Speaker: Oana Andrei
The challenges raised by developing mobile applications come from the way these apps interweave with everyday life and are distributed globally via application centres or stores to a wide range of users. People use an app according to their needs and understanding, therefore one could observe variations in usage frequencies of features or time and duration of use. The same mobile app varies with app settings, mobile device settings, device model or operating system.
For this talk we present work in progress on a formal modelling approach suitable for representing and analysing the user activity patterns and the structural variability of a software system. It is based on a stochastic abstraction of the populations of software in use and the software uses, building upon results from statistical analysis of user activity patterns. One aim of our current research is to design for variability of uses and contexts that mobile software developers may not be able to fully predict. Based on the automatically logged feedback on in-app usage and configurations, inference methods and formal modelling and analysis connect and collaborate to provide information on relevant populations of similar user behaviour and software structure and to evaluate their performance and robustness. This way we can track behavioural changes in the population of users and suggest software improvements to fit new user behaviours and contexts and changes in the user behaviour. The software designers and developers will then (re)consider the design objectives and strategies, create more personalised modules to be incorporated in the software and identify new opportunities to improve the overall user experience. We use a real life case study based on an iOS game to illustrate the concepts.
This talk is based on a joint work with Muffy Calder, Mark Girolami and Matthew Higgs.
Further Adventures with the Raspberry Pi Cloud (05 March, 2013)
Speaker: David White, Jeremy Singer (and L4 project student)
With money from the GU Chancellor's Fund, we have been constructing a scale model of a cloud datacenter out of Raspberry Pi boards. In this presentation, we will give details of the aims of the project, potential deployment in research and teaching contexts, and progress to date.
Scientific Lenses over Linked Data: Identity Management in the Open PHACTS project (11 March, 2013)
Speaker: Alasdair Gray, University of Manchester
The discovery of new medicines requires pharmacologists to interact with a number of information sources ranging from tabular data to scientific papers, and other specialized formats. The Open PHACTS project, a collaboration of research institutions and major pharmaceutical companies, has developed a linked data platform for integrating multiple pharmacology datasets that form the basis for several drug discovery applications. The functionality offered by the platform has been drawn from a collection of prioritised drug discovery business questions created as part of the Open PHACTS project. Key features of the linked data platform are:
1) Domain specific API making drug discovery linked data available for a diverse range of applications without requiring the application developers to become knowledgeable of semantic web standards such as SPARQL;
2) Just-in-time identity resolution and alignment across datasets enabling a variety of entry points to the data and ultimately to support different integrated views of the data;
3) Centrally cached copies of public datasets to support interactive response times for user-facing applications.
Within complex scientific domains such as pharmacology, operational equivalence between two concepts is often context-, user- and task-specific. Existing linked data integration procedures and equivalence services do not take the context and task of the user into account. We enable users of the Open PHACTS platform to control the notion of operational equivalence by applying scientific lenses over linked data. The scientific lenses vary the links that are activated between the datasets, which affects the data returned to the user.
Bio
Alasdair is a researcher in the MyGrid team at the University of Manchester. He is currently working on the Open PHACTS project which is building an Open Pharmacological Space to integrate drug discovery data. Alasdair gained his PhD from Heriot-Watt University, Edinburgh, and then worked as a post-doctoral researcher in the Information Retrieval Group at the University of Glasgow. He has spent the last 10 years working on novel knowledge management projects investigating issues of relating data sets.
Reusing Historical Interaction Data for Faster Online Learning to Rank for IR (12 March, 2013)
Speaker: Anne Schuth
Online learning to rank for information retrieval (IR) holds promise for allowing the development of "self-learning" search engines that can automatically adjust to their users. With the large amount of e.g., click data that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge.
In this paper we investigate whether and how previously collected (historical) interaction data can be used to speed up learning in online learning to rank for IR. We devise the first two methods that can utilize historical data (1) to make feedback available during learning more reliable and (2) to preselect candidate ranking functions to be evaluated in interactions with users of the retrieval system. We evaluate both approaches on 9 learning to rank data sets and find that historical data can speed up learning, leading to substantially and significantly higher online performance. In particular, our preselection method proves highly effective at compensating for noise in user feedback. Our results show that historical data can be used to make online learning to rank for IR much more effective than previously possible, especially when feedback is noisy.
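A much-simplified sketch of the pre-selection idea (hypothetical log format, and a far cruder estimator than the methods in the paper): before exposing candidate rankers to live users, score each one on historical clicks by how highly it would have ranked the documents users actually clicked.

```python
def historical_score(ranker, click_log):
    """ranker: query -> ranked doc list; click_log: (query, clicked_doc) pairs."""
    score = 0.0
    for query, clicked in click_log:
        ranking = ranker(query)
        if clicked in ranking:
            score += 1.0 / (ranking.index(clicked) + 1)   # reciprocal rank of the click
    return score

click_log = [("q1", "d2"), ("q1", "d2"), ("q2", "d5")]
ranker_a = lambda q: {"q1": ["d1", "d2", "d3"], "q2": ["d5", "d4"]}[q]
ranker_b = lambda q: {"q1": ["d2", "d1", "d3"], "q2": ["d4", "d5"]}[q]

# Preselect the candidate with the higher historical score for live evaluation.
print(historical_score(ranker_a, click_log))   # 2 * 1/2 + 1   = 2.0
print(historical_score(ranker_b, click_log))   # 2 * 1   + 1/2 = 2.5
```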
Extremal graphs (12 March, 2013)
Speaker: Patrick Prosser and Alice Miller
GIST Seminar: A Study of Information Management Processes across the Patient Surgical Pathway in NHS Scotland (14 March, 2013)
Speaker: Matt-Mouley Bouamrane
Preoperative assessment is a routine medical screening process to assess a patient's fitness for surgery. Systematic reviews of the evidence have suggested that existing practices are not underpinned by a strong evidence-base and may be sub-optimal.
We conducted a study of information management processes across the patient surgical pathway in NHS Scotland, using the Medical Research Council Complex Intervention Framework and mixed-methods.
Most preoperative services were created in the last 10 years to reduce late theatre cancellations and increase the ratio of day-case surgery. Two health boards have set up electronic preoperative information systems, and stakeholders at these services reported overall improvements in processes. General Practitioners' (GPs) referrals are now done electronically, and GPs considered electronic referrals a substantial improvement. GPs reported minimal interaction with preoperative services. Post-operative discharge information was often considered unsatisfactory.
Conclusion: Although some substantial progress has been made in recent years towards improving information transfer and sharing among care providers within the NHS surgical pathway, there remains considerable scope for improvement at the interface between services.
Dynamic analysis tools considered difficult (to write) (15 March, 2013)
Speaker: Stephen Kell
Dynamic analysis tools are widely used for both profiling and bug-finding, but are difficult to develop. Portable approaches rely on instrumentation, which is complex to specify and difficult to re-use. I will give an overview of the DiSL and FRANC systems which address (respectively) these two difficulties, borrowing concepts from aspect-oriented and event-driven programming. I will also outline some unfortunate properties of the Java platform which, as revealed by bitter experience, make it especially difficult to achieve *high-coverage* dynamic analysis tools.
Query Classification for a Digital Library (18 March, 2013)
Speaker: Deirdre Lungley
The motivation for our query classification is the insight it gives the digital content provider into what his users are searching for and hence how his collection could be extended. This talk details two query classification methodologies we have implemented as part of the GALATEAS project (http://www.galateas.eu/): one log-based, the other using wikified queries to learn a Labelled LDA model. An analysis of their respective classification errors indicates the method best suited to particular category groups.
Proactive Social Media Use of Emergency Authorities (19 March, 2013)
Speaker: Preben Bonnen & Martin Marcher
In the summer of 2012, the Danish Forum for Civil Protection and Emergency Planning / Forum for Samfundets Beredskab (FSB) started a large project focusing on the authorities' proactive use of social media, primarily Facebook and Twitter. The inspiration came from the Norwegian and Swedish police, who not only use Facebook and Twitter proactively, but have also given thorough consideration to the possibilities and prospects of using social media.
The rationale behind launching the analysis, and later that year a seminar in the Danish Parliament on 2 November 2012, was the growing challenge authorities face in relation to both the press and social media. In all cases there is an expectation of quick information, and even more so in the event of a major incident, where questions and the need for information multiply. But when questions are many, the information from the authorities is typically moderate. That may change with proactive use of social media.
Basically, there is little to prevent authorities from using social media to support societal preparedness. For example, the police can use social media to convey important information to the public, create campaigns targeting specific social segments, communicate enquiries regarding criminals or missing persons, and issue traffic warnings. Besides reaching a target audience that may not usually be in dialogue with the police, there is a good opportunity to increase dialogue with the general public, for instance through chats on issues chosen by citizens themselves, on topics they find relevant within their own community. In time, police presence on social media will come to be expected as a normal part of their everyday job. Preben Bonnén and Martin Marcher from the Forum for Civil Protection and Emergency Planning (FSB) will give a detailed presentation of the opportunities and perspectives that social media offer authorities in societal preparedness, and of the extent to which those opportunities are being taken up.
Using formal stochastic models to guide decision making -- Should I fix this problem now or in 3 hours? (19 March, 2013)
Speaker: Michele Sevegnani
NATS is the UK's main air navigation service provider. Its control centre in Prestwick constantly monitors the status of its infrastructure via thousands of sensors situated in numerous radar and communication sites across the UK. The size and complexity of this system often make it difficult to interpret the sensed data and impossible to predict the system's future behaviour.
In this talk, we present on-going work in which a stochastic model is used to guide decision making. In particular, we will show a prototype web-app based on the formal model that could allow the engineering team in the control room to perform stochastic model checking in a simple and intuitive way, without prior knowledge of formal methods. The analysis results can then be used to schedule, prioritise and optimise maintenance, without affecting safety.
Engineering Adaptive Software Systems (19 March, 2013)
Speaker: Dr Arosha Bandara
Adaptive software systems have been the focus of significant research activity due to their promise of addressing some of the complexity challenges associated with large software-intensive systems. In 2003, Kephart and Chess published their vision of autonomic computing, which aimed to address some of the challenges of software complexity. In essence, they proposed that software architectures should incorporate a layer, analogous to the autonomic nervous system, that could adapt the behaviour of the system to meet particular quality attributes (e.g., security, usability, etc.). The challenges of engineering such systems encompass a range of computing disciplines, including requirements engineering, software architectures and usability. This talk will explore these challenges, drawing on work being done at The Open University in the areas of adaptive user interfaces, information security and privacy.
TechMeetup (27 March, 2013)
Speaker: Jason Frame & Iain Watt
TechMeetup Glasgow (http://techmeetup.co.uk/ ) is back on the 5th Floor of the School of Computing this evening from 6:30pm.
The talks are:
Brain Rules - Iain Watt
In 2009 Dr. John Medina gave us 12 "Brain Rules" - what scientists know for sure about how our brains work.
In this talk I'll ask you to consider how we as technologists might take advantage of some of these "brain rules" to be happier and more productive in our creative endeavours.
A JavaScript Extravaganza - Jason Frame
There'll be beer and pizza as usual and plenty of time before, between, and after the talks to catch up on the latest tech news & gossip. As ever, the event is free and no sign-up is necessary.
TechMeetup is made possible by the amazing financial support from the University of Glasgow, NewContext, ScottLogic, SkyScanner and small donations from community members. Thank you all.
[GIST] Talk -- Shape-changing Displays: The next revolution in display technology? (28 March, 2013)
Speaker: Dr Jason Alexander
Shape-changing interfaces physically mutate their visual display surface to better represent on-screen content, provide an additional information channel, and facilitate tangible interaction with digital content. This talk will preview the current state-of-the-art in shape-changing displays, discuss our current work in this area, and explore the grand challenges in this field. The talk will include a hardware demonstration of one such shape-changing device, a Tilt Display.
Bio:
Jason is a lecturer in the School of Computing and Communications at Lancaster University. His primary research interests are in Human-Computer Interaction, with a particular interest in developing the next generation of interaction techniques. His recent research is hardware-driven, combining tangible interaction and future display technologies. He was previously a post-doctoral researcher in the Bristol Interaction and Graphics (BIG) group at the University of Bristol. Before that he was a Ph.D. student in the HCI and Multimedia Lab at the University of Canterbury, New Zealand. More information can be found at http://www.scc.lancs.ac.uk/~jason/.
Flexible models for high-dimensional probability distributions (04 April, 2013)
Speaker: Iain Murray
Statistical modelling often involves representing high-dimensional probability distributions. The textbook baseline methods, such as mixture models (non-parametric Bayesian or not), often don’t use data efficiently, whereas the methods proposed in the machine learning literature, such as Gaussian process density models and undirected neural network models, are often too computationally expensive to use. Using a few case-studies, I will argue for increased use of flexible autoregressive models as a strong baseline for general use.
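As a rough illustration of the autoregressive idea, the sketch below factorises a joint density into one conditional per dimension, each a Gaussian whose mean is a linear function of the preceding dimensions; the linear-Gaussian choice and all names are assumptions for illustration, not the specific models advocated in the talk.

    # Toy autoregressive density estimator: p(x) = prod_d p(x_d | x_<d),
    # with each conditional a linear-Gaussian fit by least squares.
    # Illustrative sketch only; not the models discussed in the talk.
    import numpy as np

    def fit_autoregressive(X):
        """Fit one linear-Gaussian conditional per dimension of X (n x D)."""
        n, D = X.shape
        params = []
        for d in range(D):
            A = np.hstack([X[:, :d], np.ones((n, 1))])   # predecessors + bias
            w, *_ = np.linalg.lstsq(A, X[:, d], rcond=None)
            resid = X[:, d] - A @ w
            params.append((w, resid.var() + 1e-6))
        return params

    def log_density(x, params):
        """Log p(x) under the fitted autoregressive model."""
        logp = 0.0
        for d, (w, sigma2) in enumerate(params):
            mu = np.append(x[:d], 1.0) @ w
            logp += -0.5 * (np.log(2 * np.pi * sigma2) + (x[d] - mu) ** 2 / sigma2)
        return logp

    X = np.random.randn(500, 5)
    model = fit_autoregressive(X)
    print(log_density(X[0], model))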
A hierarchy related to interval orders (16 April, 2013)
Speaker: Sergey Kitaev
A partially ordered set (poset) is an interval order if it is isomorphic to some set of intervals on the real line ordered by left-to-right precedence. Interval orders are important in mathematics, computer science, engineering and the social sciences. For example, complex manufacturing processes are often broken into a series of tasks, each with a specified starting and ending time. Some of the tasks are not time-overlapping, so at the completion of the first task, all resources associated with that task can be used for the following task. On the other hand, if two tasks have overlapping time periods, they compete for resources and thus can be viewed as conflicting tasks.
A poset is said to be (2+2)-free if it contains no two disjoint 2-element chains whose elements are pairwise incomparable across the chains, that is, no induced subposet isomorphic to 2+2. In 1970, Fishburn proved that (2+2)-free posets are precisely the interval orders. Recently, Bousquet-Mélou, Claesson, Dukes, and Kitaev introduced ascent sequences, which not only allowed us to enumerate interval orders, but also to connect them to other combinatorial objects, namely to Stoimenow's matchings, to certain upper triangular matrices, and to certain pattern-avoiding permutations (a very active area of research these days). A host of papers by various authors has followed this initial paper.
In this talk, I will review some of the results from these papers and will discuss a hierarchy of objects related to interval orders.
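To make the definition concrete, here is a small brute-force sketch (illustrative only, with assumed names and example posets) that tests whether a finite poset, given by its strict order relation, contains an induced 2+2 and is therefore, by Fishburn's theorem, not an interval order.

    # Brute-force check that a finite poset is (2+2)-free, i.e. an interval
    # order by Fishburn's theorem.  'less' holds the strict order as pairs.
    # Illustrative sketch only.
    from itertools import combinations

    def is_interval_order(elements, less):
        less = set(less)
        def incomparable(a, b):
            return a != b and (a, b) not in less and (b, a) not in less
        # Look for two disjoint 2-element chains a<b and c<d whose elements
        # are pairwise incomparable across the chains (an induced 2+2).
        chains = [(a, b) for a in elements for b in elements if (a, b) in less]
        for (a, b), (c, d) in combinations(chains, 2):
            if len({a, b, c, d}) == 4 and all(
                incomparable(x, y) for x in (a, b) for y in (c, d)
            ):
                return False
        return True

    # Two disjoint 2-chains (the poset 2+2) is not an interval order:
    print(is_interval_order([1, 2, 3, 4], [(1, 2), (3, 4)]))                 # False
    # A 4-element chain (given with its transitive closure) is:
    print(is_interval_order([1, 2, 3, 4],
                            [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]))  # True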
Optimizing Multicore Java Virtual Machines (17 April, 2013)
Speaker: Khaled Alnowaiser
The Java Virtual Machine (JVM) consumes a significant portion of its execution time performing internal services such as garbage collection and optimising compilation. Multicore processors offer the potential to reduce JVM service overhead by utilising the parallel hardware. However, JVM developers face many challenges in adapting these services to achieve optimal performance. This talk will motivate and discuss multicore garbage collection performance and some behavioural observations of the OpenJDK HotSpot JVM. We will propose some potential solutions to JVM performance optimisation.
Causality (26 April, 2013)
Speaker: Neil McDonnell
There has been a significant amount of work within Analytic Philosophy directed at understanding our concept of Causation. The central question is: what are the conditions that must obtain in order that one thing be considered the cause of another? Hume was famously skeptical on this question but David Lewis, an ardent Humean, made some substantial breakthroughs in his 1973 Counterfactual Analysis of Causation. This analysis forms the de facto standard test for causation in certain legal contexts and has had an enormous impact on the philosophical literature and beyond. Recently, Computer Scientists Joe Halpern and Judea Pearl adapted a central insight of Lewis's analysis into their account of causal modelling for the computer sciences.
In this paper I will introduce the Lewisian concept of Causation, discuss some problems for it that are the object of my thesis, and then tie that to the work of Judea Pearl in particular.
Entity Linking for Semantic Search (29 April, 2013)
Speaker: Edgar Meij
Semantic annotations have recently received renewed interest with the explosive increase in the amount of textual data being produced, the advent of advanced NLP techniques, and the maturing of the web of data. Such annotations hold the promise for improving information retrieval algorithms and applications by providing means to automatically understand the meaning of a piece of text. Indeed, when we look at the level of understanding that is involved in modern-day search engines (on the web or otherwise), we come to the obvious conclusion that there is still a lot of room for improvement. Although some recent advances are pushing the boundaries already, information items are still retrieved and ordered mainly using their textual representation, with little or no knowledge of what they actually mean. In this talk I will present my recent and ongoing work, which addresses the challenges associated with leveraging semantic annotations for intelligent information access. I will introduce a recently proposed method for entity linking and show how it can be applied to several tasks related to semantic search on collections of different types, genres, and origins.
The Hospitals/Residents problem with Free pairs (30 April, 2013)
Speaker: Augustine Kwanashie
In the classical Hospitals/Residents problem, a blocking pair exists with respect to a matching if both agents would be better off by coming together, rather than remaining with their partners in the matching (if any). However blocking pairs that exist in theory need not undermine a matching in practice. The absence of social ties between agents may cause a lack of awareness about the existence of blocking pairs in practice. We define the Hospitals/Residents problem with Free pairs (HRF) in which a subset of acceptable resident-hospital pairs are identified as free. This means that they can belong to a matching M but they can never block M. Free pairs essentially correspond to residents and hospitals that do not know one another. Relative to a relaxed stability definition for HRF, called local stability, we show that locally stable matchings can have different sizes and that the problem of finding a maximum locally stable matching is NP-hard, though approximable within 3/2. Furthermore we give polynomial time algorithms for two special cases of the problem. This is joint work with David Manlove.
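A minimal sketch of the underlying idea, under an assumed reading of local stability (a pair blocks only if it is not free and both agents would improve); the encoding, names and example instance are illustrative and not the paper's formal definitions.

    # Toy check for "no non-free blocking pair" in a Hospitals/Residents
    # instance.  Preference lists map each agent to a ranked list; 'free'
    # holds (resident, hospital) pairs that can never block.  Illustrative
    # reading of local stability only.

    def prefers(pref_list, a, b):
        """True if a is ranked strictly before b (absence counts as worst)."""
        ra = pref_list.index(a) if a in pref_list else len(pref_list)
        rb = pref_list.index(b) if b in pref_list else len(pref_list)
        return ra < rb

    def is_locally_stable(res_pref, hos_pref, capacity, matching, free):
        assigned = {h: [r for r, hh in matching.items() if hh == h] for h in hos_pref}
        for r, hs in res_pref.items():
            for h in hs:
                if (r, h) in free or matching.get(r) == h:
                    continue
                r_wants = matching.get(r) is None or prefers(hs, h, matching[r])
                h_wants = (len(assigned[h]) < capacity[h] or
                           any(prefers(hos_pref[h], r, r2) for r2 in assigned[h]))
                if r_wants and h_wants:
                    return False          # (r, h) is a non-free blocking pair
        return True

    res_pref = {'r1': ['h1', 'h2'], 'r2': ['h1']}
    hos_pref = {'h1': ['r2', 'r1'], 'h2': ['r1']}
    capacity = {'h1': 1, 'h2': 1}
    matching = {'r1': 'h2', 'r2': 'h1'}
    print(is_locally_stable(res_pref, hos_pref, capacity, matching, free=set()))  # True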
Sensing Infrastructure for a mini-Smart City within SoCS (03 May, 2013)
Speaker: Craig Macdonald and Dyaa Albakour
In this talk, we will describe our motivations and plans to deploy a sensing infrastructure within SAWB. In particular, we will describe how a mini-Smart city environment fits within wider initiatives, such as the University's sensor systems research area, and the SMART FP7 project. Indeed, such Smart city environments will facilitate information access and search for real-world events. We will then discuss plans for deploying visual sensors within SAWB, describing the proposed locations, the analysis that will be performed and the protection policies implemented.
Fast and Reliable Online Learning to Rank for Information Retrieval (06 May, 2013)
Speaker: Katja Hofmann
Online learning to rank for information retrieval (IR) holds promise for allowing the development of "self-learning search engines" that can automatically adjust to their users. With the large amount of e.g., click data that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge.
In this talk I will present my recent work, which addresses the challenges posed by learning from natural user interactions. First, I will detail a new method, called Probabilistic Interleave, for inferring user preferences from users' clicks on search results. I show that this method allows unbiased and fine-grained ranker comparison using noisy click data, and that this is the first such method that allows the effective reuse of historical data (i.e., collected for previous comparisons) to infer information about new rankers. Second, I show that Probabilistic Interleave enables new online learning to rank approaches that can reuse historical interaction data to speed up learning by several orders of magnitude, especially under high levels of noise in user feedback. I conclude with an outlook on research directions in online learning to rank for IR, that are opened up by our results.
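As a loose illustration of interleaved comparison in general, the sketch below implements simple team-draft interleaving and click crediting; it is not the Probabilistic Interleave method described in the talk, and all names and data are assumed.

    # Toy team-draft interleaving: merge two rankings, remember which ranker
    # contributed each shown document, and credit clicks to that ranker.
    # Illustrates interleaved comparison in general, not Probabilistic Interleave.
    import random

    def team_draft_interleave(rank_a, rank_b, length, rng=random):
        shown, team_of = [], {}
        count = {'A': 0, 'B': 0}
        while len(shown) < length:
            # The ranker with fewer picks so far goes next; break ties randomly.
            first = 'A' if (count['A'] < count['B'] or
                            (count['A'] == count['B'] and rng.random() < 0.5)) else 'B'
            placed = False
            for team in (first, 'B' if first == 'A' else 'A'):
                ranking = rank_a if team == 'A' else rank_b
                doc = next((d for d in ranking if d not in team_of), None)
                if doc is not None:
                    shown.append(doc)
                    team_of[doc] = team
                    count[team] += 1
                    placed = True
                    break
            if not placed:          # both rankings exhausted
                break
        return shown, team_of

    def credit(clicked, team_of):
        wins = {'A': 0, 'B': 0}
        for d in clicked:
            if d in team_of:
                wins[team_of[d]] += 1
        return wins

    shown, team_of = team_draft_interleave(['d1', 'd2', 'd3'], ['d3', 'd4', 'd1'], length=4)
    print(shown, credit(['d3'], team_of))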
Funding for Academic-Business Collaboration (10 May, 2013)
Speaker: Stephen Marshall and Elwood Vogt
This talk will cover the range of funding available, from First Step Awards, which provide up to £5,000 to buy out an academic’s time spent on a small project with a Scottish SME, to the University’s IAA (Impact Acceleration Account) and Knowledge Exchange Fund, which can provide up to £30,000 to support a range of KE interventions.
Personality Computing (13 May, 2013)
Speaker: Alessandro Vinciarelli
Personality is one of the driving factors behind everything we do and experience in life. During the last decade, the computing community has been showing an ever-increasing interest in this psychological construct, especially when it comes to efforts aimed at making machines socially intelligent, i.e. capable of interacting with people in the same way as people do. This talk will show the work being done in this area at the School of Computing Science. After an introduction to the concept of personality and its main applications, the presentation will illustrate experiments on speech-based automatic perception and recognition. Furthermore, the talk will outline the main issues and challenges still open in the domain.
Discovering, Modeling, and Predicting Task-by-Task Behaviour of Search Engine Users (20 May, 2013)
Speaker: Salvatore Orlando
Users of web search engines are increasingly issuing queries to accomplish their daily tasks (e.g., “finding a recipe”, “booking a flight”, “reading online news”, etc.). In this work, we propose a two-step methodology for discovering latent tasks that users try to perform through search engines. Firstly, we identify user tasks from individual user sessions stored in query logs. In our vision, a user task is a set of possibly non-contiguous queries (within a user search session), which refer to the same need. Secondly, we discover collective tasks by aggregating similar user tasks, possibly performed by distinct users. To discover tasks, we propose to adopt clustering algorithms based on novel query similarity functions, in turn obtained by exploiting specific features, and both unsupervised and supervised learning approaches. All the proposed solutions were evaluated on a manually-built ground-truth.
Furthermore, we introduce the Task Relation Graph (TRG) as a representation of users' search behavior from a task-by-task perspective, exploiting the collective tasks obtained so far. The task-by-task behavior is captured by weighting the edges of the TRG with a relatedness score computed between pairs of tasks, as mined from the query log. We validated our approach on a concrete application, namely a task recommender system, which suggests related tasks to users on the basis of the task predictions derived from the TRG. Finally, we showed that the task recommendations generated by our technique are beyond the reach of existing query suggestion schemes, and that our solution is able to recommend tasks that users will likely perform in the near future.
Work in collaboration with Claudio Lucchese, Gabriele Tolomei, Raffaele Perego, and Fabrizio Silvestri.
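As a toy illustration of the first step, grouping a session's queries into latent tasks, the sketch below clusters queries using a Jaccard similarity threshold on query terms; the threshold, similarity function and example queries are assumptions, whereas the methods above use richer features and learned similarity functions.

    # Toy task discovery within one session: greedy single-link grouping of
    # queries whose term overlap (Jaccard) exceeds a threshold.
    # Illustrative only.

    def jaccard(q1, q2):
        a, b = set(q1.lower().split()), set(q2.lower().split())
        return len(a & b) / len(a | b) if a | b else 0.0

    def cluster_session(queries, threshold=0.3):
        tasks = []                       # each task is a list of queries
        for q in queries:
            best = max(tasks, key=lambda t: max(jaccard(q, p) for p in t), default=None)
            if best is not None and max(jaccard(q, p) for p in best) >= threshold:
                best.append(q)
            else:
                tasks.append([q])
        return tasks

    session = ["cheap flights rome", "rome hotel deals",
               "pasta carbonara recipe", "flights rome july"]
    print(cluster_session(session))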
References:
[1] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Identifying Task-based Sessions in Search Engine Query Logs". Fourth ACM International Conference on Web Search and Data Mining (WSDM 2011), Hong Kong, February 9-12, 2011.
[2] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Discovering Tasks from Search Engine Query Logs". To appear in ACM Transactions on Information Systems (TOIS).
[3] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Modeling and Predicting the Task-by-Task Behavior of Search Engine Users". To appear in Proc. OAIR 2013, International Conference in the RIAO series.
Interdependence and Predictability of Human Mobility and Social Interactions (23 May, 2013)
Speaker: Mirco Musolesi
The study of the interdependence of human movement and social ties of individuals is one of the most interesting research areas in computational social science. Previous studies have shown that human movement is predictable to a certain extent at different geographic scales. One of the open problems is how to improve the prediction exploiting additional available information. In particular, one of the key questions is how to characterise and exploit the correlation between movements of friends and acquaintances to increase the accuracy of the forecasting algorithms.
In this talk I will discuss the results of our analysis of the Nokia Mobile Data Challenge dataset, showing that, by means of multivariate nonlinear predictors, it is possible to exploit mobility data of friends in order to improve user movement forecasting. This can be seen as a process of discovering correlation patterns in networks of linked social and geographic data. I will also show how mutual information can be used to quantify this correlation, and I will demonstrate how to use this quantity to select individuals with correlated mobility patterns in order to improve movement prediction. Finally, I will show how exploiting data about friends dramatically improves prediction compared with using information from people who have no social ties with the user.
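A minimal sketch of the mutual-information idea, computed between two discretised location sequences; the location labels and data are purely illustrative.

    # Toy sketch: mutual information between a user's discretised location
    # sequence and a friend's, as a measure of how much the friend's
    # whereabouts tell us about the user's.  Illustrative data only.
    from collections import Counter
    from math import log2

    def mutual_information(xs, ys):
        n = len(xs)
        px, py = Counter(xs), Counter(ys)
        pxy = Counter(zip(xs, ys))
        return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())

    user   = ['home', 'work', 'work', 'gym', 'home', 'work']
    friend = ['home', 'work', 'work', 'pub', 'home', 'work']
    print(round(mutual_information(user, friend), 3))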
On List Colouring and List Homomorphism of Permutation and Interval Graphs (28 May, 2013)
Speaker: Jessica Enright
List colouring is an NP-complete decision problem even if the total number of colours is three. It is hard even on planar bipartite graphs. I give a sketch of a polynomial-time algorithm for solving list colouring of permutation graphs with a bounded total number of colours. This generalises to a polynomial-time algorithm that solves the list-homomorphism problem to any fixed target graph for a large class of input graphs including all permutation and interval graphs.
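For concreteness, here is a brute-force sketch of the general list colouring problem (exponential in the worst case, as expected for an NP-complete problem); the polynomial-time algorithm for permutation graphs discussed in the talk is not reproduced here, and the names and example graph are assumed.

    # Brute-force list colouring: each vertex must receive a colour from its
    # own list, with adjacent vertices coloured differently.  Illustrative only.
    from itertools import product

    def list_colourable(vertices, edges, lists):
        for assignment in product(*(lists[v] for v in vertices)):
            colour = dict(zip(vertices, assignment))
            if all(colour[u] != colour[v] for u, v in edges):
                return colour
        return None

    vertices = ['a', 'b', 'c']
    edges = [('a', 'b'), ('b', 'c'), ('a', 'c')]       # a triangle
    lists = {'a': [1, 2], 'b': [1, 2], 'c': [1, 2]}    # only two colours each
    print(list_colourable(vertices, edges, lists))     # None: not list-colourable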
New Group Medley (31 May, 2013)
Speaker: Phil Trinder
Abstract: We are a new group joining the department, and will present a series of 5 minute talks outlining some of our research. Topics are as diverse as:
· Researching Reliable Performance-Portable Parallel Computing – Phil Trinder
· An Overview of Autonomous Mobile Programs - Natalia Chechina
· Elegance – Joe Davidson
· Scalable Persistent Storage for Erlang – Amir Ghaffari
· The Design and Implementation of Scalable Parallel Haskell – Malak Aljabri
· Profiling Distributed-Memory Parallel Haskell – Maj Al Saeed
A study of Information Management in the Patient Surgical Pathway in NHS Scotland (03 June, 2013)
Speaker: Matt-Mouley Bouamrane
We conducted a study of information management processes across the patient surgical pathway in NHS Scotland. While the majority of General Practitioners (GPs) consider electronic information systems as an essential and integral part of their work during the patient consultation, many were not fully satisfied with the functionalities of these systems. A majority of GPs considered that the national eReferral system streamlined referral processes. Almost all GPs reported marked variability in the quality of discharge information. Preoperative processes vary significantly across Scotland, with most services using paper based systems. There is insufficient use made of information provided through the patient electronic referral and a considerable duplication of effort with the work already performed in primary care. Three health-boards have implemented electronic preoperative information systems. These have transformed clinical practices and facilitated communication and information-sharing among the multi-disciplinary team and within the health boards. Substantial progress has been made towards improving information transfer and sharing within the surgical pathway in recent years but there remains scope for further improvements at the interface between services.
On being the CSA for Scottish Government (04 June, 2013)
Speaker: Muffy Calder
An overview of what I do in "the other job".
The Matrix Mechanics of Modern Economies (07 June, 2013)
Speaker: Dave Zachariah
In this talk we will try to give answers to the questions "What is money?" and "What is the source of economic value?" using concepts from matrix algebra. We will also show how these tools provide a framework for understanding income distributions in market economies, the nature of government surpluses and sector balances, the fallacy of austerity and persistent trade surpluses, and the wealth of nations.
Information Visualization for Knowledge Discovery (13 June, 2013)
Speaker: Professor Ben Shneiderman, University of Maryland - College Park
This talk reviews growing commercial success stories such as www.spotfire.com and www.smartmoney.com/marketmap, as well as emerging products such as www.hivegroup.com.
Full information on the talk is available on the University events listings.
The CloPeMa project: robotic Clothes Perception and Manipulation. (20 June, 2013)
Speaker: Computer Vision and Graphics Group
(Remember the big bundle of blue robot that sat in the Alwyn Williams building foyer? This is the story of what happened to that….)
We present current progress in CloPeMa, a 3 year open-source EU-FP7 research project which aims to advance the state of the art in the autonomous perception and manipulation of fabrics, textiles and garments. The goal of CloPeMa is to build a robot system that will learn to manipulate, perceive and fold a variety of textiles.
The novelty and uniqueness of this project is due chiefly to its generality. Various garments will be presented in a random pile on an arbitrary background, and novel ways of manipulating them (sorting, folding, etc.) will be learned on demand in a real-life dynamic environment. A key requirement is to remove any specific restrictions on how textiles can be given to and handled by the robot; this is expected to lead to greater robustness and reliability, and to widen the field of robotic manipulation applications.
CloPeMa's main objective is closer integration of perception, action, learning, and reasoning. Perception means integrated haptic and visual sensing, recognition, and support for a perception-action reactive cycle. Actions will be performed by a cooperating pair of robotic hands, part of the CloPeMa experimental testbed that we have here in Glasgow. The hands will combine state-of-the-art solutions for manipulation of limp material: variable strength grip on a non-rigid hand mechanism using smart materials and tactile sensors with large areas of “artificial skin”.
Members of the Computer Vision and Graphics Group are developing the primary vision system for the Clopema robot and this talk will outline the current state of this system, overall progress to date in CloPeMa and plans for on-going and future developments using the CloPeMa robot facility.
[GIST] Talk -- The Value of Visualization for Exploring and Understanding Data (11 July, 2013)
Speaker: Prof John Stasko
Investigators have an ever-growing suite of tools available for analyzing and understanding their data. While techniques such as statistical analysis, machine learning, and data mining all have benefits, visualization provides an additional unique set of capabilities. In this talk I will identify the particular advantages that visualization brings to data analysis beyond other techniques, and I will describe the situations when it can be most beneficial. To help support these arguments, I'll present a number of provocative examples from my own work and others'. One particular system will demonstrate how visualization can facilitate exploration and knowledge acquisition from a collection of thousands of narrative text documents, in this case, reviews of wines from Tuscany.
The Use of Correspondence Analysis in Information Retrieval (11 July, 2013)
Speaker: Dr Taner Dincer
This presentation will introduce the application of Correspondence Analysis (CA) to Information Retrieval. CA is a well-established multivariate, statistical, exploratory data analysis technique. Multivariate data analysis techniques usually operate on a rectangular array of real numbers called a data matrix, whose rows represent r observations (for example, r terms/words in documents) and whose columns represent c variables (for example, c documents), resulting in an r×c term-by-document matrix. Multivariate data analysis refers to analysing the data in a manner that takes into account the relationships among observations and also among variables. In contrast to univariate statistics, it is concerned with the joint nature of measurements. The objective of exploratory data analysis is to explore the relationships among objects and among variables over measurements for the purpose of visual inspection. In particular, by using CA one can visually study the “Divergence From Independence” (DFI) among observations and among variables.
For Information Retrieval (IR), CA can serve three different uses: 1) as an analysis tool to visually inspect the results of information retrieval experiments, 2) as a basis to unify the probabilistic approaches to the term weighting problem, such as Divergence From Randomness and Language Models, and 3) as a term weighting model itself, "term weighting based on measuring divergence from independence". In this presentation, the uses of CA for these three purposes are exemplified.
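As a rough illustration of use 3), the sketch below computes standardised residuals of a term-by-document count matrix against the counts expected under independence, which are the quantities CA decomposes; it is one simple illustrative variant, not necessarily the weighting model proposed in the talk, and the counts are assumed.

    # Toy divergence-from-independence weights for a term-by-document count
    # matrix: compare each observed count with the count expected if terms
    # and documents were independent, via standardised residuals.
    import numpy as np

    def dfi_weights(counts):
        counts = np.asarray(counts, dtype=float)
        total = counts.sum()
        expected = np.outer(counts.sum(axis=1), counts.sum(axis=0)) / total
        return (counts - expected) / np.sqrt(expected)

    # rows: terms, columns: documents
    counts = [[10, 0, 2],
              [ 3, 5, 4],
              [ 0, 7, 1]]
    print(np.round(dfi_weights(counts), 2))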
[SICSA DVF] Language variation and influence in social media (15 July, 2013)
Speaker: Dr. Jacob Eisenstein
Languages vary by speaker and situation, and change over time. While variation and change are inhibited in written corpora such as news text, they are endemic to social media, enabling large-scale investigation of language's social and temporal dimensions. The first part of this talk will describe a method for characterizing group-level language differences, using the Sparse Additive Generative Model (SAGE). SAGE is based on a re-parametrization of the multinomial distribution that is amenable to sparsity-inducing regularization and facilitates joint modeling across many author characteristics. The second part of the talk concerns change and influence. Using a novel dataset of geotagged word counts, we induce a network of linguistic influence between cities, aggregating across thousands of words. We then explore the demographic and geographic factors that drive spread of new words between cities. This work is in collaboration with Amr Ahmed, Brendan O'Connor, Noah A. Smith, and Eric P. Xing.
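A minimal sketch of the SAGE-style parameterisation, in which a group's word distribution is a softmax of background log-probabilities plus a sparse deviation vector, so most words stay at their background rate; the vocabulary and values are assumptions, and fitting the deviations with a sparsity-inducing penalty is not shown.

    # Sketch of a SAGE-style word distribution: background log-probabilities
    # plus a sparse group-specific deviation, renormalised by a softmax.
    # Illustrative values only.
    import numpy as np

    def sage_distribution(background_logprob, deviation):
        scores = background_logprob + deviation
        scores -= scores.max()                  # numerical stability
        probs = np.exp(scores)
        return probs / probs.sum()

    vocab = ['the', 'pizza', 'gig', 'haggis']
    background = np.log(np.array([0.70, 0.10, 0.10, 0.10]))
    deviation = np.array([0.0, 0.0, 0.0, 1.5])   # one group over-uses 'haggis'
    print(dict(zip(vocab, np.round(sage_distribution(background, deviation), 3))))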
Biography
Jacob Eisenstein is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He works on statistical natural language processing, focusing on social media analysis, discourse, and latent variable models. Jacob was a Postdoctoral researcher at Carnegie Mellon and the University of Illinois. He completed his Ph.D. at MIT in 2008, winning the George M. Sprowls dissertation award.
How cost affects search behaviour (21 July, 2013)
Speaker: Leif Azzopardi
In this talk, I will run through the work I will be presenting at SIGIR on "How cost affects search behavior". The empirical analysis is motivated and underpinned using the Search Economic Theory that I proposed at SIGIR 2011.
Toward Models and Measures of Findability (21 July, 2013)
Speaker: Colin Wilkie
In this 10 minute talk, I will provide an overview of the project I am working on, which is about Findability, review some of the existing models and measures of findability, and then outline the models that I have been working on.
Quantum Language Models (19 August, 2013)
Speaker: Alessandro Sordoni
A joint analysis of both Vector Space and Language Models for IR using the mathematical framework of Quantum Theory revealed how both models allocate the space of density matrices. A density matrix is shown to be a general representational tool capable of leveraging capabilities of both VSM and LM representations, thus paving the way for a new generation of retrieval models. The new approach is called Quantum Language Modeling (QLM) and has shown its efficiency and effectiveness in modeling term dependencies for Information Retrieval.
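As a toy illustration of the representation, the sketch below builds a density matrix as a mixture of rank-one projectors over (here, one-hot) term vectors; with orthonormal vectors it reduces to an ordinary term-probability model, while non-orthogonal vectors can encode term dependencies. All names and counts are assumed, and this is not the estimation procedure of QLM.

    # Toy density-matrix representation of a query: a weighted mixture of
    # rank-one projectors built from term vectors.  Illustrative only.
    import numpy as np

    def density_matrix(vectors, weights):
        weights = np.asarray(weights, dtype=float)
        weights /= weights.sum()
        return sum(w * np.outer(v, v) for v, w in zip(vectors, weights))

    vocab = ['quantum', 'language', 'model']
    one_hot = np.eye(len(vocab))
    rho = density_matrix(one_hot, weights=[2, 1, 1])   # query term counts
    print(np.round(rho, 2))
    print('trace =', np.trace(rho))                    # a valid density matrix has trace 1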
Exploration and contextualization: towards reusable tools for the humanities. (16 September, 2013)
Speaker: Marc Bron
The introduction of new technologies, access to large electronic cultural heritage repositories, and the availability of new information channels continue to change the way humanities researchers work and the questions they seek to answer. In this talk I will discuss how the research cycle of humanities researchers has been affected by these changes and argue for the continued development of interactive information retrieval tools to support the research practices of humanities researchers. Specifically, I will focus on two phases in the humanities research cycle: the exploration phase and the contextualization phase. In the first part of the talk I discuss work on the development and evaluation of search interfaces aimed at supporting exploration. In the second part of the talk I will focus on how information retrieval technology focused on identifying relations between concepts may be used to develop applications that support contextualization.
Validity and Reliability in Cranfield-like Evaluation in Information Retrieval (23 September, 2013)
Speaker: Julián Urbano
The Cranfield paradigm to Information Retrieval evaluation has been used for half a century now as the means to compare retrieval techniques and advance the state of the art accordingly. However, this paradigm makes certain assumptions that remain a research problem in Information Retrieval and that may invalidate our experimental results.
In this talk I will approach the Cranfield paradigm as a statistical estimator of certain probability distributions that describe the final user experience. These distributions are estimated with a test collection, which actually computes system-related distributions that are assumed to be correlated with the target user-related distributions. From the point of view of validity, I will discuss the strength of that correlation and how it affects the conclusions we draw from an evaluation experiment. From the point of view of reliability, I will discuss past and current practice in measuring the reliability of test collections and review several of them accordingly.
CSS: See you in Beijing! (27 September, 2013)
Speaker: Alice Miller
I recently visited China for two weeks: a week in Guangzhou and a week in Beijing. This involved a research visit to Sun Yat-sen University (SYSU), and attendance at a conference in Beijing (plus a bit of sightseeing). As some of you may well be planning a similar trip in the future, in this talk I’ll give some background on SYSU and discuss some of the things to remember when travelling to China. Mainly though, I’ll show you some of my photographs!
