This Week’s Events

SICSA DVF Professor Guevara Noubir "Cross-Layer Attacks in Emerging Networks"

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 24 March, 2017
Time: 15:00 - 16:00
Location: University of Stirling, Cottrell Building, United Kingdom

SICSA DVF Professor Guevara Noubir from Northeastern University, Boston, MA, USA will give a talk on "Cross-Layer Attacks in Emerging Networks" on Friday 24 March at the University of Stirling.

Abstract: The last decade has seen the rise of several new networking technologies, from mobile and wireless to overlay anonymous communication networks such as Tor. In this talk, I will argue that such networks are vulnerable to a variety of cross-layer attacks on their intrinsic features. For instance, an adversary can infer users' locations using malicious apps that require no permissions, or by exploiting physical-layer characteristics. I will also provide evidence that the Tor anonymity network is subject to active attacks, and present a framework that identifies malicious relays. I will then discuss results from deploying the framework, which revealed over 100 malicious relays.

Bio: Guevara Noubir is a Professor in the College of Computer and Information Science at Northeastern University. He received a PhD in Computer Science from the Swiss Federal Institute of Technology in Lausanne (EPFL, 1996) and an engineering diploma (MS) from École Nationale Supérieure d'Informatique et de Mathématiques Appliquées de Grenoble (ENSIMAG, 1991). Prior to joining the faculty at Northeastern University, he was a senior research scientist at CSEM SA (Switzerland), where he led several research projects in wireless and mobile networking. In particular, he contributed to the definition of the third-generation Universal Mobile Telecommunication System (UMTS), standardized as 3GPP WCDMA, and led the data networking stack for the first 3G demonstrator in the world (as part of the FRAMES EU research project). In 2013, Noubir led Northeastern University's team in the DARPA Spectrum Challenge, winning the 2013 Cooperative Challenge. Dr Noubir has held visiting research positions at Eurecom, MIT, and UNL. He is a Senior Member of the IEEE, a member of the ACM, and a recipient of the NSF CAREER Award. He serves on the editorial boards of IEEE Transactions on Mobile Computing and ACM Transactions on Privacy and Security, and has co-chaired several ACM and IEEE conferences in the fields of mobile, wireless, and security (ACM WiSec, IEEE CNS, IEEE SECON, IEEE WoWMoM). His research covers both theoretical and practical aspects of secure and robust wireless and mobile systems. His current interests include leveraging mechanisms such as social-networking authentication and low-power ZigBee to secure residential broadband networks, and boosting the robustness of wireless systems against smart attacks.

The host of this Distinguished Visiting Fellow is Dr Paul Patras.

Upcoming Events


SICSA DVF Dr Hagen Lehmann, Istituto Italiano di Tecnologia, iCub Facility

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 27 March, 2017
Time: 14:15 - 16:15
Location: Heriot-Watt University, Edinburgh, United Kingdom

SICSA is pleased to welcome Dr. Hagen Lehmann to Heriot-Watt University on Monday 27th March, where he will give a masterclass course on "Approaches in Social Human-Robot Interaction Research and Social Robotics".

The course will give an overview of different methods and approaches used in Human-Robot Interaction (HRI) research and Social Robotics. Current efforts to integrate robots into mixed human-robot ecologies will be contextualized in the theoretical history of the field and its origins, beginning with Cybernetics research in the late 1940s. On this basis, different conceptual perspectives, important for the field and implemented in current experimental HRI studies, will be explored and illustrated, with particular attention to the psychological mechanisms supporting the interaction between humans and robots. In the second part of the lecture, examples of recent research will illustrate the limits and possibilities of current Social Robotics approaches. The estimated length of the lecture is 2 x 45 minutes, with a 15-minute break and 15 minutes for a follow-up discussion.

Bio: Dr. Lehmann is a Marie Curie Experienced Researcher in the iCub Facility at the Italian Institute of Technology, where he develops the SICSAR project, dedicated to generating and testing social interaction behaviors for the iCub robot. Dr. Lehmann received his Diploma in Psychology from the Technical University Dresden, his MA degree in Psychology at the Max Planck Institute for Evolutionary Anthropology in Leipzig, and his Ph.D. in Computer Science from the University of Bath. Over these years he has worked, from different interdisciplinary perspectives, on evolution and social cognition, examining in particular possible reasons for the evolution of social structures in primates, the role of social dominance in this process, and social gaze behavior and its role in human social evolution. His current work applies this knowledge to the fields of Human-Robot Interaction and Social Robotics, through experimental research with a particular focus on robot-assisted therapy and robotic home companions. Before his work at the IIT, he was part of the Adaptive Systems Research Group in the School of Computer Science at the University of Hertfordshire, where he was involved in several European projects, e.g. iTALK and ACCOMPANY.

If you would like to attend this masterclass, please contact Dr Frank Broz, who is hosting Dr Lehmann.

Semantic Search at Bloomberg

Group: Information Retrieval (IR)
Speaker: Edgar Meij, Bloomberg
Date: 27 March, 2017
Time: 15:00
Location: Sir Alwyn Williams Building, 422 Seminar Room


Large-scale knowledge graphs (KGs) store relationships between entities, and are increasingly being used to improve the user experience in search applications. At Bloomberg we are currently in the process of rolling out our own knowledge graph, and in this talk I will describe some of the semantic search applications that we aim to support. In particular, I will discuss some of our recent papers on context-specific entity recommendations and on automatically generating textual descriptions for arbitrary KG relationships.
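
For readers new to the area, the basic structure can be sketched as a set of (subject, relation, object) triples from which directly connected entities are surfaced as recommendations. The triples and the lookup below are invented purely for illustration and have nothing to do with Bloomberg's actual graph or system:

```python
# Toy knowledge graph stored as (subject, relation, object) triples.
# All entities and relations here are made up for illustration.
TRIPLES = [
    ("Apple", "competitor_of", "Samsung"),
    ("Apple", "supplier", "Foxconn"),
    ("Samsung", "competitor_of", "Apple"),
    ("Foxconn", "customer", "Apple"),
]

def related_entities(entity, triples=TRIPLES):
    """Return (relation, neighbour) pairs for entities directly linked to `entity`."""
    out = []
    for s, r, o in triples:
        if s == entity:
            out.append((r, o))
        elif o == entity:
            out.append((r, s))
    return out

print(related_entities("Apple"))
```

Real systems rank such candidates with learned models and query context rather than returning every neighbour, but the graph-as-triples representation is the common starting point.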


Dr. Edgar Meij is a senior scientist at Bloomberg. Before this, he was a research scientist at Yahoo Labs and a postdoc at the University of Amsterdam, where he also obtained his PhD. His research focuses on advancing the state of the art in semantic search at Web scale, by designing entity-oriented search systems that employ knowledge graphs, entity linking, NLP, and machine learning techniques to improve the user experience, search, and recommendations. He has co-authored 50+ peer-reviewed papers and regularly teaches at the post-graduate level, including university courses, summer schools, and conference tutorials.

Scalable Computing Beyond the Cloud

Group: Systems Seminars
Speaker: Blesson Varghese, Queen's University Belfast
Date: 29 March, 2017
Time: 13:00 - 14:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

It is forecast that over 50 billion devices will be added to the Internet by 2020, and consequently that 50 trillion gigabytes of data will be generated. Currently, applications generating data on user devices, such as smartphones, tablets and wearables, use the cloud as a centralised server. This will soon become an untenable computing model. The way forward is to decentralise computation away from the cloud towards the edge of the network, closer to the user. In my talk, I will present the challenges, my current research, and a vision for harnessing computing capabilities at the edge of the network. More information is available at


SICSA Centres for Doctoral Training Strategy Workshop

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 31 March, 2017
Time: 00:00 - 00:00
Location: The Informatics Forum, Crichton Street, University of Edinburgh, United Kingdom

SICSA is organising a workshop on Centres for Doctoral Training (CDTs), which will take place on Friday 31 March 2017 at the Informatics Forum, University of Edinburgh.

As an academic environment, we feel that Scotland is a great place to do CompSci research: friendly, collaborative, flexible and sometimes quirky. We want to ensure we are ready for a future EPSRC CDT funding call (expected in the next 12 months) by running a workshop event where we can share information and experience. We aim to establish strategic partnerships between smaller institutions that will give Scotland a good chance of achieving positive outcomes in future CDT applications. We are therefore holding a SICSA Centres for Doctoral Training Strategy workshop at the Informatics Forum in Edinburgh on 31 March 2017. EPSRC representatives will attend and provide advice and guidance. The event will also feature presentations from the seven SICSA Research Themes by our Theme Leaders.

Workshop Schedule:
09:30-10:00 - Coffee, welcome
10:00-10:10 - Introduction (Kevin Hammond & Jeremy Singer, SICSA)
10:10-11:00 - EPSRC CDT presentation (Zoe Brown, EPSRC)
11:00-11:30 - A working CDT in action (Murray Cole, University of Edinburgh)
11:30-12:00 - A dual-site CDT in action (John Marsh, University of Glasgow)
12:00-12:30 - Interactive workshop (part I)
12:30-13:30 - Lunch and refreshments
13:30-14:30 - SICSA Research Theme presentations
14:30-15:00 - Interactive workshop (part II)
15:00-16:00 - Discussion panel (panel members TBA)
16:00 - Finish

SICSA DVF Dr Hagen Lehmann “Social interaction characteristics for social acceptable robots”

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 03 April, 2017
Time: 14:00 - 15:00
Location: University of St Andrews, Jack Cole Building, United Kingdom

SICSA DVF Dr. Hagen Lehmann, Istituto Italiano di Tecnologia, iCub Facility, will give a talk at the University of St Andrews on 3 April on "Social interaction characteristics for social acceptable robots".

The last decade has seen fast advances in social robotic technology. Social robots are starting to be used successfully as robot companions and as therapeutic aids. In both cases the robots need to be able to interact intuitively and comfortably with their human users in close physical proximity. In order to achieve seamless interaction and communication, these robots need to coordinate different aspects of their behaviors with their human interlocutors. This coordination of non-verbal and verbal interaction cues requires that the robots can interpret the social behavior of the other and react accordingly. In this talk, different ways to (socially) coordinate human and robot behavior will be discussed and illustrated with examples from recent Human-Robot Interaction research.

Bio: Dr. Lehmann is a Marie Curie Experienced Researcher in the iCub Facility at the Italian Institute of Technology, where he develops the SICSAR project, dedicated to generating and testing social interaction behaviors for the iCub robot. Dr. Lehmann received his Diploma in Psychology from the Technical University Dresden, his MA degree in Psychology at the Max Planck Institute for Evolutionary Anthropology in Leipzig, and his Ph.D. in Computer Science from the University of Bath. Over these years he has worked, from different interdisciplinary perspectives, on evolution and social cognition, examining in particular possible reasons for the evolution of social structures in primates, the role of social dominance in this process, and social gaze behavior and its role in human social evolution. His current work applies this knowledge to the fields of Human-Robot Interaction and Social Robotics, through experimental research with a particular focus on robot-assisted therapy and robotic home companions. Before his work at the IIT, he was part of the Adaptive Systems Research Group in the School of Computer Science at the University of Hertfordshire, where he was involved in several European projects, e.g. iTALK and ACCOMPANY.

If you would like to attend this talk, please contact Dr Frank Broz, who is hosting Dr Lehmann.

FATA Seminar - TBA

Group: Formal Analysis, Theory and Algorithms (FATA)
Speaker: Luminita Manuela Bujorianu, University of Strathclyde
Date: 04 April, 2017
Time: 13:00 - 14:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

Categories, Logic, and Physics, Scotland

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 05 April, 2017
Time: 10:00 - 17:00
Location: University of Strathclyde, Livingstone Tower, United Kingdom

The SICSA Research Theme Theory, Modelling & Computation is pleased to be sponsoring the Categories, Logic and Physics (CLAP) Scotland event again. It will take place on Wednesday 5 April at the University of Strathclyde.

CLAP Scotland is a forum for applications of category theory and logic to physics and computer science that aims to maintain and enhance the cohesion of Scottish research in these areas. The meetings provide an informal atmosphere where participants can easily interact. They are open, and all are welcome to attend, in particular research students. This is a continuation of the series of CLAP workshops in the greater London area held biannually 2008-2010, and of the Scottish Category Theory seminar that ran biannually 2009-2014, and we encourage the same friendly and open atmosphere. If you would like to host a meeting or give a talk, contact Chris Heunen.

The programme will be:
10:00: Coffee
10:30: Kevin Dunne (University of Strathclyde)
11:10: Chris Heunen (University of Edinburgh)
11:50: Fabio Zanasi (University College London)
12:30: Lunch
14:00: Aleks Kissinger (Radboud University)
14:40: Stefano Gogioso (University of Oxford)
15:20: Tea
15:40: Peter Hines (University of York)
16:20: Clemens Kupke (University of Strathclyde)
17:00: Pub

Registration is free, but for catering purposes please email the local organiser Ross Duncan as soon as possible if you plan to attend. You might also be interested in the workshop on Algebra and Coalgebra meet Proof Theory, held at the University of Strathclyde on April 10-12.

Glasgow Information Retrieval Festival 2017

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 06 April, 2017
Time: 01:00 - 01:00
Location: University of Glasgow, Wolfson Medical Building, United Kingdom

The SICSA Data Science Research Theme is pleased to be sponsoring the Information Retrieval Festival on 6 - 8 April at the University of Glasgow: a festival of information retrieval held in Glasgow before ECIR 2017, followed by a bus "Tour de Scotland" en route to Aberdeen.

The agenda is as follows:
Thursday 6th April: Evening meal
Friday 7th April: Invited talks and posters
Saturday 8th April: Bus "Tour de Scotland" to Aberdeen, via a whisky distillery

Keynotes:
Jaana Kekalainen, University of Tampere: Realistic Interactive IR Experiments & Simulation
Maarten de Rijke, University of Amsterdam: IR + AI

Industry speakers:
Noreen Adams, BBC: Digital Archives at the BBC
Stuart Miller, Verint: Information Retrieval for Knowledge Management in a Changing World
Ludovico Boratto, Eurecat: Balancing Individual and Group Satisfaction in the Evaluation of Group Recommender Systems

Short talks:
Frank Hopfgartner, University of Glasgow: Catching up with Industry - Online Evaluation of Information Access
Diane Pennington, University of Glasgow: Making Emotional Information Retrieval a Reality
Ingo Frommholz, University of Bedfordshire: Mind the Gap! Bibliometric-Enhanced Information Retrieval for Scholars' Complex Information Needs
Haiming Liu, University of Bedfordshire: Quick Idea Generation Using Positive Emotion Activity in Interactive Information Retrieval
Leif Azzopardi, University of Strathclyde: Technologically Assisted Reviews in Empirical Medicine
... and others to be confirmed!

If you would like to attend this event, please register at

SICSA DVF Dr Hagen Lehmann "Social interaction characteristics for social acceptable robots"

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 06 April, 2017
Time: 13:00 - 14:00
Location: School of Computing Science, Sir Alwyn Williams Building, University of Glasgow, United Kingdom

SICSA DVF Dr. Hagen Lehmann, Istituto Italiano di Tecnologia, iCub Facility, will give a talk at the University of Glasgow on 6 April on "Social interaction characteristics for social acceptable robots".

The last decade has seen fast advances in social robotic technology. Social robots are starting to be used successfully as robot companions and as therapeutic aids. In both cases the robots need to be able to interact intuitively and comfortably with their human users in close physical proximity. In order to achieve seamless interaction and communication, these robots need to coordinate different aspects of their behaviors with their human interlocutors. This coordination of non-verbal and verbal interaction cues requires that the robots can interpret the social behavior of the other and react accordingly. In this talk, different ways to (socially) coordinate human and robot behavior will be discussed and illustrated with examples from recent Human-Robot Interaction research.

Bio: Dr. Lehmann is a Marie Curie Experienced Researcher in the iCub Facility at the Italian Institute of Technology, where he develops the SICSAR project, dedicated to generating and testing social interaction behaviors for the iCub robot. Dr. Lehmann received his Diploma in Psychology from the Technical University Dresden, his MA degree in Psychology at the Max Planck Institute for Evolutionary Anthropology in Leipzig, and his Ph.D. in Computer Science from the University of Bath. Over these years he has worked, from different interdisciplinary perspectives, on evolution and social cognition, examining in particular possible reasons for the evolution of social structures in primates, the role of social dominance in this process, and social gaze behavior and its role in human social evolution. His current work applies this knowledge to the fields of Human-Robot Interaction and Social Robotics, through experimental research with a particular focus on robot-assisted therapy and robotic home companions. Before his work at the IIT, he was part of the Adaptive Systems Research Group in the School of Computer Science at the University of Hertfordshire, where he was involved in several European projects, e.g. iTALK and ACCOMPANY.

If you would like to attend this talk, please contact Dr Frank Broz, who is hosting Dr Lehmann.

European Conference on Information Retrieval (ECIR 2017)

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 08 April, 2017
Time: 01:00 - 01:00
Location: Robert Gordon University, Garthdee Road, Aberdeen, United Kingdom

SICSA is pleased to be sponsoring the 39th European Conference on Information Retrieval (ECIR 2017), which is taking place at Robert Gordon University and the Aberdeen Exhibition and Conference Centre (AECC) in Aberdeen, Scotland from 8 - 13 April 2017.

ECIR is the premier European research conference for the presentation of new results in the field of information retrieval (IR). ECIR 2017 aims to be inclusive of researchers from across Europe, stretching from the Northwest, such as Scotland and Scandinavia, to the Southeast on the Eurasian boundary with Turkey. The programme now includes a new Doctoral Consortium to enable wider student participation. To enable further opportunities for networking and social activities, we have provided the option of short trips to nearby castles and distilleries upon request.

Confirmed keynote speakers:
Laura Dietz (University of New Hampshire) - Retrieving Knowledge from the Web
Alexander Hauptmann (Carnegie Mellon University) - title TBC
Jaime Teevan (Microsoft Research) - Search, Re-Search

The schedule is as follows:
Saturday 8th April 2017: Doctoral Consortium, student and DC panel, welcome
Sunday 9th April 2017: Workshops & tutorials
Monday 10th April 2017: Conference, posters, demo session & welcome reception
Tuesday 11th April 2017: Conference, conference banquet
Wednesday 12th April 2017: Conference, Industry Day welcome reception
Thursday 13th April 2017: Industry Day

Registration is open, and free SICSA student places are still available. For more information please see the ECIR 2017 web-site.

ALCOP VIII; SICSA RT: Theory, Modelling & Computation

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 10 April, 2017
Time: 01:00 - 01:00
Location: University of Strathclyde, McCance Building, United Kingdom

The SICSA Theory, Modelling & Computation Research Theme is pleased to be sponsoring Algebra and Coalgebra meet Proof Theory (ALCOP) VIII on 10 - 12 April at the University of Strathclyde.

About ALCOP: The workshop Algebra and Coalgebra meet Proof Theory (ALCOP) brings together experts in algebraic logic, coalgebraic logic and proof theory to share new results and to strengthen the relationships between these fields. The MSP Group at the University of Strathclyde (Glasgow) will host the eighth edition of this workshop. Previous editions of ALCOP were held in Vienna (2016), Delft (2015), London (2014), Utrecht (2013), Prague (2012), Bern (2011), and London (2010).

Confirmed speakers:
Neil Ghani (Strathclyde)
Sam van Gool (New York)
Helle Hvid Hansen (Delft)
Ekaterina Komendantskaya (Heriot-Watt University)
Mark V Lawson (Heriot-Watt University)
Thomas Lukasiewicz (Oxford)
Filip Murlak (Warsaw)
Daniela Petrisan (Paris)
Jan Rutten (CWI Amsterdam/Nijmegen)

Registration is now open! Register here: For more information on ALCOP VIII please see the web-page:

Walk this Way

Group: Systems Seminars
Speaker: Prof. Des Higham, University of Strathclyde
Date: 19 April, 2017
Time: 13:00 - 14:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

Many applications require us to summarize key properties of a large, complex network. I will focus on the task of quantifying the relative importance, or "centrality" of the network nodes. This task is routinely performed, for example, on networks arising in biology, security, social science and telecommunication. To derive suitable algorithms, the concept of a walk around the network has proved useful; through either the dynamics of random walks or the combinatorics of deterministic walks.

In this talk I will argue that some types of walk are less relevant than others. In particular, eliminating backtracking walks leads to new network centrality measures with attractive properties and, perhaps surprisingly, reduced computational cost. Defining, analysing and implementing these new methods combines ideas from graph theory, matrix polynomial theory and sparse matrix computations.
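
The walk-counting idea behind such measures can be made concrete with Katz centrality, a classic member of this family (an illustrative sketch of the standard measure only, not the backtrack-free variants the talk introduces):

```python
import numpy as np

# Adjacency matrix of a small undirected graph: node 0 is a hub
# connected to nodes 1, 2 and 3; nodes 2 and 3 are also adjacent.
A = np.array([
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 0, 0, 1],
    [1, 0, 1, 0],
], dtype=float)

# Katz centrality solves x = (I - alpha*A)^{-1} 1: entry i counts all
# walks ending at node i, with a length-k walk discounted by alpha**k.
# alpha must be smaller than 1/spectral_radius(A) for convergence.
alpha = 0.1
n = A.shape[0]
x = np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))
print(x)  # the hub, node 0, receives the highest score
```

Note the linear solve costs O(n^3) dense; the talk's point is that carefully chosen walk classes can improve both the measure and this computational cost.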

SICSA HCI Sponsored Event: 3rd BiVi Annual Meeting

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 20 April, 2017
Time: 01:00 - 01:00
Location: Edinburgh Napier University, Craiglockhart Campus, United Kingdom

The Biological Visualisation Community's 3rd BiVi Annual Meeting takes place in Edinburgh on 20th-21st April 2017. This two-day meeting comprises one day with 3 keynote talks, 7 talks on biological data visualisation, and opportunities for lightning talks, posters and demos, followed by a second day of hands-on training workshops. The talks and training cover visualisation across the whole spectrum of biological data types, from 'omic sequence-based data through cells and tissues to whole-organism physiology. This is a meeting of interest to anybody working in biomedical science, as well as to developers of visualisation techniques.

The keynote speakers are:
- Jean-luc Doumont: The Three Laws of Communication
- Marc Streit: From Visual Exploration of Biomedical Data to Storytelling and Back Again
- Bang Wong: Art and Science: A Partnership Catalyzing Discovery in Biomedicine

SICSA members can register at: More details can be found at If you have any questions please

ProbUI: Generalising Touch Target Representations to Enable Declarative Gesture Definition for Probabilistic GUIs

Group: Inference, Dynamics and Interaction (IDI)
Speaker: Daniel Buschek, LMU Munich (visitor at Glasgow University Mar-May 2017)
Date: 20 April, 2017
Time: 14:00 - 15:00
Location: Sir Alwyn Williams Building, 423 Seminar Room

We present ProbUI, a mobile touch GUI framework that merges ease of use of declarative gesture definition with the benefits of probabilistic reasoning. It helps developers to handle uncertain input and implement feedback and GUI adaptations. ProbUI replaces today's static target models (bounding boxes) with probabilistic gestures ("bounding behaviours"). It is the first touch GUI framework to unite concepts from three areas of related work: 1) Developers declaratively define touch behaviours for GUI targets. As a key insight, the declarations imply simple probabilistic models (HMMs with 2D Gaussian emissions). 2) ProbUI derives these models automatically to evaluate users' touch sequences. 3) It then infers intended behaviour and target. Developers bind callbacks to gesture progress, completion, and other conditions. We show ProbUI's value by implementing existing and novel widgets, and report developer feedback from a survey and a lab study.
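
To give a flavour of the underlying idea, here is a toy sketch (not ProbUI's actual API, and with invented target names and parameters) of replacing static bounding boxes with per-target 2D Gaussians and inferring the intended target as the one under which the touch is most likely:

```python
import numpy as np

def gaussian_logpdf(point, mean, cov):
    """Log-density of a 2D Gaussian at `point`."""
    d = np.asarray(point) - np.asarray(mean)
    cov = np.asarray(cov, dtype=float)
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv @ d + logdet + 2 * np.log(2 * np.pi))

# Two hypothetical on-screen targets, each modelled as a 2D Gaussian
# centred on the widget, with covariance reflecting its touch area.
targets = {
    "ok":     ((100.0, 200.0), [[80.0, 0.0], [0.0, 80.0]]),
    "cancel": ((220.0, 200.0), [[80.0, 0.0], [0.0, 80.0]]),
}

def most_likely_target(touch):
    """Return the target whose Gaussian gives the touch the highest density."""
    return max(targets, key=lambda t: gaussian_logpdf(touch, *targets[t]))

print(most_likely_target((110.0, 205.0)))  # touch near "ok" -> "ok"
```

ProbUI goes further by chaining such emissions into HMMs over touch sequences, so whole gestures, not just single taps, are scored probabilistically.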

Scottish Combinatorics Meeting 2017

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 24 April, 2017
Time: 01:00 - 01:00
Location: University of St Andrews, St Andrews, United Kingdom

SICSA Research Theme Theory, Modelling & Computation is sponsoring the Scottish Combinatorics Meeting, which is being held at the University of St Andrews on 24 & 25 April. Everyone with an interest in combinatorics and its applications is warmly invited to the Third Scottish Combinatorics Meeting. This year's meeting is run in conjunction with the British Colloquium for Theoretical Computer Science (to be held in St Andrews immediately following SCM).

The invited speakers are:
Rosemary Bailey (St Andrews)
David Bevan (Strathclyde)
Simon Blackburn (Royal Holloway, University of London)
Robert Brignall (The Open University)
Anders Claesson (University of Iceland, Reykjavik)
Max Gadouleau (Durham)
Kitty Meeks (Glasgow)
Maura Paterson (Birkbeck, University of London)

Attendance at the meeting is FREE, but participants are asked to register by 31st March 2017 (by emailing one of the organisers) in order to help with catering arrangements. Tea, coffee and a light lunch will be provided on both days. Dinner will be arranged at a local restaurant for the evening of Monday 24th; if you would like to participate, please let us know (preferably at time of registration). The meeting will be held in Lecture Theatre B of the Mathematical Institute of the University of St Andrews. Information on how to get to the venue can be found here. In addition to the invited talks, there will be an opportunity for research students and junior researchers to give short presentations on their work: more details here. A provisional schedule will be available in due course. More information can be found on the web-site: We look forward to welcoming you in St Andrews.

The SCOttish Networking Event (SCONE)

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 26 April, 2017
Time: 13:00 - 17:00
Location: School of Computer Science, Room 1.33 Jack Cole Building, University of St Andrews, United Kingdom

The Networking & Systems SICSA Research Theme is pleased to be sponsoring the SCOttish Networking Event (SCONE), which is taking place on Wednesday 26th April 2017 at the University of St Andrews. SCONE is an informal gathering of networking and systems researchers in and around Scotland. The goal of these meetings is to foster interaction between researchers from our various institutions. Each meeting takes place over the course of an afternoon and features:
- talks from PhD students
- talks from faculty, postdocs and industrial researchers
- discussions of possible funding opportunities
- food and drink
We meet 2-3 times a year. For more information please see the SCONE web-site.

SICSA PhD Conference 2017

Group: Scottish Informatics and Computer Science Alliance (SICSA)
Speaker: SICSA Event, SICSA
Date: 27 June, 2017
Time: 01:00 - 01:00
Location: University of Dundee, Dalhousie House, University of Dundee, United Kingdom

The SICSA PhD Conference 2017 will take place on Tuesday 27th and Wednesday 28th June at Dalhousie House, University of Dundee. The PhD Conference has become one of the highlights of the SICSA events calendar, bringing together Computing Science and Informatics PhD students, leading academics and industry practitioners for two days of workshops, keynote presentations and a social evening event. The Conference is aimed specifically at Informatics and Computer Science PhD students from across Scotland and is organised each year by a committee of PhD students and members of the SICSA Executive. More information on the Conference will be posted shortly.

Past Events

IDI Journal club - Yarin Gal's papers on Uncertainty in Deep learning (23 March, 2017)

Speaker: Rod

On Thursday 23rd at the Journal club we will have a look at some of Yarin Gal’s work. He has a blog on his thesis work on Uncertainty in Deep Learning:


I suggest we run through the blog with some interactive demos in JavaScript:

Cyber Physical Systems Research Theme - Invitation to demonstrate at the European Robotics Forum (22 March, 2017)

Speaker: SICSA Event
The SICSA Cyber Physical Systems Research Theme is participating in the European Robotics Forum in Edinburgh from 22 - 24 March. We are inviting teams to demonstrate their work at ERF 2017 under the umbrella of the SICSA Cyber-Physical Systems theme. A unique technical demonstration setting is provided for research labs from both academia and industry to share experience and ideas. Through this exhibition stand, teams will be able to share their visions and solutions to hardware and algorithmic implementation in Cyber-Physical Systems. Each demonstration will be provided with a table, 2 chairs and a power outlet; demonstrators are expected to bring their own related hardware. Each selected team will be able to exhibit for one full day at the event with up to 3 members.

Submission guidelines: demonstration participants should submit a 1-page abstract and, for evaluation, either a link to a demo video clip of at most 3 minutes or some pictures of the apparatus. The abstract should include a short description of the demo, setup requirements, and any safety concerns. All submissions will undergo a common review process among the organisers. Decisions about acceptance will be based on relevance to the theme, potential significance, and accommodation capabilities. Please submit to with the subject “Demo Submission for SICSA-Cyber-Physical Systems: your name”.

Important dates:

  • Submission deadline: 28 February 2017
  • Acceptance notification: 3 March 2017
  • Event: ERF 2017, 22 - 24 March 2017

For any questions regarding the demo session, please contact the organisers:

Dr Subramanian Ramamoorthy, SICSA, School of Informatics, University of Edinburgh, Edinburgh, EH8 9AB. Email: Phone: +44 (0) 131 650 5119

Dr Katrin Solveig Lohan, SICSA, Mathematical and Computer Science, Heriot-Watt University, Edinburgh, EH14 4AS. Email: Phone: +44 (0) 131 451 8338

SICSA DVF Professor Guevara Noubir "Robustness and Privacy in Wireless Systems" (21 March, 2017)

Speaker: SICSA Event
SICSA DVF Professor Guevara Noubir from Northeastern University, Boston, MA, USA will be giving a talk on “Robustness and Privacy in Wireless Systems" on Tuesday 21 March at the University of Edinburgh. Abstract: Wireless communication is not only a key technology underlying the mobile revolution, it is also used to connect, monitor, alert, and interact with physical infrastructures such as smart-grids, transportation networks, and even implantable devices. Building secure and robust wireless networks raises several theoretical and practical problems. Solving such problems requires novel approaches to circumvent the resource limitations of such systems. In this talk, I will review some of the major vulnerabilities inherent to the design of current wireless networks. I will then present specific problems and results that address some of the issues in wireless and mobile networks including smart-interference mitigation, and location tracking. Bio: Guevara Noubir is a Professor in the College of Computer and Information Science at Northeastern University. He received a PhD in Computer Science from the Swiss Federal Institute of Technology in Lausanne (EPFL 1996) and an engineering diploma (MS) from École Nationale Supérieure d'Informatique et de Mathématiques Appliquées at Grenoble (ENSIMAG 1991). Prior to joining the faculty at Northeastern University, he was a senior research scientist at CSEM SA (Switzerland) where he led several research projects in the area of wireless and mobile networking. In particular, he contributed to the definition of the third generation Universal Mobile Telecommunication System (UMTS) standardized as 3GPP WCDMA and was the lead of the Data Networking Stack for the first 3G demonstrator in the world (as part of the FRAMES EU Research Project). In 2013, Noubir led Northeastern University’s team in the DARPA Spectrum Challenge competition, winning the 2013 Cooperative Challenge.
Dr Noubir held visiting research positions at Eurecom, MIT, and UNL. He is a Senior Member of the IEEE, a member of the ACM, and a recipient of the NSF CAREER Award. He serves on the editorial boards of IEEE Transactions on Mobile Computing, ACM Transaction on Privacy and Security, and co-chaired several ACM and IEEE conferences in the fields of mobile, wireless, and security (ACM WiSec, IEEE CNS, IEEE SECON, IEEE WoWMoM). His research covers both theoretical and practical aspects of secure and robust wireless and mobile systems. His current interests include leveraging mechanisms such as social networking authentication and low power ZigBee, to secure residential broadband networks, and boosting the robustness of wireless systems against smart attacks. The host of this Distinguished Visiting Fellow is Dr Paul Patras

Generative Programming and Product Family Engineering with WizardsWorkbench (21 March, 2017)

Speaker: Niall Barr

Language Workbenches are tools used to support the creation and use of Domain Specific Languages (DSLs), frequently for the purpose of supporting Language Oriented Programming (LOP) or Generative Programming. LOP is an approach to application development where a language that is close to the problem domain is created, and the application is developed in this language. Generative programming is the related approach where a language at a high level of abstraction is used, and source code in a more general purpose language is generated from that code. In this talk I will describe the approach to web application development using generative programming that I have been using and evolving over several years, and my simple language workbench, WizardsWorkbench. As these web applications tend to follow a fairly similar pattern, and DSLs are reused and evolved as required, my approach can be considered a form of Product Family Engineering that utilises generative programming. I will also describe the example-driven approach which is used with WizardsWorkbench to develop both the parsers and the code generation output templates, as well as the two DSLs used internally by WizardsWorkbench for parsers and templates.

FATA Seminar - Specifying Protocol Message Formats (21 March, 2017)

Speaker: Florian Weber

The protocol description language Scribble, based on session types, is used as the input format for several tools that generate code to support the correct implementation of protocol roles. Scribble describes messages abstractly, independently of their underlying transport mechanism. This means that working with legacy protocols and their existing textual message formats requires parsing and formatting code to bridge the gap between abstract Scribble messages and concrete protocol messages. We introduce a small language for defining the mapping between abstract and concrete messages. We also describe a tool that automatically generates the corresponding parsers and formatters from this language. This tool has been integrated with an existing Scribble to Java transpiler. We show that the combination of Scribble and a message mapping specification is an effective way of formally specifying internet protocols, by describing implementations of clients for POP3, SMTP and IMAP.

Assessing User Engagement in Information Retrieval Systems (20 March, 2017)

Speaker: Mengdie Zhuang


In this study, we investigated both user actions from log files and the results of the User Engagement Scale, both of which came from a study of people interacting with a retrieval interface containing an image collection, but with a non-purposeful task. Our results suggest that selected behaviour measures are associated with selected user perceptions of engagement (i.e., focused attention, felt involvement, and novelty), while typical search and browse measures have no association with aesthetics and perceived usability. This finding can lead towards a more systematic user-centred evaluation model.


Mengdie Zhuang is a PhD student from the University of Sheffield, UK. Her research focuses on evaluation metrics of Information Retrieval Systems.

GIST Seminar: Experiments in Positive Technology: the positives and negatives of meddling online (16 March, 2017)

Speaker: Dr. Lisa Tweedie

This talk is going to report on a few informal action research experiments I have conducted over a period of seven years using social media. Some have been more successful than others. The focus behind each is "How do we use technology/social media to make positive change?"

I will briefly discuss four interventions and what I have learnt from them.

A) Chile earthquake emergency response via Twitter and WordPress 

B) Make Malmesbury Even Better - Community Facebook page

C) Langtang lost and found - Facebook support group for families involved in the Langtang earthquake, Nepal

D) I am Amira - educational resources for British schools about the refugee crisis downloaded by 4000+ schools from Times Educational Supplement Resources online (TES)

Three of these are still ongoing projects. I will make the case that these projects have all initiated positive change. But that they also each have their darker side. I will discuss how each has affected me personally.

I will conclude with how I plan to carry forward my findings into the education arena. My current research thoughts are around education, play and outdoor learning.



Lisa started her academic life as a psychologist (via engineering product design at South Bank Poly), gaining a BSc (Hons) in Human Psychology from Aston University. She was then Phil Barnard's RA at the Applied Psychology Unit in Cambridge (MRC APU), researching low-level cognitive models for icon search. She soon realised she wanted to look at the world in a more pragmatic way.

Professor Bob Spence invited her to do a PhD in the visualisation of data at Imperial College, London (Dept of EEE). This was the start of a successful collaboration that continues to this day. She presented her work internationally at CHI, Parc (Palo Alto) and Apple (Cupertino) amongst other places. Lisa's visualisation work is still taught in computer science courses worldwide. She did a couple of years of postdoc at Imperial, developing visual tools to help problem holders create advanced statistical models (generalised linear models - Nelder - EPSRC), but felt industry calling. She then spent six happy years working for Nortel and Oracle as part of development teams. She worked on telephone network fault visualisations, managing vast quantities of live telephone fraud data generated by genetic matching algorithms (SuperSleuth), and interactive UML models of code (Oracle JDeveloper). She is named on two patents from this work.

Once Lisa had her second child she chose to leave corporate life. She had a teaching fellowship at Bath University in 2005. In 2007 she started a consultancy based around "positive technology". She worked as a UX mentor with over 50 companies remotely via Skype from her kitchen. Many of these were start-ups in Silicon Valley. In 2011 she was awarded an honorary research fellowship at Imperial College.

Four years ago she trained as a secondary maths teacher and has a huge interest in special needs. She tutors students of all abilities and age groups in maths, English and reading each week. Most recently she returned to the corporate world, working as a Senior User Experience Architect for St James Place. On 5th January 2017 she became self-employed and is looking to return to the academic research arena with a focus on education, play and outdoor learning. Action research is where she wants to be.

Lisa is also a community activist, hands-on parent to three lively children and a disability rights campaigner. She has lived with Ehlers-Danlos Syndrome, a rare genetic connective tissue disorder, her whole life. She is also a keen photographer, iPad artist, writer and maker, and has run numerous book clubs.


Programmable Address Spaces (15 March, 2017)

Speaker: Paul Keir

In the last decade, high-performance computing has made increasing use of heterogeneous many-core parallelism. Typically the individual processor cores within such a system are radically simpler than their predecessors, and an increased portion of the challenge of executing relevant programs efficiently is reassigned: tasks previously the responsibility of hardware are now delegated to software. Fast on-chip memory will primarily be exposed within a series of trivially distinct programming languages through a handful of address space annotations, which associate discrete sections of memory with pointers, or through similar low-level abstractions; traditional CPUs would provide a hardware data cache for such functionality. Our work aims to improve the programmability of address spaces by exposing new functionality within the existing template metaprogramming system of C++.

Spring 2017 Research Staff Event (14 March, 2017)

Speaker: Dr Spela Brown, Ms Aline Orr, Mr Steven Kendrick

The event will consist of two parts:

  • Seminar (SAWB/422). To date we have two confirmed speakers:

    • 2.30pm: Dr Spela Brown (Sophrodine, Glasgow) on starting your own business after a PhD

    • 3pm: Ms Aline Orr / Mr Steven Kendrick (SICSA officers at Glasgow Uni) on SICSA opportunities for Postdocs

Lunch will be provided from 2.15pm.

  • Laser quest (sponsored by the School!) at 124 Portman Street, Kinning Park, Glasgow, G41 1EJ. The venue is a 5-minute walk from Kinning Park Subway Station, so we plan to travel there by subway, leaving the SAWB foyer at 4.10pm. The game itself will run from 4.45pm until 7pm.


Please, email either Amol ( or Natalia ( if you’d like to join the laser quest!

GPU Concurrency: The Wild West of Programming (08 March, 2017)

Speaker: Tyler Sorensen

GPUs are co-processors originally designed to accelerate graphics computations. However, their high bandwidth and low energy consumption have led to general purpose applications running on GPUs. To remain relevant in the fast-changing landscape of GPU frameworks, GPU programming models are often vague or underspecified. Because of this, several programming constructs have been developed which violate the official programming models, yet execute successfully on a specific GPU chip, enabling more diverse applications to be written for that specific device. During my PhD, we have examined one such construct: a global synchronisation barrier (or GSB). In this talk, we will address three key questions around this rogue programming construct: (1) Is it *possible* to write a portable GSB that successfully executes on a wide range of today's GPUs? (2) Can a GSB be *useful* for accelerating applications on GPUs? And (3) can a programming model that allows a GSB be *sustainable* for future GPU frameworks? Our hope is that this investigation will help the GSB find a permanent home in GPU programming models, enabling developers to write exciting new applications in a safe and portable way.

Short Bio: Tyler’s research interests are in developing and understanding models for testing and safely developing GPU applications which contain irregular computations. In particular, he examines issues related to the GPU relaxed memory model and execution model. He received his MSc from University of Utah in 2014 and worked as an intern for the Nvidia compiler team during the summers of 2013 and 2014.

A Framework for Virtualized Security (07 March, 2017)

Speaker: Abeer Ali

Traditional network security systems deploy high-performance, high-cost appliances (middleboxes) at fixed locations in the physical infrastructure to process traffic in order to prevent, detect or mitigate attacks. This limits their provisioning to a static specification, hindering extensible functionality and resulting in vendor lock-in. Virtualizing security functions avoids these problems and increases the efficiency of the system. In this talk, we present the requirements and challenges of building a framework to deploy and manage virtualized security functions in a multi-tenant virtualized infrastructure such as the Cloud, and show how we can exploit recent advances in Network Function Virtualization (NFV) and the network services offered by Software-Defined Networking (SDN) to implement it.

Access, Search and Enrichment in Temporal Collections (06 March, 2017)

Speaker: Avishek Anand

There have been numerous efforts recently to digitize previously published content and to preserve born-digital content, leading to the widespread growth of large temporal text repositories. Temporal collections are continuously growing text collections which contain versions of documents spanning long time periods, and they present many opportunities for historical, cultural and political analyses. Consequently there is a growing need for methods that can efficiently access, search and mine them. In this talk we deal with approaches to each of these aspects -- access, search and enrichment. First, I describe access methods for searching temporal collections; specifically, how do we index text to support temporal workloads? Second, I describe retrieval models that exploit historical information, which is essential in searching such collections; that is, how do we rank documents given temporal query intents? Finally, I present some of the ongoing efforts to mine such collections to enrich knowledge sources like Wikipedia.
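To make the indexing question concrete (this is a generic illustration, not the speaker's actual method), one simple approach is an inverted index whose postings carry a validity interval, so a query can be restricted to a time of interest. A minimal Python sketch, with all class and field names hypothetical:

```python
from collections import defaultdict

class TemporalIndex:
    """Toy inverted index whose postings carry a validity interval.

    Each posting is (doc_id, start, end): the document version
    contained the term between times start and end (inclusive).
    """

    def __init__(self):
        self.postings = defaultdict(list)

    def add(self, term, doc_id, start, end):
        self.postings[term].append((doc_id, start, end))

    def search(self, term, at_time):
        """Return ids of document versions containing `term` at `at_time`."""
        return [doc for doc, s, e in self.postings.get(term, [])
                if s <= at_time <= e]

idx = TemporalIndex()
idx.add("wikipedia", doc_id=1, start=2001, end=2005)
idx.add("wikipedia", doc_id=2, start=2006, end=2010)
print(idx.search("wikipedia", 2007))  # [2]
```

Real systems partition and compress such postings for efficiency; this sketch only conveys the shape of a temporal workload.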

CS for all - a new 3-15 CS curriculum for Scotland (06 March, 2017)

Speaker: Quintin Cutts

Education Scotland, the organisation tasked with developing school curricula, will shortly announce a new computing science curriculum.  This is for the "Broad General Education" - the education that all pupils are entitled to receive from ages 3-15.  The underlying structure of the curriculum is largely the work of Quintin here at Glasgow, Richard Connor at Strathclyde, and Judy Robertson at Moray House/Edinburgh.  This structure captures key aspects of computing science and computational thinking and sheds light on why we have over the decades found it so hard to teach CS.  In this talk, Quintin will outline the structure, make links to CS education research findings and other curricula, and give an overview of the new Scottish curriculum.  If you have young children, come along and find out what they are about to be subjected to!

A stochastic formulation of a dynamical singly constrained spatial interaction model (02 March, 2017)

Speaker: Mark Girolami

One of the challenges of 21st-century science is to model the evolution of complex systems.  One example of practical importance is urban structure, for which the dynamics may be described by a series of non-linear first-order ordinary differential equations.  Whilst this approach provides a reasonable model of spatial interaction, relevant in areas as diverse as public health and urban retail structure, it is somewhat restrictive owing to uncertainties arising in the modelling process.

We address these shortcomings by developing a dynamical singly constrained spatial interaction model, based on a system of stochastic differential equations.   Our model is ergodic and the invariant distribution encodes our prior knowledge of spatio-temporal interactions.  We proceed by performing inference and prediction in a Bayesian setting, and explore the resulting probability distributions with a position-specific Metropolis-adjusted Langevin algorithm. Insights from studies of retail-structure interactions within the city of London are used as illustration.
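For readers unfamiliar with the stochastic setup, the move from an ODE to an SDE can be simulated with a simple Euler-Maruyama discretisation. The sketch below uses a generic one-dimensional drift with additive noise, chosen for illustration only, not the actual singly constrained model from the talk:

```python
import math
import random

def euler_maruyama(drift, x0, sigma, dt, steps, seed=0):
    """Simulate dX = drift(X) dt + sigma dW by Euler-Maruyama.

    The Brownian increment over a step of length dt is Gaussian
    with standard deviation sqrt(dt). Returns the trajectory.
    """
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        xs.append(xs[-1] + drift(xs[-1]) * dt + sigma * dw)
    return xs

# Ornstein-Uhlenbeck-style drift pulling the state back towards 1.0;
# the noise keeps the trajectory fluctuating around that attractor.
traj = euler_maruyama(lambda x: -(x - 1.0), x0=5.0, sigma=0.1,
                      dt=0.01, steps=2000)
print(round(traj[-1], 2))  # ends near the attractor at 1.0
```

With small noise the long-run samples concentrate near the deterministic fixed point, which is the ergodicity property the abstract alludes to.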

SICSA Research Challenge Future Cities: The Economy of Collaboration (02 March, 2017)

Speaker: SICSA Event
The SICSA Future Cities Research Challenge Workshop 'The Economy of Collaboration' is taking place on Thursday 2 March 2017 at the University of Dundee.

Following on from the SICSA Future Cities “Bottom-Up” workshop in 2014, this one-day workshop will return to explore current and future initiatives in grassroots future cities, focussing on the role of digital technologies and the collaborative economy: the mobilisation and infrastructuring of communities for engagement, resource sharing, empowerment, and innovation. Amidst the drive toward smarter cities, there is a growing movement of technologists, innovators and makers who are developing inspiring digital solutions to local issues and social challenges. Examples range from collaborative economy applications to support better sharing of resources, to initiatives that harness the collective awareness and intelligence of people for environmental monitoring. These movements demonstrate the potential for bottom-up approaches to cities. However, this grassroots vision of future cities must overcome many challenges and barriers to participation, including access to and ownership of data, human-centredness and comprehensibility. At the same time, there are obvious and nuanced challenges to collaboration and innovation, including trust, reliability, competition and privacy. This workshop will frame, explore and highlight the current state of the art, and discuss research questions and computing and design challenges for developments that support a collaborative economy.

Themes and provocations include:

  • Resilience to global and local threats
  • Decentralisation of ownership and control to allow communal use and innovation
  • New models for small cities of the future
  • Citizens’ understanding of smart cities and ability to leverage them to their advantage
  • Data context and the lived experience

Speakers:

  • Mara Balestrini - Participatory Sensing in Barcelona, Making Sense H2020, IAAC and Ideas for Change
  • Dr Drew Hemment - City Verve: bottom-up and collaborative approaches in the UK’s most recent Smart City Demonstrator, Manchester
  • Dr Nick Taylor - Grassroots Innovation around Community Technologies in Ardler, Dundee; EPSRC Hacking for Situated Civic Engagement
  • Stewart Murdoch - Smart City notes from a Small City, Dundee City Council (tbc)

Further announcements to come. For more information on the workshop and details of registration please visit the Economy of Collaboration Eventbrite page.

Type-Driven Development of Communicating Systems using Idris (01 March, 2017)

Speaker: Dr. Jan de Muijnck-Hughes

Communicating protocols are a cornerstone of modern system design. However, there is a disconnect between the different tooling used to design, implement and reason about these protocols and their implementations. Session Types are a typing discipline that help resolve this difference by allowing protocol specifications to be used during type-checking to ensure that implementations adhere to a given specification.

Idris is a general purpose programming language that supports full dependent types, providing programmers with the ability to reason more precisely about programs. This talk introduces =Sessions=, our implementation of Session Types in Idris, and demonstrates the ability of =Sessions= to design and realise several common protocols.

=Sessions= improves upon existing Session Type implementations by introducing value dependencies between messages and fine-grained channel management during protocol design and implementation. We also use Idris' support for EDSL construction to allow protocols to be designed and reasoned about in the same language as their implementation, thereby introducing an intrinsic bond between a protocol's implementation, its specification, and its verification.

Using =Sessions=, we can reduce the existing disconnect between the tooling used for protocol design, implementation, and verification.

FATA Seminar - Behavioural types to make object-oriented programs go right (28 February, 2017)

Speaker: António Ravara

Abstract: Stateful objects typically have a non-uniform behaviour, as the availability of their methods depends on their internal state.  For instance, in an object that implements file access, the method to read a file should not be invoked before calling the method to open that file. Similarly, in an iterator object, calls to the next method should be preceded by calls to the hasNext method.

Behavioural types are particularly well-suited to object-oriented programming, as they make it possible to statically guarantee that method calls happen when an object has an internal state that is safe for the method's execution. Following the typestates approach, one may declare for each possible state of the object the set of methods that can be safely executed in that state.

Several languages already associate with a class a dynamic description of objects' behaviour declaring the admissible sequences of method calls. These descriptions, herein called usages, can be used to ensure at compile time that, in a given program, all the sequences of method calls to each object follow the order declared by its usage. To ensure usages are followed, objects are linear, preventing interference from unexpectedly changing their state.

However, the typing systems referred to above have two shortcomings. First, type checking is typically inefficient, as a method's body is checked each time that method appears in a usage. Second, said typing systems limit themselves to just verifying that method calls follow the usage, and do not necessarily prevent the typed program from "going wrong" (e.g., getting stuck or producing a null pointer exception).

Our work addresses these weaknesses:

1) We attain a stronger type-safety result, by including de-referencing a null reference in the definition of errors and by including in type-checking a form of null pointer analysis. Thus, type-safety in our setting means no run-time errors and complete execution of objects' usages.

2) We attain more efficient type-checking by analysing methods' bodies only once. Instead of checking the code following the usage, we introduce client usages: behavioural descriptions of how a method's code changes the state of objects in fields (and variables/parameters). We type method bodies following that information and check the consistency of the usages independently.

Client usages have another advantage: they can not only be inferred from the code, but also be used to produce pre- and post-conditions for methods that then allow usages to be inferred.

We are developing this work in stages. First, we define a type system only with usages and prove type-safety (our enhanced version). Subsequently, we extend the type system with client usages and get a more efficient type-checking. Afterwards we infer the client usages from the code, and finally, we infer pre- and post-conditions from client usages. Our aim is to provide an approach that takes a program in a Java-like language and automatically infers class usages that describe safe orders of method calls, but also type-checks (client) code against usages (either inferred or user-defined) so as to guarantee that the whole program does not go wrong.
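The usage discipline described above can be made concrete with a small example. The talk concerns static checking, but a runtime sketch conveys the idea: each object carries a usage (a table of allowed methods per state), and any call outside the usage is rejected. All names below are illustrative, not taken from the speaker's system:

```python
class UsageError(Exception):
    pass

# Usage for a file-like object: open must precede read, read may be
# repeated, and close returns the object to its initial state.
FILE_USAGE = {
    "Init": {"open": "Open"},
    "Open": {"read": "Open", "close": "Init"},
}

class CheckedFile:
    def __init__(self):
        self.state = "Init"

    def _step(self, method):
        allowed = FILE_USAGE[self.state]
        if method not in allowed:
            raise UsageError(f"{method} not allowed in state {self.state}")
        self.state = allowed[method]

    def open(self):
        self._step("open")

    def read(self):
        self._step("read")
        return "data"

    def close(self):
        self._step("close")

f = CheckedFile()
f.open()
print(f.read())  # prints "data"; calling read() before open() would raise
f.close()
```

A typestate system performs this check at compile time instead, so the ill-ordered call never runs at all; linearity of objects is what makes that static check sound.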

Short bio: Assistant Professor at DI FCT UNL, Portugal, on sabbatical leave during 2016/17 visiting SCS Glasgow.

The main research problem addressed is how to ensure that inherently concurrent, highly distributed software systems behave correctly. The focus is on the development of techniques, program constructions, and tools that help create safe and well-behaved systems, provably providing correctness guarantees. The toolbox used includes static analysis of source code, capturing defects before deployment, with decidable, low-complexity, property-driven proof systems, using behavioural descriptions of programs.

Collaborative Information Retrieval. (27 February, 2017)

Speaker: Nyi Nyi Htun

Presentation of 2 papers to appear at CHIIR 2017.

Paper 1:

Title: How Can We Better Support Users with Non-Uniform Information Access in Collaborative Information Retrieval?

Abstract: The majority of research in Collaborative Information Retrieval (CIR) has assumed that collaborating team members have uniform information access. However, practice and research have shown that there may not always be uniform information access among team members, e.g. in healthcare, government, etc. To the best of our knowledge, there has not been a controlled user evaluation to measure the impact of non-uniform information access on CIR outcomes. To address this shortcoming, we conducted a controlled user evaluation using 2 non-uniform access scenarios (document removal and term blacklisting) and 1 full and uniform access scenario. Following this, a design interview was undertaken to provide interface design suggestions. Evaluation results show that neither of the 2 non-uniform access scenarios had a significant negative impact on collaborative and individual search outcomes. Design interview results suggested that awareness of the team’s query history and intersecting viewed/judged documents could potentially help users share their expertise without disclosing sensitive information.

Paper 2:

Title: An Interface for Supporting Asynchronous Multi-Level Collaborative Information Retrieval

Abstract: Case studies and observations from different domains including government, healthcare and legal, have suggested that Collaborative Information Retrieval (CIR) sometimes involves people with unequal access to information. This type of scenario has been referred to as Multi-Level CIR (MLCIR). In addition to supporting collaboration, MLCIR systems must ensure that there is no unintended disclosure of sensitive information, this is an under investigated area of research. We present results of an evaluation of an interface we have designed for MLCIR scenarios. Pairs of participants used the interface under 3 different information access scenarios for a variety of search tasks. These scenarios included one CIR and two MLCIR scenarios, namely: full access (FA), document removal (DR) and term blacklisting (TR). Design interviews were conducted post evaluation to obtain qualitative feedback from participants. Evaluation results showed that our interface performed well for both DR and FA scenarios but for TR, team members with less access had a negative influence on their partner’s search performance, demonstrating insights into how different MLCIR scenarios should be supported. Design interview results showed that our interface helped the participants to reformulate their queries, understand their partner’s performance, reduce duplicated work and review their team’s search history without disclosing sensitive information.

GIST Seminar: Success and failure in ubiquitous computing, 30 years on. (23 February, 2017)

Speaker: Prof. Lars Erik Holmquist

Success and failure in ubiquitous computing, 30 years on.
It is almost three decades since Mark Weiser coined the term "ubiquitous computing" at Xerox PARC around 1988. The paper "The Computer for the 21st Century" was published in 1991, and the first Ubiquitous and Handheld Computing (now UBICOMP) conference was organized in 1999. It is clear that some of the ubicomp vision has come to pass (e.g. ubiquitous handheld computing terminals) whereas other parts have failed (arguably, any notion of "calm technology" and "computers that get out of the way of the work"!). I'd like to take this seminar to discuss some of my top picks for success and failure in ubicomp, and I invite participants to come and do the same!
Homework: Think of at least one ubicomp success and one ubicomp failure, as they relate to the various visions of ubiquitous/pervasive/invisible/etc. computing!
Lars Erik Holmquist is newly appointed Professor of Innovation at Northumbria University, Department of Design. He has worked in ubicomp and design research for 20 years, including as co-founder of The Mobile Life Centre in Sweden and Principal Scientist at Yahoo! Research in Silicon Valley. His book on how research can lead to useful results, "Grounded Innovation: Strategies for Developing Digital Products", was published by Morgan Kaufmann in 2012. Before joining Northumbria, he spent two years in Japan where he was a Guest Researcher at the University of Tokyo, learned Japanese, wrote a novel about augmented reality and played in the garage punk band Fuzz Things.

Next Generation Cyber-physical systems (22 February, 2017)

Speaker: Dr Steven J Johnston

Cyber-physical systems (CPS) have peaked in the hype curve and have demonstrated they are here to stay in one form or another. Many cities have attempted to retrofit 'smart' capabilities and there is no shortage of disconnected, often proprietary CPS addressing city infrastructure.

In the same way that online activity evolved from simplistic webpages to feature-rich Web 2.0, CPS also need to evolve. What will the Smart City 2.0 of tomorrow look like, how will the architectures evolve and, most importantly, how does this address the key challenges of cities: energy, environment and citizens? (Audience interaction welcomed.)

Get Your Feet Wet With SDN in a HARMLE$$ Way (21 February, 2017)

Speaker: Levente Csikor

Software-Defined Networking (SDN) offers a new way to operate, manage, and deploy communication networks and to overcome many of the long-standing problems of legacy networking. However, widespread SDN adoption has not occurred yet, due to the lack of a viable incremental deployment path and the relatively immature present state of SDN-capable devices on the market. While continuously evolving software switches may alleviate the operational issues of commercial hardware-based SDN offerings (lagging standards compliance, performance regressions, and poor scaling), they fail to match their cost-efficiency and port density. In this paper, we propose HARMLESS, a new SDN switch design that seamlessly adds SDN capability to legacy network gear by emulating the OpenFlow switch OS in a separate software switch component. This way, HARMLESS enables a quick and easy leap into SDN, combining the rapid innovation and upgrade cycles of software switches with the port density and cost-efficiency of hardware-based appliances into a fully dataplane-transparent and vendor-neutral solution. HARMLESS incurs an order of magnitude smaller initial expenditure for an SDN deployment than existing turnkey vendor SDN solutions while, at the same time, yielding matching, or even better, data plane performance.

FATA Seminar - Discovery and recognition of emerging activities via directional statistical models and active learning (21 February, 2017)

Speaker: Lei Fang

Human activity recognition plays a significant role in enabling pervasive applications, as it abstracts low-level noisy sensor data into high-level human activities that applications can respond to. In this paper, we identify a new research question in activity recognition: discovering and learning unknown activities that have not been pre-defined or observed. As pervasive systems are intended to be deployed in real-world environments for long periods of time, it is infeasible to expect that users will only perform a set of pre-defined activities. Users might perform the same activities in a different manner, or perform a new type of activity. Failing to detect new patterns or activities, or to update the activity model to incorporate them, will outdate the model and result in unsatisfactory service delivery. To address this question, we propose a solution that not only discovers and learns new activities over time, but also supports incrementally updating the activity model, by employing directional statistical models (hierarchical mixtures of von Mises-Fisher distributions) and active learning strategies.
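For readers unfamiliar with the directional statistics mentioned in the abstract, the following is a minimal sketch of the von Mises-Fisher density on the unit sphere, the building block of the mixture model named above. It uses the closed-form normaliser available in three dimensions; it is an illustration only, not the authors' implementation, and the function name is my own.

```python
import math

def vmf_pdf_3d(x, mu, kappa):
    """Density of the von Mises-Fisher distribution on the unit sphere S^2.

    x, mu: unit vectors (3-tuples); kappa > 0 is the concentration.
    In three dimensions the normalising constant has the closed form
    kappa / (4*pi*sinh(kappa)).
    """
    dot = sum(xi * mi for xi, mi in zip(x, mu))
    c3 = kappa / (4.0 * math.pi * math.sinh(kappa))
    return c3 * math.exp(kappa * dot)
```

Mixture components of this form model directions (unit-norm feature vectors) much as Gaussian mixtures model points in Euclidean space; a hierarchical mixture organises such components into groups, which is how new activity clusters can be added over time.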

Short bio:
Lei Fang is a Research Fellow at the School of Computer Science, University of St Andrews. His research interests include sensor networks, sensor data processing, statistical modelling and human activity recognition. Currently, he is a postdoc working on the EPSRC Science of Sensor Systems Software (S4) project. He received his Ph.D. from the University of St Andrews in 2015.

SICSA Research Challenge Workshop on Next Generation Mixed-Reality Systems (21 February, 2017)

Speaker: SICSA Event
The SICSA Research Challenge on Next Generation Mixed Reality Systems is holding a workshop on Tuesday 21 February at the University of Glasgow.

Introduction: This will be an informal, "working" workshop; the aim is to identify and discuss the most pressing questions posed by the widespread deployment of Mixed-Reality systems, with a focus on:
- the "Savannah experience";
- modelling and analysis techniques for Mixed-Reality systems, their limitations, and future research directions, with emphasis on IoT infrastructure, security and adaptive behaviours;
- probabilistic and quantitative aspects;
- new case studies "in the wild";
- identifying further partners/collaborators and funding sources.

Programme:
10:00 - 10:20 Coffee and Tea
10:20 - 10:30 Welcome and introductions
10:30 - 12:00 Morning session
12:00 - 13:00 Lunch Break
13:00 - 14:00 Lei Fang's (St Andrews) FATA talk in SAWB 423
14:00 - 14:15 Coffee/Tea Break
14:15 - 15:20 Afternoon session
15:20 - 15:30 Concluding remarks

If you would like to attend this workshop or have any questions, please contact the Research Challenge Leader, Michele Sevegnani.

GIST Seminar: Understanding the usage of onscreen widgets and exploring ways to design better widgets for different contexts (16 February, 2017)

Speaker: Dr. Christian Frisson

Interaction designers and HCI researchers are expected to have skills for both creating and evaluating systems and interaction techniques. For evaluation phases, they often need to collect information regarding usage of applications and devices, to interpret quantitative and behavioural aspects of users or to provide design guidelines. Unfortunately, it is often difficult to collect users' behaviours in real-world scenarios from existing applications, due to the unavailability of scripting support and access to the source code. For creation phases, they often have to comply with constraints imposed by the interdisciplinary team they are working with and by the diversity of the contexts of usage. For instance, the car industry may decide that dashboards are easier to manufacture and service with controls printed flat or curved rather than mounted as physical controls, even though the body of research has shown that physical controls are more efficient and safer for drivers.

This talk will first present InspectorWidget, an open-source suite which tracks and analyses users' behaviours with existing software and programs. InspectorWidget covers the whole pipeline of software analysis, from logging input events to visual statistics, through browsing and programmable annotation. To achieve this, InspectorWidget combines low-level event logging (e.g. mouse and keyboard events) and high-level screen features (e.g. interface widgets) captured through computer vision techniques. The goal is to provide a tool for designers and researchers to understand users and develop more useful interfaces for different devices.

The talk will then discuss an ongoing project which explores ways to design haptic widgets, such as buttons, sliders and dials, for touchscreens and touch-sensitive surfaces on in-car centre consoles. Touchscreens are now commonly found in cars, replacing the need for physical buttons and switchgear, but there are safety concerns regarding driver distraction due to the loss of haptic feedback. We propose the use of interactive sound synthesis techniques to design and develop effective widgets with haptic feedback capabilities for in-car touchscreens, reducing visual distraction for the driver.


Christian Frisson graduated with an MSc in "Art, Science, Technology (AST)" from Institut National Polytechnique de Grenoble (INPG) and the Association for the Creation and Research on Expression Tools (ACROE), France, including a visiting research internship at the MusicTech group, McGill University, Montreal, Québec, Canada, in 2006. In February 2015, he obtained his PhD degree with Professor Thierry Dutoit at the University of Mons (UMONS), numediart Institute, Belgium, on designing interaction for browsing media collections (by similarity). Since June 2016, he has been a postdoc at Inria Lille, Mjolnir team, designing vibrotactile feedback for dashboard widgets within the H2020 EU project HAPPINESS, whose partners include Alexander Ng and Stephen Brewster from the Multimodal Interaction Group of the University of Glasgow.

Journal Club - Highlights ECCV 2016 and EUCOG 2016 (16 February, 2017)

Speaker: Paul Siebert

Paul will discuss highlights from ECCV 2016 and EUCOG 2016

Network-layer QoE-Fairness for Encrypted Adaptive Video Streams (15 February, 2017)

Speaker: Dr Marwan Fayed

Netflix, YouTube and iPlayer are increasingly targets of the following complaint: "How come my child gets HD streams on her phone, while I'm stuck with terrible quality on my 50 inch TV?" Recent studies observe that competing adaptive video streams generate flows that lead to instability, under-utilization, and unfairness behind bottleneck links. Additional measurements suggest there may also be a negative impact on users' perceived quality of experience as a consequence. Intuitively, application-generated issues should be resolved at the application layer. In this presentation I shall demonstrate that fairness, by any definition, can only be achieved in the network, and moreover that in an increasingly HTTPS world, some form of client interaction is required. In support, a new network-layer 'QoE-fairness' metric will be introduced that reflects user experience. Experiments using our open-source implementation in the home environment reinforce the network layer as the right place to attack the general problem.

Bio: Marwan Fayed received his MA from Boston University and his PhD from the University of Ottawa, in 2003 and 2009 respectively, and in between worked at Microsoft as a member of the Core Reliability Group. He joined the faculty at the University of Stirling, UK in 2009 under the Scottish Informatics and Computer Science Alliance (SICSA) scheme. He recently held the appointment of 'Theme Leader' for networking research in Scotland. His current research interests lie in wireless algorithms, as well as general network, transport, and measurement issues in next-generation edge networks. He is a co-founder of HUBS c.i.c., an ISP focussed on rural communities; a recipient of an IEEE CCECE best paper award; and serves on committees at IEEE and ACM conferences.

A Comparison of Document-at-a-Time and Score-at-a-Time Query Evaluation (14 February, 2017)

Speaker: Joel Mackenzie

We present an empirical comparison between document-at-a-time (DaaT) and score-at-a-time (SaaT) document ranking strategies within a common framework. Although both strategies have been extensively explored, the literature lacks a fair, direct comparison: such a study has been difficult due to vastly different query evaluation mechanics and index organizations. Our work controls for score quantization, document processing, compression, implementation language, implementation effort, and a number of details, arriving at an empirical evaluation that fairly characterizes the performance of three specific techniques: WAND (DaaT), BMW (DaaT), and JASS (SaaT). Experiments reveal a number of interesting findings. The performance gap between WAND and BMW is not as clear as the literature suggests, and both methods are susceptible to tail queries that may take orders of magnitude longer than the median query to execute. Surprisingly, approximate query evaluation in WAND and BMW does not significantly reduce the risk of these tail queries. Overall, JASS is slightly slower than either WAND or BMW, but exhibits much lower variance in query latencies and is much less susceptible to tail query effects. Furthermore, JASS query latency is not particularly sensitive to the retrieval depth, making it an appealing solution for performance-sensitive applications where bounds on query latencies are desirable.
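As a rough illustration of the document-at-a-time strategy being compared (without the pruning optimisations that distinguish WAND and BMW), here is a minimal exhaustive DaaT top-k sketch in Python. The data layout and function name are illustrative choices of mine, not code from the paper.

```python
import heapq

def daat_topk(postings, k):
    """Exhaustive document-at-a-time (DaaT) top-k evaluation.

    postings: one list per query term of (docid, partial_score) pairs,
    each sorted by docid. All lists are advanced in lock-step, so a
    document's full score is known before the next document is touched.
    Returns the top k (score, docid) pairs, best first.
    """
    iters = [iter(plist) for plist in postings]
    heads = []  # min-heap of (docid, list_index, partial_score)
    for i, it in enumerate(iters):
        head = next(it, None)
        if head is not None:
            heapq.heappush(heads, (head[0], i, head[1]))
    topk = []  # min-heap of (score, docid), size capped at k
    while heads:
        doc = heads[0][0]
        score = 0
        # consume every postings list currently positioned on this document
        while heads and heads[0][0] == doc:
            _, i, s = heapq.heappop(heads)
            score += s
            nxt = next(iters[i], None)
            if nxt is not None:
                heapq.heappush(heads, (nxt[0], i, nxt[1]))
        heapq.heappush(topk, (score, doc))
        if len(topk) > k:
            heapq.heappop(topk)  # evict the current lowest score
    return sorted(topk, reverse=True)
```

WAND and BMW add per-term upper-bound bookkeeping so that documents which cannot reach the top-k threshold are skipped rather than fully scored; SaaT systems such as JASS instead store and process postings in impact order.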


Joel is a PhD candidate at RMIT University, Melbourne, Australia. He works with Dr J. Shane Culpepper and Assoc Prof. Falk Scholer on efficient and effective candidate generation for multi-stage retrieval. His research interests include index efficiency, multi-stage retrieval and distributed IR.

The Last of the Big Ones: Crazy Stone, AlphaGo, and Master (14 February, 2017)

Speaker: John O'Donnell

The computer program AlphaGo made history in 2016 by defeating Lee Sedol, one of the top professional go players, in a five game match.  A few weeks ago, an updated version of AlphaGo played 60 games against professionals and won them all.  The current generation of strong go programs use neural networks and Monte Carlo tree search.  These programs have a distinctive playing style and occasionally make astonishing moves, raising questions that are presently the focus of intensive research.  This talk will explore some of these issues, and illustrate them with incidents from the history of go as well as from the recent games by computers.

Unsupervised Event Extraction and Storyline Generation from Text (13 February, 2017)

Speaker: Dr. Yulan He

This talk consists of two parts. In the first part, I will present our proposed Latent Event and Categorisation Model (LECM), an unsupervised Bayesian model for the extraction of structured representations of events from Twitter without the use of any labelled data. The extracted events are automatically clustered into coherent event type groups. The proposed framework has been evaluated on over 60 million tweets and has achieved a precision of 70%, outperforming the state-of-the-art open event extraction system by nearly 6%. The LECM model has been extended to jointly model event extraction and visualisation, which performs remarkably better than both the state-of-the-art event extraction method and a pipeline approach to event extraction and visualisation.

In the second part of my talk, I will present a non-parametric generative model to extract structured representations and evolution patterns of storylines simultaneously. In the model, each storyline is modelled as a joint distribution over locations, organisations, persons, keywords and a set of topics. We further combine this model with the Chinese restaurant process so that the number of storylines can be determined automatically without human intervention. The proposed model is able to generate coherent storylines from news articles.
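The Chinese restaurant process mentioned above is what lets the number of storylines grow with the data: each new item joins an existing cluster with probability proportional to the cluster's size, or opens a new cluster with probability proportional to a concentration parameter. A minimal illustrative sampler (my own sketch, not the paper's model):

```python
import random

def crp_assignments(n, alpha, seed=0):
    """Sample cluster assignments for n items from a Chinese restaurant
    process with concentration parameter alpha.

    Item i joins existing cluster t with probability |t| / (i + alpha)
    and opens a new cluster with probability alpha / (i + alpha), so the
    number of clusters is not fixed in advance.
    """
    rng = random.Random(seed)
    sizes = []   # sizes[t] = number of items in cluster t
    seats = []   # seats[i] = cluster index assigned to item i
    for i in range(n):
        r = rng.random() * (i + alpha)
        chosen = len(sizes)  # default: open a new cluster
        acc = 0.0
        for t, size in enumerate(sizes):
            acc += size
            if r < acc:
                chosen = t
                break
        if chosen == len(sizes):
            sizes.append(1)
        else:
            sizes[chosen] += 1
        seats.append(chosen)
    return seats, sizes
```

In the storyline model the same rich-get-richer prior sits over storylines rather than abstract tables, with the per-storyline distributions over entities and topics supplying the likelihood term.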

Yulan He is a Reader and Director of the Systems Analytics Research Institute at Aston University. She obtained her PhD degree in Spoken Language Understanding in 2004 from the University of Cambridge, UK. Prior to joining Aston, she was a Senior Lecturer at the Open University, a Lecturer at the University of Exeter and a Lecturer at the University of Reading. Her current research interests lie in the integration of machine learning and natural language processing for text mining and social media analysis. Yulan has published over 140 papers, most of which appeared in high-impact journals and at top conferences such as IEEE Transactions on Knowledge and Data Engineering, IEEE Intelligent Systems, KDD, CIKM and ACL. She served as an Area Chair at NAACL 2016, EMNLP 2015, CCL 2015 and NLPCC 2015, and co-organised ECIR 2010 and IAPR 2007.

Inference-Based Automated Probabilistic Programming in Distributed Embedded Node Networks (08 February, 2017)

Speaker: Dr. Mark Post

Driven by ever more demanding applications, modern embedded computing and automation systems have reached unprecedented levels of complexity. Dr. Post's research focuses on applying novel software and hardware architectures to simplify and distribute the structure of robots and other embedded systems, to make them robust and able to operate under uncertainty, and also to allow for more efficient and automated development processes. One way to achieve this is via the unification of programming and data, made possible by using probabilistic abstractions of exact data. In a new methodology for embedded programming developed through this research, exact variables are replaced with random variables and a computation process is defined based on evidence theory and probabilistic inference. This has many advantages, including the implicit handling of uncertainty, a guarantee of deterministic program execution, and the ability to apply both statistical on-line learning and expert knowledge from relational semantic sources. Implementation on real-time systems is made reliable and practical by applying modular and lock-free inter-process communication, semantic introspection and stochastic characterization of processes to build robust embedded networks based on wide-computing concepts. This methodology has a vast array of potential real-world applications, and some aspects have been applied successfully to embedded programming of planetary rovers and agricultural robots.

The Problem of Validation in Systems Engineering (07 February, 2017)

Speaker: Robbie Simpson

Systems Engineering makes extensive use of modelling and analysis methodologies to design and analyse systems. However, it is rare for these methodologies to be effectively validated for correctness or utility. Additionally, the common use of case studies as an implicit validation mechanism is undermined by the lack of validation of these case studies themselves. This talk explores the problem of validation with specific reference to requirements engineering and safety analysis techniques, identifies the main shortcomings and attempts to propose some potential solutions.

intra-systems: TBA (07 February, 2017)

Speaker: Robbie Simpson

FATA Seminar - The complexity of finding and counting sum-free subsets (07 February, 2017)

Speaker: Kitty Meeks

A set A of natural numbers is said to be sum-free if it does not contain distinct x, y and z such that x + y = z.  Sum-free sets have been studied extensively in additive combinatorics (Paul Erdős was particularly interested in these sets) but algorithmic questions relating to sum-free sets have thus far received very little attention. We consider the problem, given a set A, of determining whether A contains a sum-free subset of size at least k.  We show that this problem is NP-complete in general, but is tractable with respect to certain parameterizations; in the cases where the decision problem is tractable, we also consider the complexity of counting all sum-free subsets of size exactly k.

This is joint work (in progress) with Andrew Treglown (University of Birmingham).
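The property under study is easy to state in code. A brute-force sketch of the decision problem (exponential time, so only viable for tiny sets; the parameterized and counting algorithms from the talk are not reproduced here, and the function names are my own):

```python
from itertools import combinations

def is_sum_free(nums):
    """True if nums contains no distinct x, y, z with x + y = z."""
    s = set(nums)
    return not any(x + y in s and len({x, y, x + y}) == 3
                   for x in s for y in s)

def max_sum_free_subset(nums):
    """Largest sum-free subset, found by exhaustive search over all
    subsets from largest to smallest (exponential in len(nums))."""
    nums = list(nums)
    for k in range(len(nums), 0, -1):
        for sub in combinations(nums, k):
            if is_sum_free(sub):
                return set(sub)
    return set()
```

The decision problem from the abstract ("does A contain a sum-free subset of size at least k?") is then `len(max_sum_free_subset(A)) >= k`; the NP-completeness result says no algorithm is expected to do fundamentally better than such exhaustive search in general, which is what motivates the parameterized analysis.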

Research On Network Intrusion Detection Systems and Beyond (06 February, 2017)

Speaker: Dr Kostas Kyriakopoulos

The talk will give an overview of research conducted in the "Signal Processing and Networks" group at Loughborough University, with a strong emphasis on the "Networks" side. We have developed algorithms for fusing cross-layer measurements using the Dempster-Shafer evidence framework to decide whether packets/frames in the network are coming from a malicious source or from the legitimate Access Point. We are currently researching how to infuse this system with contextual information besides the direct measurements from the network. The talk will also discuss other network-related topics, including ontologies for the management of networks, and will briefly introduce the group's expertise in signal processing for defence.
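Dempster's rule of combination, the core operation of the evidence framework mentioned above, multiplies two mass functions pairwise, assigns each product to the intersection of the focal sets, and renormalises away the mass that lands on the empty set (the conflict). The following is a generic Python sketch, not the group's fusion code; the hypothesis names are invented.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.

    m1, m2: dicts mapping frozenset focal elements to masses summing
    to 1. Pairwise mass products go to set intersections; mass on the
    empty intersection (conflict) is renormalised away.
    """
    combined = {}
    conflict = 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources fully contradict")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}
```

For example, two cross-layer measurements that each place partial belief on {"malicious"} (0.6 and 0.7, with the remainder on the whole frame of discernment) combine to a belief of 0.88 that the frame is malicious.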

SICSA DVF Assistant Professor Sam Tobin-Hochstadt "Pycket: A Tracing JIT for a functional language" (01 February, 2017)

Speaker: SICSA Event
SICSA DVF Assistant Professor Sam Tobin-Hochstadt from Indiana University, Bloomington will be giving a talk on "Pycket: A Tracing JIT for a functional language" on Wednesday 1 February at the School of Computing Science, University of Glasgow.

Abstract: Functional languages have traditionally had sophisticated ahead-of-time compilers such as GHC for Haskell, MLton for ML, and Gambit for Scheme. But other modern languages often use JIT compilers, such as Java, Smalltalk, Lua, or JavaScript. Can we apply JIT compilers, in particular the technology of so-called tracing JIT compilers, to functional languages? I will present a new implementation of Racket, called Pycket, which shows that this is both possible and effective. Pycket is very fast on a wide range of benchmarks, supports most of Racket, and even addresses the overhead of gradual typing-generated proxies.

Bio: Sam Tobin-Hochstadt is an Assistant Professor in the School of Informatics and Computing at Indiana University. He has worked on dynamic languages, type systems, module systems, and metaprogramming, including creating the Typed Racket system and popularizing the phrase "scripts to programs". He is a member of the ECMA TC39 working group responsible for standardizing JavaScript, where he co-designed the module system for ES6, the next version of JavaScript. He received his PhD in 2010 from Northeastern University under Matthias Felleisen.

The host of this SICSA DVF is Dr Patrick Maier, University of Glasgow.

Pycket: A Tracing JIT for a functional language (01 February, 2017)

Speaker: Sam Tobin-Hochstadt

Functional languages have traditionally had sophisticated ahead-of-time compilers such as GHC for Haskell, MLton for ML, and Gambit for Scheme. But other modern languages often use JIT compilers, such as Java, Smalltalk, Lua, or JavaScript. Can we apply JIT compilers, in particular the technology of so-called tracing JIT compilers, to functional languages? I will present a new implementation of Racket, called Pycket, which shows that this is both possible and effective. Pycket is very fast on a wide range of benchmarks, supports most of Racket, and even addresses the overhead of gradual typing-generated proxies.

Biography: Sam Tobin-Hochstadt is an Assistant Professor in the School of Informatics and Computing at Indiana University. He has worked on dynamic languages, type systems, module systems, and metaprogramming, including creating the Typed Racket system and popularizing the phrase "scripts to programs." He is a member of the ECMA TC39 working group responsible for standardizing JavaScript, where he co-designed the module system for ES6, the next version of JavaScript. He received his PhD in 2010 from Northeastern University under Matthias Felleisen.

FATA Seminar - Spatial Reasoning about Traffic Safety (31 January, 2017)

Speaker: Sven Linker


Due to the increasing use of automated car controllers, mathematically precise formalisms are needed to describe these controllers and verify their safety. Typical approaches refer to the dynamical behaviour of cars via differential equations. In this way, spatial aspects of cars, like the current position and the space needed for safe braking in case of emergencies, are only available indirectly. However, for the verification of safety properties, e.g. collision freedom, these spatial properties are of inherent importance.
In this talk, I present an approach intended to simplify safety proofs by abstracting away from the concrete dynamics of cars. Within proofs, explicit assumptions about the behaviour of cars have to be used. These assumptions, e.g. that cars are able to calculate their braking distance, can then be instantiated with more detailed approaches.
The contributions of this work are divided into three parts. I present an abstract model of traffic on multi-lane highways which hides the dynamics and only considers a local neighbourhood of each car. Subsequently, I define and briefly explain a modal logic based on this model to specify and verify safety properties of highway traffic.
Finally, I present the application of this logic in the form of a case study exploring minimal constraints for controllers ensuring safety on motorways.

Sven Linker received his PhD on the topic "Proofs for Traffic Safety - Combining Diagrams and Logic" in 2015 from the Carl von Ossietzky University of Oldenburg, Germany. From 2015 to 2016 he was part of the project "The Readability of Proofs in Diagrammatic Logic" at the University of Brighton, UK. Since 2016, he has worked at the University of Liverpool on the project "Science of Sensor Systems Software".
His main research areas are the application of logics to verification and specification of computer systems, especially modal logics and their proof systems, as well as formal diagrammatic systems.

SICSA DVF Assistant Professor Sam Tobin-Hochstadt "Languages as Libraries" (27 January, 2017)

Speaker: SICSA Event
SICSA DVF Assistant Professor Sam Tobin-Hochstadt from Indiana University, Bloomington will be giving a talk on "Languages as Libraries" on Friday 27 January at the University of St Andrews.

Abstract: Programming language design benefits from constructs for extending the syntax and semantics of a host language. While C's string-based macros empower programmers to introduce notational short-hands, the parser-level macros of Lisp encourage experimentation with domain-specific languages. The Scheme programming language improves on Lisp with macros that respect lexical scope. The design of Racket, a descendant of Scheme, goes even further with the introduction of a full-fledged interface to the static semantics of the language. A Racket extension programmer can thus add constructs that are indistinguishable from "native" notation, large and complex embedded domain-specific languages, and even optimizing transformations for the compiler backend. This power to experiment with language design has been used to create a series of sub-languages for programming with first-class classes and modules, numerous languages for implementing the Racket system, and Typed Racket, a complete and fully integrated typed sister language to Racket's untyped base language. In this talk, I'll review the power of Lisp macros for metaprogramming, describe how Scheme introduced lexical scope for macros, and then show how Racket builds upon these foundations to support the development of full-fledged languages as libraries.

Bio: Sam Tobin-Hochstadt is an Assistant Professor in the School of Informatics and Computing at Indiana University. He has worked on dynamic languages, type systems, module systems, and metaprogramming, including creating the Typed Racket system and popularizing the phrase "scripts to programs". He is a member of the ECMA TC39 working group responsible for standardizing JavaScript, where he co-designed the module system for ES6, the next version of JavaScript. He received his PhD in 2010 from Northeastern University under Matthias Felleisen.

The host of this SICSA DVF is Dr Patrick Maier, University of Glasgow.

SICSA DVF Assistant Professor Sam Tobin-Hochstadt "Typed Racket and Gradual Typing" (24 January, 2017)

Speaker: SICSA Event
SICSA DVF Assistant Professor Sam Tobin-Hochstadt from Indiana University, Bloomington will be giving a talk on "Typed Racket and Gradual Typing" on Tuesday 24 January at the Informatics Forum, University of Edinburgh.

Abstract: The trend toward constructing large-scale applications in scripting languages has inspired recent research in gradual typing, which adds types incrementally to existing languages. This idea has also now been adopted in industry, with Typed Clojure, TypeScript, and Facebook's Hack as recent examples. Over the last decade, my collaborators and I have developed Typed Racket, the first practical gradual type system, to enable adding types to existing untyped Racket programs. Building Typed Racket has required work at every level of programming language research, from runtime systems and compilers, to type and contract system design, to IDE tool support, and even to new proof techniques. In this talk, I'll survey this landscape of work, explain how the needs of Typed Racket have driven all of these areas, and discuss future challenges that remain to be tackled.

Bio: Sam Tobin-Hochstadt is an Assistant Professor in the School of Informatics and Computing at Indiana University. He has worked on dynamic languages, type systems, module systems, and metaprogramming, including creating the Typed Racket system and popularizing the phrase "scripts to programs". He is a member of the ECMA TC39 working group responsible for standardizing JavaScript, where he co-designed the module system for ES6, the next version of JavaScript. He received his PhD in 2010 from Northeastern University under Matthias Felleisen.

The host of this SICSA DVF is Dr Patrick Maier, University of Glasgow.

Intra-Systems Seminar (24 January, 2017)

Speaker: Jeremy Singer

Jeremy presents an analysis of beginner Haskell code.

Applying Machine Learning to Data Exploration. (23 January, 2017)

Speaker: Charles Sutton

One of the first and most fundamental tasks in data mining is what we might call data understanding. Given a dump of data, what's in it? If modern machine learning methods are effective at finding patterns in data, then they should be effective at summarizing data sets so as to help data analysts develop a high-level understanding of them.

I'll describe several different approaches to this problem. First, I'll describe a new approach to classic data mining problems, such as frequent itemset mining and frequent sequence mining, using a new principled model from probabilistic machine learning. Essentially, this casts the problem of pattern mining as one of structure learning in a probabilistic model. I'll describe an application to summarizing the usage of software libraries on GitHub.

A second attack on this general problem is based on cluster analysis. A good clustering can help a data analyst to explore and understand a data set, but what constitutes a good clustering may depend on domain-specific and application-specific criteria. I'll describe a new framework for interactive clustering that allows the analyst to examine a clustering and guide it in a way that is more useful for their information need.

Finally, topic modelling has proven to be a highly useful family of methods for data exploration, but it still requires a large amount of specialized effort to develop a new topic model for a specific data analysis scenario. I'll present new results on highly scalable inference for latent Dirichlet allocation based on recently proposed deep learning methods for probabilistic models.

Slides and relevant papers will be available at

Rethinking eye gaze for human-computer interaction (19 January, 2017)

Speaker: Hans Gellersen

Eye movements are central to most of our interactions. We use our eyes to see and guide our actions and they are a natural interface that is reflective of our goals and interests. At the same time, our eyes afford fast and accurate control for directing our attention, selecting targets for interaction, and expressing intent. Even though our eyes play such a central part to interaction, we rarely think about the movement of our eyes and have limited awareness of the diverse ways in which we use our eyes --- for instance, to examine visual scenes, follow movement, guide our hands, communicate non-verbally, and establish shared attention. 

This talk will reflect on the use of eye movement as input in human-computer interaction. Jacob's seminal work showed over 25 years ago that eye gaze is natural for pointing, albeit marred by problems of Midas Touch and limited accuracy. I will discuss new work on eye gaze as input that looks beyond conventional gaze pointing. This includes work on: gaze and touch, where we use gaze to naturally modulate manual input; gaze and motion, where we introduce a new form of gaze input based on the smooth pursuit movement our eyes perform when they follow a moving object; and gaze and games, where we explore social gaze in interaction with avatars and joint attention as multi-user input.
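
The smooth-pursuit idea can be sketched as a correlation test: the system compares recent gaze samples against the trajectory of each moving on-screen target and selects the best-matching one. The sketch below is a hypothetical one-dimensional illustration in the spirit of that technique, not the authors' implementation (real systems correlate both axes over a sliding window):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_pursuit_target(gaze_x, target_trajectories, threshold=0.8):
    """Return the moving target whose horizontal trajectory best correlates
    with the gaze samples, or None if no correlation exceeds the threshold.
    (1-D for brevity; a real system would also correlate the y axis.)"""
    best, best_r = None, threshold
    for name, traj in target_trajectories.items():
        r = pearson(gaze_x, traj)
        if r > best_r:
            best, best_r = name, r
    return best

# The gaze noisily follows the "circle" target moving right.
gaze = [0.0, 1.1, 1.9, 3.2, 3.9, 5.1]
targets = {"circle": [0, 1, 2, 3, 4, 5],   # moving right
           "square": [5, 4, 3, 2, 1, 0]}   # moving left
print(select_pursuit_target(gaze, targets))
```

Because selection is driven by relative motion rather than absolute gaze position, this style of input needs no per-user calibration, which is what makes it attractive beyond conventional gaze pointing.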

Hans Gellersen is Professor of Interactive Systems at Lancaster University. Hans' research interest is in sensors and devices for ubiquitous computing and human-computer interaction. He has worked on systems that blend physical and digital interaction, methods that infer context and human activity, and techniques that facilitate spontaneous interaction across devices. In recent work he is focussing on eye movement as a source of context information and modality for interaction. 

GIST Seminar: Sharing emotions in collaborative virtual environments (19 January, 2017)

Speaker: Arindam Dey

Interfaces for collaborative tasks, such as multiplayer games, can enable effective remote collaboration and enjoyable gameplay. However, in these systems the emotional states of the users are often not communicated properly due to the remoteness. In this talk, I will present two recent pieces of work from the Empathic Computing Lab (UniSA).
In the first work, we investigated for the first time the effects of sharing the emotional state of one collaborator with the other during an immersive Virtual Reality (VR) gameplay experience. We created two collaborative immersive VR games that display the real-time heart rate of one player to the other. The two games elicited different emotions, one joyous and the other scary. We tested the effects of visualizing heart-rate feedback in comparison with conditions where such feedback was absent. Based on subjective feedback, we noticed clear indications of higher positive affect, collaborative communication, and subjective preference when the heart-rate feedback was shown. The games had significant main effects on the overall emotional experience.
In the second work, we explored the effect of different VR games on human emotional responses, measured physiologically and subjectively, in a within-subjects user study. Six different types of VR experience were experienced by 11 participants, and nine emotions were elicited and analyzed from physiological signals. The results indicate that three emotions are dominant when experiencing VR, and that the same emotions are elicited in all the experiences we tested. Subjective and objective measurements of emotion showed similar results, but participants reported experiencing emotions more strongly than the objective measures indicated.

Exploiting Memory-Level Parallelism (18 January, 2017)

Speaker: Dr Timothy M Jones

Many modern data processing and HPC workloads are heavily memory-latency bound. Current architectures and compilers perform poorly on these applications due to the highly irregular nature of the memory access patterns involved. This leads to CPU stalling for the majority of the time. However, on closer inspection, these applications contain abundant memory-level parallelism that is currently unexploited. Data accesses are, in many cases, well defined and predictable in advance, falling into a small set of simple patterns. To exploit them though, we require new methods for prefetching, in hardware and software.

In this talk I will describe some of the work my group has been doing in this area over the past couple of years. First, I'll show a compiler pass to automatically generate software prefetches for indirect memory accesses, a special class of irregular memory accesses often seen in high-performance workloads. Next, I'll describe a dedicated hardware prefetcher that optimises breadth-first traversals of large graphs. Finally, I'll present a generic programmable prefetcher that embeds an array of small microcontroller-sized cores next to the L1 cache in a high-performance processor. Using an event-based programming model, programmers are able to realise performance increases of over 4x by manual creation of prefetch code, or 3.5x for the same application using an automatic compiler pass.

FATA Seminar - Hyper-Heuristics with Graph Transformations (17 January, 2017)

Speaker: Christopher Stone

Hyper-heuristics is a search method for selecting and generating heuristics to solve combinatorial optimisation problems, taking advantage of the abundance of heuristics developed to tackle a wide range of problem classes. Unfortunately, heuristics, and the solutions they operate on, tend to have their own specific representations, both in terms of the underlying data structure and in the taxonomy used to describe their approach. This talk will present an approach based on graphs and graph transformations that is able to model multiple problem classes using the same data structure. This will include a discussion of the trade-offs of this approach and an overview of the latest empirical results.

Bio: Christopher L. Stone received his MEng degree in Software Engineering from Edinburgh Napier University. He is currently a PhD student under the supervision of Emma Hart and Ben Paechter at the same university. His main research interests are related to computational intelligence, with a focus on the representation of NP-hard problems (routing, packing and scheduling), the generation of heuristics, and graph transformations.

The Role of Relevance in Sponsored Search. (16 January, 2017)

Speaker: Fabrizio Silvestri

Sponsored search aims at retrieving the advertisements that on the one hand meet users’ intent reflected in their search queries, and on the other hand attract user clicks to generate revenue. Advertisements are typically ranked by their expected revenue, computed as the product of their predicted probability of being clicked (namely, clickability) and their advertiser-provided bid. The relevance of an advertisement to a user query is implicitly captured by the predicted clickability of the advertisement, assuming that relevant advertisements are more likely to attract user clicks. However, this approach easily biases the ranking toward advertisements having rich click history. This may incorrectly lead to showing irrelevant advertisements whose clickability is not accurately predicted due to lack of click history. Another side effect is that new advertisements that may be highly relevant are never given a chance, due to their lack of click history. To address this problem, we explicitly measure the relevance between an advertisement and a query without relying on the advertisement’s click history, and present different ways of leveraging this relevance to improve user search experience without reducing search engine revenue. Specifically, we propose a machine learning approach that relies solely on text-based features to measure the relevance between an advertisement and a query. We discuss how the introduced relevance can be used in four important use cases: pre-filtering of irrelevant advertisements, recovering advertisements with little history, improving clickability prediction, and re-ranking of the advertisements on the final search result page. Offline experiments using large-scale query logs and online A/B tests demonstrate the superiority of the proposed click-oblivious relevance model and the important roles that relevance plays in sponsored search.
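
The ranking model described above (expected revenue as predicted clickability times bid) and the proposed relevance pre-filter can be illustrated with a small sketch. The ad names, scores and the tuple layout are invented for illustration:

```python
def rank_ads(ads, relevance_floor=0.0):
    """Rank ads by expected revenue (predicted clickability x bid),
    optionally pre-filtering ads whose query relevance falls below a
    floor -- the click-oblivious filter the talk argues for.
    Each ad is a tuple (name, predicted_ctr, bid, relevance)."""
    eligible = [a for a in ads if a[3] >= relevance_floor]
    return sorted(eligible, key=lambda a: a[1] * a[2], reverse=True)

ads = [
    ("veteran_ad", 0.09, 1.00, 0.30),  # rich click history, weak relevance
    ("new_ad",     0.02, 1.50, 0.90),  # little history, highly relevant
]
# Pure expected-revenue ranking favours the veteran ad (0.09 > 0.03)...
print([a[0] for a in rank_ads(ads)])
# ...while a relevance floor removes it despite its click history.
print([a[0] for a in rank_ads(ads, relevance_floor=0.5)])
```

The toy numbers show the bias the talk targets: the relevant new ad can never accumulate clicks while the revenue ranking keeps it hidden, which is exactly where a click-oblivious relevance signal helps.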

SICSA Conference 2017 Committee Meeting (16 January, 2017)

Speaker: SICSA Event
A meeting of the SICSA PhD Conference 2017 organising committee. For more details about the conference, please see SICSA PhD Conference.

Working toward computer generated music traditions (12 January, 2017)

Speaker: Bob Sturm

I will discuss research aimed at making computers intelligent and sensitive enough to work with music data, whether acoustic or symbolic. Invariably, this includes a lot of work in applying machine learning to music collections in order to divine distinguishing and identifiable characteristics of practices that defy strict definition. Many of the resulting machine music listening systems appear to be musically sensitive and intelligent, but their fraudulent ways can be revealed when they are used to create music in the styles they have been taught to identify. Such "evaluation by generation” is a powerful way to gauge the generality of what a machine has learned to do. I will present several examples, focusing in particular on our work applying deep LSTM networks to modelling folk music transcriptions, and ultimately generating new music traditions.



SICSA Networking & Systems Scottish Networking Event (SCONE) (12 January, 2017)

Speaker: SICSA Event
The 17th SCONE meeting will be held at the University of Glasgow on Thursday 12 January 2017. SCONE is the Scottish Networking Event, an informal gathering of networking and systems researchers in and around Scotland. The goal of the meeting is to encourage discussion and interaction between those working on systems-related things in Scotland. PhD students and RAs are encouraged to offer short (20 minute) talks on their work, and there will be time for announcements of interest to the community (job adverts, CFPs, data sets, tools, etc.). The goal is to be informal, to give students a chance to practice talks in a friendly atmosphere, and to give everyone a chance to meet and chat to others working in similar areas. The schedule for the day is:
12:00 - 13:00 Lunch
13:00 - 14:30 Talks and announcements (Richard Cziva and Yuchen Zhao)
14:30 - 15:00 Coffee
15:00 - 16:00 Interactive session
16:00 - 16:20 Break
16:20 - 17:00 Talks (Stephen McQuistin)
17:00 Close (likely followed by drinks and dinner)
Attendance is free, but registration is required (so we can order enough lunch and coffee…). Please email Colin Perkins to register, or to offer a talk. For more information please see the SCONE web-site.

Studies of Disputed Authorship (09 January, 2017)

Speaker: Michael P. Oakes

Automatic author identification is a branch of computational stylometry, which is the computer analysis of writing style. It is based on the idea that an author’s style can be described by a unique set of textual features, typically the frequency of use of individual words, but sometimes considering the use of higher level linguistic features. Disputed authorship studies assume that some of these features are outside the author’s conscious control, and thus provide a reliable means of discriminating between individual authors. Many studies have successfully made use of high frequency function words like “the”, “of” and “and”, which tend to have grammatical functions rather than reveal the topic of the text. Their usage is unlikely to be consciously regulated by authors, but varies substantially between authors, texts, and even individual characters in Jane Austen’s novels. Using stylometric techniques, Oakes and Pichler (2013) were able to show that the writing style of the document “Diktat für Schlick” was much more similar to that of Wittgenstein than that of other philosophers of the Vienna Circle. Michael Oakes is currently researching the authorship of “The Dark Tower”, normally attributed to C. S. Lewis.
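
The function-word idea can be illustrated with a toy fingerprint: count the relative frequency of a handful of high-frequency function words and compare texts by cosine similarity. This is a deliberately small sketch with invented sample texts; real stylometric studies use far larger word lists and statistics such as Burrows' Delta:

```python
import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it"]

def style_vector(text):
    """Relative frequency of each function word -- a crude stylometric
    fingerprint of the kind disputed-authorship studies build on."""
    words = text.lower().split()
    counts = Counter(words)
    n = len(words) or 1
    return [counts[w] / n for w in FUNCTION_WORDS]

def cosine_similarity(u, v):
    """Cosine similarity between two frequency vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

author_a = "the cat sat on the mat and the dog lay in the sun"
author_b = "cats sit mats dogs lie suns gleam brightly always everywhere"
disputed = "the fox ran in the field and the hen hid in that barn"

sim_a = cosine_similarity(style_vector(disputed), style_vector(author_a))
sim_b = cosine_similarity(style_vector(disputed), style_vector(author_b))
print(sim_a > sim_b)  # the disputed text patterns with author A
```

The point of using function words is visible even here: the comparison is driven by "the", "and" and "in" rather than by any topic words, so it survives a complete change of subject matter.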

Cyber Security Christmas Lecture: Glasgow (16 December, 2016)

Speaker: SICSA Event
SICSA sponsored Cyber Security Lecture is taking place at Glasgow Caledonian University on Friday 16 December 2016 at 12.30pm. The Christmas lectures have been running since 2012, always the week before Christmas and always in universities across Scotland. The lectures are aimed at school pupils, with the intention of inspiring a new generation into careers in cyber security and the digital sector. Supported by Scottish Government, Scottish Enterprise, SICSA, Young Scot, (ISC)2, BSides and Skills Development Scotland, the free lectures complement the three themes on which the new National Progression Award in Cyber Security is based: Digital Forensics, Ethical Hacking and Data Security. Cyber security is a fast-evolving area of technology, with many more well-paid and fulfilling jobs than can be filled for years to come, and Scotland is taking a global lead in protecting and educating. These lectures, provided by industry experts, will bring the subject area to life in an educational but fun session. For information and how to register please see the Eventbrite Page.

Cyber Security Christmas Lecture: Glasgow (16 December, 2016)

Speaker: SICSA Event
SICSA sponsored Cyber Security Lecture is taking place at Glasgow Caledonian University on Friday 16 December 2016 at 10.00am. For information and how to register please see the Eventbrite Page.

Cyber Security Christmas Lecture: Edinburgh (15 December, 2016)

Speaker: SICSA Event
SICSA sponsored Cyber Security Lecture is taking place at the University of Edinburgh on Thursday 15 December 2016 at 12.30pm. For information and how to register please see the Eventbrite Page.

Deep Learning Journal Club (15 December, 2016)

Speaker: Rod Murray-Smith

Rod will debrief us on NIPS 2016.

IDI journal club (15 December, 2016)

Speaker: Rod & Bjørn

Rod & Bjørn will discuss what was new at last week's NIPS conference 

You can see the proceedings: 

And tutorials:  

New test environments for e.g. Reinforcement learning:



Best paper

Value Iteration Networks


Interesting papers

Weight normalisation


Cyber Security Christmas Lecture: Edinburgh (15 December, 2016)

Speaker: SICSA Event
SICSA sponsored Cyber Security Lecture is taking place at the University of Edinburgh on Thursday 15 December 2016 at 10.00am. For information and how to register please see the Eventbrite Page.

SpaceTime - A fresh view on Parallel Programming (14 December, 2016)

Speaker: Prof Sven-Bodo Scholz

Traditionally, programs are specified in terms of data structures and successive modifications of these. This separation dictates at what time which piece of data is located in what space, be it main memory, disc or registers. When aiming at high-performance, parallel execution of programs, it turns out that the choice of this time/space separation can have a vast impact on the performance that can be achieved. Consequently, a lot of work has been spent on compiler technology for identifying dependencies between data, and on techniques for rearranging code for improved locality with respect to both time and space. As it turns out, the programmer's choice of data structures often limits what can be achieved by such optimisation techniques. In this talk, we argue that a new way of formulating parallel programs, based on a unified view of space and time, not only matches typical scientific specifications much better; it also increases the re-usability of programs and, most importantly, enables more radical space-time optimisations through compilers.

Cyber Security Christmas Lecture: Dundee (14 December, 2016)

Speaker: SICSA Event
SICSA sponsored Cyber Security Lecture is taking place at the University of Dundee on Wednesday 14 December 2016 at 12.30pm. For information and how to register please see the Eventbrite Page.

Cyber Security Christmas Lecture: Dundee (14 December, 2016)

Speaker: SICSA Event
SICSA sponsored Cyber Security Lecture is taking place at the University of Dundee on Wednesday 14 December 2016 at 10.00am. For information and how to register please see the Eventbrite Page.

Reviewing the Systems Curriculum Review (13 December, 2016)

Speaker: Colin Perkins

Over the last few months, the Section has been engaged in a review of our undergraduate curriculum and teaching. This talk will outline the changes we’re proposing, and what we hope to achieve by doing so.

FATA Seminar - Between Subgraph Isomorphism and Maximum Common Subgraph (13 December, 2016)

Speaker: Craig Reilly

When a small pattern graph does not occur inside a larger target graph, we can ask how to find “as much of the pattern as possible” inside the target graph. In general, this is known as the maximum common subgraph problem, which is much more computationally challenging in practice than subgraph isomorphism. We introduce a restricted alternative, where we ask if all but k vertices from the pattern can be found in the target graph. This allows for the development of slightly weakened forms of certain invariants from subgraph isomorphism which are based upon degree and number of paths. We show that when k is small, weakening the invariants still retains much of their effectiveness. We are then able to solve this problem on the standard problem instances used to benchmark subgraph isomorphism algorithms, despite these instances being too large for current maximum common subgraph algorithms to handle. Finally, by iteratively increasing k, we obtain an algorithm which is also competitive for the maximum common subgraph problem.
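
One flavour of weakened degree invariant can be sketched as follows: if all but k pattern vertices are to map into the target, drop the k highest-degree pattern vertices (the hardest to place), allow each remaining vertex to lose up to k neighbours, and then test the usual sorted degree-sequence domination. This is a simplified illustration in the spirit of the talk, not the invariant the work actually uses:

```python
def weakened_degree_filter(pattern_degrees, target_degrees, k):
    """Necessary-condition filter: could all but k pattern vertices map
    into the target?  Passing does NOT prove an embedding exists;
    failing rules one out."""
    # Drop the k largest pattern degrees; each survivor may lose up to
    # k neighbours when the k vertices are deleted, hence the -k slack.
    p = sorted(pattern_degrees, reverse=True)[k:]
    t = sorted(target_degrees, reverse=True)
    if len(p) > len(t):
        return False
    return all(max(pd - k, 0) <= td for pd, td in zip(p, t))

# Pattern: a 4-clique (degrees 3,3,3,3).  Target: a path on 4 vertices
# (degrees 1,2,2,1).  Deleting one pattern vertex leaves a triangle,
# which the path cannot contain, and the filter detects this...
print(weakened_degree_filter([3, 3, 3, 3], [1, 2, 2, 1], k=1))  # False
# ...but deleting two leaves a single edge, which it cannot exclude.
print(weakened_degree_filter([3, 3, 3, 3], [1, 2, 2, 1], k=2))  # True
```

As in the talk, the filter stays cheap while k is small: the slack grows with k, so for large k the weakened invariant rejects almost nothing and loses its pruning power.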

Cyber Security Christmas Lecture: Aberdeen (13 December, 2016)

Speaker: SICSA Event
SICSA sponsored Cyber Security Lecture is taking place at Robert Gordon University on Tuesday 13 December 2016 at 12.30pm. For information and how to register please see the Eventbrite Page.

Cyber Security Christmas Lecture: Aberdeen (13 December, 2016)

Speaker: SICSA Event
SICSA sponsored Cyber Security Lecture is taking place at Robert Gordon University on Tuesday 13 December 2016 at 10.00am. For information and how to register please see the Eventbrite Page.

Satisfying User Needs or Beating Baselines? Not always the same. (12 December, 2016)

Speaker: Walid Magdy

Information retrieval (IR) is mainly concerned with retrieving relevant documents to satisfy the information needs of users. Many IR tasks involving different genres and search scenarios have been studied for decades. Typically, researchers aim to improve retrieval effectiveness beyond the current “state-of-the-art”. However, revisiting the modeling of the IR task itself is often essential before seeking improvement of results. This includes reconsidering the assumed search scenario, the approach used to solve the problem, or even the conducted evaluation methodology. In this talk, some well-known IR tasks are explored to demonstrate that beating the state-of-the-art baseline is not always sufficient. Novel modeling, understanding, or approach to IR tasks could lead to significant improvements in user satisfaction compared to just improving “objective” retrieval effectiveness. The talk includes example IR tasks, such as printed document search, patent search, speech search, and social media search.

Cyber Security Christmas Lecture: Inverness (12 December, 2016)

Speaker: SICSA Event
SICSA sponsored Cyber Security Lecture is taking place at An Loch, Inverness Campus on Monday 12 December 2016 at 12.30pm. For information and how to register please see the Eventbrite Page.

Cyber Security Christmas Lectures (12 December, 2016)

Speaker: SICSA Event
SICSA sponsored Cyber Security Lecture is taking place at An Loch, Inverness Campus on Monday 12 December 2016. For information and how to register please see the Eventbrite Page.

SICSA Artificial Intelligence Research Theme Meet-Up (09 December, 2016)

Speaker: SICSA Event
The first general meet-up of the new SICSA Artificial Intelligence Research Theme will take place on Friday 9 December at Edinburgh Napier University. Artificial Intelligence (AI) is a broad area: it covers reasoning, planning, knowledge representation, sensory perception, language understanding, learning, optimisation and other related areas, and has been approached in a variety of ways, ranging from biologically inspired ideas (including neural networks), to logic (of various sorts), to emulation of experts. The aims of this, and future, theme meetings are to: strengthen research collaborations between SICSA members working in AI and related disciplines, leading to long-lasting interdisciplinary research partnerships; stimulate improved collaboration between the various research sub-groups to identify new and emerging research areas; and share knowledge, expertise and tools to enhance AI research. The schedule for the day is:
0930-1000: Arrival and coffee
1000-1010: Welcome address from the Principal of Napier University
1010-1030: Introduction to the day: aim of the meeting, and outline of the day: LSS and EH
1030-1115: Research Snapshots session 1 (PechaKucha [1])
1115-1140: Coffee
1140-1225: Research Snapshots session 2 (PechaKucha)
1225-1250: Collation of delegates' research topics (Well-sorted [2])
1250-1400: Lunch
1400-1415: Research topics: grouping (Well-sorted stage 2)
1415-1500: Research Snapshots session 3 (PechaKucha)
1500-1505: Research topic groups revealed (Well-sorted stage 3)
1505-1600: Coffee and group discussions
1600-1640: Feedback from groups
1640-1700: Summing up, and discussion of the next stage for the AI Theme
1700 onwards: delegates retire to a nearby hostelry to continue discussions.
[1] For information on PechaKucha, see [2] For more information on Well-sorted, see
The Research Theme Leaders for Artificial Intelligence are Professor Emma Hart and Professor Leslie Smith.

Knights Landing, MCDRAM, and NVRAM: The changing face of HPC technology (07 December, 2016)

Speaker: Mr Adrian Jackson

The hardware used in HPC systems is becoming much more diverse than we have been used to in recent times. Intel's latest Xeon Phi processor, the Knights Landing (KNL), is one example of such change; however, bigger changes in memory technologies and hierarchies are on the way. In this talk I will outline our experiences with the KNL, how future memory technologies are likely to impact the hardware in HPC systems, and what these changes might mean for users.

Performance Evaluation for CloudSim - Cloud Computing Simulator (06 December, 2016)

Speaker: Dhahi Alshammari

Much cloud computing research is performed using simulators, and there are many simulators available. One of the most common is "CloudSim", which is widely used as a cloud research tool. This talk will briefly review the CloudSim system and its various extensions, which provide additional usability features and improved simulation fidelity. I will then present results of an empirical study evaluating the precision of CloudSim by comparing it with actual test-bed results from the Glasgow Raspberry Pi Cloud infrastructure.

FATA Seminar - Probabilistic and Stochastic Hybrid Automata and their Abstractions (06 December, 2016)

Speaker: Ruth Hoffman

With the wide applicability of probabilistic and stochastic hybrid systems in the real world, it is now more important than ever to be able to verify these systems for safety and reliability. Hybrid systems can be found anywhere, from thermostats to processes passing messages. We will discuss the different types of hybrid systems and their discrete abstractions. The probabilistic hybrid systems we will be focusing on are autonomous unmanned aerial vehicles. The abstracted structures allow for existing quantitative and model checking tools to verify and analyse the system.

Deep Learning Journal Club - Fast_RCNN tutorial (01 December, 2016)

Speaker: Catherine Higham

This meeting will comprise demos (MATCONVNET using pre-trained models) for

 1. Fully Convolutional Networks for Semantic Segmentation;

 2. Fast R-CNN (Region-based Convolutional Network)

See also Faster R-CNN, though the demo covers only Fast R-CNN.

To put these demos in context, pages 35-47 (semantic segmentation), 48-61 (object detection), 62 (Fast R-CNN) and 68 (Faster R-CNN) in the following “Convolutional Networks for Computer Vision Applications” document are useful.

Erlyberly - Erlang tracing for the masses (30 November, 2016)

Speaker: Mr Andy Till

The BEAM virtual machine has flexible and powerful tooling for introspection, statistics and debugging without affecting the running application. Erlyberly is an ongoing project to lower the barrier to entry for using these capabilities, focusing on tracing.

Tech Start-Up Meet-Up (29 November, 2016)

Speaker: Philip Petersen and Stefan Raue

Join us for the next in our series of meet-ups for aspiring technology entrepreneurs.  Our meet-ups bring you a range of inspiring speakers - from undergrad students who are just starting out with their first app venture, all the way to experienced CEOs who have built successful businesses, and everything in between.  The message is simple: creating your own start-up is hard work, heaps of fun, and an unparalleled learning experience.  All delivered with free pizza and refreshments, and lots of opportunities to network with other prospective entrepreneurs.

This month our speakers are:



Stefan Raue is a technology entrepreneur. After working with blue-chip companies like Vodafone and Bayer, he started his own company Bizvento in 2013. Bizvento offers scalable web and mobile technology for a global customer base in the event management industry. Developing and growing Bizvento provided Stefan with a wealth of insights, networks and experiences that he is now applying to the successful launch of GRN - a wearable and data analytics company based in Glasgow. In his talk Stefan will speak about the importance of the right team composition, start-up responsibilities, corporate life, support, and funding, and discuss why "playing" start-up can be dangerous.



Philip Petersen has worked in the B2B ICT industry for over 30 years.  His experience ranges from working in large, international corporations to small businesses and bootstrapped start-ups, up to board level.  He has raised investment, has self-funded and he has exited.  Having moved to live in Scotland recently, he is getting involved in helping start-ups and scale-ups to grow.

Philip will talk about some of the things he has learned from the successes and failures in his career so far.  He will highlight the importance of sales and understanding the customer because without selling there is no business and there will be no investment.

Raspberry Pi based sensor platform for a smart campus (29 November, 2016)

Speaker: Dejice Jacob

In a sensor network, using sensor nodes with significant compute capability can enable flexible data collection, processing and reaction. This can be done using commodity single-board computers. In this talk, we will describe the initial deployment, the software architecture and some preliminary analysis.

FATA Seminar - More Semantics More Robust: Improving Android Malware Classifiers (29 November, 2016)

Speaker: Wei Chen

Abstract: Automatic malware classifiers often perform badly on the detection of new malware, i.e., their robustness is poor. We study machine-learning-based mobile malware classifiers and reveal one reason: the input features used by these classifiers cannot capture general behavioural patterns of malware instances. We extract the best-performing syntax-based features like permissions and API calls, and some semantics-based features like happen-befores and unwanted behaviours, and train classifiers using popular supervised and semi-supervised learning methods. By comparing their classification performance on industrial datasets collected across several years, we demonstrate that using semantics-based features can dramatically improve the robustness of malware classifiers.

Bio: Dr Wei Chen is a Research Associate in the School of Informatics at the University of Edinburgh. He received his PhD in Type Theory from the University of Nottingham, supervised by Prof. Roland C. Backhouse. In 2012 Wei worked with Prof. Martin Hofmann on type-based verification in Munich. He has held his current RA position with Prof. David Aspinall since 2013, focusing on learning policies for mobile security. Wei's main research interests are in formal methods, in particular type theory, combinatorial games, and Büchi automata with their applications in program analysis and verification. He is currently working on combining formal methods and machine learning to help with mobile security.

SICSA HCI All Hands Meeting 2016 (29 November, 2016)

Speaker: SICSA Event
The Scottish Informatics and Computer Science Alliance (SICSA) Human-Computer Interaction (HCI) community meets yearly to celebrate the strength of its research and to discuss how to move forward. This year we want to have attendees from all HCI or HCI-related groups in Scotland, since we are looking at how to move further and make HCI and your research stronger. We have invited two keynote speakers: Professor Albrecht Schmidt (University of Stuttgart) and Professor Alan Dix (Birmingham). We'll have everyone represented, and there will be plenty of opportunities to network (with and without a pint in hand). To register to attend the SICSA HCI All Hands Meeting, please visit the Eventbrite page - registration is free! The meeting is organised by Miguel (St Andrews) and Martin (Strathclyde), the SICSA HCI Theme co-leaders. Their goal is to build on the extensive research base of the SICSA HCI groups in Scotland to make it even more collaborative, successful, and visible to the world.

Supporting Evidence-based Medicine with Natural Language Processing (28 November, 2016)

Speaker: Dr. Mark Stevenson

The modern evidence-based approach to medicine is designed to ensure that patients are given the best possible care by basing treatment decisions on robust evidence. But the huge volume of information available to medical and health policy decision makers can make it difficult for them to decide on the best approach. Much of the current medical knowledge is stored in textual format and providing tools to help access it represents a significant opportunity for Natural Language Processing and Information Retrieval. However, automatically processing documents in this domain is not straightforward and doing so successfully requires a range of challenges to be overcome, including dealing with volume, ambiguity, complexity and inconsistency.  This talk will present a range of approaches from Natural Language Processing that support access to medical information. It will focus on three tasks: Word Sense Disambiguation, Relation Extraction and Contradiction Identification. The talk will outline the challenges faced when developing approaches for accessing information contained in medical documents, including the lack of available gold standard data to train systems. It will show how existing resources can help alleviate this problem by providing information that allows training data to be created automatically.

SHIP: The Single-handed Interaction Problem in Mobile and Wearable Computing (24 November, 2016)

Speaker: Hui-Shyong Yeo

Screen sizes on devices are becoming smaller (e.g. smartwatches and music players) and larger (e.g. phablets and tablets) at the same time. Each of these trends can make devices difficult to use with only one hand (e.g. the fat-finger or reachability problem). This Single-Handed Interaction Problem (SHIP) is not new, but it has been evolving along with the growth of larger and smaller interaction surfaces. The problem is exacerbated when the other hand is occupied (encumbered) or not available (missing fingers/limbs). The use of voice commands or wrist gestures can be less robust, or perceived as awkward in public.

This talk will discuss several projects (RadarCat UIST 2016, WatchMI MobileHCI 2016, SWIM and WatchMouse) in which we are working towards achieving/supporting effective single-handed interaction for mobile and wearable computing. The work focuses on novel interaction techniques that have not been explored thoroughly for interaction purposes, using ubiquitous sensors that are widely available such as IMUs, optical sensors and radar (e.g. Google Soli, soon to be available).


Hui-Shyong Yeo is a second-year PhD student in SACHI, University of St Andrews, advised by Prof. Aaron Quigley. Before that he worked as a researcher at KAIST for one year. Yeo has a wide range of interests within the field of HCI, including topics such as wearables, gestures, mixed reality and text entry. Currently he is focusing on single-handed interaction for his dissertation topic. He has published in conferences such as CHI, UIST, MobileHCI (honourable mention) and SIGGRAPH, and in journals such as MTAP and JNCA.

Visit his homepage or follow him on Twitter @hci_research.

Demo of Google Soli Radar and Single Handed Smartwatch interaction (24 November, 2016)

Speaker: Hui-Shyong Yeo

This demo session will present the Google Soli radar and single-handed smartwatch interaction systems.



Research in Human-Computer Interaction: Methodology Matters (24 November, 2016)

Speaker: Scott MacKenzie

This talk will explore the what and how of research in HCI.  We'll elaborate on four definitions of research (what) and use these to distinguish research from design and engineering.  We'll then examine three research methodologies (how): observational, correlational, and experimental.  These methodologies are contrasted in terms of relevance vs. precision and internal validity vs. external validity.  Two key properties of research not encompassed by definitions are the imperative to publish and the need for replicability.  Methodology -- getting it right -- plays a central role in both publishing and replicability.  One thesis in the talk is that methodology not only matters but, in many respects, is all that matters.  We'll examine the expectations for the method section of a research paper and point out the most common methodological shortcomings in published research papers. 

Scott MacKenzie's research is in human-computer interaction with an emphasis on human performance measurement and modeling, experimental methods and evaluation, interaction devices and techniques, text entry, touch-based input, language modeling, accessible computing, gaming, and mobile computing. He has more than 160 peer-reviewed publications in the field of Human-Computer Interaction (including more than 30 from the ACM's annual SIGCHI conference) and has given numerous invited talks over the past 25 years. In 2015, he was elected into the ACM SIGCHI Academy. That same year he was the recipient of the Canadian Human-Computer Communication Society's (CHCCS) Achievement Award. Since 1999, he has been Associate Professor of Computer Science and Engineering at York University, Canada.

Health technologies for all: designing for use "in the wild" (23 November, 2016)

Speaker: Prof. Ann Blandford

Abstract: There is a plethora of technologies for helping people manage their health and wellbeing: from self-care of chronic conditions (e.g. renal disease, diabetes) and palliative care at end of life through to supporting people in developing mindfulness practices or managing weight or exercise. In some cases, digital health technologies are becoming consumer products; in others, they remain under the oversight of healthcare professionals but are increasingly managed by lay people. How (and whether) these technologies are used depends on how they fit into people’s lives and address people’s values. In this talk, I will present studies on how and why people adopt digital health technologies, the challenges they face, how they fit them into their lives, and how to identify design requirements for future systems. There is no one-size-fits-all design solution for any condition: people have different lifestyles, motivations and needs. Appropriate use depends on fitness for purpose. This requires either customisable solutions or solutions that are tailored to different user populations.

Biography: Ann Blandford is Professor of Human–Computer Interaction at University College London and Director of the UCL Institute of Digital Health. Her expertise is in human factors for health technologies, and particularly how to design systems that fit well in their context of use. She is involved in several research projects studying health technology design, patient safety and user experience. She has published widely on the design and use of interactive health technologies, and on how technology can be designed to better support people’s needs.

Data Structures as Closures (23 November, 2016)

Speaker: Prof Greg Michaelson

In formalising denotational semantics, Strachey introduced a higher-order update function for the modelling of stores, states and environments. This function relies solely on atomic equality types, lambda abstractions and conditionals to represent stack-disciplined association sequences as structured closures, without recourse to data structure constructs like lists.

Here, we present higher-order functions that structure closures to model queue-disciplined, linearly ordered and tree-disciplined look-up functions, again built from moderately sugared pure lambda functions. We also discuss their type properties and practical implementation.
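The flavour of Strachey's construction can be sketched in Python, which is illustrative only and not material from the talk: an environment is simply a function from names to values, and `update` wraps an old environment in a new closure so that the most recent binding shadows earlier ones.

```python
def empty(name):
    # The empty environment: every lookup fails.
    raise KeyError(name)

def update(env, key, value):
    # Return a new closure that shadows `key` in `env`.
    # A stack-disciplined association sequence: the most recent
    # binding wins; older bindings remain reachable underneath.
    def lookup(name):
        return value if name == key else env(name)
    return lookup

# Build an environment purely from closures, no list in sight.
env = update(update(empty, "x", 1), "x", 2)
# env("x") → 2 (the later binding shadows the earlier one)
```

The queue, ordered and tree disciplines in the talk vary the way `lookup` threads the search through the stacked closures.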

Masters Prize Winners for session 2015-16 (23 November, 2016)

Speaker: None

Masters Prizes 2015-16

Masters CS+ - Class Prize for the best student in all CS+ programmes:
Florian Diethard Deuerlein, MSc Computing Science

Masters CS+ - Project Prize for the best project in all CS+ programmes:
Ruben Giaquinta, MSc Computing Science

Masters IT/SD - Class Prize for the best student:
Almin Vehabovic, MSc Information Technology

Masters IT/SD - Project Prize for the best project:
Piotr Franciszek Nadczuk, MSc Software Development

Grace Hopper Prize for the best female student (supplied by Graeme Burnett, Enhyper Ltd):
Natascha Sabrina Harth, MSc Data Science

intra-systems: TBA (22 November, 2016)

Speaker: John O'Donnell

FATA Seminar - On parameterized algorithms for polynomial-time solvable problems (22 November, 2016)

Speaker: André Nichterlein

Parameterized complexity analysis is a flourishing field dealing with the exact solvability of "intractable" problems. Appropriately parameterizing polynomial-time solvable problems helps to reduce unattractive polynomial running times. In particular, this "FPT in P" approach sheds new light on what makes a problem far from being solvable in linear time, in the same way as classical FPT algorithms help in illuminating what makes an NP-hard problem far from being solvable in polynomial time. Surprisingly, this very interesting research direction has been explored too little so far; the known results are rather scattered and do not systematically refer to or exploit the toolbox of parameterized algorithm design.

In this talk, I will introduce the field of "FPT in P". To this end, I will outline known results, explain some of the corresponding techniques, and highlight similarities and differences to the classical design of parameterized algorithms for NP-hard problems.

IDA coffee breaks (22 November, 2016)

Speaker: everyone

A chance to catch up informally with members of the IDA section in the Computing Science Common Room.

Human Computation for Entity-Centric Information Access (21 November, 2016)

Speaker: Dr. Gianluca Demartini

Human Computation is a novel approach used to obtain manual data processing at scale by means of crowdsourcing. In this talk we will start introducing the dynamics of crowdsourcing platforms and provide examples of their use to build hybrid human-machine information systems. We will then present ZenCrowd: a hybrid system for entity linking and data integration problems over linked data showing how the use of human intelligence at scale in combination with machine-based algorithms outperforms traditional systems. In this context, we will then discuss efficiency and effectiveness challenges of micro-task crowdsourcing platforms including spam, quality control, and job scheduling in crowdsourcing.

SICSA Conference on Big Data Science Innovations: Prospects in Smart Cities, Media and Governance (18 November, 2016)

Speaker: SICSA Event
SICSA Conference on Big Data Science Innovations: Prospects in Smart Cities, Media and Governance will take place on Friday 18th November at the University of Stirling. Big Data Science has undoubtedly gained relevance in various sectors of society, and efforts to harness its potential have become a priority for industry, government, and academia alike. The recent Scottish Government-driven Smart Cities Alliance initiative is a prime example of an imminent shift towards future data-driven societies, whereby big data empowers: a) city governments, including through e-governance and open data platforms; b) journalists, for example through the adoption of web metrics to develop news agendas, big data/content management systems and embryonic forms of artificial intelligence for newsrooms; and c) citizens, including through their active engagement in compiling, disseminating and interpreting big data, subsequently leading to the development of smart applications with the potential to enhance civic society. Last year's announcement by the Scottish Infrastructure and Cities Secretary, Keith Brown, that a 15 million euro (£11.1 million) fund would be allocated to help make Scotland's cities "smarter" through the use of new cutting-edge technological infrastructure has made it clear that the development and management of big data analytic systems is to be a priority within the strategic planning of the seven Scottish cities forming the Scottish Cities Alliance. Within this ecosystem, it is of paramount importance that academia becomes a complementary actor assisting in the design, evolution, explanation and evaluation of the technological infrastructures mediating public life.
This first of its kind, one-day SICSA funded Conference seeks to create networking space where data scientists, technologists and scholars involved with journalism, governance, and civic life can deliberate on best ways to boost the development and exploitation of big data analytics, particularly in Scotland. The goal is to impart an understanding of the strengths and limitations of some of the key data science technologies as they impact the future development of smart cities, open governance and data journalism. In particular, we will explore implications of current approaches to data analysis within city government, local news, academic public research and civic engagement, and ask whether they are compatible with a healthy, democratic and self-sustainable agenda of innovation. We invite multi-disciplinary academics, PhD research students, technologists, policy makers, media practitioners, community developers and think tanks interested in Big Data Science. The conference is intended to stimulate discussions on the following key themes: The current state-of-the-art within the emerging Scottish (and global) socio-technical ecosystem in smart cities, open governance, data journalism and civic innovation. The potential risks and benefits to this ecosystem emerging from current trends in data science and artificial intelligence NOTE for SICSA PhD students SICSA, as part of its sponsorship of this Conference, is covering the full registration fee for ALL (SICSA and non-SICSA funded) PhD students in computer science departments of SICSA member Scottish universities (for a full list of SICSA Universities, see: The number of SICSA students is limited and a decision on ranking may be taken if necessary. Note that SICSA sponsored PhD students will be responsible for their own travel arrangements and expenses to get to Stirling – they should be able to access local support from their own Schools/Departments to support such travel. 
All SICSA PhD students are required to include a short statement on their research interests and achievements (no more than 200 words - including career stage and publication details if any) - at the time of submitting their Abstract. Registration will open via Eventbrite soon. Please see the Conference Web-Site for more details.  

Implementing Ethics for a Mobile App Deployment (17 November, 2016)

Speaker: John Rooksby

In this talk I’ll discuss a paper I’ll be presenting at OzCHI 2016.

Abstract: "This paper discusses the ethical dimensions of a research project in which we deployed a personal tracking app on the Apple App Store and collected data from users with whom we had little or no direct contact. We describe the in-app functionality we created for supporting consent and withdrawal, our approach to privacy, our navigation of a formal ethical review, and navigation of the Apple approval process. We highlight two key issues for deployment-based research. Firstly, that it involves addressing multiple, sometimes conflicting ethical principles and guidelines. Secondly, that research ethics are not readily separable from design, but the two are enmeshed. As such, we argue that in-action and situational perspectives on research ethics are relevant to deployment-based research, even where the technology is relatively mundane. We also argue that it is desirable to produce and share relevant design knowledge and embed in-action and situational approaches in design activities.”

Authors: John Rooksby, Parvin Asadzadeh, Alistair Morrison, Claire McCallum, Cindy Gray, Matthew Chalmers. 

SICSA DVF Professor Philip J Scott “Coordinatization of Countable MV algebras" (16 November, 2016)

Speaker: SICSA Event
SICSA DVF Professor Philip J Scott from the University of Ottawa will be giving a talk titled “Coordinatization of Countable MV algebras” at Heriot-Watt University. Abstract: The algebras of many-valued Lukasiewicz logics (MV algebras) as well as the algebras of quantum measurement (effect algebras) have undergone major development since the 1980s and 1990s; they have connections with a wide range of areas, from logic to operator algebras to mathematical physics. I will give a brief introduction to MV algebras, as well as the more general world of effect algebras. Time permitting, I hope to illustrate these notions by sketching recent results (with Mark Lawson) on coordinatization of countable MV algebras using inverse semigroup theory. The structures involved, Boolean inverse monoids, have recently arisen in areas related to non-commutative Stone duality, aperiodic tilings, etc. We prove that every countable MV algebra is isomorphic to the lattice of principal ideals of certain Boolean inverse monoids. The specific class involved in the proof, AF inverse monoids, corresponds to AF C*-algebras and arises from Bratteli diagrams of countable dimension groups. If there is time, further new directions by F. Wehrung, D. Mundici, et al. will be discussed. Bio: P. J. Scott is a mathematical logician working in category theory, proof theory, and theoretical computer science. In 1986 he published the book Introduction to Higher Order Categorical Logic (Cambridge University Press) with J. Lambek, which has been highly influential both in the development of categorical logic and in its applications in theoretical computer science. In particular, the book establishes the close connections between various type theories, categories, and logics. It thus motivated later works on using category theory and related machinery in programming languages, as well as operational and denotational semantics.
Professor Scott is currently Associate Editor of the Cambridge journal Mathematical Structures in Computer Science, and a Coordinating Editor of the North-Holland journal Annals of Pure and Applied Logic. In Canada, his research funding comes from NSERC (Natural Sciences and Engineering Research Council of Canada). Since the early 1990s, Prof. Scott has published foundational papers in areas relating categorical logic to theoretical computer science. Chris Heunen is hosting Professor Philip Scott’s visit to Scotland and the local organiser for this talk is Laura Ciobanu Radomirovic

Automatic detection of parallel code: dependencies and beyond (16 November, 2016)

Speaker: Mr Stan Manilov

Automatic parallelisation is an old research topic, but unfortunately, it has always been over-promising and under-performing. In this talk, we'll look at the main approaches towards automatically detecting parallelism in legacy sequential code, and we'll follow with some fresh ideas we're working on, aiming to bring us beyond the ubiquitous dependence analysis.

Device Comfort for Information Accessibility (15 November, 2016)

Speaker: Tosan Atele-Williams

Device Comfort is an augmented notion of trust that embodies a relationship between a device, its owner and the environment, with the device able to act, advise, encourage, and reason about everyday interactions, including a minutely precise comprehension of the information management and personal security of the device owner. The growing privacy and security needs of an increasingly intuitive, interactive and interconnected society motivate Device Comfort as an information security methodology based on trust reasoning. This talk presents an information accessibility architecture, based on the Java security sandbox, that uses the Device Comfort methodology; it further looks at how information can be classified based on trust ratings and sensitivity, and how everything within this definition is confined to trusted zones or dimensions.

FATA Seminar - Max Weight Clique (15 November, 2016)

Speaker: Patrick Prosser

In the maximum clique problem, we are given a simple graph (with vertices and edges), and we are to find a largest set of vertices such that all pairs of vertices in that set are adjacent. In the maximum weight clique problem (MWC), vertices have weights, and we are to find a set of pairwise adjacent vertices such that the sum of the weights of those vertices is as large as possible. In my talk I will present an exact algorithm for this problem and present some real-world problems (one very close to home) that are in fact instances of MWC. I'll review the current state of empirical studies of MWC and suggest future directions of study.
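For readers unfamiliar with the problem, a naive exhaustive search pins down the definition; the exact algorithms discussed in the talk are far more sophisticated than this illustrative sketch, whose data and names are invented.

```python
from itertools import combinations

def max_weight_clique(vertices, weight, edges):
    """Exhaustive search for the maximum weight clique.

    vertices: list of vertex ids
    weight:   dict mapping vertex -> weight
    edges:    set of frozensets {u, v}, one per edge
    """
    best, best_w = set(), 0
    for r in range(1, len(vertices) + 1):
        for cand in combinations(vertices, r):
            # A clique: every pair of chosen vertices is adjacent.
            if all(frozenset((u, v)) in edges
                   for u, v in combinations(cand, 2)):
                w = sum(weight[v] for v in cand)
                if w > best_w:
                    best, best_w = set(cand), w
    return best, best_w

weights = {1: 2, 2: 3, 3: 4, 4: 10}
edges = {frozenset(e) for e in [(1, 2), (1, 3), (2, 3), (3, 4)]}
clique, total = max_weight_clique([1, 2, 3, 4], weights, edges)
# → ({3, 4}, 14): the triangle {1, 2, 3} weighs only 9
```

Note that the heaviest clique need not be the largest one, which is what makes the weighted variant interesting.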

IDA coffee breaks (15 November, 2016)

Speaker: everyone

A chance to catch up informally with members of the IDA section in the Computing Science Common Room.

SICSA DEMOfest 2016 (11 November, 2016)

Speaker: SICSA Event
DEMOfest is our annual technology showcase of leading Informatics and Computer Science research from Scottish Universities, and it creates an environment for industry partners and academics to come together and identify opportunities for: collaborative innovation; studentships, placements and recruitment; technology licensing and consultancy; and feasibility and proof-of-concept studies. DEMOfest 2016 takes place on 11th November 2016 (4-7pm) at the Technology and Innovation Centre, University of Strathclyde. The event is organised in partnership with ScotlandIS - Scotland's trade body for the ICT sector. This free event is open to businesses large and small; the public sector; academics and research students. The 2016 event will feature: keynote presentations on cyber-security and big data; 50 researcher exhibitions on big data, cyber-security, user experience, robotics and autonomous systems, artificial intelligence, and networks and the cloud; and exhibitions from key stakeholders in the technology sector. We have opportunities for Event Partners at the 2016 event who wish to exhibit in the main hall. If you would like to raise the profile of your brand at DEMOfest 2016, please get in touch. Thank you to our 2016 Event Partners.

Control Theoretical Models of Pointing (11 November, 2016)

Speaker: Rod Murray-Smith

I will present an empirical comparison of four models from manual control theory on their ability to model targeting behaviour by human users using a mouse: McRuer's Crossover, Costello's Surge, second-order lag (2OL), and the Bang-bang model. Such dynamic models are generative, estimating not only movement time, but also pointer position, velocity, and acceleration on a moment-to-moment basis. We describe an experimental framework for acquiring pointing actions and automatically fitting the parameters of mathematical models to the empirical data. We present the use of time-series, phase space and Hooke plot visualisations of the experimental data, to gain insight into human pointing dynamics. We find that the identified control models can generate a range of dynamic behaviours that captures aspects of human pointing behaviour to varying degrees. Conditions with a low index of difficulty (ID) showed poorer fit because their unconstrained nature leads naturally to more dynamic variability. We report on characteristics of human surge behaviour in pointing.

We report differences in a number of controller performance measures, including Overshoot, Settling time, Peak time, and Rise time. We describe trade-offs among the models. We conclude that control theory offers a promising complement to Fitts' law based approaches in HCI, with models providing representations and predictions of human pointing dynamics which can improve our understanding of pointing and inform design.

Deep Learning Journal Club - PyMC tutorial (11 November, 2016)

Speaker: John Williamson

John Williamson will cover probabilistic programming in Python. He will quickly go over using PyMC to put together (very) simple probabilistic models by simply writing down the graphical model, and then perform inference on them with an MCMC sampler. The materials can be found at as a notebook.
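As a taste of what such inference involves under the hood, here is a minimal hand-rolled Metropolis sampler for inferring a Gaussian mean. PyMC automates this kind of sampling (plus model construction and tuning) once the graphical model is written down; the model and data below are invented purely for illustration.

```python
import math
import random

def metropolis(logp, x0, n=5000, step=0.5, seed=0):
    # Random-walk Metropolis: propose a nearby point, accept it
    # with probability min(1, p(prop)/p(x)), otherwise stay put.
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        prop = x + rng.gauss(0, step)
        if math.log(rng.random()) < logp(prop) - logp(x):
            x = prop
        samples.append(x)
    return samples

# Toy model: unknown mean mu with an N(0, 10) prior and a
# unit-variance Gaussian likelihood for the observed data.
data = [4.8, 5.1, 5.3, 4.9, 5.2]

def logp(mu):
    log_prior = -mu * mu / (2 * 10 ** 2)
    log_like = -sum((d - mu) ** 2 for d in data) / 2
    return log_prior + log_like

# Discard the first 1000 samples as burn-in, then average.
posterior_mean = sum(metropolis(logp, 0.0)[1000:]) / 4000
```

The posterior mean lands close to the sample mean of the data, as expected when the prior is weak.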

BIG RA Event (10 November, 2016)

Speaker: Prof Quintin Cutts, Dr Alice Miller, Prof Simon Gay

The BIG RA event is an opportunity for RAs to get together, learn some useful information, and then socialise in a relaxing atmosphere. The event will consist of two parts: seminar and bowling.

Seminar (SAWB/422) -- all are welcome! (including PhD students and academics)
                2pm – Prof Quintin Cutts – Teaching opportunities and recognition within the School
                2.30pm -- Dr Alice Miller -- Athena Swan: role and activities
                3pm – Prof Simon Gay – Promotion at Glasgow University

Bowling (Glasgow Bowl, Springfield Quay, Glasgow G5 8NP)
                Time: 5.30pm

Please, get in touch with either Natalia ( or Waqar ( if for some reason you haven't booked a place for the bowling yet.

SICSA sponsored Research Theme Event: Scottish Programming Languages Seminar (SPLS) (09 November, 2016)

Speaker: SICSA Event
Scottish Programming Languages Seminar (SPLS) The Scottish Programming Languages Seminar is an informal meeting for the discussion of any aspect of programming languages. The next SPLS will take place on Wednesday 9 November at the University of Strathclyde. Information and updates about the November edition of SPLS will be sent via the SPLS Mailing List. Registration Please register using this form. Time and place 12:00–17:45, 9 November, McCance Building, (Room 301) University of Strathclyde. Programme 12.00 Lunch (provided) 13.00 Consistency of Quine’s NF using nominal techniques - Jamie Gabbay 14.00 Coffee 14.30 Relating Channels and Actor-based Languages in Concurrent Lambda-Calculi - Simon Fowler 15.00 Provably Correct Transformation of Specifications into Programs - Martin Ward 15.30 Update on new SICSA Research Themes and funding - Katya Komendantskaya 15.45 Coffee 16.15 Type-Driven Design of Communicating Systems using Idris - Jan de Muijnck-Hughes 16.45 The essence of Frank programming - Craig McLaughlin 17.15 Irrelevant classical logic in Agda - Stephen Dolan 17.45 Pub For more information please see

SPLS (09 November, 2016)

Speaker: Multiple

The Scottish Programming Languages Seminar is an informal meeting for the discussion of any aspect of programming languages. The next SPLS will take place on Wednesday 9 November at the University of Strathclyde.



12.00 Lunch (provided)
13.00 Consistency of Quine’s NF using nominal techniques - Jamie Gabbay
14.00 Coffee
14.30 Relating Channels and Actor-based Languages in Concurrent Lambda-Calculi - Simon Fowler
15.00 Provably Correct Transformation of Specifications into Programs - Martin Ward
15.30 Update on new SICSA Research Themes and funding - Katya Komendantskaya
15.45 Coffee
16.15 Type-Driven Design of Communicating Systems using Idris - Jan de Muijnck-Hughes
16.45 The essence of Frank programming - Craig McLaughlin
17.15 Irrelevant classical logic in Agda - Stephen Dolan
17.45 Pub


Dynamically Estimating Mean Task Runtimes (08 November, 2016)

Speaker: Patrick Maier

The AJITPar project aims to automatically tune skeleton-based parallel
programs such that the task granularity falls within a range that
promises decent performance: Tasks should run long enough to amortise
scheduling overheads, but not too long.

In this talk, I will sketch how AJITPar uses dynamic cost models to
accurately estimate mean task runtimes, despite irregular task sizes.
The key is random scheduling and robust linear regression.

(Joint work with Magnus Morton and Phil Trinder.)
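The combination of random scheduling and robust linear regression can be sketched as follows. This is only an illustrative example on synthetic data, not AJITPar's actual implementation; the robust fit used here is the Theil–Sen estimator (median of pairwise slopes), which tolerates irregular outlier tasks.

```python
import random
import statistics

def theil_sen(xs, ys):
    """Robust linear fit: slope = median of pairwise slopes,
    intercept = median of the residual offsets."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    m = statistics.median(slopes)
    b = statistics.median(y - m * x for x, y in zip(xs, ys))
    return m, b

random.seed(0)
# Tasks scheduled in random order; true runtime ~ 2*size + 5,
# with a few irregular (outlier) tasks injected.
sizes = [random.uniform(1, 100) for _ in range(200)]
times = [2 * s + 5 + random.gauss(0, 1) for s in sizes]
for i in range(0, 200, 20):
    times[i] += random.uniform(50, 200)

m, b = theil_sen(sizes, times)
mean_size = sum(sizes) / len(sizes)
print("estimated mean task runtime:", m * mean_size + b)
```

Because the fit is median-based, the 5% of outlier tasks barely perturb the estimated cost model.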

FATA Seminar - Binary session types for psi-calculi (08 November, 2016)

Speaker: Hans Hüttel

The framework of psi-calculi introduced by Bengtson et al. makes it possible to give a general account of variants of the pi-calculus. We use this framework to describe a generic session type system for variants of the pi-calculus. In this generic system, standard properties, including fidelity, hold at the level of the framework and are then guaranteed to hold when the generic system is instantiated.

We show that our system can capture existing systems, including the session type system due to Gay and Hole, a type system for progress due to Vieira and Vasconcelos, and a refinement type system due to Baltazar et al.  The standard fidelity property is proved at the level of the generic system, and automatically holds when the system is instantiated.

IDA coffee breaks (08 November, 2016)

Speaker: everyone

A chance to catch up informally with members of the IDA section in the Computing Science Common Room.

Analysis of the Cost and Benefits of Search Interactions (07 November, 2016)

Speaker: Dr. Leif Azzopardi

Interactive Information Retrieval (IR) systems often provide various features and functions, such as query suggestions and relevance feedback, that a user may or may not decide to use. The decision to take such an option has associated costs and may lead to some benefit. Thus, a savvy user would take decisions that maximise their net benefit. In this talk, we will go through a number of formal models which examine the costs and benefits of various decisions that users, implicitly or explicitly, make when searching. We consider and analyse the following scenarios: (i) how long should a user's query be? (ii) should the user pose a specific or a vague query? (iii) should the user take a suggestion or re-formulate? (iv) when should a user employ relevance feedback? and (v) when would the "find similar" functionality be worthwhile to the user? To this end, we analyse a series of cost-benefit models exploring a variety of parameters that affect the decisions at play. Through the analyses, we are able to draw a number of insights into different decisions, provide explanations for observed behaviours and generate numerous testable hypotheses. This work not only serves as a basis for future empirical work, but also as a template for developing other cost-benefit models involving human-computer interaction.
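The flavour of such cost-benefit reasoning can be illustrated with a toy calculation. The cost parameters and gains below are hypothetical placeholders, not values from the speaker's models: the point is only that the rational choice is the option with the higher net benefit.

```python
# Hypothetical cost-benefit sketch: compare re-formulating a query
# (a typing cost per term) against accepting a query suggestion
# (a single click), given each option's expected gain.

TYPE_COST_PER_TERM = 2.0   # seconds per typed term (assumed)
CLICK_COST = 0.5           # seconds to accept a suggestion (assumed)

def net_benefit(gain, cost):
    return gain - cost

def reformulate(terms, expected_gain):
    return net_benefit(expected_gain, TYPE_COST_PER_TERM * terms)

def take_suggestion(expected_gain):
    return net_benefit(expected_gain, CLICK_COST)

# A savvy user picks the option with the higher net benefit.
options = {"reformulate (3 terms)": reformulate(3, 8.0),
           "take suggestion": take_suggestion(6.5)}
best = max(options, key=options.get)
print(best, options[best])  # the suggestion wins despite its lower gain
```

Varying the assumed costs shifts the decision boundary, which is exactly the kind of parameter exploration the talk describes.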

This talk is based on the recent ICTIR 2016 paper with Guido Zuccon:

SICSA DVF Professor Philip J Scott "An introduction to many-valued logics and effect algebras" (02 November, 2016)

Speaker: SICSA Event
SICSA DVF Professor Philip J Scott from the University of Ottawa will be giving a talk titled "An introduction to many-valued logics and effect algebras" at the University of Strathclyde on Wednesday 2 November.

Abstract: The algebras of many-valued Lukasiewicz logics (MV algebras) as well as the theory of quantum measurement (Effect algebras) have undergone considerable development in the 1980s and 1990s; they now constitute important research fields, with connections to several contemporary areas of mathematics, logic, and theoretical computer science. Both subjects have recently attracted considerable interest among groups of researchers in categorical logic and foundations of quantum computing. I will give a leisurely introduction to MV algebras (and their associated logics), as well as the more general world of effect algebras. If time permits, we will also illustrate some new results (with Mark Lawson, Heriot-Watt) on coordinatization of some concrete MV-algebras using inverse semigroup theory.

Bio: P. J. Scott is a mathematical logician working in category theory, proof theory, and theoretical computer science. In 1986 he published the book Introduction to Higher Order Categorical Logic (Cambridge University Press) with J. Lambek, which has been highly influential both in the development of categorical logic, and in its applications in theoretical computer science. In particular, the book establishes the close connections between various type theories, categories, and logics. It thus motivated later works on using category theory and related machinery in programming languages, as well as operational and denotational semantics. Professor Scott is currently Associate Editor of the Cambridge journal Mathematical Structures in Computer Science, and a Coordinating Editor of the North-Holland journal Annals of Pure and Applied Logic. In Canada, his research funding comes from NSERC (Natural Sciences and Engineering Research Council of Canada). Since the early 1990s, Prof. Scott has published foundational papers in areas relating categorical logic to theoretical computer science.

Chris Heunen is hosting Professor Philip Scott’s visit to Scotland.

Image processing on FPGAs with a DSL and dataflow transformations (02 November, 2016)

Speaker: Dr Rob Stewart

FPGAs are chips that can be reconfigured to exactly match the structure
of a specific algorithm. They are faster than CPUs and need less power
than GPUs, and hence are well suited for remote image processing needs.
They are however notoriously difficult to program, which is often done
by hardware experts working at a very low level. This excludes algorithm
designers across a range of real world domains from exploiting FPGA
technology. Moreover, time and space optimisation opportunities found in
compilers of high level languages cannot be applied to low level
hardware descriptions.

This talk will be in three parts. 1) I will present RIPL, our image
processing FPGA DSL. It comprises algorithmic skeletons influenced by
stream combinator languages, meaning the RIPL compiler is able to
generate space efficient hardware. 2) I will demonstrate our compiler
based dataflow transformations framework, which optimises the dataflow
IR form of RIPL programs before they are synthesised to FPGAs. 3) I will
describe the FPGA based smart camera architecture that RIPL programs
slot into, which is used for evaluation.

SICSA DVF Dr Thomas Bolander “Learning to Act: Qualitative Learning of Deterministic Action Models" (01 November, 2016)

Speaker: SICSA Event
Dr Thomas Bolander, Technical University of Denmark (DTU) will be giving a talk at the University of Edinburgh on “Learning to Act: Qualitative Learning of Deterministic Action Models".

ABSTRACT: In this talk we address the problem of learnability of action models in the context of dynamic epistemic logic. Dynamic epistemic logic is a very expressive formalism for reasoning about (higher-order) knowledge of agents, and for reasoning about the dynamics of such knowledge under the execution of actions. It thereby provides a very expressive formalism for epistemic planning: planning in which the agents are enriched with the ability to do (higher-order) reasoning about their own knowledge and ignorance, and the knowledge and ignorance of other agents. The ultimate goal of the current work is to integrate learning of actions via observations into epistemic planning. We consider two basic learnability criteria in our setting: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while arbitrary (non-deterministic) actions require more learning power---they are identifiable in the limit. We then move on to a particular learning method, i.e., learning via update, which proceeds via restriction of a space of events within a learning-specific action model. We show how this method can be adapted to learn conditional and non-conditional action models. This is joint work with Nina Gierasimczuk.

BIO: Thomas Bolander is an associate professor at the Technical University of Denmark (DTU) in Copenhagen. His research interests include logic, artificial intelligence, social intelligence, multi-agent systems and automated planning. Of special interest is the modelling of social phenomena and social intelligence, with the aim of creating computer systems that can interact intelligently with humans and other computer systems. His recent research focus has been on epistemic planning: enriching the theories of automated planning with the powerful and expressive concepts and structures from dynamic epistemic logic.

HOST: Ron Petrick (

Scaling robots and other stuff with Erlang (01 November, 2016)

Speaker: Natalia Chechina

I’m going to give this talk at the end of November at the BuildStuff developer conferences in Vilnius (Lithuania) and Kiev (Ukraine). So it’s a bit skewed towards the developer community rather than the research community. Any feedback will be very much appreciated.


I’ll talk about scalability and fault tolerance features of distributed Erlang. In particular, what makes it so good for large scale distributed applications on commodity hardware, where devices are inherently non-reliable and can disappear and re-appear at any moment.


The talk is based on experience of developing Scalable Distributed Erlang (SD Erlang -- a small extension of distributed Erlang for distributed scalability) and integrating Erlang into robotics. I’ll share the rationale behind the design decisions for SD Erlang, lessons learned, advantages, limitations, and plans for further development, and then talk about the benefits of Erlang in distributed robotics, initial findings, and plans.

FATA Seminar - "Almost stable" matchings in the Hospitals / Residents problem with Couples (01 November, 2016)

Speaker: David Manlove

The Hospitals / Residents problem with Couples (HRC) models the allocation of intending junior doctors to hospitals, where couples are allowed to submit joint preference lists over pairs of (typically geographically close) hospitals. In this context we seek a stable matching of doctors to hospitals, but for some instances, such a matching may not exist.  We thus consider MIN BP HRC, the problem of finding a matching that is "as stable as possible" (i.e., admits the minimum number of blocking pairs).  We present some new complexity results for this problem - in general it is NP-hard and difficult to approximate.  We then present the first Integer Programming (IP) and Constraint Programming (CP) models for MIN BP HRC.  Finally, we discuss an empirical evaluation of these models applied to randomly-generated instances of the problem.  We find that on average, the CP model is about 1.15 times faster than the IP model, and when presolving is applied to the CP model, it is on average 8.14 times faster.  We further observe that the number of blocking pairs admitted by a solution is very small, i.e., usually at most 1, and never more than 2, for the (28,000) instances considered.
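The objective MIN BP HRC minimises, the number of blocking pairs, is easy to state in code. The sketch below counts blocking pairs for a couples-free Hospitals / Residents instance with made-up toy preference lists (handling couples, the hard part of HRC, is deliberately omitted).

```python
# Toy Hospitals/Residents instance (no couples). Preference lists
# and capacities are illustrative, not from the talk.
res_pref = {"r1": ["h1", "h2"], "r2": ["h1", "h2"], "r3": ["h2", "h1"]}
hosp_pref = {"h1": ["r1", "r2", "r3"], "h2": ["r2", "r3", "r1"]}
capacity = {"h1": 1, "h2": 1}

def blocking_pairs(matching):
    """(r, h) blocks if r prefers h to r's assignment (or is unmatched)
    and h has a free post or prefers r to one of its assignees."""
    assigned = {h: [r for r, m in matching.items() if m == h]
                for h in hosp_pref}
    blocks = []
    for r, prefs in res_pref.items():
        cur = matching.get(r)
        better = prefs if cur is None else prefs[:prefs.index(cur)]
        for h in better:
            hp = hosp_pref[h]
            if (len(assigned[h]) < capacity[h]
                    or any(hp.index(r) < hp.index(a) for a in assigned[h])):
                blocks.append((r, h))
    return blocks

# This matching admits no blocking pair, i.e. it is stable.
print(blocking_pairs({"r1": "h1", "r2": "h2", "r3": None}))  # []
```

A "most stable" matching is then one minimising `len(blocking_pairs(m))` over all matchings `m`, which the talk's IP and CP models encode exactly.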

IDA coffee breaks (01 November, 2016)

Speaker: everyone

A chance to catch up informally with members of the IDA section in the Computing Science Common Room.

I'm an information scientist - let me in! (31 October, 2016)

Speaker: Martin White

For the last 46 years Martin has been a professional information scientist, though often in secret. Since founding Intranet Focus Ltd he has found that awareness of research into topics such as information behaviour, information quality and information seeking is close to zero among his clients. This is especially true in information retrieval. In his presentation Martin will consider why this is the case, what the impact might be and what (if anything) should and could be done to change this situation.

TechMeetup October: How come none of you are using Cobol? (26 October, 2016)

Speaker: Douglas McCallum

Our speaker tonight will be Douglas McCallum asking "How come none of you are using Cobol?"

Thanks to our sponsors Cultivate, SkyScanner and Glasgow University for making the event possible.

Send any questions to @KevinBrolly, @aaronbassett or @wattid
The event starts at 6:30pm on the 5th Floor, 18 Lilybank Gardens, Glasgow University, G12 8QQ.


SICSA DVF Professor Philip J Scott "From Goedel to Lambek: studies in the foundations of logic and computation" (26 October, 2016)

Speaker: SICSA Event
SICSA DVF Professor Philip Scott from the University of Ottawa will be giving a talk entitled "From Goedel to Lambek: studies in the foundations of logic and computation" on Wednesday 26 October at Heriot-Watt University.

Abstract: In this talk, I want to re-examine some foundations of mathematics and computability theory, based on more recent results in type theory and categorical logic. We shall focus on some themes surrounding computability: What is a computable function? What are "natural" theories of computable functions? What is truth and what are Goedel’s Incompleteness Theorems? Finally, if time permits, I would like to discuss a candidate for an "ideal" model for a moderate constructivist, allowing us to reconcile various competing foundational philosophies. Many of these issues come from my early work with my late colleague Joachim Lambek (McGill).

Bio: P. J. Scott is a mathematical logician working in category theory, proof theory, and theoretical computer science. In 1986 he published the book Introduction to Higher Order Categorical Logic (Cambridge University Press) with J. Lambek, which has been highly influential both in the development of categorical logic, and in its applications in theoretical computer science. In particular, the book establishes the close connections between various type theories, categories, and logics. It thus motivated later works on using category theory and related machinery in programming languages, as well as operational and denotational semantics. Professor Scott is currently Associate Editor of the Cambridge journal Mathematical Structures in Computer Science, and a Coordinating Editor of the North-Holland journal Annals of Pure and Applied Logic. In Canada, his research funding comes from NSERC (Natural Sciences and Engineering Research Council of Canada). Since the early 1990s, Prof. Scott has published foundational papers in areas relating categorical logic to theoretical computer science.

The host of Professor Philip Scott’s visit to Scotland is Chris Heunen.

The Missing Link! A new Skeleton for Evolutionary Multi-Agent Systems in Erlang (26 October, 2016)

Speaker: Prof Kevin Hammond

Evolutionary multi-agent systems (EMAS) play a critical role in many artificial intelligence applications that are in use today. This talk will describe a new parallel pattern for parallel EMAS computations, and its associated skeleton implementation, written in Erlang using the Skel library. The skeleton enables us to flexibly capture a wide variety of concrete evolutionary computations that can exploit the same underlying parallel implementation. The use of the skeleton is shown on two different evolutionary computing applications: i) computing the minimum of the Rastrigin function; and ii) solving an urban traffic optimization problem. We can obtain very good speedups (up to 142.44× the sequential performance) on a variety of different parallel hardware from Raspberry Pis to large-scale multicores and Xeon Phi accelerators, while requiring very little parallelisation effort.
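The first benchmark mentioned, minimising the Rastrigin function, is a standard evolutionary-computing test case. Below is a sketch of the function plus a toy (1+1) evolutionary loop; the real skeleton runs many such agents in parallel in Erlang, whereas this single-agent Python version is only for illustration.

```python
import math
import random

def rastrigin(xs, A=10.0):
    """Rastrigin benchmark: highly multimodal, global minimum 0 at the origin."""
    return A * len(xs) + sum(x * x - A * math.cos(2 * math.pi * x) for x in xs)

# Toy (1+1) evolutionary search: mutate, keep the candidate if it improves.
random.seed(1)
best = [random.uniform(-5.12, 5.12) for _ in range(2)]
for _ in range(20000):
    cand = [x + random.gauss(0, 0.1) for x in best]
    if rastrigin(cand) < rastrigin(best):
        best = cand
print(round(rastrigin(best), 3))
```

A single agent like this easily gets trapped in one of Rastrigin's many local minima, which is precisely why population-based EMAS approaches (and their parallel skeletons) are attractive.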

Power, Precision and EPCC (25 October, 2016)

Speaker: Blair Archibald

I have recently returned from a summer working at EPCC, one of the largest high performance computing (HPC) centres in the UK. In this talk I'll give a whirlwind tour of what I got up to during my time there!

I'll start by describing EPCC itself and how it fits into the wider HPC community. Then we will dive into two of the projects I was involved in over the summer.

Firstly, the Adept project, which tackles the challenges presented by the need for energy-efficient computing. This project relies heavily on custom hardware to gain fine-grained knowledge of power usage. We will see how energy scales with parallel efficiency, the potential hidden cost of programming languages, and some interesting future research directions.

Next, the ExaFLOW project, aimed at providing the next generation of computational fluid dynamics codes (ready for the "Exa-scale" era). We will dive into mixed precision analysis and discover how we can analyse the floating-point behaviour of scientific codes by way of binary

FATA Seminar - Modularity of Random Graphs (25 October, 2016)

Speaker: Fiona Skerman

An important problem in network analysis is to identify highly connected components or `communities'. Most popular clustering algorithms work by approximately optimising modularity. Given a graph G, the modularity of a partition of the vertex set measures the extent to which edge density is higher within parts than between parts; the maximum modularity q*(G) of G is the maximum of the modularity over all partitions of V(G), and takes a value in the interval [0,1), where larger values indicate a more clustered graph.
Knowledge of the maximum modularity of random graphs helps determine the significance of a division into communities/vertex partition of a real network. We investigate the maximum modularity of Erdos-Renyi random graphs and find there are three different phases of the likely maximum modularity. This is joint work with Prof. Colin McDiarmid.

Bio: Fiona is a postdoc at Bristol University after a doctorate with Colin McDiarmid in Oxford. She has a particular interest in identifying community structure in networks and also more broadly in phase transitions, random graphs, network coding and positional games.

IDA coffee breaks (25 October, 2016)

Speaker: everyone

A chance to catch up informally with members of the IDA section in the Computing Science Common Room.

From Robotic Ecologies to Internet of Robotic Things: Artificial Intelligence and Software Engineering Issues (19 October, 2016)

Speaker: Dr Mauro Dragone

Building smart spaces combining IoT technology and robotic capabilities is an important and extended challenge for EU R&D&I, and a key enabler for a range of advanced applications, such as home automation, manufacturing, and ambient assisted living (AAL). In my talk I will provide an overview of robotic ecologies, i.e. systems made up of sensors, actuators and (mobile) robots that cooperate to accomplish complex tasks. I will discuss the Robotic Ecology vision and highlight how it shares many similarities with the Internet of Things (IoT): the ideal aim on both fronts is that arbitrary combinations of devices should be able to be deployed in everyday environments, where they can efficiently provide useful services. However, while this has the potential to deliver a range of disruptive services and address some of the limitations of current IoT efforts, their effective realization necessitates both novel software engineering solutions and artificial intelligence methods to simplify their large-scale application in real-world settings. I will illustrate these issues by focusing on the results of the EU project RUBICON ( RUBICON built robotic ecologies that can learn to adapt to changing and evolving requirements with minimum supervision. The RUBICON approach builds upon a unique combination of methods from cognitive robotics, machine learning, wireless sensor networks and software engineering. I will summarise the lessons learned by adopting such an approach and outline promising directions for future developments.



Mauro Dragone is Assistant Professor with the Research Institute of Signals, Sensors and Systems (ISSS), School of Engineering & Physical Sciences at Heriot-Watt University, Edinburgh Centre for Robotics. Dr. Dragone gained more than 12 years of experience as a software architect and project manager in the software industry before his involvement with academia. His research expertise includes robotics, human-robot interaction, wireless sensor networks and software engineering. Dr. Dragone was involved in a number of EU projects investigating Internet of Things (IoT) and intelligent control solutions for smart environments, before initiating and leading the EU project RUBICON.

SICSA DVF Professor Philip Scott "An introduction to many-valued logics and effect algebras" (18 October, 2016)

Speaker: SICSA Event
SICSA DVF Professor Philip Scott from the University of Ottawa will be giving a talk entitled "An introduction to many-valued logics and effect algebras" on Tuesday 18 October at the Informatics Forum, University of Edinburgh.

Abstract: The algebras of many-valued Lukasiewicz logics (MV algebras) as well as the theory of quantum measurement (Effect algebras) have undergone considerable development in the 1980s and 1990s; they now constitute important research fields, with connections to several contemporary areas of mathematics, logic, and theoretical computer science. Both subjects have recently attracted considerable interest among groups of researchers in categorical logic and foundations of quantum computing. I will give a leisurely introduction to MV algebras (and their associated logics), as well as the more general world of effect algebras. If time permits, we will also illustrate some new results (with Mark Lawson, Heriot-Watt) on coordinatization of some concrete MV-algebras using inverse semigroup theory.

Bio: P. J. Scott is a mathematical logician working in category theory, proof theory, and theoretical computer science. In 1986 he published the book Introduction to Higher Order Categorical Logic (Cambridge University Press) with J. Lambek, which has been highly influential both in the development of categorical logic, and in its applications in theoretical computer science. In particular, the book establishes the close connections between various type theories, categories, and logics. It thus motivated later works on using category theory and related machinery in programming languages, as well as operational and denotational semantics. Professor Scott is currently Associate Editor of the Cambridge journal Mathematical Structures in Computer Science, and a Coordinating Editor of the North-Holland journal Annals of Pure and Applied Logic. In Canada, his research funding comes from NSERC (Natural Sciences and Engineering Research Council of Canada). Since the early 1990s, Prof. Scott has published foundational papers in areas relating categorical logic to theoretical computer science.

Chris Heunen is hosting Professor Philip Scott's visit.

Data Plane Programmability for Software Defined Networks (18 October, 2016)

Speaker: Simon Jouet

OpenFlow has established itself as the de facto standard for Software Defined Networking (SDN) by separating the network's control and data planes. In this approach a central controller can alter the match-action pipeline of the individual switches using only a limited set of fields and actions. This inherent rigidity prevents the rapid introduction of new data plane functionality that would enable the design of new forwarding logic and other packet processing such as custom routing, telemetry, debugging, security, and quality of service.

In this talk I will present BPFabric, a platform-, protocol- and language-independent architecture to centrally program and monitor the data plane. It will cover the design of the switches and how they differ from "legacy" or OpenFlow switches, and the design of a control API to orchestrate the infrastructure.
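The fixed match-action pattern that the talk contrasts with BPFabric can be illustrated in a few lines. The field names and actions below are hypothetical stand-ins, not an actual OpenFlow or BPFabric API; the sketch only shows why a fixed field set is rigid: any functionality not expressible as these matches must fall back to the controller.

```python
# A toy match-action table in first-match-wins order.
# Field names ("ip_dst", "tcp_dport") are illustrative only.
table = [
    ({"ip_dst": "10.0.0.2"}, ("forward", 2)),   # forward to port 2
    ({"tcp_dport": 22},      ("drop", None)),   # drop SSH traffic
]
DEFAULT = ("send_to_controller", None)

def pipeline(packet):
    """Apply the first rule whose match fields all equal the packet's."""
    for match, action in table:
        if all(packet.get(f) == v for f, v in match.items()):
            return action
    return DEFAULT

print(pipeline({"ip_dst": "10.0.0.2", "tcp_dport": 80}))  # ('forward', 2)
```

Anything outside the fixed field vocabulary (e.g. custom telemetry over payload bytes) cannot be expressed here, which is the rigidity BPFabric's programmable data plane removes.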


FATA Seminar - What we did in the summer (18 October, 2016)

Speaker: All FATA members

The problem of quantification in Information Retrieval and on Social Networks. (17 October, 2016)

Speaker: Gianni Amati

There is growing interest in knowing how fast information spreads on social networks, how many unique users are participating in an event, and the leading opinion polarity in a stream. Quantifying distinct elements in an information flow is thus becoming a crucial problem in many real-time information retrieval or streaming applications. We discuss the state of the art of quantification and show that many problems can be interpreted within a common framework. We introduce a new probabilistic framework for quantification and show, as examples, how to count opinions in a stream and how to compute the degrees of separation of a network.
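One classical instance of probabilistic distinct-counting, in the Flajolet–Martin tradition, can be sketched as follows. This is background illustration of the general idea, not the speaker's framework: each hash's count of trailing zero bits estimates the log of the number of distinct items, and averaging many sketches reduces the variance.

```python
import hashlib

def trailing_zeros(n):
    """Position of the least-significant 1-bit (32 for n == 0)."""
    return (n & -n).bit_length() - 1 if n else 32

def fm_estimate(stream, sketches=64):
    """Flajolet-Martin style estimate of the number of distinct items."""
    maxz = [0] * sketches
    for item in stream:
        for i in range(sketches):
            h = hashlib.blake2b(f"{i}:{item}".encode(), digest_size=4)
            z = trailing_zeros(int.from_bytes(h.digest(), "big"))
            maxz[i] = max(maxz[i], z)
    # 2^E[R] ~ 0.77351 * n, so divide out the correction factor.
    return (2 ** (sum(maxz) / sketches)) / 0.77351

# 5000 events from exactly 1000 distinct users.
users = [f"user{i % 1000}" for i in range(5000)]
est = fm_estimate(users)
print(est)
```

The sketch uses constant memory per hash regardless of stream length, which is what makes this family of estimators practical for real-time social streams.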

Turbocharging Rack-Scale In-Memory Computing with Scale-Out NUMA (12 October, 2016)

Speaker: Dr Boris Grot

Web-scale online services mandate fast access to massive quantities of
data. In practice, this is accomplished by sharding the datasets across a
pool of servers within a datacenter and keeping each shard within a
server's main memory to avoid long-latency disk I/O. Accesses to non-local
shards take place over the datacenter network, incurring communication
delays that are 20-1000x greater than accesses to local memory. In this
talk, I will introduce Scale-Out NUMA -- a rack-scale architecture with an
RDMA-inspired programming model that eliminates chief latency overheads of
existing networking technologies and reduces the remote memory access
latency to a small factor of local DRAM. I will overview key features of
Scale-Out NUMA and will describe how it can bridge the semantic gap
between software and hardware through integrated support for atomic object


Boris Grot is a Lecturer in the School of Informatics at the University of
Edinburgh. His research seeks to address efficiency bottlenecks and
capability shortcomings of processing platforms for big data. His recent
accomplishments include an IEEE Micro Top Pick and a Google Faculty
Research Award. Grot received his PhD in Computer Science from The
University of Texas at Austin and spent two years as a post-doctoral
fellow at the Parallel Systems Architecture Lab at EPFL.

Full Section Meeting and Strategic Discussion (11 October, 2016)

Speaker: Phil Trinder

This session is essential for all members of the Systems Section. We will

  • Meet new PhD students in the section
  • Discuss progress since the Away Day
  • Discuss strategic plans, including:
    • A Centre for Doctoral Training (CDT) proposal
    • A high-profile Section Workshop as part of the School’s 60th anniversary celebrations

 Feel free to propose other topics by email to


Towards Reliable and Scalable Robot Communication (10 October, 2016)

Speaker: Phil Trinder

The Robot Operating System (ROS) is the de facto standard middleware for modern robots. However, communication between ROS nodes has scalability and reliability issues in practice. This talk reports an investigation into whether Erlang’s lightweight concurrency and reliability mechanisms have the potential to address these issues. The basis of the investigation is a pair of simple but typical robotic control applications, namely two face-trackers: one using ROS publish/subscribe messaging, and the other a bespoke Erlang communication framework.

The talk reports experiments that compare five key aspects of the ROS and Erlang face trackers. We find that Erlang communication scales better, supporting at least 3.5 times more active processes (700 processes) than its ROS-based counterpart (200 nodes) while consuming half of the memory. However, while both face tracking prototypes exhibit similar detection accuracy and transmission latencies with 10 or fewer workers, Erlang exhibits a continuous increase in the total time taken to process a frame as more agents are added, which we have identified is caused by function calls from Erlang processes to Python modules via ErlPort. A reliability study shows that while both ROS and Erlang restart failed computations, the Erlang processes restart 1000–1500 times faster than ROS nodes, reducing robot component downtime and mitigating the impact of the failures.

Joint work with Andreea Lutac, Natalia Chechina, and Gerardo Aragon-Camarasa


Analytics over Parallel Multi-view Data (03 October, 2016)

Speaker: Dr. Deepak Padmanabhan

Conventional unsupervised data analytics techniques have largely focused on processing datasets of single-type data, e.g., one of text, ECG, sensor readings or image data. With increasing digitization, it has become common to have data objects with representations that encompass different "kinds" of information. For example, the same disease condition may be identified through EEG or fMRI data. Thus, a dataset of EEG-fMRI pairs would be considered a parallel two-view dataset.  Datasets of text-image pairs (e.g., a description of a seashore, and an image of it) and text-text pairs (e.g., problem-solution text, or multi-language text from machine translation scenarios) are other common instances of multi-view data. The challenge in multi-view data analytics is to effectively leverage such parallel multi-view data to perform analytics tasks such as clustering, retrieval and anomaly detection. This talk will cover some emerging trends in processing multi-view parallel data, and different paradigms for the same. In addition to looking at the different schools of techniques, and some specific techniques from each school, this talk will also be used to present some possibilities for future work in this area.


Dr. Deepak Padmanabhan is a lecturer with the Centre for Data Sciences and Scalable Computing at Queen's University Belfast. He obtained his B.Tech in Comp. Sc. and Engg. from Cochin University (Kerala, India), followed by his M.Tech and PhD, all in computer science, from Indian Institute of Technology Madras. Prior to joining Queen's, he was a researcher at IBM Research - India. He has over 40 publications across top venues in Data Mining, NLP, Databases and Information Retrieval. He co-authored a book on Operators for Similarity Search, published by Springer in 2015. He is the author on ~15 patent applications to the USPTO, including 4 granted patents. He is a recipient of the INAE Young Engineer Award 2015, and is a Senior Member of the ACM and the IEEE. His research interests include Machine Learning, Data Mining, NLP, Databases and Information Retrieval. Email:  URL:

Deep Learning Journal Club - t-SNE dimensional reduction (29 September, 2016)

Speaker: John Williamson

For the IDI journal club, we'll be looking at t-SNE dimensional reduction, from the paper: L.J.P. van der Maaten and G.E. Hinton, "Visualizing High-Dimensional Data Using t-SNE", Journal of Machine Learning Research 9(Nov):2579–2605, 2008.

We'll be doing this one as an interactive notebook, with live code. The notebook is at
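As a taster for the notebook, the first step of t-SNE described in the paper converts pairwise distances into conditional probabilities with a Gaussian kernel. The sketch below fixes sigma rather than searching for a target perplexity (which the full algorithm does), so it is a simplified illustration only.

```python
import math

def p_cond(points, sigma=1.0):
    """Conditional affinities p(j|i) from squared Euclidean distances,
    Gaussian kernel, self-affinity set to zero (simplified: fixed sigma,
    no perplexity search)."""
    n = len(points)
    d2 = [[sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
           for j in range(n)] for i in range(n)]
    P = []
    for i in range(n):
        w = [0.0 if j == i else math.exp(-d2[i][j] / (2 * sigma ** 2))
             for j in range(n)]
        s = sum(w)
        P.append([x / s for x in w])
    return P

pts = [(0, 0), (0, 1), (5, 5)]
P = p_cond(pts)
print([round(x, 3) for x in P[0]])  # near neighbour dominates row 0
```

t-SNE then defines a heavy-tailed Student-t affinity in the low-dimensional map and minimises the KL divergence between the two distributions by gradient descent.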

Towards a Better Integration of Information Visualisation and Graph Mining (22 September, 2016)

Speaker: Daniel Archambault

As we enter the big data age, the fields of information visualisation and data mining need to work together to tackle problems at scale.  Both of these areas provide complementary techniques for big data.  Machine learning provides automatic methods that quickly summarise very large data sets which would otherwise be incomprehensible.  Information visualisation provides interfaces that leverage human creativity that can facilitate the discovery of unanticipated patterns.  This talk presents an overview of some of the work conducted in graph mining - an area of data mining that deals specifically with network data.  Subsequently, the talk considers synergies between these two areas in order to scale to larger data sets and examples of projects are presented.  We conclude with a discussion of how information visualisation and data mining can collaborate effectively together in the future.

Teach You a Haskell Course (20 September, 2016)

Speaker: Jeremy Singer

This week, our Functional Programming in Haskell course began. We have around 4000 learners signed up for this massive open online course. Wim and I have spent the past six months developing the learning materials, mostly adapted from the traditional Functional Programming 4 course.

In this talk, I will give an overview of the challenges involved in setting up and running an online course. In short, hard work but very rewarding!

Deep Learning Journal Club (15 September, 2016)


This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Chinese. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
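The autoregressive structure rests on causal dilated convolutions: each output sample depends only on past inputs, and doubling the dilation at each layer makes the receptive field grow exponentially with depth. A minimal pure-Python sketch of the idea (illustrative only, not DeepMind's implementation):

```python
def causal_dilated_conv(x, weights, dilation):
    """1-D causal convolution with dilation: output at time t depends only
    on inputs at t, t - dilation, t - 2*dilation, ... (zero-padded left)."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            idx = t - i * dilation
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out

# Stacking layers with dilations 1, 2, 4, ... doubles the receptive field
# at each layer, which is how each generated sample can be conditioned on
# thousands of past samples cheaply.
signal = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # unit impulse at t = 0
layer1 = causal_dilated_conv(signal, [0.5, 0.5], dilation=1)
layer2 = causal_dilated_conv(layer1, [0.5, 0.5], dilation=2)
```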

Systems: Climate is what you expect, the weather is what you get! (13 September, 2016)

Speaker: Professor Saji Hameed

This old saying among weather forecasters is correct. Yet it gives little insight into the workings of the climate system.  While weather can be understood and simulated as instabilities arising within the atmosphere, climate involves interactions and exchanges of properties among a wide variety of subsystems, including for example the atmosphere, the ocean and land subsystems. I will first discuss an example of these interactions at play, showcasing the El Nino phenomenon.  In the rest of the talk, I will endeavor to describe how software for climate models integrates experience and expertise across a wide range of disciplines, and the computational challenges faced by the climate modeling community in doing so.

Biography: Professor Hameed is a Senior Associate Professor at the University of Aizu in Fukushima, Japan. He was the Director of Science at APCC, Korea and has been appointed a Senior Visiting Scientist at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC).

He is credited with the discovery of an ocean-atmosphere coupled mode "Indian Ocean Dipole" which radically changed the prevailing paradigms. At APCC he pioneered an information technology based approach for generating and distributing climate information for societal benefit. He has also worked with APCC and its international partners to develop a climate prediction based approach to managing severe haze and forest fires in Southeast Asia, a severe environmental pollution issue in the area. He is also closely working with scientists at the National Institute of Advanced Science and Technology (Japan) to apply climate and weather science for renewable energy applications.

His current work includes investigating Super El Nino using computational modeling approaches, analyzing climate data using machine learning algorithms, tracking clouds and rain with low cost GPS chips, and continuing investigation into Indian Ocean Dipole that affects global climate.

Technology Enhanced Learning for Computer Science (09 September, 2016)

Speaker: Julie Williamson, Hans-Wolfgang Loidl, Niall Barr, Jeremy Singer

  • Raspberry Pi system development
  • App-based classroom voting
  • Arduino development
  • Interactive Haskell programming tutorials

Systems Seminar: Sorting Sheep from Goats - Automatically Clustering and Classifying Program Failures (07 September, 2016)

Speaker: Marc Roper

In recent years, software testing research has produced notable advances in the area of automated test data generation. It is now possible to take an arbitrary system and automatically generate volumes of high-quality test data. But the problem of checking the correctness or otherwise of the outputs (termed the "oracle problem") still remains.
This talk examines how machine learning techniques can be used to cluster and classify test outputs to separate failing and passing cases.
The feasibility of the approach is demonstrated and shown to have the potential to reduce by an order of magnitude the numbers of outputs that need to be examined following a test run.
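As a toy illustration of the idea (not the technique from the talk), clustering scalar "anomaly scores" for test outputs into two groups lets a tester inspect only the small suspicious cluster. A minimal 1-D two-means sketch, with deterministic min/max initialisation and invented scores:

```python
def cluster_two(values, iters=20):
    """Tiny 1-D 2-means: split scalar scores into two clusters."""
    centres = [min(values), max(values)]  # deterministic spread initialisation
    groups = ([], [])
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            nearest = 0 if abs(v - centres[0]) <= abs(v - centres[1]) else 1
            groups[nearest].append(v)
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres, groups

# Hypothetical scores: passing runs cluster low, failing runs cluster high,
# so only the small high-scoring cluster needs manual inspection.
scores = [0.1, 0.2, 0.15, 0.12, 3.1, 2.9]
centres, groups = cluster_two(scores)
```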

This is joint work carried out with Rafig Almaghairbe.

Biography: Dr Marc Roper is a Reader in the Department of Computer and Information Sciences at the University of Strathclyde. He has an extensive background in software engineering, particularly in understanding and addressing the problems associated with designing, testing, and evolving large software systems. Much of his research has incorporated significant empirical investigations: either based around controlled participant-based experiments or through the analysis of open-source systems and large-scale repositories. His more recent work has explored the application of search-based strategies and machine learning techniques to software engineering problems such as test data generation, the identification of security anomalies, and automatic fault detection. His current interests lie in the area of software analytics, in particular building models of software systems behaviour to automatically identify and locate faults.

Making our lives manageable!! Workshop to Reconsider Assessment/Feedback practices in the School (02 September, 2016)

Speaker: Quintin, Rose and Joe - but it's a workshop with everyone participating

We have used broadly the same assessment and feedback practices in the School for decades - and they were designed when we had maximum class sizes of 20-30.  We now have L3 / L4 / M level classes regularly above 100 and occasionally up to and over 140.

We know that coursework and exam marking are the most onerous tasks of teaching a course.  Students are also reporting in the NSS that turnaround times and levels of feedback are an issue.

The purpose of the workshop therefore is to explore methods for handling these numbers, hopefully with a view to finding light-touch improvements that would be implementable this session.

Even if you can't make the whole session, please do come along for some of it.  We'll be breaking for coffee at around 11.

Tea / coffee and hopefully home-baking from Rose on offer too.

Improvising minds: Improvisational interaction and cognitive engagement (29 August, 2016)

Speaker: Adam Linson

In this talk, I present my research on improvisation as a general form of adaptive expertise. My interdisciplinary approach takes music as a tractable domain for empirical studies, which I have used to ground theoretical insights from HCI, AI/robotics, psychology, and embodied cognitive science. I will discuss interconnected aspects of digital musical instrument (DMI) interface design, a musical robotic AI system, and a music psychology study of sensorimotor influences on perceptual ambiguity. I will also show how I integrate this work with an inference-based model of neural functioning, to underscore implications beyond music. On this basis, I indicate how studies of musical improvisation can shed light on a domain-general capacity: our flexible, context-sensitive responsiveness to rapidly-changing environmental conditions.


Recognizing manipulation actions through visual accelerometer tracking, relational histograms, and user adaptation (26 August, 2016)

Speaker: Sebastian Stein

Activity recognition research in computer vision and pervasive computing has made a remarkable trajectory from distinguishing full-body motion patterns to recognizing complex activities. Manipulation activities as occurring in food preparation are particularly challenging to recognize, as they involve many different objects, non-unique task orders and are subject to personal idiosyncrasies. Video data and data from embedded accelerometers provide complementary information, which motivates an investigation of effective methods for fusing these sensor modalities.

In this talk I present a method for multi-modal recognition of manipulation activities that combines accelerometer data and video at multiple stages of the recognition pipeline. A method for accelerometer tracking is introduced that provides, for each accelerometer-equipped object, a location estimate in the camera view by identifying a point trajectory that matches the accelerometer data well. It is argued that associating accelerometer data with locations in the video provides a key link for modelling interactions between accelerometer-equipped objects and other visual entities in the scene. Estimates of accelerometer locations and their visual displacements are used to extract two new types of features: (i) Reference Tracklet Statistics, which characterise statistical properties of an accelerometer's visual trajectory, and (ii) RETLETS, a feature representation that encodes relative motion, using an accelerometer's visual trajectory as a reference frame for dense tracklets. In comparison to a traditional sensor fusion approach, where features are extracted from each sensor type independently and concatenated for classification, it is shown that combining RETLETS and Reference Tracklet Statistics with those sensor-specific features performs considerably better. Specifically addressing scenarios in which a recognition system would be primarily used by a single person (e.g., cognitive situational support), this thesis investigates three methods for adapting activity models to a target user based on user-specific training data. Via randomized control trials it is shown that these methods indeed learn user idiosyncrasies.

The whole is greater than the sum of its parts: how semantic trajectories and recommendations may help tourism. (22 August, 2016)

Speaker: Dr. Chiara Renso

During the first part of this talk I will overview my recent activity in the field of mobility data mining with particular interest in the study of semantics in trajectory data and the experience with the SEEK Marie Curie project [1] recently concluded.  Then I will present two highlights of tourism recommendation works based on the idea of semantic trajectories: TripBuilder [2] and GroupFinder [3].  Tripbuilder is based on the analysis of enriched tourist trajectories extracted from Flickr photos to suggest itineraries constrained by a temporal budget and based on the travellers preferences.  The Groupfinder framework recommends a group of friends with whom to enjoy a visit to a place, balancing the friendship relations of the group members with the user individual interests in the destination location.

[2] Igo Ramalho Brilhante, José Antônio Fernandes de Macêdo, Franco Maria Nardini, Raffaele Perego, Chiara Renso. On planning sightseeing tours with TripBuilder. Inf. Process. Manage. 51(2): 1-15 (2015)
[3] Igo Ramalho Brilhante, José Antônio Fernandes de Macêdo, Franco Maria Nardini, Raffaele Perego, Chiara Renso. Group Finder: An Item-Driven Group Formation Framework. MDM 2016: 8-17


Dr. Chiara Renso holds M.Sc. and PhD degrees in Computer Science from the University of Pisa (1992, 1997).  She is a permanent researcher at the ISTI Institute of CNR, Italy.  Her research interests are related to spatio-temporal data mining, reasoning, data mining query languages, semantic data mining, and trajectory data mining.  She has been involved in several EU projects about mobility data mining.  She has been the scientific coordinator of SEEK, an FP7 Marie Curie project on semantic trajectories knowledge discovery.  She was also coordinator of a bilateral CNR-CNPQ Italy-Brazil project on mobility data mining with the Federal University of Ceará.  She is author of more than 90 peer-reviewed publications.  She is co-editor of the book "Mobility Data: Modelling, Management, and Understanding" published by Cambridge University Press in 2013; co-editor of a special issue of the Knowledge and Information Systems (KAIS) journal on context-aware data mining; and co-editor of a special issue of the International Journal of Knowledge and Systems Science (IJKSS) on modelling tools for extracting useful knowledge and decision making.  She has been co-chair of three editions of the Workshop on Semantic Aspects of Data Mining in conjunction with the IEEE ICDM conference.  She is a regular reviewer of ACM CIKM, ACM KDD, ACM SIGSPATIAL and many journals on these topics.

Logitech presentation (22 August, 2016)

Speaker: Logitech staff

Logitech are visiting the school on Monday. As part of the visit they are going to talk about the company and their research interests. If you want to come along, it will be at 11:00 in F121 and will last about 30-40 mins.


Systems Seminar: Machine Learning and Sensor Networks (17 August, 2016)

Speaker: Prof Neil Bergmann

Wireless sensor networks are becoming the eyes and ears (and other senses) of the Internet, allowing high temporal and spatial sampling of data from both the natural and the built environment.  The benefits of wireless operation often mean that such sensor nodes are battery powered, perhaps with some energy harvesting.  Usually, such sensors are limited in their temporal resolution by their limited energy.  "Dumb" sensors simply record and transmit raw transducer data streams for subsequent data analysis by powerful processors.  The majority of the energy used by such sensors is in the radio transmission of the raw data.  Communications energy can be saved if the data can be compressed or otherwise processed on a "smart" sensor node, and only compressed or summary information sent, but this requires energy-efficient on-node processing.  This seminar summarises results from a past project using machine-learning techniques for on-sensor processing, and discusses proposals for how this on-sensor processing can be done in a more energy efficient fashion using reconfigurable hardware (FPGAs).
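A back-of-the-envelope illustration of the energy argument: if a node radios only a summary computed on-node instead of the raw stream, the payload shrinks dramatically. The payload formats below are invented for illustration.

```python
import struct

def raw_payload(samples):
    """'Dumb' node: ship every 4-byte float sample over the radio."""
    return struct.pack(f"<{len(samples)}f", *samples)

def summary_payload(samples):
    """'Smart' node: on-node processing sends only mean, min, max, count."""
    n = len(samples)
    mean = sum(samples) / n
    return struct.pack("<fffI", mean, min(samples), max(samples), n)

# Hypothetical temperature trace: 1000 readings between radio wake-ups.
samples = [20.1 + 0.01 * i for i in range(1000)]
raw = raw_payload(samples)          # 4000 bytes on the air
summary = summary_payload(samples)  # 16 bytes on the air
```

Since radio transmission dominates the node's energy budget, the 250x smaller payload is where the savings come from, provided the on-node computation itself is cheap.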

Biography. Prof. Neil Bergmann has been the Chair of Embedded Systems in the School of Information Technology and Electrical Engineering at the University of Queensland, Brisbane, Australia since 2001.  He has Bachelors degrees in Electrical Engineering, Computer Science, and Arts from the University of Queensland, and a PhD in Computer Science from the University of Edinburgh in 1984.  His research interests are in computer systems, especially reconfigurable computing and wireless sensor networks.  He is on sabbatical leave, visiting the University of Edinburgh during August 2016.

Systems Seminar: A dynamic object model in Unix processes, and what it can do for us (17 August, 2016)

Speaker: Dr Stephen Kell

Today's diversity of languages, libraries and virtual machines ought to be a boon for programmers, but instead, they fragment our infrastructure, creating integration problems which limit the re-usability of code and the insight of tools. This is particularly evident in how language virtual machines (VMs) continue to exist within Unix processes rather than (as originally anticipated) replacing them. I'll motivate an alternative approach of embracing and extending Unix-like services, and illustrate this with two pieces of technical work: the libcrunch system for run-time type checking (to appear at OOPSLA this year), and an in-progress extension to support spatial memory safety using a technique reminiscent of safe virtual machines. Finally I'll highlight some other potential and/or in-progress applications of the underlying infrastructure, including a precise whole-process garbage collector and a "wide-spectrum" multi-language programming environment.

Ladies of Code Meetup (16 August, 2016)

Speaker: Angie Maguire

We're excited to bring Ladies of Code to Glasgow!

Join us for our launch event on August 16th and meet your chapter leaders and fellow coders.  Evening includes a series of lightning talks on anything to do with coding. Whether you're a code newbie or a superstar veteran, everyone is welcome. 



Skin Reading: Encoding Text in a 6-Channel Haptic Display (11 August, 2016)

Speaker: Granit Luzhnica

In this talk I will present a study we performed to investigate the communication of natural language messages using a wearable haptic display. Our research experiments investigated both the design of the haptic display and the methods for communication that use it. First, three wearable configurations are proposed, based on haptic perception fundamentals, and evaluated in the first study. To encode symbols, we use an overlapping spatiotemporal stimulation (OST) method that distributes stimuli spatially and temporally with a minimal gap. Second, we propose an encoding for the entire English alphabet and a training method for letters, words and phrases. A second study investigates communication accuracy. It puts four participants through five sessions, for an overall training time of approximately 5 hours per participant.
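To make the encoding problem concrete, here is a hypothetical codebook construction (not the encoding from the study): with 6 channels there are 2^6 - 1 = 63 non-empty activation patterns, comfortably covering the 26-letter alphabet, and assigning the smallest patterns first keeps most stimuli simple to perceive.

```python
from itertools import combinations
from string import ascii_lowercase

def build_codebook(channels=6):
    """Assign each letter a unique subset of vibration channels, in order
    of increasing pattern size (fewer simultaneous stimuli are easier to
    perceive). Illustrative only; the study's OST encoding also exploits
    timing, which this static codebook ignores."""
    patterns = []
    for size in range(1, channels + 1):
        patterns.extend(combinations(range(channels), size))
    return dict(zip(ascii_lowercase, patterns))

codebook = build_codebook()
```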

Journal club on Learning to Generate Images with Perceptual Similarity Metrics, Karl Ridgeway, Jake Snell, Brett D. Roads, Richard S. Zemel, Michael C. Mozer (11 August, 2016)

Speaker: journal club

We will be reading 

Learning to Generate Images with Perceptual Similarity Metrics, Karl Ridgeway, Jake Snell, Brett D. Roads, Richard S. Zemel, Michael C. Mozer

Human-Pokemon Interaction (and other challenges for designing mixed-reality entertainment) (28 July, 2016)

Speaker: Prof Steve Benford

It’s terrifically exciting to see the arrival of Pokémon Go as the first example of a mixed reality game to reach a mass audience. Maybe we are witnessing the birth of a new game format? As someone who has been involved in developing and studying mixed reality entertainment for over fifteen years now, it’s also unsurprising to see people getting hot and bothered about how such games impact on the public settings in which they are played: is Pokémon Go engaging, healthy and social on the one hand, or inappropriate, annoying and even dangerous on the other?

 My talk will draw on diverse examples of mixed reality entertainment – from artistic performances and games to museum visits and amusement rides (and occasionally on Pokémon Go too) to reveal the opportunities and challenges that arise when combining digital content with physical experience. In response, I will introduce an approach to creating engaging, coherent and appropriate mixed reality experiences based on designing different kinds of trajectory through hybrid structures of digital and physical content.

 Steve Benford is Professor of Collaborative Computing in the Mixed Reality Laboratory at the University of Nottingham where he also directs the ‘Horizon: My Life in Data’ Centre for Doctoral Training. He was previously an EPSRC Dream Fellow, Visiting Professor at the BBC and Visiting Researcher at Microsoft Research. He has received best paper awards at the ACM’s annual Computer-Human Interaction (CHI) conference in 2005, 2009, 2011 and 2012. He also won the 2003 Prix Ars Electronica for Interactive Art, the 2007 Nokia Mindtrek award for Innovative Applications of Ubiquitous Computing, and has received four BAFTA nominations. He was elected to the CHI Academy in 2012. His book Performing Mixed Reality was published by MIT Press in 2011.

GPG: How to Compute on a Manycore Processor (13 July, 2016)

Speaker: Bernard Goossens

Manycore processors are stagnating. They barely reach a hundred cores after ten years of existence; according to Moore's law, we should have more than a thousand. GPUs themselves have more than 5000 SP+DP cores. In the talk I will show that this mainly comes from a useless complexity of the memory (tens of MB, when a GPU uses only a few MB) and of the interconnect (a NoC or a ring, when in a GPU cores are simply abutted). I will inventory the hardware needed to compute in parallel. I will insist on the importance of determinism and the uselessness of memory, and I will point out a favoured communication direction, from the cause to the effect of a causality. I will describe the design of a parallelizing core, built to be combined with itself to form a 3000-core processor. The core design is simple because it embarks almost no memory and has connections only with two neighbours.

On the software side, I will present a new parallel programming model, based not on OS-thread parallelization but on hardware parallelization relying on a new "fork" machine instruction added to the ISA. I will present various patterns to parallelize imperative language programming structures: functions, for and while loops, reductions. I will show how such patterns can be used to parallelize classical C functions and how the created threads populate the available hardware thread slots in the processor cores.

The hardware does not use memory. Instead, each core uses a set of registers and functional units which are enough to compute scalars from scalars. In our parallel programming model, we avoid data structures: no arrays, no structures, no pointers, no lists. A parallel computation gets the elements of structured data from parallel inputs and puts the computed elements of structured data to parallel outputs. Inside the computation, only scalars are handled.

The proposed parallel programming model is deterministic. The semantics of a parallel execution are given by a referential sequential order. Hence, running the code sequentially or in parallel produces the same result. Testing and debugging parallel programs is as easy as testing and debugging a sequential run.

Our parallel programming model has strong connections with the functional programming paradigm, through the composition of side-effect-free functions.
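The determinism claim (a parallel run produces the same result as the referential sequential order) can be illustrated with a fork-style tree reduction. Here Python threads stand in for the hardware "fork" instruction; this is a sketch of the idea, not the talk's C patterns:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def tree_reduce(op, xs):
    """Fork-style divide-and-conquer reduction: each level forks its two
    halves as parallel tasks. For an associative, side-effect-free op the
    result is identical to a sequential left fold, whatever the thread
    scheduling, which is the determinism property in miniature."""
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(tree_reduce, op, xs[:mid])
        right = pool.submit(tree_reduce, op, xs[mid:])
        return op(left.result(), right.result())

data = list(range(1, 9))
parallel_sum = tree_reduce(lambda a, b: a + b, data)
sequential_sum = reduce(lambda a, b: a + b, data)  # the referential order
```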

Casual Interaction for Smartwatch Feedback and Communication (01 July, 2016)

Speaker: Henning Pohl

Casual interaction strives to enable people to scale back their engagement with interactive systems, while retaining the level of control they desire. In this talk, we will take a look at two recent developments in casual interaction systems. The first project to be presented is an indirect visual feedback system for smartwatches. Embedding LEDs into the back of a watch case enabled us to create a form of feedback that is less disruptive than vibration feedback and blends in with the body. We investigated how well such subtle feedback works in an in-the-wild study, which we will take a closer look at in this talk. Where the first project is a more casual form of feedback, the second project tries to support a more casual form of communication: emoji. Over the last years these characters have become more and more popular, yet entering them can take quite some effort. We have developed a novel emoji keyboard around zooming interaction that makes it easier and faster to enter emoji.

Formal Analysis meets HCI: Probabilistic formal analysis of app usage to inform redesign (30 June, 2016)

Speaker: Muffy Calder (University of Glasgow)

Evaluation of how users engage with applications is part of software engineering, informing redesign and/or the design of future apps.  Good evaluation is based on good analysis, but users are difficult to analyse: they adopt different styles at different times!  What characterises the usage style of a user and of populations of users? How should we characterise the different styles? How do characterisations evolve, e.g. over an individual user trace and/or over a number of sessions spanning days and months? And how do characteristics of usage inform evaluation for redesign and future design?

I try to answer these questions in 30 minutes by outlining a formal, probabilistic approach based on discrete time Markov chains and stochastic temporal logic properties, applying it to a mobile app developed right here in Glasgow and used by tens of thousands of users worldwide.    A new version of the app, based on our analysis and evaluation, has just been deployed. This is experimental design and formal analysis in the wild.  You will be surprised how accessible I can make the formal material.
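The first modelling step, estimating a discrete-time Markov chain from a user's trace of app states, can be sketched as normalised bigram counts. The states and trace below are invented for illustration:

```python
from collections import Counter, defaultdict

def estimate_dtmc(trace):
    """Estimate a discrete-time Markov chain from one user's sequence of
    app states: transition probabilities are normalised bigram counts.
    Stochastic temporal logic properties would then be checked against
    this chain (e.g. with a probabilistic model checker)."""
    counts = defaultdict(Counter)
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

# Hypothetical session trace of screens visited in an app.
trace = ["home", "browse", "home", "browse", "detail", "home", "browse"]
chain = estimate_dtmc(trace)
```

Comparing chains estimated over different sessions is one simple way to see a user's style evolve over days and months.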

DYVERSE: from formal verification and control to neuroscience (24 June, 2016)

Speaker: Eva Navarro Lopez
School Seminar

DYVERSE represents a fresh perspective within the theory of hybrid systems and complex dynamical systems, and provides new insights into the modelling, analysis and control of systems with discontinuous transitions and complex behaviours. It was the first-funded project in the UK dedicated to the verification and control of nonlinear hybrid systems.

DYVERSE is a computational dynamical framework for hybrid systems. But, what is a hybrid system? The term itself is confusing and broad, and can be used for any system consisting of elements of a different nature. Recently, hybrid systems have evolved to cyber-physical systems - advanced networked embedded systems combining computation, control and communication.

From the dynamical viewpoint, a hybrid dynamical system integrates continuous-type and discrete- event dynamics. This definition can lead to a wide range of interpretations. Each interpretation has different goals and deals with specific types of problems, and reflects the background of the researchers behind it, whether they are computer scientists, control engineers or applied mathematicians.

The DYVERSE framework should be understood as a catalyst for formal computational tools, dynamical systems theory and control engineering methodologies. This gives rise to models, behaviour analysis tools, stability definitions, and control schemes which are novel, and entails a better formulation of complex systems, that is, systems that are changeable and unpredictable in behaviour.

In this talk, we will explore how all these theories can be combined and applied to a wide range of applications: from engineering to neuroscience. I will sum up some of my recent results in the new field that I have defined as hybrid systems neuroscience.

A retrospective on Haskell: Watt on earth were we thinking? (17 June, 2016)

Speaker: Professor Simon Peyton Jones FRS

Haskell had international parents, but Glasgow can lay strong claim to be its birthplace, as home to five of the fourteen members of the Haskell committee. One of the first implementations of Haskell, GHC, was built in Lilybank Gardens, and a quarter of a century later is still the world's leading Haskell compiler.

 In this talk I'll look back to those early days, and reflect on what makes Haskell so special and influential.

Perspectives on 'Crowdsourcing' (16 June, 2016)

Speaker: Helen Purchase

It is now commonplace to collect data from ‘the crowd’. This seminar will summarise discussions that took place during a recent Dagstuhl seminar entitled “Evaluation in the Crowd: Crowdsourcing and Human-Centred Experiments” – with contributions from psychology, sociology, information visualisation and technology researchers. Bring your favourite definition of ‘Crowdsourcing’ with you!

Top Tips for Research Excellence (15 June, 2016)

Speaker: Dr Tanita Casci, Head of Research Policy (Research Strategy and Innovation Office (RSIO))

Tanita will lead a session on practical tips for enhancing your research  and its impact: topics include pitching and presenting your research  outputs effectively, interacting with editors, and making you and your  research more visible. The content will partly draw on what we have  learned from the REF UoA reviews but also from knowledge of the sector.

This is an informal session that should give some quick 'top tips' that are of value for all but it is also a chance to find out what else RSIO can do for you. Your input will help inform the support you receive from  the research strategy office.

Articulatory directness and exploring audio-tactile maps (09 June, 2016)

Speaker: Alistair Edwards (University of York)

Articulatory directness is a property of interaction first described by Don Norman. The favourite examples are steering a car or scrolling a window. However, I suggest (with examples) that these are arbitrary, learned mappings.  This has become important in work which we have been doing on interactive audio-tactile maps for blind people. Unlike conventional tactile maps, ours can be rotated, maintaining an ego-centric frame of reference for the user. Early experiments suggest that this helps the user to build a more accurate internal representation of the real world - and that a steering wheel does not show articulatory directness.

FATA Seminar - Course Allocation with Prerequisites (07 June, 2016)

Speaker: David Manlove

We consider the problem of allocating applicants to courses, where each applicant has a subset of acceptable courses that she ranks in strict order of preference. Each applicant and course has a capacity, indicating the maximum number of courses and applicants they can be assigned to, respectively.  There are also prerequisite or corequisite constraints on courses (e.g., course x can only be taken if course y is also taken).  We consider two different ways of extending preferences over individual courses to preferences over bundles of courses.  Subject to each definition, we present algorithms and complexity results relating to the problem of computing a Pareto optimal matching of applicants to courses.  This is joint work with Katarina Cechlarova and Bettina Klaus, and will be presented at COMSOC 2016.
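To make the setting concrete, here is a greedy serial-dictatorship sketch with capacities and a prerequisite constraint. It is illustrative only; the algorithms in the talk for computing Pareto optimal matchings are more involved, and all names and numbers below are invented.

```python
def allocate(applicants, prefs, course_cap, app_cap, prereq):
    """Process applicants in order; give each their most-preferred courses
    that still have room, skipping any course whose prerequisite the
    applicant has not (yet) been given in this pass."""
    remaining = dict(course_cap)
    matching = {a: [] for a in applicants}
    for a in applicants:
        for c in prefs[a]:
            if len(matching[a]) == app_cap[a]:
                break  # applicant's capacity reached
            needed = prereq.get(c)
            if remaining.get(c, 0) > 0 and (needed is None or needed in matching[a]):
                matching[a].append(c)
                remaining[c] -= 1
    return matching

applicants = ["ann", "bob"]
prefs = {"ann": ["y", "x"], "bob": ["x", "y"]}  # strict preference lists
matching = allocate(applicants, prefs,
                    course_cap={"x": 1, "y": 2},   # max applicants per course
                    app_cap={"ann": 2, "bob": 2},  # max courses per applicant
                    prereq={"x": "y"})             # x can only be taken with y
```

Note that the outcome depends on the processing order, which is exactly why Pareto optimality, rather than a fixed greedy order, is the object of study.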

Predicting Ad Quality for Native Advertisements (06 June, 2016)

Speaker: Dr Ke Zhou

Native advertising is a specific form of online advertising where ads replicate the look-and-feel of their serving platform. In such context, providing a good user experience with the served ads is crucial to ensure long-term user engagement. 


In this talk, I will explore the notion of ad quality, namely the effectiveness of advertising from a user experience perspective. I will cover both the pre-click and the post-click perspective on predicting quality for native ads. With respect to pre-click ad quality, we design a learning framework to detect offensive native ads, showing that, as a measure of ad quality, users' offensive-ad feedback rates are more reliable than the commonly used click-through rate metrics. We translate a set of user preference criteria into a set of ad quality features that we extract from the ad text, image and advertiser, and then use them to train a model able to identify offensive ads. In terms of post-click quality, we use ad landing page dwell time as our proxy and exploit various ad landing page features to predict ad landing pages with high dwell time.

Making for Madagascar (02 June, 2016)

Speaker: Janet Read (University of Central Lancashire)

It is commonly touted in HCI that engagement with users is essential for great product design. Research tends to report only successes in participatory design with children, but in reality there is much to be concerned about, and no strong case has yet been made for children's engagement in these endeavours. This talk will situate the work of the ChiCI group in designing with children for children by exploring how two games were designed and built for children in rural Madagascar. There is something in the talk for anyone doing research in HCI, and for anyone doing research with human participants.

FATA Seminar - The Challenge of Typed Expressiveness in Concurrency (31 May, 2016)

Speaker: Jorge A. Perez

By classifying behaviors (rather than data values), behavioral types abstract structured protocols and enforce disciplined message-passing programs. Many different behavioral type theories have been proposed: they offer a rich landscape of models in which types delineate concurrency and communication. Unfortunately, studies of the formal relationships between these theories are still in their infancy. In this talk I will argue that clarifying the relative expressiveness of these type systems is a pressing challenge for formal techniques in distributed systems. I will briefly overview work that addresses this issue and discuss promising research avenues. (This talk is based on a short position paper to be presented at FORTE'16.)

Multiscale Dataflow Computing: The Maxeler Approach (25 May, 2016)

Speaker: Dr. Tobias Becker

Dataflow computing is a novel way of performing computation that is fundamentally different from instruction-driven computing with conventional CPUs and GPUs. Dataflow computers focus on optimising the movement of data in an application and exploit massive parallelism among thousands of tiny ‘dataflow cores’. Maxeler Technologies is pioneering this efficient approach to computing through its Multiscale Dataflow technology. Application experts in science, engineering or finance can develop and customise their algorithms in a high-level language and optimise their application across several layers of abstraction, targeting Maxeler’s highly efficient dataflow computers. Compared to industry-standard servers, improvements of one to two orders of magnitude in performance per unit of rack space and performance per Watt are typically achieved. This has been demonstrated in a range of application domains, including finance, geology, weather modelling, genomics, and data analytics.

FATA Seminar - Preference Elicitation in Matching Markets via Interviews: A Study of Offline Benchmarks (24 May, 2016)

Speaker: Baharak Rastegari

In this work we study two-sided matching markets in which the participants do not fully know their preferences but can learn them by conducting (costly) interviews. The main goal is then to find a good strategy for the interviews to be carried out, in order to minimize their use whilst leading to a stable matching. We argue that a meaningful comparison would be against an optimal offline algorithm that has access to agents' preference orderings under complete information. We show that, unless P=NP, no offline algorithm can compute the optimal interview strategy in polynomial time. If we are additionally aiming for a particular stable matching, we provide restricted settings under which efficient optimal offline algorithms exist. (This is joint work with Paul Goldberg and David Manlove.)

Implicit Feedback Signals in Query Formulation (20 May, 2016)

Speaker: Dr Milad Shokouhi

Query logs contain valuable information about how users interact with search engines. For instance, frequency and duration of clicks on search results have been widely used as implicit feedback for inferring search success. In this talk, we follow the footprints of users in the logs for inferring additional signals about search satisfaction. In particular, we show how user interactions during query formulation (and reformulation) can be interpreted as implicit feedback. We also demonstrate how these signals can be used to generate pseudo-labels for training auto-completion and voice recognition systems.

Tornado: Heterogeneous Programming in Java (18 May, 2016)

Speaker: James Clarkson

As the popularity of “big data” frameworks grows, a lot of effort is currently being exerted trying to improve the performance of JVM (Java Virtual Machine) based languages, such as Java and Scala. One way of doing this is to develop mechanisms that allow these languages to make use of hardware accelerators, such as GPGPUs. As a result there have been a number of projects, such as Project Sumatra (OpenJDK) [4], Rootbeer [5] and APARAPI (AMD) [6], that have attempted to support programming GPGPUs from Java. However, a lot of this prior art focuses only on accelerating simple workloads or on providing an interface into another programming language, making it difficult to use for creating real-world applications. In this talk I will discuss how we have developed a framework that moves beyond the prior art and allows developers to accelerate complex Java applications.

Our Java-based framework, Tornado, provides developers with a simple task-based programming model which allows the assignment of tasks to devices. Typically, tasks are assigned to execute on a GPGPU, but they could equally be assigned to a multi-core processor or even an FPGA. Moreover, the design of Tornado means that this assignment can be changed dynamically, so applications are not artificially restricted to using a specific class of device. Additionally, the Tornado API has been designed to avoid the need to re-engineer applications to utilise the framework: to do this we have had to support a wider range of language features than the prior art, such as exceptions, inheritance and objects.

Finally, we will share our experiences porting a complex CUDA C++ application
into pure Java.

Bio: James Clarkson is a 3rd year PhD student at the University of Manchester in the UK. He is a member of the Advanced Processor Technologies (APT) group, working under the supervision of Mikel Lujan. His research interests are programming languages and programming exotic hardware architectures (in Java!). He is actively contributing to the EPSRC funded AnyScale [1] and PAMELA [2] projects, and has previously contributed to the EU funded Mont Blanc project [3].

[1] AnyScale project -
[2] PAMELA project -
[3] Mont Blanc project -
[4] Project Sumatra -
[5] Rootbeer -

FATA Seminar - The Tinder Stable Marriage Problem (17 May, 2016)

Speaker: Josue Ortega

I study the many-to-many matching problem induced by the popular dating app Tinder. I provide empirical evidence suggesting that its matching procedure is unstable, and show, in a simplified setting, that its assignments can be setwise and even pairwise blocked. Tinder's mechanism can be improved by a known two-step procedure which guarantees setwise stability whenever achievable, i.e. when agents' preferences are strongly substitutable, a restriction compatible with men's preferences in online dating. I establish a link between strong substitutability and the maximin property that connects two areas of the literature that have remained unrelated, and that can be merged to obtain a useful result: deciding who proposes first generates a trade-off between the optimality of the matching and its simplicity and privacy.

Building Trust in the Internet of Things (17 May, 2016)

Speaker: Prof. Derek McAuley
Derek McAuley is Professor of Digital Economy in the School of Computer Science and Director of Horizon at the University of Nottingham, UK. After a PhD and lectureship in the Computer Laboratory at the University of Cambridge he moved to a chair in Depa

For years people have been putting weird things on the Internet and considering the technologies required and the possibilities of giving virtual existence to everyday mundane objects - we put a coffee pot online in the late 1980s! The reduction in cost of communications, computing and storage means that these possibilities are now starting to encroach into the everyday lives of the public and consumers, and the phrase “Internet of Things” (IoT) has been coined. This term mostly denotes a sociological phenomenon - the increasing awareness, not least amongst industry analysts, of the practice of putting “things” online. So, having been working on the technology for so long, “what could possibly go wrong?”. The talk will look at how the modern context, including smart phones, location awareness, social media, and pervasive personal data mining, is radically altering how we need to perceive future deployments of IoT. It requires us to re-evaluate the technologies, and how they are developed, in order to have any chance of building into these IoT systems the trust that is needed to make the IoT a success, and to deliver the many associated societal benefits the proponents have promised.

Managing Advisory Parallelism in a Distributed Graph Reducer using Spark Colocation (11 May, 2016)

Speaker: Evgenij Belikov

Work stealing is an adaptive decentralised load balancing mechanism used in many parallel language run-time systems. A key work stealing decision is the choice of the spark, which represents potential parallelism, to donate in response to a received request for work. Commonly, the oldest spark is donated, as it often corresponds to a sub-computation with relatively large granularity. This talk explores the effect on execution time, stealing success, and fragmentation of the virtual shared heap of colocating sparks based on the maximum prefix match between encodings of their positions within the computation, rather than using their age. A comparison is made to the default mechanism used in the GUM run-time system for Glasgow parallel Haskell.
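
The colocation idea can be caricatured in a few lines. This is an invented sketch (the position-encoding scheme and names below are not taken from GUM): sparks carry a position encoding, and on a steal request we donate the spark whose encoding shares the longest prefix with an anchor path, rather than the oldest spark.

```python
def common_prefix_len(a, b):
    """Length of the longest common prefix of two position encodings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def donate(spark_pool, anchor):
    """Colocation policy: donate the spark best matching `anchor`."""
    return max(spark_pool, key=lambda path: common_prefix_len(path, anchor))

def donate_oldest(spark_pool):
    """Default policy: donate the oldest spark (front of the pool)."""
    return spark_pool[0]

pool = [(0,), (0, 1), (0, 1, 2), (1, 0)]
assert donate(pool, (0, 1, 5)) == (0, 1)   # shares the prefix (0, 1)
assert donate_oldest(pool) == (0,)
```

The two policies pick different sparks from the same pool, which is exactly the trade-off the talk evaluates.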

FATA Seminar - Autonomous Agent Behaviour Modelled in PRISM (10 May, 2016)

Speaker: Ruth Hoffmann

With the rising popularity of autonomous systems and their increased deployment within the public domain, ensuring the safety of these systems is crucial.

Although testing is a necessary part in the process of deploying such systems, simulation and formal verification are key tools, especially at the early stages of design. Simulation allows us to view the continuous dynamics and monitor behaviour of a system. On the other hand, formal verification of autonomous systems allows for a cheap, fast, and extensive way to check for safety and correct functionality of autonomous systems, that is not possible using simulations alone.

In this talk I will demonstrate a simulation and the corresponding probabilistic model of an unmanned aerial vehicle (UAV) in an exemplary autonomous scenario and present results of the discrete models. Further, I discuss a possible formal framework to abstract autonomous systems using simulations to inform probabilistic models.

How fast? How furious? Real optimisations for real people (04 May, 2016)

Speaker: Dr. Pavlos Petoumenos

Optimisation techniques depend on representative workloads for their training, fine tuning, and evaluation. Traditional research areas have workloads which at least try to be representative. Research on smartphones, on the other hand, utilises completely inappropriate benchmarks. Typical work in the field fails to deal with interactivity, user perception, different classes of mobile applications, significant variation in usage patterns amongst users, and even reproducibility. With the majority of our computing experience revolving around these devices, it's imperative that we find ways to test our optimisations on them properly. In my talk, I will present two novel workload creation techniques targeted at mobile devices. Both of them are lightweight, easy to use, and capture how real users interact with their devices. They do not require any instrumentation, knowledge of the application's internals, or access to the application source code. We use them to drive personalised iterative compilation with almost no negative effect on the user experience, and to show that the available Android frequency governors leave substantial room for improvement, with up to 27% lower energy consumption for the same user experience.

FATA Seminar - Using Session Types for Pop3: A case study (03 May, 2016)

Speaker: Florian Weber

Session types are used to describe communication protocols. We use a case study to show the applicability of session types in the real world and how we bridge the gap between the abstract message format used in a session type protocol and the concrete message format used by a naturally occurring server. The case study uses POP3, a standard protocol used to retrieve messages from an email server, as an example, presenting an introduction on how to describe standard internet protocols as session types. We use the protocol description language Scribble, which is based on multiparty session types, to express POP3 in the form of a global protocol from which the local protocols for the client and the server are derived. We use a tool called StMungo to translate the Scribble local protocol into a typestate specification, written in Java, which defines the order in which the communication methods are called. We use Mungo, a Java typestate checking tool and compiler, to show that the implementation follows the typestate specification: Mungo checks the correctness of the sequence of method calls. The case study highlights several points of interest for future work on Scribble and the translation process. Furthermore, it provides insight into the relationship between Scribble and real-world protocol implementations, suggesting the use of session types for protocol documentation.

Emotion Recognition On the Move (28 April, 2016)

Speaker: Juan Ye (University of St Andrews)

Past research in pervasive computing has focused on location-, context-, activity-, and behaviour-awareness; that is, systems provide personalised services to users, adapting to their current locations, environmental context, tasks at hand, and ongoing activities. With the rise of new types of applications, emotion recognition is becoming more and more desirable; for example, from adjusting the response or interaction of a system to the emotional states of its users in the HCI community, to detecting early symptoms of depression in the health domain, and to better understanding the environmental impact on users’ mood at the wider scale of city engineering. However, recognising different emotional types is a non-trivial task, in terms of both computational complexity and user study design; that is, how we inspire and capture natural expressions of users in real-world tasks. In this talk, I will introduce two emotion recognition systems recently developed by our senior honours students in St Andrews, and share our experiences of conducting real-world user studies. One system is a smartphone-based application that unobtrusively and continuously monitors and collects users’ acceleration data and infers their emotional states, such as neutral, happy, sad, angry, and scared. The other system infers the social cues of a conversation (such as positive and negative emotions, agreement and disagreement) through streaming video captured by imaging glasses.

Automatic Detection of Parallelism in Scientific Fortran using Algorithmic Skeletons and OpenCL (27 April, 2016)

Speaker: Gavin Davidson

General purpose graphics hardware represents a powerful and affordable tool for climate scientists who look to model the vastly complex systems that make up the Earth's atmosphere. However, leveraging these highly parallel devices in this field requires an understanding of tools like OpenCL and effective strategies for parallelisation. We present a source-to-source compiler that automatically detects parallelism in scientific Fortran code and produces programs parallelised using OpenCL. Our approach requires no directives or extra information from the user, and output code is composed with the use of algorithmic skeletons. We evaluate our work using a version of the large eddy simulator along with synthetic test cases, and show that this approach yields performance increases with minimal effort.

FATA Seminar - Session Types: Achievements and Challenges (26 April, 2016)

Speaker: Simon Gay

Session types are type-theoretic specifications of communication protocols, introduced by Kohei Honda and collaborators in the mid-1990s. They define the type and sequence of messages exchanged via a communication medium, and allow type-checking techniques to be used to verify protocol implementations. Whereas data types codify the static structure of information in a computer program, session types codify the dynamic structure of communication in a software system. The classic slogan "algorithms + data structures = programs" can be generalised to "programs + communication structures = systems", and the full range of type-checking technology can be generalised too.

In the simplest form, a session type specifies a straightforward sequence of messages. The type !int.?bool.end describes how to run a protocol on an endpoint of a communication channel: first send an integer, then receive a boolean, then terminate. The other endpoint has the dual type ?int.!bool.end. More complex protocols include choice and repetition. For example, the recursive type S defined by S = &< start: ?int.!bool.S, stop: end > describes a protocol that offers a choice between start and stop, each with its own continuation protocol. The basic idea for protocol verification is to match the structure of a session type with the use of communication operations in a program.
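
As a toy illustration of these ideas (a dynamic trace checker in Python, not a static type system, and not tied to any session-types implementation), the type !int.?bool.end and its dual can be modelled as:

```python
# A session type is a list of (direction, payload type); end is the empty list.
SEND, RECV = "!", "?"

def dual(session):
    """Swap sends and receives to obtain the other endpoint's view."""
    flip = {SEND: RECV, RECV: SEND}
    return [(flip[d], t) for d, t in session]

def check_trace(session, trace):
    """Check a trace of (direction, value) messages against a session type."""
    if len(trace) != len(session):
        return False
    return all(d == expected_d and isinstance(v, expected_t)
               for (expected_d, expected_t), (d, v) in zip(session, trace))

# !int.?bool.end from the client's viewpoint
s = [(SEND, int), (RECV, bool)]
assert dual(s) == [(RECV, int), (SEND, bool)]        # ?int.!bool.end
assert check_trace(s, [(SEND, 5), (RECV, True)])     # well-typed run
assert not check_trace(s, [(SEND, 5), (SEND, True)]) # sends where it should receive
```

A static checker would reject the bad trace at compile time rather than at run time; that is precisely what session type systems provide.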

The twenty years since the introduction of session types have seen a dramatic growth in research activity. There is now a substantial community, and most programming language conferences regularly include papers on session types.

The seminar will introduce session types, survey the main themes and achievements of the field, and suggest directions for future work that are likely to be of interest to researchers from the wider area of programming language design and type theory.

Efficient Web Search Diversification via Approximate Graph Coverage (25 April, 2016)

Speaker: Carsten Eickhoff

In the case of general or ambiguous Web search queries, retrieval systems rely on result set diversification techniques in order to ensure an adequate coverage of underlying topics such that the average user will find at least one of the returned documents useful. Previous attempts at result set diversification employed computationally expensive analyses of document content and query intent. In this paper, we instead rely on the inherent structure of the Web graph. Drawing from the locally dense distribution of similar topics across the hyperlink graph, we cast the diversification problem as optimizing coverage of the Web graph. In order to reduce the computational burden, we rely on modern sketching techniques to obtain highly efficient yet accurate approximate solutions. Our experiments on a snapshot of Wikipedia as well as the ClueWeb'12 dataset show ranking performance and execution times competitive with the state of the art at dramatically reduced memory requirements.
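
The coverage view of diversification can be sketched as exact greedy max-coverage on toy data (this does not reproduce the paper's sketching-based approximation; the document ids and node sets below are invented):

```python
def diversify(candidates, k):
    """Greedily pick k documents maximising marginal coverage of graph nodes.

    `candidates` maps a document id to the set of Web-graph nodes it covers.
    """
    chosen, covered = [], set()
    remaining = dict(candidates)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda d: len(remaining[d] - covered))
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered

docs = {"d1": {1, 2, 3}, "d2": {3, 4}, "d3": {5}, "d4": {1, 2}}
picked, covered = diversify(docs, 2)
assert picked == ["d1", "d2"] and covered == {1, 2, 3, 4}
```

Note that "d4" is never picked: it adds no new coverage once "d1" is chosen, which is exactly how redundant near-duplicate results get demoted.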

Why don't SMEs take Cyber Security seriously? (21 April, 2016)

Speaker: Karen Renaud

I have been seconded to the Scottish Business Resilience Centre this year, trying to answer the question in the title. I will explain how I went about carrying out my study and what my findings were.

FATA Seminar - On the Relative Expressiveness of Higher-Order Session Processes (19 April, 2016)

Speaker: Dimitrios Kouzapas

By integrating constructs from the λ-calculus and the π-calculus, higher-order process calculi allow exchanged values to contain processes. This paper studies the relative expressiveness of HOπ, the higher-order π-calculus in which communications are governed by session types. Our main discovery is that HO, a subcalculus of HOπ which lacks name-passing and recursion, can serve as a new core calculus for session-typed higher-order concurrency. By exploring a new bisimulation for HO, we show that HO can encode HOπ fully abstractly (up to typed contextual congruence) more precisely and efficiently than the first-order session π-calculus (π). Overall, under session types, HOπ, HO, and π are equally expressive; but HOπ and HO are more tightly related than HOπ and π.

EulerSmooth: Smoothing of Euler Diagrams (14 April, 2016)

Speaker: Dan Archambault (Swansea University)

Drawing sets of elements and their intersections is important for many applications in the sciences and social sciences. In this talk, we present a method for improving the appearance of Euler diagrams. The approach works on any diagram drawn with closed curves represented as polygons, and is based on a force system derived from curve-shortening flow. We present the method and discuss its use on practical data sets.
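
A crude stand-in for the underlying idea is simple Laplacian smoothing of a closed polygon, which shrinks the curve much as curve-shortening flow does. This is only a sketch of the flavour of such force systems, not the EulerSmooth method itself:

```python
import math

def smooth(polygon, step=0.25, iterations=10):
    """Pull each vertex toward the midpoint of its neighbours on a closed polygon."""
    pts = [tuple(p) for p in polygon]
    n = len(pts)
    for _ in range(iterations):
        # Jacobi-style update: the comprehension reads the old `pts` throughout.
        pts = [((1 - step) * x + step * (pts[i - 1][0] + pts[(i + 1) % n][0]) / 2,
                (1 - step) * y + step * (pts[i - 1][1] + pts[(i + 1) % n][1]) / 2)
               for i, (x, y) in enumerate(pts)]
    return pts

def perimeter(pts):
    return sum(math.dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

spiky = [(0, 0), (2, 0), (2, 2), (1, 3), (0, 2)]
assert perimeter(smooth(spiky)) < perimeter(spiky)  # the flow shortens the curve
```

EulerSmooth must additionally keep the curves from collapsing or crossing, since set-membership regions of the diagram have to be preserved; plain Laplacian smoothing makes no such guarantee.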

Personal Tracking and Behaviour Change (07 April, 2016)

Speaker: John Rooksby

In this talk I’ll give a brief overview of the personal tracking applications we have been working on at Glasgow, and then describe our work-in-progress on the EuroFIT programme (this is a men’s health intervention being delivered via European football clubs). I’ll conclude with some considerations of the role of Human Computer Interaction in researching behaviour change and developing lifestyle interventions - particularly the role of innovation, user experience design and field trials.


Searching for better health: challenges and implications for IR (04 April, 2016)

Speaker: Dr. Guido Zuccon
A talk about why IR researchers should care about health search

In this talk I will discuss research problems and possible solutions related to helping the general public searching for health information online. I will show that although in the first instance this appears to be a domain-specific search task, research problems associated with this task have more general implications for IR and offer opportunities to develop advances that are applicable to the whole research field. In particular, in the talk I will focus on two aspects related to evaluation: (1) the inclusion of multiple dimensions of relevance in the evaluation of IR systems and (2) the modelling of query variations within the evaluation framework.

Blast Off: Performance, design, and HCI at the Mixed Reality Lab (17 March, 2016)

Speaker: Dr Jocelyn Spence (University of Nottingham)

The University of Nottingham's Mixed Reality Lab is renowned for its work at the forefront of experience design using artistic performance to drive public interactions with technology. However, there is far more going on at the MRL than its inspiring collaborations with Blast Theory. Jocelyn Spence has worked at the intersection of performance and HCI by focusing on more private, intimate groupings involving storytelling. She is now a visiting researcher at the MRL, leading and contributing to projects that take a similarly personal approach to public performance with digital technologies. This talk will cover her current and previous work in Performative Experience Design.

An overview of EPCC in 2016 (16 March, 2016)

Speaker: Dr. Michele Weiland

EPCC is the supercomputing centre at the University of Edinburgh and the home of ARCHER, the UK’s national HPC service. In this talk, I will give an overview of EPCC, which will include a whistlestop tour of our HPC and data analytics research, our involvement in teaching and service provision, as well as what facilities we have and how you can work with us.

FATA Seminar - BIG DATA subgraph query processing: a light filter with smart verification (15 March, 2016)

Speaker: Patrick Prosser

In subgraph isomorphism we have a target graph T and a pattern graph P and the question is “does T contain P?”. This problem is NP-complete. One of the problems in BIG DATA is, given a graph database (i.e. a collection of target graphs), does a given query (a pattern graph) exist in the database? There are therefore many target graphs and many pattern graphs. The current state of the art produces an index for each target and pattern graph, where an index captures a summary of the features in a graph. Prior to performing a subgraph isomorphism test between a pattern P and a target T, the indices are used to determine whether P is trivially not in T. If this test succeeds then T is not a candidate, and if the test fails then a call to a backtracking search algorithm is made for verification. This is the “filter-verification paradigm”. The BIG DATA approach is to spend considerable effort creating sophisticated indices to avoid having to resort to backtracking search, i.e. to attempt to answer the decision problem with polynomial effort. This only pays off when the majority of decision problems are unsatisfiable, and therefore problems must lie in the easy unsat region for filtering to work. But what happens if we take a different approach? What happens if we put little effort into creating indices and more effort into crafting smarter subgraph isomorphism algorithms? In this talk I will report on work in progress (with Iva Babukova, Ciaran McCreesh and Christine Solnon) on our new approach, “a light filter with smart verification”.
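
The filter-verification split can be caricatured as follows. This sketch uses a toy degree-sequence index and a brute-force verifier; the actual indices and search algorithms discussed in the talk are far more sophisticated.

```python
from collections import Counter
from itertools import permutations

def graph_index(edges):
    """Cheap summary of an undirected graph: vertex count, edge count, degrees."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return len(deg), len(edges), sorted(deg.values(), reverse=True)

def filter_pass(pattern, target):
    """Necessary conditions only: if the pattern exceeds the target on any
    feature, T trivially cannot contain P and no search is needed."""
    pv, pe, pdeg = graph_index(pattern)
    tv, te, tdeg = graph_index(target)
    return pv <= tv and pe <= te and all(p <= t for p, t in zip(pdeg, tdeg))

def verify(pattern, target):
    """Exhaustive backtracking-free check, fine for tiny graphs only."""
    pnodes = sorted({x for e in pattern for x in e})
    tnodes = sorted({x for e in target for x in e})
    tset = {frozenset(e) for e in target}
    return any(all(frozenset((m[u], m[v])) in tset for u, v in pattern)
               for perm in permutations(tnodes, len(pnodes))
               for m in [dict(zip(pnodes, perm))])

triangle = [(1, 2), (2, 3), (3, 1)]
path = [(1, 2), (2, 3), (3, 4)]
square = [(1, 2), (2, 3), (3, 4), (4, 1)]
assert not filter_pass(triangle, path)     # rejected cheaply: degrees don't fit
assert filter_pass(triangle, square) and not verify(triangle, square)
assert verify(path, square)                # a genuine sat instance
```

The second assertion is the interesting case: the light filter passes, so the decision falls to verification, which is where a smarter search algorithm pays off.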

So Many? A Brief Tour of Haskell DSLs for Parallel Programming (09 March, 2016)

Speaker: Dr. Patrick Maier

The proliferation of parallel compute devices (multicore CPUs, GPGPUs,
manycore coprocessors) has increased the demand for parallel programming
languages. More specifically, it has increased the demand for parallel
extensions of existing languages, and a popular extension route has been
embedding a parallel DSL into a host language.

Haskell is known as a playground for programming language development, and also offers good support for embedding DSLs (e.g. type class overloading, Template Haskell metaprogramming), so it is no surprise that there are several parallel Haskell DSLs.

How many parallel Haskell DSLs are there? It is hard to say given the sprawling
nature of the Haskell community. Counting publications in major Haskell venues,
on average there were two new parallel DSLs per year over the last 5 years. This
diversity is daunting for novices, particularly for those who were lured to
Haskell by the promise that a pure functional language would make parallel
programming simpler.

In this talk, I will argue that the domain of parallel programming is too
diverse for there to exist a single unifying DSL. I will survey several parallel
Haskell DSLs to illustrate how programming paradigms (data vs task parallelism)
and parallel architectures (multicore vs GPGPU, shared vs distributed memory)
shape their designs.

FATA Seminar - Kidney exchange simulation and IP models (08 March, 2016)

Speaker: James Trimble

Kidney exchange schemes have been employed successfully in many countries (including the UK since 2007) to increase the number of kidney transplants from living donors. It is an NP-hard cycle-packing problem to determine the largest possible set of transplants for a given pool of donors and patients.
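
The underlying cycle-packing problem can be stated concretely on a toy instance: brute-force enumeration and packing in Python, with invented data. The talk's compact integer-programming models scale far beyond a sketch like this.

```python
def short_cycles(arcs, max_len=3):
    """All simple directed cycles of length <= max_len, each as a frozenset."""
    nodes = sorted({u for u, v in arcs} | {v for u, v in arcs})
    succ = {n: [v for u, v in arcs if u == n] for n in nodes}
    cycles = set()

    def walk(start, path):
        for nxt in succ[path[-1]]:
            if nxt == start and len(path) >= 2:
                cycles.add(frozenset(path))
            elif nxt not in path and nxt > start and len(path) < max_len:
                walk(start, path + [nxt])

    for s in nodes:
        walk(s, [s])
    return sorted(cycles, key=sorted)

def best_packing(cycles):
    """Vertex-disjoint set of cycles maximising the number of transplants."""
    best = []

    def extend(i, chosen, used):
        nonlocal best
        if sum(map(len, chosen)) > sum(map(len, best)):
            best = list(chosen)
        for j in range(i, len(cycles)):
            if used.isdisjoint(cycles[j]):
                extend(j + 1, chosen + [cycles[j]], used | cycles[j])

    extend(0, [], set())
    return best

# Arc u -> v means pair u's donor is compatible with pair v's patient.
arcs = [(1, 2), (2, 1), (2, 3), (3, 1), (4, 5), (5, 4)]
packing = best_packing(short_cycles(arcs))
assert sum(map(len, packing)) == 5   # cycles {1,2,3} and {4,5}: 5 transplants
```

Note the greedy temptation: taking the two-cycle {1,2} first blocks the three-cycle and yields only 4 transplants, which is why the problem is solved to optimality with integer programming in practice.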

This talk will present two pieces of work we carried out recently - the first to develop a more scalable approach to optimising the kidney-exchange problem, and the second to help in policy development.

1. New compact integer-programming models for kidney exchange. I will briefly describe the new models we have developed, which frequently outperform the existing state of the art, and present some LP relaxation tightness results. (Joint work with David Manlove, John Dickerson, Benjamin Plaut and Tuomas Sandholm)

2. A simulation project which we carried out for NHS Blood and Transplant to estimate the effects of several policy options. (Joint work with David Manlove)

A Comparison of Primary and Secondary Relevance Judgements for Real-Life Topics (07 March, 2016)

Speaker: Dr Martin Halvey
In this talk I present a user study that examines in detail the differences between primary and secondary assessors on a set of "real-world" topics.

The notion of relevance is fundamental to the field of Information Retrieval. Within the field a generally accepted conception of relevance as inherently subjective has emerged, with an individual's assessment of relevance influenced by numerous contextual factors. In this talk I present a user study that examines in detail the differences between primary and secondary assessors on a set of "real-world" topics which were gathered specifically for the work. By gathering topics which are representative of the staff and students at a major university, at a particular point in time, we aim to explore differences between primary and secondary relevance judgements for real-life search tasks. Findings suggest that while secondary assessors may find the assessment task challenging in various ways (they generally possess less interest and knowledge in secondary topics and take longer to assess documents), agreement between primary and secondary assessors is high.  

Parallel Computing in the Cloud (04 March, 2016)

Speaker: Rizos Sakellariou
School Seminar

The traditional view of parallel computing has focused on minimizing
execution time.  As the complexity and the costs associated with modern
execution platforms and infrastructures grow, parallel execution time
cannot be viewed as a single objective to meet at any cost. Instead,
with such platforms consuming large amounts of energy, one needs to
assess improvements in execution time against other types of cost. Cloud
computing, despite sharing common origins with the traditional
high-performance computing world, has grown into much more than that,
following a resources-on-demand paradigm, where users can pay for what
they need. However, the underlying infrastructure suffers from
increasing complexity which is partly masked by having users pay for it.

In this respect, the talk will motivate the need to address efficiently
the issues surrounding the use of multiple and/or heterogeneous
resources offered by Cloud providers by capturing these issues as a
multi-objective optimization problem, which requires a good
understanding and appreciation of a number of different trade-offs. The
talk will make this argument by presenting extensive experience and
research on planning the parallel execution of scientific workflows on
the Cloud in a way that tries to strike a balance between execution time
and energy related costs.

FATA Seminar - When can an efficient decision algorithm be used to find and count witnesses? (01 March, 2016)

Speaker: Kitty Meeks

Suppose we have a universe of n elements, and we are interested in subsets of size k that have certain properties; an example would be cycles of length k in a graph on n vertices.  We may simply want to know whether a subset with the property exists ("Does the graph contain a cycle of length k?"), but in many applications we will want more information: this might involve *finding* such a subset (rather than just saying one exists), *counting* how many such subsets there are, or *enumerating* a list of all subsets with the desired property.

For a number of problems of this kind, the fastest known exact algorithm for the decision problem is non-constructive: the algorithm returns either "yes" or "no" without finding a subset with the desired property (if one exists).  This motivates the study of what further information we can learn about our instance using only this fast, non-constructive decision algorithm.

We will model the decision algorithm as a black-box subroutine, or an oracle which answers queries of the form, "Does the subset X of the universe contain at least one witness?"  This is the approach previously adopted by Bjorklund, Kaski, and Kowalik (MFCS 2014), who addressed the problem of using a decision oracle to find a single witness.  In this talk, I will discuss some of the situations in which we can go further, using the decision oracle to find or count (almost) all witnesses.
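The single-witness problem mentioned above can be illustrated with a toy sketch (a hedged illustration only, not the algorithm of Bjorklund, Kaski and Kowalik, which is far more query-efficient): given a monotone yes/no oracle, greedily discard elements whose removal still leaves a witness in the remaining set. All names here are illustrative.

```python
def find_witness(universe, oracle):
    """Find one witness using only a decision oracle that answers
    'does this subset of the universe contain at least one witness?'.
    Greedily discard elements while a witness survives."""
    current = set(universe)
    for x in list(current):
        if oracle(current - {x}):
            current.discard(x)
    return current

# Toy instance: witnesses are 3-element subsets summing to 10.
universe = {1, 2, 3, 4, 7}
oracle = lambda s: any(a + b + c == 10
                       for a in s for b in s for c in s
                       if a < b < c)
print(sorted(find_witness(universe, oracle)))  # [1, 2, 7]
```

This uses one oracle call per element of the universe; the talk concerns when one can go further and find or count (almost) all witnesses.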

Steps towards Profile-Based Web Site Search and Navigation (29 February, 2016)

Speaker: Prof. Udo Kruschwitz
Steps towards Profile-Based Web Site Search and Navigation

Web search in all its flavours has been the focus of research for decades with thousands of highly paid researchers competing for fame. Web site search has, however, attracted much less attention but is equally challenging. In fact, what makes site search (as well as intranet and enterprise search) even more interesting is that it shares some common problems with general Web search but also offers a good number of additional problems that need to be addressed in order to make search on a Web site no longer a waste of time. On previous visits to Glasgow I talked about turning the log files collected on a Web site into usable, adaptive data structures that can be used in search applications (and which we call user or cohort profiles). This time I will focus on applying these profiles to a navigation scenario and illustrate how the automatically acquired profiles provide a practical use case for combining natural language processing and information retrieval techniques (as that is what we really do at Essex).

FATA Seminar - Beyond Graphs -- Canonical Images in Permutation Groups (23 February, 2016)

Speaker: Christopher Jefferson

The famous Graph Isomorphism problem asks, given two graphs A and B, whether there is a bijection between the vertices of A and B which preserves edges. This problem is important both theoretically and practically.

Given a large set of graphs which we wish to separate into isomorphism classes, it is common to take a 'Canonical Image' of each graph (there is an isomorphism between two graphs if and only if they have the same canonical image). This is much more efficient than calculating an isomorphism between each pair of graphs.
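As a hedged illustration of the canonical-image idea (a brute-force sketch over all relabellings, nothing like the efficient algorithms the talk concerns), the canonical image of a small graph can be taken to be its lexicographically smallest relabelled edge list:

```python
from itertools import permutations

def canonical_image(n, edges):
    """Brute-force canonical image of a graph on vertices 0..n-1:
    the lexicographically smallest edge list over all vertex
    relabellings.  Exponential in n -- illustration only."""
    edge_set = {frozenset(e) for e in edges}
    best = None
    for perm in permutations(range(n)):
        relabelled = sorted(tuple(sorted((perm[u], perm[v])))
                            for u, v in edge_set)
        if best is None or relabelled < best:
            best = relabelled
    return best

# Two different labellings of a 4-cycle: same canonical image,
# hence the graphs are isomorphic.
g1 = [(0, 1), (1, 2), (2, 3), (3, 0)]
g2 = [(0, 2), (2, 1), (1, 3), (3, 0)]
print(canonical_image(4, g1) == canonical_image(4, g2))  # True
```

Comparing canonical images lets each graph be processed once, rather than running a pairwise isomorphism test across the whole set.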

The current algorithms for generating canonical images are limited in two ways -- they only operate on graphs, and they allow any permutation of the graph. This talk will show how we can use a similar technique to find canonical images and isomorphisms of a wide range of objects and actions, in any permutation group G.

Sentiment and Preference Guided Social Recommendation. (22 February, 2016)

Speaker: Yoke Yie Chen
In this talk, I will focus on two knowledge sources for product recommendation: product reviews and user purchase trails.

Social recommender systems harness knowledge from social media to generate recommendations. Previous works in social recommender systems use social knowledge such as social tags, social relationship (social network) and microblogs.  In this talk, I will focus on two knowledge sources for product recommendation: product reviews and user purchase trails. In particular, I will present how we exploit the sentiment expressed in product reviews and user preferences which are implicitly contained in user purchase trails as the basis for recommendation.

Recent Advances in Search Result Diversification for the Web and Social Media (17 February, 2016)

Speaker: Ismail Sengor Altingovde
I will focus on the web search result diversification problem and present our novel contributions in the field.

In this talk, I will start with a short potpourri of our most recent research, with emphasis on topics related to web search engines and the social Web. Then, I will focus on the web search result diversification problem and present our novel contributions in three directions. Firstly, I will present how the normalization of query relevance scores can boost the performance of the state-of-the-art explicit diversification strategies. Secondly, I will introduce a set of new explicit diversification strategies based on the score(-based) and rank(-based) aggregation methods. As a third contribution, I will present how query performance prediction (QPP) can be utilized to weight query aspects during diversification. Finally, I will discuss how these diversification methods perform in the context of Tweet search, and how we improve them using word embeddings.

GPG: Reasoning about Structured Parallel Processes using Types and Hylomorphisms (17 February, 2016)

Speaker: David Castro

The increasing importance of parallelism has motivated the creation of
better abstractions for writing parallel software, such as structured
parallelism using nested algorithmic skeletons. However, statically
choosing a combination of algorithmic skeletons that yield good
speedups when compared with a manually optimised solution remains a
difficult task. In order to do so, it is crucial to be able to
simultaneously reason about both the cost of, and semantic
equivalences between different parallel structures. In this talk, I
will present a new type-based mechanism for reasoning about these
properties, focusing on the introduction of parallelism to a
specification of the functional behaviour of a program. This mechanism
exploits well-known properties of a very general recursion pattern,
hylomorphisms, and a denotational semantics for structured parallel
processes described in terms of hylomorphisms. Using this approach, it
is possible to determine formally whether it is possible to introduce
a desired parallel structure to a program without altering its
functional behaviour, and to choose a structure that minimises cost
under some parametric cost model.

Practical and theoretical problems on the frontiers of multilingual natural language processing (16 February, 2016)

Speaker: Dr Adam Lopez
Multilingual natural language processing (NLP) has been enormously successful over the last decade, highlighted by offerings like Google translate. What is left to do?

Multilingual natural language processing (NLP) has been enormously successful over the last decade, highlighted by offerings like Google translate. What is left to do? I'll focus on two quite different, very basic problems that we don't yet know how to solve. The first is motivated by the development of new, massively-parallel hardware architectures like GPUs, which are especially tantalizing for computation-bound NLP problems, and may open up new possibilities for the application and scale of NLP. The problem is that classical NLP algorithms are inherently sequential, so harnessing the power of such processors requires completely rethinking the fundamentals of the field. The second is motivated by the fact that NLP systems often fail to correctly understand, translate, extract, or generate meaning. We're poised to make serious progress in this area using the reliable method of applying machine learning to large datasets—in this case, large quantities of natural language text annotated with explicit meaning representations, which take the form of directed acyclic graphs. The problem is that probabilities on graphs are surprisingly hard to define. I'll discuss work on both of these fronts.

FATA Seminar - Overview of the new Science of Sensor System Software programme grant (16 February, 2016)

Speaker: Muffy Calder

Sensor systems are everywhere: providing/facilitating information, real-time decision-making, actuation.
But environments are uncertain and dynamic, and sensors are noisy, decalibrate, may be misplaced, moved, compromised, and generally degrade over time, both individually and as a network.
How can we be assured that a sensor system does what we intend, in a range of dynamic environments?
How can we program and engineer systems in the face of such pervasive uncertainty that cannot be engineered away?
How can we make such a system “smarter”?
This programme brings together mathematics, computer science and engineering to tackle these questions. 

GPG: Inferring Program Transformations from Type Transformations for Partitioning of Ordered Sets into Overlapping Sections (10 February, 2016)

Speaker: Dr Wim Vanderbauwhede

In the distributed computation of finite difference grids (e.g. weather simulations), partitioning of arrays into overlapping sets is an essential step. The overlapping regions are commonly known as "halos". I present a formalism for order-preserving transformations of such halo-vector types into overlapping sections. I will show that this formalism allows us to automatically derive instances of dataflow-style programs consisting of opaque element-processing functions combined using higher-order functions.
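The halo idea can be sketched for a one-dimensional array (an illustrative toy only; the talk's formalism is type-based and order-preserving, which this function does not capture, and its names and parameters are assumptions):

```python
def halo_partition(xs, n_parts, halo):
    """Split a 1-D array into n_parts contiguous sections, each
    extended by `halo` extra elements on either side (clamped at
    the array boundaries).  Assumes len(xs) divides evenly."""
    size = len(xs) // n_parts
    parts = []
    for i in range(n_parts):
        lo = max(0, i * size - halo)
        hi = min(len(xs), (i + 1) * size + halo)
        parts.append(xs[lo:hi])
    return parts

print(halo_partition(list(range(8)), n_parts=2, halo=1))
# [[0, 1, 2, 3, 4], [3, 4, 5, 6, 7]]
```

Each section carries a copy of its neighbours' boundary elements, so a finite difference stencil can be applied to every interior point of a section without further communication.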

Information retrieval challenges in conducting systematic reviews (08 February, 2016)

Speaker: Julie Glanville
The presentation will also describe other areas where software such as text mining and machine learning have potential to contribute to the Systematic Review process

Systematic review (SR) is a research method that seeks to provide an assessment of the state of the research evidence on a specific question.  Systematic reviews (SRs) aim to be objective, transparent and replicable and seek to minimise bias by means of extensive searches.


The challenges of extensive searching will be summarised.  As software tools and internet interconnectivity increase, we are seeing increasing use of a range of tools within the SR process (not only for information retrieval).  This presentation will present some  of the tools we are currently using within the Cochrane SR community and UK SRs, and the challenges which remain for efficient information retrieval.  The presentation will also describe other areas where software such as text mining and machine learning have potential to contribute to the SR process.

ENDS Seminar: Performance and Scalability of Indexed Subgraph Query Processing Methods (03 February, 2016)

Speaker: Foteini Katsarou
ENDS seminar talk

Graphs have great capabilities for representing complex structures such as chemical compounds and social networks. A graph dataset is a collection of many graphs. A common problem addressed is the subgraph containment query problem, where, given a query graph, the graphs that contain the query are retrieved from the dataset. This process involves a subgraph isomorphism test. Considering that a direct isomorphism test against all the graphs in the dataset would take a significant amount of time, many index-based methods have been proposed to reduce the number of candidate graphs that have to undergo the isomorphism test. However, all the existing work currently focuses on comparisons against relatively small datasets in terms of number and size of graphs.
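The filter-then-verify pattern behind such indexes can be sketched with a deliberately simple (valid, if weak) filter: a graph can only contain the query if it has at least as many vertices of each label. Names and data here are illustrative, not from any of the six indexing techniques studied.

```python
from collections import Counter

def label_filter(dataset_labels, query_labels):
    """Filter step of subgraph query processing: keep only graphs
    with at least as many vertices of each label as the query.
    Survivors still require the expensive isomorphism verification."""
    q = Counter(query_labels)
    return [name for name, labels in dataset_labels.items()
            if not (q - Counter(labels))]  # q - c is empty iff q <= c

dataset = {
    "g1": ["C", "C", "O", "H", "H"],
    "g2": ["C", "N", "H"],
    "g3": ["C", "C", "N", "O"],
}
print(label_filter(dataset, ["C", "C", "O"]))  # ['g1', 'g3']
```

Graphs pruned by the filter are guaranteed non-matches; the false positive ratio measured in the study is the fraction of surviving candidates that the verification step then rejects.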

In this presentation we identify 5 fundamental aspects of subgraph query processing: the average number of nodes and density per graph, the number of distinct labels, the total number of graphs in the dataset, and the size of the query graphs in terms of number of edges. Using these fundamental aspects, we perform a systematic study and analyze the sensitivity of the various methods. Specifically, we use 6 well-established indexing techniques and extensively compare them against both real and synthetic datasets. We report on their indexing time and size, and on query processing performance in terms of time and false positive ratio. The aims of this study are (a) to derive conclusions about the algorithms' relative performance, (b) to stress-test all algorithms, deriving insights as to their scalability, and (c) to highlight how both performance and scalability depend on the above factors.

GPG: AnyScale Apps (03 February, 2016)

Speaker: Dr Jeremy Singer

Imagine developing an app that can run on any device scale, from a tiny wireless mote to a massive cloud datacenter. "Write once, scale anywhere" is the vision of our Anyscale Apps project funded by EPSRC. We are now half-way through the project, so this is a good time to take stock. Key concepts that have emerged so far include:

  • Task variants - interchangeable components with different non-functional characteristics but the same API - e.g. quick-and-imprecise face recognition versus more complex image processing.
  • Economic utility theory to select which variants will execute at a given time.
  • The need for realistic benchmarks and testbeds - currently we are evaluating a multi-scale Robot platform.

GPG: Heterogeneous Programming in C++ - Today and Tomorrow (27 January, 2016)

Speaker: Alastair Murray

Until recently the programming of heterogeneous accelerators, such as GPUs, has largely revolved around low-level programming models that were designed to match the capabilities of the hardware. As heterogeneous programming has become more mainstream the need for higher-level models, based on the requirements of the programmer, has become apparent. Many approaches have appeared, but those extending the C++ programming model have seen the most interest as they manage to provide both high-level abstractions and low-level control if required. This has led to a standardisation of key approaches and proposed attempts to provide a unified hardware description.

This talk will describe some of the standardised approaches for parallel and heterogeneous programming in C++ that are appearing today, such as SYCL for OpenCL and the C++17 Parallel STL. Then there will be a more speculative look at how C++ could look when programming forthcoming hardware, such as the Heterogeneous System Architecture (HSA), where the CPU and accelerators are more tightly connected.

Kinesthetic Communication of Emotions in Human-Computer Interaction (21 January, 2016)

Speaker: Yoren Gaffary (INRIA)

The communication of emotions uses several modalities of expression, such as facial expressions or touch. Even though touch is an effective vector of emotions, it remains little explored. This talk concerns the exploration of the kinesthetic expression and perception of emotions in a human-computer interaction setting. It discusses the kinesthetic expression of some semantically close and acted emotions, and its role in the perception of these emotions. Finally, this talk will go beyond acted emotions by exploring the expression and perception of a spontaneous state of stress. The results have multiple applications, such as better integration of the kinesthetic modality in virtual environments and in remote human-human communication.

Karnaugh Maps considered harmful: Teaching hardware to computer science students (20 January, 2016)

Speaker: John O'Donnell

The content of many courses in computer systems, especially digital circuit design and its relationship to computer architecture, has remained stagnant for decades.  One reason is that little attention is given to the aims of hardware courses in computer science curricula.  To improve the situation, we need to think about the needs of modern computer science students, match the content to the aims, and take advantage of research.

GPG: Improving Implicit Parallelism (20 January, 2016)

Speaker: Jose Calderon

Using static analysis techniques compilers for lazy functional languages can identify parts of a program that can be legitimately evaluated in parallel with the main thread of execution. These techniques can produce improvements in the runtime performance of a program, but are limited by the static analyses’ poor prediction of runtime performance. This talk outlines the development of a system that uses iterative compilation in addition to well-studied static analysis techniques. Our representation of the parallel programs allows us to use traditional 'blind' search techniques or profile-directed improvement. We compare the results of different search strategies and discuss the pitfalls and limitations of our technique. Overall, the use of iterative feedback allows us to achieve higher performance than through static analysis alone.

FATA Seminar - Symmetry breaking for Ramsey colouring (19 January, 2016)

Speaker: Alice Miller

Ramsey numbers are extremal graph problems and relate to colourings of complete graphs that contain no monochromatic cliques of certain sizes. The Ramsey number R(r1, …, rk) is the smallest integer n for which any k-coloured complete graph on n vertices must have a clique of size ri in colour i, for some 1<=i<=k.
The number R(4,3,3) is often presented as the unknown Ramsey number with the best chances of being found *soon*. Yet, its precise value has remained unknown for almost 50 years (although it was known that the answer was either 30 or 31). In this talk I will discuss some symmetry breaking techniques and other nifty reductions that were used in a recent paper I was involved in. These techniques allowed us to cut down the search space in order to solve this mystery once and for all using a SAT solver. The talk will contain lots of pictures and no proofs!
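As a toy illustration of the exhaustive search involved (nothing like the symmetry-broken SAT encoding needed for R(4,3,3), whose search space is astronomically larger), one can verify the classical fact R(3,3) = 6 by brute force over 2-colourings of complete graphs:

```python
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    # colouring maps each edge (i, j) with i < j to colour 0 or 1
    return any(
        colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def triangle_free_colouring_exists(n):
    """Is there a 2-colouring of K_n with no monochromatic triangle?"""
    edges = list(combinations(range(n), 2))
    return any(
        not has_mono_triangle(n, dict(zip(edges, colours)))
        for colours in product((0, 1), repeat=len(edges))
    )

print(triangle_free_colouring_exists(5))  # True: K_5 can avoid it
print(triangle_free_colouring_exists(6))  # False, so R(3,3) = 6
```

Already at n = 6 this enumerates 2^15 colourings; symmetry breaking exists precisely to avoid re-examining colourings that are equivalent under vertex and colour permutations.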

Learning to Hash for Large Scale Image Retrieval (14 December, 2015)

Speaker: Sean Moran
In this talk I will introduce two novel data-driven models that significantly improve the retrieval effectiveness of locality sensitive hashing (LSH), a popular randomised algorithm for nearest neighbour search that permits relevant data-points to be retrieved in constant time.

In this talk I will introduce two novel data-driven models that significantly improve the retrieval effectiveness of locality sensitive hashing (LSH), a popular randomised algorithm for nearest neighbour search that permits relevant data-points to be retrieved in constant time, independent of the database size.

To cut down the search space LSH generates similar binary hashcodes for similar data-points and uses the hashcodes to index database data-points into the buckets of a set of hashtables. At query time only those data-points that collide in the same hashtable buckets as the query are returned as candidate nearest neighbours. LSH has been successfully used for event detection in large scale streaming data such as Twitter [1] and for detecting 100,000 object classes on a single CPU [2].


The generation of similarity preserving binary hashcodes comprises two steps: projection of the data-points onto the normal vectors of a set of hyperplanes partitioning the input feature space followed by a quantisation step that uses a single threshold to binarise the resulting projections to obtain the hashcodes. In this talk I will argue that the retrieval effectiveness of LSH can be significantly improved by learning the thresholds and hyperplanes based on the distribution of the input data.
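The two steps described above can be sketched in a few lines (a hedged illustration of data-independent sign-random-projection LSH, not the learned thresholds and hyperplanes the talk introduces; all names are illustrative):

```python
import random

def make_hyperplanes(dim, n_bits, seed=42):
    """Sample random hyperplane normals (the data-independent step)."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(dim)]
            for _ in range(n_bits)]

def hashcode(point, hyperplanes):
    """Project onto each normal, then quantise each projection to one
    bit with a single threshold at zero."""
    return tuple(int(sum(x * w for x, w in zip(point, h)) >= 0)
                 for h in hyperplanes)

planes = make_hyperplanes(dim=3, n_bits=8)
a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]  # same direction as a, so identical projection signs
print(hashcode(a, planes) == hashcode(b, planes))  # True
```

The hashcode serves as a hashtable bucket key, so only points colliding with the query's code need to be examined; the talk's argument is that placing the thresholds and hyperplanes by learning from the data distribution gives much better buckets than this random construction.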


In the first part of my talk I will provide a high level introduction of LSH. I will then argue that LSH makes a set of limiting assumptions arising from its data-independence that hamper its retrieval effectiveness. This motivates the second and third parts of my talk in which I introduce two new models that address these limiting assumptions. 


Firstly, I will discuss a scalar quantisation model that can learn multiple thresholds per LSH hyperplane using a novel semi-supervised objective function [3]. Optimising this objective function results in thresholds that reduce information loss inherent in converting the real-valued projections to binary. Secondly, I will introduce a new two-step iterative model for learning the hashing hyperplanes [4]. In the first step the hashcodes of training data-points are regularised over an adjacency graph which encourages similar data-points to be assigned similar hashcodes. In the second step a set of binary classifiers are learnt so as to separate opposing bits (0,1) with maximum margin. Repeating both steps iteratively encourages the hyperplanes to evolve into positions that provide a much better bucketing of the input feature space compared to LSH.


For both algorithms I will present a set of query-by-example image retrieval results on standard image collections, demonstrating significantly improved retrieval effectiveness versus state-of-the-art hash functions, in addition to a set of interesting and previously unexpected results.

[1] Sasa Petrovic, Miles Osborne and Victor Lavrenko, Streaming First Story Detection with Application to Twitter, In NAACL'10.


[2] Thomas Dean, Mark Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan,  and Jay Yagnik, Fast, Accurate Detection of 100,000 Object Classes on a Single Machine, In CVPR'13.


[3] Sean Moran, Victor Lavrenko and Miles Osborne. Neighbourhood Preserving Quantisation for LSH, In SIGIR'13.


[4] Sean Moran and Victor Lavrenko. Graph Regularised Hashing. In ECIR'15.

GPG: Towards Transparent Resilience for the Chapel Parallel Language (09 December, 2015)

Speaker: Ms Konstantina Panagiotopoulou

The rapidly increasing number of components in modern High Performance Computing (HPC) systems poses a challenge to their resilience; predictions of time between failures on ExaScale systems range from hours to minutes. Yet, the prevalent HPC programming model today does not tolerate faults. This talk features the design and initial implementation of transparent resilience for Chapel, a parallel HPC language following the Partitioned Global Address Space (PGAS) programming model.

We address cases of hardware failure on one or multiple nodes during program execution in a distributed setup, using detection and recovery mechanisms. We focus on the runtime system, particularly on the communication (GASNet) and tasking layers to address task parallelism and extend the work on library level to handle data parallelism. Ongoing work addresses integration of distributed task adoption strategies with Chapel's default data distributions. This talk summarises results and experiences from a 2-month internship with the Chapel developer's group at Cray.

Bio: I'm a third year PhD student at Heriot-Watt University and a member of the Dependable Systems Group. My research interests are in the design and implementation of Partitioned Global Address Space (PGAS) languages, in the context of High Performance Computing, with a focus on resilience. PGAS languages implement the concept of a shared global address space, and use language constructs to distribute data structures over machines, providing the programmer with opportunities to tune data locality and enhance performance. Resilience is one of the main challenges at ExaScale, and an area gaining popularity among researchers.

FATA Seminar - Three Problems for Constraint Programmers (08 December, 2015)

Speaker: Patrick Prosser

I will present three problems that were used in teaching the masters course in Constraint Programming, CP(M).  Two of these problems were assessed exercises and one was “optional” homework.

Social Robotics Seminar: TBD (04 December, 2015)

Speaker: Stacy Marsella

Tea Rooms: Conversation Oriented Project Management (02 December, 2015)

Speaker: Tim Storer

Effective communication and coordination is essential to the successful
management of software projects of any significant scale.  A variety of software tools have been developed to support different styles of project management.  Of particular note, ticket oriented tools such as Trac, Trello, Asana and Jira support creation, management and tracking of work items, often organised into sprints or milestones.  A considerable amount of meta-data can be attached to tickets, such as dependencies, effort estimates, task owner, progress updates and so on.  In principle, this information can be used to carefully track and monitor progress in a software project.
Unfortunately, a lot of this information has already been communicated in informal conversations, either verbally or through conversation oriented tools such as Slack.  Copying the information into a ticket management system requires effort (and therefore is often simply not done).  This talk sketches some ideas for a conversation oriented approach to project management, in which project information and meta-data is extracted directly from conversation tools and other software project artifacts in real time.  I'll show how we've started to play with some of these ideas in a prototype tool called Tea Rooms and outline plans for the future.

GPG: Parallel Programming in Actor-Based Applications via OpenCL (02 December, 2015)

Speaker: Dr Paul Harvey

GPU and multicore hardware architectures are commonly used in many different application areas to accelerate problem solutions relative to single CPU architectures. The typical approach to accessing these hardware architectures requires embedding logic into the programming language used to construct the application; the two primary forms of embedding are: calls to API routines to access the concurrent functionality, or pragmas providing concurrency hints to a language compiler such that particular blocks of code are targeted to the concurrent functionality. The former approach is verbose and semantically bankrupt, while the success of the latter approach is restricted to simple, static uses of the functionality.

This talk is about combining actor-based programming and OpenCL to simplify programming multicore CPUs and GPUs.

Distinguished Seminar: Building trustworthy refactoring tools (27 November, 2015)

Speaker: Simon Thompson
Computing Science Distinguished Seminar Series

Refactorings are program transformations that are intended to change the way that a program works without changing what it does. Refactoring is used to make programs more readable, easier to maintain and extend, or to improve their efficiency. These changes can be complex and wide-ranging, and so tools have been built to automate these transformations.

Because refactoring involves changing program source code, someone who uses a refactoring tool needs to be able to trust that the tool will not break their code.  In this talk I'll explore what this idea means in practice, and how we provide various levels of assurance for refactorings. While the context is tools for functional programming languages like Haskell and Erlang, the conclusions apply more widely, for instance to object-oriented languages.

Simon Thompson is Professor of Logic and Computation at the University of Kent. Functional programming is his main research field, but he has worked in various aspects of logic, and testing as well. He is the author of books on Haskell, Miranda, Erlang and constructive type theory.

Parallel Skeletons for Branch and Bound Search (25 November, 2015)

Speaker: Blair Archibald

Parallel algorithmic skeletons present a way to separate program logic
from parallel coordination. Skeletons have been widely applied in a variety of
problem areas; however, the focus is largely on problems featuring a very
regular structure (such as parallel list reduction or iteration). In this talk
we focus on applying parallel algorithmic skeletons to an irregular problem
area: Branch and Bound (B&B) Search.
We will begin with a general discussion of both parallel algorithmic skeletons and
branch and bound search problems. Then we explore the range of design decisions
available when converting B&B search problems to a parallel skeleton. To finish,
we will focus on a particular skeleton design aimed at delivering a number of
performance guarantees to the user.

GPG: Energy-Modulated Computing: Capacitors, Causality, Concurrency... (25 November, 2015)

Speaker: Prof Alex Yakovlev

For years people have been designing electronic and computing systems focusing on improving performance but only "keeping power and energy consumption in mind". This is a way to design energy-aware or power-efficient systems where energy is considered as a resource whose utilization must be optimized in the realm of performance constraints.

Increasingly, energy and power turn from optimization criteria into constraints, sometimes as critical as, for example, reliability and timing. Furthermore, quanta of energy or specific levels of power can shape the system's action. In other words, the system's behaviour, i.e. the way computation and communication are carried out, can be determined or modulated by the flow of energy into the system. This view becomes dominant when energy is harvested from the environment or strictly rationed if it comes from internal sources. This view is also analogous to what happens in biological systems.

In this talk we look at the energy-modulated computing paradigm and illustrate its manifestations in system design, such as:

  • Converting electric charge into causality and self-timed operation
  • Using concurrency for best energy utilisation
  • Models for designing energy-proportional computers (resources, modes, order graphs, partial orders)

The talk will hopefully be motivating to a wide range of audience, including electronic and computer engineers interested in physical and mathematical aspects of (concurrent) computations.

Biography. Alexandre (Alex) Yakovlev was born in 1956 in Russia. He received a D.Sc. from Newcastle University in 2006, and M.Sc. and Ph.D. from St. Petersburg Electrical Engineering Institute in 1979 and 1982 respectively, where he worked in the area of asynchronous and concurrent systems since 1980, and in the period between 1982 and 1990 held positions of assistant and associate professor at the Computing Science department. Since 1991 he has been at Newcastle University, where he worked as a lecturer, reader and professor at the Computing Science department until 2002, and is now heading the MicroSystems research group at the School of Electrical and Electronic Engineering. His interests and publications are in the field of modelling and design of asynchronous, concurrent, real-time and real-power circuits and systems. He has published six monographs and more than 350 papers in academic journals and conferences, has managed over 30 research contracts and supervised over 40 PhD students. He has been a general chair and PC chair of several international conferences, including the IEEE Int. Symposium on Asynchronous Circuits and Systems (ASYNC), Petri nets (ICATPN), Application of Concurrency to Systems Design (ACSD), Network on Chip Symposium (NOCS), and has been a chairman of the Steering committee of the ACSD conference for the last 15 years. In 2011-2013 he was a Dream Fellow of EPSRC, UK, to investigate different aspects of energy-modulated computing.

FATA Seminar - Automated Verification of Quantum Circuits (24 November, 2015)

Speaker: Sarah Sharp

When directly simulated on a classical computer, quantum computations can result in an exponential slowdown, so how can we verify quantum protocols without a quantum computer to hand? In this talk I will go over a few different ways in which formal methods can be used to validate quantum protocols and what, if any, speedups can be achieved in the process.

I will start by presenting ways in which equivalence checking can be used for quantum circuits, which can be done by testing whether one circuit representation is equal to another, and how to model the build-up of operations on a set of qubits, comparing both mapstate and QUIDD representations and their related operations to attempt to further minimise the runtime.
Previous work by Ebrahim et al. established a model-checking technique for checking the equivalence of protocols described by a specific input language using the stabilizer formalism. By restricting my initial examples to the Clifford group operators, I will demonstrate how the checking of equivalence between pairs of generated circuits can be done using both stabilizer arrays and mapstate representations, looking at the pros and cons of both techniques as well as future approaches.

Social Robotics Seminar: Computational Modeling and Personal Robotics for Extracting Social Signatures (20 November, 2015)

Speaker: Mohamed Chetouani

Social signal processing is an emerging research domain with rich and open fundamental and applied challenges. In this talk, I'll focus on the development of social signal processing techniques for real applications in the field of psycho-pathology. I'll give an overview of recent research and investigation methods allowing neuroscience, psychology and developmental science to move from isolated-individual paradigms to interactive contexts by jointly analyzing the behaviors and social signals of partners. Starting from the concept of interpersonal synchrony, we'll show how to address the complex problem of evaluating children with pervasive developmental disorders. These techniques are also demonstrated in the context of human-robot interaction, through a new way of using robots in autism (moving from assistive devices to clinical investigation tools). I will finish by closing the loop between behaviors and physiological states by presenting new results on hormones (oxytocin, cortisol) and behaviors (turn-taking, proxemics) during early parent-infant interactions.

Mohamed Chetouani is the head of the IMI2S (Interaction, Multimodal Integration and Social Signal) research group at the Institute for Intelligent Systems and Robotics (CNRS UMR 7222), University Pierre and Marie Curie-Paris 6. He received the M.S. degree in Robotics and Intelligent Systems from the UPMC, Paris, 2001. He received the PhD degree in Speech Signal Processing from the same university in 2004. In 2005, he was an invited Visiting Research Fellow at the Department of Computer Science and Mathematics of the University of Stirling (UK). Prof. Chetouani was also an invited researcher at the Signal Processing Group of Escola Universitaria Politecnica de Mataro, Barcelona (Spain). He is currently a Full Professor in Signal Processing, Pattern Recognition and Machine Learning at the UPMC. His research activities, performed at the Institute for Intelligent Systems and Robotics, cover the areas of social signal processing and personal robotics through non-linear signal processing, feature extraction, pattern classification and machine learning. He is also the co-chairman of the French Working Group on Human-Robots/Systems Interaction (GDR Robotique CNRS) and a Deputy Coordinator of the Topic Group on Natural Interaction with Social Robots (euRobotics). He is the Deputy Director of the Laboratory of Excellence SMART Human/Machine/Human Interactions In The Digital Society.

Departmental Seminar: Technology Transfer in Theory and Practice (19 November, 2015)

Speaker: Prof Joe Armstrong

For the last few years I have worked at Ericsson in the "Systems and Technology" group.  This is a small group which is responsible for Ericsson's software strategy and reports directly to senior technical management.

In this group we try to identify the software that Ericsson will need for its future projects and start projects that will enable technology transfer.  We work closely with a hardware group that does the same thing for hardware.

In this talk I'll outline the following:

    - The mechanisms for change
    - Why I think projects fail
    - How some projects succeed
    - How projects get financed
    - The Academic/Industry interface

Having looked at this I'll talk about the key areas that are the focus of current research in Ericsson, and where we wish to encourage participation.

I also have a private research agenda and have been battling away with the same old problems for the last 30 years, so I'll say a little about some unsolved problems that interest me (who knows, maybe somebody in the audience will have solved these).

Biography: Joe Armstrong is the inventor of the programming language Erlang.  He has written several books on Erlang and has a PhD from KTH (thesis title: "Making reliable distributed systems in the presence of software errors"). He has founded a successful software company and initiated a number of research projects.

Teaching Discussion Group - Why Do We Lecture? (19 November, 2015)

As part of our series of informal gatherings to discuss issues of learning and teaching, please join us for our first meeting to discuss why we lecture.

An informal discussion of our views on lectures: what is their value? Are they the best mechanism for teaching and learning? What else could we be doing? Takes place in the common room.

GPG: Some thoughts on Erlang2 (19 November, 2015)

Speaker: Prof Joe Armstrong

Changing a popular programming language is very difficult. Instead of changing Erlang, I define a language erl2 which compiles to Erlang.

This talk discusses the following:

   - Why changing a language is difficult
   - New language, modified language or code generator?
   - What's new in erl2
   - Mutable value chains
   - "black box" crash recorders
   - Global "global" processes

This is work in progress.

FATA Seminar - Behavioural prototypes (17 November, 2015)

Speaker: Roland Perera

I'll demo a simple language of concurrent objects which explores the design space between type systems and continuous testing. In our language, finite-state programs are checked automatically for multiparty compatibility. This property of communicating automata, taken from the session types literature but here applied to terms rather than types, guarantees that no state-related errors arise during execution: no object gets stuck because it was sent the wrong message, and every message is processed.

The usual object-oriented notion of subtyping is also interpreted at the level of terms rather than types. An abstraction takes the form of a prototypical implementation against which another program can be automatically tested for behavioural conformance. Any program can act as an abstraction, and conversely every abstraction is a concrete program that can be executed.
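The multiparty compatibility check from the talk is beyond a short example, but a two-party synchronous sketch conveys the idea: explore the product of two communicating state machines and reject if one participant ever gets stuck with a message nobody can receive. The machine encodings and state names below are illustrative assumptions, not the talk's language:

```python
def compatible(m1, m2, start=("s0", "t0")):
    """Explore the synchronous product of two FSMs. Each machine maps
    state -> {(direction, message): next_state}, with direction '!' for
    send and '?' for receive. Returns True if every reachable send has a
    matching receive and the machines only stop when both have terminated."""
    seen, stack = set(), [start]
    while stack:
        s1, s2 = stack.pop()
        if (s1, s2) in seen:
            continue
        seen.add((s1, s2))
        t1, t2 = m1.get(s1, {}), m2.get(s2, {})
        if not t1 and not t2:
            continue  # both machines terminated: fine
        moved = False
        for (d, msg), n1 in t1.items():
            dual = ('?' if d == '!' else '!', msg)
            if dual in t2:  # the other side can synchronise on this message
                stack.append((n1, t2[dual]))
                moved = True
        if not moved:
            return False  # stuck: a message with no matching partner action
    return True

# A tiny ping/pong protocol: the client sends "ping" and expects "pong".
client = {"s0": {("!", "ping"): "s1"}, "s1": {("?", "pong"): "s2"}, "s2": {}}
server = {"t0": {("?", "ping"): "t1"}, "t1": {("!", "pong"): "t2"}, "t2": {}}
print(compatible(client, server))      # → True

bad_server = {"t0": {("?", "ping"): "t1"}, "t1": {}}  # never replies
print(compatible(client, bad_server))  # → False
```

The property in the talk is stronger (multiparty, with asynchronous buffers), but the shape is the same: a reachability check over joint states of communicating automata.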

Cross Validation between CloudSim and the Glasgow Raspberry Pi Cloud (11 November, 2015)

Speaker: Dhahi Sulaybikh D Alshammari

Currently, researchers in the cloud computing field often find it difficult to run their experiments on the actual test-bed infrastructure of an existing cloud provider, and therefore tend to run their experiments on simulators. I am investigating cloud computing performance evaluation using various models and techniques. I examine two models: a software simulator called CloudSim and a small-scale hardware testbed called the Glasgow Raspberry Pi Cloud. The presentation describes an empirical cross-validation of these two models, which I performed this year.

FATA Seminar - Stable Marriage and Roommates problems with restricted edges (10 November, 2015)

Speaker: David Manlove

In the Stable Marriage and Roommates problems, a set of agents is given, each of them having a strictly ordered preference list over some or all of the other agents. A matching is a set of disjoint pairs of mutually acceptable agents. If any two agents mutually prefer each other to their partner, then they block the matching, otherwise, the matching is said to be stable. We investigate the complexity of finding a solution satisfying additional constraints on restricted pairs of agents. Restricted pairs can be either forced or forbidden. A stable solution must contain all of the forced pairs, while it must contain none of the forbidden pairs. In this talk we describe a range of algorithmic results for problems involving computing stable matchings in the presence of restricted edges.  Whilst in some cases NP-hardness and strong inapproximability results prevail, certain other cases give rise to polynomial-time algorithms and constant-factor approximation algorithms.  This is joint work with Agnes Cseh.
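As a hedged illustration (not the algorithms from the talk), one simple way to handle forbidden pairs is to treat them as unacceptable during men-proposing Gale-Shapley. Note this is a simplification: in the model discussed in the talk, forbidden pairs may still block a matching, which is part of what makes the problem hard.

```python
def gale_shapley(men_prefs, women_prefs, forbidden=frozenset()):
    """Men-proposing Gale-Shapley. Pairs in `forbidden` are skipped,
    i.e. treated as mutually unacceptable (a simplification of the
    restricted-edges model from the talk). Returns man -> woman."""
    free = list(men_prefs)
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}  # woman -> man
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    while free:
        m = free.pop()
        prefs = men_prefs[m]
        while next_choice[m] < len(prefs):
            w = prefs[next_choice[m]]
            next_choice[m] += 1
            if (m, w) in forbidden:
                continue  # skip a forbidden pair
            if w not in engaged:
                engaged[w] = m
                break
            if rank[w][m] < rank[w][engaged[w]]:
                free.append(engaged[w])  # w trades up; old partner is free
                engaged[w] = m
                break
        # if m exhausts his list, he stays unmatched
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["a", "b"], "y": ["a", "b"]}
print(sorted(gale_shapley(men, women).items()))                 # → [('a', 'x'), ('b', 'y')]
print(sorted(gale_shapley(men, women, {("a", "x")}).items()))   # → [('a', 'y'), ('b', 'x')]
```

Forcing a pair is not handled by this sketch at all; as the talk explains, forced and forbidden constraints can push the problem into NP-hard territory.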

Social Robotics Seminar: Delighting the User With Speech Synthesis (06 November, 2015)

Speaker: Matthew Aylett

We all know there is something special about speech. Our voices are not just a means of communicating (although they are superb at that); they also give a deep impression of who we are. They can betray our upbringing, our emotional state, our state of health. They can be used to persuade and convince, to calm and to excite. Speech synthesis technology offers a means to engage the user, to personify an interface, to add delight to human computer interaction. In this talk I will present speech synthesis work that supports social interaction through the use of emotion, personalisation and audio design; we will relate this technology to requirements in dialogue systems, eyes-free data aggregation and audio interfaces; and I will discuss the challenges the technology faces for a pervasive, eyes-free future.

CereProc Chief Science Officer Dr Matthew Aylett has over 15 years’ experience in commercial speech synthesis and speech synthesis research. He is a founder of CereProc, which offers unique emotional and characterful synthesis solutions and has recently been awarded a Royal Society Industrial Fellowship to explore the role of speech synthesis in the perception of character in artificial agents.

Towards an Adaptive Framework for Performance Portability (04 November, 2015)

Speaker: Patrick Maier

The recent proliferation of parallel architectures --- multicores, manycores, GPU accelerators, clusters --- calls for a rethink on portability. It is not enough to just compile and run code across different architectures. Portable parallel code should also perform reasonably across different parallel architectures and configurations, even if these configurations differ significantly in crucial parameters like number of processors or communication latency.

The AJITPar project aims to achieve a degree of performance portability, i.e. the same parallel code should perform decently on a certain range of parallel architectures (multicores, manycores and clusters). This requires a framework that can transform the parallelism expressed in the code to levels that suit the architecture. AJITPar proposes to base this framework on a trace-based just-in-time (JIT) compiler for a functional language. The reasons are threefold:

(1) Functional programs are easy to transform;

(2) dynamic compilation allows for a wider range of transformations, including ones depending on runtime information;

(3) trace-based JIT compilers build intermediate data structures (traces) that can be used for cost analysis.

In this talk, I will sketch an overview of the AJITPar project and report on the current status. In particular, I will talk about building a parallel task scheduler for Pycket, a recent trace-based JIT for Racket. (Joint work with Magnus Morton and Phil Trinder.)

An electroencephalograpy (EEG)-based real-time feedback training system for cognitive brain-machine interface (cBMI) (04 November, 2015)

Speaker: Kyuwan Choi

In this presentation, I will present a new cognitive brain-machine interface (cBMI) using cortical activities in the prefrontal cortex. In the cBMI system, subjects conduct directional imagination, which is more intuitive than the existing motor imagery. The subjects control a bar on the monitor freely using directional information extracted from the prefrontal cortex, and the prefrontal cortex is in turn activated by giving them the movement of the bar as feedback. Furthermore, I will introduce an EEG-based wheelchair system using the cBMI concept. With the cBMI it is possible to build a more intuitive BMI system. It could help improve the cognitive function of healthy people, and could help activate the region around the damaged area in patients with prefrontal damage, such as patients with dementia or autism, by consistently activating their prefrontal cortex.

GPG: Performance Portability through Semi-explicit Placement in Distributed Erlang (04 November, 2015)

Speaker: Dr Kenneth MacKenzie

The Erlang language provides features which make it very easy to implement applications distributed over large networks.  The problem then arises of how one should deploy such applications, particularly in networks where the individual nodes may have varying characteristics and where communication times may be non-uniform.

In this talk, I'll describe some work from the RELEASE project at Glasgow where we designed and implemented libraries providing methods for "semi-explicit placement", where the programmer selects nodes for spawning remote processes based on properties of nodes and communication latencies.  We claim that these methods will help programmers to achieve good performance in a portable way, without requiring detailed knowledge of the network in advance.
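The RELEASE libraries are written in Erlang, but the placement idea can be sketched language-neutrally: filter the candidate nodes by required properties, then pick the one with the lowest measured latency. All node names, attributes and latency figures below are hypothetical illustrations, not the libraries' API:

```python
def choose_node(nodes, required_attrs, latency):
    """Pick a node for spawning a remote process: it must have all
    required attributes, and among qualifying nodes we take the one
    with the lowest measured round-trip latency. `nodes` maps node
    name -> set of attributes; `latency` maps node name -> RTT in ms.
    Returns the chosen node name, or None if no node qualifies."""
    candidates = [n for n, attrs in nodes.items()
                  if required_attrs <= attrs]  # subset test
    if not candidates:
        return None
    return min(candidates, key=lambda n: latency[n])

# Hypothetical cluster description.
nodes = {
    "node_a": {"gpu", "large_memory"},
    "node_b": {"large_memory"},
    "node_c": {"gpu", "large_memory"},
}
latency = {"node_a": 40.0, "node_b": 2.5, "node_c": 12.0}

print(choose_node(nodes, {"gpu"}, latency))  # → node_c
print(choose_node(nodes, {"ssd"}, latency))  # → None
```

The "semi-explicit" part is that the programmer states the properties and lets the library do the selection, rather than naming a concrete node.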

I'll give an introduction to Erlang and the issues that arise in deploying distributed applications, and then describe our libraries, including some topological and statistical methods used in their design and validation.  The São Tomé shorttail and other birds will also put in an appearance.

FATA Seminar - Mungo: Typechecking Protocols (03 November, 2015)

Speaker: Dimitrios Kouzapas

We are demonstrating Mungo, a tool developed for type-checking typestate for objects in Java. Typestate is a notion that embeds a state in the type of an object, with each state allowing only certain methods to be called. The demonstration will focus on the relation between Mungo and communication protocols that are based on global session types.
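Mungo performs this check statically on Java source; as a rough dynamic sketch of the typestate idea, a protocol can be modelled as a finite-state machine and a sequence of method calls checked against it. The file protocol below is an illustrative example, not Mungo syntax:

```python
# A typestate protocol as a finite-state machine: each state lists the
# methods that may be called in it, and the state each call leads to.
FILE_PROTOCOL = {
    "CLOSED": {"open": "OPEN"},
    "OPEN":   {"read": "OPEN", "close": "CLOSED"},
}

def check_sequence(protocol, start, calls):
    """Return True if the call sequence respects the typestate protocol."""
    state = start
    for method in calls:
        if method not in protocol[state]:
            return False  # method not permitted in the current state
        state = protocol[state][method]
    return True

print(check_sequence(FILE_PROTOCOL, "CLOSED", ["open", "read", "close"]))  # → True
print(check_sequence(FILE_PROTOCOL, "CLOSED", ["read"]))                   # → False
```

A typestate checker like Mungo rejects the second sequence at compile time, before the bad call can happen at runtime.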

Multidisciplinary Madness in the Wild (29 October, 2015)

Speaker: Prof Jon Whittle (Lancaster University)

This talk will reflect on a major 3 year project, called Catalyst, that carried out 13 multidisciplinary, rapid innovation digital technology research projects in collaboration with community organisations “in the wild”. These projects covered a wide range of application domains including quantified self, behaviour change, and bio-feedback, but were all aimed at developing innovative digital solutions that could promote social change. Over the 3 year project, Catalyst worked in collaboration with around 90 community groups, charities, local councils and other organisations to co-develop research questions, co-design solutions, and co-produce and co-evaluate them. The talk will reflect on what worked well and badly in this kind of highly multidisciplinary research ‘in the wild’ project.

Bio: Jon Whittle is Professor of Computer Science and Head of School at Lancaster’s School of Computing and Communications. His background is in software engineering and human-computer interaction research but in the last six years, he has taken a keen interest in interdisciplinary research. During this time, he has led five major interdisciplinary research projects funded to around £6M. Through these, he has learned a lot about what works — and what doesn’t — when trying to bring researchers from different disciplinary backgrounds together.

ENDS Seminar: Introduction to SDN and OpenFlow (28 October, 2015)

Speaker: Simon Jouet

Over the last few years we have discussed OpenFlow and Software Defined Networking (SDN) multiple times in the ENDS talks, but we have never described what it is or why it became so popular in only a few years. In this talk I will cover what SDN is, the change in paradigm from traditional networking, and its most predominant implementation, OpenFlow.

Adapting biomechanical simulation for physical ergonomics evaluation of new input methods (28 October, 2015)

Speaker: Myroslav Bachynskyi

Recent advances in sensor technology and computer vision have allowed new computer input methods to emerge rapidly. These methods are often considered more intuitive and easier to learn compared to the conventional keyboard or mouse; however, most of them are poorly assessed with respect to their physical ergonomics and the health impact of their usage. The main reasons for this are the large input spaces provided by these interfaces, the absence of a reliable, cheap and easy-to-apply physical ergonomics assessment method, and the absence of biomechanics expertise among user interface designers. The goal of my research is to develop a physical ergonomics assessment method which supports interface designers at all stages of the design process, at low cost and without requiring specialized knowledge. Our approach is to extend biomechanical simulation tools developed for medical and rehabilitation purposes and adapt them to the Human-Computer Interaction setting. The talk gives an overview of problems related to the development of the method and shows answers to some of the fundamental questions.

GPG: Obliterating Obstructions: Detecting Dependencies Disruptive to Parallelisation in Recursive Functions (28 October, 2015)

Speaker: Mr Adam Barwell

To take advantage of increasingly parallel hardware, a simple, safe, and effective method to introduce parallelism is needed. Current approaches can be divided into two broad categories: automatic, and abstraction. Whilst fully automatic solutions mean the programmer need not lift a finger, they tend to target highly specific constructs, rendering them virtually useless in all but a few situations. Conversely, the development of better abstractions and interfaces presents a more general solution, but still requires a level of expertise from the programmer to be effective. This is especially pronounced when a program must be transformed to enable the introduction of parallelism. To reduce the burden this transformation phase places on the proverbial programmer, we propose a method that uses static analysis techniques to identify operations within tail-recursive functions that are obstructive to the introduction of parallelism, and use refactoring techniques to extract and expose potential parallelism in spite of those obstructions.

Bio: Studying under Prof Kevin Hammond and Dr Christopher Brown, Adam is a PhD student at the University of St Andrews. He is currently interested in dependency analysis and program transformation techniques, principally to enable the introduction of parallelism to sequential programs. Having cut his teeth on Prolog and Miranda back at UCL, he currently has an odd fixation with Erlang, but is not above tinkering in Haskell or perpetually intending to play with Idris.

FATA Seminar - Enumeration of knots (27 October, 2015)

Speaker: Craig Reilly

Enumeration of knots is a key problem for mathematicians working in knot theory, a branch of topology, and has been since the time of Tait and Little in the late 19th century, who tabulated all prime knots up to 10 crossings. Most of the work in tabulating prime knots makes use of DT code representations of knots; we instead make use of Gauss code representations. This choice of encoding has the advantage that it is relatively easy to understand, but it also presents problems, which will be discussed. Our enumeration relies on constraint programming, and this meeting of CP and topology appears to be novel. The symmetries of the problem are of particular interest and we will explore them during the talk. The material presented borrows heavily from my masters project.

Detecting Swipe Errors on Touchscreens using Grip Modulation (22 October, 2015)

Speaker: Faizuddin Mohd Noor

We show that when users make errors on mobile devices, they make immediate and distinct physical responses that can be observed with standard sensors. We used three standard cognitive tasks (Flanker, Stroop and SART) to induce errors from 20 participants. Using simple low-resolution capacitive touch sensors placed around a standard device and a built-in accelerometer, we demonstrate that errors can be predicted using micro-adjustments to hand grip and movement in the period after swiping the touchscreen. In a per-user model, our technique predicted error with a mean AUC of 0.71 in Flanker and 0.60 in Stroop and SART using hand grip, while with the accelerometer the mean AUC in all tasks was ≥ 0.90. Using a pooled, non-user-specific, model, our technique achieved mean AUC of 0.75 in Flanker and 0.80 in Stroop and SART using hand grip and an AUC for all tasks > 0.90 for the accelerometer. When combining these features we achieved an AUC of 0.96 (with false accept and reject rates both below 10%). These results suggest that hand grip and movement provide strong and very low latency evidence for mistakes, and could be a valuable component in interaction error detection and correction systems.

FATA Seminar: When is finding a little graph inside a big graph hard? (20 October, 2015)

Speaker: Ciaran McCreesh

Subgraph isomorphism involves finding a little "pattern" graph inside a larger "target" graph. The problem is NP-complete, but it has lots of important applications. Practical algorithms for the problem can now handle some patterns with up to a thousand vertices, and targets with up to ten thousand vertices---but they cannot handle all such graphs, and we need to make sure we aren't making overly bold claims based upon favourable results from particular benchmark sets.

We've been looking at how to generate really hard random instances for the problem. This isn't as simple as, for example, random maximum clique, because we have lots of parameters we can vary independently. This short talk is mostly about figuring out how we should present the data: we're going to put up some pretty charts with lots of colours, and ask whether you find them helpful in understanding what's going on.
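For intuition, a bare-bones backtracking matcher shows what a subgraph isomorphism solver does, minus the degree-based filtering and inference that make practical algorithms scale to the sizes mentioned above. This is a sketch, not the algorithms under discussion:

```python
def subgraph_isomorphism(pattern, target):
    """Find an injective mapping of pattern vertices to target vertices
    that preserves pattern edges (non-induced). Graphs are dicts mapping
    vertex -> set of adjacent vertices. Returns a mapping dict, or None."""
    p_nodes = list(pattern)

    def extend(mapping):
        if len(mapping) == len(p_nodes):
            return dict(mapping)
        v = p_nodes[len(mapping)]  # next pattern vertex to assign
        for w in target:
            if w in mapping.values():
                continue  # injectivity: w already used
            # every already-mapped neighbour of v must map to a neighbour of w
            if all(mapping[u] in target[w] for u in pattern[v] if u in mapping):
                mapping[v] = w
                result = extend(mapping)
                if result:
                    return result
                del mapping[v]  # backtrack
        return None

    return extend({})

# Pattern: a triangle. Target: a square with one diagonal (contains a triangle).
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
square_diag = {"a": {"b", "c", "d"}, "b": {"a", "c"},
               "c": {"a", "b", "d"}, "d": {"a", "c"}}
print(subgraph_isomorphism(triangle, square_diag) is not None)  # → True

square = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(subgraph_isomorphism(triangle, square))  # → None
```

Even this toy version exposes the parameters the talk varies to make instances hard: pattern size, target size, and the edge densities of both graphs.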

How do I Look in This? Embodiment and Social Robotics (16 October, 2015)

Speaker: Ruth Aylett
Glasgow Social Robotics Seminar Series

Robots have been produced with a wide variety of embodiments, from plastic-skinned dinosaurs to human lookalikes, via any number of different machine-like robots. Why is embodiment important? What do we know about the impact of embodiment on the human interaction partners of a social robot? How naturalistic should we try to be? Can one robot have multiple embodiments? How do we engineer expressive behaviour across embodiments? I will discuss some of these issues in relation to work in the field.

More notes from a small island (16 October, 2015)

Speaker: Jeremy Singer

 I visited Singapore in September, delivering Discrete Math + Linux courses to our new CS students at the Singapore Institute of Technology. In this non-technical talk I will review my experiences - academic, culinary and political. Presentation features lots of fun photos and moderately amusing anecdotes!

Intent aware Interactive Displays: Recent Research and its Antecedents at Cambridge Engineering (15 October, 2015)

Speaker: Pat Langdon and Bashar Ahmad (University of Cambridge)

Current work at CUED aimed at stabilising pointing for moving touchscreen displays has met recent success in Automotive, including funding and Patents. This talk will establish the antecedents of the approach in studies aimed at improving access to computers for people with impairments of movement and vision.

One theme in the EDC has been computer-assisted interaction for movement impairment using haptic feedback devices. This early approach showed some promise in mitigating extremes of movement but was dependent on hardware implementations such as the Logitech haptic mouse. Other major studies since have examined more general issues behind the development of multimodal interfaces: for an interactive digital TV (EU GUIDE), and for use in adaptive mobile interfaces for new developments of wireless communication, in the India-UK Advanced Technology Centre (IU-ATC).
Most recently, Pat Langdon’s collaboration with the department’s signal processing group has led to the realisation that predicting a user’s pointing intentions from extremely perturbed cursor movement is a similar problem to predicting a moving object’s future position from irregularly timed and cluttered trajectory data points from multiple sources. This raised an opportunity in the automotive domain, and Bashar Ahmad will describe in detail recent research on using software filtering as a way of improving interactions with touchscreens in a moving vehicle.


Dr Pat Langdon is a Principal Research Associate in the Cambridge University Engineering Department and lead researcher in inclusive design within the Engineering Design Centre. He has originated numerous research projects in design for inclusion and HMI since joining the department in 1997. Currently, he is PI of two projects, one a commercial collaboration in automotive, and Co-I of a four-year EPSRC research collaboration.

Dr Bashar Ahmad is a Senior Research Associate in the Signal Processing and Communications (SigProC) Laboratory, Engineering Department, Cambridge University. Prior to joining SigProC, Bashar was a postdoctoral researcher at Imperial College London. His research interests include statistical signal processing, Bayesian inference, multi-modal human computer interactions, sub-Nyquist sampling and cognitive radio.

Working in Finance IT (14 October, 2015)

Speaker: Richard Croucher

The banks have the most complex IT systems in the world.  Richard has worked on many of these and will focus on his most recent 4 years at Barclays, which currently uses 60,000 servers and has around 21,000 IT employees.  Richard will briefly overview what all these computers are used for and the kinds of jobs people in IT have.  He will then provide the opportunity to ask questions.

As well as Barclays, Richard has consulted on IT for HSBC, RBS, Deutsche Bank, Credit Suisse, Flow Traders, J.P. Morgan, Merrill Lynch and Bank of America.  He has also held senior architecture positions with both Microsoft and Sun Microsystems and can contrast jobs in the finance sector with working for large US technology companies.

Bio: Richard is a platform architect who specializes in high performance systems, including those used by financial institutions for high frequency trading and huge compute clusters with thousands of nodes used in the Cloud. Richard discovered Erlang and OTP three years ago and has adopted this as his platform of choice.   He has designed and helped build a large and complex Cloud based application using Erlang/OTP and has been exploring the design challenges of getting this to scale to millions of users for the last 12 months.

Richard is a multi-disciplinarian with experience across hardware, storage, networking, operating systems, DevOps and systems programming.  He has to use all of these when designing a new platform. Over the years, he's programmed in Assembler, Basic, Fortran, Pascal, C, C++, Java, C# and Erlang.  He has designed and built his own computers, working at the chip level and designing his own circuit boards. Richard is currently VP of High Frequency Engineering at Barclays and has had previous roles as Chief Architect at Sun Microsystems and Principal Architect at Microsoft (Azure).

Richard is a Fellow of STAC Research, a Fellow of the British Computer Society and a Chartered IT Practitioner and holds degrees from Brunel University, the University of Berkshire and the University of East London.

A conceptual model of the future of input devices (14 October, 2015)

Speaker: John Williamson

Turning sensor engineering into advances in human-computer interaction is slow, ad hoc and unsystematic. I'll discuss a fundamental approach to input device engineering, and illustrate how machine learning could have the exponentially-accelerating impact in HCI that it has had in other fields.

[caveat: This is a proposal: there are only words, not results!]

GPG: Improving Scalability of Distributed Computing Environments (14 October, 2015)

Speaker: Richard Croucher

Scaling distributed computing environments from hundreds to thousands of nodes is a challenge for programming environments such as Erlang/OTP. Traditional large-scale compute environments use MPI, or offer simple grid computing using Distributed Resource Managers. Functional languages such as Erlang provide greater flexibility and programmer productivity than MPI or grid-based programs, but don't scale as well. This session will look at the scaling constraints of Erlang/OTP and discuss opportunities for utilising some of the techniques used by MPI, such as RDMA and multicast, to improve its scalability.

FATA Seminar: Complexity of the n-Queens Completion Problem (13 October, 2015)

Speaker: Ian Gent

The n-Queens problem is to place n chess queens on an n by n chessboard so that no two queens are on the same row, column or diagonal in either direction. This is one of the most famous puzzles there is, and is often - incorrectly - attributed to Gauss. It has very often been used as a benchmark for combinatorial search methods, and also very often criticised as a bad test case [e.g. see *]. The reason for the criticism is that a solution can be computed in time O(n) for any n > 3.

We show that this criticism does not apply to the completion variant of the problem. That is, given m queens which do not attack each other on an n by n chessboard, can we add n-m queens to get a solution of the n queens problem? We show that this problem is NP-Complete and #P-Complete. We also report how difficult the n-Queens completion problem is on random problems, and thereby seek to rescue the n-Queens problem - in its completion version - as a valid benchmark problem. [This is joint work with Chris Jefferson and Peter Nightingale, St Andrews.]
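A plain backtracking completion solver illustrates the problem statement; since the completion problem is NP-complete, a sketch like this is exponential in the worst case (illustrative only, not the authors' code):

```python
def complete_queens(n, placed):
    """Try to extend a partial placement to a full n-Queens solution.
    `placed` maps column -> row for pre-placed, mutually non-attacking
    queens. Returns a full column -> row dict, or None if no completion
    exists. Plain backtracking over the free columns."""
    def safe(col, row, board):
        # No shared row, and no shared diagonal in either direction.
        return all(row != r and abs(row - r) != abs(col - c)
                   for c, r in board.items())

    def extend(board, col):
        if col == n:
            return dict(board)
        if col in placed:                  # pre-placed queen: keep it
            return extend(board, col + 1)
        for row in range(n):
            if safe(col, row, board):
                board[col] = row
                result = extend(board, col + 1)
                if result:
                    return result
                del board[col]             # backtrack
        return None

    return extend(dict(placed), 0)

# Place one queen at column 0, row 1 on a 5x5 board and complete it.
solution = complete_queens(5, {0: 1})
print(solution is not None and solution[0] == 1)  # → True

# No 4-Queens solution has a queen in the corner, so this fails.
print(complete_queens(4, {0: 0}))  # → None
```

The hardness result means no completion algorithm is expected to do fundamentally better than search on the worst instances, which is exactly what makes the completion variant a credible benchmark.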

* see

ENDS Seminar - ARM University Program: Bridging the gap between Industry and Academia (08 October, 2015)

Speaker: Ashkan Tousimojarad

ARM is the industry's leading supplier of microprocessor technology. Today ARM technology is in use in 95% of smart phones, 80% of digital cameras, and 35% of all electronic devices.

After a brief introduction to the company and its activities, I will talk about the ARM University Program and its relationships with universities worldwide. I will then talk about my experience at ARM during the past few months, specifically about the design and implementation of an automatic grader for embedded systems courses. 

At the end of the talk, I will present a short demo and will talk about our plan to extend ARM and partner-based research and development into more universities and higher education institutions globally.

ENDS Seminar (07 October, 2015)

Speaker: Jerry Sobieski

The GÉANT Testbeds Service provides virtualized cyberinfrastructure environments spanning Europe for use by the research community.   These environments offer computational resources, data transport facilities, switching elements, and other resources, allocated from physical infrastructure components using dynamic and automated provisioning processes.  These capabilities provide wide-area "at scale" testbeds for network research and for the development and testing of distributed applications and other emerging services.   This talk will describe the architectural principles and key [interesting] implementation aspects that underlie the GÉANT Testbeds Service.   Aspects and issues that pose future challenges and/or research or collaborative opportunities will be noted and discussed in the Q&A following the talk.

GPG: Autonomic Coordination of Skeleton-based Applications over CPU/GPU Multi-Core Architectures (07 October, 2015)

Speaker: Dr Mehdi Goli

Widely adumbrated as patterns of parallel computation and communication, algorithmic skeletons introduce a viable solution for efficiently programming modern heterogeneous multi-core architectures equipped not only with traditional multi-core CPUs, but also with one or more programmable Graphics Processing Units (GPUs). By systematically applying algorithmic skeletons to address complex programming tasks, it is arguably possible to separate the coordination from the computation in a parallel program, and therefore subdivide a complex program into building blocks (modules, skids, or components) that can be independently created and then used in different systems to drive multiple functionalities. By exploiting such systematic division, it is feasible to automate coordination, addressing extra-functional and non-functional features such as application performance, portability, and resource utilisation at the component level in heterogeneous multi-core architectures. In this paper, we introduce a novel approach that exploits the inherent features of skeleton-based applications in order to automatically coordinate them over heterogeneous (CPU/GPU) multi-core architectures and improve their performance. Our systematic evaluation demonstrates up to one order of magnitude speed-up on heterogeneous multi-core architectures.

Bio. Mehdi Goli is a software engineer at Codeplay Ltd. He received his PhD on "Autonomic Behavioural Framework for Structural Parallelism over Heterogeneous Multi-Core Systems" at the IDEAS Research Institute, Robert Gordon University. His current research interests include high-performance computing, scientific GPU computing, and parallel computing. He is one of the main designers and developers of the heterogeneous back-end for the FastFlow programming framework.

FATA Seminar: On Dots in Boxes or Permutation Pattern Classes (06 October, 2015)

Speaker: Ruth Hoffmann

We will be looking at the notion of permutation pattern classes, and the talk will give you a historical and current insight into the work done within permutation patterns and the applications thereof. Additionally, I will briefly talk about the work I have done during my PhD.

Haptic Gaze Interaction - EVENT CANCELLED (05 October, 2015)

Speaker: Poika Isokoski

Eye trackers that can be (somewhat) comfortably worn for long periods are now available. Thus, computing systems can track the gaze vector and it can be used in interactions with mobile and embedded computing systems together with other input and output modalities. However, interaction techniques for these activities are largely missing. Furthermore, it is unclear how feedback from eye movements should be given to best support users' goals. This talk will give an overview of the results of our recent work in exploring haptic feedback on eye movements and building multimodal interaction techniques that utilize the gaze data. I will also discuss some possible future directions in this line of research.

Challenges in Metabolomics, and some Machine Learning Solutions (30 September, 2015)

Speaker: Simon Rogers

Large scale measurement of the metabolites present in an organism is very challenging, but potentially highly rewarding in the understanding of disease and the development of drugs. In this talk I will describe some of the challenges in analysis of data from Liquid Chromatography - Mass Spectrometry, one of the most popular platforms for metabolomics. I will present Statistical Machine Learning solutions to several of these challenges, including the alignment of spectra across experimental runs, the identification of metabolites within the spectra, and finish with some recent work on using text processing techniques to discover conserved metabolite substructures.

FATA Seminar: Probabilistic Formal Analysis of App Usage to Inform Redesign (29 September, 2015)

Speaker: Oana Andrei

Good design of mobile apps is challenging because users are seldom homogeneous or predictable in the ways they navigate around and use the functionality presented to them. Different populations of users will engage in different ways, and redesign may be desirable or even required to support populations’ different styles of use. In this talk I will present a process of app analysis intended to support understanding of use but also redesign. This process is based on inferring activity patterns (Markov models) from usage logs and employing probabilistic formal analysis to ask questions about the use of the app and characterise the inferred activity patterns. I will illustrate this work via a case study of a mobile app presenting analytic findings and finish with discussions on how the analysis results are feeding into redesign.

Engaging with Music Retrieval (09 September, 2015)

Speaker: Daniel Boland

Music collections available to listeners have grown at a dramatic pace, now spanning tens of millions of tracks. Interacting with a music retrieval system can thus be overwhelming, with users offered ‘too-much-choice’. The level of engagement required for such retrieval interactions can be inappropriate, such as in mobile or multitasking contexts. Using listening histories and work from music psychology, a set of engagement-stratified profiles of listening behaviour are developed. The challenge of designing music retrieval for different levels of user engagement is explored with a system allowing users to denote their level of engagement and thus the specificity of their music queries. The resulting interaction has since been adopted as a component in a commercial music system.

GlobalFestival: Evaluating Real World Interaction on a Spherical Display (03 September, 2015)

Speaker: Julie Williamson (University of Glasgow)

Spherical displays present compelling opportunities for interaction in public spaces. However, there is little research into how touch interaction should control a spherical surface or how these displays are used in real-world settings. This paper presents an in-the-wild deployment of an application for a spherical display called GlobalFestival that utilises two different touch interaction techniques. The first version of the application allows users to spin and tilt content on the display, while the second version only allows spinning the content. During the 4-day deployment, we collected overhead video data and on-display interaction logs. The analysis brings together quantitative and qualitative methods to understand how users approach and move around the display, how on-screen interaction compares in the two versions of the application, and how the display supports social interaction given its novel form factor.

Keeping it Local : Runtime Improvements for Manycore Garbage Collection (02 September, 2015)

Speaker: Khaled Alnowaiser

Memory access times in modern manycore processors are non-uniform, since the memory is distributed in discrete units across the physical sockets. This is known as non-uniform memory architecture (NUMA). Parallel runtime systems attempt to improve memory access latency by allocating memory closer to the threads that will access that data. The Java virtual machine (JVM) hides NUMA complexity from Java apps. However, the JVM does not consider locality when processing objects for garbage collection.

We describe how to take advantage of connected objects to improve garbage
collection performance. We evaluate our approach on Java apps and big data
workloads. Results show an improvement in GC overhead, with up to 2.5x speedup
and 37% better application performance.
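
The locality idea described above can be illustrated with a toy model (this is a hedged sketch, not the speaker's JVM implementation; all object names and the node mapping are invented): objects reachable from the roots are traced and binned into per-NUMA-node worklists, so that chains of connected objects end up processed together on the node that owns their memory.

```python
from collections import defaultdict, deque

# Toy object graph: each object has a home NUMA node; edges are references.
# All data here is hypothetical, for illustration only.
home_node = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 0}
refs = {"a": ["b"], "b": [], "c": ["d"], "d": [], "e": ["a"]}

def locality_aware_worklists(roots):
    """Partition reachable objects into per-NUMA-node worklists,
    following reference chains so connected objects stay together."""
    worklists = defaultdict(list)
    seen = set()
    for root in roots:
        queue = deque([root])
        while queue:
            obj = queue.popleft()
            if obj in seen:
                continue
            seen.add(obj)
            worklists[home_node[obj]].append(obj)
            queue.extend(refs[obj])
    return dict(worklists)

print(locality_aware_worklists(["a", "c", "e"]))
# {0: ['a', 'b', 'e'], 1: ['c', 'd']}
```

A GC thread pinned to node 0 would then drain worklist 0, and likewise for node 1, avoiding cross-socket traffic while tracing.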

IPv6 Transition at Yahoo (25 August, 2015)

Speaker: Stephen Strowes

Three years after the IPv6 Launch coordinated by the Internet Society, we're now witnessing the final allocations from ARIN's IPv4 inventory. Simultaneously, while we see significant volumes of traffic carried over IPv6, the majority of traffic is still IPv4. We're in the early stages of the transition, not the end of it.

Different networks and countries have presented wildly different deployment patterns. This talk will cover aspects of the ongoing IPv4-to-IPv6 transition, including an overview of what different networks are doing, aspects of what we see from within Yahoo based on two distinct data sets, and some of the broader challenges faced.

Visualizing the Most Controversial and Popular Topics in Wikipedia (21 August, 2015)

Speaker: Anselm Spoerri (Rutgers University)

Conflicts occur in the peer-production process of Wikipedia and can culminate in “edit wars” for specific topics. This talk will present visualizations of the similarities and differences between the most controversial topics that have been identified in 10 different language versions of Wikipedia and discuss the dominant and shared themes of the controversies across languages and cultures. In addition, it will present visualizations of the most popular topics over time in the English version of Wikipedia and visually analyze the relationship between most controversial and popular topics.

Bio: Anselm Spoerri has a PhD from MIT and is a faculty member at the School of Information & Communication at Rutgers University. He teaches and conducts research in the areas of Information Visualization, Data Fusion and Multimedia Interfaces.

Automated Inference of Concurrency Structure for Safe Execution (13 August, 2015)

Speaker: Gul Agha

Concurrent programs today are written in one of two paradigms: actors
and threads with shared memory. While scalable concurrent systems
such as Twitter, LinkedIn, and Facebook chat have been implemented
using actor languages, many programmers continue to use threads with
shared memory. Such concurrent programs often ensure the consistency
of their data structures through control-centric synchronization such
as locks. Because control-centric synchronization is disconnected
from the consistency invariants of the data structures, it is hard to
detect bugs caused by incorrect synchronization. Moreover, a
consistency bug may be the result of some unlikely schedule and
therefore not show up in program testing. We have developed an
efficient algorithm using Bayesian probabilistic inference to infer
the concurrency structure of programs based on traces. Annotating the
program with data-centric synchronization facilitates consistency
checking, deadlock detection, and data race prevention. Finally, I
will discuss our current research on how such inference may be used to
'actorize' programs, enforcing data encapsulation and synchronization
through session types.


Short Bio

Gul Agha is Professor of Computer Science at the University of
Illinois at Urbana-Champaign. Dr. Agha's research is in programming
models and languages for open distributed and embedded computation.
Dr. Agha is a Fellow of the IEEE and served as Editor-in-Chief of IEEE
Concurrency: Parallel, Distributed and Mobile Computing (1994-98), and
of ACM Computing Surveys (1999-2007). He has published over 200
research articles and supervised 30 PhD dissertations. His book on
Actors, published by MIT Press, is among the most widely cited works.
Agha and his research group have done pioneering work in Statistical
Model Checking, concolic testing, and application of large smart
sensor networks to civil engineering, among other topics. Dr. Agha is
a co-founder of Embedor Technologies, a company providing solutions
for automated monitoring of civil infrastructure and the Internet of Things.

Decentralized Social Data-Sharing (12 August, 2015)

Speaker: Babak Esfandiari

Bad things can happen when online data-sharing systems are in the hands
of a single authority. Wikipedia, the product of millions of man-hours,
could potentially suddenly disappear if the Wikimedia Foundation runs
out of funds. Google and Facebook respectively use algorithms to rank
search results and events in feeds that might be biased to maximize ad
revenue at the expense of relevance to the user. In this talk, I
investigate a paradigm we call Decentralized Social Data-Sharing (DS)^2,
which addresses such issues by maximizing the autonomy of data-sharing
participants and decoupling the addressing of data from its location. In
particular I look at the effectiveness of data queries in such a system,
the feasibility of building (DS)^2 applications and whether autonomous
peers have an incentive to cooperate and contribute in this setting.

InternFest (05 August, 2015)

Speaker: various
showcase of summer intern projects

Throughout the summer, various undergraduate student interns have been quietly working away in the School of Computing Science. At the InternFest, we want to celebrate their contributions to our research, and find out more about their projects. Please come along to this interactive session to discover what the interns are up to!

Wellth: Refactoring The Maddening Role of Data Dominance in So Called Health Applications/Culture into something actually Useful and Usable (24 July, 2015)

Speaker: m.c. schraefel (University of Southampton)

Most health applications are number centric, aren’t they? Have you noticed that we have step counters, calorie counters, heart rate monitors (another number), and weight scales (another number)? Have we just convinced ourselves that if we get enough numbers we will become healthier? Or is the dominance of the Number in a Device simply our first-blush excitement as sensors and algorithm geeks, doing what we can do very easily with devices: show a number? After all, if showing someone a number worked as a strategy for health, given the ubiquity of the bathroom scale, no one would be overweight. What is the relationship of these numbers to sustained practice? Or perhaps to what we all may actually be shooting for: a better normal? As computer scientists, what else might we do beyond quantified self/data viz with this data towards a more effortless, better normal?

In this talk I’ll overview a few concepts that are being explored around reframing health for performance rather than “illness prevention”, and also explore some models and scenarios for joining up IoT, machine learning, vision, and HCI towards a better, because wellthier, normal.

m.c. schraefel, PhD, cscs, ceng, fbc, @mcphoo holds a chair in computer science and human performance at the University of Southampton, and also holds a Royal Academy of Engineering Chair co-sponsored with Microsoft Research to investigate the design and evaluation of methods to support creativity, innovation and discovery in science. m.c. is also a certified strength and conditioning coach, nutrition coach and functional neurology coach, meaning she works with a lot of people in pain. m.c. also leads the interdisciplinary WellthLab at Southampton, the vision of which is to make better normal; the mission of which is to develop the science, engineering and design of interactive systems to support enhancing quality of life for all. We have PhD-ships! Come follow m.c. on Twitter @mcphoo

Dynamic process migration in heterogeneous ROS-based environments (22 July, 2015)

Speaker: Jose Cano Reyes

In distributed (mobile) robotics environments, the different computing substrates offer flexible resource allocation options to perform computations that implement an overall system goal. The AnyScale concept that we introduce and describe in this paper exploits this redundancy by dynamically allocating tasks to appropriate substrates (or scales) to optimize some level of system performance while migrating others depending on current resource and performance parameters. In this paper, we demonstrate this concept with a general ROS-based infrastructure that solves the task allocation problem by optimising the system performance while correctly reacting to unpredictable events at the same time. Assignment decisions are based on a characterisation of the static/dynamic parameters that represent the system and its interaction with the environment. We instantiate our infrastructure on a case study application, in which a mobile robot navigates along the floor of a building trying to reach a predefined goal. Experimental validation demonstrates more robust performance (around a third improvement in metrics) under the AnyScale implementation framework.

MMNet 2015 (15 July, 2015)

Speaker: various
Memory Management Network meeting

The theme of this year's MMNet workshop is runtimes at every scale from mobile devices to the data centre. This one-day workshop aims to bring together UK academia and industry with the aim of establishing new links and enhancing existing collaboration within the framework of the UK Memory Management Network.

Oracle Labs Seminar (14 July, 2015)

Speaker: Peter Hsu
The CAVA Computer: Exceptional Parallelism and Energy Efficiency

Designing a new computer system is a very expensive proposition. But 80% of it is exactly the same as every other computer—you need caches, multiprocessing, coherency protocols, memory systems, etc.  Getting all of that right requires skill and experience, but is taken for granted and does not command much of a premium.  Getting it wrong, on the other hand, is a commercial disaster.  At Oracle Labs we have been developing a very energy efficient, highly parallel computer—the RAPID project.  We are now starting to define a second generation project.  In this talk I propose to standardize and open-source a design for the 80% that is the same in every design, so that everyone can concentrate on adding value to their own remaining 20%.

The CAVA computer is a “cluster in a rack” architecture targeting 10nm CMOS technology. The first part of the talk describes a 1024-node system where each node consists of 96-core, 3-issue out-of-order processor chips running at 1GHz with four DDR4 memory channels. Power estimates of different components are discussed, as well as cost projections.  The second part of the talk discusses architectural tradeoffs that were made, how this architecture might play in the HPC exa-scale arena, and broader market implications.  The talk concludes with a list of research topics that I and others at Oracle Labs are actively researching and would be interested in working with students at Universities.

Bio: Peter Hsu was born in Hong Kong and came to the United States at age 15. He received a B.S. degree from the University of Minnesota at Minneapolis in 1979, and the M.S. and Ph.D. degrees from the University of Illinois at Urbana-Champaign in 1983 and 1985, respectively, all in Computer Science. His first job was at IBM Research in Yorktown Heights from 1985-1987, where he worked with the 801 compiler team on code generation for superscalar machines. He then joined his ex-professor at a startup called Cydrome, which developed an innovative VLIW machine, but unfortunately the stock market crashed and the company ran out of money. He moved on to Sun Microsystems in 1988 and tried to build a water-cooled gallium arsenide SPARC processor, but the technology was not sufficiently mature and the effort failed. He joined Silicon Graphics in 1990 and designed the MIPS R8000 TFP microprocessor, which shipped in the SGI Power Challenge systems in 1995. He became a Director of Engineering at SGI and worked on various other projects until 1997, when he left to co-found his own startup named ArtX. ArtX pioneered the shared-memory out-of-order integrated PC graphics northbridge design, but is best known for being the designer of the Nintendo GameCube. ArtX was acquired by ATI in 2000, which has since been acquired by AMD. Peter left ArtX in 1999 and worked briefly at Toshiba America, then became a visiting industrial researcher at the University of Wisconsin at Madison in 2001. He then consulted part time at various startups, and attended the Art Academy University and the California College of the Arts in San Francisco where he learned to paint oil portraits, and a Paul Mitchell school where he learned to cut and color hair. In the late 2000s he consulted for Sun Labs, and that continued when Oracle purchased Sun, which led to discussions about the RAPID project and eventually becoming an Oracle employee.

Building Effective and Efficient Information Retrieval Systems (26 June, 2015)

Speaker: Jimmy Lin

Machine learning has become the tool of choice for tackling challenges in a variety of domains, including information retrieval. However, most approaches focus exclusively on effectiveness---that is, the quality of system output. Yet, real-world production systems need to search billions of documents in tens of milliseconds, which means that techniques also need to be efficient (i.e., fast).  In this talk, I will discuss two approaches to building more effective and efficient information retrieval systems. The first is to directly learn ranking functions that are inherently more efficient---a thread of research dubbed "learning to efficiently rank". The second is through architectural optimizations that take advantage of modern processor architectures---by paying attention to low-level details such as cache misses and branch mispredicts. The combination of both approaches, in essence, allows us to "have our cake and eat it too" in building systems that are both fast and good.

Breaching the Smart Home (26 June, 2015)

Speaker: Chris Speed (University of Edinburgh)

This talk reflects upon the work of the Centre for Design Informatics across the Internet of Things. From toilet-roll holders that operate as burglar alarms, to designing across the blockchain, the talk will use design case studies to explore both the opportunities that interoperability offers for designing new products, practices and markets, and also the dangers. In order to really explore the potential of an Internet of Things, ethical boundaries are stressed and sometimes breached. This talk will trace the line between imaginative designing with data, and the exploitation of personal identities.

Prof Chris Speed is Chair of Design Informatics at the University of Edinburgh where his research focuses upon the Network Society, Digital Art and Technology, and The Internet of Things. 

Intro to the Singapore Institute of Technology & Interactive Computing Research Initiatives at SIT (25 June, 2015)

Speaker: Jeannie Lee

Established in 2009, Singapore Institute of Technology (SIT) is Singapore's 5th and newest autonomous university on the island. We will first start with some background and information about the university, and then an overview of potential HCI-related research initiatives and collaborations in the context of Singapore healthcare, hospitality, creative and technology industries. Ideas and discussions are welcome!

A novel many-core interconnection network & A system level simulation platform for the TyTra project (24 June, 2015)

Speaker: Omair Inam

This talk will focus on the collaborative research work undertaken by the speaker with Dr Wim Vanderbauwhede and his team at Glasgow University, during his six-month stay as a visiting researcher at the School (Jan-June 2015). The work spanned two projects, both of which will be presented during the talk.

The first part of the talk will be on exploration of low cost interconnection networks for Many-Core Network-On-chip. A new interconnection topology called the Hierarchical Cross Connected Recursive network (HCCR) will be explored and a group based shortest path routing algorithm that was developed for the HCCR will be presented.


The second part will be related to development of System Level Simulation Platform for the TyTra Project. It is an on-going work on the development of a SystemC based simulation platform. The aim is to simulate a high-level model of heterogeneous targets for high-performance computing, along with a task-based abstraction of a given application, to find a suitable partitioning in an automated fashion, such that it can eventually be incorporated into a turn-key compiler. A very preliminary prototype will be presented.

Social Media Information Organization (19 June, 2015)

Speaker: David Ayman Shamma (Yahoo Labs & Flickr)

Today, beyond content and metadata, information is organized by the online social actions taken upon it. These social activities contribute to the overall conversational nature of the media that we create, store, and share. From this, there exist many opportunities to build a new class of social-visual systems to aid in the organization and retrieval processes; these opportunities rely heavily on both the tacit and explicit communicative nature of social multimedia. In this talk, I will discuss the new practice of photography and how the media we create have become conversational media objects. Further, I will present a multifaceted human-centered computing system used to surface geo-located weather photos for editorial inclusion in a mobile application. Using the Flickr photo-sharing service, we can identify explicit group behavior, implicit photo-viewing patterns, and apply modern deep-learning computer vision techniques to surface photos for curatorial editors. Further, I will outline new findings and challenges in social media organization, including geographic annotation of photographs and regions, community congregation online, and social engagement.

David Ayman Shamma is a Senior Research Scientist at Yahoo Labs and Flickr; he leads the HCI Research Group. He received his Ph.D. at Northwestern University in 2005 in Computer Science. His personal research investigates social multimedia computing and creativity. He currently serves on the steering committees for ACM Multimedia and ACM TVX. In 2013, he was co-chair of the Technical Program at ACM Multimedia. He is Arts & Digital Culture Co-Editor of SIGMM and Co-Editor of the IEEE Multimedia Special Issue on Social Multimedia and Storytelling. In 2014, he was a Visiting Senior Research Fellow at the Keio-NUS CUTE Center and in 2012 he was appointed as a Senior Member of the ACM.

GPG: Gcc, a Glasgow C compiler (17 June, 2015)

Speaker: Dr Paul Cockshott

The compiler implements a parallel extension of C using a data-parallel notation similar to the Intel Cilk compiler. It is undergoing tests at the moment but already supports auto-vectorisation and map-reduce. Within the next week or two it should have automatic multi-core parallelisation. It is implemented in Java and uses the gcc preprocessor and linker.

FATA Seminar: Strong inapproximability results for a class of optimisation problems (16 June, 2015)

Speaker: Iain McBride

The Hospitals / Residents problem with Couples (HRC) is a generalisation of the classical Hospitals / Residents problem (HR) that is important in practical applications because it models the case where couples submit joint preference lists over pairs of (typically geographically close) hospitals. It is known that an instance of HRC need not admit a stable matching. Deciding whether an instance of HRC admits a stable matching is NP-complete even under some very severe restrictions on the lengths of the participants' preference lists.

Since an instance of HRC need not admit a stable matching, it is natural to seek the 'most stable' matching possible, i.e., a matching that admits the minimum number of blocking pairs. We present a gap-introducing reduction that establishes a strong inapproximability result for the problem of finding a matching in an instance of HRC that admits the minimum number of blocking pairs. Further, we show how this result might be generalised to prove that the minimisation counterpart of a number of NP-complete decision problems based on matchings (and even more general NP-complete problems) may be shown to have the same strong inapproximability bound.
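
To make the notion of a "most stable" matching concrete, the quantity being minimised is the number of blocking pairs. Below is a hedged Python sketch for plain HR (couples omitted, so it is a simplification of the HRC setting in the talk; the function and data names are invented) that counts the blocking pairs a given matching admits:

```python
def blocking_pairs(res_pref, hosp_pref, capacity, matching):
    """Count blocking pairs in a Hospitals/Residents instance
    (couples omitted). matching maps resident -> hospital or None."""
    assigned = {h: [r for r, m in matching.items() if m == h] for h in hosp_pref}
    count = 0
    for r, prefs in res_pref.items():
        cur = matching[r]
        cur_rank = prefs.index(cur) if cur in prefs else len(prefs)
        for h in prefs[:cur_rank]:              # hospitals r strictly prefers
            rank = hosp_pref[h].index
            if len(assigned[h]) < capacity[h]:  # h has a free post
                count += 1
            elif any(rank(r) < rank(r2) for r2 in assigned[h]):
                count += 1                      # h prefers r to an assignee
    return count

# Tiny instance: one post at h1, both residents want it, h1 prefers r1.
prefs_r = {"r1": ["h1"], "r2": ["h1"]}
prefs_h = {"h1": ["r1", "r2"]}
print(blocking_pairs(prefs_r, prefs_h, {"h1": 1}, {"r1": None, "r2": "h1"}))  # 1
```

Here (r1, h1) blocks the matching because h1 is full but prefers r1 to its current assignee r2; a "most stable" matching is one minimising this count.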

Recruitment to research trials: Linking action with outcome (11 June, 2015)

Speaker: Graham Brennan (University of Glasgow)

Bio: Dr Graham Brennan is a Research Associate and Project Manager in the Institute of Health and Wellbeing with a specialisation in recruitment to behaviour change programmes at the University of Glasgow. He is interested in the impact of health behaviour change programmes on the health of the individual and society as well as the process of engagement and participation. More specifically, his work examines the process and mechanisms of engagement that affect recruitment.


Deep non-parametric learning with Gaussian processes (10 June, 2015)

Speaker: Andreas Damianou

This talk will discuss deep Gaussian process models, a recent approach to combining deep probabilistic structures with Bayesian nonparametrics. The obtained deep belief networks are constructed using continuous variables connected with Gaussian process mappings; therefore, the methodology used for training and inference deviates from traditional deep learning paradigms. The first part of the talk will thus outline the associated computational tools, revolving around variational inference. In the second part, we will discuss models obtained as special cases of the deep Gaussian process, namely dynamical / multi-view / dimensionality reduction models and nonparametric autoencoders. The above concepts and algorithms will be demonstrated with examples from computer vision (e.g. high-dimensional video, images) and robotics (motion capture data, humanoid robotics).

FeedFinder: A Location-Mapping Mobile Application for Breastfeeding Women (04 June, 2015)

Speaker: Madeline Balaam (University of Newcastle)

Breastfeeding is positively encouraged across many countries as a public health endeavour. The World Health Organisation recommends breastfeeding exclusively for the first six months of an infant’s life. However, women can struggle to breastfeed, and to persist with breastfeeding, for a number of reasons, from technique to social acceptance. This paper reports on four phases of a design and research project, from sensitising user-engagement and user-centred design, to the development and in-the-wild deployment of a mobile phone application called FeedFinder. FeedFinder has been developed with breastfeeding women to support them in finding, reviewing and sharing public breastfeeding places with other breastfeeding women. We discuss how mobile technologies can be designed to support public health endeavours, and suggest that public health technologies are better aimed at communities and societies rather than individuals.

Dr Madeline Balaam is a lecturer in the School of Computing Science within Newcastle University. 


Boole's legacy for software (03 June, 2015)

Speaker: Professor Muffy Calder

Two hundred years ago this year, George Boole was born. Boole was a largely self-taught mathematical genius and in 1854, as first Professor of Mathematics at Queen’s College, Cork, he founded the discipline of algebraic logic when he published An Investigation of the Laws of Thought, on Which are Founded the Mathematical Theories of Logic and Probabilities, now known simply as The Laws of Thought. In it he proposed the first practical system of logic in algebraic form, now known as Boolean algebra, which was subsequently the foundation for the scientific and engineering work of Alan Turing, Claude Shannon, and many others, in the development of computation and the computer.

Muffy will give a short, informal and personal overview of Boole’s legacy for software, in particular the ways in which human and physical processes are systematised and implemented through software systems. But do these systems behave as we expect, do they behave as we want them to? Can logic help us answer the questions? The talk will explore how we use logics to reason about the software systems we have built, biological systems that have evolved, and some every day uses (and misuses).


GPG: Costing and Transforming JIT Traces for Adaptive Parallelism (01 June, 2015)

Speaker: Mr J Magnus Morton

Tracing JIT compilation generates units of compilation that are easy to analyse and are known to execute frequently. The AJITPar project aims to investigate whether the information in JIT traces can be used to make better scheduling decisions or perform code transformations to adapt the code for a specific parallel architecture. To achieve this goal, a cost model must be developed to estimate the execution time of an individual trace.

This paper presents the design and implementation of a system for extracting JIT trace information from the Pycket JIT compiler. We define four increasingly parametric cost models for Pycket traces. We test the accuracy of these cost models for predicting the cost of individual traces on a set of loop-based micro-benchmarks. We also compare the accuracy of the cost models for predicting whole program execution time over the Pycket benchmark suite. Our preliminary results show the two simplest models provide only limited accuracy. The accuracy of the more complex models depends on the choice of parameters; in ongoing work we are systematically exploring the parameter space.
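
As a toy illustration of the general idea of a parametric trace cost model (this is not Pycket's actual model; the operation categories and weights below are invented), the simplest variant weights each trace operation by a per-category cost and sums:

```python
from collections import Counter

# Hypothetical per-operation weights; a real model would fit these
# parameters against measured trace execution times.
WEIGHTS = {"guard": 2.0, "int_add": 1.0, "getfield": 3.0, "call": 10.0}

def trace_cost(trace_ops, weights=WEIGHTS):
    """Estimate a trace's cost as a weighted sum of its operation counts.
    Unknown operations default to weight 1.0."""
    counts = Counter(trace_ops)
    return sum(weights.get(op, 1.0) * n for op, n in counts.items())

# A hypothetical loop trace: two adds, a field load, and a guard.
print(trace_cost(["int_add", "int_add", "getfield", "guard"]))  # 7.0
```

Making the model "more parametric", in the spirit of the abstract, means distinguishing more operation categories (and fitting more weights), at the cost of a larger parameter space to explore.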

Graph-of-word: boosting text mining with graphs (29 May, 2015)

Speaker: Michalis Vazirgiannis
CS Seminar

The bag-of-words model has been the dominant approach in IR and text mining for many years, assuming word independence and using frequencies as the main feature for feature selection and for query-to-document similarity. Despite its long and successful usage, bag-of-words ignores words' order and distance within the document, weakening the expressive power of the distance metrics. We propose graph-of-word, an alternative approach that capitalizes on a graph representation of documents and challenges the word independence assumption by taking into account words' order and distance. We applied graph-of-word to various tasks such as ad-hoc Information Retrieval, Single-Document Keyword Extraction, Text Categorization and Sub-event Detection in Textual Streams. In all cases the graph-of-word approach, assisted by degeneracy at times, outperforms the state-of-the-art baselines.
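The core construction is simple to sketch: link each term to the terms that follow it within a small sliding window, so that word order and proximity are retained where bag-of-words would discard them. The window size and weighting below are illustrative choices, not the authors' exact parameters.

```python
from collections import defaultdict

def graph_of_word(tokens, window=3):
    """Build a directed graph: each token points to the tokens appearing
    up to (window - 1) positions after it, with edge weight equal to the
    number of such co-occurrences."""
    edges = defaultdict(int)
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            if w != tokens[j]:
                edges[(w, tokens[j])] += 1
    return dict(edges)

g = graph_of_word("information retrieval boosts information mining".split())
print(g[("information", "retrieval")])  # 1
```

Term importance can then be read off the graph (e.g. by degree or, as in the talk, degeneracy-based measures) rather than from raw frequencies.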

Performance and Scalability of Indexed Subgraph Query Processing Methods (27 May, 2015)

Speaker: Foteini Katsarou

Graph databases have great capabilities for representing complex structures such as chemical compounds and social networks. One of the problems addressed by such databases is graph containment queries: given a query graph, the graphs that contain it are retrieved from the database, a process that involves a subgraph isomorphism test. Since a direct isomorphism test against all the graphs in the database would take a significant amount of time, many indexing methods have been proposed to reduce the number of candidate graphs that have to undergo the isomorphism test. However, all the existing work currently focuses on comparisons against relatively small datasets, both in graph size and in number of graphs.

In this presentation we identify a set of key factors that influence the performance of the related methods: the number of nodes, the average density, the number of distinct labels, the number of graphs, and the query size, and we analyze the sensitivity of the various methods. The aims are (a) to derive conclusions about the algorithms' relative performance and (b) to stress-test all algorithms, deriving insights as to their scalability and highlighting how both performance and scalability depend on the above factors. Six well-established indexing techniques (Grapes, GraphGrepSX, CT-Index, gIndex, Tree+Delta, gCode), representative of the overall design space, are extensively compared against both real and synthetic datasets. We report on their indexing time and size, and on query processing performance in terms of time and false positive ratio.
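All of these indexes share a filter-then-verify pattern: a cheap feature index prunes graphs that cannot possibly contain the query, and only the surviving candidates undergo the expensive subgraph isomorphism test (the source of the false positive ratio reported above). A minimal sketch of the filtering stage, using node-label counts as the indexed feature (real systems index paths, trees or frequent subgraphs):

```python
from collections import Counter

def candidates(db, query_labels):
    """Filter step: a graph can contain the query only if it has at
    least as many nodes of every label as the query uses. Survivors
    would still need a full subgraph isomorphism test (not shown)."""
    qf = Counter(query_labels)
    return [gid for gid, labels in db.items()
            if all(Counter(labels)[lab] >= n for lab, n in qf.items())]

# Toy database: graph id -> node labels (edges omitted for brevity).
db = {"g1": ["C", "C", "O", "H"], "g2": ["C", "N"], "g3": ["C", "C", "O"]}
print(candidates(db, ["C", "O"]))  # ['g1', 'g3']: g2 lacks an O node
```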

GPG: LHC Computing Beyond the Higgs (27 May, 2015)

Speaker: Prof Dave Britton

Bio: Dave Britton is a professor of physics at the University of Glasgow and Project Leader of the GridPP project that provides Grid computing for particle physics throughout the UK. He is a member of the ATLAS collaboration, one of the experiments at the Large Hadron Collider at CERN with an interest in Higgs decaying to a pair of tau-leptons. Previously he worked on CMS, another of the LHC experiments, qualifying the crystals that make up the end-caps of the electromagnetic calorimeter. He has also worked at the Stanford Linear Accelerator (the BaBar experiment); Cornell (the CLEO experiment); and at DESY in Hamburg (the ARGUS experiment) with an emphasis on tracking detectors. Earlier work at TRIUMF in Vancouver established the most stringent limits on lepton universality through rare pion decays. He has been involved with the GridPP project since conception in 2000 and was one of the lead authors of the proposals for all three phases. Initially appointed as Project Manager, he took over as the GridPP Project leader in 2008. GridPP is a collaboration of Particle Physicists and Computing Scientists from 19 UK Universities together with the Rutherford-Appleton Laboratory and CERN, who have built a Grid for Particle Physics.

Real-Time Weather Forecasting Along 4-D Airplane Trajectories (20 May, 2015)

Speaker: Wim Vanderbauwhede

This talk is about a proposal we intend to submit to the H2020 SESAR call on Air Traffic Management. It deals with the need for accurate weather forecasting along the trajectory of a plane and corrections to that trajectory as a result of the predictions. I want to outline our proposal and discuss the various challenges we need to address.

GPG: Profiling a Parallel Domain Specific Language Using Off-the-shelf Tools (20 May, 2015)

Speaker: Mr Majed Al Saeed

Profiling tools are essential for understanding and tuning the performance of both programs and parallel language implementations.
Assessing the performance of a program in a language with high-level parallel coordination is often complicated by the layers of abstraction present in the language and its implementation. We investigate whether it is possible to profile parallel Domain Specific Languages (DSLs) using existing host language profiling tools. The key challenge is that the host language tools report the performance of the DSL runtime system (RTS) executing the application rather than the performance of the DSL application. The key questions are whether a correct, effective and efficient profiler can be constructed using host language profiling tools; whether it is possible to effectively profile the DSL implementation; and what capabilities are required of the host language profiling tools. We develop a profiler for the parallel DSL Haskell distributed parallel Haskell (HdpH) using host language profiling tools. We show that it is possible to construct a profiler (HdpHProf) to support performance analysis of both DSL applications and the DSL implementation. The implementation uses several new GHC features, including the ghc-events library and ThreadScope, and provides, for the first time, two performance analysis tools for HdpH internals: Spark Pool Contention Analysis and Registry Contention Analysis.

FATA Seminar - Notes on the Bankruptcy Game (19 May, 2015)

Speaker: Tamas Fleiner

How should the estate be divided among creditors in the case of a bankruptcy? This is an entertaining story connected with a result of Nobel laureate Aumann and Maschler, who studied a long-standing mystery about the Talmud with the help of game theory. The talk contains one or two proofs; we learn what we should say if we want to be big boys in jail, and we also hear about a legal issue in connection with levirate marriage. Prerequisites are standard order and arithmetic operations. (This is joint work with Balazs Sziklai.)

Analyzing online interaction using conversation analysis: Affordances and practices (14 May, 2015)

Speaker: Dr Joanne Meredith (University of Salford)

The aim of this paper is to show how conversation analysis – a method devised for spoken interaction – can be used to analyze online interaction. The specific focus of this presentation will be on demonstrating how the impact of the design features, or affordances, of an online medium can be analyzed using conversation analysis. I will use examples from a corpus of 75 one-to-one Facebook ‘chats’, collected using screen capture software, which I argue can provide us with additional information about participants’ real-time, lived experiences of online interaction. Through examining a number of interactional practices found in my data corpus, I will show how the analysis of real-life examples of online interaction can provide us with insights into how participants adapt their interactional practices to suit the affordances of the medium.

Jo Meredith is a Lecturer in Psychology at the University of Salford. Before joining the University of Salford, Jo was a Lecturer at the University of Manchester and completed her doctoral thesis at Loughborough University. She is interested in developing the use of conversation analysis for online interaction, as well as investigating innovative methods for collecting online data.  

Is explicit congestion notification usable with UDP? (13 May, 2015)

Speaker: Colin Perkins

This talk will present an initial measurement study to determine whether Explicit Congestion Notification (ECN) is usable with UDP flows that traverse the public Internet. This is interesting because ECN is an important part of current IETF proposals for congestion control of UDP-based interactive multimedia traffic, and because of increasing use of UDP as a substrate on which new transport protocols can be deployed. Our results show that UDP-based servers can be reached using packets with ECT(0) marks with very high probability. We compare reachability of the same set of servers using ECN with TCP, finding a smaller fraction can successfully negotiate and use ECN in that case.
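The measurement hinges on sending UDP probes carrying the ECT(0) codepoint, which lives in the two low-order bits of the IP TOS/DSCP byte. A minimal sketch of how such a probe could be marked (an illustration of the mechanism, not the study's actual tooling; the loopback target address and discard port are placeholders):

```python
import socket

ECT0 = 0x02  # ECN-Capable Transport (0): bit pattern 0b10 in the ECN field

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Set the TOS byte so outgoing datagrams carry the ECT(0) mark; an
# ECN-aware router may remark them CE (0b11) under congestion.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)
sock.sendto(b"probe", ("127.0.0.1", 9))  # hypothetical probe target
sock.close()
```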

GPG: Parallel Search, Backjumping, and Brittle Skeletons (13 May, 2015)

Speaker: Mr Ciaran McCreesh

The subgraph isomorphism problem is to find a little pattern graph inside a big target graph. Most algorithms for the problem are based upon inference and backtracking search. I'll look at one of these algorithms, and discuss how to parallelise it. The main complication is backjumping: when a conflict is reached, this algorithm can sometimes prove that it is safe to backtrack several steps immediately. I'll discuss how we can refactor backjumping as a special kind of fold, and then explain why the standard fold skeleton is no good: to avoid an absolute slowdown, we need both controlled work-stealing and work cancellation, neither of which has been given the attention it deserves in the literature.
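To illustrate the "special kind of fold" in the simplest terms: a plain fold visits every element, but a backjumping search must be able to cut the fold short once a conflict proves the remaining siblings cannot help. The sketch below (my own toy illustration, not the talk's skeleton) shows this sequential core; parallelising it is precisely what demands work cancellation, since sibling tasks already running must be abandoned.

```python
def fold_with_exit(f, acc, items):
    """Like a left fold, but f returns (new_acc, stop); when stop is
    True the remaining items are discarded -- the sequential analogue
    of backjumping past untried siblings."""
    for x in items:
        acc, stop = f(acc, x)
        if stop:
            break
    return acc

# Example: scan candidate values, abandoning the rest of the list as
# soon as one satisfies the predicate.
found = fold_with_exit(
    lambda acc, x: (x, True) if x % 7 == 0 else (acc, False),
    None,
    [3, 5, 14, 21, 9],
)
print(found)  # 14
```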

FATA Seminar - A Tale of Two Workshops (12 May, 2015)

Speaker: Baharak Rastegari

David Manlove and I, with the help of a handful of volunteers, organized two international events recently: (i) the COST Action IC1205 meeting on Matching and Fair Division, and (ii) the 3rd International Workshop on Matching Under Preferences (MATCH-UP 2015). In this talk I'll tell you about the almost year-long planning that went into these events, and all the fun and the troubles!

GPG: Enabling design-space exploration for robot SLAM - for accuracy, performance and energy (07 May, 2015)

Speaker: Prof Paul Kelly

SLAM (simultaneous localisation and mapping) is a key platform for understanding 3D environments for a huge range of applications, spanning robotics, augmented reality and beyond. Building a really usable map of the environment requires "dense" methods currently feasible in real time only on powerful hardware. This talk will introduce SLAMBench, a publicly-available software framework which represents a starting point for quantitative, comparable and validatable experimental research to investigate trade-offs in performance, accuracy and energy consumption of a dense RGB-D SLAM system. SLAMBench provides a KinectFusion implementation in C++, OpenMP, OpenCL and CUDA, and harnesses the ICL-NUIM dataset of synthetic RGB-D sequences with trajectory and scene ground truth for reliable accuracy comparison of different implementations and algorithms. We present an analysis and breakdown of the constituent algorithmic elements of KinectFusion, and experimentally investigate their execution time on a variety of multicore and GPU-accelerated platforms. For a popular embedded platform, we also present an analysis of energy efficiency for different configuration alternatives. This work is part of a larger research agenda aiming to push the limits of compiler technology up the "food chain", to explore higher-level algorithmic aspects of the design space and low-level implementation choices together, and I will present some preliminary results showing some of the potential of this idea.

Biography: Paul Kelly is Professor of Software Technology at Imperial College London, where he heads the Software Performance Optimisation research group and serves as co-Director of Imperial's Centre for Computational Methods in Science and Engineering.

GPG: Type-driven Verification of Communicating Systems (06 May, 2015)

Speaker: Dr Edwin Brady

Idris is a general-purpose programming language with an expressive type system which allows a programmer to state properties of a program precisely in its type. Type checking is then equivalent to formally and mechanically checking a program's correctness. Introductory examples of programs verified in this way typically involve length-preserving operations on lists, or ordering invariants in sorting.

Realistically, though, programming is not so simple: programs interact with users, communicate over networks, manipulate state, deal with erroneous input, and so on. In this talk I will give an introduction to programming in Idris, with demonstrations, and show how its advanced type system allows us to express such interactions precisely. I will show how it supports verification of stateful and communicating systems, in particular giving an example of how to verify properties of concurrent communicating systems.

Intermittent Control in Man and Machine (30 April, 2015)

Speaker: Henrik Gollee

An intermittent controller generates a sequence of (continuous-time) parametrised trajectories whose parameters are adjusted intermittently, based on continuous observation. This concept is related to "ballistic" control and differs from i) discrete-time control in that the control is not constant between samples, and ii) continuous-time control in that the trajectories are reset intermittently.  The Intermittent Control paradigm evolved separately in the physiological and engineering literature. The talk will give details on the experimental verification of intermittency in biological systems and its applications in engineering.

Advantages of intermittent control compared to the continuous paradigm in the context of adaptation and learning will be discussed.

Trainable Interaction Models for Embodied Conversational Agents (30 April, 2015)

Speaker: Mary Ellen Foster

Human communication is inherently multimodal: when we communicate with one another, we use a wide variety of channels, including speech, facial expressions, body postures, and gestures. An embodied conversational agent (ECA) is an interactive character -- virtual or physically embodied -- with a human-like appearance, which uses its face and body to communicate in a natural way. Giving such an agent the ability to understand and produce natural, multimodal communicative behaviour will allow humans to interact with such agents as naturally and freely as they interact with one another, enabling the agents to be used in applications as diverse as service robots, manufacturing, personal companions, automated customer support, and therapy.

To develop an agent capable of such natural, multimodal communication, we must first record and analyse how humans communicate with one another. Based on that analysis, we then develop models of human multimodal interaction and integrate those models into the reasoning process of an ECA. Finally, the models are tested and validated through human-agent interactions in a range of contexts.

In this talk, I will give three examples where the above steps have been followed to create interaction models for ECAs. First, I will describe how human-like referring expressions improve user satisfaction with a collaborative robot; then I show how data-driven generation of facial displays affects interactions with an animated virtual agent; finally, I describe how trained classifiers can be used to estimate engagement for customers of a robot bartender.

Bio: Mary Ellen Foster will join the GIST group as a Lecturer in July 2015. Her main research interest is embodied communication: understanding human face-to-face conversation by implementing and evaluating embodied conversational agents (such as animated virtual characters and humanoid robots) that are able to engage in natural, face-to-face conversation with human users. She is currently a Research Fellow in the Interaction Lab at the School of Mathematical and Computer Sciences at Heriot-Watt University in Edinburgh, and has previously worked in the Robotics and Embedded Systems Group at the Technical University of Munich and in the School of Informatics at the University of Edinburgh.  She received her Ph.D. in Informatics from the University of Edinburgh in 2007.

Safe, Correct, and Fast Low-Level Networking (29 April, 2015)

Speaker: Robert Clipsham

In current implementations of low-level networking stacks, performance is favoured over safety, security, and correctness of applications. Despite modern languages and abstractions being available for higher levels of networking stacks, these have so far been dismissed due to stringent performance requirements. This paper proposes using the Rust programming language to introduce the same level of safety and abstraction which is expected at higher levels of the stack, without sacrificing the expected performance.

GPG: Haskell MOOC - Crowdsourcing the Curriculum (29 April, 2015)

Speaker: Dr Jeremy Singer

The University is expanding its portfolio of Massive Open Online Courses (MOOCs), hosted on the FutureLearn platform. Wim and I are designing a course entitled 'Introduction to Functional Programming in Haskell'. It's going to be available to distance learners. It will also form the first part of our Functional Programming 4 course.

In this GPG meeting, we'll give some background about MOOCs. We'd be interested to hear your experiences of online learning. Then we will discuss the concepts we hope to cover in the new Haskell MOOC, as well as our strategy for course scalability. Please come along and help to shape this course!

Designing for the Don't Cares. A story about a socio-technical system (17 April, 2015)

Speaker: Ian Sommerville
CS Seminar on Software Engineering

In this talk, I will discuss some of the issues that arose when designing a national socio-technical educational system with a potential user base of over a million users. I will discuss some of the software engineering strategies that were used and why these failed as well as how the use of 'user stories' proved to be a successful approach for understanding the ways in which the system may be used. I will generalise from this experience to speculate on broader issues of government IT system failure.

To Beep or Not to Beep? Comparing Abstract versus Language-Based Multimodal Driver Displays (02 April, 2015)

Speaker: Ioannis Politis

Abstract: Multimodal displays are increasingly being utilized as driver warnings. Abstract warnings, without any semantic association to the signified event, and language-based warnings are examples of such displays. This paper presents a first comparison between these two types, across all combinations of audio, visual and tactile modalities. Speech, text and Speech Tactons (a novel form of tactile warnings synchronous to speech) were compared to abstract pulses in two experiments. Results showed that recognition times of warning urgency during a non-critical driving situation were shorter for abstract warnings, highly urgent warnings and warnings including visual feedback. Response times during a critical situation were shorter for warnings including audio. We therefore suggest abstract visual feedback when informing drivers during a non-critical situation and audio in a highly critical one. Language-based warnings during a critical situation performed equally well as abstract ones, so they are suggested as less annoying vehicle alerts.

FATA Seminar - Symmetry in Constraint Programming (31 March, 2015)

Speaker: Karen Petrie

Symmetry in constraints has always been important but in recent years has become a major research area in its own right. A key problem in constraint programming has long been recognised: search can revisit equivalent states over and over again. In principle this problem has been solved, with a number of different techniques. Research remains very active for two reasons. First, there are many difficulties in the practical application of the techniques that are known for symmetry exclusion, and overcoming these remain important research problems. Second, the successes achieved in the area so far have encouraged researchers to find new ways to exploit symmetry.

This talk will give a whistle-stop tour of symmetry elimination in constraint programming, before looking at the open problems that are ripe for research.

ENDS Seminar: Towards Automated Design Space Exploration and Code Generation for FPGAs using Type Transformations (30 March, 2015)

Speaker: Waqar Nabi and Wim Vanderbauwhede

The increasing use of diverse architectures resulting in heterogeneous platforms for High-Performance Computing (HPC) presents a significant programming challenge. The resultant design productivity gap is a bottleneck to achieving the maximum possible performance. Our current work aims to address this design productivity gap specifically for FPGAs, where it is a major obstacle to their wider adoption in HPC.


We will present the TyTra design flow, which is being developed in the context of our larger project that aims to create a turn-key compiler for heterogeneous target platforms.


We will discuss an evolving custom high-level language, the TyTra language, that facilitates generation of different correct-by-construction program variants through type-transformations.


We will then talk about the custom intermediate language targeted by the high-level TyTra language, the TyTra-IR, which is similar to LLVM IR but extended with explicit parallelization semantics that enable it to describe the different configurations associated with each program variant. It also allows each variant to be directly associated with an accurate estimate of cost and performance. We will briefly discuss this cost model and our ongoing work on an estimator and code generator for FPGAs.

ENDS Seminar: ARRCS Presentations (25 March, 2015)

Speaker: Kristian Hentschel, Shinyi Breslin, Robert Clipsham

Three students from the ARRCS course will present to the ENDS group.

1. Data Centre Design and Analysis for Energy Efficiency

Presenter: Kristian Hentschel

Summary: As evidenced by new, carefully designed approaches in recently built data centres across the world, data centres still have high potential for optimization of their energy usage. Power, both for running the actual machines and for the cooling system, is a large part of the total cost of ownership. Low-power systems can reduce the cost, but may not provide sufficient performance; more traditional systems do not allow efficient scaling to changing workloads. The proposed research aims to evaluate how careful analysis, planning and design of the hardware and software components of such a distributed system can alleviate these issues. In the presentation, I will describe and evaluate past and present approaches to this problem, and outline directions for the proposed research.

2. Implementing a Context-Aware Mobile Operating System

Presenter: Shinyi Breslin

Summary: With the ubiquity of modern-day smartphones and the various sensors they possess, context-aware computing becomes plausible. Context-aware computing considers the current context, obtained through analysis of sensor data, to make decisions. Chu et al. [1] argue for, and propose a design for, context generation governed by the mobile operating system, where several areas such as scheduling, energy management, and I/O devices can benefit. While a number of context-aware mobile systems have been developed, none has yet fully explored and evaluated the feasibility of a context-aware mobile OS.

[1] Chu, David, et al. "Mobile apps: it’s time to move up to condos." Proceedings of the 13th USENIX conference on Hot topics in operating systems. USENIX Association, 2011.

3. A Modern Approach to Systems Programming

Presenter: Robert Clipsham

Summary: Modern systems code is primarily written using C and C++, due to the need for high, predictable performance, and direct control of the hardware. This, unfortunately, leaves critical code vulnerable to a range of preventable bugs and security vulnerabilities, ranging from buffer overflows and memory corruption, to race conditions and protocol violations. By combining recent advances in programming language research, and implementing new compiler optimisations, I assert that it is possible to design a language which can provide guaranteed correctness of systems code, without sacrificing performance.

GPG: ARRCS-Systems Student Presentations (25 March, 2015)

Speaker: Craig Mclaughlin, Dimitar Petrov, Gordon Reid

1. Static Verification for Modern Software Systems

Presenter: Craig Mclaughlin

Summary: The current proposal addresses the task of verifying properties of software systems which may be concurrent, distributed and/or operating on heterogeneous architectures. Several projects have extended techniques for sequential programs to the concurrent setting to handle concurrency within GPGPU programming (principally OpenCL/CUDA). Others have taken ideas from type theory to improve static guarantees about communication systems (such as MPI). Recent work has explored combining separation logic and session types to support more powerful reasoning for distributed programs. The aim of the current proposal is to apply this hybrid logic in the context of concurrent and distributed programming models in an effort to enhance the properties one can verify statically about these systems.

2. MapReduce with CUDA to achieve inter- and intra-node parallelism

Presenter: Dimitar Petrov

Summary: As the scale of high-performance computing grows, new needs arise for increased parallelism, reduced complexity and improved programmability. The work proposed here combines MapReduce with CUDA to achieve both inter- and intra-node parallelism. Map and reduce tasks are combined into chunks and executed on the GPU with the aim of increased performance. Although similar solutions have been proposed, all of these rely on the developer's understanding of GPU programming, either CUDA or OpenCL. We propose using Rootbeer to alleviate the issue and make the system accessible to a wider developer audience. Rootbeer allows programmers to write code in Java and have the (de)serialization, kernel code generation and kernel launch done automatically. Rootbeer also provides additional abstractions so that it can run complex Java objects on a GPU.

3. Generating optimal performance portable OpenCL code from existing OpenCL code

Presenter: Gordon Reid

Summary: OpenCL was developed as a platform-independent way of writing code for execution on a number of different device types. OpenCL guarantees functional portability, but not performance portability. There is a body of existing work on obtaining performance portability using a number of different methods. Some authors have opted for acceptable performance portability between some CPUs and GPUs, making minimal changes to OpenCL code, with some of the changes written manually by the developer and selected using ifdefs or different source files. Other work takes a more dramatic approach, either requiring the program to be rewritten in another high-level language which is then compiled down to OpenCL code, or attempting performance portability via a completely new compiler implementation. My plan is in two parts. The first involves using existing OpenCL code, as in some previous work, but going further and automatically tuning and changing more aspects of the device and kernel code for many device types, including the Intel Xeon Phi. The second involves analysis and optimisations at the SPIR level, to explore finer-grained optimisations made possible by this new intermediate format.

Situated Social Media Use: A Methodological Approach to Locating Social Media Practices and Trajectories (24 March, 2015)

Speaker: Alexandra Weilenmann (University of Gothenburg)

In this talk, I will present a few examples of methodological explorations of social media activities, trying to capture and understand them as located, situated practices. This methodological endeavor spans from analyzing patterns in big data feeds (here, Instagram) to small-scale video-based ethnographic studies of user activities. A situated social media perspective involves examining how production and consumption of social media are intertwined. Drawing upon our studies of social media use in cultural institutions, we show how visitors orient to their social media presence while attending to physical space during the visit, and how editing and sharing processes are shaped by the trajectory through the space. I will discuss the application and relevance of this approach for understanding social media and social photography in situ. I am happy to take comments and feedback on this approach, as we are currently working to develop it.

Alexandra Weilenmann holds a PhD in informatics and currently works at the Department of Applied IT, University of Gothenburg, Sweden. She has over 15 years' experience researching the use of mobile technologies, with a particular focus on adapting traditional ethnographic and sociological methods to enable the study of new practices. Previous studies include mobile technology use among hunters, journalists, airport personnel, professional drivers, museum visitors, teenagers and the elderly. Weilenmann has experience working in projects in close collaboration with stakeholders, both in IT development projects (e.g. Ricoh Japan) and with Swedish special interest organizations (e.g. the Swedish Institute of Assistive Technology). She has served on several boards dealing with the integration of IT in society, for example the Swedish Government's Use Forum and the Swedish Governmental Agency for Innovation Systems (Vinnova), and as an expert for the telephone company DORO.

FATA Seminar - Progress as Compositional Lock-Freedom (24 March, 2015)

Speaker: Ornela Dardha

A session-based process satisfies the progress property if its sessions never get stuck when it is executed in an adequate context. Previous work studied how to define progress by introducing the notion of catalysers, execution contexts generated from the type of a process. In this paper, we refine such a definition to capture a more intuitive notion of context adequacy for checking progress. Interestingly, our new catalysers lead to a novel characterisation of progress in terms of the standard notion of lock-freedom. Guided by this discovery, we also develop a conservative extension of catalysers that does not depend on types, generalising the notion of progress to untyped session-based processes. We combine our results with existing techniques for lock-freedom, obtaining a new methodology for proving progress. Our methodology captures new processes with respect to previous progress analyses based on session types.

Mobile interactions from the wild (19 March, 2015)

Speaker: Kyle Montague (Dundee)

Laboratory-based evaluations allow researchers to control for external factors that can influence participant interaction performance. Typically, these studies tailor situations to remove distraction and interruption, thus ensuring users' attention on the task and relative precision in interaction accuracy. While highly controlled laboratory experiments provide clean measurements with minimal errors, interaction behaviors captured within natural settings differ from those captured within the laboratory. Additionally, laboratory-based evaluations impose time restrictions on user studies. Characteristically lasting no more than an hour at a time, they restrict the potential for capturing the performance changes that naturally occur throughout daily usage as a result of fatigue or situational constraints. These changes are particularly interesting when designing for mobile interactions, where environmental factors can pose significant constraints and complications on the user's interaction abilities.

This talk will discuss recent work exploring mobile touchscreen interactions from the wild involving participants with motor and visual impairments, sharing the successes and pitfalls of these approaches and describing the creation of a new data collection framework to support future mobile interaction studies in the wild.

Get A Grip: Predicting User Identity From Back-of-Device Sensing (19 March, 2015)

Speaker: Mohammad Faizuddin Md Noor

We demonstrate that users can be identified from back-of-device handgrip changes during interaction with a mobile phone, using simple, low-resolution capacitive touch sensors placed around a standard device. As a baseline, we replicated the front-of-screen experiments of Touchalytics and compared them with our results. We show that classifiers trained on back-of-device data can match or exceed the performance of classifiers trained using the Touchalytics approach. Our technique achieved a mean AUC, false accept rate and false reject rate of 0.9481, 3.52% and 20.66% for a vertical scrolling reading task, and 0.9974, 0.85% and 2.62% for a horizontal swiping game task. These results suggest that handgrip provides substantial evidence of user identity, and can be a valuable component of continuous authentication systems.

Towards Effective Non-Invasive Brain-Computer Interfaces Dedicated to Ambulatory Applications (19 March, 2015)

Speaker: Matthieu Duvinage

Disabilities affecting mobility, in particular, often lead to exacerbated isolation and thus fewer communication opportunities, resulting in a limited participation in social life. Additionally, as costs for the health-care system can be huge, rehabilitation-related devices and lower-limb prostheses (or orthoses) have been intensively studied so far. However, although many devices are now available, they rarely integrate the direct will of the patient. Indeed, they basically use motion sensors or the residual muscle activities to track the next move.

Therefore, to integrate more direct control from the patient, Brain-Computer Interfaces (BCIs) are here proposed and studied under ambulatory conditions. Basically, a BCI allows a user to control an electric device without activating any muscles. In this work, the conversion of brain signals into a prosthesis kinematic control is studied following two approaches. First, the subject transmits his desired walking speed to the BCI. This high-level command is then converted into a kinematics signal by a Central Pattern Generator (CPG)-based gait model, which is able to produce automatic gait patterns. Our work thus focuses on how BCIs behave in ambulatory conditions. The second strategy is based on the assumption that the brain continuously controls the lower limb. Thus, a direct interpretation, i.e. decoding, of the brain signals is performed. Here, our work consists of determining which parts of the brain signals can be used.

[ENDS Seminar] Ringneck: Mutation Testing for Component Dependencies (18 March, 2015)

Speaker: Tim Storer

On a typical computing platform heterogeneous software components exist within a complex eco-system of dependencies, in which components and their dependencies must be installed alongside other, potentially conflicting systems.  Dependency management systems use specifications of a component's dependencies (typically including an identifier and a valid release version range) to mediate this complexity by searching for combinations of compatible component versions.  The research in this talk is based on the hypothesis that dependency descriptions may be either over-constrained (creating unnecessary conflicts between components and making the overall platform more brittle than it needs to be) or under-constrained (supposedly valid configurations result in system failures).  Mutation testing is used to validate the version ranges specified in dependency descriptors and to search for counter-examples of over- and under-specification.  The talk presents early results from applying the technique to the Maven dependency management system.
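The hypothesis can be illustrated with a toy mutation check (a hypothetical sketch, not the Ringneck tool; the repository contents and test oracle are invented): probe a declared version range against the releases that actually pass the component's tests.

```python
# Hypothetical sketch of validating a dependency's version range.
# RELEASES and passes_tests are invented stand-ins, not real Maven data.

RELEASES = {"libfoo": [(1, 0), (1, 1), (1, 2), (2, 0)]}

def resolve(dep, lo, hi):
    """Versions of `dep` within the half-open range [lo, hi)."""
    return [v for v in RELEASES[dep] if lo <= v < hi]

def passes_tests(version):
    # Stand-in oracle: pretend the component works with the whole 1.x line.
    return version < (2, 0)

def check_range(dep, lo, hi):
    """Over-constrained: a version outside the range would have worked.
    Under-constrained: a version inside the range fails."""
    inside = resolve(dep, lo, hi)
    outside = [v for v in RELEASES[dep] if v not in inside]
    over = [v for v in outside if passes_tests(v)]
    under = [v for v in inside if not passes_tests(v)]
    return over, under

# Declared range [1.0, 1.2) is over-constrained: 1.2 works but is excluded.
over, under = check_range("libfoo", (1, 0), (1, 2))
```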

GPG: Computational Modelling of Materials and Structures (18 March, 2015)

Speaker: Prof Chris Pearce & Dr Lukasz Kaczmarczyk

Our research is focussed on the computational modelling of materials and structures, with particular focus on multi-scale mechanics and multi-physics problems, applied to problems ranging from safety critical structures to biomechanics, supported by EPSRC, EU, TSB and industry. The Finite Element Method (FEM) is an extremely powerful numerical technique for finding approximate solutions to a broad range of science and engineering processes that are governed by Partial Differential Equations (PDEs). It has revolutionised simulation and predictive modelling in science and engineering and has had a pervasive impact on industrial engineering analysis.

Despite the undoubted success of FEM, there is a continuous drive to push finite element technology beyond current capabilities, to solve increasingly complex real-world problems as efficiently as possible. Established commercial FE software can be relatively slow to adopt new technologies due to the dominance of out-of-date software architecture. Perhaps the greatest part of FE code development is expended in dealing with technical problems related to software implementation, rather than resolving the underlying physics. The biggest challenge is to create a computationally tractable problem, which can be solved efficiently while simultaneously delivering an accurate and robust solution by controlling the numerical error.

The presentation will first set out the context of our research, briefly describing some examples from our projects, before looking in more detail at our software development platform (MoFEM). This is a flexible and adaptable framework that tackles the conflicting requirements of accuracy and computational efficiency. The catalyst for the creation of MoFEM was the need for a flexible and numerically accurate modelling environment for multi-physics problems, driven by the needs of our industrial partners.

Gait analysis from a single ear-worn sensor (17 March, 2015)

Speaker: Delaram Jarchi

Objective assessment of detailed gait patterns is important for clinical applications. One common approach to clinical gait analysis is to use multiple optical or inertial sensors affixed to the patient's body for detailed bio-motion and gait analysis. The complexity of sensor placement and issues related to consistent sensor placement have limited these methods to dedicated laboratory settings, requiring the support of a highly trained technical team. The use of a single sensor for gait assessment has many advantages, particularly in terms of patient compliance and the possibility of remote monitoring of patients in the home environment. In this talk we look into the assessment of a single ear-worn sensor (e-AR sensor) for gait analysis, developing signal processing techniques and using a number of reference platforms inside and outside the gait laboratory. Results are provided for two clinical applications: post-surgical follow-up and rehabilitation of orthopaedic patients, and investigating gait changes in Parkinson's disease (PD) patients.

HCI in cars: Designing and evaluating user-experiences for vehicles (12 March, 2015)

Speaker: Gary Burnett (University of Nottingham)

Driving is an everyday task which is fundamentally changing, largely as a result of the rapid increase in the number of computing and communications-based technologies within and connecting vehicles. Whilst there is considerable potential for different systems (e.g. for safety, efficiency, comfort, productivity, entertainment, etc.), one must always adopt a human-centred perspective.  This talk will raise the key HCI issues involved in the driving context and their effects on the design of the user interface, initially aiming to minimise the likelihood of distraction. In addition, the advantages and disadvantages of different evaluation methods commonly employed in the area will be discussed. In the final part of the talk, issues will be raised for future vehicles, particularly considering the impact of increasing amounts of automation functionality, which fundamentally changes the role of the human “driver”, potentially from vehicle controller to periodic monitor of system status. Such a paradigm shift raises profound issues concerning the design of the vehicle HMI, which must allow a user to understand the “system” and to seamlessly forgo and regain control in an intuitive manner.

Gary Burnett is Associate Professor in Human Factors in the Faculty of Engineering at the University of Nottingham. 

ENDS Seminar: Real-Time Multimedia Applications in an Ossified Internet (11 March, 2015)

Speaker: Stephen McQuistin

Middleboxes have ossified the transport layer of the Internet, limiting real-time networked multimedia applications to TCP or UDP, despite the standardisation of new transport protocols that better support their requirements. To improve transport for these applications, we must reinterpret and extend existing protocols.
In this talk, I will present unordered, time-lined TCP, a TCP variant designed to support real-time multimedia traffic while being widely deployable.

GPG: Many-Core Compiler Fuzzing (11 March, 2015)

Speaker: Dr Alastair Donaldson

Parallel programming models for many-core systems, such as the OpenCL programming model, claim to allow the construction of portable many-core software.  Though performance portability may be an elusive goal, functional portability should not be.  Functional portability depends on reliable compilers for many-core programming languages.  This presents a real challenge for industry because many-core devices, such as GPUs, are evolving rapidly, as are the associated many-core languages (e.g., a revision of the OpenCL specification appears approximately once every 18 months). Compiler-writers are thus continually playing catch-up.

I will present recent ideas on how to apply random testing (fuzzing) to many-core compilers, in the context of OpenCL.  The aim of this work is to help vendors to improve the quality of their compilers by discovering bugs quickly.  Following up on successful prior work on sequential compiler testing, we have designed two techniques for generating random OpenCL kernels for purposes of compiler testing. The first approach builds on the Csmith project from the University of Utah (PLDI'11).
Here, we generate random OpenCL kernels that are guaranteed to be free from undefined behaviour and to behave deterministically.  For such a kernel, differing results between two OpenCL implementations indicates that one of the implementations has compiled the kernel erroneously.  
The second approach builds on the "equivalence modulo inputs" idea from researchers at UC Davis (PLDI'14).  Here we start with an OpenCL kernel and generate mutations from the kernel such that, for a given input, each mutant is guaranteed by construction to compute the same result as the original kernel.  In this case, observable differences between mutants for the given input indicate compiler bugs.
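The second approach can be sketched in miniature (a Python analogue for illustration only; real EMI testing mutates dead code in OpenCL kernels and compares results across vendor compilers):

```python
# Illustrative Python analogue of "equivalence modulo inputs" testing:
# generate mutants guaranteed to compute the same result for a fixed input,
# then flag any observable difference as a (simulated) compiler bug.

def kernel(x):
    return x * x + 1

def mutant(x):
    # Inserted code that is dead for the chosen input x = 3, so the mutant
    # is equivalent to `kernel` modulo that input.
    if x > 100:
        return -1
    return x * x + 1

def differential_test(fns, x):
    """Run all variants on the same input; differing outputs indicate a bug
    in whichever toolchain produced the divergent variant."""
    results = {f.__name__: f(x) for f in fns}
    return len(set(results.values())) == 1, results

agree, results = differential_test([kernel, mutant], 3)
```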

I will report on a large testing campaign with respect to 19 OpenCL (device, compiler) configurations.  We found bugs in every configuration that we tested, including in compilers from AMD, Nvidia, Intel and Altera.  Many of the bugs we reported have now been fixed by the associated vendors.  In the talk I will show some examples of the bugs the technique has uncovered.

This is joint work with Christopher Lidbury, Andrei Lascu and Nathan Chong, and is due to appear at PLDI'15.

Bio: Alastair Donaldson is a Senior Lecturer in the Department of Computing, Imperial College London, where he leads the Multicore Programming Group and is Coordinator of the FP7 project CARP: Correct and Efficient Accelerator Programming.  He has published more than 50 peer-reviewed papers in formal verification and multicore programming, and leads the GPUVerify project on automatic verification of GPU kernels, which is a collaboration with Microsoft Research.  Before joining Imperial, Alastair was a Visiting Researcher at Microsoft Research Redmond, a Research Fellow at the University of Oxford and a Research Engineer at Codeplay Software Ltd.  He holds a PhD from the University of Glasgow.

FATA Seminar - Verification and Control of Partially Observable Probabilistic Real-Time Systems (10 March, 2015)

Speaker: Gethin Norman

In this talk I will outline automated techniques for the verification and control of probabilistic real-time systems that are only partially observable. To formally model such systems, we define an extension of probabilistic timed automata in which local states are partially visible to an observer or controller. Quantitative properties of interest relate to the probability of an event’s occurrence or the expected value of some reward measure. I will propose techniques to either verify that such a property holds or to synthesise a controller for the model which makes it true. The approach is based on an integer discretisation of the model’s dense-time behaviour and a grid-based abstraction of the uncountable belief space induced by partial observability. The latter is necessarily approximate, since the underlying problem is undecidable; however, both lower and upper bounds on numerical results can be generated.

Generating Implications for Design (05 March, 2015)

Speaker: Corina Sas (Lancaster University)

A central tenet of HCI is that technology should be user-centric, with designs being based around social science findings about users. Nevertheless, a key challenge in interaction design is translating empirical findings into actionable ideas that inform design. Despite various design methods aiming to bridge this gap, such implications for design are still seen as problematic. However, there has been little exploration into what practitioners understand by implications for design, the functions of such implications, and the principles behind their creation. We report on interviews with twelve expert HCI design researchers probing: the roles and types of implications, their intended beneficiaries, and the process of generating and evaluating them. We synthesize different types of implications into a framework to guide the generation of implications. Our findings identify a broader range of implications than those described in ethnographic studies, capturing technologically implementable knowledge that generalizes to different settings. We conclude with suggestions about how we might reliably generate more actionable implications.

Dr. Sas is a Senior Lecturer in HCI, School of Computing and Communications, Lancaster University. Her research interests include human-computer interaction, interaction design, user experience, designing tools and interactive systems to support high level skill acquisition and training such as creative and reflective thinking in design, autobiographical reasoning, emotional processing and spatial cognition. Her work explores and integrates wearable bio sensors, lifelogging and memory technologies, and virtual reality.

Imaging without cameras (05 March, 2015)

Speaker: Matthew Edgar

Conventional cameras rely upon a pixelated sensor to provide spatial resolution. An alternative approach replaces the sensor with a pixelated transmission mask encoded with a series of binary patterns. Combining knowledge of the series of patterns and the associated filtered intensities, measured by single-pixel detectors, allows an image to be deduced through data inversion. At Glasgow we have been extending the concept of a 'single-pixel camera' to provide continuous real-time video in excess of 10 Hz, at non-visible wavelengths, using efficient computer algorithms. We have so far demonstrated some applications for our camera such as imaging through smoke, through tinted screens, and detecting gas leaks, whilst performing sub-Nyquist sampling. We are currently investigating the most effective image processing strategies and basis scanning procedures for increasing the image resolution and frame rates for single-pixel video systems.
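The data-inversion step can be illustrated numerically. In this minimal sketch (an invented 4-pixel image and ±1 Hadamard patterns, rather than the binary masks and compressive-sensing algorithms a real system would use), each "measurement" is a single detector reading per displayed pattern, and the patterns' orthogonality makes inversion a transpose and a scale:

```python
# Minimal single-pixel imaging sketch with made-up numbers: measure a
# 4-pixel "image" through ±1 Hadamard patterns, then invert.

# Unknown 4-pixel image (what the camera would recover).
image = [3.0, 1.0, 4.0, 2.0]

# 4x4 Hadamard patterns: mutually orthogonal, so H^T H = 4 I.
H = [
    [1, 1, 1, 1],
    [1, -1, 1, -1],
    [1, 1, -1, -1],
    [1, -1, -1, 1],
]

# Single-pixel measurements: one total filtered intensity per pattern.
m = [sum(p * x for p, x in zip(row, image)) for row in H]

# Reconstruction by data inversion: x = H^T m / 4.
recovered = [sum(H[i][j] * m[i] for i in range(4)) / 4 for j in range(4)]
```

With fewer patterns than pixels the inversion becomes under-determined, which is where the sub-Nyquist (compressive) reconstruction mentioned above comes in.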

GPG: Semi-Automatic Refactoring for (Heterogeneous) Parallel Programs (04 March, 2015)

Speaker: Dr Chris Brown

Modern multicore systems offer huge computing potential. Exploiting large parallel systems is still a very challenging task, however, especially as many software developers still use overly-sequential programming models. In this talk, I will present a radical and novel approach to introducing and tuning parallelism for heterogeneous shared-memory systems (comprising a mixture of CPUs and GPUs), that combines algorithmic skeletons, machine-learning, and refactoring tool support. Specifically, I will show how to use skeletons to model the parallelism, machine learning to predict the optimal configuration and mapping and refactoring to introduce the parallelism into the application. Finally, I will demonstrate our tools on a number of applications, showing that we can easily obtain comparable results to hand-tuned optimised versions.

FATA Seminar - Scheduling sailing match races (03 March, 2015)

Speaker: Patrick Prosser

Match Racing is a form of sailing in which skippers compete one-vs-one in a round-robin; the early stages of the America's Cup are run as round-robins. Recently, we have been researching how to improve the schedules and make racing fairer. In this talk I will describe the problem and present the 13 rules in the ISAF World Sailing Umpire's Manual for constructing a legal schedule.  A constraint model is then presented. We show that some of the published schedules are in fact illegal, violating ISAF rules. There are also some "missing" schedules, which we believe are provably impossible given the rules. This is a presentation of work in progress.
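For illustration only (this is the textbook circle method with one basic sanity check, not the ISAF rules or the talk's constraint model), a round-robin of one-vs-one pairings can be generated and checked like this:

```python
# Illustrative round-robin scheduling via the standard circle method,
# with a simple legality check: every pair of skippers meets exactly once.

from itertools import combinations

def round_robin(n):
    """Rounds of one-vs-one pairings; n must be even."""
    skippers = list(range(n))
    rounds = []
    for _ in range(n - 1):
        rounds.append([(skippers[i], skippers[n - 1 - i]) for i in range(n // 2)])
        # Rotate every position except the first.
        skippers = [skippers[0]] + [skippers[-1]] + skippers[1:-1]
    return rounds

def every_pair_once(rounds, n):
    seen = [frozenset(p) for rnd in rounds for p in rnd]
    all_pairs = {frozenset(c) for c in combinations(range(n), 2)}
    return len(seen) == len(set(seen)) and set(seen) == all_pairs

rounds = round_robin(4)  # 3 rounds of 2 matches each
```

The real problem is harder because the 13 ISAF rules add side constraints (boat assignments, changeovers, and so on) that a plain circle-method schedule need not satisfy, hence the constraint model.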

What I Learned at Google and eBay (02 March, 2015)

Speaker: Randy Shoup

eBay and Google operate some of the largest Internet sites on the planet. At large scale, small problems become magnified, and new sets of challenging problems arise.  This talk will share several "war stories" of problems encountered at scale at each company.  It will also offer some learnings about what has worked well -- and what has not -- in building and maintaining an innovative engineering culture, a flexible and powerful technology stack, and efficient development processes. It will conclude with some suggestions about how other organizations can apply those learnings themselves.

Bio: Randy Shoup has worked as a senior technology leader and executive in Silicon Valley at companies ranging from small startups, to mid-sized places, to eBay and Google. In his consulting practice, he applies this experience to scaling the technology infrastructures and engineering organizations of his client companies. He served as CTO of KIXEYE, a 500-person maker of real-time strategy games for web and mobile devices. Prior to KIXEYE, he was Director of Engineering in Google's cloud computing group, leading several teams building Google App Engine, the world's largest Platform as a Service. Previously, he spent 6 1/2 years as Chief Engineer and Distinguished Architect at eBay, building several generations of eBay's real-time search infrastructure. Randy is a frequent keynote speaker and consultant in areas from scalability and cloud computing, to analytics and data science, to engineering culture and DevOps. He is particularly interested in the nexus of people, culture, and technology.

Apache Cordova Tutorial (26 February, 2015)

Speaker: Mattias Rost

Mattias Rost will lead a two-hour, hands-on tutorial on Apache Cordova. Apache Cordova is a platform for building native mobile applications using HTML, CSS and JavaScript. Everyone welcome. Bring a laptop!

GPG: Towards Performance Portability for Heterogeneous Systems (a Unified View of Algorithmic Choices and Hardware Optimisations) (25 February, 2015)

Speaker: Dr Christophe Dubach

Computing systems have become increasingly complex with the emergence of heterogeneous hardware combining multicore CPUs and GPUs. These parallel systems exhibit tremendous computational power at the cost of increased programming effort. This results in a tension between achieving performance and code portability / ease of programming.

In this talk I will present a novel approach that offers high-level programming, code portability and high-performance. It is based on algorithmic pattern composition coupled with a powerful, yet simple, set of rewrite rules. This enables systematic transformation and optimization of a high-level program into a low-level hardware specific representation which leads to high performance code. I will show how a subset of the OpenCL programming model can be mapped to low-level patterns and how to automatically generate high performance OpenCL code on par with highly tuned implementations for multicore CPUs and GPUs.
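The rewrite-rule idea can be sketched in miniature (an invented example, not the talk's actual rule set or OpenCL code generator): pipelines of algorithmic patterns are data, and a rule fuses adjacent maps, removing an intermediate array while preserving semantics.

```python
# Illustrative rewrite rule over algorithmic patterns: fuse
# map(f) . map(g) into map(f . g). Pipelines are plain lists of
# ("map", function) stages; this is a sketch, not a real compiler IR.

def compose(f, g):
    return lambda x: f(g(x))

def fuse_maps(pipeline):
    """Rewrite adjacent ("map", g), ("map", f) stages into one fused map."""
    out = []
    for op, f in pipeline:
        if out and op == "map" and out[-1][0] == "map":
            out[-1] = ("map", compose(f, out[-1][1]))  # f after earlier stage
        else:
            out.append((op, f))
    return out

def run(pipeline, xs):
    """Reference interpreter: apply each stage in order."""
    for op, f in pipeline:
        assert op == "map"
        xs = [f(x) for x in xs]
    return xs

prog = [("map", lambda x: x + 1), ("map", lambda x: x * 2)]
fused = fuse_maps(prog)  # one stage, same semantics, no intermediate list
```

A rule like this is semantics-preserving by construction, which is what lets such systems explore rewrites systematically in search of high-performance code.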

A technical report describing this work is available on arXiv.

Bio: Christophe Dubach received his Ph.D. in Informatics from the University of Edinburgh in 2009 and holds an M.Sc. degree in Computer Science from EPFL (Switzerland). He is a Lecturer (Assistant Professor) in the Institute for Computing Systems Architecture at the University of Edinburgh (UK). In 2010 he spent one year as a visiting researcher at the IBM Watson Research Center (USA) working on the LiquidMetal project. His current research interests include high-level programming models for heterogeneous systems, co-design of computer architecture and optimising compiler technology, adaptive microprocessors, and the application of machine learning in these areas.

FATA Seminar - Formal analysis of Edinburgh buses using GPS data (24 February, 2015)

Speaker: Daniel Reijsbergen

We present recent work on the development of stochastic performance models of a public transportation network using real-world data. The data is provided to us by the Lothian Buses company, which operates an extensive bus network in Edinburgh. In particular, we use datasets of GPS measurements with about 30-40 seconds between subsequent observations. Some quantities of interest that can be analysed using this data are the times needed to complete specific route segments, and the 'headway': the distance (in terms of journey completion) between subsequent buses. Both can be modelled using established formalisms, namely Markov chains and time series respectively. We briefly discuss several applications, including a 'what-if' scenario involving the introduction of trams to the Edinburgh city centre, and the evaluation of the punctuality of frequent services in terms of criteria set by the Scottish government.
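The headway notion used here can be sketched as follows (route length and bus positions are invented; the real models interpolate journey completion from timed GPS fixes):

```python
# Illustrative headway computation: journey-completion fractions per bus,
# then the gap between consecutive buses. All numbers are made up.

def completion(dist_along_route, route_length):
    """Journey completion as a fraction of the route length."""
    return dist_along_route / route_length

def headways(completions):
    """Headway between each pair of consecutive buses, front to back."""
    ordered = sorted(completions, reverse=True)
    return [a - b for a, b in zip(ordered, ordered[1:])]

# Three buses on a 12 km route, at 9 km, 6 km and 1.5 km along it.
fracs = [completion(d, 12.0) for d in (9.0, 6.0, 1.5)]
gaps = headways(fracs)  # fractions of the route separating consecutive buses
```

For a frequent service, regulators care that such gaps stay roughly even over time, which is what the time-series models assess.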

Analysing UK Annual Report Narratives using Text Analysis and Natural Language Processing (23 February, 2015)

Speaker: Mahmoud El-Haj

In this presentation I will show the work we’ve done in our Corporate Financial Information Environment (CFIE) project.  The project, funded by the ESRC and ICAEW, seeks to analyse UK financial narratives, their association with financial statement information, and their informativeness for investors, using computational linguistics, heuristic Information Extraction (IE) and Natural Language Processing (NLP).  We automatically collected and analysed some 14,000 UK annual reports, covering the period 2002 to 2014, for the largest UK firms listed on the London Stock Exchange. We developed software for this purpose, which is available online for general use by academics.  The talk includes a demo of the software that we developed and used in our analysis: Wmatrix-import and Wmatrix.  Wmatrix-import is a web-based tool that automatically detects and parses the structure of UK annual reports; the tool provides sectioning, word frequency and readability metrics.  The output from Wmatrix-import serves as input for further NLP and corpus-linguistic analysis by Wmatrix, a web-based corpus annotation and retrieval tool which currently supports the analysis of small to medium-sized English corpora.
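As a flavour of the readability side of such analysis (a generic illustration using the well-known Flesch Reading Ease formula with a deliberately naive syllable counter; this is not Wmatrix-import's actual implementation, whose metrics are not specified here):

```python
# Illustrative readability scoring: Flesch Reading Ease with a crude
# vowel-group syllable estimate. Higher scores mean easier text.

import re

def syllables(word):
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syl / len(words))

score = flesch_reading_ease("The firm grew. Profits rose fast.")
```

Short sentences with short words score high; long, polysyllabic annual-report prose scores much lower, which is what makes readability a useful narrative feature.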

Compositional Data Analysis (CoDA) approaches to distance in information retrieval (20 February, 2015)

Speaker: Dr Paul Thomas

Many techniques in information retrieval produce counts from a sample, and it is common to analyse these counts as proportions of the whole—term frequencies are a familiar example.  Proportions carry only relative information and are not free to vary independently of one another: for the proportion of one term to increase, one or more others must decrease.  These constraints are hallmarks of compositional data.  While there has long been discussion in other fields of how such data should be analysed, to our knowledge, Compositional Data Analysis (CoDA) has not been considered in IR. In this work we explore compositional data in IR through the lens of distance measures, and demonstrate that common measures, naïve to compositions, have some undesirable properties which can be avoided with composition-aware measures.  As a practical example, these measures are shown to improve clustering.
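The composition-aware alternative alluded to above can be sketched with the Aitchison distance: Euclidean distance after a centred log-ratio (clr) transform (the data here is invented for illustration, not from the paper's experiments). Unlike plain Euclidean distance on counts, it is invariant to overall scale, which is exactly the compositional property at stake:

```python
# Illustrative Aitchison (CoDA) distance: clr transform, then Euclidean.

import math

def clr(xs):
    """Centred log-ratio transform of a vector of positive counts."""
    g = math.exp(sum(math.log(x) for x in xs) / len(xs))  # geometric mean
    return [math.log(x / g) for x in xs]

def aitchison(p, q):
    return math.dist(clr(p), clr(q))

# Term-count vectors for two documents; scaling one by a constant (same
# relative frequencies, e.g. a document five times as long) leaves the
# Aitchison distance unchanged.
p = [10, 20, 70]
q = [30, 30, 40]
d1 = aitchison(p, q)
d2 = aitchison([5 * x for x in p], q)
```

The clr transform requires strictly positive components, so in practice zero counts must be smoothed before applying it.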

FATA Seminar - The Subgraph Isomorphism Problem: three new ideas (17 February, 2015)

Speaker: Ciaran McCreesh

In the subgraph isomorphism problem, we are given a pattern graph P, and a target graph T, and we wish to find "a copy of P inside T". I will introduce three new practical improvements to the simple algorithms presented by Patrick last year.

Firstly, I will discuss supplemental graphs. The key idea is that a subgraph isomorphism i from P to T is also a subgraph isomorphism F(i) from F(P) to F(T), for certain transformations F. This lets us generate redundant constraints: we can search for a mapping which is simultaneously a subgraph isomorphism between several carefully selected pairs of graphs.

Secondly, I will introduce an intermediate level of inference for an all-different constraint. Traditionally a matching-based approach is used, but this scales poorly to large target graphs and generally does not provide much additional filtering. We use a weaker counting-based approach, which is much faster and which usually gives the same amount of filtering.
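A simplified version of such a counting check can be sketched as follows (a Hall-style condition over all subsets, for illustration only; the talk's algorithm is cheaper and integrated into constraint propagation rather than enumerating subsets):

```python
# Illustrative counting check for all-different: if some set of k variables
# between them can only take fewer than k distinct values, no injective
# assignment exists and search can fail immediately.

from itertools import combinations

def counting_alldiff_ok(domains):
    """Hall-style counting over every subset of variable domains
    (exponential, so only viable for small pattern graphs)."""
    for k in range(1, len(domains) + 1):
        for subset in combinations(domains, k):
            if len(set().union(*subset)) < k:
                return False
    return True

ok = counting_alldiff_ok([{1, 2}, {1, 2}, {1, 2, 3}])  # satisfiable
bad = counting_alldiff_ok([{1, 2}, {1, 2}, {1, 2}])    # 3 vars, 2 values
```

By Hall's theorem the condition is exact, but checking every subset is exponential; the appeal of the talk's counting approach is getting most of this filtering cheaply.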

Thirdly, I will revisit conflict-directed backjumping. I will show that there is no need to maintain conflict sets when working with cloned domains. I will also explain how the counting all-different algorithm can produce more fine-grained information on a conflict, allowing longer backjumps.

Users versus Models: What observation tells us about effectiveness metrics (16 February, 2015)

Speaker: Dr. Paul Thomas

Retrieval system effectiveness can be measured in two quite different ways: by monitoring the behaviour of users and gathering data about the ease and accuracy with which they accomplish certain specified information-seeking tasks; or by using numeric effectiveness metrics to score system runs in reference to a set of relevance judgements.  In the second approach, the effectiveness metric is chosen in the belief that it predicts ease or accuracy.

This work explores that link, by analysing the assumptions and implications of a number of effectiveness metrics, and exploring how these relate to observable user behaviours.  Data recorded as part of a user study included user self-assessment of search task difficulty; gaze position; and click activity.  Our results show that user behaviour is influenced by a blend of many factors, including the extent to which relevant documents are encountered, the stage of the search process, and task difficulty.  These insights can be used to guide development of batch effectiveness metrics.

Blocks: A Tool Supporting Code-based Exploratory Data Analysis (12 February, 2015)

Speaker: Mattias Rost

Large scale trials of mobile apps can generate a lot of log data. Logs contain information about the use of the apps. Existing support for analysing such log data includes mobile logging frameworks such as Flurry and Mixpanel, and more general visualisation tools such as Tableau and Spotfire. While these tools are great for giving a first glimpse at the content of the data and producing generic descriptive statistics, they are not great for drilling down into the details of the app at hand. In our own work we end up writing custom interactive visualisation tools for the application at hand, to get a deeper understanding of the use of the particular app. We have therefore developed a new type of tool that supports the practice of writing custom data analysis and visualisation code. We call it Blocks. In this talk I will describe what Blocks is, how Blocks encourages code writing, and how it supports the craft of log data analysis.

Mattias Rost is a researcher in Computing Science at the University of Glasgow. He is currently working on the EPSRC funded Populations Programme.

GPG: Equational Reasoning in Fine Grain Algorithms (11 February, 2015)

Speaker: Dr John O'Donnell

Algorithms expressed using a small collection of combinators, including map, fold, scan, and sweep, can be implemented efficiently on circuits, FPGAs, and GPGPUs. Equational reasoning is effective for deriving such algorithms and proving them correct. I will describe its use in recent and current work on a family of algorithms related to functional arrays.
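One of the classic equational laws in this combinator family can be checked executably (a generic Python illustration, not the talk's circuit or GPGPU derivations): the last element of a scan equals the corresponding fold.

```python
# Illustrative equational law over combinators: last(scan f e xs) == fold f e xs,
# checked on concrete data.

from functools import reduce
from itertools import accumulate
import operator

def fold(f, xs, init):
    return reduce(f, xs, init)

def scan(f, xs, init):
    """Running totals, including the initial value as the first element."""
    return list(accumulate(xs, f, initial=init))

xs = [1, 2, 3, 4]
law_holds = scan(operator.add, xs, 0)[-1] == fold(operator.add, xs, 0)
```

Laws like this one justify replacing a sequential fold with a parallel scan network (and vice versa) while provably preserving the result, which is the essence of deriving such algorithms by equational reasoning.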

The DeepTree Exhibit: Visualizing the Tree of Life to Facilitate Informal Learning (05 February, 2015)

Speaker: Florian Block (Harvard University)

More than 40% of Americans still reject the theory of evolution. This talk focuses on the DeepTree exhibit, a multi-user, multi-touch interactive visualization of the Tree of Life. The DeepTree has been developed to facilitate collaborative visual learning of evolutionary concepts. The talk will outline an iterative process in which a multi-disciplinary team of computer scientists, learning scientists, biologists, and museum curators worked together throughout design, development, and evaluation. The outcome of this process is a fractal-based tree layout that reduces visual complexity while being able to capture all life on earth; a custom rendering and navigation engine that prioritizes visual appeal and smooth fly-through; and a multi-user interface that encourages collaborative exploration while offering guided discovery. The talk will present initial evaluation outcomes illustrating that the large dataset encouraged free exploration, triggered emotional responses, and supported self-selected, multi-level engagement and learning.

Bio: Florian earned his PhD in 2010 at Lancaster University, UK (thesis titled “Reimagining Graphical User Interface Ecologies”). Florian’s work at SDR Lab has focused on using multi-touch technology and information visualization to facilitate discovery and learning in museums. He has worked on designing user interfaces for crowd interaction, developed the DeepTree exhibit, an interactive visualization of the tree of life, and introduced methodological tools to quantify engagement of fluid group configurations around multi-touch tabletops in museums. Ultimately, Florian is interested in how interactive technology can provide unique new opportunities for learning, in understanding which aspects of interactivity and collaboration contribute to learning, and in how to use large datasets to engage the general public in scientific discovery and learning.

Glasgow Mobile Apps Meetup (04 February, 2015)

Speaker: Raj Sark
Based on the famous TechMeetup format, we will have 2 talks on mobile tech

Based on the famous TechMeetup format, we will have 2 talks on mobile tech, networking and some beer and pizza. We will provide vegan and gluten-free pizzas too!

Speakers will be Raj Sark from Lupo, and Euan Freeman from the School of Computing Science, Glasgow.

Find more information about the speakers and get the FREE tickets here:

Event schedule:

6.00pm - registration
6.15pm - pizza & drinks
6.45pm - talks & discussions
7.30pm - networking
9.00pm - close

FATA Seminar - A semantic deconstruction of session types (03 February, 2015)

Speaker: Alceste Scalas

I will illustrate a semantic approach to the foundations of session types, by revisiting them in the abstract setting of labelled transition systems. The crucial insight is a simulation relation which generalises the usual syntax-directed notions of typing and subtyping, and encompasses both synchronous and asynchronous binary session types. This allows us to extend the session types theory to some common programming patterns which are not typically considered in the session types literature.

Supporting text entry review mode and other lessons from studying older adult text entry (29 January, 2015)

Speaker: Emma Nicol and Mark Dunlop (Strathclyde)

As part of an EPSRC project on Text Entry for Older Adults we have run several workshops. A theme of supporting a "write then review" style of entry has emerged from these workshops. In this talk we will present the lessons from our workshops along with our experimental keyboard, which supports review mode by highlighting various elements of the text you have entered. An Android demo will be available for download during the talk.

Incpy: Function Memoisation in Python (28 January, 2015)

Speaker: David R. White

IncPy is a memoisation tool designed to speed up scientific script debugging in Python. I am interested in IncPy due to its relevance to the AnyScale project. In this talk, I’ll describe how IncPy fits into the AnyScale vision, and then discuss how IncPy works, based on my experience porting and polishing the code.
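The core idea behind IncPy, memoisation, can be sketched in a few lines of Python. This is a hand-written decorator for illustration only; IncPy itself memoises automatically and persists results, with no annotations required:

```python
import functools

def memoise(fn):
    """Cache results of a pure function, keyed by its argument tuple."""
    cache = {}
    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:        # compute only on a cache miss
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

calls = 0

@memoise
def slow_square(x):
    global calls
    calls += 1                       # count real invocations
    return x * x

slow_square(4)
slow_square(4)                       # second call is served from the cache
```

The payoff in a debugging loop is that re-running a script skips any expensive calls whose inputs have not changed.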

GPG: Parallelisation and Model Coupling of Weather Simulations on Heterogeneous Platforms (28 January, 2015)

Speaker: Dr Wim Vanderbauwhede

This talk covers the work I did over the summer at Kyoto University in Japan. It involved porting a Large Eddy Simulator to OpenCL and creating the Glasgow Model Coupling Framework (GMCF), a novel model coupling framework, aimed at creating systems of communicating simulators. I will discuss the porting approach and performance of the OpenCL LES and explain the rationale and architecture of GMCF, and discuss our current work on it.

FATA Seminar - Boole's Legacy for Software (27 January, 2015)

Speaker: Muffy Calder

This will be a practice talk for a public lecture (to a scientific audience) I will be giving in Cork to celebrate the 200th anniversary of Boole’s birth (Boole was a Professor at UC Cork). I will give the polished lecture here in the School later this year; this will be an informal practice where I will look for feedback from FATA.

FATA Planning Meeting (20 January, 2015)


 A (hopefully) short meeting to plan the talks for the rest of the year.

ENDS Seminar: Rank Join Queries in NoSQL Databases (14 January, 2015)

Speaker: Nikos Ntarmos

Rank (i.e., top-k) join queries play a key role in modern analytics tasks. However, despite their importance, and unlike in centralized settings, they have been completely overlooked in cloud NoSQL settings. This talk will discuss our results in this area, as presented at the VLDB 2014 conference. Baseline solutions are offered using SQL-like tools (Hive, Pig) based on MapReduce jobs. We then provide a number of solutions based on specialized indices. The first such solution uses simple inverted indices accessed with MapReduce jobs. The second solution adapts and extends a popular centralized rank-join algorithm. We further contribute a novel statistical structure comprising histograms and Bloom filters, which forms the basis for our third solution. We also provide MapReduce algorithms to build these indices, algorithms to allow for concurrent online index updates, and query processing algorithms utilizing them. Last, we discuss the results of an extensive experimental evaluation, reporting on three metrics: query execution time, network bandwidth consumption, and dollar-cost of query execution.
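As a rough illustration of one ingredient of the statistical structure mentioned above, here is a minimal Bloom filter in Python. This is a generic textbook sketch, not the authors' implementation (their structure combines Bloom filters with histograms):

```python
import hashlib

class BloomFilter:
    """Answers 'definitely not present' or 'possibly present' for a set."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.m, self.k = num_bits, num_hashes
        self.bits = 0                      # a Python int used as a bit array

    def _positions(self, item):
        # derive k bit positions from salted hashes of the item
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # no false negatives; false positives possible
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

In a distributed rank join, such summaries let a node cheaply rule out remote data that cannot contribute to the top-k result, saving network round trips.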

GPG: Formal semantics for C-like languages (14 January, 2015)

Speaker: Dr Mark Batty

For performance, modern CPUs (and GPUs) admit relaxed behaviour: violations of sequential consistency. To enable efficient implementation above relaxed hardware without fencing simple memory accesses, programming languages like C11 and C++11/14 also admit relaxed behaviour, introducing substantial complexity to the programming model. My work provides a formal semantics for C11 and C++11/14 concurrency that was developed in close communication with the ISO committees that define the languages. The precise formal model has been used for several positive results: to identify and fix problems in the ISO specification, for proofs of the soundness of compilation mappings, in the development of formal reasoning principles, and in the proofs of basic tenets of the design. At the same time, the formal model allows us to criticise the design, and pinpoint its flaws.

In this talk I will review some of the positive results built on the formal C++11 memory model, including the proof of one of the key design goals of the language: that simple data-race-free programs are provided with sequentially consistent semantics (DRF-SC). I will then show that the C++11 memory model is fatally flawed when applied to C-like languages. I will discuss what might be done about this open problem, and I will talk about the extension of formal relaxed memory models to cover GPU computing.

FATA Seminar: Type-Based Verification of Message-Passing Parallel Programs (13 January, 2015)

Speaker: Vasco Vasconcelos

We present a type-based approach to the verification of the communication structure of parallel programs. We model parallel imperative programs where a fixed number of processes, each equipped with its local memory, communicate via a rich diversity of primitives, including point-to-point messages, broadcast, reduce, and array scatter and gather. The theory includes a decidable dependent type system incorporating abstractions for the various communication operators, a form of primitive recursion, and collective choice. We further introduce a core programming language for imperative, message-passing, parallel programming, and show that the language enjoys progress. Joint work with Francisco Martins, Eduardo R.B. Marques, Hugo A. López, César Santos and Nobuko Yoshida.

Towards Effective Retrieval of Spontaneous Conversational Spoken Content (08 January, 2015)

Speaker: Gareth J. F. Jones
Spoken content retrieval (SCR) has been the focus of various research initiatives for more than 20 years.

Spoken content retrieval (SCR) has been the focus of various research initiatives for more than 20 years. Early research focused on retrieval of clearly defined spoken documents, principally from the broadcast news domain. The main focus of this work was the spoken document retrieval (SDR) task at TREC-6-9, at the end of which SDR was declared a largely solved problem. However, this soon proved a premature conclusion: it related to controlled recordings of professional news content and overlooked many of the potential challenges of searching more complex spoken content. Subsequent research has focused on more challenging tasks such as search of interview recordings and semi-professional internet content. This talk will begin by reviewing early work in SDR, explaining its successes and limitations; it will then outline work exploring SCR for more challenging tasks, such as identifying relevant elements in long spoken recordings such as meetings and presentations; provide a detailed analysis of the characteristics of retrieval behaviour of spoken content elements when indexed using manual and automatic transcripts; and conclude with a summary of the challenges of delivering effective SCR for complex spoken content and initial attempts to address these challenges.

FATA Seminar - Quiz (16 December, 2014)

Speaker: Rob Irving

Addressing the Fundamental Attribution Error of Design Using the ABCS (11 December, 2014)

Speaker: Gordon Baxter

Why is it that designers continue to be irritated when users struggle to make their apparently intuitive systems work? I will explain how we believe that this perception is related to the fundamental attribution error concept from social psychology. The problem of understanding users is hard, though, because there is so much to learn and understand. I will go on to talk about the ABCS framework, a concept we developed to help organise and understand the information we know about users, and using examples will illustrate how it can affect system design.

Gordon Baxter is a co-author of the book Foundations For Designing User Centred Systems

ENDS Christmas Quiz (10 December, 2014)

Speaker: Jeremy Singer

The traditional ENDS Christmas Quiz, presented by Dr Jeremy Singer, and followed by the ENDS Christmas meal.

GPG: Pull and Push arrays, Effects and Array Fusion (10 December, 2014)

Speaker: Dr Josef Svenningsson

We present two different representations of functional parallel arrays: Pull and Push arrays. Functions written using these representations can easily be fused, which means that there is no performance penalty in writing in a high-level compositional style. We show how to use these representations in conjunction and how this allows for a very expressive programming style for writing efficient array programs.
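The fusion property of pull arrays can be illustrated with a small Python analogue (an illustrative sketch only; the talk's Pull and Push arrays are Haskell representations). A pull array is just a length paired with an index function, so `map` composes functions instead of building intermediate arrays:

```python
class Pull:
    """A pull array: a length plus an index function."""
    def __init__(self, length, index):
        self.length = length
        self.index = index

    def map(self, f):
        # composes index functions: no intermediate array is built
        return Pull(self.length, lambda i: f(self.index(i)))

    def zip_with(self, f, other):
        n = min(self.length, other.length)
        return Pull(n, lambda i: f(self.index(i), other.index(i)))

    def to_list(self):
        # materialise only at the very end of the pipeline
        return [self.index(i) for i in range(self.length)]

xs = Pull(5, lambda i: i)                           # 0, 1, 2, 3, 4
ys = xs.map(lambda x: x + 1).map(lambda x: x * 2)   # two maps fused into one pass
```

Because each element of `ys` is computed by one composed function, the chain of maps costs a single traversal when `to_list` finally runs.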

FATA Seminar - Demand-indexed computation (09 December, 2014)

Speaker: Roland Perera

I'll talk about an idea that came out of the work on program slicing that I did for my PhD.
An important role of GUIs is to provide control over how much of the output of a computation we actually see, via widgets like scrollpanes, collapsible lists, and tooltips. This usually means computing all the output upfront and then hiding some of it, or computing it on demand using ad hoc, application-specific logic.
A somewhat independent observation is that pattern-matching imposes a demand on the thing being pattern-matched: a case expression needs to know something (but perhaps not everything) about the scrutinee in order to decide which branch to take, and a function defined by a set of equations needs to know something (but perhaps not everything) about the argument in order to decide which of its defining equations is applicable.
"Tries" (a.k.a. prefix trees), extended with a notion of variable binding, can be used to formalise both of these notions of demand. I'll outline an operational semantics for a simple functional language where the demand on the output is specified explicitly in the form of a trie of a suitable type. Running the same program with more demand produces correspondingly more output. I plan to extend this with a notion of "differential" trie, representing a change in demand, plus a differential operational semantics which, given an increase in demand, does just enough work to produce the required extra output. Although I haven't worked this bit out yet, I'll try to explain the idea with several examples.

Watson (05 December, 2014)

Speaker: Angus McCann

Speaker Profile – Angus McCann
Angus McCann is a Healthcare Systems Specialist working within IBM's European Healthcare team having worked for IBM for over 25 years.   He holds a BSc (Hons) in Electronics and Electrical Engineering from the University of Edinburgh, an MSc in Healthcare Informatics from the Royal College of Surgeons (Edinburgh) and is completing a Masters in Public Health with the University of Manchester.   Angus sits on the BCS Health Scotland Committee, the Industry Advisory Board and the International Business Forum for Scotland's Digital Health Institute and is also a citizen representative on the City of Edinburgh Health and Social Care Partnership which is the new governance body for health and social care in the city.  His interests include cars, technology, rock music and chocolate.  

The session will introduce IBM's cognitive computing platform 'Watson' and then highlight its use in the context of healthcare, an industry that we all have some reliance upon. It will outline some of the early use cases that are being addressed with Watson.
Watson is an artificially intelligent computer system capable of answering questions posed in natural language. The technology was originally developed to answer questions on the quiz show “Jeopardy”. It is now being developed and utilised by IBM for applications in telecommunications, healthcare, government and financial services. More detail on Watson is available at

Augmenting and Evaluating Communication with Multimodal Flexible Interfaces (04 December, 2014)

Speaker: Eve Hoggan

This talk will detail an exploratory study of remote interpersonal communication using our ForcePhone prototype. This research focuses on the types of information that can be expressed between two people using the haptic modality, and the impact of different feedback designs. Based on the results of this study and my current work, I will briefly discuss the potential of deformable interfaces and multimodal interaction techniques to enrich communication for users with impairments. Then I will finish with an introduction to neurophysiological measurements of such interfaces.

Eve Hoggan is a Research Fellow at the Aalto Science Institute and the Helsinki Institute for Information Technology HIIT in Finland, where she is vice-leader of the Ubiquitous Interaction research group. Her current research focuses on the creation of novel interaction techniques, interpersonal communication and non-visual multimodal feedback. The aim of her research is to use multimodal interaction and varying form factors to create more natural and effortless methods of interaction between humans and technology regardless of any situational or physical impairment. More information can be found at

Joint ENDS/GPG Seminar: Reliable Scalable Symbolic Computation: The Design of SymGridPar2 (03 December, 2014)

Speaker: Phil Trinder

Symbolic computations are challenging to parallelise as they have complex data and control structures, and both dynamic and highly irregular parallelism. The SymGridPar framework has been developed to address these challenges on small-scale parallel architectures. However, as the number of cores in compute clusters continues to grow exponentially, and as the communication topology is becoming increasingly complex, an improved parallel symbolic computation framework is required: SymGridPar2.

In this talk, I'll explain how two main aspects of the design of SymGridPar2, fault tolerance and locality control, interact with dynamic scheduling of parallelism. Fault tolerance is achieved by tracking the location of tasks as they are scheduled across the network, and by replicating tasks that were affected by node failure. Locality control exposes an abstraction of the communication topology so programs can control how close together tasks shall be placed by the dynamic scheduler.

FATA Seminar - Russian Dolls Search (02 December, 2014)

Speaker: Ciaran McCreesh

Russian Doll Search is a general algorithmic technique for solving hard optimisation problems, which looks a bit like Branch and Bound combined with Dynamic Programming. I'll give an overview of how it works, and what it's been used for, and will then speculate about how we might parallelise it.
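As a concrete illustration of the scheme (an assumed toy example, not one from the talk), here is Russian Doll Search applied to a small 0/1 knapsack in Python: suffix subproblems are solved from smallest to largest, and each solved "doll" supplies the upper bound used to prune the next:

```python
def russian_doll_knapsack(values, weights, capacity):
    """Russian Doll Search on 0/1 knapsack: solve item suffixes i..n-1
    for i = n-1 down to 0, reusing smaller dolls as pruning bounds."""
    n = len(values)
    best = [0] * (n + 1)   # best[i] = optimum over items i..n-1 at full capacity
    for i in range(n - 1, -1, -1):       # solve the dolls smallest-first
        incumbent = -1                   # -1 so the first leaf always registers

        def search(j, cap, value):
            nonlocal incumbent
            if j == n:
                incumbent = max(incumbent, value)
                return
            # bound: best[j] (a previously solved doll, at full capacity)
            # over-estimates what items j..n-1 can add under any capacity,
            # so prune when even that cannot beat the incumbent
            if value + best[j] <= incumbent:
                return
            if weights[j] <= cap:
                search(j + 1, cap - weights[j], value + values[j])
            search(j + 1, cap, value)

        search(i, capacity, 0)
        best[i] = incumbent
    return best[0]
```

The Dynamic Programming flavour comes from the table `best`, while each doll is itself solved by Branch and Bound using the earlier entries as bounds.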

Grand Challenges in CS (28 November, 2014)


Grand Challenges Session
This session will have several purposes:

- to encourage everyone to think about how to do the most significant research

- to develop a shared understanding of the most important problems and
directions for computing science research

- to stimulate interaction between the research groups: perhaps someone
in another group has expertise relevant to your grand challenge, if only
they knew what your problem was


Wim Vanderbauwhede: exploiting manycore architectures

John Rooksby: Effective Mobile Health

Paul Siebert: TBC

Simon Rogers: Challenges in computational biology

GPG: Modelling atmospheric aerosol: why should we care about complexity and how do we find out associated impacts? (26 November, 2014)

Speaker: Dr David Topping

Uncertainties associated with impacts of aerosol particles on climate are larger than those of any other atmospheric components. In addition, fine particulate material is widely acknowledged as one of the most important pollutants impacting on air quality and human health. Atmospheric aerosol comprises both inorganic and organic material: the inorganic material is restricted to a few well-understood compounds, but the organic material can comprise many thousands of, as yet largely unidentified, compounds with a vast range of properties. Owing to this complexity and diversity of atmospheric aerosol components, quantification of the properties that determine their highly uncertain climatic and human health impacts requires the development of novel technological applications. Encompassing multiple disciplines, this includes developments in the fields of physics, chemistry, mathematics, engineering and computing.

Regional coupled chemistry-climate models attempt to carry our 'best knowledge' of how aerosols form and evolve on a UK- and EU-wide scale. Unfortunately, the process of embedding our 'best knowledge' is far from ideal. Chemical complexity comes at a premium and the associated impacts remain unknown. Mitigating this problem requires solutions from the fields of mathematics and computing. In this talk I will cover research carried out at the University of Manchester in this area along with developments from external collaborators.

FATA Seminar - Choreographies in the wild (25 November, 2014)

Speaker: Massimo Bartoletti

Distributed applications can be constructed by composing services which interact by exchanging messages according to some global communication pattern, called a choreography. Under the assumption that each service adheres to its role in the choreography, the overall application is communication-correct.

However, in wild scenarios like the Web or cloud environments, services may be deployed by different participants, which may be mutually distrusting (and possibly malicious). In these cases, one cannot assume (nor enforce) that services always adhere to their roles.

Many formal techniques focus on verifying the adherence between services and choreographic roles, under the assumption that no participant is malicious; in this case, strong communication-correctness results can be obtained, e.g. that the application is deadlock-free. However, in wild scenarios such techniques cannot be applied.

In this talk we present a paradigm for designing distributed applications in wild scenarios. Services use contracts to advertise their intended communication behaviour, and interact via sessions once a contractual agreement has been found. In this setting, the goal of a designer is to realise honest services, which respect their contracts in all execution contexts (also in those where other participants are malicious).

A key issue is that the honesty property is undecidable in general. In this talk we discuss verification techniques for honesty, targeted at agents specified in the contract-oriented calculus CO2. In particular, we show how to safely over-approximate the honesty property by a model-checking technique which abstracts from the contexts a service may be engaged with.

Blocks: A Tool Supporting Code-based Exploratory Data Analysis (20 November, 2014)

Speaker: Mattias Rost

Large scale trials of mobile apps can generate a lot of log data. Logs contain information about the use of the apps. Existing support for analysing such log data includes mobile logging frameworks such as Flurry and Mixpanel, and more general visualisation tools such as Tableau and Spotfire. While these tools are great for giving a first glimpse at the content of the data and producing generic descriptive statistics, they are not great for drilling down into the details of the app at hand. In our own work we end up writing custom interactive visualisation tools for the application in question, to get a deeper understanding of the use of the particular app. We have therefore developed a new type of tool that supports the practice of writing custom data analysis and visualisation. We call it Blocks. In this talk I will describe what Blocks is, how Blocks encourages code writing, and how it supports the craft of log data analysis.

Mattias Rost is a researcher in Computing Science at the University of Glasgow. He is currently working on the EPSRC funded Populations Programme. He was awarded his PhD by the University of Stockholm in 2013. 

GPG: Parallel Computation of Multifield Topology (19 November, 2014)

Speaker: Dr David Duke

Codes for computational science and downstream analysis (visualization and/or statistical modelling) have historically been dominated by imperative thinking. While this situation is evolving, e.g. through adoption of functional ideas in toolkits for high-performance visualization, we are still a long way from seeing a functional language such as Haskell used routinely in live applications, certainly for those involving peta-scale data and above.

This talk describes recent work on multifield data analysis that has led to new questions in nuclear physics, and the expanding role of Haskell in this research programme.  Following an introduction to visualization and topological analysis, I will describe the recent analysis of data from HPC simulation of nuclear fission undertaken by US collaborators that has led to new insight into the process of fission.  The talk will cover ongoing work using parallel functional programming on both shared and distributed memory architecture, and conclude with questions about the utility and future of functional programming in large-scale computational science.

FATA Seminar - Life out on the Savannah: formal models meet mixed-reality systems (18 November, 2014)

Speaker: Michele Sevegnani

We report on work with our HCI friends, Tom Rodden and Steve Benford, on modelling and analysis for Benford’s Savannah mixed reality “game”.  We show how our novel bigraphical model of four perspectives of the system (computational, technical, human and physical), gives us new ways to analyse relationships between the perspectives and prove formally that there are cognitive dissonances in the system, as exemplified by user-trials.
No bigraph algebra is required; we do everything with graphical forms (i.e. pictures)!
Formal modellers,  HCI experts, computer scientists, all welcome!

Handling Big Streaming Data with DILoS (14 November, 2014)

Speaker: Alexandros Labrinidis


For the past few years, our group has been working on problems related to Big Data through several projects. After briefly discussing these projects, the rest of this talk will present DILoS, which focuses on load management for "Big Streaming Data".

Today, the ubiquity of sensing devices as well as of mobile and web applications continuously generates a huge amount of data in the form of streams, which need to be continuously processed and analyzed to meet the near-real-time requirements of monitoring applications. Such processing happens inside data stream management systems (DSMSs), which efficiently support continuous queries (CQs). CQs inherently have different levels of criticality and hence different levels of expected quality of service (QoS) and quality of data (QoD). In order to provide different quality guarantees, i.e., service level agreements (SLAs), to different client stream applications, we developed DILoS, a novel framework that exploits the synergy between scheduling and load shedding in DSMSs. In overload situations, DILoS enforces worst-case response times for all CQs while providing prioritized QoD, i.e., minimizing data loss for query classes according to their priorities. We further propose ALoMa, a new adaptive load manager scheme that enables the realization of the DILoS framework. ALoMa is a general, practical DSMS load shedder that outperforms the state-of-the-art in deciding when the DSMS is overloaded and how much load needs to be shed. We implemented DILoS in our real DSMS prototype system (AQSIOS) and evaluated its performance for a variety of real and synthetic workloads. Our experiments show that our framework (1) allows the scheduler and load shedder to consistently honor CQs' priorities and (2) maximizes the utilization of the system processing capacity to reduce load shedding.

DILoS was developed in collaboration with Thao N. Pham (as part of her PhD thesis) and Panos K. Chrysanthis. This work has been funded in part by two NSF awards and a gift from EMC/Greenplum.

MyCity: Glasgow 2014 (13 November, 2014)

Speaker: Marilyn Lennon

During the summer of 2014, we (a small team of researchers at Glasgow University) designed, developed and deployed a smartphone app-based game for the Commonwealth Games in Glasgow. The overall aim of our game was to get people to engage with Glasgow, find out more about the Commonwealth Games, and above all to walk more through 'gamification'. In reality, though, we had no time or money for a well-designed research study and a proper exploration of gamification and engagement; in fact a huge amount of our effort was focused instead on testing in-app advertising models, understanding business models for 'wellness' apps, dealing with research and enterprise, and considering routes for commercialisation of our underlying platform and game. Come along and hear what we learned (good and bad) about deploying a health and wellness app in the 'real world'.

Dr Marilyn Lennon is a senior lecturer in Computer and Information Sciences at the University of Strathclyde.

GPG: Python and Parallelism (12 November, 2014)

Speaker: Mr J. Magnus Morton

In this talk we'll discuss the Python language, and scripting languages in general, and their suitability for parallel programming. I'll present my 5th-year project, a parallelising compiler for Python.

FATA Seminar - Subgraph Isomorphism Problem: simple algorithms (11 November, 2014)

Speaker: Patrick Prosser

In the subgraph isomorphism problem (SIP), we are given two graphs, G and H, where G is the pattern graph and H the target. The problem is then to determine whether there is a subgraph of H (the target graph) that is isomorphic to G (the pattern graph). In general, the problem is NP-hard. I will present some simple SIP algorithms, all using BitSet encodings, and progressively modify a base algorithm to give more sophisticated algorithms that better exploit problem structure.
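A minimal version of the bitset idea can be sketched in Python, using arbitrary-precision ints as bitsets (an assumed toy illustration, far simpler than the talk's algorithms): adjacency tests and "already used" checks each become a single bitwise operation.

```python
def subgraph_isomorphism(p_adj, t_adj):
    """Backtracking SIP. p_adj[u] / t_adj[v] are ints whose bit w / bit x
    is set iff u-w (pattern) / v-x (target) is an edge. Returns a mapping
    from pattern vertices to target vertices, or None."""
    p, t = len(p_adj), len(t_adj)
    mapping = [-1] * p
    used = 0                              # bitset of target vertices in use

    def extend(u):
        nonlocal used
        if u == p:
            return True
        for v in range(t):
            if used >> v & 1:
                continue
            # every already-mapped pattern neighbour of u must map to a
            # target neighbour of v
            if all(t_adj[v] >> mapping[w] & 1
                   for w in range(u) if p_adj[u] >> w & 1):
                mapping[u] = v
                used |= 1 << v
                if extend(u + 1):
                    return True
                used &= ~(1 << v)
                mapping[u] = -1
        return False

    return mapping if extend(0) else None
```

For example, searching for a triangle (pattern) inside a square with one diagonal (target) finds the triangle formed by the diagonal's endpoints.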

ENDS Seminar: Query Processing in Graph Database (05 November, 2014)

Speaker: Jing Wang

Graph databases can model a large range of scenarios, which has made the related research area widely studied. Among the various research topics, graph querying is the one we focus on. This talk will cover basic issues of query processing in graph databases, especially the subgraph query. Our ongoing research on improving the performance of subgraph query processing will also be included.

GPG: SKI combinators (really) are Turing computable (05 November, 2014)

Speaker: Prof Greg Michaelson

Since Turing established the equivalence of Turing machines and the lambda calculus in 1936/7, proof of the Turing computability of Curry’s combinatory calculus seems to have rested indirectly on its equivalence to the lambda calculus rather than direct construction. In this seminar, a Turing machine that reduces combinator expressions is presented. The TM is over 1000 quintuplets long and further illustrates the utter unsuitability for software engineering of Turing’s elegant and succinct model of computability.
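For contrast with the 1000-quintuplet Turing machine, the same reduction is a few lines in a high-level language. Here is a normal-order SKI reducer sketched in Python (my own illustrative encoding, not from the talk): terms are the atoms 'S', 'K', 'I', or nested 2-tuples `(f, x)` for application.

```python
def reduce_ski(term):
    """Repeatedly contract the leftmost-outermost redex until normal form."""
    def step(t):
        if not isinstance(t, tuple):
            return t, False
        f, x = t
        if f == 'I':                          # I x  ->  x
            return x, True
        if isinstance(f, tuple):
            g, y = f
            if g == 'K':                      # K y x  ->  y
                return y, True
            if isinstance(g, tuple) and g[0] == 'S':
                z = g[1]                      # S z y x  ->  (z x) (y x)
                return ((z, x), (y, x)), True
        f2, changed = step(f)                 # no top-level redex:
        if changed:                           # reduce inside, leftmost first
            return (f2, x), True
        x2, changed = step(x)
        return (f, x2), changed

    changed = True
    while changed:
        term, changed = step(term)
    return term

skk_a = ((('S', 'K'), 'K'), 'a')              # S K K a behaves like I a
```

Note this naive reducer need not terminate on terms with no normal form; the TM in the talk faces the same issue, as it must given Turing completeness.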

FATA Seminar - Linear numeral systems (04 November, 2014)

Speaker: Ian Mackie

We take a fresh look at an old problem of representing natural numbers in the lambda-calculus.  Our interest is in finding representations where we can compute efficiently (and where possible, in constant time) the following functions: successor, predecessor, addition, subtraction and test for zero. Surprisingly, we find a solution in the linear lambda-calculus, where copying and erasing are not permitted.

Reducing the password burden: Investigating the effectiveness of data-driven authentication on mobile (04 November, 2014)

Speaker: Dr Mike Just
Recent research on the effectiveness of performing implicit authentication on smart phones, where sensor data is used to authenticate a user based upon their behaviour.

I will overview our group's recent research on the effectiveness of performing implicit authentication on smart phones, where sensor data is used to authenticate a user based upon their behaviour. In addition to results related to usability, security, and resource consumption, I will discuss some practical deployment issues related to training duration, and behaviour stability.

Mike is a Senior Lecturer and the Associate Director of the Interactive and Trustworthy Technologies group at GCU. He has published on many areas of computer security and cryptography, and is particularly interested in building usable security solutions. In 2003 he designed the Government of Canada's online account recovery solution, used by more than 6 million citizens and businesses. He recently led a two-year EU project investigating the use of mobile phone sensors for authentication, which will be the subject of this presentation. Mike obtained his PhD from Carleton University (Canada), and in addition to his academic work, he spent 10 years in both the private and public sectors. You can find more information, including publications, at

ENDS Seminar: Teach yourself Java 8 in 24 hours ^H^H^H^H^H minutes (29 October, 2014)

Speaker: Dr Jeremy Singer

I will give a lightning tour of new features in Java 8, focusing particularly on lambda expressions and streams. This will be an interactive session, so please come along and contradict me if you know more about lambdas than me.

GPG: High-Performance Computer Algebra: A Case Study Experience Report (29 October, 2014)

Speaker: Dr Patrick Maier

At the tail end of the HPC-GAP project I spent some months trying to parallelise a computer algebra problem with the aim of scaling up to supercomputers.

In this talk I'll gently introduce the problem, namely finding invariant bilinear forms in the representation theory of Hecke algebras. I'll present a schematic overview of the sequential algorithm (originally written in the computer algebra system GAP) and discuss how to parallelise it (using the SymGridPar2 framework). I'll present some performance results, including runtime and (estimated) speedup figures on up to 1024 cores, and an analysis of why this problem is difficult to parallelise. There is a paper with details:

I am not an expert on the algebraic side of this case study; apologies in advance for all the half-truths I'll be telling. (The real purpose of this talk is to help me remember the little algebra I've learned during the project.)

FATA Seminar - Size versus truthfulness in the House Allocation problem (28 October, 2014)

Speaker: Baharak Rastegari

I will present our result from last year (presented in EC 2014) on designing truthful mechanisms for the House Allocation (HA) problem. HA is the problem of allocating a set of objects among a set of agents, where each agent has ordinal preferences (possibly involving ties) over a subset of the objects. We focus on truthful mechanisms without monetary transfers for finding large Pareto optimal matchings.  

What I've learned so far about recognition-based graphical passwords (User and Developer Guidelines) (28 October, 2014)

Speaker: Hani Aljahdali
Development of Graphical Authentication Schemes

This talk will present guidelines for developing and using recognition-based graphical passwords properly in terms of usability and security. These guidelines are based on in-depth interviews with 23 graphical password users from my previous studies. The guidelines will identify the aspects that need to be considered in future work on recognition-based graphical passwords.

Ms. Male Character - Tropes vs Women (23 October, 2014)

Speaker: YouTube Video - Anita Sarkeesian

In this session we will view and discuss a video from the Feminist Frequency website. The video is outlined as follows: "In this episode we examine the Ms. Male Character trope and briefly discuss a related pattern called the Smurfette Principle. We’ve defined the Ms. Male Character Trope as: The female version of an already established or default male character. Ms. Male Characters are defined primarily by their relationship to their male counterparts via visual properties, narrative connection or occasionally through promotional materials."

ENDS Seminar: GLANF: An Open Framework for Network Function Virtualization in Software-Defined Networks (22 October, 2014)

Speaker: Simon Jouet

As part of our ongoing research in Software Defined Networking (SDN) and the applications of central knowledge in data centres, we have designed an open framework for Network Function Virtualization (NFV). This talk will cover the current issues of middlebox deployment in large-scale data centres, how NFV tries to alleviate the cost and management problems, and finally how GLANF unifies SDN and NFV to provide an infrastructure-independent, elastic and transparent NFV framework.

GPG: Propositions as Sessions, Semantically (22 October, 2014)

Speaker: Dr J. Garrett Morris

Session types provide a static guarantee that concurrent programs respect communication protocols. Recently Caires, Pfenning, and Toninho, and Wadler, have developed a correspondence between the propositions of linear logic and session typed pi-calculus processes.

In this talk, I'll attempt to relate the cut-elimination semantics of this approach to a more typical operational semantics for session-typed concurrency in a functional language.  I'll present a minimal concurrent functional language, GV, with a type system based on Wadler's interpretation of session types. I'll give a small-step operational semantics for GV. I'll develop a suitable notion of deadlock for our functional setting, based on existing approaches for capturing deadlock in pi-calculus, and show that well-typed GV programs are deadlock-free, deterministic and terminating.  I'll also define two extensions of GV and show that they preserve deadlock freedom.

FATA Seminar - New Software Applications for Kidney Exchange (21 October, 2014)

Speaker: David Manlove

In this talk I will give some background to kidney exchange in the UK context, and present some recent results from quarterly matching runs.  I will then give an overview of two new software applications for kidney exchange that emerged from summer MSc projects.  The first of these, due to Tommy Muggleton, is a tool for visualising input datasets and optimal solutions for kidney exchange problem instances.  The second, due to James Trimble, allows datasets to be generated that are a better reflection of the UK real data than a previous generator produced, and also permits characteristics of datasets to be analysed, and optimal solutions to be compared and contrasted with respect to different optimality criteria.

A review of multiple graphical password user studies and reported results (21 October, 2014)

Speaker: Soumyadeb Chowdhury
Overview of user studies in graphical authentication.

This talk will present a brief review of all the user studies (known to me) in the field of graphical authentication schemes (GASs) that have explored the memorability of multiple graphical passwords. The review of each user study will discuss the system used for the experiment, the experimental protocol, the results obtained, and our inferences (which are based upon the research published by the respective authors).

Chiasson, S. et al., 2009. Multiple Password Interference in Text and Click-Based Graphical Passwords. In Proceedings of the 16th ACM Conference on Computer and Communications Security (CCS '09). New York: ACM.

Everitt, K.M., Bragin, T., Fogarty, J. & Kohno, T., 2009. A Comprehensive Study of Frequency, Interference, and Training of Multiple Graphical Passwords. In Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI '09).

Moncur, W. & Leplatre, G., 2007. Pictures at the ATM: Exploring the Usability of Multiple Graphical Passwords. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '07).

Chowdhury, S., Poet, R. & Mackenzie, L., 2014. Passhint: Memorable and Secure Authentication. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM Press.

Postcard from Singapore (17 October, 2014)

Speaker: Jeremy Singer
report on recent travels

I recently spent three weeks in Singapore, teaching introductory material (AF2, Linux3) to our new class of CS students. In this talk, I will give an honest (uncensored!) account of my experiences, both pedagogical and gustatory. There were highs and lows in both domains. Talk will be accompanied by lots of photos!

Use of Eye Tracking to Rethink Display Blindness. (16 October, 2014)

Speaker: Sheep Dalton

Public and situated display technologies are an increasingly common part of many urban spaces, including advertising displays on bus stops, interactive screens providing information to tourists or visitors to a shopping centre, and large screens in transport hubs showing travel information as well as news and advertising content. Situated display research has also been prominent in HCI, ranging from studies of community displays in cafes and village shops to large interactive games in public spaces and techniques to allow users to interact with different configurations of display and personal technologies.

Observational studies of situated displays have suggested that they are rarely looked at. Using a mobile eye tracker during a realistic shopping task in a shopping centre, we show that people look at displays more than might be expected given observational studies, but for very short times (a third of a second on average) and from quite far away. We characterise the patterns of eye movements that precede looking at a display and discuss some implications for the design of situated display technologies deployed in public spaces.

Ensemble: An Imperative Language for Embedded Systems, High Performance Computing, and Distributed Systems based on the Pi-Calculus (15 October, 2014)

Speaker: Paul Harvey
Actor-based programming

For my PhD, I have built an actor-based programming language and applied it to embedded systems, desktop, and GPGPU & multicore programming. In each case, the language has provided a simpler programming model than the current norm. This talk will be about the last part of my work, which is applying the language to distributed and adaptive computing situations.
As certain design decisions are still being made/rethought, contributions and criticism are more than welcome!

FATA Seminar - An introduction to knot theory (14 October, 2014)

Speaker: Brendan Owens

I will give a gentle introduction to mathematical knot theory, focusing on combinatorial aspects that may be of interest to computing scientists, and avoiding technical details.  Topics include knot diagrams and Reidemeister moves, knots and graphs, tabulation of knots, and some discussion of ways of encoding knots.  I will include a brief description of my ongoing search for alternating ribbon knots (joint with Frank Swenton).

Android Permissions (14 October, 2014)

Speaker: Rosanne English
HUSH Research Group Talk

Lecture about the Go Programming Language (13 October, 2014)

Speaker: Dave Cheney
In this lecture he will talk about various aspects of Google's Go programming language.



Place:  Sir Alwyn Williams Building, Level 5


Date: Monday 13th October, 17.00


Dave Cheney works on the Juju project at Canonical and has been a Go evangelist for several years.


All welcome.

Academic Cloud Computing Research: Pitfalls and Opportunities (08 October, 2014)

Speaker: Blesson Varghese
Seminar on cloud computing

This talk will cover at least five fundamental pitfalls that can restrict academics from conducting cloud computing research at the infrastructure level, where the vast majority of academic research currently lies. Instead, academics should be conducting higher-risk research in order to gain understanding and open up entirely new areas.

The case for embracing the abstractions provided by clouds will be presented through five opportunities: user-driven research, new programming models, PaaS environments, and improved tools to support elasticity and large-scale debugging. The objective is to foster discussion, and to define a roadmap forward which will allow academia to make longer-term impacts on the cloud computing community.


Link to paper presented at HotCloud '14 

GPG Seminar - Hardware Support for Shared-memory Concurrency: Reconciling Programmability with Performance (08 October, 2014)

Speaker: Dr Vijay Nagarajan

With regard to hardware support for shared-memory concurrency, an inherent tradeoff between programmability and performance is presumed. For instance, the most intuitive memory consistency model, sequential consistency (SC), is presumed to be too expensive to support; likewise, primitive synchronization operations such as memory fences and atomic read-modify-writes (RMWs), which are used as the building blocks of higher-level synchronization constructs, are costly in current processors; finally, there are question marks over whether cache coherence will scale with an increasing number of cores.

In this talk, I will argue that it is indeed possible to provide hardware support that enhances programmability without sacrificing performance. First, I will show how SC can be enforced efficiently using conflict ordering, a novel technique for achieving memory ordering. Second, I will show how RMWs can be implemented efficiently in x86 architectures. I will conclude the talk with a scalable approach to cache coherence called consistency-directed coherence.

FATA Summer Summary, or, how did you spend your summer? (07 October, 2014)

Speaker: FATA members

Accelerating Datacenter Services with Reconfigurable Logic (03 October, 2014)

Speaker: Aaron Smith
SICSA visiting speaker seminar

Datacenter workloads demand high computational capabilities, flexibility, power efficiency, and low cost. It is challenging to improve all of these factors simultaneously. To advance datacenter capabilities beyond what commodity server designs can provide, we have designed and built a composable, reconfigurable fabric at Microsoft to accelerate portions of large-scale software services. In this talk I will describe a medium-scale deployment of this fabric on a bed of 1,632 servers, and discuss its efficacy in accelerating the Bing web search engine along with future plans to improve the programmability of the fabric.



Aaron Smith is a member of the Computer Architecture Group at Microsoft Research. He is broadly interested in optimizing compilers, computer architecture and reconfigurable computing. Over the past 15 years he has led multiple industrial and research compiler projects at Metrowerks/Freescale Semiconductor, The University of Texas at Austin and Microsoft. He received his PhD in Computer Science from UT-Austin in 2009 and is currently serving as co-General Chair of CGO 2015.

Economic Models of Search (02 October, 2014)

Speaker: Leif Azzopardi

Understanding how people interact when searching is central to the study of Interactive Information Retrieval (IIR). Most of the prior work has either been conceptual, observational or empirical. While this has led to numerous insights and findings regarding the interaction between users and systems, the theory has lagged behind. In this talk, I will first provide an overview of the typical IIR process. Then I will introduce an economic model of search based on production theory. This initial model is then extended to incorporate other variables that affect the interaction between the user and the search engine. The refined model is more realistic, provides a better description of the IIR process, and enables us to generate eight interaction-based hypotheses regarding search behavior. To validate the model, I will show how the observed search behaviors from an empirical study with thirty-six participants were consistent with the theory. This work not only describes a concise and compact representation of search behavior, but also provides a strong theoretical basis for future IIR research. The modeling techniques used are also more generally applicable to other situations involving Human-Computer Interaction, and could be helpful in understanding many other scenarios.

This talk is based on the paper, “Modeling Interaction with Economic Models of Search” which received an Honorable Mention at ACM SIGIR 2014, see:

GPG: Dataflow programming for optimised real-time computer vision on FPGAs (01 October, 2014)

Speaker: Dr Rob Stewart

The analysis of human activity from video sequences has witnessed a major growth in applications including surveillance, vehicle autonomy and intelligent domiciles. In many application domains, there is a strong need for intelligent video processing and more context-aware acquisition devices at source, to considerably reduce the amount of data to be transferred between sensors to perform real-time processing.

I will report on ongoing work in the EPSRC Rathlin project between Heriot-Watt University and Queen's University Belfast, where we are addressing such issues with FPGA-based processors for image processing. The focus of this talk will be the programmability of FPGAs using dataflow programming languages. I will briefly explore existing dataflow architectures and languages, then describe our multicore CPU and FPGA optimisations for a person-tracking case study, and conclude with ongoing work on program transformation for heterogeneous architectures involving FPGAs.

CANCELLED Instrumental Interaction in Multisurface Environments (25 September, 2014)

Speaker: Michel Beaudouin-Lafon
This talk will illustrate the principles and applications of instrumental interaction, in particular in the context of the WILD multi surface environment.

Unfortunately this talk has been cancelled.


Open for Business: Harnessing the Power of Social Media Streams (25 September, 2014)

Speaker: Social Media Analytics Researchers and Student Start-Ups

Knowledge Exchange Event at the School of Computing Science

Social media provide a rich seam of data on people, newsworthy events, social and professional networks, customers, brands, and opinions which researchers and companies are beginning to unlock.

Businesses and public agencies are invited to join us at this free event to find out about the innovative techniques that researchers at Glasgow University's School of Computing Science are using to uncover new value from social media, and discover the commercial opportunities emerging from this space. 

  • Discover how our researchers are detecting real world events within a few minutes of their occurrence from live witnesses - often before emergency services and news agencies pick them up. 
  • Find out how social media can be combined with sensor technologies to navigate smart cities and find the venues, attractions and hotspots or peaceful retreats to suit their interests. 
  • Learn how event organisation and business networking is being transformed by combining speaker and delegate social media data with a novel end-to-end event management platform. 
  • Realise how to connect with your organisation's  dynamically through their social media presence, without having to recruit or follow them. 
  • Understand the commercial opportunity that social media analysis presents your organisation. Take inspiration from young innovative start-up companies carving out their place in an emerging market place.


The detailed programme of the event can be found here.  

Using degraded MP3 quality to encourage a health improving walking pace: BeatClearWalker (18 September, 2014)

Speaker: Andreas Komninos

Promotion of walking is integral to improving public health for many sectors of the population. National governments and health authorities now widely recommend a total daily step target (typically 7,000-10,000 steps/day). Meeting this target can provide considerable physical and mental health benefits and is seen as a key target for reducing national obesity levels, and improving public health. However, to optimise the health benefits, walking should be performed at a “moderate” intensity - often defined as 3 times resting metabolic rate, or 3 METs. While there are numerous mobile fitness applications that monitor distance walked, none directly target the pace, or cadence, of walkers.

BeatClearWalker is a fitness application for smartphones, designed to help users learn how to walk at a moderate pace (monitored via walking cadence, steps/min.) and encourage maintenance of that cadence. The application features a music player with linked pedometer. Based on the user’s target cadence, BeatClearWalker will apply real-time audio effects to the music if the target walking cadence is not being reached. This provides an immersive and intuitive application that can easily be integrated into everyday life as it allows users to walk while listening to their own music and encourages eyes-free interaction with the device.

This talk introduces the application, its design and evaluation. Results show that using our degraded music decreases the number of below-cadence steps and, furthermore, that the effect can persist when the degradation is stopped.

GIST Seminar (Automotive UI / Mobile HCI) (11 September, 2014)

Speaker: Alex Ng and Ioannis Politis
Ioannis and Alex will present their papers from Automotive UI and Mobile HCI

Speaker: Ioannis Politis
Title: Speech Tactons Improve Speech Warnings for Drivers

This paper describes two experiments evaluating a set of speech and tactile driver warnings. Six speech messages of three urgency levels were designed, along with their tactile equivalents, Speech Tactons. These new tactile warnings retained the rhythm of speech and used different levels of roughness and intensity to convey urgency. The perceived urgency, annoyance and alerting effectiveness of these warnings were evaluated. Results showed that bimodal (audio and tactile) warnings were rated as more urgent, more annoying and more effective compared to unimodal ones (audio or tactile). Perceived urgency and alerting effectiveness decreased along with the designed urgency, while perceived annoyance was lowest for warnings of medium designed urgency. In the tactile modality, ratings varied less as compared to the audio and audiotactile modalities. Roughness decreased and intensity increased ratings for Speech Tactons in all the measures used. Finally, Speech Tactons produced acceptable recognition accuracy when tested without their speech counterparts. These results demonstrate the utility of Speech Tactons as a new form of tactile alert while driving, especially when synchronized with speech.

Speaker: Alex Ng
Title: Comparing Evaluation Methods for Encumbrance and Walking on Interaction with Touchscreen Mobile Devices

In this talk, I will be presenting our accepted paper at this year’s MobileHCI. The paper compares two mobile evaluation methods, walking on a treadmill and walking on the ground, to evaluate the effects of encumbrance (holding objects during interaction with mobile devices) while the preferred walking speed (PWS) is controlled. We will discuss the advantages and limitations of each evaluation method when examining the impact of encumbrance.

Robotics at SoCS for Teaching and Research (II) (05 September, 2014)

Speaker: Gerardo Aragon-Camarasa, Daniel Callander & Paul Siebert

We would like to introduce you to the robotic facilities we now have at the School to support research and teaching. In AY 2013-2014 we ran a small number of teaching projects on Dexterous Blue, which proved both challenging and enormously popular with those Level 3 and Level 4 students who were working on this platform. What made this an interesting prospect from a teaching perspective is that the Robot Operating System (ROS), a freely available platform for robot control and applications development, already provides everything required to get a full-blooded robot project working. Despite the complexity of robot control and sensing, ROS allowed our Level 3 undergraduate team to integrate their software modules to produce an impressive demonstration of a robot recognising the presence of a face and then making a drawing using a felt-tip marker pen.
Therefore, ROS enables students to concentrate on making their bite-sized contribution within a much more complex operational system.

Building on our experience, the School has now procured a Baxter Research Robot from Rethink Robotics to support undergraduate and postgraduate projects, providing a second two-armed full-scale robot on which advanced projects can be supported. We shall overview Baxter, detailing its wide array of built-in sensors and its control system. We will also give a tour of what it is like to develop for Baxter by discussing its SDK and the facilities it provides, as well as how it interfaces with the ROS middleware. From there, we will describe ROS and present a brief overview of its capabilities for teaching purposes. ROS provides the interface layer between hardware and software, and accordingly can be used with our other robotic facilities, such as the big blue robot, named Dexterous Blue, housed on Level 7 of the Boyd Orr Building (and used for research in clothing perception and manipulation).

A number of video and live demonstrations (yes, you can meet Baxter during and after the talk!) will be shown involving both robots, emphasising the teaching potential for exploring topics in computer vision applications, parallel and distributed algorithms and systems, software engineering, artificial intelligence, machine learning, information retrieval and big data. Live demos will include: endpoint position mimicry using a Kinect-like device, autonomous block stacking and organisation, face detection with tracking, joint position recording and playback, and arm puppeteering, among others. All of these were designed and implemented in a two-month summer internship by Daniel Callander!

If you are interested in running robotics-based projects that both engage and challenge, come along to see what the School has to offer in its two well-supported two-armed robotics platforms.

Workshop: Designing Peer Instruction Questions for Learning (not assessment) (05 September, 2014)

Speaker: Professor Beth Simon

As computing educators, we often have little experience with and even less love for multiple-choice questions.  Expertise in our discipline is based on problem-solving, analysis of trade-offs, etc. – but not on memorization.  However, the efficacy of the Peer Instruction pedagogy is predicated on “good” multiple-choice questions (asking students to discuss how many pints they had at the pub last night would not be expected to support learning of binary search trees).  

What makes a good Peer Instruction question?  One that is hard, but not too hard.  One that draws students into discussion.  One students can LEARN from.  How does one write such a beast?  Unfortunately there’s little literature and no algorithm.  In this workshop, we’ll:

·      compare and contrast not-so-good and better PI questions used in university Computer Science courses
·      introduce a scaffolding to support development of “good” PI questions
·      share and discuss PI questions developed by my colleagues and available on


If you are interested in the earlier Friday 12pm talk and this workshop, and would like lunch in between, please RSVP using the form below, so we can plan both lunch and materials targeting your courses:

How we teach impacts students learning, performance, and persistence: Results from three studies of Peer Instruction in Computer Science (05 September, 2014)

Speaker: Professor Beth Simon

What a course “is” and “does” can be viewed through the lens of instructional design. Any course should be based around the learning goals we have for students taking the course – what it is we want them to know and be able to do when they finish the course.  Describing how we go about supporting students in achieving those goals can be broken into two parts: a) the content/materials we choose to cover and b) the methods/pedagogical approaches we employ.  In this talk I review the results of three studies looking at the impact of method or pedagogical approach in computing courses.  Specifically, I’ll review our experience using the Peer Instruction method (aka “clickers”) in computer science courses at UC San Diego and discuss the following:

a) an observed 50% reduction in fail rate in four computing courses adopting Peer Instruction,

b) an in-situ comparison study showing Peer Instruction students to perform 6% better than students in a standard  “lecture” setting, and

c) a 30% increase in retention of majors after adopting a trio of best practices in our introductory programming course (Peer Instruction, Media Computation, and Pair Programming).


This session is followed by a workshop on designing peer instruction questions at 2pm - if you would like lunch in between, and so we can tailor aspects of the session to your discipline, please let us know before 9am Wednesday 3rd, using the following form:

On Inverted Index Compression for Search Engine Efficiency (01 September, 2014)

Speaker: Matteo Catena

Efficient access to the inverted index data structure is a key aspect for a search engine to achieve fast response times to users’ queries. While the performance of an information retrieval (IR) system can be enhanced through the compression of its posting lists, there is little recent work in the literature that thoroughly compares and analyses the performance of modern integer compression schemes across different types of posting information (document ids, frequencies, positions). In this talk, we show the benefit of compression for different types of posting information to the space- and time-efficiency of the search engine. Comprehensive experiments have been conducted on two large, widely used document corpora and large query sets; using different modern integer compression algorithms, integrated into a modern IR system, the Terrier IR platform. While reporting the compression scheme which results in the best query response times, the presented analysis will also show the impact of compression on frequency and position posting information in Web corpora that have large volumes of anchor text.
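To make the topic concrete for readers unfamiliar with posting-list compression, here is an illustrative sketch (my own, not Terrier's implementation) of variable-byte coding, one of the classic integer compression schemes compared in such studies. Document ids are gap-encoded first so the integers to be compressed stay small:

```java
import java.io.ByteArrayOutputStream;

// Illustrative variable-byte (VByte) posting-list compression.
// Each integer is split into 7-bit chunks; the high bit of a byte
// marks the final chunk of an integer.
public class VByte {
    public static byte[] encode(int[] docIds) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int prev = 0;
        for (int id : docIds) {
            int gap = id - prev;          // store gaps, not raw ids
            prev = id;
            while (gap >= 128) {
                out.write(gap & 0x7F);    // 7 payload bits, high bit clear
                gap >>>= 7;
            }
            out.write(gap | 0x80);        // final chunk: high bit set
        }
        return out.toByteArray();
    }

    public static int[] decode(byte[] bytes, int count) {
        int[] ids = new int[count];
        int pos = 0, prev = 0;
        for (int i = 0; i < count; i++) {
            int gap = 0, shift = 0, b;
            do {
                b = bytes[pos++] & 0xFF;
                gap |= (b & 0x7F) << shift;
                shift += 7;
            } while ((b & 0x80) == 0);    // continue until final chunk
            prev += gap;                  // undo gap encoding
            ids[i] = prev;
        }
        return ids;
    }
}
```

Small gaps fit in a single byte, so a posting list of nearby document ids compresses to roughly one byte per posting; the byte-aligned format also decodes quickly, which is the space/time trade-off the talk examines.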

Video explanations for university (computing) courses: Lessons drawn from MOOCs (01 September, 2014)

Speaker: Professor Beth Simon

Question: How many times should a child be able to watch and hear their teacher solving a long division problem?  5 times?  10 times?  As many times as the student needs?

The brouhaha surrounding MOOCs has increased interest in the potential of short video snippets for supporting learning.  In this talk, I’ll highlight relevant cognitive science research on multi-media learning with a focus on creating videos showing “problem solving” in computer science.*  Specific highlights include recommended video length, the value of inserting labels on the various parts of a process, the use of “in-video” prediction quizzes to support active video watching, and why low-cost production processes may better support learning.

 *Though these suggestions are also relevant for our colleagues in other problem-solving intensive disciplines.


TechMeetup (27 August, 2014)

Speaker: 1. Marc Burgauer 2. Matt Wynne & Seb Rose:
1. Marc Burgauer: "What's the worst that could happen?" 2. Matt Wynne & Seb Rose: Continuous Delivery

* Marc Burgauer: "What's the worst that could happen?" - How authentic connection enables change and innovation

This talk presents the answers I have found about how we can remain authentic in a blame culture; how we can build authentic trust and enable safe-to-fail environments to strengthen our connections, as well as my own experience applying these practices.

* Matt Wynne & Seb Rose: Continuous Delivery

Continuous delivery is the Next Big Thing™ in software development. But what is it? Who’s doing it, and why? What are the pitfalls? Matt and Seb will give you the theory and share their practical experience. You’ll leave the talk with a clear understanding of what it means, and what it takes, to practice continuous delivery.

Interactive Visualisation of Big Music Data. (22 August, 2014)

Speaker: Beatrix Vad

Musical content can be described by a variety of features that are measured or inferred through the analysis of audio data. For a large music collection this establishes the possibility to retrieve information about its structure and underlying patterns. Dimensionality reduction techniques can be used to gain insight into such a high-dimensional dataset and to enable visualisation on two-dimensional screens. In this talk we investigate the usability of these techniques with respect to an interactive exploration interface for large music collections based on moods. A method employing Gaussian Processes to extend the visualisation with additional information about its composition is presented and evaluated.

Behavioural Biometrics for Mobile Touchscreen Devices (22 August, 2014)

Speaker: Daniel Buschek

Fast dynamic type-checking of unsafe code (and other stories) (11 August, 2014)

Speaker: Stephen Kell
ENDS visiting speaker

Huge amounts of widely-used code are written in C, C++ and other unsafe languages (meaning languages which do not enforce any type- or memory-safety invariants). Considerable existing work has added some kind of invariant checking to such languages (usually just to C), but with several caveats: sacrificing source- or binary-level compatibility, imposing high run-time overheads, and (almost always) checking spatial and/or temporal memory correctness but ignoring type-correctness.

To start, I'll describe libcrunch, a language-agnostic infrastructure for dynamically checking the type-correctness of unsafe code. Using a novel disjoint metadata implementation and careful integration with existing toolchains (notably a C front-end), libcrunch allows fast dynamic type checking with full source- and binary-level compatibility and with generally low run-time overheads.

Towards the end of the talk I'll zoom out a little to place libcrunch alongside some other projects, as part of a wider agenda: closing the gap between "static" and dynamic language infrastructure, hence enabling greater degrees of cross-language composition, reasoning and tool support.

Baxter robot (01 August, 2014)

Speaker: Daniel Callander, Gerardo Aragon-Camarasa, Paul Siebert

Inference in non-linear dynamical systems – a machine learning perspective (08 July, 2014)

Speaker: Carl Rasmussen

Inference in discrete-time non-linear dynamical systems is often done using the Extended Kalman Filtering and Smoothing (EKF) algorithm, which provides a Gaussian approximation to the posterior based on local linearisation of the dynamics. In challenging problems, when the non-linearities are significant and the signal-to-noise ratio is poor, the EKF performs poorly. In this talk we will discuss an alternative algorithm developed in the machine learning community which is based on message passing in factor graphs and the Expectation Propagation (EP) approximation. We will show that this method provides a consistent and accurate Gaussian approximation to the posterior, enabling system identification using Expectation Maximisation (EM) even in cases where the EKF fails.
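As background (a standard textbook formulation, not taken from the talk itself), the local linearisation the EKF performs can be sketched as follows:

```latex
% Non-linear transition model with Gaussian process noise:
x_{t+1} = f(x_t) + w_t, \qquad w_t \sim \mathcal{N}(0, Q)
% The EKF linearises f about the current mean estimate \hat{x}_t:
F_t = \left.\frac{\partial f}{\partial x}\right|_{x = \hat{x}_t}
% and then propagates mean and covariance as in the linear Kalman filter:
\hat{x}_{t+1} = f(\hat{x}_t), \qquad
P_{t+1} = F_t \, P_t \, F_t^{\top} + Q
```

The Gaussian approximation is only accurate while f is nearly linear around \hat{x}_t, which is why the EKF degrades when non-linearities are significant or noise is large.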

SPLS Key Talk: Stream-processing for Functional Programmers (18 June, 2014)

Speaker: Ryan Newton

Functional programming and stream-processing have shared history -- from early work on dataflow architectures, VAL, and SISAL, to Haskell's use of stream-based IO (before monads) or the modern-day resurgence of Haskell stream libraries (iteratees, pipes, conduit). These days, "streaming" can mean a lot of things; StreamIt, based on synchronous-dataflow, has totally ordered streams and will not duplicate stateful stream processors, whereas Apache Storm makes the opposite decisions. The first part of this talk will overview this broad landscape.

We argue that the degree of dynamism (e.g. in data-rates and stream topologies) is the major axis along which various stream technologies are differentiated. In the second part of this talk, we describe our past and ongoing work on navigating this spectrum, by developing technologies that leverage regularities where they occur, but tolerate dynamism. We have studied profile-driven program partitioning, and other compilation topics, and our current thrust for developing stream DSLs overlaps heavily with work on data-parallel DSLs (e.g. Accelerate).

Scottish Programming Languages Seminar (SPLS) (18 June, 2014)


The Scottish Programming Languages Seminar is a forum for discussion of all aspects of programming languages. We meet for an afternoon once every few months. The summer 2014 SPLS meeting will be held at the University of Glasgow. Meetings are open; if you wish to attend, please register here.

Towards A Research-Oriented Culture for the Teaching and Learning of Computing (18 June, 2014)

Speaker: Professor Raymond Lister
Professor Lister discusses the double lives that academics lead, noting how different research and teaching cultures are. Using examples drawn from Australasian projects, he shows how research practices need to, and can, be drawn into our learning and teaching.

Academics lead a double life. In our research lives we see ourselves as
part of a community that reaches beyond our own university. We read
literature, we attend conferences, we publish, and the cycle
repeats, with community members building upon each other's work. But in
our other life we rarely discuss teaching beyond our own university,
nor are we guided by theory or literature; instead we simply follow our
private instincts.  Whereas in research, we build upon the previous
research cycle, in teaching we reinvent the wheel.  As the American NSF recently
noted, "undergraduate computing education today often looks much as it
did several decades ago".  That is, while the information technology we teach
has changed dramatically over the decades, our pedagogy remains largely unchanged.

Academics in computing, or in any other discipline, can approach their
teaching as research into how novices become experts.  The Australian
and New Zealand BRACElet project is one model of how this can be done.
It was a multi-institutional action research study of how novice
programmers comprehend and write computer programs.  While BRACElet was a
research project, it remains close to educational practice, with much of
the data analyzed coming from exam papers completed by first year undergraduates
at the participating universities.  This talk will review the research
results from that project.

GPG: A Future of Parallel Programming Languages (04 June, 2014)

Speaker: Dr Hans-Wolfgang Loidl

The development of parallel hardware has seen radical changes over the last two decades. These changes have made parallel programming mainstream technology, with the advent of multi-cores, and now pose new challenges in exploiting hierarchical, heterogeneous hardware, in the form of clusters of accelerated multi-cores.

In this talk I will examine design principles in a range of high-level parallel programming languages aiming to tackle these challenges and match their development with that of the underlying hardware. As a concrete instance I will discuss abstractions explored in the context of Glasgow parallel Haskell (GpH) and I will give recent performance results indicating how a very high level language approach can manage to adapt to underlying hardware. No previous knowledge of Klingon will be required for this talk.

Adaptive Interaction (02 June, 2014)

Speaker: Professor Andrew Howes
A utility maximization approach to understanding human interaction with technology

This lecture describes a theoretical framework for the behavioural sciences that holds high promise for theory-driven research and design in Human-Computer Interaction. The framework is designed to tackle the adaptive, ecological, and bounded nature of human behaviour. It is designed to help scientists and practitioners reason about why people choose to behave as they do and to explain which strategies people choose in response to utility, ecology, and cognitive information processing mechanisms. A key idea is that people choose strategies so as to maximise utility given constraints. The framework is illustrated with a number of examples including pointing, multitasking, skim-reading, online purchasing, Signal Detection Theory and diagnosis, and the influence of reputation on purchasing decisions. Importantly, these examples span from perceptual/motor coordination, through cognition, to social interaction. Finally, the lecture discusses the challenging idea that people seek to find optimal strategies, and the implications for behavioural investigation in HCI.

Big Data in the Social Sciences (02 June, 2014)

Speaker: Sarah Birch
One-day workshop to take place on 23 June

One-day workshop to take place on 23 June. This workshop will bring together prominent scholars from a number of institutions to discuss the theoretical and methodological challenges associated with the use of Big Data.
Registration is free, although there are a limited number of places available.

For further information, contact Sarah Birch or Philip Habel.

GPG: The Design of the AJITPar Parallel Coordination Language (28 May, 2014)

Speaker: Dr Patrick Maier

The AJITPar (Adaptive Just-In-Time Parallelisation) project aims to achieve portable parallel performance by combining dynamic trace-based just-in-time compilation of a high-level parallel functional language with dynamic demand-driven scheduling of parallelism. This will involve estimating the granularity of parallel tasks by online profiling and static analysis, and (in later project stages) adapting granularity by online code transformations.

The starting point of AJITPar is lambdachine, a recently developed sequential Just-In-Time (JIT) compiler for Haskell. In this talk, I'll present our ideas for how to make lambdachine parallel. To this end, we design a low-level domain-specific language (DSL) for task-parallel computations. Specifically, this DSL should deal with task creation and scheduling, communication between and synchronisation of tasks, and serialisation of data (including tasks).

The design goals for this DSL are as follows:
* It should be expressive enough to enable building higher-level abstractions, like algorithmic skeletons.
* It should be flexible enough to express a range of benchmarks, from regular matrix bashing to highly irregular symbolic computations.
* It should support an equational theory of program transformations, to support online transformation in later stages of AJITPar.
* Most importantly, it should be easy to bolt on top of the single-threaded lambdachine runtime system.

[Apologies to those who have heard this talk 4 weeks ago at Heriot-Watt.  I'll say exactly the same, just 30% slower to make up for the longer slot.]

Complex networks and complex processes (27 May, 2014)

Speaker: Professor Simon Dobson

There is increasing interest in using complex networks to model phenomena, and especially in the construction of systems of networks that interact and respond to each other -- the so-called complex adaptive coupled networks. They seem to offer a level of abstraction that is appropriate for capturing the large-scale dynamics of real-world processes without becoming lost in the detail. This talk introduces such networks for a formal audience, describes some recent work in urban traffic modelling, and speculates on the combination of complex networks with sensor data to study environmental incidents such as flooding.

National and international data linkage — top down or bottom up? (23 May, 2014)

Speaker: John Bass, SICSA Distinguished Visiting Fellow

Abstract: How do we create a national, let alone international, linkage?
At the national level, projects rarely have the time and willingness to
pay attention to detail, and tend to create "broad brush" data. Small
local entities often take pride in making use of local knowledge to
create high-quality linkage. Is it possible to have a big picture that
still reflects the quality of linkage found in a local cancer registry?
It's easier than you might think!

Biography: After an early career in marine zoology combined with
computing, John Bass has been at the leading edge of health-related data
linkage in Australia since 1984. Early work on infant mortality in
Western Australia resulted in a linked dataset that became the
cornerstone of the Telethon Institute for Child Health Research. He then
implemented the Australian National Death Index in Canberra before
returning to Perth as the founding manager of the Western Australian
linked health data project — the first of its kind in the country. He
designed and implemented the technical system of this group, which is
widely recognised as the foremost data linkage unit in Australia. John
stepped aside from his position in 2000 but has continued a close
relationship with the project, designing and overseeing the
implementation of genealogical links and then spending several years
working with state and federal government to implement the first
large-scale linkage of national pharmaceutical and general practice
information. This involved the development of new best-practice privacy
protocols that are now widely adopted across Australia. He was a core
participant in developing a detailed plan for the implementation of a
second state-based data linkage unit involving New South Wales and the
Australian Capital Territory. In 2008 John moved to Tasmania, where he
spent four years planning and paving the way for the implementation of a
state-wide data linkage unit. He is now semi-retired, but still working
on new developments in data linkage technology.

Callisto: revisiting parallel runtime systems for multicore architectures (23 May, 2014)

Speaker: Tim Harris
ENDS seminar by an industrial research partner

Tim Harris is a researcher at Oracle Labs where his interests include parallel programming, operating systems, and architecture support for programming language runtime systems. His other recent work has focused on the implementation of software transactional memory for multi-core computers, and the design of programming language features based on it. He is a co-author of the Morgan Claypool textbook on Transactional Memory.

Tim has a BA and PhD in computer science from Cambridge University Computer Laboratory. He was on the faculty at the Computer Laboratory from 2000-2004 where he led the department’s research on concurrent data structures and contributed to the Xen virtual machine monitor project. He was at Microsoft Research from 2004, and then joined Oracle Labs in 2012.

GPG: Propositions as Sessions (21 May, 2014)

Speaker: Prof Philip Wadler

Continuing a line of work by Abramsky (1994), by Bellin and Scott (1994), and by Caires and Pfenning (2010), among others, this paper presents CP, a calculus in which propositions of classical linear logic correspond to session types. Continuing a line of work by Honda (1993), by Honda, Kubo, and Vasconcelos (1998), and by Gay and Vasconcelos (2010), among others, this paper presents GV, a linear functional language with session types, and presents a translation from GV into CP. The translation formalises for the first time a connection between a standard presentation of session types and linear logic, and shows how a modification to the standard presentation yields a language free from deadlock, where deadlock freedom follows from the correspondence to linear logic.

ICFP, September 2012; Journal of Functional Programming, Best Papers of ICFP 2012.

Reasoning about Optimal Stable Matchings under Partial Information (20 May, 2014)

Speaker: Baharak Rastegari

We study two-sided matching markets in which participants are initially endowed with partial preference orderings, lacking precise information about their true, strictly ordered list of preferences. We wish to reason about matchings that are stable with respect to agents' true preferences, and which are furthermore optimal for one given side of the market. We present three main results. First, one can decide in polynomial time whether there exists a matching that is stable and optimal under all strict preference orders that refine the given partial orders, and can construct this matching in polynomial time if it does exist. We show, however, that deciding whether a given pair of agents are matched in all or no such optimal stable matchings is co-NP-complete, even under quite severe restrictions on preferences. Finally, we describe a polynomial-time algorithm that decides, given a matching that is stable under the partial preference orderings, whether that matching is stable and optimal for one side of the market under some refinement of the partial orders.
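As background, the complete-information case that these results refine is solved by the classic Gale-Shapley deferred-acceptance algorithm, which produces the stable matching optimal for the proposing side. A sketch (my illustration of the classic algorithm; the talk's algorithms additionally handle partial preference orders):

```python
def gale_shapley(men_prefs, women_prefs):
    # Man-optimal stable matching for strict, complete preference lists.
    free = list(men_prefs)                    # proposers yet to be matched
    next_choice = {m: 0 for m in men_prefs}   # index of next proposal
    engaged = {}                              # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)}
            for w, p in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])           # w trades up; old partner freed
            engaged[w] = m
        else:
            free.append(m)                    # w rejects m
    return {m: w for w, m in engaged.items()}
```

Under partial orders the question becomes whether one matching is optimal for the proposing side under every (or some) strict refinement, which is where the complexity results above come in.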

Recipes for PhD (16 May, 2014)

Speaker: Milad Shokouhi

A PhD is a bit like cooking. Most people follow similar steps but the outcome could be very different. Each of us has our own special recipes, and eventually we all make a unique PhD cookbook. In this talk, I'll share my recipes. Bon Appétit!

Milad Shokouhi is a Senior Applied Researcher working for Bing at Microsoft Research Cambridge. He is also an honorary lecturer in the School of Computing Science at the University of Glasgow. Before joining Microsoft, he did his PhD on federated search at RMIT University in 2007. His other research interests include auto-completion, personalization, federated search and query reformulation. He has published more than 30 papers and has served on the program committees of most major IR conferences and journals.

Web-scale Semantic Ranking (16 May, 2014)

Speaker: Dr Nick Craswell
Bing Ranking Techniques

Semantic ranking models score documents based on closeness in meaning to the query rather than by just matching keywords. To implement semantic ranking at Web scale, we have designed and deployed a new multi-level ranking system that combines the best of inverted-index and forward-index technologies. I will describe this infrastructure, which is currently serving many millions of users, and explore several types of semantic models: translation models, syntactic pattern matching and topical matching models. Our experiments demonstrate that these semantic ranking models significantly improve relevance over our existing baseline system. This is a repeat of a WWW 2014 industry track talk.

A Domain Specific Language for GPU Programming (14 May, 2014)

Speaker: Dr John O'Donnell


GPG Seminar: A Faster ESFA Implementation: some useful work and a bit of outrage (14 May, 2014)

Speaker: Dr Cordelia Hall and Dr John O'Donnell

John gave a talk on ESFAs last year. ESFA is a fine-grained data-parallel algorithm suited to GPUs. We have a direct implementation written in CUDA, but this is a large and low-level piece of code. This talk explores techniques used to optimise the code. We've also developed a much more readable language for GPU programming called AL, and have implemented ESFAs using it.

Mining Behavior Models from User-Intensive Web Applications (13 May, 2014)

Speaker: Dr. Giordano Tamburrelli

Many modern user-intensive applications, such as Web applications, must satisfy the interaction requirements of thousands if not millions of users, which can hardly be fully understood at design time. Designing applications that meet user behaviors, by efficiently supporting the prevalent navigation patterns, and evolving with them requires new approaches that go beyond classic software engineering solutions.
In this talk we present a preliminary approach called BEAR that automates the acquisition of user-interaction requirements in an incremental and reflective way. More precisely, the approach builds upon inferring a set of probabilistic Markov models of the users' navigational behaviors, dynamically extracted from the interaction history given in the form of a log file. BEAR builds on top of a well-established formal tool: it analyzes the inferred models to verify quantitative properties by means of probabilistic model checking. The talk discusses the capabilities of BEAR and illustrates the preliminary results obtained on a Web application currently in use.

CS Education – a hot topic (09 May, 2014)

Speaker: Quintin Cutts

Although I've worked in CS education research and schools outreach for over 15 years, it occurred to me that I've almost never given a talk on it in the School.  Given that CS in schools is a hot topic globally right now, and there's a lot happening in Scotland too, it's time to break the silence…

But, I wondered, of all that work, what should I tell them in just 30 mins?  I reflected on what makes academics tick (as far as I can tell!) and so what might be of interest… here's my top four:

1.  Interesting stuff
2.  Nice people from around the world
3.  Doing good
4.  Money (so we can do the above)

The talk will touch on all of these in relation to the work I've been doing in Scotland, the UK and the US.  Given that we're at such an interesting juncture in CS education nationally and internationally, there'll also be an invite to engage with as much or as little of it as you choose…

Introducing Tagged Pointers to the HotSpot JVM (07 May, 2014)

Speaker: Mr Wing Li

Our previous research has shown that non-Java languages, such as Clojure and Scala, frequently box their primitive values within objects. These objects are typically short-lived, requiring garbage collection later. Tagged pointers address this problem by encoding the primitive value directly in the bitstring of the pointer word itself, removing the need to allocate an object on the heap. However, checking each pointer reference for tags and type lookup can add to the overhead of this technique. This presentation is on my work implementing tagged pointers within the industry-standard HotSpot JVM interpreter. I describe some of the decisions made and their effects. I also describe the modifications made to the template-based interpreter to support tagged pointers.
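The core encoding trick can be sketched in a few lines (an illustrative model, not HotSpot's actual bit layout): because heap objects are aligned, the low bits of a real pointer are always zero, so a set low bit can mark the word as an immediate integer instead.

```python
TAG_BITS = 1
INT_TAG = 1

def tag_int(n):
    # Store the integer in the word itself: shift left, set the tag bit.
    # No heap allocation means no garbage to collect later.
    return (n << TAG_BITS) | INT_TAG

def is_tagged_int(word):
    # Real object pointers are aligned, so their low bit is 0.
    return word & INT_TAG == INT_TAG

def untag_int(word):
    return word >> TAG_BITS
```

The overhead mentioned in the abstract comes from the extra branch: every use of a reference must first test the tag bit before it can be dereferenced or unboxed.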

GPG Seminar: On the Capability and Achievable Performance of FPGAs for HPC Applications (07 May, 2014)

Speaker: Dr Wim Vanderbauwhede

In this talk, we take a look at how suitable FPGAs are for High-Performance Computing (HPC), and outline our proposed approach to achieving optimal performance-per-Watt.

Mining Behavior Models from User-Intensive Web Applications (06 May, 2014)

Speaker: Dr Giordano Tamburrelli

Many modern user-intensive applications, such as Web applications, must satisfy the interaction requirements of thousands if not millions of users, which can hardly be fully understood at design time. Designing applications that meet user behaviors, by efficiently supporting the prevalent navigation patterns, and evolving with them requires new approaches that go beyond classic software engineering solutions.
In this talk we present a preliminary approach called BEAR that automates the acquisition of user-interaction requirements in an incremental and reflective way. More precisely, the approach builds upon inferring a set of probabilistic Markov models of the users' navigational behaviors, dynamically extracted from the interaction history given in the form of a log file. BEAR builds on top of a well-established formal tool: it analyzes the inferred models to verify quantitative properties by means of probabilistic model checking. The talk discusses the capabilities of BEAR and illustrates the preliminary results obtained on a Web application currently in use.

Do I need to fix a failed component now, or can I wait until tomorrow? (06 May, 2014)

Speaker: Prof Muffy Calder

Ideally in systems in which failures are monitored and sensed, an engineer would fix a failure immediately. But this might not be possible due to limited resources and/or physical distance to a device. So how does an engineer prioritise and make best use of their resources, while still ensuring the service is operating within acceptable levels of risk of failure?

We hypothesise that predictive event-based modelling and reasoning with a stochastic temporal logic can inform decision making when failures occur. We show, with a real industrial case study (a safety-critical communications system for NATS), that by relating the status of assets to service behaviour in a CTMC model, the risk of service failure now and over various time frames, future failure rates, and interventions can be quantified. We reason both in the context of how the system is designed to meet service requirements, and how it actually meets service requirements, when the models are calibrated with rates derived from historical field data.
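The simplest building block of such a model is the exponentially distributed time-to-failure of a CTMC transition; a toy calculation of "risk now versus risk by tomorrow" under an assumed constant failure rate (my sketch, not the NATS model):

```python
import math

def prob_failure_within(rate_per_hour, hours):
    # For exponential time-to-failure with rate lambda (the basic CTMC
    # transition), P(fail within t) = 1 - exp(-lambda * t).
    return 1.0 - math.exp(-rate_per_hour * hours)

# Hypothetical rate: can the engineer wait until tomorrow?
risk_now = prob_failure_within(0.01, 1)        # within the next hour
risk_tomorrow = prob_failure_within(0.01, 24)  # within the next 24 hours
```

The full model composes many such transitions over the system's asset states, which is why probabilistic model checking, rather than a closed-form expression, is used to quantify the risk.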

Open Access and Public Engagement (02 May, 2014)

Speaker: Jamie Gallagher

The way we disseminate our research is changing. There are increasing drivers to publish in open access journals and to engage wider audiences with our research. The newly appointed public engagement officer and a representative from the library will discuss these quickly developing areas and how they relate to your research. They will demonstrate how public engagement and open access can be used to create more impact with your work, show how you can include these in grant proposals, and detail the help and assistance available from the University in these areas.

GPG Seminar: Auto-tuned Programming Patterns and the Programmability Gap (30 April, 2014)

Speaker: Dr Christian Fensch

My work tackles an impending software crisis. Developments in hardware technology are set to render programmers unable to write efficient new applications, and to make existing applications run slower on new processors than on old ones. I propose a technique enabling programmers to generate applications that automatically adapt to future processors.

In my talk, I will briefly discuss the roots of the problem and the key insight to address it: computer programs exhibit recurring patterns. Detecting and exploiting these patterns is already a very hard problem. Once done, these patterns can be used to automate the process of transferring programs to future processors. In the remainder of this talk, I will present several activities that follow along this insight.

First, I will present Partans – an autotuning framework for stencil computation. In particular, I will discuss the performance impact of the PCIe interconnect in a seemingly homogeneous system. Second, I present results from a collaborative project with Samsung investigating how to design pattern based programming hierarchies. Finally, I will show some preliminary results on expressing commonly used benchmarks in a patternised way and conclude with an outlook onto my future research activities.

Canonical Labelling of Graphs (29 April, 2014)

Speaker: Dr Alice Miller

Many of us in FATA are concerned with symmetry in combinatorial objects, specifically in the use of isomorphism checking to eliminate copies of the same object during their generation, or to prevent redundant work during combinatorial search (for example, during model checking). I have been using the graph isomorphism package, nauty, for years, and eventually decided that I should find out how it works!

The most efficient method used for graph isomorphism checking is to use a certificate, or canonical labelling. A canonical labelling is a function C that maps any graph to a natural number in such a way that C(G1)=C(G2) if and only if G1 and G2 are isomorphic. In this talk I discuss canonical labelling of trees, and of graphs in general. For the latter, I describe the popular algorithm using equitable partitions used by the most successful graph isomorphism programs such as nauty. The talk will include explanatory examples and pictures, and no pseudocode!
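For trees, canonical labelling is simple enough to sketch: the classic AHU encoding sorts each node's child codes, so any two isomorphic rooted trees produce the same string (a textbook illustration, not nauty's equitable-partition algorithm):

```python
def canonical_code(children, v):
    # Recursively encode the subtree rooted at v; sorting the child codes
    # makes the result independent of the order children are listed in.
    return "(" + "".join(sorted(canonical_code(children, c)
                                for c in children[v])) + ")"
```

The string can then be hashed to a natural number to match the definition above. General graphs are much harder precisely because there is no root and no child order to recurse on, which is where equitable partitions and individualisation-refinement come in.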

Optimized Interleaving for Retrieval Evaluation (28 April, 2014)

Speaker: Filip Radlinski

Interleaving is an online evaluation technique for comparing the relative quality of information retrieval functions by combining their result lists and tracking clicks. A sequence of such algorithms has been proposed, each being shown to address problems in earlier algorithms. In this talk, I will formalize and generalize this process by introducing a formal model: after identifying a set of desirable properties for interleaving, I will show that an interleaving algorithm can be obtained as the solution to an optimization problem within those constraints. This approach makes explicit the parameters of the algorithm, as well as assumptions about user behavior. Further, this approach leads to an unbiased and more efficient interleaving algorithm than any previous approach, as I will show with a novel log-based analysis of user search behaviour.
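As a concrete example of the kind of earlier, hand-designed algorithm this framework generalizes, team-draft interleaving can be sketched as follows (my illustration; the talk's contribution is deriving such algorithms as solutions to an optimization problem rather than by hand):

```python
import random

def team_draft(list_a, list_b, rng=random):
    # Team-draft interleaving: each round a coin toss decides which ranker
    # picks first; each team then appends its best not-yet-shown result.
    pool = set(list_a) | set(list_b)
    interleaved, credit, seen = [], [], set()
    while len(seen) < len(pool):
        order = [("A", list_a), ("B", list_b)]
        rng.shuffle(order)                    # the coin toss
        for team, ranking in order:
            for doc in ranking:
                if doc not in seen:
                    seen.add(doc)
                    interleaved.append(doc)
                    credit.append(team)       # clicks on doc credit this team
                    break
    return interleaved, credit
```

Clicks are then tallied per team, and the ranker whose results attract more clicks is judged better; the optimized approach instead chooses the mixing distribution to maximize sensitivity while provably remaining unbiased.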

GPG Seminar: CUDA vs. OpenCL (23 April, 2014)

Speaker: Mr Paul Harvey

Last year I spent the summer at the University of Aizu in Japan, working on porting a particle dispersion simulation from CUDA to OpenCL. This talk will discuss the results, including a number of comparisons between the two technologies on different hardware platforms, among them the Xeon Phi.

GIST Talk - Accent the Positive (10 April, 2014)

Speaker: Alistair Edwards

The way people speak tells us a lot about their origins, geographical and social, but when someone can only speak with the aid of an artificial voice (such as Stephen Hawking), conventional expectations are subverted. The ultimate aim of most speech synthesis research is more human-sounding voices, yet the most commonly used one, DecTalk, is quite robotic. Why is this, and is a human voice always appropriate?

This seminar will explore some of the limitations and possibilities of speech technology.

Gaussian Processes for Big Data (03 April, 2014)

Speaker: Dr James Hensman

Gaussian Process (GP) models are widely applicable models of functions, and are used extensively in statistics and machine learning for regression, classification and as components of more complex models. Inference in a Gaussian process model usually costs O(n^3) operations, where n is the number of data. In the Big Data (tm) world, it would initially seem unlikely that GPs might contribute due to this computational requirement.

Parametric models have been successfully applied to Big Data (tm) using the Robbins-Monro gradient method, which allows data to be processed individually or in small batches. In this talk, I'll show how these ideas can be applied to Gaussian Processes. To do this, I'll form a variational bound on the marginal likelihood and discuss the properties of this bound, including the conditions under which we recover exact GP behaviour.

Our methods have allowed GP regression on hundreds of thousands of data points, using a standard desktop machine. For more details, see .

Software Defined Data Centres through Container based Virtualisation (28 March, 2014)

Speaker: Mr. Simon Jouet

Due to structural and design reasons such as uncertain resource demand, poor workload placement and long provisioning time scales, modern data centres have been running at as low as 10% utilisation. In order to increase utilisation, the modern approach has been to allocate multiple tenants onto the same server through virtualisation techniques such as Xen; however, even with highly sophisticated placement and consolidation algorithms, servers rarely run above 50% utilisation.

In this talk, I will discuss container-based virtualisation, a finer-grained approach to multi-tenant isolation, and its possible immediate benefits for server utilisation. Secondly, I will briefly introduce Docker and how it can significantly change current DC management. Finally, I will explain my ideas on how containers can be used to improve server utilisation as well as network placement and isolation (slicing), storage, and resiliency.

Julie and the Giant Sphere (28 March, 2014)

Speaker: Julie Williamson

Over the last five months, I have been leading a short knowledge exchange project with Pufferfish Ltd. The project revolved around analyzing pedestrian traffic around the large spherical displays designed by Pufferfish Ltd. During this talk, I will discuss my experiences in securing KE funds to support the project, using the university press office to engage with the media, and how I went about completing a variety of public engagement activities.

GPG Seminar: Profiling-Based Characterisation of Glasgow Parallel Haskell Applications (26 March, 2014)

Speaker: Mr Evgenij Belikov

We present a profiling-based characterisation of 8 small and medium-sized semi-explicitly parallel functional applications from several domains with respect to thread granularity, communication, and memory management profile, which are highly relevant for dynamic and adaptive parallelism control. The results confirm that the parallel Haskell implementation copes well with large numbers of potential threads, quantify the impact of the communication rate on parallel performance, and identify memory management overhead as a major limiting factor to scalability on shared-memory machines. This characterisation helps to improve our understanding of the behaviour of parallel functional programs and discover program characteristics that can be exploited to optimise application performance by improving the effectiveness and efficiency of parallelism management.

Composite retrieval of heterogeneous web search (24 March, 2014)

Speaker: Horatiu Bota

Traditional search systems generally present a ranked list of documents as answers to user queries. In aggregated search systems, results from different and increasingly diverse verticals (image, video, news, etc.) are returned to users. For instance, many such search engines return to users both images and web documents as answers to the query "flower". Aggregated search has become a very popular paradigm. In this paper, we go one step further and study a different search paradigm: composite retrieval. Rather than returning and merging results from different verticals, as is the case with aggregated search, we propose to return to users a set of "bundles", where a bundle is composed of "cohesive" results from several verticals. For example, for the query "London Olympic", one bundle per sport could be returned, each containing results extracted from news, videos, images, or Wikipedia. Composite retrieval can promote exploratory search in a way that helps users understand the diversity of results available for a specific query and decide what to explore in more detail.


We proposed and evaluated a variety of approaches to construct bundles that are relevant, cohesive and diverse. We also utilize both entities and terms as surrogates to represent items and demonstrate their effectiveness in bridging the "mismatch" gap among heterogeneous sources. Compared with three baselines (traditional "general web only" ranking, federated search ranking and aggregated search), our evaluation results demonstrate significant performance improvement for a highly heterogeneous web collection.

Three Languages in Three Weeks: A Swedish approach to introducing programming (21 March, 2014)

Speaker: John Hamer

For the past four years I have been teaching a course "Programmering för Nybörjare" ("Programming for Beginners") that introduces programming using three diverse programming languages: Python, Prolog and Erlang. I will describe how the course is organised, and what a novice student can reasonably be expected to achieve within the space of a week.

GPG Seminar: Experience and Open Questions with Xeon Phi (19 March, 2014)

Speaker: Susanne Oehler and Paul Cockshott

This will cover what we have done to enable the Glasgow Pascal Compiler to target the Xeon Phi, what we have learned from doing this, and what open questions there are for the further development of the system. We are particularly interested in soliciting advice from others on what we should do next.

FATA Seminar - Profile-based optimal matchings in the Student/Project Allocation (18 March, 2014)

Speaker: Augustine Kwanashie
Profile-based optimal matchings in the Student/Project Allocation

In the Student/Project Allocation problem (SPA) we seek to assign students
to group or individual projects offered by lecturers. Students are required
to provide a list of projects they find acceptable in order of preference.
Each student can be assigned to at most one project and there are
constraints on the maximum number of students that can be assigned to each
project and lecturer.  A matching in this context is a set of
student-project pairs that satisfies these constraints.
We seek to find matchings that satisfy optimality criteria based on the
profile of a matching. This is a vector whose ith component indicates the
number of students obtaining their ith-choice project. Various profile-based
optimality criteria have been studied. For example, one matching M1 may be
preferred to another matching M2 if M1 has more students with first-choice
projects than M2.
In this talk we present an efficient algorithm for finding optimal matchings
to SPA problems based on various well known profile-based optimality
criteria. We model SPA as a network flow problem and describe a modified
augmenting path algorithm for finding a maximum flow which can then be
transformed to an optimal SPA matching. This approach allows for additional
constraints, such as project and lecturer lower quotas, to be handled
flexibly without modifying the original algorithm.
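As a toy illustration of the optimality criteria described above (not the network-flow algorithm from the talk), the profile of a matching and a "more students with earlier choices" comparison can be sketched in Python; all names and data here are hypothetical:

```python
# Hypothetical sketch: matchings are lists of (student, rank) pairs, where
# rank i means the student was assigned their ith-choice project (1-indexed).

def profile(matching, max_rank):
    """Return the profile vector: component i counts the students who
    obtained their (i+1)th-choice project."""
    p = [0] * max_rank
    for _student, rank in matching:
        p[rank - 1] += 1
    return p

def greedy_better(p1, p2):
    """Greedy criterion: p1 is preferred to p2 if, at the first rank
    where the profiles differ, p1 assigns more students."""
    for a, b in zip(p1, p2):
        if a != b:
            return a > b
    return False

m1 = [("s1", 1), ("s2", 1), ("s3", 2)]
m2 = [("s1", 1), ("s2", 2), ("s3", 2)]
print(profile(m1, 3))                                   # [2, 1, 0]
print(greedy_better(profile(m1, 3), profile(m2, 3)))    # True
```

Here m1 is greedily preferred to m2 because it gives two students (rather than one) a first-choice project.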

Query Auto-completion & Composite retrieval (17 March, 2014)

Speaker: Stewart Whiting and Horatiu Bota

=Recent and Robust Query Auto-Completion by Stewart Whiting=

Query auto-completion (QAC) is a common interactive feature that assists users in formulating queries by providing completion suggestions as they type. In order for QAC to minimise the user’s cognitive and physical effort, it must: (i) suggest the user’s intended query after minimal input keystrokes, and (ii) rank the user’s intended query highly in completion suggestions. QAC must be both robust and time-sensitive – that is, able to sufficiently rank both consistently and recently popular queries in completion suggestions. Addressing this trade-off, we propose several practical completion suggestion ranking approaches, including: (i) a sliding window of query popularity evidence from the past 2-28 days, (ii) the query popularity distribution in the last N queries observed with a given prefix, and (iii) short-range query popularity prediction based on recently observed trends. Through real-time simulation experiments, we extensively investigated the parameters necessary to maximise QAC effectiveness for three openly available query log datasets with prefixes of 2-5 characters: MSN and AOL (both English), and Sogou 2008 (Chinese). Results demonstrate consistent and language-independent improvements of up to 9.2% over a non-temporal QAC baseline for all query logs with prefix lengths of 2-3 characters. Hence, this work is an important step towards more effective QAC approaches.
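A minimal sketch of the sliding-window idea in approach (i), with hypothetical names and data (the paper's actual ranking models are more sophisticated):

```python
from collections import Counter

def rank_completions(query_log, prefix, window, top_k=3):
    """Rank completion suggestions for `prefix` by their popularity within
    the most recent `window` queries (a crude sliding-window approximation)."""
    recent = query_log[-window:]
    counts = Counter(q for q in recent if q.startswith(prefix))
    return [q for q, _ in counts.most_common(top_k)]

log = ["flights", "florida", "flowers", "florida", "flights", "florida"]
print(rank_completions(log, "fl", window=4))  # ['florida', 'flowers', 'flights']
```

Shrinking or growing `window` trades recency sensitivity against robustness to noise, which is exactly the trade-off the abstract describes.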


=Composite retrieval of heterogeneous web search by Horatiu Bota=

Traditional search systems generally present a ranked list of documents as answers to user queries. In aggregated search systems, results from different and increasingly diverse verticals (image, video, news, etc.) are returned to users. For instance, many such search engines return to users both images and web documents as answers to the query "flower". Aggregated search has become a very popular paradigm. In this paper, we go one step further and study a different search paradigm: composite retrieval. Rather than returning and merging results from different verticals, as is the case with aggregated search, we propose to return to users a set of "bundles", where a bundle is composed of "cohesive" results from several verticals. For example, for the query "London Olympics", one bundle per sport could be returned, each containing results extracted from news, videos, images, or Wikipedia. Composite retrieval can promote exploratory search in a way that helps users understand the diversity of results available for a specific query and decide what to explore in more detail.


We proposed and evaluated a variety of approaches to construct bundles that are relevant, cohesive and diverse. We also utilize both entities and terms as surrogates to represent items and demonstrate their effectiveness in bridging the "mismatch" gap among heterogeneous sources. Compared with three baselines (traditional "general web only" ranking, federated search ranking and aggregated search), our evaluation results demonstrate significant performance improvement for a highly heterogeneous web collection.

GPG Seminar: Bridging the Divide: A New Tool-Supported Methodology for Programming Heterogeneous Multicore Machines (12 March, 2014)

Speaker: Dr Vladimir Janjic

In this talk, we present a new programming methodology for introducing and tuning parallelism on heterogeneous shared-memory systems (comprising a mixture of CPUs and GPUs) that combines algorithmic skeletons, Monte-Carlo Tree Search, and refactoring tool support. Using our approach, we demonstrate easily obtainable, significant and scalable speedups of up to 41x over the sequential code on a 24-core heterogeneous multiprocessor, comparable to the best possible speedups that could be obtained.

FATA Seminar - Verifying Differential Privacy by Program Logic (11 March, 2014)

Speaker: Marco Gaboardi
Verifying Differential Privacy by Program Logic

Differential Privacy is becoming a standard approach in data privacy: it offers ways to answer queries about sensitive information while providing strong, provable privacy guarantees, ensuring that the presence or absence of a single individual in the database has a negligible statistical effect on the query's result. Many specific queries have been shown to be differentially private, but manually checking that a given query is differentially private can be both tedious and rather subtle. Moreover, this process becomes infeasible when large programs are considered.

In this talk I will introduce the basics of differential privacy and some of the fundamental mechanisms for building differentially private programs. Additionally, I will present a verification approach based on Hoare Logic useful to certify a broad range of probabilistic programs as differentially private.

Studying the performance of semi-structured p2p information retrieval (10 March, 2014)

Speaker: Rami Alkhawaldeh

In recent decades, retrieval systems deployed over peer-to-peer (P2P) overlay networks have been investigated as an alternative to centralised search engines. Although modern search engines provide efficient document retrieval, there are several drawbacks, including: a single point of failure, maintenance costs, privacy risks, information monopolies held by search engine companies, and difficulty retrieving hidden documents on the web (i.e. the deep web). P2P information retrieval (P2PIR) systems promise an alternative distributed system to the traditional centralised search engine architecture. Users and creators of web content in such networks have full control over what information they wish to share as well as how they share it.




Researchers have been tackling several challenges in building effective P2PIR systems: (i) collection (peer) representation during indexing, (ii) peer selection during search, to route queries to relevant peers, and (iii) merging of the final peer results. Semi-structured P2P networks (i.e. partially decentralised unstructured overlay networks) offer an intermediate design that minimizes the weaknesses of both centralised and completely decentralised overlay networks and combines the advantages of the two topologies. An evaluation framework for this kind of network is therefore necessary to compare the performance of different P2P approaches and to guide the development of new and more powerful approaches. In this work, we study the performance of three cluster-based semi-structured P2PIR models and explain the effect of several important design considerations and parameters on retrieval performance, as well as the robustness of these types of network.


4pm @ Level 4

Programming ability in 4th year (07 March, 2014)

Speaker: Patrick Prosser

Last semester 29 of my CP(M) students submitted program code for their 1st assessed exercise. As part of the feedback, I delivered a presentation to the class where I anonymised code and showed every student everyone’s code. We then had an open discussion.  I want to replay this presentation to you, our academics and researchers.

I will start by presenting the assessed exercise (Sudoku). I will then give a model answer (implemented by myself). You can then critique my code. I will then present snippets from all student submissions. This will give you an idea of what our students think is acceptable code and it will give you feedback on how well we do with respect to teaching our students to program.

FATA Seminar - Verification of Concurrent Quantum Protocols by Equivalence Checking (04 March, 2014)

Speaker: Simon Gay
Verification of Concurrent Quantum Protocols by Equivalence Checking

We present a tool which uses a concurrent language for describing quantum systems, and performs verification by checking equivalence between specification and implementation. In general, simulation of quantum systems using current computing technology is infeasible. We restrict ourselves to the stabilizer formalism, in which there are efficient simulation algorithms.

In particular, we consider concurrent quantum protocols that behave functionally in the sense of computing a deterministic input-output relation for all interleavings of the concurrent system.
Crucially, these input-output relations can be abstracted by superoperators, enabling us to take advantage of linearity.  This allows us to analyse the behaviour of protocols with arbitrary input, by simulating their operation on a finite basis set consisting of stabilizer states.

Despite the limitations of the stabilizer formalism and also the range of protocols that can be analysed using this approach, we have applied our equivalence checking tool to specify and verify interesting and practical quantum protocols from teleportation to secret sharing.

Joint work with Ebrahim Ardeshir-Larijani and Rajagopal Nagarajan.

FATA Seminar - Backbones for equality (25 February, 2014)

Speaker: Mike Codish
Backbones for equality

Mike is visiting the school between 24th and 26th February, 2014. His research interests include the development and application of formal techniques to aid in the compilation and implementation of sequential and concurrent logic programs and to analyse, optimise and reason about such programs. In this talk Mike will give a general introduction to his work. Specifically, he will describe his structured approach to solving finite domain constraint problems through an encoding to SAT, and the BEE tool (Ben-Gurion Equi-propagation Encoder). He will introduce the notion of the backbone of a CNF formula, and describe his recent work, in which backbones are generalized to capture equations between literals.

Inside The World’s Playlist (23 February, 2014)

Speaker: Manos Tsagkias


We describe the algorithms behind Streamwatchr, a real-time system for analyzing the music listening behavior of people around the world. Streamwatchr collects music-related tweets, extracts artists and songs, and visualises the results in two ways: (i) currently trending songs and artists, and (ii) newly discovered songs.


A Novel Approach in Task-based Parallelism (19 February, 2014)

Speaker: Ashkan Tousimojarad

The Glasgow Parallel Reduction Machine (GPRM) is a novel task-based parallel framework targeting shared-memory many-core systems. Our approach is to provide a task composition language with default parallel evaluation, implemented using a restricted subset of C++ which is familiar and easy to use for the end users.

In this talk, I will shortly describe the structure of the GPRM. Micro-benchmarks used to identify bottlenecks in other approaches, specifically OpenMP, will be shown. Then I will present a solution we have used in our framework to increase the scalability of an LU Factorisation algorithm for Sparse Matrices. 

GPG Seminar: Pattern-based Autotuning of Parallel Programs (19 February, 2014)

Speaker: Dr Murray Cole

As well as offering a simple API, pattern (or skeleton) based parallel programming models define implementation templates which can be tuned automatically. We will review the motivations for such an approach, and present four case-studies of its application.

FATA Seminar - Sudoku as an Assessed Exercise in Constraint Programming: an analysis of student programming ability (18 February, 2014)

Speaker: Patrick Prosser
Sudoku as an Assessed Exercise in Constraint Programming: an analysis of student programming ability

In the Constraint Programming course (Masters) the first exercise is to code up a solver for the Sudoku puzzle, investigating the effect of using 2 subtly different models and checking that problem instances do indeed have unique solutions. I will present snippets of java code from the 29 students who submitted work. Much of this presentation will be entertaining, but the underlying message is rather serious.

GPG Seminar: Compiler Optimisations: from Splendid Isolation to the Hive Mind (12 February, 2014)

Speaker: Dr Jeremy Singer

This is a roadmap talk about general compiler optimisations. I will start by giving a brief overview of how compiler optimisations can be classified, discussing ahead-of-time optimisation, feedback-directed optimisation and runtime optimisation. I will discuss how information sharing enables more effective optimisation. I will argue that this trend is taking us to a Big Brother optimisation utopia.

FATA Seminar - An Exact Branch and Bound Algorithm with Symmetry Breaking for the Maximum Balanced Induced Biclique Problem (11 February, 2014)

Speaker: Ciaran McCreesh
An Exact Branch and Bound Algorithm with Symmetry Breaking for the Maximum Balanced Induced Biclique Problem

Garey and Johnson's Hard Problem GT24 is to determine whether a given graph contains a balanced induced biclique of a particular size. We already have some good algorithms for the maximum clique problem---we adapt these, to introduce the first branch and bound algorithm for the maximum balanced induced biclique problem.

But why care about balanced induced bicliques? We are not aware of any applications (although other biclique variants do show up in data mining). Instead, we are interested because the problem has some nice properties from an algorithmic perspective: solutions are reasonably well-behaved, and it has the simplest possible non-trivial symmetry. We show how the symmetry can be removed, using only two small lines of code. But how much easier does this make the problem? How do symmetries interact with branch and bound? What about parallel branch and bound? And what's going on with random graphs?
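For intuition, a brute-force search (not the talk's branch and bound algorithm) with the A/B swap symmetry removed by a simple ordering constraint might look as follows; all names here are hypothetical:

```python
from itertools import combinations

def is_induced_biclique(adj, a_set, b_set):
    """Check that (a_set, b_set) induces a complete bipartite subgraph:
    every cross edge present, no edges within either side."""
    for a in a_set:
        for b in b_set:
            if b not in adj[a]:
                return False
    for side in (a_set, b_set):
        for u, v in combinations(side, 2):
            if v in adj[u]:
                return False
    return True

def max_balanced_biclique(adj):
    """Brute-force search for the largest balanced induced biclique.
    Symmetry breaking: only consider pairs with min(A) < min(B), which
    removes the A/B swap symmetry mentioned in the abstract."""
    n = len(adj)
    best = 0
    for k in range(1, n // 2 + 1):
        for a_set in combinations(range(n), k):
            rest = [v for v in range(n) if v not in a_set]
            for b_set in combinations(rest, k):
                if min(a_set) < min(b_set) and is_induced_biclique(adj, a_set, b_set):
                    best = max(best, k)
    return best

# A 4-cycle is itself a balanced induced biclique with sides of size 2.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(max_balanced_biclique(adj))  # 2
```

The `min(a_set) < min(b_set)` test is the two-line symmetry break: each unordered pair {A, B} is examined exactly once.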

Guest Talk - Enhancing Network Structure to Increase Resilience and Survivability (11 February, 2014)

Speaker: Prof James P.G. Sterbenz

Resilience and survivability of the Future Internet is increasingly important to preserve critical services, particularly against attackers with knowledge of the structure and vulnerabilities of the network, as well as against large scale disasters that affect a large area.  A brief motivation and introduction will be given to the ResiliNets architecture, strategy, design principles, and analysis methodology.  This presentation will then describe the graph-theoretic properties required for flow-robustness, and introduce our path diversity measures.  Two current research directions will then be described:  how to add links to existing graphs under cost constraints to increase flow robustness, and geographic diversity as a basis for multipath geodiverse end-to-end transport (ResTP) and routing (GeoDivRP).

Models and Modalities (05 February, 2014)

Speaker: Joe Davidson

In this talk I will attempt to examine the relationship between the size of a program written in a particular language and the expressiveness of the semantics of that language. We compare across the program/memory boundary using Random Access Stored Program (RASP) machines as a Von Neumann model, and the classic Turing Machine as our Harvard architecture. We will also iterate on our RASP machine to see how minor semantic changes affect our program sizes within a single memory model.

In addition, I explain how semantics are not as suitable as we'd like for these comparisons and use this to motivate how I have grounded our models using Field Programmable Gate Arrays. I will present the current results from both the program+semantics analysis and the grounding into FPGAs.

GPG Seminar: Parallelising the Computational Algebra System GAP (05 February, 2014)

Speaker: Dr Alexander Konovalov

I will give an overview of the project "HPC-GAP: High Performance Computational Algebra and Discrete Mathematics", aimed at parallelising the GAP system to support both shared and distributed memory programming models. In particular, I will describe the memory model used by the multithreaded version of the GAP system, corresponding GAP language extensions, and challenges that we are meeting while making the GAP code thread-safe.

GIST Talk - Socially Intelligent Sensing Systems (04 February, 2014)

Speaker: Dr Hayley Hung

One of the fundamental questions of computer science is about understanding how machines can best serve people. In this talk, I will focus on answering the question of how automated systems can achieve this by being aware of people as social beings. So much of our lives revolves around face-to-face communication. It affects our relationships with others, the influence they have over us, and how this can ultimately transform into decisions that affect a single person or many more people. However, we understand relatively little about how to automate the perception of social behaviour, and recent research findings only touch the tip of the iceberg.

In this talk, I will describe some of the research I have carried out to address this gap by presenting my work on devising models to automatically interpret face-to-face human social behaviour using cameras, microphones, and wearable sensors. This will include addressing problems such as automatically estimating who is dominating the conversation? Are these two people attracted to each other? I will highlight the challenges facing this fascinating research problem and open research questions that remain.

Bio: Hayley Hung is an Assistant Professor and Delft Technology Fellow in the Pattern Recognition and Bioinformatics group at the Technical University of Delft in the Netherlands. Before that she held a Marie Curie Intra-European Fellowship at the Intelligent Systems Lab at the University of Amsterdam, working on devising models to estimate various aspects of human behaviour in large social gatherings. Between 2007-2010, she was a post-doctoral researcher at Idiap Research Institute in Switzerland, working on methods to automatically estimate human interactive behaviour in meetings such as dominance, cohesion and deception. She obtained her PhD in Computer Vision from Queen Mary University of London, UK in 2007 and her first degree from Imperial College, UK in Electrical and Electronic Engineering.

FATA Seminar - Stability in networks (04 February, 2014)

Speaker: Ágnes Cseh
Stability in networks

The well-known notion of stable matchings can be extended in several interesting ways, one of them operates with network flows. The stable flow problem lies on the border of Mathematics, Economics and Computer Science. We are given a directed network, where the vertices symbolize vendors, while the edges stand for the possible deals between them. We talk about stability if there is no pair of vendors who mutually want to change the current flow of goods. In this talk, we shorty summarize the results currently known about the problem. Besides showing algorithms to find such flows, we also sketch problems related to max flows, flows over time, restricted edges, multicommodity flows and uncoordinated markets.

GIST Talk - Passive Brain-Computer Interfaces for Automated Adaptation and Implicit Control in Human-Computer Interaction (31 January, 2014)

Speaker: Dr Thorsten Zander

In the last three decades, Brain-Computer Interfaces (BCIs) have been investigated extensively as a means of interaction. While most research has aimed at the design of supportive systems for severely disabled persons, the last decade showed a trend towards applications for the general population. For users without disabilities, a specific type of BCI, the passive Brain-Computer Interface (pBCI), has shown high potential for improving Human-Machine and Human-Computer Interaction. In this seminar I will discuss the categorization of BCI research, in which we introduced the idea of pBCIs in 2008, and potential areas of application. Specifically, I will present several studies providing evidence that pBCIs can have a significant effect on the usability and efficiency of given systems. I will show that the user's situational interpretation, intention and strategy can be detected by pBCIs. This information can be used to adapt the technical system automatically during interaction and enhance the performance of the Human-Machine System. From the perspective of pBCIs a new type of interaction emerges, based on implicit control. Implicit Interaction aims at controlling a computer system through behavioral or psychophysiological aspects of user state, independently of any intentionally communicated commands. This introduces a new type of Human-Computer Interaction which, in contrast to most forms of interaction implemented nowadays, does not require the user to explicitly communicate with the machine. Users can focus on understanding the current state of the system and developing strategies for optimally reaching the goal of the given interaction. Based on information extracted by a pBCI and the given context, the system can adapt automatically to the current strategies of the user. In a first study, a proof of principle is given by implementing an Implicit Interaction to guide simple cursor movements in a 2D grid to a target.
The results of this study clearly indicate the high potential of Implicit Interaction and introduce a new bandwidth of applications for passive Brain-Computer Interfaces.

GIST Talk - Mindless Versus Mindful Interaction (30 January, 2014)

Speaker: Yvonne Rogers

We are increasingly living in our digital bubbles. Even when physically together – as families and friends in our living rooms, outdoors and public places - we have our eyes glued to our own phones, tablets and laptops. The new generation of ‘all about me’ health and fitness gadgets, wallpapered in gamification, is making it worse. Do we really need smart shoes that tell us when we are being lazy and glasses that tell us what we can and cannot eat? Is this what we want from technology – ever more forms of digital narcissism, virtual nagging and data addiction? In contrast, I argue for a radical rethink of our relationship with future digital technologies. One that inspires us, through shared devices, tools and data, to be more creative, playful and thoughtful of each other and our surrounding environments.

Econophysics and the Euro Crisis (29 January, 2014)

Speaker: Paul Cockshott

The talk will analyse the Euro Crisis on the basis of a model of financial systems as being characterised by a law of increasing entropy which has the effect of precipitating a rentier class and of engendering periodic crises.

It presents an overview of the prospects of the stability pact which gives a very sceptical assessment of its chances of success.

Analysing Twitter Traffic on the Independence Referendum - A Thesis Procrastination Project (TPP) (29 January, 2014)

Speaker: Michael Comerford

Social media platforms like Twitter are of growing interest to political science and sociology as additional data sources and the Independence Referendum offers an opportunity to analyse public discourse in a unique set of historical circumstances. Working with Policy Scotland in the College of Social Sciences I've been harvesting twitter traffic for the #IndyRef and visualising the network of twitter users and hashtags over snapshot time periods. The purpose of this talk is to outline my methodology and some early findings. I'd be keen to get feedback on whether considering the data as a network is useful and what further analysis in this vein might be interesting. Further details of the project can be found here:

GPG Seminar: Smart, Adaptive Mapping of Parallelism in the Presence of External Workload (29 January, 2014)

Speaker: Prof. Michael O'Boyle

Given the wide-scale adoption of multi-cores in mainstream computing, parallel programs rarely execute in isolation and have to share the platform with other applications that compete for resources. If the external workload is not considered when mapping a program, it leads to a significant drop in performance. This talk describes an automatic approach that combines compile-time knowledge of the program with dynamic runtime workload information to determine the best adaptive mapping of programs to available resources. This approach delivers increased performance for the target application without penalizing the existing workload. The approach is evaluated on NAS and SpecOMP parallel benchmark programs across a wide range of workload scenarios.

On average, our approach achieves a performance gain of 1.5x over state-of-the-art schemes.

Session Types Revisited (28 January, 2014)

Speaker: Ornela Dardha
Session Types Revisited

Session types are a formalism to model structured communication-based programming. A session type describes communication by specifying the type and direction of data exchanged between two parties. When session types and session primitives are added to the syntax of standard π-calculus types and terms, they give rise to additional separate syntactic categories. As a consequence, when new type features are added, there is duplication of effort in the theory: the proofs of properties must be checked both on ordinary types and on session types. We show that session types are encodable in ordinary π types, relying on linear and variant types. Besides being an expressivity result, the encoding (i) removes the above redundancies in the syntax, and (ii) allows the properties of session types to be derived as straightforward corollaries, exploiting the corresponding properties of ordinary π types. The robustness of the encoding is tested on a few extensions of session types, including subtyping, polymorphism and higher-order communications.

Foyer Screens Discussion (28 January, 2014)

Foyer Screens Discussion

Requirements analysis: how do we want to use these screens?

Advanced Management Techniques for Many-core Communication Systems (22 January, 2014)

Speaker: Sharifa Khanjari

A quadtree is a tree in which each internal node has exactly four children. Quadtrees are most often used to partition a two-dimensional space by recursively subdividing it into four quadrants. My research is about building an infrastructure for many-core network-on-chip architectures using novel topologies. The infrastructure focuses on quality-of-service provisioning and task allocation. This talk will provide an overview of the quadtree topology in networks on chip and some preliminary results comparing it with a mesh using locality distribution.
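The recursive four-way subdivision described above can be sketched as a minimal point quadtree (illustrative only; the class and method names are hypothetical and unrelated to the network-on-chip work):

```python
class Quadtree:
    """Minimal point quadtree: a node splits its square into four
    quadrants (NW, NE, SW, SE) once it holds more than `capacity` points."""

    def __init__(self, x, y, size, capacity=1):
        self.x, self.y, self.size = x, y, size  # top-left corner and side length
        self.capacity = capacity
        self.points = []
        self.children = None  # the four sub-quadrants after a split

    def insert(self, px, py):
        if self.children is None:
            self.points.append((px, py))
            if len(self.points) > self.capacity:
                self._split()
            return
        self._child_for(px, py).insert(px, py)

    def _split(self):
        half = self.size / 2
        self.children = [
            Quadtree(self.x, self.y, half, self.capacity),                # NW
            Quadtree(self.x + half, self.y, half, self.capacity),         # NE
            Quadtree(self.x, self.y + half, half, self.capacity),         # SW
            Quadtree(self.x + half, self.y + half, half, self.capacity),  # SE
        ]
        # Push the stored points down into the new quadrants.
        for px, py in self.points:
            self._child_for(px, py).insert(px, py)
        self.points = []

    def _child_for(self, px, py):
        half = self.size / 2
        east = px >= self.x + half
        south = py >= self.y + half
        return self.children[2 * south + east]

tree = Quadtree(0, 0, 100)
for p in [(10, 10), (80, 20), (30, 70)]:
    tree.insert(*p)
print(tree.children is not None)  # True: the root has split into 4 quadrants
```

After the second insertion the root exceeds its capacity and subdivides, exactly the recursive partitioning the abstract refers to.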

2 months in California: from bigraphs to autonomous vehicles (+ a bit of sunshine) (21 January, 2014)

Speaker: Michele Sevegnani
2 months in California: from bigraphs to autonomous vehicles (+ a bit of sunshine)

Professor Sengupta's group at UC Berkeley has been investigating the use of the BigActor Model as a formalism for modelling and controlling systems of autonomous vehicles such as Unmanned Air Vehicles (UAVs), Autonomous Surface Vehicles (ASVs) and Autonomous Underwater Vehicles (AUVs). BigActors are distributed concurrent computational entities that interact with a dynamical structure of the world modelled as a bigraph. An example application of BigActors is the specification of environmental monitoring missions in which teams of autonomous vehicles collaborate for locating and tracking oil spills in the ocean.
The aim of my recent visit to UC Berkeley was to adapt and combine the modelling and verification approach developed at Glasgow for the Homework run-time verification system with the BigActor Model in order to define a general framework for analysis of mobile robotic systems.
In this talk, I will present the first step in this direction: an encoding of BigActors to bigraphs. This is work in progress.

New Opportunities and Changing Scenarios in Research Funding (17 January, 2014)


 New Opportunities and Changing Scenarios in Research Funding

This is going to be a discussion on emerging research funding opportunities.
The Scottish Government has funded, or is funding, two Innovation Centres: CENSIS and Data Lab.
However, their research funding requires the involvement of a company and is also of short-term duration (3-12 months).
This requires changes in the way we traditionally seek research funding.
CENSIS and Data Lab organisers will discuss the objectives of those ICs and their implementation approaches.

We will also cover Horizon 2020 opportunities. The new Call 1 submission date is 23rd April 2017.
University and College support for gaining research funding will also be covered.

Duncan Beamer, CENSIS
Rod Murray-Smith/Iadh Ounis - Data Lab
Sara Diegoli, College/University Support - Horizon 2020
Joemon Jose - Horizon 2020

GIST Talk - Designing Hybrid Input Paradigms (16 January, 2014)

Speaker: Abigail Sellen

Visions of multimodal interaction with computers are as old as the field of HCI itself: by adding voice, gesture, gaze and other forms of input, the hope is that engaging with computers might be more efficient, expressive and natural. Yet it is only in the last decade that the dominance of multi-touch and the rise of gesture-based interaction are radically altering the ways we interact with computers. On the one hand these changes are inspirational and open up the design space; on the other hand, they have caused fractionation in interface design and added complexity for users. Many of these complexities are caused by layering new forms of input on top of existing systems and practices. I will discuss our own recent adventures in trying to design and implement these hybrid forms of input, and highlight the challenges and the opportunities for future input paradigms. In particular, I conclude that the acid test for any of these new techniques is testing in the wild. Only then can we really design for diversity of people and of experiences.

GPG Seminar: Predicting the Statistical Behaviour of Multicore Applications (15 January, 2014)

Speaker: Dr Kenneth MacKenzie

I'll talk about some work that I did in the recently-ended ADVANCE project. We were looking at multicore applications built from components, and the goal was to make static predictions of statistical properties (throughput, latency) of the overall application based on the behaviour of the components.

We had some success in this, but also met with some very strange behaviour. I'm hoping that the audience may be able to offer explanations for some of the odd behaviour we observed.

EDSAC Replica Project (10 January, 2014)

Speaker: Andrew Herbert

The aim of the EDSAC Replica Project is to build a fully functional replica of the Cambridge University Electronic Delay Storage Automatic Computer (EDSAC) as it was when it ran its first programs in 1949.  Built by a team led by M.V. Wilkes, EDSAC was the world’s first practical electronic digital computer providing a computing service to the university as a whole.  Three Nobel Prizes have been attributed to the giant leap in computing power that EDSAC delivered to Cambridge scientists.

Andrew will describe EDSAC and its principles of operation, showing what is possible in a machine that can only obey 500-600 instructions a second and has just 512 words of store, but which is not weighed down with the volume of code required by a modern operating system or programming language.  He will then go on to describe the challenges in replicating 1940s technology in the 21st century and some of the ways in which modern computers are helping in the task.

Network coding-based data dissemination for Accident Warning Systems in Vehicular Ad-hoc Networks (08 January, 2014)

Speaker: Niaz Chowdhury

Abstract: Data dissemination in Vehicular Ad-hoc Networks (VANETs) is challenging because of the dynamics of this network. It becomes more intense when a VANET supports a real-time application such as an Accident Warning System (AWS) that delivers time-dependent safety notifications. Most existing protocols use limited-scope broadcast to increase the reachability ratio of their warnings within a tolerable delay. Though there are other initiatives to distribute warning notifications in VANETs, a higher reachability ratio within a tolerable delay can most likely be secured by limited-scope broadcast. However, in turn it creates the Broadcast Storm Problem (BSP) in dense traffic and accounts for long delays in notification delivery and, in the worst case, data loss. Our research aims to develop a data dissemination scheme that matches the performance of limited-scope broadcast but does not create contention and collision in channel access, by employing Network Coding. This new scheme reduces the number of transmissions by encoding multiple notifications together and further aggregates multiple encoded notifications, with a view to reducing channel access competition.
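
The core saving can be illustrated with the simplest network-coding scheme, XOR coding (an assumption for illustration; the talk's actual scheme is more elaborate): a relay broadcasts the XOR of two warning notifications in a single transmission, and any receiver that already holds one notification can recover the other, halving the relay's channel accesses.

```python
# Illustrative XOR network coding of two equal-length notifications.
def encode(n1: bytes, n2: bytes) -> bytes:
    """Combine two notifications into one coded packet."""
    return bytes(a ^ b for a, b in zip(n1, n2))

def decode(coded: bytes, known: bytes) -> bytes:
    """Recover the missing notification from the coded packet
    and the notification the receiver already holds."""
    return bytes(a ^ b for a, b in zip(coded, known))
```

A receiver holding notification `n1` applies `decode(coded, n1)` to obtain `n2`, so one broadcast serves receivers missing either notification.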

GPG Seminar: LuvvieScript (08 January, 2014)

Speaker: Mr Gordon Guthrie

LuvvieScript

The old polyglot world of large software development teams supported by diverse operational teams no longer works. The current software environment demands teams that are capable of dripping software releases to the public by continuously deploying features. Modern teams increasingly eschew normal operations support for DevOps - where the systems maintenance and monitoring is done by the software developers. These ways of working are only possible where features can be delivered 'meat to metal' by the software devs - from human factors and design (usually in the browser) through to the server and load distribution infrastructure - down through persistent storage to the disk and back up to end user.

Traditional heterogeneous environments, for instance client-side JavaScript speaking to server-side Ruby speaking SQL to persistent storage, are increasingly problematic in this world. Each transition between two languages creates an impedance gap which slows down development and debugging and (usually) fractures the teams. Node.js is an attempt to create an impedance-free development environment by having a common language (JavaScript) client-side and server-side. Some people don't think bringing the callback single-threaded model to the server is wise. However, the rise of support for source maps in the browser means that 'compile to JS' languages are becoming viable options. ClojureScript has led the way.

LuvvieScript is an attempt to bring pattern-matching, event-driven, actor-based functional programming to the browser in a strict sub-set of Erlang (but not OTP!). The browser is intrinsically low concurrency (around 10ish 'things' on a normal web page) and needs only primitive restart semantics. The production toolchain for LuvvieScript is to take a syntactically valid Erlang module, compile it to Core Erlang, and transpile that to a valid JavaScript Abstract Syntax Tree (the Parser API from Mozilla). That JS AST can then be manipulated with normal JavaScript syntax tools before being converted to valid JavaScript with an associated source map using tools like ESCodegen. Additional runtime requirements (for instance dependent modules that require co-loading) can be specified using custom attributes in Erlang which enable the developer to arbitrarily annotate the Erlang Abstract Syntax Tree.

The resulting deployment environment will include an in-page client-side 'run-time' and a set of server-side Erlang libraries that will encapsulate the browser-server comms, enabling the front-end and back-end processes to send each other messages transparently (and front-end processes to message each other too). The developer will no longer directly script the DOM but will operate on a client-side model which will be rendered to the DOM. In this world user actions (mouse clicks, key presses, etc.) will present to the developer as messages and have to be actively subscribed to, making for a recognisably 'gen server' style of client-side programming.

About the speaker
Gordon Guthrie has been programming since the (late) 1970s, was Chief Technical Architect at - the Service Architect at the City of Edinburgh and more recently CEO/CTO of Vixo. He has only been programming Erlang for a single decade unfortunately.

School Christmas Party (20 December, 2013)


FATA Seminar - Christmas Quiz (17 December, 2013)

Speaker: Simon Gay
FATA Christmas Quiz

Machine Learning for Back-of-the-Device Multitouch Typing (17 December, 2013)

Speaker: Daniel Buschek

ENDS Seminar: ForgetMeNot (11 December, 2013)

Speaker: Jeremy Singer
ForgetMeNot - a new memory management scheme

In a future computational environment with terabytes of non-volatile RAM and petabytes of cloud-based backing store, can we avoid explicit delete/free()s and implicit garbage collection altogether? Health warning: this is a highly speculative, low-content presentation.

Heriot-Watt & Glasgow Parallelism Session (11 December, 2013)


FATA Seminar - What do we mean by persistence in stochastic models? (10 December, 2013)

Speaker: Rebecca Mancy
What do we mean by persistence in stochastic models?

In this talk I will present some of the research I'm currently working on relating to disease persistence in structured populations (e.g. those that have spatial structure or multiple host species). I will briefly introduce the model that prompted this work and the analytic theory associated with it that gives a measure of persistence capacity and a persistence threshold for deterministic models. I will then describe the computational study that I am currently working on to establish the extent to which this threshold is informative about behaviour in a stochastic version of the model.
In the second part of the talk, I will introduce several definitions of persistence from the literature and others that I have briefly considered, explaining why I believe that these don't fully capture what it is that we want to know about the system. Finally, I will introduce (early thoughts on) a new measure of stochastic persistence that I am working on, highlighting aspects of implementation that might provide interesting problems in algorithm development.

GPG Seminar: Challenges in the CloPeMa Robot Project (04 December, 2013)

Speaker: Dr Paul Cockshott, Dr Paul Siebert

Escape From the Ivory Tower: The Haskell Journey from 1990 to 2013 (03 December, 2013)

Speaker: Simon Peyton Jones

Haskell is my first baby, born slightly before my son Michael, who now has a job as a software engineer (working for Oege de Moor in Oxford).  Like Michael, Haskell’s early childhood was in Glasgow, in the warm embrace of the functional programming group at the Department of Computing Science, and enjoying the loving attention of Phil Wadler, John Hughes, John Launchbury, John O’Donnell, Will Partain, Simon Marlow, Cordelia Hall, Andy Gill, and other parent figures.  

From these somewhat academic beginnings as a remorselessly pure functional programming language, Haskell has evolved into a practical tool used for real applications.  Despite being over 20 years old, Haskell is, amazingly, still in a state of furious innovation.  In my talk I’ll try to give a sense of this long story arc, and give a glimpse of what we are up to now.

Periodic Subject Review (29 November, 2013)

Speaker: Karen Renaud

The University is currently carrying out a periodic review of our school. We are required to prepare a document to be submitted by the end of the year (for a visit in February).  We, Tania and Karen, are required to consult with staff and students in order to get their inputs, and to inform our review. We want to use the cakes talk slot to achieve this. We will pose some questions and then the floor will be yours.

GIST Seminar (28 November, 2013)

Speaker: Graham Wilson/Ioannis Politis
Perception of Ultrasonic Haptic Feedback / Evaluating Multimodal Driver Displays under Varying Situational Urgency

Two talks this week from members of the GIST group. 

Graham Wilson: Perception of Ultrasonic Haptic Feedback

Abstract: Ultrasonic haptic feedback produces tactile sensations in mid-air through acoustic radiation pressure. It is a promising means of providing 3D tactile sensations in open space without the user having to hold an actuator. However, research is needed to understand the basic characteristics of perception of this new feedback medium, and so how best to utilize ultrasonic haptics in an interface. This talk describes the technology behind producing ultrasonic haptic feedback and reports two experiments on fundamental aspects of tactile perception: 1) localisation of a static point and 2) the perception of motion. Traditional ultrasonic haptic devices are large and fixed to a horizontal surface, limiting the interaction and feedback space. To expand the interaction possibilities, the talk also discusses the feasibility of a mobile, wrist-mounted device for gestural interaction throughout a larger space. 

Ioannis Politis: Evaluating Multimodal Driver Displays under Varying Situational Urgency

Abstract: Previous studies have investigated audio, visual and tactile driver warnings, indicating the importance of conveying the appropriate level of urgency to the drivers. However, these modalities have never been combined exhaustively and tested under conditions of varying situational urgency, to assess their effectiveness both in the presence and absence of critical driving events. This talk will describe an experiment evaluating all multimodal combinations of such warnings under two contexts of situational urgency: a lead car braking and not braking. The results showed that responses were quicker when more urgent warnings were used, especially in the presence of a car braking. Participants also responded faster to the multimodal as opposed to unimodal signals. Driving behaviour improved in the presence of the warnings and the absence of a car braking. These results highlight the utility of multimodal displays to rapidly and effectively alert drivers and demonstrate how driving behaviour can be improved by such signals.

GPG Seminar: Fusing GPU kernels within a novel single-source C++ API (27 November, 2013)

Speaker: Paul Keir

The prospect of GPU kernel fusion is often described in research papers as a standalone command-line tool. Such a tool adopts a usage pattern wherein a user isolates, or annotates, an ordered set of kernels. Given such OpenCL C kernels as input, the tool would output a single kernel, which performs similar calculations, hence minimising costly runtime intermediate load and store operations. Such a mode of operation is, however, a departure from normality for many developers, and is mainly of academic interest.

Automatic compiler-based kernel fusion could provide a vast improvement to the end-user's development experience. The OpenCL Host API, however, does not provide a means to specify opportunities for kernel fusion to the compiler. Ongoing and rapidly maturing compiler and runtime research, led by Codeplay within the LPGPU EU FP7 project, aims to provide a higher-level, single-source, industry-focused C++-based interface to OpenCL. Along with LPGPU's AES group from TU Berlin, we have now also investigated opportunities for kernel fusion within this new framework; utilising features from C++11 including lambda functions; variadic templates; and lazy evaluation using std::bind expressions.

While pixel-to-pixel transformations are interesting in this context, insomuch as they demonstrate the expressivity of this new single-source C++ framework, we also consider fusing transformations which utilise synchronisation within workgroups. Hence convolutions, utilising halos; and the use of the GPU's local shared memory are also explored.

A perennial problem has therefore been restructured to accommodate a modern C++-based expression of kernel fusion. Kernel fusion thus becomes an integrated component of an extended C++ compiler and runtime.

FATA Seminar - Two-sided Matching with Partial Information (26 November, 2013)

Speaker: Baharak Rastegari
Two-sided Matching with Partial Information

"Two-sided matching markets model many practical settings, such as corporate hiring and university admission. In the traditional model, it is assumed that all agents have complete knowledge of their own preferences. As markets grow large, however, it becomes impractical for agents to precisely assess their rankings over all agents on the other side of the market. We propose a novel model of two-sided matching in which agents start with partial information about their preferences, but are able to refine this information via interviews. Our goal is to design a centralized interview policy that guarantees the outcome to be stable and optimal for one side of the market, while minimizing the number of interviews. We give evidence suggesting that the problem is hard in the general case, and show that it is polynomial-time solvable in a restricted, yet realistic, setting."

Dublin City Search: An evolution of search to incorporate city data (24 November, 2013)

Speaker: Dr Veli Bicer, IBM Research Dublin
City data comes from a diversity of sources such as sensors, devices, social networks, governmental applications, or service networks. In such a diversity of information, answering specific information needs of city inhabitants requires holistic information retrieval techniques, capable of harnessing different sources of information.

Dr Veli Bicer is a researcher at Smarter Cities Technology Center of IBM Research in Dublin. His research interests include semantic data management, semantic search, software engineering and statistical relational learning. He obtained his PhD from Karlsruhe Institute of Technology, Karlsruhe, Germany and B.Sc. and M.Sc. degrees in computer engineering from Middle East Technical University, Ankara, Turkey.

IDI Seminar: Uncertain Text Entry on Mobile Devices (21 November, 2013)

Speaker: Daryl Weir

Modern mobile devices typically rely on touchscreen keyboards for input. Unfortunately, users often struggle to enter text accurately on virtual keyboards. We undertook a systematic investigation into how to best utilize probabilistic information to improve these keyboards. We incorporate a state-of-the-art touch model that can learn the tap idiosyncrasies of a particular user, and show in an evaluation that character error rate can be reduced by up to 7% over a baseline, and by up to 1.3% over a leading commercial keyboard. We furthermore investigate how users can explicitly control autocorrection via how hard they touch.
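
The general shape of such a probabilistic keyboard can be sketched as follows. This is a hedged illustration, not the paper's learned model: it assumes an isotropic Gaussian touch likelihood around each key centre (the actual work learns per-user tap idiosyncrasies), combined with a language-model prior over keys.

```python
import math

def decode_key(touch, key_centres, prior, sigma=4.0):
    """Return the most probable key for a 2D touch point, combining a
    Gaussian touch likelihood P(touch | key) with a prior P(key)."""
    def log_posterior(key):
        (kx, ky), (tx, ty) = key_centres[key], touch
        # Log of an isotropic Gaussian, dropping the constant term.
        log_lik = -((tx - kx) ** 2 + (ty - ky) ** 2) / (2 * sigma ** 2)
        return log_lik + math.log(prior[key])
    return max(key_centres, key=log_posterior)
```

With a strong enough prior, a tap that lands nearer one key can still decode to its neighbour, which is exactly how such keyboards absorb inaccurate touches.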

FATA Seminar - Cliques, Bicliques, Clubs and Colours (19 November, 2013)

Speaker: Ciaran McCreesh
Cliques, Bicliques, Clubs and Colours

A clique in a graph is a set of vertices, each of which is adjacent to every other vertex in this set. Finding a maximum clique is one of the fundamental NP-hard problems. We discuss how a branch and bound algorithm using greedy graph colouring can be used to solve this problem in practice. We then show how to adapt the algorithm to find maximum independent sets, maximum balanced bicliques, and maximum k-cliques (if a clique is a set of friends, a 2-clique is a set of people who are either friends or who have a mutual friend, and a k-clique is a set of people separated by distance at most k). We finish with a discussion about k-clubs, which are a stricter variation of k-cliques.
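
The branch and bound idea above can be sketched in a few lines (a minimal illustration, not the speaker's implementation): each greedy colour class is an independent set, so the number of colours assigned to the remaining candidate vertices bounds how much the current clique can still grow.

```python
def greedy_colour(graph, vertices):
    """Greedily partition vertices into colour classes (independent sets);
    return (vertex, colour) pairs, low colours first."""
    classes = []
    for v in vertices:
        for cls in classes:
            if all(u not in graph[v] for u in cls):
                cls.append(v)
                break
        else:
            classes.append([v])
    return [(v, c) for c, cls in enumerate(classes, start=1) for v in cls]

def max_clique(graph):
    """graph: dict mapping each vertex to its set of neighbours."""
    best = []

    def expand(clique, candidates):
        nonlocal best
        # Branch on high-colour vertices first; colour bounds growth.
        for v, colour in reversed(greedy_colour(graph, candidates)):
            if len(clique) + colour <= len(best):
                return  # bound: cannot beat the incumbent clique
            clique.append(v)
            rest = [u for u in candidates if u in graph[v]]
            if not rest and len(clique) > len(best):
                best = clique[:]
            expand(clique, rest)
            clique.pop()
            candidates = [u for u in candidates if u != v]

    expand([], list(graph))
    return best
```

Adapting this to independent sets, bicliques or k-cliques amounts to changing the candidate-filtering step, which is the flexibility the talk exploits.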

Elementary, my dear Java: Detecting patterns in object-oriented code (19 November, 2013)

Speaker: Jeremy Singer
SEIS lunchtime seminar

In this talk I will review the ideas of low-level code patterns for Java. I will show examples of these code patterns, discuss how they can be detected and give a short survey of useful applications. I may also bring a pipe and deer-stalker, in true Holmesian style.

Economic Models of Search (18 November, 2013)

Speaker: Leif Azzopardi


Predicting Screen Touches From Back-of-Device Grip Changes (14 November, 2013)

Speaker: Faizuddin Mohd Noor

We demonstrate that front-of-screen targeting on mobile phones can be predicted from back-of-device grip manipulations. Using simple, low-resolution capacitive touch sensors placed around a standard phone, we outline a machine learning approach to modelling the grip modulation and inferring front-of-screen touch targets. We experimentally demonstrate that grip is a remarkably good predictor of touch, and we can predict touch position 200ms before contact with an accuracy of 18mm.

FATA Seminar (12 November, 2013)

Speaker: Dimitrios Kouzapas
Globally Governed Session Semantics

This paper proposes a new bisimulation theory based on multiparty session types, where a choreography specification governs the behaviour of session typed processes and their observer. The bisimulation is defined with the observer cooperating with the observed process in order to form complete global session scenarios, and is usable for proving correctness of optimisations for globally coordinating threads and processes. The induced bisimulation is strictly more fine-grained than the standard session bisimulation. The difference between the governed and standard bisimulations only appears when more than two interleaved multiparty sessions exist. The compositionality of the governed bisimilarity is proved through soundness and completeness with respect to the governed reduction-based congruence.

Online Learning in Explorative Multi Period Information Retrieval (11 November, 2013)

Speaker: Marc Sloan


In Multi Period Information Retrieval we consider retrieval as a stochastic yet controllable process, the ranking action during the process continuously controls the retrieval system's dynamics and an optimal ranking policy is found in order to maximise the overall users' satisfaction. Different aspects of this process can be fixed giving rise to different search scenarios. One such application is to fix search intent and learn from a population of users over time. Here we use a multi-armed bandit algorithm and apply techniques from finance to learn optimally diverse and explorative search results for a query. We can also fix the user and dynamically model the search over multiple pages of results using relevance feedback. Likewise we are currently investigating using the same technique over session search using a Markov Decision Process.
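
As a heavily simplified illustration of the bandit idea (plain UCB1, not the talk's finance-inspired algorithm): treat each candidate document as an arm, observe clicks from a population of users, and rank by mean click rate plus an exploration bonus so uncertain documents still get shown.

```python
import math

def ucb1_ranking(clicks, impressions, k):
    """Rank documents by UCB1 score and return the top k.
    clicks/impressions: dicts mapping document id to counts."""
    total = sum(impressions.values())
    def score(doc):
        n = impressions[doc]
        if n == 0:
            return float("inf")  # unexplored documents rank first
        # Empirical click rate plus the UCB1 exploration bonus.
        return clicks[doc] / n + math.sqrt(2 * math.log(total) / n)
    return sorted(clicks, key=score, reverse=True)[:k]
```

A rarely shown document gets a large bonus and is surfaced occasionally; as its impression count grows the bonus shrinks and the ranking converges to the observed click rates.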

FATA Seminar (05 November, 2013)

Speaker: Muffy Calder
How to win users and influence developers with probabilistic user meta models.

A short talk outlining the definition and role of new probabilistic meta models of user activity  patterns for a mass deployed app - how can reasoning about the meta models include future developments of an app? A lively fusion of formal modelling, model checking, HCI and statistics, courtesy of the Populations project.

Stopping Information Search: An fMRI Investigation (04 November, 2013)

Speaker: Eric Walden

Information search has become an increasingly important factor in people's use of information systems.  In both personal and workplace environments, advances in information technology and the availability of information have enabled people to perform far more search and access much more information for decision making than in the very recent past.  One consequence of this abundance of information has been an increasing need for people to develop better heuristic methods for stopping search, since information available for most decisions now overwhelms people's cognitive processing capabilities and in some cases is almost infinite.  Information search has been studied in much past research, and cognitive stopping rules have also been investigated.  The present research extends and expands on previous behavioral research by investigating brain activation during searching and stopping behavior using functional Magnetic Resonance Imaging (fMRI) techniques.  We asked subjects to search for information about consumer products and to stop when they believed they had enough information to make a subsequent decision about whether to purchase that product.  They performed these tasks while in an MRI machine.  Brain scans were taken that measured brain activity throughout task performance.  Results showed that different areas of the brain were active for searching and stopping, that different brain regions were used for several different self-reported stopping rules, that stopping is a neural correlate of inhibition, suggesting a generalized stopping mechanism in the brain, and that certain individual difference variables make no difference in brain regions active for stopping.  The findings extend our knowledge of information search, stopping behavior, and inhibition, contributing to both the information systems and neuroscience literatures.  Implications of our findings for theory and practice are discussed.

FATA Seminar (29 October, 2013)

Speaker: Patrick Prosser
Constraint Programming and Stable Roommates

In the stable roommates problem we have n agents, where each agent ranks all other agents. The problem is then to match agents into pairs such that no two agents prefer each other to their matched partners. A remarkably simple constraint encoding is presented that uses O(n^2) binary constraints, and in which arc-consistency (the phase-1 table) is established in O(n^3) time. This leads us to a specialised n-ary constraint that uses O(n) additional space and establishes arc-consistency in O(n^2) time. An empirical study is presented and it is observed that the n-ary constraint model can read in, model and output all matchings for an instance with n = 1,000 in about 2 seconds on current hardware platforms. This leads us to a question: egalitarian SR is NP-hard, but where are the hard problems?
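
The stability condition itself is easy to state in code (a small illustrative check, not the constraint encoding from the talk): a matching is stable iff no two agents form a "blocking pair", each preferring the other to their assigned partner.

```python
def is_stable(prefs, matching):
    """prefs[a]: a's ranking of all other agents, most preferred first.
    matching: dict mapping each agent to its partner."""
    def prefers(a, x, y):
        # True if agent a ranks x above y.
        return prefs[a].index(x) < prefs[a].index(y)
    agents = list(prefs)
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            if matching[a] != b and prefers(a, b, matching[a]) \
                    and prefers(b, a, matching[b]):
                return False  # a and b would rather pair with each other
    return True
```

The constraint model in the talk enforces exactly this condition propagatively, rather than checking it after the fact.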

SD Erlang Operational Semantics (23 October, 2013)

Speaker: Natalia Chechina

The RELEASE project aims to scale Erlang to build reliable general-purpose software, such as server-based systems, on massively parallel machines. To extend the Erlang concurrency-oriented paradigm to large-scale reliable parallelism (10^5 cores) we have implemented an extension to the Erlang language, Scalable Distributed Erlang (SD Erlang). Key goals in scaling the computation model are to provide mechanisms for controlling locality and reducing connectivity, and to provide performance portability.

In this talk we introduce an operational semantics for SD Erlang. The semantics defines an abstract state and presents the transitions of SD Erlang functions. In addition we have validated the consistency between the formal semantics and the SD Erlang implementation. Our approach is based on property-based testing; in particular we use the Erlang QuickCheck tool developed by Quviq.

FATA Seminar (22 October, 2013)

Speaker: Various
Short talks for the summer research of the group and planning

Towards Technically assisted Sensitivity Review of UK Digital Public Records (21 October, 2013)

Speaker: Tim Gollins

There are major difficulties involved in identifying sensitive information in digital public records. These difficulties, if not addressed, will, together with the challenge of managing the risks of failing to identify sensitive documents, force government departments into the precautionary closure of large swaths of digital records. Such closures will inhibit timely, open and transparent access by citizens and others in civic society. Precautionary closures will also prevent social scientists’ and contemporary historians’ access to valuable qualitative information, and their ability to contextualise studies of emerging large scale quantitative data. Closely analogous problems exist in UK local authorities, the third sector, and in other countries which are covered by the same or similar legislation and regulation. In 2012, having conducted investigations and earlier research into this problem, and with new evidence of immediate need emerging from the 20 year rule transition process, The UK National Archives (TNA) highlighted this serious issue facing government departments in the UK Public Records system; the Abaca project is the response.


The talk will outline the role of TNA, the background to sensitivity review, the impact of the move to born digital records, the nature of the particular challenge of reviewing them for sensitivity, and the broad approach that the Abaca Project is taking.



Next Monday, 4pm at 423

One VM to Rule Them All (ENDS Seminar) (16 October, 2013)

Speaker: Chris Seaton
Implementing Ruby on JVM at Oracle Labs

The Virtual Machines research group at Oracle Labs is exploring ways to implement high performance virtual machines for a wide range of languages, building on technology in the JVM.
Graal is a JVM compiler, written in Java, that exposes an API to the running program. Using Graal a program can directly access its own IR and supporting mechanisms such as dynamic code installation and invalidation.
Truffle is a framework for implementing languages in Java as simple AST interpreters, and specialising the AST over time to better suit the running program and input.
When Truffle is run on a JVM with the Graal compiler it can take the IR of all of the Java AST methods that make up your interpreter and compile them as a single machine code method optimised with partial-evaluation.
Chris's internship at Oracle Labs over the first half of this year looked at implementing the Ruby programming language using Graal and Truffle. Ruby is an extremely dynamic language and an implementation such as JRuby that uses bytecode to interface to the JVM must make a lot of redundant checks and boxing. We show how our techniques entirely remove these obstacles to high performance.
Our implementation of Ruby is both strikingly simple, written as just a Java program that interprets Ruby AST, and strikingly fast, running compute intensive benchmarks significantly faster than any other implementation of Ruby.
These techniques should be interesting to anyone who implements custom languages or who is interested in how JVM languages may work in the future.
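
The self-specialising AST idea can be sketched in miniature (a toy Python illustration, not Truffle's Java API): a generic node observes its operand types at run time, rewrites itself to a faster specialised node, and deoptimises back to the generic case if the assumption later breaks.

```python
class Lit:
    def __init__(self, value): self.value = value
    def execute(self): return self.value

class Add:
    """Generic add: handles any operand types; here non-ints concatenate."""
    def __init__(self, left, right): self.left, self.right = left, right
    def execute(self):
        l, r = self.left.execute(), self.right.execute()
        if isinstance(l, int) and isinstance(r, int):
            self.__class__ = IntAdd  # specialise this node in place
            return l + r
        return str(l) + str(r)

class IntAdd(Add):
    """Specialised add: assumes int operands, with a guard to deoptimise."""
    def execute(self):
        l, r = self.left.execute(), self.right.execute()
        if not (isinstance(l, int) and isinstance(r, int)):
            self.__class__ = Add  # assumption broken: back to generic
            return str(l) + str(r)
        return l + r
```

In Truffle the specialised tree is additionally partial-evaluated by Graal into machine code, with the guard compiled to a cheap check that triggers deoptimisation.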

Accelerating research on big datasets with Stratosphere (14 October, 2013)

Speaker: Moritz Schubotz
Stratosphere is a research project investigating new paradigms for scalable, complex analytics on massively-parallel data sets.

Stratosphere is a research project investigating new paradigms for scalable, complex analytics on massively-parallel data sets. The core concept of Stratosphere is the PACT programming model, which extends MapReduce with second-order functions like Match, CoGroup and Cross, allowing researchers to describe complex analytics tasks naturally. The result is a directed acyclic data flow graph that is optimized for parallel execution by a cost-based optimizer incorporating user code properties, and executed by the Nephele Data Flow Engine. Nephele is a massively parallel data flow engine dealing with resource management, work scheduling, communication, and fault tolerance.
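
The semantics of two of these second-order functions can be illustrated in plain Python (a toy single-machine sketch, not Stratosphere's API): Match calls the user function once per pair of records sharing a key, while CoGroup hands it the complete groups from both inputs.

```python
from collections import defaultdict

def match(left, right, udf):
    """Call udf(key, l, r) for every left/right pair with equal keys."""
    by_key = defaultdict(list)
    for k, v in right:
        by_key[k].append(v)
    return [udf(k, lv, rv) for k, lv in left for rv in by_key[k]]

def cogroup(left, right, udf):
    """Call udf(key, left_group, right_group) once per key."""
    keys = {k for k, _ in left} | {k for k, _ in right}
    lg, rg = defaultdict(list), defaultdict(list)
    for k, v in left:
        lg[k].append(v)
    for k, v in right:
        rg[k].append(v)
    return {k: udf(k, lg[k], rg[k]) for k in sorted(keys)}
```

Because each function fixes how records are grouped before the user code runs, the optimizer can choose parallel shipping and sorting strategies independently of the user function's body.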

In the seminar session we introduce Stratosphere and show how researchers can set up their working environment quickly and start doing research right away. As a proof of concept, we present how a simple Java program, parallelized and optimized by Stratosphere, obtained top results at the "exotic" Math search task at NTCIR-10. While other research groups optimized index structures and data formats and waited several hours for their indices to be built on high-end hardware, we could focus on the essential program logic, use basic data types, and run the experiments on a heterogeneous desktop cluster in several minutes.

IDI Seminar: Around-device devices: utilizing space and objects around the phone (07 October, 2013)

Speaker: Henning Pohl

For many people their phones have become their main everyday tool. While phones can fulfill many different roles, they also require users to (1) make do with affordances not specialized for the specific task, and (2) closely engage with the device itself. In this talk, I propose utilizing the space and objects around the phone to offer better task affordance and to create an opportunity for casual interactions. Around-device devices are a class of interactors that do not require the user to bring special tangibles, but repurpose items already found in the user’s surroundings. I'll present a survey study, where we determined which places and objects are available to around-device devices. I'll also talk about a prototype implementation of hand interactions and object tracking for future mobiles with built-in depth sensing.

IDI Seminar: Extracting meaning from audio – a machine learning approach (03 October, 2013)

Speaker: Jan Larsen

CSS: See you in Beijing! (27 September, 2013)

Speaker: Alice Miller

I recently visited China for two weeks: a week in Guangzhou and a week in Beijing. This involved a research visit to Sun Yat-sen University (SYSU), and attendance at a conference in Beijing (plus a bit of sightseeing). As some of you may well be planning a similar trip in the future, in this talk I’ll give some background on SYSU and discuss some of the things to remember when travelling to China. Mainly though, I’ll show you some of my photographs!

Validity and Reliability in Cranfield-like Evaluation in Information Retrieval (23 September, 2013)

Speaker: Julián Urbano

The Cranfield paradigm to Information Retrieval evaluation has been used for half a century now as the means to compare retrieval techniques and advance the state of the art accordingly. However, this paradigm makes certain assumptions that remain a research problem in Information Retrieval and that may invalidate our experimental results.

In this talk I will approach the Cranfield paradigm as a statistical estimator of certain probability distributions that describe the final user experience. These distributions are estimated with a test collection, which actually computes system-related distributions that are assumed to be correlated with the target user-related distributions. From the point of view of validity, I will discuss the strength of that correlation and how it affects the conclusions we draw from an evaluation experiment. From the point of view of reliability, I will discuss past and current practice in measuring the reliability of test collections and review several of them accordingly.
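
As a toy illustration of the reliability side (the effectiveness scores below are invented; Kendall's tau is one rank correlation commonly used to compare system orderings across topic subsets):

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall's tau rank correlation between two score lists over the same systems."""
    assert len(a) == len(b)
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(a) * (len(a) - 1) / 2
    return (concordant - discordant) / n_pairs

# Per-system mean effectiveness on two disjoint topic subsets (toy numbers):
half_a = [0.31, 0.28, 0.40, 0.22]
half_b = [0.29, 0.30, 0.38, 0.20]
print(kendall_tau(half_a, half_b))
```

A high correlation between the two halves suggests the collection ranks systems stably; a low one suggests the conclusions depend heavily on the topic sample.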

Exploration and contextualization: towards reusable tools for the humanities. (16 September, 2013)

Speaker: Marc Bron

The introduction of new technologies, access to large electronic cultural heritage repositories, and the availability of new information channels continues to change the way humanities researchers work and the questions they seek to answer. In this talk I will discuss how the research cycle of humanities researchers has been affected by these changes and argue for the continued development of interactive information retrieval tools to support the research practices of humanities researchers. Specifically, I will focus on two phases in the humanities research cycle: the exploration phase and the contextualization phase. In the first part of the talk I discuss work on the development and evaluation of search interfaces aimed at supporting exploration. In the second part of the talk I will focus on how information retrieval technology focused on identifying relations between concepts may be used to develop applications that support contextualization.


Quantum Language Models (19 August, 2013)

Speaker: Alessandro Sordoni

A joint analysis of both Vector Space and Language Models for IR using the mathematical framework of Quantum Theory revealed how both models allocate the space of density matrices. A density matrix is shown to be a general representational tool capable of leveraging capabilities of both VSM and LM representations, thus paving the way for a new generation of retrieval models. The new approach is called Quantum Language Modeling (QLM) and has shown its efficiency and effectiveness in modeling term dependencies for Information Retrieval.


Toward Models and Measures of Findability (21 July, 2013)

Speaker: Colin Wilkie
A summary of the work being undertaken on Findability

In this 10-minute talk, I will provide an overview of the project I am working on, which is about Findability, and review some of the existing models and measures of findability, before outlining the models that I have been working on.

How cost affects search behaviour (21 July, 2013)

Speaker: Leif Azzopardi
Find out about how microeconomic theory predicts user behaviour...

In this talk, I will run through the work I will be presenting at SIGIR on "How cost affects search behavior". The empirical analysis is motivated and underpinned using the Search Economic Theory that I proposed at SIGIR 2011. 

[SICSA DVF] Language variation and influence in social media (15 July, 2013)

Speaker: Dr. Jacob Eisenstein
Dr. Eisenstein works on statistical natural language processing, focusing on social media analysis, discourse, and latent variable models

Languages vary by speaker and situation, and change over time.  While variation and change are inhibited in written corpora such as news text, they are endemic to social media, enabling large-scale investigation of language's social and temporal dimensions. The first part of this talk will describe a method for characterizing group-level language differences, using the Sparse Additive Generative Model (SAGE). SAGE is based on a re-parametrization of the multinomial distribution that is amenable to sparsity-inducing regularization and facilitates joint modeling across many author characteristics. The second part of the talk concerns change and influence. Using a novel dataset of geotagged word counts, we induce a network of linguistic influence between cities, aggregating across thousands of words. We then explore the demographic and geographic factors that drive spread of new words between cities. This work is in collaboration with Amr Ahmed, Brendan O'Connor, Noah A. Smith, and Eric P. Xing.
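
A minimal sketch of the SAGE re-parametrization idea (the counts are hypothetical): group-level word distributions are modelled as a shared background log-frequency plus a sparse deviation vector, pushed through a softmax.

```python
import math

def sage_word_probs(background_logfreq, eta):
    """SAGE-style word distribution: shared background log-frequencies plus a
    sparse per-group deviation vector eta, normalized with a softmax."""
    logits = {w: background_logfreq[w] + eta.get(w, 0.0) for w in background_logfreq}
    z = sum(math.exp(l) for l in logits.values())
    return {w: math.exp(l) / z for w, l in logits.items()}

# Background from a toy corpus; one group over-uses "awesome" (hypothetical).
background = {"the": math.log(0.6), "awesome": math.log(0.1), "cat": math.log(0.3)}
group = sage_word_probs(background, {"awesome": 1.0})
print(round(group["awesome"], 3))
```

Sparsity-inducing regularization on eta (not shown) is what keeps each group's deviation limited to a few characteristic words.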

Jacob Eisenstein is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He works on statistical natural language processing, focusing on social media analysis, discourse, and latent variable models. Jacob was a Postdoctoral researcher at Carnegie Mellon and the University of Illinois. He completed his Ph.D. at MIT in 2008, winning the George M. Sprowls dissertation award.


The Use of Correspondence Analysis in Information Retrieval (11 July, 2013)

Speaker: Dr Taner Dincer
This presentation will introduce the application of Correspondence Analysis in Information Retrieval

This presentation will introduce the application of Correspondence Analysis (CA) to Information Retrieval. CA is a well-established multivariate, statistical, exploratory data analysis technique. Multivariate data analysis techniques usually operate on a rectangular array of real numbers called a data matrix, whose rows represent r observations (for example, r terms/words in documents) and whose columns represent c variables (for example, c documents, resulting in an r×c term-by-document matrix). Multivariate data analysis refers to analyzing the data in a manner that takes into account the relationships among observations and also among variables. In contrast to univariate statistics, it is concerned with the joint nature of measurements. The objective of exploratory data analysis is to explore the relationships among objects and among variables over measurements for the purpose of visual inspection. In particular, by using CA one can visually study the “Divergence From Independence” (DFI) among observations and among variables.
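
As a rough sketch of the mechanics (not the speaker's code): CA is classically computed as an SVD of the standardized residuals of the correspondence matrix. The counts below are made up.

```python
import numpy as np

# Toy 3-term x 4-document count matrix (hypothetical data).
N = np.array([[4., 1., 0., 2.],
              [1., 3., 2., 0.],
              [0., 2., 5., 1.]])

P = N / N.sum()                      # correspondence matrix (proportions)
r = P.sum(axis=1)                    # row (term) masses
c = P.sum(axis=0)                    # column (document) masses
E = np.outer(r, c)                   # expected proportions under independence
S = (P - E) / np.sqrt(E)             # standardized residuals: divergence from independence
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Row principal coordinates: positions of the terms in the CA map.
row_coords = (U * sv) / np.sqrt(r)[:, None]
print(np.round(sv**2, 4))            # inertia explained by each axis
```

Plotting the first two columns of the row and column coordinates gives the usual CA biplot for visual inspection.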

For Information Retrieval (IR), CA can serve three different uses: 1) As an analysis tool to visually inspect the results of information retrieval experiments, 2) As a basis to unify the probabilistic approaches to term weighting problem such as Divergence From Randomness and Language Models, and 3) As a term weighting model itself, "term weighting based on measuring divergence from independence". In this presentation, the uses of CA for these three purposes are exemplified.

[GIST] Talk -- The Value of Visualization for Exploring and Understanding Data (11 July, 2013)

Speaker: Prof John Stasko

Investigators have an ever-growing suite of tools available for analyzing and understanding their data. While techniques such as statistical analysis, machine learning, and data mining all have benefits, visualization provides an additional unique set of capabilities. In this talk I will identify the particular advantages that visualization brings to data analysis beyond other techniques, and I will describe the situations when it can be most beneficial. To help support these arguments, I'll present a number of provocative examples from my own work and others'. One particular system will demonstrate how visualization can facilitate exploration and knowledge acquisition from a collection of thousands of narrative text documents, in this case, reviews of wines from Tuscany.

The CloPeMa project: robotic Clothes Perception and Manipulation. (20 June, 2013)

Speaker: Computer Vision and Graphics Group

(Remember the big bundle of blue robot that sat in the Alwyn Williams building foyer? This is the story of what happened to that….)


We present current progress in CloPeMa, a 3 year open-source EU-FP7 research project which aims to advance the state of the art in the autonomous perception and manipulation of fabrics, textiles and garments. The goal of CloPeMa is to build a robot system that will learn to manipulate, perceive and fold a variety of textiles.

The novelty and uniqueness of this project is due chiefly to its generality. Various garments will be presented in a random pile on an arbitrary background, and novel ways of manipulating them (sorting, folding, etc.) will be learned on demand in a real-life dynamic environment. A key requirement is to remove any specific restrictions on how textiles can be given to and handled by the robot; this is expected to lead to greater robustness and reliability, and also to widen the field of robotic manipulation applications.


CloPeMa's main objective is closer integration of perception, action, learning, and reasoning. Perception means integrated haptic and visual sensing, recognition, and support for a perception-action reactive cycle. Actions will be performed by a cooperating pair of robotic hands, part of the CloPeMa experimental testbed that we have here in Glasgow. The hands will combine state-of-the-art solutions for manipulation of limp material: variable strength grip on a non-rigid hand mechanism using smart materials and tactile sensors with large areas of “artificial skin”.


Members of the Computer Vision and Graphics Group are developing the primary vision system for the Clopema robot and this talk will outline the current state of this system, overall progress to date in CloPeMa and plans for on-going and future developments using the CloPeMa robot facility. 


Information Visualization for Knowledge Discovery (13 June, 2013)

Speaker: Professor Ben Shneiderman, University of Maryland - College Park
This talk reviews growing commercial success stories in information visualization, as well as emerging products.

Full information on the talk is available on the University events listings.

The Matrix Mechanics of Modern Economies (07 June, 2013)

Speaker: Dave Zachariah

In this talk we will try to answer the questions "What is money?" and "What is the source of economic value?" using concepts from matrix algebra. We will also show how these tools provide a framework for understanding income distributions in market economies, the nature of government surpluses and sector balances, the fallacy of austerity and persistent trade surpluses, and the wealth of nations.

On being the CSA for Scottish Government (04 June, 2013)

Speaker: Muffy Calder

An overview of what I do in "the other job".

A study of Information Management in the Patient Surgical Pathway in NHS Scotland (03 June, 2013)

Speaker: Matt-Mouley Bouamrane

We conducted a study of information management processes across the patient surgical pathway in NHS Scotland. While the majority of General Practitioners (GPs) consider electronic information systems as an essential and integral part of their work during the patient consultation, many were not fully satisfied with the functionalities of these systems. A majority of GPs considered that the national eReferral system streamlined referral processes. Almost all GPs reported marked variability in the quality of discharge information. Preoperative processes vary significantly across Scotland, with most services using paper based systems. There is insufficient use made of information provided through the patient electronic referral and a considerable duplication of effort with the work already performed in primary care. Three health-boards have implemented electronic preoperative information systems. These have transformed clinical practices and facilitated communication and information-sharing among the multi-disciplinary team and within the health boards. Substantial progress has been made towards improving information transfer and sharing within the surgical pathway in recent years but there remains scope for further improvements at the interface between services.

New Group Medley (31 May, 2013)

Speaker: Phil Trinder

Abstract: We are a new group joining the department, and will present a series of 5 minute talks outlining some of our research. Topics are as diverse as:

·         Researching Reliable Performance-Portable Parallel Computing – Phil Trinder
·         An Overview of Autonomous Mobile Programs -  Natalia Chechina
·         Elegance – Joe Davidson
·         Scalable Persistent Storage for Erlang – Amir Ghaffari
·         The Design and Implementation of Scalable Parallel Haskell – Malak Aljabri
·         Profiling Distributed-Memory Parallel Haskell – Maj Al Saeed


On List Colouring and List Homomorphism of Permutation and Interval Graphs (28 May, 2013)

Speaker: Jessica Enright

List colouring is an NP-complete decision problem even if the total number of colours is three. It is hard even on planar bipartite graphs. I give a sketch of a polynomial-time algorithm for solving list colouring of permutation graphs with a bounded total number of colours. This generalises to a polynomial-time algorithm that solves the list-homomorphism problem to any fixed target graph for a large class of input graphs including all permutation and interval graphs.
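
For contrast with the polynomial-time algorithm sketched in the talk, the naive exponential-time check is easy to state. A minimal sketch with a toy graph:

```python
from itertools import product

def list_colourable(adj, lists):
    """Brute-force check that a graph admits a proper colouring in which each
    vertex v receives a colour from lists[v]. Exponential time; toy graphs only."""
    vertices = sorted(adj)
    for assignment in product(*(lists[v] for v in vertices)):
        colour = dict(zip(vertices, assignment))
        # proper: adjacent vertices get distinct colours (each edge checked once)
        if all(colour[u] != colour[v] for u in adj for v in adj[u] if u < v):
            return True
    return False

# A triangle where one vertex is restricted to a single colour:
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(list_colourable(adj, {0: [1], 1: [1, 2], 2: [2, 3]}))
```

The NP-completeness result means no such brute force can be avoided in general; the talk's contribution is that structure (permutation/interval graphs, bounded colour count) makes the problem tractable.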

Interdependence and Predictability of Human Mobility and Social Interactions (23 May, 2013)

Speaker: Mirco Musolesi

The study of the interdependence of human movement and social ties of individuals is one of the most interesting research areas in computational social science. Previous studies have shown that human movement is predictable to a certain extent at different geographic scales. One of the open problems is how to improve the prediction exploiting additional available information. In particular, one of the key questions is how to characterise and exploit the correlation between movements of friends and acquaintances to increase the accuracy of the forecasting algorithms.

In this talk I will discuss the results of our analysis of the Nokia Mobile Data Challenge dataset, showing that, by means of multivariate nonlinear predictors, it is possible to exploit mobility data of friends in order to improve user movement forecasting. This can be seen as a process of discovering correlation patterns in networks of linked social and geographic data. I will also show how mutual information can be used to quantify this correlation, and I will demonstrate how to use this quantity to select individuals with correlated mobility patterns in order to improve movement prediction. Finally, I will show how exploiting data from friends dramatically improves prediction compared with using information from people who have no social ties with the user.
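
A minimal sketch of the mutual-information idea (the location labels below are invented, not from the Nokia dataset): higher mutual information between two users' location series suggests one is informative about the other.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two aligned discrete series."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint counts
    px, py = Counter(xs), Counter(ys)   # marginal counts
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy hourly location labels for a user and a friend (hypothetical):
user   = ["home", "work", "work", "gym", "home", "work"]
friend = ["home", "work", "work", "gym", "home", "gym"]
print(round(mutual_information(user, friend), 3))
```

In the talk's setting, friends whose series score high on this quantity are the ones worth adding as inputs to the movement predictor.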

Discovering, Modeling, and Predicting Task-by-Task Behaviour of Search Engine Users (20 May, 2013)

Speaker: Salvatore Orlando

Users of web search engines are increasingly issuing queries to accomplish their daily tasks (e.g., “finding a recipe”, “booking a flight”, “reading online news”, etc.). In this work, we propose a two-step methodology for discovering latent tasks that users try to perform through search engines. Firstly, we identify user tasks from individual user sessions stored in query logs. In our vision, a user task is a set of possibly non-contiguous queries (within a user search session) which refer to the same need. Secondly, we discover collective tasks by aggregating similar user tasks, possibly performed by distinct users. To discover tasks, we propose to adopt clustering algorithms based on novel query similarity functions, in turn obtained by exploiting specific features, and both unsupervised and supervised learning approaches. All the proposed solutions were evaluated on a manually-built ground truth.
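
A drastically simplified sketch of the first step (plain Jaccard term overlap stands in for the paper's richer similarity functions; the session below is invented):

```python
def jaccard(q1, q2):
    """Term-overlap similarity between two queries."""
    a, b = set(q1.split()), set(q2.split())
    return len(a & b) / len(a | b)

def cluster_queries(queries, threshold=0.3):
    """Greedy single-link grouping of a session's queries into candidate tasks:
    a query joins the first existing task containing a sufficiently similar query."""
    tasks = []
    for q in queries:
        for task in tasks:
            if any(jaccard(q, other) >= threshold for other in task):
                task.append(q)
                break
        else:
            tasks.append([q])
    return tasks

session = ["cheap flights rome", "flights rome april",
           "pasta carbonara recipe", "easy carbonara recipe"]
print(cluster_queries(session))
```

Note that the two resulting tasks are non-contiguous-friendly: a query can join an earlier task even if unrelated queries intervene.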

Furthermore, we introduce the Task Relation Graph (TGR) as a representation of users' search behaviors from a task-by-task perspective, by exploiting the collective tasks obtained so far. The task-by-task behavior is captured by weighting the edges of the TGR with a relatedness score computed between pairs of tasks, as mined from the query log. We validated our approach on a concrete application, namely a task recommender system, which suggests related tasks to users on the basis of the task predictions derived from the TGR. Finally, we showed that the task recommendations generated by our technique are beyond the reach of existing query suggestion schemes, and that our solution is able to recommend tasks that users will likely perform in the near future.


Work in collaboration with Claudio Lucchese, Gabriele Tolomei, Raffaele Perego, and Fabrizio Silvestri.



[1] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Identifying Task-based Sessions in Search Engine Query Logs". Fourth ACM International Conference on Web Search and Data Mining (WSDM 2011), Hong Kong, February 9-12, 2011.

[2] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Discovering Tasks from Search Engine Query Logs". To appear in ACM Transactions on Information Systems (TOIS).

[3] C. Lucchese, S. Orlando, R. Perego, F. Silvestri, G. Tolomei. "Modeling and Predicting the Task-by-Task Behavior of Search Engine Users". To appear in Proc. OAIR 2013, International Conference in the RIAO series.

Personality Computing (13 May, 2013)

Speaker: Alessandro Vinciarelli



Personality is one of the driving factors behind everything we do and experience in life. During the last decade, the computing community has been showing an ever increasing interest in this psychological construct, especially when it comes to efforts aimed at making machines socially intelligent, i.e. capable of interacting with people in the same way as people do. This talk will show the work being done in this area at the School of Computing Science. After an introduction to the concept of personality and its main applications, the presentation will illustrate experiments on speech based automatic personality perception and recognition. Furthermore, the talk will outline the main issues and challenges still open in the domain.

Funding for Academic-Business Collaboration (10 May, 2013)

Speaker: Stephen Marshall and Elwood Vogt

This talk will cover the range of funding available, from First Step Awards, which provide up to £5,000 to buy out an academic’s time spent on a small project with a Scottish SME, to the University’s IAA (Impact Acceleration Account) and Knowledge Exchange Fund, which can provide up to £30,000 to support a range of KE interventions.

Fast and Reliable Online Learning to Rank for Information Retrieval (06 May, 2013)

Speaker: Katja Hofmann

Online learning to rank for information retrieval (IR) holds promise for allowing the development of "self-learning search engines" that can automatically adjust to their users. With the large amount of e.g., click data that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge.


In this talk I will present my recent work, which addresses the challenges posed by learning from natural user interactions. First, I will detail a new method, called Probabilistic Interleave, for inferring user preferences from users' clicks on search results. I show that this method allows unbiased and fine-grained ranker comparison using noisy click data, and that this is the first such method that allows the effective reuse of historical data (i.e., collected for previous comparisons) to infer information about new rankers. Second, I show that Probabilistic Interleave enables new online learning to rank approaches that can reuse historical interaction data to speed up learning by several orders of magnitude, especially under high levels of noise in user feedback. I conclude with an outlook on research directions in online learning to rank for IR, that are opened up by our results.
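
Probabilistic Interleave itself is beyond a short snippet, but its simpler predecessor, team-draft interleaving, conveys the basic idea of an interleaved comparison (a minimal sketch with hypothetical document IDs; clicks on each "team's" documents count as votes for that ranker):

```python
import random

def team_draft_interleave(ranking_a, ranking_b, rng=None):
    """Team-draft interleaving: per round, a coin flip decides which ranker
    drafts first; each ranker adds its best not-yet-shown document to its team."""
    rng = rng or random.Random(0)
    interleaved, team_a, team_b = [], set(), set()
    all_docs = set(ranking_a) | set(ranking_b)
    while len(interleaved) < len(all_docs):
        order = [(ranking_a, team_a), (ranking_b, team_b)]
        if rng.random() < 0.5:
            order.reverse()
        for ranking, team in order:
            doc = next((d for d in ranking if d not in interleaved), None)
            if doc is not None:
                interleaved.append(doc)
                team.add(doc)
    return interleaved, team_a, team_b

mixed, team_a, team_b = team_draft_interleave(["d1", "d2", "d3"], ["d3", "d1", "d4"])
print(mixed)
```

The contribution described in the talk goes further: Probabilistic Interleave draws the shown list from softmax distributions over both rankings, which is what makes unbiased reuse of historical click data possible.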

Sensing Infrastructure for a mini-Smart City within SoCS (03 May, 2013)

Speaker: Craig Macdonald and Dyaa Albakour

In this talk, we will describe our motivations and plans to deploy a sensing infrastructure within SAWB. In particular, we will describe how a mini-Smart city environment fits within wider initiatives, such as the University's sensor systems research area, and the SMART FP7 project. Indeed, such Smart city environments will facilitate information access and search for real-world events. We will then discuss plans for deploying visual sensors within SAWB, describing the proposed locations, the analysis that will be performed and the protection policies implemented. 

The Hospitals/Residents problem with Free pairs (30 April, 2013)

Speaker: Augustine Kwanashie

In the classical Hospitals/Residents problem, a blocking pair exists with respect to a matching if both agents would be better off by coming together, rather than remaining with their partners in the matching (if any). However, blocking pairs that exist in theory need not undermine a matching in practice: the absence of social ties between agents may cause a lack of awareness that a blocking pair exists. We define the Hospitals/Residents problem with Free pairs (HRF), in which a subset of acceptable resident-hospital pairs are identified as free. This means that they can belong to a matching M but they can never block M. Free pairs essentially correspond to residents and hospitals that do not know one another. Relative to a relaxed stability definition for HRF, called local stability, we show that locally stable matchings can have different sizes and that the problem of finding a maximum locally stable matching is NP-hard, though approximable within 3/2. Furthermore, we give polynomial-time algorithms for two special cases of the problem. This is joint work with David Manlove.
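
A minimal sketch of the central definition (toy instance; `free` marks pairs that may belong to a matching but never block it):

```python
def blocking_pairs(matching, res_pref, hosp_pref, capacity, free=frozenset()):
    """Find resident-hospital pairs that block a matching, skipping 'free' pairs.
    (r, h) blocks if r prefers h to r's assignment (or is unmatched) and h is
    undersubscribed or prefers r to one of its assignees."""
    assigned = {r: h for r, h in matching}
    loads = {}
    for _, h in matching:
        loads[h] = loads.get(h, 0) + 1
    blocks = []
    for r, prefs in res_pref.items():
        for h in prefs:
            if assigned.get(r) == h:
                break                     # hospitals after this point are worse for r
            if (r, h) in free or r not in hosp_pref[h]:
                continue
            hp = hosp_pref[h]
            undersubscribed = loads.get(h, 0) < capacity[h]
            prefers_r = any(hp.index(r) < hp.index(r2)
                            for r2, h2 in matching if h2 == h)
            if undersubscribed or prefers_r:
                blocks.append((r, h))
    return blocks

res_pref  = {"r1": ["h1", "h2"], "r2": ["h1", "h2"]}
hosp_pref = {"h1": ["r1", "r2"], "h2": ["r2", "r1"]}
capacity  = {"h1": 1, "h2": 1}
M = [("r1", "h2"), ("r2", "h1")]
print(blocking_pairs(M, res_pref, hosp_pref, capacity))
print(blocking_pairs(M, res_pref, hosp_pref, capacity, free={("r1", "h1")}))
```

In the toy instance, (r1, h1) blocks M classically, but marking it free makes M locally stable.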


Entity Linking for Semantic Search (29 April, 2013)

Speaker: Edgar Meij

Semantic annotations have recently received renewed interest with the explosive increase in the amount of textual data being produced, the advent of advanced NLP techniques, and the maturing of the web of data. Such annotations hold the promise for improving information retrieval algorithms and applications by providing means to automatically understand the meaning of a piece of text. Indeed, when we look at the level of understanding that is involved in modern-day search engines (on the web or otherwise), we come to the obvious conclusion that there is still a lot of room for improvement. Although some recent advances are pushing the boundaries already, information items are still retrieved and ordered mainly using their textual representation, with little or no knowledge of what they actually mean. In this talk I will present my recent and ongoing work, which addresses the challenges associated with leveraging semantic annotations for intelligent information access. I will introduce a recently proposed method for entity linking and show how it can be applied to several tasks related to semantic search on collections of different types, genres, and origins. 
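
A baseline far simpler than the talk's learning-based method illustrates what entity linking involves (the "commonness" statistics below are invented; real systems derive them from, e.g., anchor-text counts):

```python
def link_entities(text, commonness):
    """Greedy dictionary-based entity linking: map each known surface form in
    the text to its most common target entity. A crude baseline sketch."""
    links = {}
    tokens = text.lower().split()
    for n in (2, 1):                       # prefer longer surface forms first
        for i in range(len(tokens) - n + 1):
            form = " ".join(tokens[i:i + n])
            if form in commonness and form not in links:
                entity, _ = max(commonness[form].items(), key=lambda kv: kv[1])
                links[form] = entity
    return links

# Hypothetical commonness statistics (anchor-text counts, say):
commonness = {
    "java": {"Java_(programming_language)": 120, "Java_(island)": 30},
    "new york": {"New_York_City": 500, "New_York_(state)": 200},
}
print(link_entities("Moving to New York to write Java", commonness))
```

The hard part, and the subject of the talk, is going beyond raw commonness: using context to decide that "Java" here means the language and not the island.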

Causality (26 April, 2013)

Speaker: Neil McDonnell

There has been a significant amount of work within Analytic Philosophy directed at understanding our concept of Causation. The central question is: what are the conditions that must obtain in order that one thing be considered the cause of another? Hume was famously skeptical on this question but David Lewis, an ardent Humean, made some substantial breakthroughs in his 1973 Counterfactual Analysis of Causation. This analysis forms the de facto standard test for causation in certain legal contexts and has had an enormous impact on the philosophical literature and beyond. Recently, Computer Scientists Joe Halpern and Judea Pearl adapted a central insight of Lewis's analysis into their account of causal modelling for the computer sciences.

In this paper I will introduce the Lewisian concept of Causation, discuss some problems for it that are the object of my thesis, and then tie that to the work of Judea Pearl in particular.
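
A minimal sketch of the causal-modelling side: a toy structural model in the spirit of Halpern and Pearl, where counterfactual "but-for" dependence is read off by intervening on a variable (the fire example is a standard illustration, not from the talk).

```python
def simulate(scm, intervention=None):
    """Evaluate a structural causal model given as {variable: function(values)},
    in topological order, applying do()-style interventions where specified."""
    intervention = intervention or {}
    values = {}
    for var, fn in scm.items():           # dict order doubles as topological order
        values[var] = intervention.get(var, fn(values))
    return values

# Both a lightning strike and an arsonist can start a fire:
scm = {
    "lightning": lambda v: True,
    "arsonist":  lambda v: False,
    "fire":      lambda v: v["lightning"] or v["arsonist"],
}
actual = simulate(scm)
counterfactual = simulate(scm, intervention={"lightning": False})
print(actual["fire"], counterfactual["fire"])
```

The fire depends counterfactually on the lightning (intervening to remove it removes the fire), which is the Lewisian test; the Halpern-Pearl definition refines this to handle preemption and overdetermination cases.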

Optimizing Multicore Java Virtual Machines (17 April, 2013)

Speaker: Khaled Alnowaiser

The Java Virtual Machine (JVM) consumes a significant portion of its execution time performing internal services such as garbage collection and optimising compilation. Multicore processors offer the potential to reduce JVM service overhead by utilising the parallel hardware. However, JVM developers face many challenges in adapting these services to achieve optimal performance. This talk will motivate and discuss multicore garbage collection performance and some behavioural observations of the OpenJDK HotSpot JVM. We will propose some potential solutions for JVM performance optimisation.

A hierarchy related to interval orders (16 April, 2013)

Speaker: Sergey Kitaev

A partially ordered set (poset) is an interval order if it is isomorphic to some set of intervals on the real line ordered by left-to-right precedence. Interval orders are important in mathematics, computer science, engineering and the social sciences. For example, complex manufacturing processes are often broken into a series of tasks, each with a specified starting and ending time. Some of the tasks are not time-overlapping, so at the completion of the first task, all resources associated with that task can be used for the following task. On the other hand, if two tasks have overlapping time periods, they compete for resources and thus can be viewed as conflicting tasks.

A poset is said to be (2+2)-free if no two disjoint 2-element chains have comparable elements. In 1970, Fishburn proved that (2+2)-free posets are precisely interval orders. Recently, Bousquet-Mélou, Claesson, Dukes, and Kitaev introduced ascent sequences, which not only allowed us to enumerate interval orders, but also to connect them to other combinatorial objects, namely to Stoimenow's matchings, to certain upper triangular matrices, and to certain pattern avoiding permutations (a very active area of research these days). A host of papers by various authors has followed this initial paper.

In this talk, I will review some of results from these papers and will discuss a hierarchy of objects related to interval orders.
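
The ascent-sequence connection is easy to demonstrate: the number of ascent sequences of length n equals the number of interval orders, i.e. (2+2)-free posets, on n points (the Fishburn numbers 1, 2, 5, 15, 53, ...). A minimal enumeration sketch:

```python
def ascent_sequences(n):
    """Generate all ascent sequences of length n: x1 = 0, and each later entry
    lies between 0 and 1 + (number of ascents in the prefix so far)."""
    def extend(seq, ascents):
        if len(seq) == n:
            yield tuple(seq)
            return
        for v in range(ascents + 2):
            yield from extend(seq + [v], ascents + (1 if v > seq[-1] else 0))
    if n == 0:
        yield ()
    else:
        yield from extend([0], 0)

print([sum(1 for _ in ascent_sequences(n)) for n in range(1, 6)])
```

The bijections mentioned in the talk carry these same counts over to Stoimenow's matchings, certain upper triangular matrices, and certain pattern-avoiding permutations.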

Flexible models for high-dimensional probability distributions (04 April, 2013)

Speaker: Iain Murray

Statistical modelling often involves representing high-dimensional probability distributions. The textbook baseline methods, such as mixture models (non-parametric Bayesian or not), often don’t use data efficiently. Whereas the machine learning literature has proposed methods, such as Gaussian process density models and undirected neural network models, that are often too computationally expensive to use. Using a few case-studies, I will argue for increased use of flexible autoregressive models as a strong baseline for general use.
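
A minimal sketch of the autoregressive idea: any joint distribution factorises by the chain rule, so a high-dimensional density model only needs one conditional model per dimension (the repeat-the-last-bit conditional below is invented for illustration):

```python
import math

def autoregressive_log_prob(x, conditionals):
    """log p(x) via the chain rule: sum_i log p(x_i | x_{<i}).
    conditionals[i] maps the prefix x[:i] to a distribution over x_i;
    any conditional model (tables, neural nets, ...) fits this interface."""
    return sum(math.log(conditionals[i](x[:i])[x[i]]) for i in range(len(x)))

# Toy binary model: each bit tends to repeat the previous one.
def cond(prefix):
    if not prefix:
        return {0: 0.5, 1: 0.5}
    last = prefix[-1]
    return {last: 0.8, 1 - last: 0.2}

x = (1, 1, 0)
lp = autoregressive_log_prob(x, [cond] * len(x))
print(round(math.exp(lp), 3))   # p = 0.5 * 0.8 * 0.2
```

The flexibility argued for in the talk comes from making each conditional a learned, expressive model while keeping evaluation and sampling exact and cheap.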

[GIST] Talk -- Shape-changing Displays: The next revolution in display technology? (28 March, 2013)

Speaker: Dr Jason Alexander

Shape-changing interfaces physically mutate their visual display surface to better represent on-screen content, provide an additional information channel, and facilitate tangible interaction with digital content. This talk will preview the current state-of-the-art in shape-changing displays, discuss our current work in this area, and explore the grand challenges in this field. The talk will include a hardware demonstration of one such shape-changing device, a Tilt Display.

Jason is a lecturer in the School of Computing and Communications at Lancaster University. His primary research interests are in Human-Computer Interaction, with a particular interest in developing the next generation of interaction techniques. His recent research is hardware-driven, combining tangible interaction and future display technologies. He was previously a post-doctoral researcher in the Bristol Interaction and Graphics (BIG) group at the University of Bristol. Before that he was a Ph.D. student in the HCI and Multimedia Lab at the University of Canterbury, New Zealand. More information can be found at

TechMeetup (27 March, 2013)

Speaker: Jason Frame & Iain Watt
TechMeetup Glasgow is back on the 5th Floor of the School of Computing this evening from 6:30pm.



The talks are:

Brain Rules  - Iain Watt
In 2009 Dr. John Medina gave us 12 "Brain Rules" - what scientists know for sure about how our brains work.
In this talk I'll ask you to consider how we as technologists might take advantage of some of these "brain rules" to be happier and more productive in our creative endeavours.

A JavaScript Extravaganza - Jason Frame

There'll be beer and pizza as usual and plenty of time before, between, and after the talks to catch up on the latest tech news & gossip. As ever, the event is free and no sign-up is necessary.

TechMeetup is made possible by the amazing financial support from the University of Glasgow, NewContext, ScottLogic, SkyScanner and small donations from community members. Thank you all.


Engineering Adaptive Software Systems (19 March, 2013)

Speaker: Dr Arosha Bandara

Adaptive software systems have been the focus of significant research activity due to their promise of addressing some of the complexity challenges associated with large software intensive systems.  In 2003, Kephart and Chess published their vision of autonomic computing, which aimed to address some of the challenges of software complexity.  In essence, they proposed that software architectures should incorporate a layer, analogous to the autonomic nervous system, that could adapt the behaviour of the system to meet particular quality attributes (e.g., security, usability, etc.). The challenges of engineering such systems encompass a range of computing disciplines, that include requirements engineering, software architectures and usability.  This talk will explore these challenges, drawing on work being done at The Open University in the areas of adaptive user interfaces, information security and privacy. 
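
The autonomic layer Kephart and Chess proposed is usually described as a MAPE-K loop (Monitor, Analyze, Plan, Execute over a shared Knowledge base). A minimal sketch, with an invented cache-tuning example:

```python
def mape_k_step(monitor, analyze, plan, execute, knowledge):
    """One pass of the autonomic MAPE-K loop: Monitor -> Analyze -> Plan ->
    Execute, with all four stages sharing a Knowledge base."""
    symptoms = monitor(knowledge)
    problem = analyze(symptoms, knowledge)
    if problem is None:
        return None                       # system healthy: no adaptation needed
    change_plan = plan(problem, knowledge)
    return execute(change_plan, knowledge)

# Hypothetical self-tuning example: shrink a cache when memory runs low.
knowledge = {"mem_used": 0.93, "cache_size": 1024}
result = mape_k_step(
    monitor=lambda k: {"mem_used": k["mem_used"]},
    analyze=lambda s, k: "high_memory" if s["mem_used"] > 0.9 else None,
    plan=lambda p, k: {"cache_size": k["cache_size"] // 2},
    execute=lambda plan, k: k.update(plan) or k,
    knowledge=knowledge,
)
print(result["cache_size"])
```

The engineering challenges the talk raises (requirements, architecture, usability) concern how to design each of these stages for qualities such as security and privacy, not just performance.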

Using formal stochastic models to guide decision making -- Should I fix this problem now or in 3 hours? (19 March, 2013)

Speaker: Michele Sevegnani

NATS is the UK's main air navigation service provider. Its control centre in Prestwick constantly monitors the status of its infrastructure via thousands of sensors situated in numerous radar and communication sites all over the UK's territory. The size and complexity of this system often makes it difficult to interpret the sensed data and impossible to predict the system's future behaviour.

In this talk, we present on-going work in which a stochastic model is used to guide decision making. In particular, we will show a prototype web-app based on the formal model that could allow the engineering team in the control room to perform stochastic model checking in a simple and intuitive way, without prior knowledge of formal methods. The analysis results can then be used to schedule, prioritise and optimise maintenance, without affecting safety.

Proactive Social Media Use of Emergency Authorities (19 March, 2013)

Speaker: Preben Bonnen & Martin Marcher
Preben Bonnén and Martin Marcher will be discussing the opportunities and perspectives of proactive social media use by civil authorities in the context of civil protection.

In the summer of 2012, the Danish Forum for Civil Protection and Emergency Planning / Forum for Samfundets Beredskab (FSB), started a large project focusing on the authorities' proactive use of social media, primarily Facebook and Twitter. The inspiration came from the Norwegian and Swedish police, who not only proactively use Facebook and Twitter, but they have also previously made thorough considerations regarding the possibilities and prospects for the use of social media.

The rationale behind the launch of an analysis, and later that year a seminar on 2 November 2012 in the Danish Parliament, was the growing challenges authorities are facing in relation to both the media and the press, and in relation to social media. In all cases there is an expectation of quick information, even more so in the possible event of a major incident, where questions and the need for information would multiply. But when questions are many, the information from the authorities is typically moderate. That may change with proactive use of social media.

Basically, there isn't much that can prevent authorities from using social media to support societal preparedness. For example, the police force can use social media tools to convey important information to the public, create campaigns targeting specific social segments, communicate enquiries regarding criminals or missing persons, and issue traffic warnings. Besides reaching their target audience, who may not usually be involved in dialogue with police, there is a good possibility of increasing dialogue with the general public. This can be achieved through chats with the public on various issues chosen by citizens themselves, on issues they find relevant within their own society. In time, police presence on social media will come to be expected as a normal part of their everyday job. Preben Bonnén and Martin Marcher from the Forum for Civil Protection and Emergency Planning (FSB) will give a detailed presentation discussing the opportunities and perspectives that social media offers to authorities in societal preparedness, and the extent to which authorities exploit them.

Query Classification for a Digital Library (18 March, 2013)

Speaker: Deirdre Lungley

The motivation for our query classification is the insight it gives the digital content provider into what their users are searching for and hence how their collection could be extended. This talk details two query classification methodologies we have implemented as part of the GALATEAS project: one log-based, the other using wikified queries to learn a Labelled LDA model. An analysis of their respective classification errors indicates the method best suited to particular category groups.
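As a minimal illustration of the log-based methodology (the data, category labels and function names below are invented for illustration; this is a sketch, not the GALATEAS implementation), a query can inherit the majority category of the documents users clicked for it:

```python
from collections import Counter

def classify_queries(click_log, doc_categories):
    """Assign each query the majority category of its clicked documents.

    click_log: iterable of (query, doc_id) click events.
    doc_categories: dict mapping doc_id -> category label.
    """
    votes = {}
    for query, doc_id in click_log:
        cat = doc_categories.get(doc_id)
        if cat is None:
            continue  # unlabelled document: contributes no vote
        votes.setdefault(query, Counter())[cat] += 1
    return {q: c.most_common(1)[0][0] for q, c in votes.items()}

log = [("monet", "d1"), ("monet", "d2"), ("monet", "d3"), ("trains 1900", "d4")]
cats = {"d1": "art", "d2": "art", "d3": "history", "d4": "history"}
print(classify_queries(log, cats))  # {'monet': 'art', 'trains 1900': 'history'}
```

A real log-based classifier would of course smooth over sparse clicks and noisy labels; the majority vote only conveys the core idea.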

Dynamic analysis tools considered difficult (to write) (15 March, 2013)

Speaker: Stephen Kell

Dynamic analysis tools are widely used for both profiling and bug-finding, but are difficult to develop. Portable approaches rely on instrumentation, which is complex to specify and difficult to re-use. I will give an overview of the DiSL and FRANC systems which address (respectively) these two difficulties, borrowing concepts from aspect-oriented and event-driven programming. I will also outline some unfortunate properties of the Java platform which, as revealed by bitter experience, make it especially difficult to achieve *high-coverage* dynamic analysis tools.

GIST Seminar: A Study of Information Management Processes across the Patient Surgical Pathway in NHS Scotland (14 March, 2013)

Speaker: Matt-Mouley Bouamrane

Preoperative assessment is a routine medical screening process to assess a patient's fitness for surgery. Systematic reviews of the evidence have suggested that existing practices are not underpinned by a strong evidence-base and may be sub-optimal.

We conducted a study of information management processes across the patient surgical pathway in NHS Scotland, using the Medical Research Council Complex Intervention Framework and mixed-methods.

Most preoperative services were created in the last 10 years to reduce late theatre cancellations and increase the proportion of day-case surgery. Two health boards have set up electronic preoperative information systems, and stakeholders at these services reported overall improvements in processes. General Practitioners' (GPs') referrals are now made electronically, and GPs considered electronic referrals a substantial improvement. GPs reported minimal interaction with preoperative services. Post-operative discharge information was often considered unsatisfactory.

Conclusion: Although substantial progress has been made in recent years towards improving information transfer and sharing among care providers within the NHS surgical pathway, there remains considerable scope for improvement at the interface between services.

Extremal graphs (12 March, 2013)

Speaker: Patrick Prosser and Alice Miller

Reusing Historical Interaction Data for Faster Online Learning to Rank for IR (12 March, 2013)

Speaker: Anne Schuth


Online learning to rank for information retrieval (IR) holds promise for allowing the development of “self-learning” search engines that can automatically adjust to their users. With the large amounts of data (e.g., clicks) that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge.


In this paper we investigate whether and how previously collected (historical) interaction data can be used to speed up learning in online learning to rank for IR. We devise the first two methods that can utilize historical data (1) to make feedback available during learning more reliable and (2) to preselect candidate ranking functions to be evaluated in interactions with users of the retrieval system. We evaluate both approaches on 9 learning to rank data sets and find that historical data can speed up learning, leading to substantially and significantly higher online performance. In particular, our preselection method proves highly effective at compensating for noise in user feedback. Our results show that historical data can be used to make online learning to rank for IR much more effective than previously possible, especially when feedback is noisy.

Scientific Lenses over Linked Data: Identity Management in the Open PHACTS project (11 March, 2013)

Speaker: Alasdair Gray, University of Manchester


The discovery of new medicines requires pharmacologists to interact with a number of information sources ranging from tabular data to scientific papers, and other specialized formats. The Open PHACTS project, a collaboration of research institutions and major pharmaceutical companies, has developed a linked data platform for integrating multiple pharmacology datasets that form the basis for several drug discovery applications. The functionality offered by the platform has been drawn from a collection of prioritised drug discovery business questions created as part of the Open PHACTS project. Key features of the linked data platform are:

1) Domain specific API making drug discovery linked data available for a diverse range of applications without requiring the application developers to become knowledgeable of semantic web standards such as SPARQL;

2) Just-in-time identity resolution and alignment across datasets enabling a variety of entry points to the data and ultimately to support different integrated views of the data;

3) Centrally cached copies of public datasets to support interactive response times for user-facing applications.


Within complex scientific domains such as pharmacology, operational equivalence between two concepts is often context-, user- and task-specific. Existing linked data integration procedures and equivalence services do not take the context and task of the user into account. We enable users of the Open PHACTS platform to control the notion of operational equivalence by applying scientific lenses over linked data. The scientific lenses vary the links that are activated between the datasets, which affects the data returned to the user.
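The idea of a lens can be sketched in a few lines (the identifiers and link sets below are invented for illustration; they are not the Open PHACTS data or API): each lens activates a different subset of cross-dataset equivalence links, so the same entity resolves to different equivalents depending on the lens in force.

```python
# Each lens activates a different subset of cross-dataset equivalence links.
# Identifiers and link sets are illustrative placeholders only.
LENSES = {
    "exact_match": {("chembl:25", "drugbank:DB00945")},
    "chemistry_relaxed": {("chembl:25", "drugbank:DB00945"),
                          ("chembl:25", "chebi:15365")},  # e.g. ignoring salt form
}

def equivalents(entity, lens):
    """Return the entities operationally equivalent to `entity` under `lens`."""
    out = set()
    for a, b in LENSES[lens]:
        if a == entity:
            out.add(b)
        elif b == entity:
            out.add(a)
    return out

print(sorted(equivalents("chembl:25", "chemistry_relaxed")))
```

The design point is that the link data stays fixed while the lens, chosen per user and task, decides which links count as identity.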



Alasdair is a researcher in the MyGrid team at the University of Manchester. He is currently working on the Open PHACTS project which is building an Open Pharmacological Space to integrate drug discovery data. Alasdair gained his PhD from Heriot-Watt University, Edinburgh, and then worked as a post-doctoral researcher in the Information Retrieval Group at the University of Glasgow. He has spent the last 10 years working on novel knowledge management projects investigating issues in relating data sets.

Further Adventures with the Raspberry Pi Cloud (05 March, 2013)

Speaker: David White, Jeremy Singer (and L4 project student)

With money from GU Chancellor's Fund, we have been constructing a scale model of a cloud datacenter out of Raspberry Pi boards. In this presentation, we will give details of the aims of the project, potential deployment in research and teaching contexts, and progress to date.

Formal Models for Populations of User Activity Patterns and Varieties of Software Structures (05 March, 2013)

Speaker: Oana Andrei

The challenges raised by developing mobile applications come from the way these apps interweave with everyday life and are distributed globally via application centres or stores to a wide range of users. People use an app according to their needs and understanding; therefore, one can observe variations in the usage frequencies of features, or in the time and duration of use. The same mobile app also varies with app settings, mobile device settings, device model and operating system.

For this talk we present work in progress on a formal modelling approach suitable for representing and analysing the user activity patterns and the structural variability of a software system. It is based on a stochastic abstraction of the populations of software in use and the software uses, building upon results from statistical analysis of user activity patterns. One aim of our current research is to design for variability of uses and contexts that mobile software developers may not be able to fully predict. Based on the automatically logged feedback on in-app usage and configurations, inference methods and formal modelling and analysis connect and collaborate to provide information on relevant populations of similar user behaviour and software structure and to evaluate their performance and robustness. This way we can track behavioural changes in the population of users and suggest software improvements to fit new user behaviours and contexts and changes in the user behaviour. The software designers and developers will then (re)consider the design objectives and strategies, create more personalised modules to be incorporated in the software and identify new opportunities to improve the overall user experience. We use a real life case study based on an iOS game to illustrate the concepts.

This talk is based on joint work with Muffy Calder, Mark Girolami and Matthew Higgs.

Modelling Time & Demographics in Search Logs (01 March, 2013)

Speaker: Milad Shokouhi

Knowing users' context offers great potential for personalizing web search results or related services such as query suggestion and query completion. Contextual features cover a wide range of signals: query time, user's location, search history and demographics can all be regarded as contextual features that can be used for search personalization.

In this talk, we’ll focus on two main questions:

1) How can we use existing contextual features, in particular time, to improve search results? (Shokouhi & Radinsky, SIGIR ’12)

2) How can we infer missing contextual features, in particular user demographics, from search history? (Bi et al., WWW 2013)


Our results confirm that (1) contextual features matter and (2) that many of them can be inferred from search history.

Pre-interaction Identification By Dynamic Grip Classification (28 February, 2013)

Speaker: Faizuddin Mohd Noor

We present a novel authentication method to identify users as they pick up a mobile device. We use a combination of back-of-device capacitive sensing and accelerometer measurements to perform classification, and obtain increased performance compared to previous accelerometer-only approaches. Our initial results suggest that users can be reliably identified during the pick-up movement before interaction commences.
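The classification step could be sketched as follows (a nearest-centroid toy with invented feature values, not the authors' classifier): each user's enrolment samples, e.g. concatenated capacitive grip maps and accelerometer statistics, are averaged into a centroid, and a new pick-up sample is attributed to the nearest one.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def train(labelled):
    """labelled: dict user -> list of feature vectors
    (e.g. capacitive grip map features + accelerometer statistics)."""
    return {user: centroid(vs) for user, vs in labelled.items()}

def identify(model, features):
    """Return the enrolled user whose centroid is nearest to the sample."""
    return min(model, key=lambda u: math.dist(model[u], features))

model = train({"alice": [[0.0, 0.0], [0.0, 2.0]],
               "bob": [[10.0, 10.0], [10.0, 12.0]]})
print(identify(model, [1.0, 1.0]))  # alice
```

A deployed system would use a stronger classifier and a rejection threshold for unknown users; the sketch only shows the identify-on-pick-up idea.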

Wireless sensor networks for real time particle tracking in inaccessible environments (27 February, 2013)

Speaker: George Maniatis

One of the most difficult problems in contemporary geophysics is the description and prediction of the movement of riverbeds. In the Lagrangian description of the system, the whole movement can be resolved into the combined result of the movement of individual grains across several time and space scales. Verifying this type of model demands the acquisition of data that a) express the synergistic effect of hydrological and topographical circumstances, b) describe the movement of each grain as a continuous process, especially during events of special interest (like floods), and c) give representative macroscopic information for the riverbed (synchronous monitoring of many grains). Although many contemporary technologies have been applied (advanced RFID techniques, specialized piezoelectric sensors, sonar, etc.), none of the existing datasets meets all three requirements. The first stage of this project is the development of a wireless sensor that will be able to monitor robustly all the phases of individual grain movement (entrainment, transition, deposition) by correlating measures of both causal and resulting factors (experienced accelerations and travel-path length/position respectively). The second stage will be the deployment of a number of sensors which will be installed in artificial and/or natural stones and will form a wireless network of smart-pebble motes that addresses the need for representative macroscopic information.
The final stage will be the integration of this WSN into a monitoring system that will, along with the data concerning the movement of the grains, provide synchronous information about the state of the river (stage discharge, flow velocity, local topography, etc.). This is a challenging application, with constraints posed on all aspects of the WSN (from the motes and the physical layer to the network and, finally, the application layer). Those constraints are driven by the special characteristics of the system (difficult initial sensor calibration, demand for robust under-water RF communication, harsh environmental conditions, etc.) and the stochasticity of the process under study (need for robust event detection algorithms, decision making based on highly variable thresholds, real-time reprogramming for recalibration, etc.).

NPL Sensors Presentation (27 February, 2013)

Speaker: Carlos Huggins

The National Physical Laboratory provides much of the UK's outward-facing support to science and commerce in the field of metrology, viz. the science and practice of measurement. This covers anything from international work on the definition of fundamental standards, through the realisation of practical equipment that can transfer knowledge along the supply chain, to training and best-practice support. Typical and topical examples of these roles will be discussed, covering fields as varied as nuclear power, energy harvesting and climate science, and the audience will be challenged to answer a question which may be key to achieving impact from their own research work: “Do I have a way of convincing a series of strangers to believe and adopt my results?”. The role of the Knowledge Networks team in supporting the Measurement Network, and other networks, in facilitating progress on this type of challenge will be discussed.

Why am I not running the world? (26 February, 2013)

Speaker: Dave McKay

Inspired by Suranga Chandratillake’s Turing lecture, I want to develop his theme of the “The Boffin Phallacy”. Using wild assertions and examples from my own career, and with no humility whatsoever, I will point out some things that Suranga missed. I will put aside fears of losing my academic friends and alienating academic researchers everywhere, and try to show that a business life is exciting and sexy. Along the way, I hope to suggest some ways that we can turn out computing graduates who will one day run the globe.

Model Checking Port-Based Network Access Control for Wireless Networks (26 February, 2013)

Speaker: Yu Lu

With the rapid development of the Internet, the security of network protocols has become a focus of research. The 802.1X standard is the IEEE standard for port-based network access control. It delivers powerful authentication and data privacy as part of its robust, extensible security framework. It is this strong security, assured authentication, and dependable data protection that has made the 802.1X standard the core ingredient in today's most successful network access control (NAC) solutions. As 802.1X provides the central access authentication, the importance of its security properties is obvious. Formal methods are crucial tools for software and protocol analysis and verification; they include model checking, logic inference and theorem proving.

Model checking can help analyse security protocols by exhaustively inspecting the reachable composite system states in a finite-state-machine representation of the system. The IEEE 802.1X standard provides port-based network access control for hybrid networking technologies. We describe how the current IEEE 802.1X mechanism for 802.11 wireless networks can be modelled in the PROMELA modelling language and verified using the SPIN model checker. We aim to verify a set of essential security properties of 802.1X, and also to find out whether the current combination of the IEEE 802.1X and 802.11 standards provides a sufficient level of security.
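The core of what a model checker like SPIN does, exhaustive enumeration of reachable states, can be sketched in miniature (the toy port automaton below is purely illustrative and is not the talk's PROMELA model):

```python
from collections import deque

def reachable_states(initial, step):
    """Exhaustively enumerate the states reachable from `initial`.
    `step` maps a state to its successor states (the protocol semantics)."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Toy 802.1X-style port automaton (illustrative state names only):
# the port should only reach "open" after "authenticated".
def step(state):
    return {"closed": ["authenticating"],
            "authenticating": ["authenticated", "closed"],
            "authenticated": ["open"],
            "open": []}[state]

print(sorted(reachable_states("closed", step)))
```

SPIN additionally checks temporal properties along paths and handles the state explosion of concurrent processes; plain breadth-first reachability is only the simplest safety check.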

Learn Physics by Programming (22 February, 2013)

Speaker: Scott Walck

I will describe a course for second-year physics students designed to deepen understanding of basic physics by using a precise, expressive language to expose the structure of a physical theory. With the functional programming language Haskell, we use types, higher-order functions, and referential transparency to encourage clear thinking and to build data structures appropriate for problems in physics. The results can be plotted or animated as appropriate.

Time-Biased Gain (21 February, 2013)

Speaker: Charlie Clark

Time-biased gain provides a unifying framework for information retrieval evaluation, generalizing many traditional effectiveness measures while accommodating aspects of user behavior not captured by these measures. By using time as a basis for calibration against actual user data, time-biased gain can reflect aspects of the search process that directly impact user experience, including document length, near-duplicate documents, and summaries. Unlike traditional measures, which must be arbitrarily normalized for averaging purposes, time-biased gain is reported in meaningful units, such as the total number of relevant documents seen by the user. In work reported at SIGIR 2012, we proposed and validated a closed-form equation for estimating time-biased gain, explored its properties, and compared it to standard approaches. In work reported at CIKM 2012, we used stochastic simulation to numerically approximate time-biased gain, an approach that provides greater flexibility, allowing us to accommodate different types of user behavior and increasing the realism of the effectiveness measure. In work reported at HCIR 2012, we extended our stochastic simulation to model the variation between users. In this talk, I will provide an overview of time-biased gain, and outline our ongoing and future work, including extensions to evaluate query suggestion, diversity, and whole-page relevance. This is joint work with Mark Smucker.
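A minimal sketch of the closed-form measure, assuming the exponential survival function with the 224-second half-life reported in the SIGIR 2012 paper (the gains and arrival times below are invented examples): each relevant document's gain is discounted by the probability that the user is still searching when they reach it.

```python
import math

def decay(t, half_life=224.0):
    """Probability the user is still working at time t seconds
    (exponential decay; the 224 s half-life is the paper's calibration)."""
    return math.exp(-t * math.log(2) / half_life)

def time_biased_gain(docs, half_life=224.0):
    """docs: list of (gain, time_to_reach_seconds) pairs down the ranking.
    Returns the expected number of relevant documents seen by the user."""
    return sum(g * decay(t, half_life) for g, t in docs)

# A relevant document reached at exactly the half-life counts as one half.
print(time_biased_gain([(1.0, 0.0), (1.0, 224.0)]))  # 1.5
```

The stochastic-simulation variants described in the abstract replace the fixed time-to-reach values with samples from a user model, but aggregate gain the same way.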

BCS/IET Turing Lecture (20 February, 2013)

Speaker: Suranga Chandratillake

What they didn't teach me: building a technology company and taking it to market

Annual BCS/IET Turing Lecture: see for full details.

Free registration at: 


Armed with a good degree and interested in relatively esoteric extremes of Computer Science, Suranga Chandratillake was all set for an academic career. A combination of events conspired to take him down the industry route instead, and he found himself starting and running his own successful company.

In going through this process he realised just how little his (otherwise excellent) education had prepared him for the challenges of starting a company, building and marketing a product and growing an organisation.

Given the economic and social impact that such endeavour can have, Suranga asks what could be done to better equip those starting down this path today?

During this year’s Turing Lecture, our speaker will cover the background to this 2000 decision and his experience of going into industry versus academia, beginning with Autonomy plc and later the founding and path to growth of his company, blinkx plc.

He will cover the technology developed at both companies including efforts to reduce complexity, increase customer-centricity and the unique challenges of building for consumers.

Suranga will also cover ‘the rest’: the importance of marketing and PR in the technology industry, raising capital, running an IPO and managing the human element (hiring, firing and cultivating people and a culture).

Turning from personal experiences, Suranga will reflect on why this route is important (including the significance of industry on technology progress and its impact on employment and national wealth) as well as how he learnt about things he didn't know before and touch on comparisons between UK and US university degrees.

He will briefly refer to ‘Turing's World: the incredible, pervasive influence of computers on our lives’ and conclude by sharing his thoughts on what more might be done to help create successful technology companies.

A Parallel Task Composition Approach to Manycore Programming (20 February, 2013)

Speaker: Ashkan Tousimojarad

Many-core processors have emerged to change the parallel computation world. Efficient utilization of these platforms is a great challenge. The Glasgow Parallel Reduction Machine (GPRM) is a novel, flexible framework for parallel task-composition based manycore programming. We structure programs into task code, written as C++ classes, and communication code, written in a restricted subset of C++ with pure functional semantics and parallel evaluation. Therefore, our approach views programs as parallel compositions of (sequential) tasks.
In this talk I will discuss GPRM, the virtual machine underlying our framework. I demonstrate its potential using an implementation of a merge sort algorithm on a 64-core Tilera processor, as well as on a conventional Intel quad-core processor. The results show that our approach outperforms the OpenMP code, while facilitating the writing of parallel programs.
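The task-composition view can be illustrated with a small parallel merge sort, with plain Python futures standing in for GPRM tasks (a sketch of the general idea, not the GPRM implementation or its C++ task classes):

```python
from concurrent.futures import ThreadPoolExecutor

def merge(left, right):
    """Sequential task: merge two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def merge_sort(xs, pool, depth=2):
    """Parallel composition: spawn one half as a task, recurse on the other.
    `depth` bounds how many levels spawn parallel tasks."""
    if len(xs) <= 1:
        return xs
    if depth == 0:
        return sorted(xs)  # sequential leaf task
    mid = len(xs) // 2
    left = pool.submit(merge_sort, xs[:mid], pool, depth - 1)
    right = merge_sort(xs[mid:], pool, depth - 1)
    return merge(left.result(), right)

with ThreadPoolExecutor() as pool:
    print(merge_sort([5, 3, 8, 1, 9, 2], pool))  # [1, 2, 3, 5, 8, 9]
```

The depth bound plays the role of cutting task granularity; GPRM instead expresses the composition in a restricted functional subset of C++ with parallel evaluation.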

The Black Hole Methodology (19 February, 2013)

Speaker: Wendy Goucher

Research is tough, demanding, frustrating and not always rewarding. And then there is the inescapable problem. In this case it was “How do you prove there is a problem?”, and therein lies the issue. There is no way to prove it, because the evidence is invisible or non-existent. This is the story of how that obstacle was tackled. The solution wasn't perfect, but it was a way forward.

MultiMemoHome Project Showcase (19 February, 2013)

Speaker: various

This event is the final showcase of research and prototypes developed during the MultiMemoHome Project (funded by EPSRC). 

Big Data and how it's influencing the modern computing landscape (18 February, 2013)

Speaker: Prof Triantafillou

Big data is arguably the biggest buzzword to have hit the CS community at large in the last few years.
In this talk I will strive to explain what the big fuss is all about, providing answers to the following questions.
What does "big data" mean?
Why is it important to society and to computing scientists?
What are the essential tools/technologies?
Why does it necessitate a new suite of related technologies?
What are the key open challenges?
Which fields of CS does it cover?

Time permitting, I will overview some of our latest research results.

Evaluating Bad Query Abandonment in an Iterative SMS-Based FAQ Retrieval System (14 February, 2013)

Speaker: Edwin Thuma

We investigate how many iterations users are willing to tolerate in an iterative Frequently Asked Question (FAQ) system that provides information on HIV/AIDS. This is part of work in progress that aims to develop an automated Frequently Asked Question system that can be used to provide answers on HIV/AIDS related queries to users in Botswana. Our system engages the user in the question answering process by following an iterative interaction approach in order to avoid giving inappropriate answers to the user. Our findings provide us with an indication of how long users are willing to engage with the system. We subsequently use this to develop a novel evaluation metric to use in future developments of the system. As an additional finding, we show that the previous search experience of the users has a significant effect on their future behaviour.

Information processing in emergency management environments (12 February, 2013)

Speaker: Stefan Raue

In this talk I will discuss some of my work on information processing in emergency management environments. In particular, I will focus on crowdsourcing techniques to improve the response to adverse events resulting from natural or man-made hazards. I will talk about the information needs of emergency services during the early stages of response, and discuss the information processing activities to which crowdsourcing activities could be beneficial. There are multiple technical, social and ethical challenges arising from the prospect of involving the crowd in large-scale information processing tasks in this time- and safety-critical environment.

ITECH: Web Startup Pitches (06 February, 2013)

Speaker: Leif Azzopardi

ITECH students will be presenting the designs of their web applications. Each team has five minutes to describe their application and its objectives, discuss the user personas the app caters for, and walk through the application using wireframes.

Multicriteria Optimization Approach to Select Images as Passwords in Recognition Based Graphical Authentication Systems (05 February, 2013)

Speaker: Soumyadeb Chowdhury

Recognition-based graphical authentication systems (RBGSs) use images as passwords. The major goal of our research is to investigate the usability and guessability (i.e. vulnerability to written and verbal descriptions) of four image types, Mikon, doodle, art and object (sports, food, sculptures, etc.), when used as passwords in RBGSs. We conducted two longitudinal user studies over a period of 4 months to evaluate the usability (100 users) and the guessability based on verbal descriptions (70 users) of these image types. After deriving conclusions based on a statistical analysis of the data, the research question was how to rank the image types on both criteria. Usability and guessability are in conflict when assessing the suitability of an image for use as a password. Since the statistical analysis alone does not unambiguously identify the most suitable image type, we present a new approach which effectively integrates a series of techniques to rank images, taking the conflicting criteria into account.
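One standard way to rank alternatives under conflicting criteria, Pareto ranking, can be sketched as follows (the scores below are invented placeholders, not the study's measurements, and the integrated technique presented in the talk may differ):

```python
def dominates(a, b):
    """a, b: (usability, resistance_to_guessing) tuples, higher is better.
    a dominates b if it is at least as good everywhere and better somewhere."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_rank(scores):
    """Rank 1 goes to non-dominated alternatives; remove them and repeat."""
    remaining, ranks, rank = dict(scores), {}, 1
    while remaining:
        front = [k for k in remaining
                 if not any(dominates(remaining[j], remaining[k])
                            for j in remaining if j != k)]
        for k in front:
            ranks[k] = rank
            del remaining[k]
        rank += 1
    return ranks

# Hypothetical (usability, guessing-resistance) scores for the image types.
scores = {"Mikon": (0.8, 0.6), "doodle": (0.7, 0.7),
          "art": (0.6, 0.5), "object": (0.9, 0.3)}
print(pareto_rank(scores))
```

With conflicting criteria, several image types can share the top rank, which is exactly why the talk's approach adds further techniques to break such ties.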

Ethical Challenges in Large Scale Mobile HCI (04 February, 2013)

Speaker: Alistair Morrison

The launch of 'app stores' on several mobile software platforms is a relatively recent phenomenon, and many HCI researchers have begun to take advantage of these distribution platforms to run human trials and gather data from hundreds of thousands of users. However, this new methodology radically changes participant-researcher relationships and has moved current researcher practice beyond available ethical guidelines. In this talk I will outline the ethical challenges specific to running mass participation mobile software trials. I present a classification scheme for categorising mobile software trials, along with a complementary set of recommended guidelines for each identified category. I encourage feedback and debate, as this work is intended to stimulate discussion towards the creation of a community consensus on ethical practice.

[IR] Searching the Temporal Web: Challenges and Current Approaches (04 February, 2013)

Speaker: Nattiya Kanhabua

In this talk, we will give a survey of current approaches to searching the temporal web. In such a web collection, the contents are created and/or edited over time; examples are web archives, news archives, blogs, micro-blogs, personal emails and enterprise documents. Unfortunately, traditional IR approaches based on term-matching alone can give unsatisfactory results when searching the temporal web. The reason for this is multifold: 1) the collection is strongly time-dependent, i.e., with multiple versions of documents, 2) the contents of documents are about events that happened at particular time periods, 3) the meanings of semantic annotations can change over time, and 4) a query representing an information need can be time-sensitive, a so-called temporal query.

Several major challenges in searching the temporal web will be discussed, namely: 1) How to understand temporal search intent represented by time-sensitive queries? 2) How to handle the temporal dynamics of queries and documents? and 3) How to explicitly model temporal information in retrieval and ranking models? To this end, we will present current approaches to the addressed problems as well as outline directions for future research.

GIST Seminar: Understanding Visualization: A Formal Approach using Category Theory and Semiotics (31 January, 2013)

Speaker: Dr Paul Vickers

We combine the vocabulary of semiotics and category theory to provide a general framework for understanding visualization in practice, including: relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. Further work will explore how the framework may be used to compare visualizations, especially those of different modalities. This may offer predictive potential before expensive user studies are carried out.

Who is old - and why should we care? (29 January, 2013)

Speaker: Dr Alistair Edwards

On Al Roth Nobel Prize-winning lecture (29 January, 2013)

Speaker: David Manlove

The 2012 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (commonly known as the Nobel Prize in Economics) was awarded jointly to Professors Alvin E Roth and Lloyd S Shapley "for the theory of stable allocations and the practice of market design".

Lloyd Shapley is the co-author of the famous Gale-Shapley algorithm (with David Gale, who sadly died in 2008).  Al Roth has been instrumental in turning theory into practice through his involvement with centralised clearinghouses in many application domains, including junior doctor allocation and kidney exchange, in addition to contributing many important theoretical results himself.

The Nobel Prize announcement was made on 15 October, and the two laureates gave their award lectures on 8 December before receiving the awards on 10 December.  We will watch Al Roth’s lecture, entitled “The Theory and Practice of Market Design” (43 mins).  This is highly relevant to FATA research, as well as being very accessible to anyone who is interested in knowing “who gets what” when it comes to sharing around scarce resources.

Probabilistic rule-based argumentation for norm-governed learning agents (28 January, 2013)

Speaker: Sebastian Riedel

There is a vast and ever-increasing amount of unstructured textual data at our disposal. The ambiguity, variability and expressivity of language make this data difficult to analyse, mine, search, visualise, and, ultimately, base decisions on. These challenges have motivated efforts to enable machine reading: computers that can read text and convert it into semantic representations, such as the Google Knowledge Graph for general facts, or pathway databases in the biomedical domain. These representations can then be harnessed by machines and humans alike. At the heart of machine reading is relation extraction: reading text to create a semantic network of entities and their relations, such as employeeOf(Person,Company), regulates(Protein,Protein) or causes(Event,Event).

In this talk I will present a series of graphical models and matrix factorisation techniques that can learn to extract relations. I will start by contrasting a fully supervised approach with one that leverages pre-existing semantic knowledge (for example, in the Freebase database) to reduce annotation costs. I will then present ways to extract additional relations that are not yet part of the schema, and for which no pre-existing semantic knowledge is available. I will show that by doing so we can not only extract richer knowledge, but also improve extraction quality for relations within the original schema. This improves over the previous state of the art by more than 10 percentage points in mean average precision.
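
As a rough illustration of the matrix-factorisation idea (not the speaker's actual models), a logistic rank-one factorisation of a tiny made-up fact matrix can score a plausible missing fact above an implausible one; the schema and all values below are invented:

```python
import numpy as np

# Toy "facts" matrix: rows are entity pairs, columns are relations/patterns
# (a made-up schema: employeeOf, worksAt-pattern, causes).
X = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [1., 0., 0.],    # the worksAt fact is plausibly missing here
              [0., 0., 1.]])

rng = np.random.default_rng(0)
k = 1                                    # very low rank forces generalisation
P = rng.normal(scale=0.1, size=(X.shape[0], k))   # entity-pair embeddings
R = rng.normal(scale=0.1, size=(X.shape[1], k))   # relation embeddings

def sq_err():
    S = 1 / (1 + np.exp(-P @ R.T))
    return float(((S - X) ** 2).sum())

before = sq_err()
for _ in range(5000):
    S = 1 / (1 + np.exp(-P @ R.T))       # predicted fact probabilities
    G = S - X                            # logistic-loss gradient w.r.t. logits
    P, R = P - 0.1 * G @ R, R - 0.1 * G.T @ P
after = sq_err()
scores = 1 / (1 + np.exp(-P @ R.T))      # scores[2, 1]: the "missing" fact
```

Because the rank is so low, the model must share structure across rows, so the unobserved worksAt cell for the third entity pair ends up scored higher than the implausible causes cell.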

Test Automation to Perform Regression Testing using an Action-Based Paradigm (25 January, 2013)

Speaker: Paul Mullen

Using manual testing to perform regression testing can prove expensive, time-consuming and potentially inaccurate. With software failures costing an estimated 300 billion USD, there needs to be a cheap and reliable way to perform testing within a short period of time.

Test automation allows a team to perform testing without utilising manpower and can be used as an on-demand service. Integrating test automation into a test plan can be a large project, and traditional methods suffer from maintenance and extension costs over the life of a product. To counter this, a new action-based paradigm was designed which relates the test script to the use cases being tested rather than the user interface.
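
The action-based idea can be sketched in a few lines; the driver API, action names and test scenario below are invented for illustration, not taken from the talk:

```python
class FakeDriver:
    """Stand-in UI driver that records interactions (hypothetical API)."""
    def __init__(self):
        self.log = []
    def fill(self, field, value):
        self.log.append(("fill", field, value))
    def click(self, target):
        self.log.append(("click", target))

def login(driver, user, password):
    # An *action*: it encodes the use case "log in", not concrete widget
    # locators, so a UI redesign only changes this one function, not every
    # test script that logs in.
    driver.fill("username", user)
    driver.fill("password", password)
    driver.click("login-button")

driver = FakeDriver()
login(driver, "alice", "secret")
```

Test scripts then become sequences of actions, which is what keeps maintenance costs flat as the user interface evolves.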

This talk dispels common misconceptions about testing and discusses how test automation can provide a robust and effective tool in the hunt for defects over the lifetime of a product.

Monitoring Crowd Movement using Social Media Platforms (18 January, 2013)

Speaker: Stefan Raue

Twenty years ago the first smartphone prototypes were presented to the public. Since then, continuous technological improvements have enabled the production of devices for the mass market, allowing the general public to adopt this technology. Globally there are now an estimated 1.038 billion smartphones in use. This has led to changes in the way information is exchanged amongst the general public. In my talk I will focus on the use of social media platforms from mobile devices as an example of a shift in communication behaviour during events (festivals, sports events, and incidents).


In this talk I will cover some of my recent work on monitoring crowd movement for large-scale public events based on social media data, alongside stories of recent social media use (e.g. during Hurricane Sandy) from around the world. The presentation will contain a brief demo visualising crowd movement during the London Olympics and the T-in-the-Park festival in 2012.


I will finish the talk by going all the way back to 1994 when Hewlett Packard released a vision video (“Synergies”) covering the technology required for future emergency management. Many of the devices and technologies shown in the video are now part of our everyday life...some require much more research.

Technology Systems in the Retail and Investment Banking Industry (11 January, 2013)

Speaker: Abyd Adhami

IT forms a strategic and integral part of every major banking and financial institution worldwide. These firms spend hundreds of millions (and for many even billions) on technology systems each year. This presentation will provide an overview of technology and its use in the banking and financial industry, touching on a number of areas, both across retail and investment banking technology.

It will look at some of the major integration architecture challenges that banks face as they continue to develop and enhance their portfolio of technology systems, striving for that commercial edge over the competition.  It will also explore a couple of “cutting edge” and interesting areas of how technology is shaping the world of electronic trading and order routing systems.

NOTE: This presentation is based on my 10 years of IT experience working across several banking and financial institutions.  Whilst preserving client confidentiality, it will include some examples of real banking systems/diagrams. It is not intended to be very technical, and I promise to include lots of stories and interesting facts hopefully appealing to a wider community.

The Hospitals Residents Problem with Couples (11 December, 2012)

Speaker: Iain McBride

The Hospitals Residents Problem (HR) is a familiar problem which seeks a stable bipartite matching between two sets: one containing residents and one containing hospitals. Each agent expresses a strict linear preference over some subset of the members of the other set. The problem is well understood, and an efficient algorithm due to Gale and Shapley exists which is guaranteed to find a stable matching in an instance of HR.
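
For readers unfamiliar with the algorithm, here is a minimal sketch of resident-oriented deferred acceptance (Gale-Shapley) for HR; the dictionary encoding and the toy instance are illustrative, not taken from the talk:

```python
def hr_gale_shapley(resident_pref, hospital_pref, capacity):
    """Resident-oriented deferred acceptance for the Hospitals/Residents problem."""
    rank = {h: {r: i for i, r in enumerate(prefs)}
            for h, prefs in hospital_pref.items()}
    assigned = {h: [] for h in hospital_pref}    # provisional assignees
    next_choice = {r: 0 for r in resident_pref}  # next hospital to apply to
    free = list(resident_pref)
    while free:
        r = free.pop()
        if next_choice[r] >= len(resident_pref[r]):
            continue                             # r has exhausted their list
        h = resident_pref[r][next_choice[r]]
        next_choice[r] += 1
        if r not in rank[h]:
            free.append(r)                       # h finds r unacceptable
            continue
        assigned[h].append(r)
        if len(assigned[h]) > capacity[h]:       # over capacity: reject worst
            worst = max(assigned[h], key=lambda x: rank[h][x])
            assigned[h].remove(worst)
            free.append(worst)                   # rejected resident tries again
    return assigned

# A made-up toy instance: three residents, two single-place hospitals.
match = hr_gale_shapley(
    {"r1": ["h1", "h2"], "r2": ["h1"], "r3": ["h2"]},
    {"h1": ["r2", "r1"], "h2": ["r1", "r3"]},
    {"h1": 1, "h2": 1},
)
```

Residents apply in preference order; an over-subscribed hospital rejects its least-preferred provisional assignee, and rejected residents simply apply to their next choice.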

However, the problem becomes intractable when the residents are able to form linked pairs, or couples. This problem is known as the Hospitals Residents Problem with Couples (HRC). In this case Ronn has previously shown that the problem of deciding whether a stable matching exists in an instance of HRC is NP-complete. We show that this NP-completeness result holds even for very restricted versions of HRC. Further, we provide an IP formulation for finding a stable matching in an instance of HRC, or reporting that the instance admits no stable matching, and present early empirical results derived from this IP model.

Severe Weather in Kyoto (07 December, 2012)

Speaker: Wim Vanderbauwhede

This summer I spent two months on a research visit at the Disaster Prevention Research Institute of Kyoto University. I went there to work on GPU acceleration of numerical weather simulations. I will discuss the motivation for this work, briefly explain the actual research, and present the outcomes.

A model checking approach for air traffic control requirement analysis (04 December, 2012)

Speaker: Michele Sevegnani

NATS provides air traffic navigation services to over 6,000 aircraft flying through UK controlled airspace every day. The huge challenge faced by the engineering team at the NATS control centre in Prestwick is to constantly monitor the status of the equipment required to provide safe and efficient en route services. This involves interpreting an unstructured data feed generated by thousands of diverse sensors, from communication link monitors to intrusion sensors.
In this talk we will describe how stochastic modelling and model checking can be employed to help in this task. Our models allow us to quantify the overall performance of the monitoring system and the quality of the service provided, to predict future behaviours, to react to external events, and to plan future upgrades by identifying weaknesses of the system and optimising assets.
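
As a flavour of the kind of quantity such stochastic models yield, a two-state (up/down) Markov component has a closed-form long-run availability; the rates below are invented for illustration and are not NATS figures:

```python
def steady_state_availability(failure_rate, repair_rate):
    """Long-run 'up' probability of a two-state (up/down) Markov component."""
    return repair_rate / (failure_rate + repair_rate)

# Hypothetical per-hour rates: a comms link and a radar feed, both required.
link = steady_state_availability(failure_rate=1 / 720, repair_rate=1 / 4)
radar = steady_state_availability(failure_rate=1 / 2000, repair_rate=1 / 8)
service = link * radar   # availability of the service, assuming independence
```

Real models of this kind compose many such components and are analysed with a stochastic model checker rather than by hand, but the balance of failure and repair rates is the same basic ingredient.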

Paired and altruistic kidney donation in the UK: how algorithms can help (30 November, 2012)

Speaker: David Manlove

A patient who requires a kidney transplant, and who has a willing but incompatible donor, may be able to "swap" their donor with that of another patient in a similar position.  This creates a "kidney exchange", involving two or more pairs swapping kidneys in a cyclic manner.  Altruistic donors can also trigger "domino paired donation chains" involving incompatible patient-donor pairs, with the final donor donating a kidney to a patient on the deceased donor waiting list.


NHS Blood and Transplant operate a UK-wide matching scheme, as part of the National Living Donor Kidney Sharing Schemes, which identifies potential kidney exchanges and domino paired donation chains involving incompatible patient-donor pairs and altruistic donors on their database every three months.

Since July 2008, NHSBT have used software produced by the School of Computing Science in order to construct an optimal solution to the underlying optimisation problem at each quarterly matching run.  This has led to at least 165 actual transplants to date.

In this talk I will describe the application in more detail and outline briefly the computational problems involved.  I will then give an overview of the results obtained to date, illustrating a couple of web applications that have been developed to assist with this task.  This is joint work with Gregg O'Malley, who has been solely responsible for the implementation of the software currently in use.
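
One of the underlying computational tasks is enumerating short donation cycles in the compatibility digraph. The sketch below (with a made-up instance; this is not the NHSBT software's method) lists all 2- and 3-cycles, each counted once:

```python
def exchange_cycles(compat, max_len=3):
    """Enumerate donation cycles of length 2..max_len in a compatibility digraph.

    compat[a] = set of pairs whose patient can receive a kidney from a's donor.
    Node labels must be orderable: each cycle is reported once, starting from
    its smallest node."""
    cycles = []

    def extend(path):
        head, tail = path[0], path[-1]
        for nxt in compat[tail]:
            if nxt == head and len(path) >= 2:
                cycles.append(tuple(path))
            elif nxt > head and nxt not in path and len(path) < max_len:
                extend(path + [nxt])

    for node in sorted(compat):
        extend([node])
    return cycles

# Hypothetical instance: A's donor suits B, B's donor suits A and C, etc.
found = exchange_cycles({"A": {"B"}, "B": {"A", "C"}, "C": {"A"}})
```

The matching scheme itself then solves an optimisation problem over these candidates, selecting a set of pairwise-disjoint cycles and chains that maximises the number (and quality) of transplants.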


Space-time modelling of climatic trends (29 November, 2012)

Speaker: Peter F. Craigmile

Classical assessments of climatic trends are based on the analysis of
a small number of time series. Considering trend to be only smooth
changes of the mean value of a stochastic process through time is
limiting, because it does not provide a mechanism to study changes of
the mean that could also occur over space. Thus, in studies of climate
there is a substantial interest in being able to jointly characterize
trends over time and space.  In this talk we discuss the salient
features of climate data that must be incorporated in statistical models
that characterize trend.  We build wavelet-based space-time
hierarchical Bayesian models that can be used to simultaneously model
trend, seasonality, and error, allowing for the possibility that the
error process may exhibit space-time long-range dependence. We
demonstrate how these statistical models can be used to assess the
significance of trend over time and space.  We motivate and apply our
methods to the analysis of space-time temperature trends.
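
As a much simpler point of comparison than the wavelet-based hierarchical models in the talk, an ordinary least-squares fit with a linear trend and one annual harmonic already separates trend from seasonality in a synthetic monthly series (all numbers below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(120.0)                         # ten years of monthly data
y = 0.02 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)

# Design matrix: intercept, linear trend, and one annual harmonic.
D = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
trend_slope = beta[1]                        # estimated trend per month
```

The models in the talk go much further: they let the trend vary over space as well as time, and allow the errors to exhibit space-time long-range dependence rather than being independent noise.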

IDI Seminar (29 November, 2012)

Speaker: Konstantinos Georgatzis
Efficient Optimisation for Data Visualisation as an Information Retrieval Task

Visualisation of multivariate data sets is often done by mapping data onto a low-dimensional display with nonlinear dimensionality reduction (NLDR) methods. We have introduced a formalism where NLDR for visualisation is treated as an information retrieval task, and a novel NLDR method called the Neighbor Retrieval Visualiser (NeRV) which outperforms previous methods. The remaining concern is that NeRV has quadratic computational complexity with respect to the number of data points. We introduce an efficient learning algorithm for NeRV where relationships between data are approximated through mixture modelling, yielding near-linear computational complexity with respect to the number of data points. The method is much faster to optimise as the number of data points grows, and it maintains good visualisation performance.
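
The retrieval view can be made concrete with a NeRV-style cost, sketched below assuming Gaussian neighbourhood probabilities with a fixed bandwidth (a simplification of the published method). Both KL terms range over all pairs of points, which is exactly the quadratic cost the mixture approximation addresses:

```python
import numpy as np

def neighbour_dist(X, sigma=1.0):
    """Row-stochastic neighbourhood probabilities from pairwise distances."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbour
    p = np.exp(-d2 / (2 * sigma ** 2))
    return p / p.sum(axis=1, keepdims=True)

def nerv_cost(P, Q, lam=0.5, eps=1e-12):
    """NeRV-style trade-off: lam * KL(P||Q) penalises missed neighbours
    (recall), (1 - lam) * KL(Q||P) penalises false neighbours (precision)."""
    kl_pq = (P * np.log((P + eps) / (Q + eps))).sum()
    kl_qp = (Q * np.log((Q + eps) / (P + eps))).sum()
    return lam * kl_pq + (1 - lam) * kl_qp
```

Here P would come from the high-dimensional data and Q from a candidate 2-D embedding; a perfect embedding makes the cost zero, and optimising the embedding coordinates against this cost is what NeRV does.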

ERMMM - Economic Resource Modelling for Memory Management (28 November, 2012)

Speaker: Jeremy Singer

How do we share resources equitably between competing individuals? In this particular case, how do we share main memory between concurrent JVM processes? Can micro-economic theory provide inspiration to software systems architects? In this week's ENDS talk I aim to address these questions in a pragmatic way.

Turing's Universal Computing Machine (23 November, 2012)

Speaker: Paul Cockshott

The talk will cover the background to Turing's machine, explain why universality was important, and address the philosophical confusion at the heart of the belief that Turing machines and the lambda calculus are equivalent. The talk is one that I gave to the British Mathematical Colloquium in May for the Turing centenary.
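
The universality in question is easy to demonstrate: an interpreter whose transition rules are plain data will run any machine it is given. A minimal (bounded-step) sketch, with an invented example machine:

```python
def run_tm(rules, tape, state="start", steps=1000):
    """Simulate a single-tape Turing machine; '_' is the blank symbol.

    rules maps (state, symbol) -> (next_state, symbol_to_write, head_move).
    Because the rules are plain data, this one interpreter runs *any*
    machine you feed it: a (bounded) universal machine in miniature."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        state, write, move = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A tiny machine that flips every bit of its input, then halts on blank:
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
```

Turing's insight was that the rule table itself can be written onto the tape, so a single fixed machine can simulate every other machine.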

VM Migration: Juggling the Data Center. (21 November, 2012)

Speaker: Gregg Hamilton

One major goal of data center operators is to give predictable, bounded performance guarantees (or SLAs) across their network. However, with the majority of traffic flows being highly dynamic and short-lived, achieving balanced network performance is somewhat problematic. Current virtual machine (VM) migration techniques balance server workloads using CPU and memory resources as migration indicators, with few considering the effects on network performance. This talk will look at the topic of my PhD work: combining server-side and network performance indicators to achieve a stable and predictable network through VM migration.

PEPA and the Diffusion Problem (20 November, 2012)

Speaker: Michael Jamieson

PEPA, a formalism for keeping track of events during interacting stochastic processes, has been advocated in this School for use in biological investigations. An example is the study of nitric oxide diffusing in a blood vessel. In this talk I will suggest that PEPA be regarded as complementary to another method, which I will describe, of accounting for this diffusion.

Context data in lifelog retrieval (19 November, 2012)

Speaker: Liadh Kelly
Context data in lifelog retrieval

Advances in digital technologies for information capture combined with
massive increases in the capacity of digital storage media mean that it is
now possible to capture and store much of one's life experiences in a
personal lifelog. Information can be captured from a myriad of personal
information devices including desktop computers, mobile phones, digital
cameras, and various sensors, including GPS, Bluetooth, and biometric
devices. This talk centers on the investigation of the challenges of
retrieval in this emerging domain and on the examination of the utility of
several implicitly recorded and derived context types in meeting these
challenges. For these investigations unique rich multimodal personal
lifelog collections of 20 months duration are used. These collections
contain all items accessed on subjects' PCs and laptops (email, web pages,
word documents, etc), passively captured images depicting subjects' lives
using the SenseCam device, and
mobile text messages sent and received. Items are annotated with several
rich sources of automatically derived context data types including
biometric data (galvanic skin response, heart rate, etc), geo-location
(captured using GPS data), people present (captured using Bluetooth data),
weather conditions, light status, and several context types related to the
dates and times of accesses to items.


Personification Using Affective Speech Synthesis (16 November, 2012)

Speaker: Matthew P. Aylett
Personification Using Affective Speech Synthesis: An Introduction

For many applications multimodal systems require the ability to convey personality. This requirement varies from the explicit, where a virtual agent is mimicking a human in an immersive training application [1], to the implicit, where effective interaction in many systems is improved due to the enhanced involvement and pleasure of using a multimodal system which creates a sense of a personality [2].

If this multimodal system is required to communicate with a human using audio, then speech synthesis is a critical element in rendering the sense of personality. In this paper we discuss the issues raised by the difficulty of assessing personality in artificial systems, as well as possible strategies for enhancing speech synthesis in order to create a deeper sense of personality rather than the more traditional objective of naturalness.

We examine work in expressive speech synthesis and present work carried out at CereProc in order to create emotional synthesis. The expressive functionality of CereVoice [3] is available in a downloadable application and has been used to produce some novel character-based Android voices. We will examine the elements of this system in terms of accent, voice adaptation, and expressive synthesis.

We will then present preliminary results from an emotional assessment of the CereProc system which will form part of a baseline of evaluation for further assessment of personality. To conclude we will present our proposed personality assessment of this speech material.

The work presented forms part of the 'personify' project supported by the Royal Society which commenced on the 1st of January 2012.

[1] L.Hall, S.Jones, A.Paiva, and R.S Aylett, “Fearnot!: providing
children with strategies to cope with bullying,” in 8th International
Conference on Interaction Design and Children, 2009.

[2] T.Bickmore, L.Pfeifer, and B.Jack, “Taking the time to care:
Empowering low health literacy hospital patients with virtual nurse
agents,” in SIGCHI Conference on Human Factors in Computing Systems,

[3] Aylett, M.P., Pidcock, C.P., “The CereVoice Characterful Speech
Synthesiser SDK”, AISB, Newcastle. pp.174-8, 2007.

Dr Matthew Aylett has over 10 years experience in commercial speech synthesis and speech synthesis research. He is a founder of CereProc, which offers unique emotional and characterful synthesis solutions and has recently been awarded a Royal Society Industrial Fellowship to explore the role of speech synthesis in the perception of character in artificial agents.

Detector Development for Hadron Physics and Applications (15 November, 2012)

Speaker: Bjoern Seitz
Detector Development for Hadron Physics and Applications

Nuclear physics research requires the detection and characterisation of ionising radiation over a large range of energies. This is especially true for fundamental science, but also for the manifold applications generated by nuclear physics research. Hence, nuclear physics groups develop a large range of expertise in radiation detection and the development of associated instrumentation across the full data chain, from signal generation to interpretation, from simulation to hardware deployment. The presentation will give a brief overview of the instrumentation development activities for fundamental and applied nuclear physics undertaken at the University of Glasgow and will highlight some particular strengths of the group.

Open Problems in 2-level Compact Routing (14 November, 2012)

Speaker: Paul Jakma

"A quick talk on the subject of my PhD, on some of the open problems in compact routing. In particular, issues around selecting landmark nodes in 2-level compact routing schemes, and their influence on other problems such as policy in routing."

The Trials and Tribulations of Typestate Inference (13 November, 2012)

Speaker: Iain McGinniss

Typestate is the combination of traditional object-oriented type theory with finite state machines that represent allowable sequences of method calls. A textual definition of a typestate, as required in specifying the type of a function parameter, is verbose to the point of being impractical. Therefore it is desirable to be able to omit such definitions where they can be accurately inferred. In this talk, I shall discuss my attempts to formally define and prove a typestate inference algorithm for a simple calculus, TS1.
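
Checking against a typestate, as opposed to inferring one, fits in a few lines; the file-object automaton below is a standard illustrative example, not the TS1 calculus from the talk:

```python
# A typestate for a file-like object: which method is legal in which state,
# expressed as a finite state machine over (state, method) pairs.
FILE_TYPESTATE = {
    ("closed", "open"):  "open",
    ("open",   "read"):  "open",
    ("open",   "close"): "closed",
}

def check(calls, typestate=FILE_TYPESTATE, start="closed"):
    """Verify a method-call sequence against a typestate automaton.

    Returns (True, final_state) for a legal sequence, or (False, reason)."""
    state = start
    for method in calls:
        if (state, method) not in typestate:
            return False, f"'{method}' not allowed in state '{state}'"
        state = typestate[(state, method)]
    return True, state
```

Inference is the harder direction the talk addresses: given only code that uses an object, recover an automaton like FILE_TYPESTATE that accepts every observed call sequence.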

From Search to Adaptive Search (12 November, 2012)

Speaker: Udo Kruschwitz

Modern search engines have been moving away from very simplistic interfaces that aimed at satisfying a user's need with a single-shot query. Interactive features such as query suggestions and faceted search are now integral parts of Web search engines. Generating good query modification suggestions or alternative queries to assist a searcher remains however a challenging issue. Query log analysis is one of the major strands of work in this direction. While much research has been performed on query logs collected on the Web as a whole, query log analysis to enhance search on smaller and more focused collections (such as intranets, digital libraries and local Web sites) has attracted less attention. The talk will look at a number of directions we have explored at the University of Essex in addressing this problem by automatically acquiring continuously updated domain models using query and click logs (as well as other sources).
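
A very simple instance of learning suggestions from query logs is to rank, for each query, the queries that followed it within a session; the log and queries below are invented:

```python
from collections import Counter, defaultdict

def build_suggestions(sessions, top_n=3):
    """Suggest refinements for q: queries that followed q in past sessions."""
    follows = defaultdict(Counter)
    for session in sessions:
        for query, nxt in zip(session, session[1:]):
            if nxt != query:
                follows[query][nxt] += 1
    return {q: [s for s, _ in c.most_common(top_n)]
            for q, c in follows.items()}

# Hypothetical query log, one list of queries per user session:
logs = [
    ["library", "library opening hours"],
    ["library", "library opening hours"],
    ["library", "library map"],
]
suggestions = build_suggestions(logs)
```

Real domain models as described in the talk are richer (they weight by clicks, decay old evidence, and update continuously), but session co-occurrence is the core signal.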

The impact of the new Computer Science standards in NZ schools (12 November, 2012)

Speaker: Professor Tim Bell
The impact of the new Computer Science standards in NZ schools

New achievement standards for Computer Science are being phased in for high
schools from 2011 to 2013. This talk will describe the process that led to
the changes, and report on two studies on the impact of the new standards on
teachers and students respectively. It will conclude with speculation on the
future impact on universities once students arrive who have had access to the new standards.

The first study is a survey of teachers, looking at the existing base of
teacher expertise, their motivations for changing, and how they have found
the transition. The second study is a detailed analysis of 151 student
reports submitted at the end of 2011 for a standard on algorithms,
programming languages and HCI. We will look at the kinds of learning that
students reported on, misconceptions, and the importance of students being
asked the right questions and using good examples to produce quality work.

Tim Bell is a professor in the Department of Computer Science and
Software Engineering at the University of Canterbury in Christchurch, New
Zealand. His main current research interest is computer science education;
in the past he has also worked on computers and music, and data
compression. His ``Computer Science Unplugged'' project is widely used
internationally, and its books and videos have been translated into about 17
languages. He is also a qualified musician, and performs regularly on
instruments that have black-and-white keyboards.

An Integer Programming Approach to the Hospitals/Residents Problem with Ties (06 November, 2012)

Speaker: Augustine Kwanashie

Matching problems generally involve the assignment of agents of one set to those of another. Often some or all of the agents have preferences over one another. An example of such a problem is the Hospitals/Residents problem with Ties (HRT) which models the problem of assigning graduating medical students to hospitals based on agents having preferences over one another, which can involve ties. Finding a maximum stable matching given an HRT instance is known to be NP-hard. We investigate integer programming techniques for producing optimal stable matchings that perform reasonably well in practice.  Gains made in the size of these matchings can deliver considerable benefits in some real-life applications. We describe various techniques used to improve the performance of these integer programs and present some empirical results.

Building Brains (19 October, 2012)

Speaker: Professor Steve Furber

When his concept of the universal computing machine finally became an engineering reality, Alan Turing speculated on the prospects for such machines to emulate human thinking. Although computers now routinely perform impressive feats of logic and analysis, such as searching the vast complexities of the global internet for information in a second or two, they have progressed much more slowly than Turing anticipated towards achieving normal human levels of intelligent behaviour, or perhaps “common sense”. Why is this?

Perhaps the answer lies in the fact that the principles of information processing in the brain are still far from understood. But progress in computer technology means that we can now realistically contemplate building computer models of the brain that can be used to probe these principles much more readily than is feasible, or ethical, with a living biological brain.

Pi-Cost and a brief introduction to DR-PI-OMEGA and DR-PI - Towards Formalizing the Cost of Computation in a Distributed Computer Network (16 October, 2012)

Speaker: Manish Gaur

The pi-calculus is a basic abstract language for describing communicating processes and has a well-developed behavioural theory expressed as equivalence relations between process descriptions; a process P equivalent to a process Q signifies that although P and Q may be intentionally very different, they offer essentially the same behaviour to their users. The basic language and its related theory have been extended in myriad ways in order to incorporate many different aspects of concurrent behaviour. In this talk, we present a new variation on the pi-calculus, picost, in which the use of channels must be paid for. Processes operate relative to a cost environment, and communication can only happen if principals have provided sufficient funds for the channels associated with the communications. We define a bisimulation-based behavioural preorder in which processes are related if, intuitively, they exhibit the same behaviour but one may be more efficient than the other. We justify our choice of preorder by proving that it is characterised by three intuitive properties which behavioural preorders should satisfy in a framework in which the use of resources must be funded.

This development, apart from other applications, is useful in formalising a distributed network with routers acting as active components in determining the quality of service of the network. We have developed two formal languages for distributed networks where computations are described explicitly in the presence of routers. Our model may be considered as an extension of the asynchronous distributed pi-calculus (ADpi). We believe that such models help in prototyping routing algorithms in the context of large networks and in reasoning about them while abstracting away excessive detail. Being general, the model may also be applied to demonstrate the role of routers in determining the quality of service of the network. Further in this talk, we intend to briefly describe the framework and the results obtained about such descriptions.

Empirical Computer Science: how not to do it (09 October, 2012)

Speaker: Patrick Prosser

Empirical Computer Science is hard. To do it well you have to be ruthlessly honest and more than a little bit paranoid. I will present two examples of "How Not to do Empirical Computer Science". NOTE: "All persons, places, and events in this presentation are real. Certain speeches and thoughts are necessarily constructions by the presenter. No names have been changed to protect the innocent, since God Almighty protects the innocent as a matter of Heavenly routine." (quote plagiarized from Kurt Vonnegut's The Sirens of Titan)

Of bison and bigraphs: modelling interactions in physical/virtual spaces (15 May, 2012)

Speaker: Muffy Calder

Mixed reality systems present a wide range of challenges for formal modelling -- how can we model interactions in both physical and virtual spaces? We start to explore this question through a specific application: modelling Steve Benford's Savannah game using Bigraphical reactive systems. The Savannah game is a collaborative, location-based game in which groups of `lions' (i.e. children with devices) hunt together on a virtual savannah that is overlaid on a (physical) open playing field. This work is in the preliminary stages and so unusually for a formal methods talk, we will not give the details of *any* formal models! Instead we will focus on which aspects of the game we can formalise and reason about, and assumptions about the level of detail required for the physical space and for the virtual space.

Soci (01 January, 2012)


Escape From the Ivory Tower: The Haskell Journey from 1990 to 2011 (01 January, 2012)

Speaker: Simon Peyton Jones
A seminar to celebrate Simon's Honorary Degree.

Haskell is my first baby, born slightly before my son Michael, who now has a job as a software engineer (working for Oege de Moor in Oxford). Like Michael, Haskell’s early childhood was in Glasgow, in the warm embrace of the functional programming group at the Department of Computing Science, and enjoying the loving attention of Phil Wadler, John Hughes, John Launchbury, John O’Donnell, Will Partain, Cordelia Hall, Simon Marlow, Andy Gill, and other parent figures. From these somewhat academic beginnings as a remorselessly pure functional programming language, Haskell has evolved into a practical tool used for real applications. Despite being over 20 years old, Haskell is, amazingly, still in a state of furious innovation. In my talk I’ll try to give a sense of this long story arc, and give a glimpse of what we are up to now.

