PhD projects and funding opportunities

The combination of history, tradition and current status of Computing Science research in Glasgow offers the best possible training for future computer scientists.
Faizuddin Mohd Noor, PhD graduate, 2016

Please browse the projects listed below, each of which has been proposed by a member of staff. Competitive scholarships are available for UK/EU students (and a very limited number for students from elsewhere). In many cases these proposals can be modified; please contact the academic responsible to discuss. We also accept self-defined projects. For general information, please email socs-pgr-enquiries@glasgow.ac.uk. Please note that offers of admission are made independently of scholarship decisions: if your study depends on funding, be aware that offers of scholarship are confirmed separately. If you applied for a specific scholarship or project and have not received a communication from us, you may use the enquiries address above to confirm your status.

Details of how to apply can be found on the Postgraduate research study page.

Funded PhD projects

Social Acceptability of Passenger VR Experiences

Closing date: 31 July 2019

This 42-month PhD is part of the prestigious ERC Advanced Grant ViAjeRo (https://viajero-project.org/), which will investigate motion sickness, social acceptability and interaction in virtual and augmented reality passenger experiences. This project will harness the benefits of fully autonomous vehicles, greatly reducing the time and effort wasted during journeys by developing new ways for passengers to use virtual and augmented reality technologies for entertainment, work and collaboration on the move.

You will be working with Stephen Brewster and Julie Williamson in the Glasgow Interactive Systems Section (GIST) in the School of Computing Science at Glasgow, and Frank Pollick in the School of Psychology.

For more details see mig.dcs.gla.ac.uk/viajero_phd.

_________________________________________________________

 

Actionable Information Finding from Crisis Big Data through Structured Collaboration between AIs and Volunteers

Closing date: applications will be evaluated on a rolling basis

Description: The Glasgow Information Retrieval Group is looking for motivated students interested in our doctoral programme. We are looking for a PhD student to work on emerging machine learning challenges in the emergency management domain, supporting response efforts during natural disasters (e.g. flooding, earthquakes or hurricanes). A successful student will work with both public reports from first responders and high volumes of social media data, aiming to improve the situational awareness of response personnel in the command and control centre during disasters.

The broad aim of this PhD programme is to examine and extend state-of-the-art active learning and artificial intelligence algorithms (e.g. new neural network architectures) when integrated with volunteering efforts (crowdsourcing), with the goal of identifying and cross-referencing actionable information from social media with on-going response activities. This will involve learning about how machine learning algorithms evolve over time and can be directed/tuned through human input, as well as learning about emergency response working practices and what makes information valuable during emergency situations.
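As a toy illustration of the active-learning loop described above (train, pick the most uncertain item, ask a volunteer for its label, retrain), the following Python sketch uses an invented one-dimensional "urgency" feature and a simulated crowd oracle; it is illustrative only, not project code.

```python
# Toy sketch of uncertainty-sampling active learning with simulated
# crowd labels. All names and numbers are invented for illustration.

def train_threshold(labelled):
    """Fit a 1-D threshold classifier: midpoint between class means."""
    pos = [x for x, y in labelled if y == 1]
    neg = [x for x, y in labelled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def most_uncertain(pool, threshold):
    """The unlabelled point closest to the decision boundary."""
    return min(pool, key=lambda x: abs(x - threshold))

def active_learning(pool, oracle, seed, rounds=3):
    labelled = list(seed)
    for _ in range(rounds):
        t = train_threshold(labelled)
        x = most_uncertain(pool, t)        # item the model is least sure about
        pool.remove(x)
        labelled.append((x, oracle(x)))    # ask a volunteer for its label
    return train_threshold(labelled)

# Simulated stream: reports scored by "urgency"; the crowd labels >= 5 actionable
oracle = lambda x: 1 if x >= 5 else 0
pool = [1.0, 2.0, 4.5, 4.9, 5.1, 6.0, 9.0]
seed = [(0.0, 0), (10.0, 1)]
threshold = active_learning(pool, oracle, seed)
```

The point of the sketch is that labelling effort is concentrated near the decision boundary, where each crowd label is most informative.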

Environment: The successful candidate will enrol as a PhD student in the School of Computing Science (Information, Data and Analysis Section), University of Glasgow, under the supervision of Dr Richard McCreadie. They will be based in the Glasgow Information Retrieval Group and will be expected to collaborate with experts in Big Data processing and Machine Learning from across the IDA Section, with access to a state-of-the-art cluster of machines, including new GPU servers, as well as terabytes of historical social media data.

Skills: The ideal candidate will have a strong background in Computer Science and some background in Statistics. In particular, the student is expected to have strong programming skills, some prior experience of machine learning and/or crowdsourcing, a good command of English and team working skills.

Eligibility: Full funding is provided for EU/UK students (standard EU/UK fees and stipend rates included). Non-EU/UK students may apply; however, they pay higher international fees, which will not be fully covered by the scholarship.

Contact Information: For further information, interested candidates can contact Richard McCreadie (richard.mccreadie@glasgow.ac.uk)

 


Funding opportunities and scholarships

EPSRC Doctoral Training Account

Deadline: End of January 2020 (exact date to be confirmed)

The School holds a Research Council Doctoral Training Account, which is used to support PhD studentships (fees and stipend) for UK and EU residents. If you are an EU student, you must have been ordinarily resident in the UK for the 3 years prior to the start of an award and have a residence in the UK. To be eligible for an award, candidates must hold at least an upper second class honours degree from a UK university, or an overseas equivalent. These scholarships are highly competitive.

Other School funds may be used to fund both home/EU and overseas students for fees and stipend.

For more details, contact socs-pgr-enquiries@glasgow.ac.uk

 

College Scholarships

Deadline: End of January 2020 (exact date to be confirmed)

The College of Science & Engineering has a number of PhD research scholarships, for which academically excellent candidates, home/EU and overseas, are encouraged to apply. Applicants must hold at least an upper second class honours degree or equivalent. The value of the scholarship award includes fees at the home/EU fee rate and a maintenance award commensurate with Research Council guidelines (£14,553 for 2017-2018). These scholarships are highly competitive.

For further details, contact socs-pgr-enquiries@glasgow.ac.uk or see College of Science & Engineering Scholarships

 

China Scholarship Council (CSC)

Deadline: End of January 2020 (exact date to be confirmed)

This scheme provides academically excellent Chinese students with the opportunity to study for a PhD at the University of Glasgow. The scholarships are supported jointly by the China Scholarship Council and the University of Glasgow. The School of Computing Science is typically awarded a few scholarships per year depending on the quality of the applicants.

Further information: CoSE website 

  

Kelvin Smith Scholarships

The prestigious Lord Kelvin/Adam Smith Scholarship scheme offers outstanding research students of any nationality the opportunity to undertake doctoral training in the context of cutting-edge interdisciplinary research projects.   

Further information: Kelvin Smith Scholarship

 

Other sources

Applicants may find the following sources of information useful in seeking funding:

For international applicants only

 


Formal Analysis, Theory and Algorithms (FATA)

Modelling and automated verification of large-scale sensor network infrastructures - Dr. Michele Sevegnani

 

Wireless Sensor Network (WSN) technology is becoming increasingly prevalent in cities, where large numbers of sensor nodes are deployed over wide geographical areas to provide high-value services to citizens. The many computational and physical entities pertaining to sensor nodes, communication networks and applications form complex sensor network infrastructures, and these raise new challenges when deployed at a large or dense scale. Specifically, it becomes increasingly difficult to validate the design of a WSN against application requirements (e.g., concerning security, functional behaviour, and quality of service) due to the large numbers of connected devices.

In contrast with small-scale environments (e.g., homes, offices), at a large scale it is crucial to make explicit how large numbers of sensors are to be grouped into logical network partitions, to allow reasoning at application-specific abstractions (e.g., building, bridge, park) rather than over individual sensors. The requirements of applications, such as the sample rate of sensors (ensuring recognition accuracy), data delivery, delay tolerance and node density in an area, can differ between network partitions, which in turn calls for techniques to model these complex infrastructures and validate their design.

Furthermore, the state of sensor network infrastructures evolves over time as a result of cyber-attacks, node/network failure, battery depletion of nodes, updates, the deployment of new applications, and/or changing environmental conditions. Note that many of these large-scale infrastructures are deployed in the open air and are thus prone to node failure caused by harsh environmental conditions. Models therefore have to capture not only the spatial (i.e., physical hierarchy) and operational (i.e., hardware, network, application configuration) aspects of these infrastructures, but also their dynamic behaviour.

Models are key to verifying whether evolving sensor network infrastructures conform to application requirements throughout their life-cycle. Bigraphs, a universal modelling formalism (with algebraic and diagrammatic forms) for systems that evolve in space, time, connectivity and interaction, provide a good starting point for tackling the challenges of WSNs.

The proposed research aims at more secure and resilient WSNs through design and deployment supported by formal computational models and automated analysis. Models will help articulate key design decisions; their analysis will reveal design consequences, and provide explanation of, and guidance for interventions in, unanticipated runtime behaviours such as cyber-attacks and node failures.

The project is associated with the EPSRC S4 programme grant (http://www.dcs.gla.ac.uk/research/S4/).

References:
[1] Milner, Robin. The space and motion of communicating agents. Cambridge University Press, 2009.
[2] Sevegnani, Michele, and Muffy Calder. "Bigraphs with sharing." Theoretical Computer Science 577 (2015): 43-73.
[3] Calder, Muffy, et al. "Real-time verification of wireless home networks using bigraphs with sharing." Science of Computer Programming 80 (2014): 288-310.
[4] Sevegnani, Michele, et al. “Modelling and Verification of Large-Scale Sensor Network Infrastructures.” ICECCS (2018).

Contact: email / web.


Efficient Algorithms for Matching Problems Involving Preferences - Dr. David Manlove

 

Matching problems involving preferences are all around us: they arise when agents seek to be allocated to one another on the basis of ranked preferences over potential outcomes.  Examples include the assignment of school leavers to universities, junior doctors to hospitals and children to schools.  For example in the second case, junior doctors may have preferences over the hospitals where they wish to undertake their training, and hospitals may rank their applicants on the basis of academic merit.

In many of these applications, centralised matching schemes produce allocations in order to clear their respective markets.  One of the largest examples is the National Resident Matching Program in the US, which annually matches around 30,000 graduating medical students to their first hospital posts.  At the heart of schemes such as this are algorithms for producing matchings that optimise the satisfaction of the agents according to their preference lists.  Given the number of agents typically involved, it is of paramount importance to design efficient (polynomial-time) algorithms to drive these matching schemes.
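At the core of such schemes is the deferred acceptance (Gale-Shapley) algorithm; deployed systems such as the NRMP run extended variants of it. A minimal resident-proposing sketch, assuming complete preference lists and one post per hospital:

```python
# Resident-proposing deferred acceptance (Gale-Shapley), simplified:
# complete preference lists, one post per hospital.

def deferred_acceptance(resident_prefs, hospital_prefs):
    """resident_prefs: {r: [h, ...]}, hospital_prefs: {h: [r, ...]}.
    Returns a stable matching {r: h}."""
    rank = {h: {r: i for i, r in enumerate(prefs)}
            for h, prefs in hospital_prefs.items()}
    free = list(resident_prefs)            # residents yet to be matched
    next_choice = {r: 0 for r in resident_prefs}
    held = {}                              # hospital -> resident it holds
    while free:
        r = free.pop()
        h = resident_prefs[r][next_choice[r]]
        next_choice[r] += 1
        if h not in held:
            held[h] = r                    # hospital tentatively accepts
        elif rank[h][r] < rank[h][held[h]]:
            free.append(held[h])           # displace the weaker candidate
            held[h] = r
        else:
            free.append(r)                 # rejected; try next preference
    return {r: h for h, r in held.items()}

residents = {"ana": ["city", "mercy"], "bob": ["city", "mercy"]}
hospitals = {"city": ["bob", "ana"], "mercy": ["ana", "bob"]}
match = deferred_acceptance(residents, hospitals)
# both prefer city, but city prefers bob: bob-city, ana-mercy is stable
```

The resulting matching admits no "blocking pair" (a resident and hospital who would both rather be matched to each other), which is the optimality notion at the heart of these markets.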

There are a wide range of open problems involving the design and analysis of algorithms for computing optimal matchings in the presence of ranked (or ordinal) preferences.  Many of these are detailed in the book “Algorithmics of Matching Under Preferences” by David Manlove (http://www.optimalmatching.com/AMUP).  This PhD project will involve tackling some of these open problems with the aim of designing new efficient algorithms for a wide range of practical applications.  There will also be scope for algorithm implementation and empirical evaluation, and the possibility to work with practical collaborators such as the National Health Service.

The importance of the research area was recognised in 2012 through the award of the Nobel Prize in Economic Sciences to Alvin Roth and Lloyd Shapley for their work on algorithms for matching problems involving preferences.

Contact: email / web.


Empirical Algorithmics with Graphs - Dr. Patrick Prosser

 

Many real world problems map into hard problems in graph theory. For example, scheduling problems are at heart graph colouring problems, recognising communities in social networks is clique finding, some problems in computational chemistry are graph isomorphism problems and stable matching problems correspond to the selection of edges while maintaining stability.

One of the challenges is to develop new algorithms with improved performance for these hard problems. A second challenge is to identify what problem features most affect the performance of algorithms. And thirdly, given the availability of multi-core and parallel processing, can we implement efficient parallel variants of our algorithms? Therefore, our research is a mix of theory (propose new algorithms), engineering (implement those algorithms and make them parallel), empirical study (carry out experiments to investigate algorithmic behaviour) and application (solve real problems and incorporate our algorithms into constraint programming toolkits).

Recently we have reported new algorithms for maximum clique, algorithms for variants of this problem (labelled clique, k-cliques and k-clubs), subgraph isomorphism, parallel versions of most of these algorithms, new constraint encodings for the stable roommates problem and symmetry breaking in graph representations. We continue to see challenges in using smart backtracking search, learning while searching, exploiting parallelism, symmetry breaking in search and addressing enormous problems.
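For a flavour of the algorithmics involved, here is a basic branch-and-bound maximum clique search using only the trivial size bound; the algorithms above improve on this with, for example, greedy-colouring bounds and bit-parallelism.

```python
# Basic branch-and-bound maximum clique search. The bound here is the
# trivial one (|clique| + |candidates|); stronger solvers prune with
# greedy colouring and bitset encodings.

def max_clique(adj):
    """adj: {v: set_of_neighbours}. Returns one maximum clique as a set."""
    best = set()

    def expand(clique, candidates):
        nonlocal best
        if len(clique) > len(best):
            best = set(clique)             # new incumbent
        if len(clique) + len(candidates) <= len(best):
            return                         # bound: cannot beat incumbent
        for v in list(candidates):
            # branch: include v, restricting candidates to v's neighbours
            expand(clique | {v}, candidates & adj[v])
            candidates.discard(v)          # then exclude v from later branches

    expand(set(), set(adj))
    return best

# Triangle a-b-c with a pendant vertex d attached to c
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
clique = max_clique(adj)
# clique == {"a", "b", "c"}
```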

A PhD will address some of the above problems (matching, colouring, routing, clique finding, isomorphism) and will require the development and implementation of new algorithms, facilitating an empirical investigation of algorithmic behaviour. In the PhD we might expect the application of algorithms to enormous problems, the exploitation of parallelism techniques  (bit-parallel, multi-core, distributed parallelism), the incorporation of symmetry breaking and learning while searching and where applicable, making the implementation of parallel search less difficult for the programmer.

Contact: email / web.


Model checking UAVs - Dr Alice Miller

 

Increasingly, software controlled systems are designed to work for long periods of time without human intervention. They are expected to make their own decisions, based on the state of the environment and knowledge acquired through learning. These systems are said to be autonomous and include self-driving cars, autonomous robots and unmanned aerial vehicles (UAVs).

The verification of autonomous systems is a vital area of research. Public acceptance of these systems relies critically on the assurance that they will behave safely and in a predictable way, and formal verification can go some way to provide this assurance. In this project we aim to embed a probabilistic model checking engine within a UAV, which will enable both learning and runtime verification.

Model checking is a Computing Science technique used at the design stage of system development. A small logical model of a system is constructed in conjunction with properties expressed in an appropriate temporal logic. An automated verification tool (a model checker) allows us to check the properties against the model. Errors in the model may indicate errors in the system design. New verification techniques allow this process to take place at run-time, and so enable us to analyse possible future behaviours and the best course of action to take to achieve a desired outcome. For example, a verification engine on board a UAV might allow us to determine the correct response of the UAV during extreme weather conditions in order to avoid a collision. Or it might predict the best route to take to achieve its mission, given the likelihood of failure of some of the UAV components.
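As a toy example of the quantities such a verification engine computes, the sketch below uses value iteration to find the probability of reaching a goal state in a small Markov chain; the states and numbers are invented for illustration.

```python
# Toy probabilistic reachability by value iteration on a Markov chain.
# A probabilistic model checker computes quantities like this (with far
# richer logics and models); states and probabilities here are invented.

def reach_prob(transitions, goal, iters=200):
    """transitions: {s: [(prob, s'), ...]}; absorbing states have no entry.
    Returns {s: P(eventually reach goal from s)}."""
    states = set(transitions) | {t for outs in transitions.values()
                                 for _, t in outs}
    p = {s: (1.0 if s == goal else 0.0) for s in states}
    for _ in range(iters):
        for s, outs in transitions.items():
            # probability of reaching goal = weighted sum over successors
            p[s] = sum(pr * p[t] for pr, t in outs)
    return p

# UAV flying two legs through bad weather: each leg succeeds w.p. 0.9
chain = {
    "start":  [(0.9, "midway"), (0.1, "crash")],
    "midway": [(0.9, "goal"),   (0.1, "crash")],
}
p = reach_prob(chain, "goal")
# p["start"] == 0.81
```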

Objectives:

  • Identify and refine existing runtime verification techniques suitable for in-flight UAVs
  • Implement verification methodology demonstrator on board a UAV (developed in the School of Engineering at Glasgow)

Contact: email / web.


Modelling and Optimisation of Demand Side Management - Dr. Gethin Norman

 

The objective of this project is to apply quantitative verification to the modelling and optimisation of Demand Side Management (DSM) in the Smart Grid. The Smart Grid is distinguished from classical electrical grids by the two-way flow of electricity and information, allowing communication and collaboration between consumers and producers. DSM aims to use this collaboration to shape the consumers' electrical demand to match supply (load-balancing) [1]. DSM can yield large improvements in energy and operational efficiency, reductions in greenhouse gas emissions, improvements in customer satisfaction and reduced network investment. Matching demand to supply requires coordinated diagnostic and prediction techniques using the flow of information to make control decisions. In fact, McDonald [2] explains that the Smart Grid framework is essentially a control-optimisation problem with objectives for demand, distribution, assets, transmission and workforce.

The proposed research will aim to synthesise control strategies for building energy management systems that optimise energy load-balance while satisfying constraints on the building's operation and evaluate trade-offs between building management policies, environmental requirements, performance and efficiency. The foundations for this research will be the quantitative verification paradigm and in particular quantitative multi-objective model checking [3,4]. In this quantitative approach, models can include information regarding both the likelihood and timing of a system's evolution. Such behaviour is required for modelling energy usage as a consumer's behaviour is neither deterministic nor independent of the time of day.

[1] G. Strbac. Demand-side management: benefits and challenges. Energy Policy, 36:4419–4426, 2008

[2] J. McDonald. Smart grid applications, standards development and recent deployments. IEEE Power & Energy Society Boston Chapter Presentation, September 2009.

[3] K. Etessami, M. Kwiatkowska, M. Vardi, and M. Yannakakis. Multi-objective model checking of Markov decision processes. Logical Methods in Computer Science, 4(4):1–21, 2008.

[4] V. Forejt, M. Kwiatkowska, G. Norman, D. Parker, and H. Qu. Quantitative multi-objective verification for probabilistic systems. In Proc. 17th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS’11), volume 6605 of LNCS, pages 112–127. Springer, 2011.

Contact: email / web.


Exploiting Graph Structure Algorithmically – Dr. Kitty Meeks

 

Graphs – otherwise known as networks – are an extremely powerful structure for representing diverse datasets: any data that involves relationships between pairs of objects is naturally represented in this way.  They can, for example, represent which pairs of people are acquainted in a social network, which pairs of cities are connected by a direct transport link, or which pairs of farms sell animals to each other.

Once we have represented our data as a graph, we want to answer questions about it.  What is the largest group of people who all know each other?  What is the fastest or cheapest route for visiting a certain set of cities?  Where should we monitor a livestock trade network to detect a disease outbreak as soon as possible?  The bad news is that many questions of this type are computationally intractable – algorithms become prohibitively slow on large instances – if we need to solve them on arbitrary graphs. 

However, not all graphs are the same.  A social network graph has different structural properties from a transport network graph, which in turn has different properties from a livestock trade graph.  In many cases we can leverage an understanding of the mathematical structure of the graphs to design algorithms which scale much better when applied to graphs having these specific properties.
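A textbook example of such a parameterized ("FPT") algorithm is deciding whether a graph has a vertex cover of size at most k by bounded branching, which runs in roughly O(2^k · m) time: fast whenever the parameter k is small, regardless of graph size.

```python
# Bounded-branching FPT algorithm for k-Vertex-Cover: at every edge, one
# of its two endpoints must be in the cover, so branch on both choices.
# Search depth is at most k, giving a 2^k-sized search tree.

def has_vertex_cover(edges, k):
    """edges: list of (u, v) pairs. True iff <= k vertices touch all edges."""
    if not edges:
        return True                        # everything already covered
    if k == 0:
        return False                       # edges remain but budget spent
    u, v = edges[0]
    return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
            has_vertex_cover([e for e in edges if v not in e], k - 1))

# A path a-b-c-d has a cover of size 2 ({b, c}) but not of size 1
path = [("a", "b"), ("b", "c"), ("c", "d")]
# has_vertex_cover(path, 2) -> True; has_vertex_cover(path, 1) -> False
```

Structural parameters such as treewidth play the same role as k here: an algorithm exponential only in the parameter stays practical on large graphs whose structure keeps the parameter small.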

A PhD will involve addressing one or more case studies in this area, with specific applications potentially drawn from Chemistry, Medicine, Epidemiology, Social Network Analysis or Statistics.  For any case study, the research will involve identifying structural parameters of the relevant graphs, exploiting these to design efficient “FPT” algorithms, and implementing and testing the algorithms on real data.

Contact: email / web


Glasgow Systems Section (GLASS)

Lift: a Performance Portable Programming Language for the Applications and Hardware of the Future - Dr. Michel Steuwer

 

Applications based on artificial intelligence continue to profoundly change the ways computer systems and humans interact with each other. To further improve computers' understanding and enhance the intelligence of machines, we build increasingly complex algorithms and models which require more and more powerful computer hardware. To accommodate the increasing demand for higher performance and efficiency of new applications, the hardware landscape is specializing and diversifying faster than ever. This offers exciting opportunities as novel hardware architectures offer greater efficiency enabling applications which were unthinkable just a few years ago. Unfortunately, with current programming languages and tools, each new hardware target requires costly re-writing and re-optimization of the software. Instead, we would like to write portable software which achieves high performance on each target hardware.

Lift (www.lift-project.org) is a promising novel approach aiming to achieve this performance portability on modern parallel architectures via a sophisticated optimizing compiler which explores optimization opportunities expressed as formal rewrite rules. This PhD project will give you the opportunity to contribute to the Lift project and help to shape the future of parallel programming and the optimization of machine learning applications.
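For a toy flavour of rewrite-rule-based optimisation (not Lift's actual syntax or rules), the sketch below applies the classic map-fusion rule, map f (map g xs) => map (f . g) xs, to a tuple-encoded expression tree.

```python
# Toy rewrite-rule application: expressions are small tuple-encoded ASTs
# and map fusion is applied by pattern matching. Purely illustrative,
# not Lift code.

def fuse_maps(expr):
    """Apply map fusion wherever it matches in a tuple-encoded AST."""
    if isinstance(expr, tuple) and expr[0] == "map":
        _, f, arg = expr
        arg = fuse_maps(arg)               # rewrite the argument first
        if isinstance(arg, tuple) and arg[0] == "map":
            _, g, xs = arg
            # map f (map g xs)  ==>  map (f . g) xs
            return fuse_maps(("map", ("compose", f, g), xs))
        return ("map", f, arg)
    return expr

expr = ("map", "inc", ("map", "double", "xs"))
fused = fuse_maps(expr)
# fused == ("map", ("compose", "inc", "double"), "xs")
```

Fusing the two traversals into one is exactly the kind of semantics-preserving transformation an optimizing compiler can search over when the rules are formalised.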

We are interested in building upon and strengthening the core technologies developed in the Lift project. Depending on your strengths, interests, and motivation, there are several possible directions, ranging from 1) foundational work in programming language theory, which underpins the Lift language and compiler, to 2) applied work optimizing real-world computer vision and machine learning applications, and 3) enabling Lift to target specific hardware architectures such as FPGAs.

You will join the vibrant Lift research team which by Autumn 2018 will consist of 8 PhD students, one post-doctoral researcher, and two leading academics - born in 9 different countries. PhD students from the Lift team have been engaging with our industrial partners via internships at Oracle Labs, Codeplay, Microsoft Research, Nvidia, and Lawrence Livermore National Laboratory.

Ideal candidates will have an excellent degree (BSc or MSc) in computer science or related discipline. You should have a strong interest in programming languages and compilers for modern parallel hardware architectures.

The ideal candidate would have the following skills:

- Knowledge and strong interest in compiler design or programming language design

- Good skills in functional programming and/or good skills in parallel programming

- Basic understanding of machine learning algorithms

- Excellent collaboration and communication skills

- Strong motivation to succeed in a competitive research field

- Flexibility and ability to work alone and in a team environment

- Fluent in spoken and written English

Contact: email / web


Improving Internet Protocol Standards - Dr. Colin Perkins

 

The technical standards underpinning Internet protocols are described in the RFC series of documents (https://www.rfc-editor.org/). These are supposed to provide accurate descriptions of the protocols used in the network, from which implementors can work. However, they are often written in an informal and imprecise style, and interoperability and security problems frequently arise because the specifications contain inconsistent terminology, describe message syntax in imprecise ways, and specify protocol semantics informally or by example. The authors of the specifications tend to be engineers, expert at protocol design but not at writing clear, consistent specifications. Further, specifications are increasingly written, and read, by those for whom English is not their native language, further complicating the issue.

Formal languages and tools for protocol specification and verification have been available for many years, but have not seen wide adoption in the Internet standards community. This is not because such tools offer no benefit, but because they have a steep learning curve. The benefits offered in terms of precision and correctness are not seen to outweigh the complexity in authoring and reading the resulting standards.

The goal of this project is to explore semi-formal techniques that can be used to improve protocol specifications, at a cost that is acceptable to the engineers involved in the standards community. This will involve tools to parse protocol specifications and to encourage authors towards the use of a structured English vocabulary that is both precise for the reader, especially the non-native speaker, and offers some ability for machine verification of protocol properties and generation of protocol code. Success will not be perfection, but rather uptake by the community of tools and novel techniques that improve specification clarity and help ensure correctness.
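As a minimal, hypothetical illustration of this kind of lightweight tooling (far simpler than what the project envisages), the sketch below scans a draft for RFC 2119 requirement keywords and flags lower-case uses, which leave it unclear whether a statement is normative.

```python
# Toy specification linter: flag lower-case RFC 2119 keywords, whose
# normative status is ambiguous. A real tool would parse far more
# structure; this is only an illustrative sketch.

import re

def check_spec(text):
    """Return (line_number, message) findings for a draft specification."""
    findings = []
    for n, line in enumerate(text.splitlines(), 1):
        # lower-case matches only: "MUST" etc. are unambiguous and pass
        for word in re.findall(r"\b(?:must|should|shall|may)\b", line):
            findings.append((n, f"ambiguous lower-case '{word}'"))
    return findings

spec = "The sender MUST set the flag.\nThe receiver should ignore it."
issues = check_spec(spec)
# -> [(2, "ambiguous lower-case 'should'")]
```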

Contact: email / web


Costed Computational Offload - Dr. Natalia Chechina, Dr. Jeremy Singer, Prof. Phil Trinder

 

Computations now often execute in dynamic networks with a range of compute devices available to them. For example a mobile robot may perform route planning or image analysis computations using its own resources, or might offload the computation to a nearby server to obtain a better result faster.

We have already developed cost models to minimise execution time by controlling the movement of mobile computations in networks. Such a computation periodically recalculates network and program parameters, and will move if the estimated time to complete at the current location (Th) exceeds the time to complete at the fastest available location (Tn) plus the communication time (Tc), that is, Th > Tn + Tc.

The purpose of the PhD research is to develop and implement appropriate models to decide when and where to offload computation, and how much work to do. A key challenge is to monitor a dynamic network, e.g. can we predict how long a computational resource will be available from past network behaviour? Another challenge is to develop and implement appropriate models that scale the computation. For example how detailed should the offloaded planning activity be? If the computation takes too long we risk losing the connection.
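The Th > Tn + Tc rule above can be encoded directly; the numbers here are invented, and a real system would estimate them continuously from runtime monitoring.

```python
# Direct encoding of the offload rule: move when time-to-complete here
# (t_here, i.e. Th) exceeds remote compute time (Tn) plus communication
# time (Tc) for some target. Times are invented illustration values.

def should_offload(t_here, remotes):
    """remotes: {name: (t_remote, t_comm)}. Returns best target, or None
    if staying local is fastest."""
    best, best_total = None, t_here
    for name, (t_n, t_c) in remotes.items():
        if t_n + t_c < best_total:         # Th > Tn + Tc for this target
            best, best_total = name, t_n + t_c
    return best

remotes = {"edge-server": (2.0, 1.5), "cloud": (1.0, 4.0)}
# should_offload(10.0, remotes) -> "edge-server" (3.5 beats 10.0 and 5.0)
# should_offload(3.0, remotes)  -> None (stay local)
```

The PhD's harder questions start where this sketch stops: estimating Tn and Tc in a dynamic network, and scaling the computation itself when no placement meets the deadline.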

The project will run within the vibrant Glasgow Parallelism Research Group (http://www.dcs.gla.ac.uk/research/gpg/), with both experienced and youthful supervisors, and is associated with the EPSRC AnyScale Apps project (http://anyscale.org/).

Contact: email / web


Compact Routing for the Internet Graph - Dr. Colin Perkins

 

Internet routing algorithms do not scale. This project will build on a class of centralised algorithms, known as compact routing algorithms, that have appealing theoretical scaling properties, and develop them to form practical distributed algorithms and protocols that scale in real-world topologies, while also supporting traffic engineering and routing policies.

The currently deployed Internet routing protocol is BGPv4. This is a path vector protocol that, in the absence of policy constraints, finds shortest path routes, but that offers a wide range of tools to enforce routing policy and influence the chosen routes. Due to the underlying shortest path algorithm, however, state requirements for each node in the BGP routing graph have been proven to scale faster than linearly with the size of the network (i.e., with the number of prefixes). This has been manageable until now because the limited size of IPv4 address space has constrained the number of prefixes. However, with the uptake in deployment of IPv6, this can no longer be guaranteed, and we need to find long-term scalable routing protocols.

The so-called compact routing algorithms achieve sub-linear scaling with the size of the network by abandoning shortest path routing, and using landmark-based routing. While the theoretical worst case stretch for compact routing relative to shortest path routing is large, previous work in the School has demonstrated that the stretch achieved on the Internet graph is not significant, and has developed a distributed algorithm for landmark selection. This project will extend this work to develop a fully distributed version of the compact routing algorithm, and realise it as a prototype routing protocol that could, conceivably, be deployed on the Internet as a long-term scalable routing solution.
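The notion of stretch can be made concrete: route via the destination's landmark, then compare against the true shortest path. A small sketch with an invented graph and landmark choice:

```python
# Stretch of landmark-based routing on a toy graph: packets travel
# src -> landmark -> dst instead of the true shortest path. Graph and
# landmark choice are invented for illustration.

import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src. adj: {u: [(v, weight), ...]}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def stretch(adj, src, dst, landmark):
    """Ratio of the (src -> landmark -> dst) route to the shortest path."""
    d_src = dijkstra(adj, src)
    d_lm = dijkstra(adj, landmark)
    return (d_src[landmark] + d_lm[dst]) / d_src[dst]

adj = {
    "a": [("b", 1), ("l", 2)],
    "b": [("a", 1), ("c", 1)],
    "c": [("b", 1), ("l", 1)],
    "l": [("a", 2), ("c", 1)],
}
s = stretch(adj, "a", "c", "l")
# direct a->c costs 2 (a-b-c); via landmark l costs 2 + 1 = 3: stretch 1.5
```

Compact routing trades exactly this kind of bounded detour for sub-linear per-node state.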

Contact: email / web


Peer-to-peer and Real-time Traffic Over QUIC - Dr. Colin Perkins

 

The QUIC protocol, originally developed by Google but currently being standardised in the IETF, is a next-generation Internet transport protocol. The primary use case is to replace TCP and TLS-1.3 as the transport for HTTP/2 traffic, increasing security, reducing latency, and solving some problems with mobility. It is ideal for web traffic and streaming video, but is not well suited to interactive real-time traffic (VoIP, video conferencing, VR and AR, gaming, etc.) or to peer-to-peer use.

This project will explore peer-to-peer use of QUIC for real time media, including NAT traversal, rendezvous, congestion control, FEC, and partial reliability. The objective is to develop an integrated solution that fits with the existing QUIC framework, while providing general-purpose mechanisms to help future applications. In the way that QUIC replaces TCP for web traffic, the goal here is to replace UDP and RTP for interactive real-time flows.

The work would involve close collaboration with the IETF and industry.

Contact: email / web


Post Sockets – What is the Transport Services API of the Future? - Dr. Colin Perkins

 

The Berkeley Sockets API is showing its age. Over its 35-year history, it has become the ubiquitous portable networking interface, allowing applications to make straightforward, effective use of TCP connections and UDP datagrams. Now, though, as a result of changes in the network and new application needs, the limitations of the Sockets API are becoming apparent. Post Sockets is a project to re-imagine the network transport API in the light of many years of experience, changes in the network, better understanding of transport services, new application needs, and new programming languages and operating system services.

Details can be found at https://csperkins.org/research/post-sockets/ – the work is being done in parallel to standards work in IETF, API developments in industry (e.g., Apple is implementing the IETF work in iOS), and developing APIs in new programming languages such as Rust and Go.

Contact: email, web


Securing Future Networked Infrastructures through Dynamic Normal Behaviour Profiling - Dr. Dimitrios Pezaros and Dr. Simon Rogers

 

Since its inception, the Internet has been inherently insecure. Over the years, much progress has been made in the areas of information encryption and authentication. However, infrastructure and resource protection against anomalous and attack behaviour are still major open challenges. This is exacerbated further by the advent of Cloud Computing where resources are collocated over virtualised data centre infrastructures, and the number and magnitude of security threats are amplified.

Current techniques for statistical, network-wide anomaly detection are offline and static, relying on the classical Machine Learning paradigm of collecting a corpus of training data with which to train the system. There is thus no ability to adapt to changing network and traffic characteristics without collecting a new corpus and re-training the system. Assumptions about the characteristics of the data are also crude: that measured features are independent (as in a Naïve Bayes classifier), or that projections maximising the variance within the features (as in PCA) will naturally reveal anomalies. Moreover, there is currently no framework for profiling the evolving normal behaviour of networked infrastructures and identifying anomalies as deviations from that normality.

The overarching objective of this PhD project is to design a network-wide anomaly detection framework that will be able to operate on (and integrate) partial data, work in short timescales, and detect previously unseen anomalies. The work will bridge machine learning with experimental systems research, and will evaluate the devised mechanisms over real-world virtualised networked environments and traffic workloads.

The student will use recent developments in statistical ML to develop flexible probabilistic models that can capture the rapidly evolving view of the network: for example, Dirichlet Process priors for mixture models, which allow new clusters to emerge as new behaviours are observed.
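The appeal of a Dirichlet Process prior is that the number of clusters is not fixed in advance. Its clustering behaviour can be illustrated with the Chinese Restaurant Process: each new observation joins an existing cluster with probability proportional to that cluster's size, or opens a new one with probability proportional to a concentration parameter alpha. A toy sketch (illustrative only, not a full mixture model):

```python
import random

def crp_assignments(n, alpha, seed=0):
    """Chinese Restaurant Process: sample cluster assignments for n items.
    New clusters appear with probability alpha / (i + alpha) at step i,
    so the number of clusters can grow as new behaviours are observed."""
    rng = random.Random(seed)
    counts = []        # counts[k] = current size of cluster k
    labels = []
    for i in range(n):
        r = rng.uniform(0, i + alpha)   # total mass: i existing + alpha new
        k, acc = None, 0.0
        for j, c in enumerate(counts):
            acc += c
            if r < acc:                 # join cluster j, prob. proportional to c
                k = j
                break
        if k is None:                   # r fell in the alpha-weighted slice
            k = len(counts)             # ...so open a brand-new cluster
            counts.append(0)
        counts[k] += 1
        labels.append(k)
    return labels

labels = crp_assignments(100, alpha=1.0)
# The number of distinct clusters grows slowly (roughly alpha * log n),
# rather than being a fixed K chosen before seeing any data.
```

Contrast this with K-means or a fixed-K mixture, where an unseen traffic behaviour is forced into one of the pre-chosen clusters instead of forming its own.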

On the systems side, the student will develop traffic monitoring, accounting, and analysis modules that can be distributed and deployed on demand across the network, synthesising information into network-wide traffic views so that characteristics learnt at one point in the network can be used elsewhere.

The research will be jointly supervised by academics from the Embedded Networked and Distributed Systems (ENDS) and the Inference Dynamics and Interaction (IDI) groups at the School of Computing Science, and will be conducted as part of the Networked Systems Research Laboratory (netlab). The student will be given access to actual Internet traffic traces, and a state-of-the-art virtualised testbed with fully programmable platforms at all software and hardware layers to experiment with.

The work spans several vibrant, cross-disciplinary research areas, and the student will be equipped with highly sought-after skills in machine learning, cyber security, and next-generation network architectures.

Contact: email, web


Performance Verification for Virtualized and Cloud Infrastructures - Dr. Dimitrios Pezaros

 

  • How do you verify the performance of your distributed applications?
  • How do you configure your Cloud-based network-server farm to deliver maximum throughput?
  • How do you know you are getting the performance you have paid for from your provider?

The Internet has seen great success mainly due to its decentralised nature and its ability to accommodate myriad services over a simple, packet-switched communication paradigm. However, measurement, monitoring, and management of resources have never been a native part of the Internet architecture that prioritised efficient data delivery over accountability of resource usage.

This has led to a global, complex network that is notoriously hard to debug, whose temporal performance is hard to measure, and whose service quality is hard to verify. The lack of such capabilities has so far been swept under the carpet thanks to the distribution of resources across the global Internet, and the over-provisioning of network bandwidth, which has also been the main stream of revenue for network operators.

However, the Internet landscape has been changing drastically over the past few years: the penetration of Cloud computing imposes significant centralisation of compute-network-storage resources over data centre infrastructures that exhibit significant resource contention; and providers’ revenue increasingly depends on their ability to differentiate, and offer predictable and high-performing services over this new environment. The increased collocation of diverse services and users over centralised infrastructures, as well as the many layers of virtualisation (VM, network, application, etc.) required to support such multi-tenancy make the development of always-on measurement and troubleshooting mechanisms a challenging research problem to tackle.

The overarching objective of this PhD project is to design native instrumentation and measurement support for performance verification over virtualised collocation infrastructures. This will enable data centre operators to monitor and troubleshoot their (physical and virtual) infrastructure on-demand, and provide “network measurement as a service” to tenants through exposing appropriate interfaces. Application providers (tenants) will in turn be able to define measurement metrics and instantiate the corresponding modules to verify their applications’ performance, and to validate that their service level agreements with the hosting infrastructure providers are being honoured.

The work will entail experimental research in the areas of Network Function Virtualisation (NFV) and Software-Defined Networking (SDN) with a view towards enabling programmable measurement at the various layers (and locations) of future virtualised infrastructures. For example, it will explore how network nodes can efficiently provide accounting and reporting functionality alongside their main forwarding operation; what support from the end-systems (and virtual machines) will be required in order to synthesise and deploy novel end-to-end performance verification services; and what the specification and execution interfaces of such programmable environment should be.

The research will be conducted as part of the Networked Systems Research Laboratory (netlab) at the School of Computing Science, and the student will be given access to a state-of-the-art virtualisation infrastructure and relevant platforms. Through the strong experimental nature of this project, the student will contribute to a very active research area, and will be equipped with highly sought-after expertise in virtualised systems design, development, and evaluation. Beyond its immediate outcomes, this work can have transformative effects on the design of future converged ICT environments that will need to deliver high-performance services, and where the boundaries between network, end-system, and application are becoming increasingly blurred.

Contact: email, web


ProgNets 2.0 - Dr. Dimitrios Pezaros

 

Active and programmable networks were a popular research area about 15 years ago, but eventually faded due to security and isolation concerns (how do I trust someone else’s code to run on my router’s interface?), and a lack of adoption by an industry that, at the time, was making its money from high-bandwidth products and services.

All this has now changed: resource (server, network) virtualisation has become pervasive, allowing efficient sharing of the physical infrastructure; and network operators and service providers now try to differentiate based on the services they offer over virtualised infrastructures. In this new landscape, Software-Defined Networking (SDN) has emerged over the past five years as a new paradigm for dynamically-configured next generation networks, and has already been embraced by major equipment vendors (e.g., HP, Cisco, etc.) and service providers (e.g., Google).

Fundamental to SDN is the idea that the whole control plane is abstracted from individual network nodes and all network-wide functionality is configured centrally in software. Switches and routers are therefore reduced to general-purpose devices (in contrast to the legacy, vertically-integrated and vendor-controlled platforms) that perform fast packet switching and are configured on-demand through a defined API (e.g., OpenFlow). All functionality that then controls the network (e.g., spanning tree computation, shortest-path routing, access control lists, etc.) is provided by a (set of) central controller(s), and the resulting rules are installed on the switches through the OpenFlow API. This separation between the network’s data and control planes is a first step in ‘softwarising’ future networks but is still a long way from enabling true programmability through softwarisation.
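The match-action split at the heart of this model can be sketched in a few lines: the controller computes and installs rules, while the switch only performs lookups and punts unmatched packets back to the controller. (A toy illustration of the concept, not OpenFlow's actual rule format or wire protocol.)

```python
# Toy match-action flow table: the "switch" only does lookups; all the
# logic deciding which rules exist lives in the controller.
class FlowTable:
    def __init__(self):
        self.rules = []                   # (match_fn, action), priority = order

    def install(self, match_fn, action):  # called by the controller
        self.rules.append((match_fn, action))

    def forward(self, pkt):               # the switch's only job
        for match_fn, action in self.rules:
            if match_fn(pkt):
                return action
        return "send_to_controller"       # table miss: ask the controller

table = FlowTable()
table.install(lambda p: p["dst"] == "10.0.0.2", "output:port2")
table.install(lambda p: p["dst"].startswith("10.0."), "output:port1")

table.forward({"dst": "10.0.0.2"})   # most specific rule wins: "output:port2"
table.forward({"dst": "10.0.0.7"})   # falls through to "output:port1"
table.forward({"dst": "8.8.8.8"})    # no rule: "send_to_controller"
```

The limitation the project targets is visible here: the switch can only select among pre-installed actions; it cannot run arbitrary per-packet code of its own.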

The overarching objective of this PhD project is to design next generation, fully programmable Software-Defined Networks above and beyond the current state-of-the-art. Currently, the main SDN implementation through OpenFlow lacks any support for real-time programmable service deployment, since it centralises all intelligence (and programmability) around a (set of) controller(s). Future, service-oriented architectures will need to provide data path programmability by distributing intelligence to the network nodes. This is the only way to support the deployment of real-time programmable services in the data path (e.g., distributed network monitoring and control, performance-based provisioning, anomaly detection, dynamic firewalls, etc.).

The work will entail experimental research in protocols and languages for network programmability, in switch architectures, and the software-hardware interface. It will explore platform-independent language representations and runtimes (e.g., bytecodes and intermediate representations) that can allow custom processing at the switches without requiring the manual extension of protocol fields to support new functionality and at the same time offer bound data forwarding performance. The work will also include the design of exemplar time-critical services that will benefit from such underlying network architecture.

The research will be conducted as part of the Networked Systems Research Laboratory (netlab) at the School of Computing Science, and the student will be given access to a state-of-the-art SDN testbed with fully programmable platforms at all software and hardware layers. Through the strong experimental nature of this project, the student will contribute to a very active research area, and will be equipped with highly sought-after expertise in Software-Defined Networks and next generation network architectures.

Contact: email, web


Compilers and runtime systems for heterogeneous architectures, in particular FPGAs - Dr. Wim Vanderbauwhede

 

The topic of this research is the development of a compiler for high-level programming of FPGAs. The compiler will target OpenCL (e.g. using the recent SPIR draft for integration with LLVM), so that the code can also run on multicore CPUs and GPUs. The challenge lies in transforming the intermediate code to obtain optimal performance on the target platform by using all devices in the heterogeneous platform. This requires partitioning the code and transforming each part to create an optimal version for the device it will run on. This is a complex problem, requiring the development of sophisticated cost models for the heterogeneous architecture as well as run-time systems that can dynamically schedule code to run on different devices. If you are keen to undertake cutting-edge compiler research, this is the topic for you!

Contact: email, web


Acceleration of scientific code for clusters of multicore CPUs, GPUs and FPGAs - Dr. Wim Vanderbauwhede

 

The topic of this research is the development and application of automated refactoring and source translation technologies to scientific codes, in particular climate/weather-related simulation code, with the aim of efficient acceleration of these codes on multicore CPUs, GPUs and FPGAs. If you are interested in source-to-source compilation (e.g. the ROSE compiler framework), refactoring and GPUs or FPGAs, and have expertise in compiler technology, FPGA programming or GPU programming, this topic provides an exciting research opportunity. The ultimate aim is to develop a compiler that can take single-threaded legacy code and transform it into high-performance parallel code using MPI, OpenMP, OpenACC or OpenCL, either entirely automatically or based on a small number of annotations.

Contact: email, web


Acceleration of Information Retrieval algorithms on GPUs and FPGAs ("Greener Search") - Dr. Wim Vanderbauwhede

 

The topic of this research is accelerating search and data filtering algorithms using FPGAs and GPUs. FPGAs in particular have great potential for greening the data centre, as they offer very high performance per watt. A lot depends on the actual algorithms, as well as the system partitioning. If you have expertise in FPGA programming and would like to take part in the development of the next generation of low-power search technology, this is a great opportunity.

Contact: email, web


A novel shared-memory overlay for HPC cluster systems - Dr. Wim Vanderbauwhede

 

Traditional programming models have assumed a shared-memory model; however, modern architectures often have many different memory spaces, e.g. across heterogeneous devices, or across nodes in a cluster. Maintaining the shared-memory abstraction for the programmer is both very useful and highly challenging: naive shared-memory programming over an exascale HPC cluster would, of course, lead to disastrous performance.

However, in the context of a task-based programming model such as the GPRM (Glasgow Parallel Reduction Machine), we have shown that shared-memory parallel programming of manycore systems can be highly effective. The aim of this project is to extend the GPRM framework from homogeneous manycore systems to heterogeneous distributed systems through the addition of a shared-memory overlay. This overlay will allow the programmer to use a shared memory abstraction within the task-based programming model, and will leverage GPRM's sophisticated runtime systems and full-system knowledge to make intelligent caching decisions that will effectively hide the latency of the distributed memory space.

The implementation of this novel shared-memory overlay can be considered either in user space or as functionality in the operating system. The latter approach is more challenging but offers the greatest potential. If you are interested in research into system-level aspects of parallel programming for future HPC systems, this is an excellent choice.

Contact: email, web


Application-defined Networking for HPC systems - Dr. Wim Vanderbauwhede

 

Workloads in the area of climate modelling and numerical weather prediction are becoming increasingly irregular and dynamic. Numerical Weather Prediction models such as the Weather Research and Forecasting model already display poor scalability as a result of their complex communication patterns. Climate models usually consist of four to five coupled models, and the communication between these models is highly irregular. Coupled models are a clear emerging trend, as individual models have become too complex for conventional integration. Combined with the growing scale of supercomputers and the ever-increasing computational needs (for example, for accurate cloud modelling in climate models), this trend poses a huge challenge in terms of performance scalability.

A lot of research and development effort has gone into optimizing the network hardware, the operating system network stack and the communication libraries, as well as optimization of the individual codes. Despite this, current supercomputer systems are not well equipped to deal efficiently with rapidly changing, irregular workloads, as the network is optimised statically, routing is static and there is no elastic bandwidth provisioning.

The aim of this PhD is to investigate a radically different approach to supercomputer networking, where the network is defined by the application in a highly dynamic and elastic way. This Application Defined Networking takes a holistic view of the complete system, from the application code down to the lowest level of the network layer. Decisions about routing, traffic prioritisation and bandwidth provisioning are taken using information provided at run time by the application code. In this way, the network will adapt so that traffic is always transferred in the optimal way (e.g. lowest possible latency or highest possible bandwidth).

As this is a very large topic, the actual PhD research project will likely focus on one or more specific aspects of the problem such as the machine learning algorithms required to predict the network behaviours, the inference of code annotations required for the application to notify the network of impending traffic, or the network subsystem required to handle the dynamic allocation of resources based on the application's needs.

Contact: email, web


The Cyber Security of Safety-Critical Applications - Prof. Chris Johnson

 

We have developed a range of techniques to protect office-based systems from a growing array of cyber threats. These include intrusion detection systems, firewalls and event logging applications. Most of these techniques are either dangerous or impracticable for safety-critical systems. For example, we can only use intrusion detection systems if we can demonstrate that they will not compromise safety. This is hard when we may have very limited time to install the patches needed to protect against a zero day attack. My research looks at ways of addressing these problems so that we can be both safe and secure across military and civil applications. This work is supported by a host of organizations ranging from the United Nations to the UK nuclear industry.

Contact: email, web.


Tools and Methods for Reproducible Scientific Software Development - Dr. Tim Storer

 

The use of software is pervasive in all fields of science. Associated software development efforts may be very large, long lived and complex, requiring the commitment of significant resources. However, several authors have argued that the 'gap' between software engineering and scientific programming is a serious risk to the production of reliable scientific results, as demonstrated in numerous case studies.

Problems can arise because (for example):

  • The source code for reproducing results is not available.
  • The software has been changed since the results were produced in undocumented ways.
  • The project and environmental dependencies are not fully documented or available.
  • The acceptable tolerance for variation in results is not stated.
  • There is a poor separation between source code for 'tools' and source code that describes an experimental method.

The aim of this PhD will be to investigate tools and methods that support the development of software for reproducible experiments by addressing one or more of these challenges. The work may involve the creation of completely novel approaches, or the adaptation of existing software engineering tools and methods to the highly dynamic domain of scientific software development.
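As one hedged illustration of tooling aimed at the first three problems above, a result can be stored alongside a manifest of SHA-256 hashes of the exact code and input files that produced it, so a later run can check that it is operating on byte-identical artefacts. A minimal sketch using only the Python standard library (the manifest format is invented for illustration):

```python
import hashlib
import json

def fingerprint(paths):
    """Hash the exact files (code + data) used to produce a result, so a
    later run can verify it is using byte-identical inputs."""
    digest = {}
    for p in sorted(paths):
        with open(p, "rb") as f:
            digest[p] = hashlib.sha256(f.read()).hexdigest()
    return digest

def write_manifest(result_file, paths):
    """Record the provenance of result_file next to it on disk."""
    with open(result_file + ".manifest.json", "w") as f:
        json.dump(fingerprint(paths), f, indent=2, sort_keys=True)

def check_manifest(result_file, paths):
    """True iff the current files match those recorded for the result."""
    with open(result_file + ".manifest.json") as f:
        return json.load(f) == fingerprint(paths)
```

This catches undocumented changes to the analysis code (the second problem above) but says nothing about tolerances in the results or code/method separation, which is where research tooling is still needed.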

Contact: email, web.


Safety and Security of Autonomous Systems - Prof. Chris Johnson

 

Autonomous systems, including ground-based vehicles, aerospace systems, and network monitoring applications, raise a host of safety concerns. It is for this reason that we typically segregate them from conventional applications. For instance, autonomous aircraft and even ground-controlled vehicles are banned from entering civil, controlled airspace. They also create unique security issues – when, for example, an attacker might compromise communications links to take control of the autonomous platform and threaten other assets. Countermeasures often place very tight restrictions on innovative engineering techniques – for example, standards such as IEC61508 prevent the use of AI architectures in software with a high safety integrity level. This is intended to make the behaviour of the autonomous system easier to reason about, but also prevents industry from building applications that offer huge potential benefits to safety-critical industries. Solutions range from the application of formal reasoning and model-based development to the integration of safety and security argumentation. This work is supported by a host of organizations ranging from UK and Asian public transport agencies to the US Department of Defense.

Contact: email, web.


Safety Critical Software Engineering - Prof. Chris Johnson

Many aspects of safety-critical software engineering are not well developed. For example, redundant software provides little benefit without some form of diversity, because two identical programs will contain the same bugs. However, getting two different companies to write the same code can be costly, and the two versions may still share common problems if there are errors in the requirements. My research develops new approaches to the software engineering of complex systems based on existing industrial standards such as IEC61508 and ISO26262. Key areas include cost control and securing the supply chain; we also focus on supporting national and international regulators who determine whether applications in the healthcare, nuclear, energy, and aerospace industries are sufficiently safe. This work involves a host of organizations ranging from the UK National Health Service to NASA.

Contact: email, web.


Glasgow Interactive SysTems (GIST)

Designing Eye Gaze interaction for Handheld Mobile Devices – Dr Mohamed Khamis

 

Imagine controlling a mobile device with your eye movements. Front-facing cameras of handheld mobile devices are continuously advancing. In particular, recent smartphones feature depth cameras (e.g. the iPhone X), which can significantly improve the quality of eye tracking on mobile devices compared to older models. Eye gaze is fast, and interacting with gaze is intuitive, natural, and offers many benefits for the user. In addition to being a hands-free interaction modality, gaze is subtle and thus suitable for sensitive interactions (e.g. entering passwords). There are also many ways gaze can improve other forms of interaction, such as interaction by touch. Imagine looking at a button at the top of the screen and, instead of tapping on it with your finger, simply gazing at it and tapping anywhere on the screen to activate it. This would significantly increase interaction speed, and thereby improve the overall user experience.

However, interaction with mobile devices using eye gaze is also challenging. Mobile devices are used in dynamic contexts (e.g. while walking), so how can we ensure accurate interaction in these contexts? This would require some computer vision work and/or novel gaze interaction techniques that work even when eye tracking data is inaccurate (see, for example, work on Pursuits [2] and gaze gestures [3]). Another problem is that people do not always hold phones in a way that reveals their eyes to the front-facing camera. How can we guide them to hold the phone in a suitable manner? These are some of the challenges that stand in the way of enabling gaze interaction on mobile devices.
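One reason Pursuits-style techniques tolerate inaccurate eye tracking is that selection does not ask where the gaze lands, only which on-screen target's motion the gaze signal correlates with. A toy one-dimensional sketch (the data and the bare-bones correlation-maximising selection are simplifications of the published technique, which also uses thresholds and time windows):

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def followed_target(gaze, targets):
    """Return the target whose motion best correlates with the gaze
    trajectory: the core idea behind Pursuits-style selection."""
    return max(targets, key=lambda name: pearson(gaze, targets[name]))

gaze = [0.0, 1.1, 2.0, 3.2, 3.9]            # noisy rightward sweep
targets = {
    "left_to_right": [0, 1, 2, 3, 4],       # x-positions of moving targets
    "right_to_left": [4, 3, 2, 1, 0],
}
followed_target(gaze, targets)              # "left_to_right"
```

Because only the shape of the motion matters, the approach needs no calibration and survives a constant offset in the gaze estimate, which is exactly the error mode of uncalibrated front-facing cameras.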

This PhD project aims to:
1) Identify opportunities of using eye gaze on mobile devices
2) Identify challenges that hinder the adoption of eye gaze interaction on mobile devices
3) Address some of the core challenges identified in step (2)

References:
[1] Mohamed Khamis, Florian Alt, and Andreas Bulling. 2018. The past, present, and future of gaze-enabled handheld mobile devices: survey and lessons learned. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '18). ACM, New York, NY, USA, Article 38, 17 pages. Available at: http://www.mkhamis.com/data/papers/khamis2018mobilehci.pdf
[2] Mélodie Vidal, Andreas Bulling, and Hans Gellersen. 2013. Pursuits: spontaneous interaction with displays based on smooth pursuit eye movement and moving targets. In Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing (UbiComp '13). ACM, New York, NY, USA, 439-448. DOI: https://doi.org/10.1145/2493432.2493477
[3] Drewes H., Schmidt A. (2007) Interacting with the Computer Using Gaze Gestures. In: Baranauskas C., Palanque P., Abascal J., Barbosa S.D.J. (eds) Human-Computer Interaction – INTERACT 2007. INTERACT 2007. Lecture Notes in Computer Science, vol 4663. Springer, Berlin, Heidelberg https://link.springer.com/chapter/10.1007/978-3-540-74800-7_43#citeas

Contact: email, web


Understanding and Mitigating the Threats of Thermal Imaging on User Privacy and Security – Dr Mohamed Khamis

 

Thermal cameras are continuously improving; they are becoming cheaper, smaller, and easier to use. This opens a new front for side-channel attacks: thermal cameras can be used inconspicuously to reveal heat traces left on interfaces. This has been shown to allow attackers to recover the vast majority of passwords even 30 seconds after they have been entered, using a simple contour-fitting approach. Apart from authentication, thermal imaging could potentially reveal other types of interactions, such as text input, usage behaviour (e.g., which apps were launched), and more.
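The ordering step behind such attacks can be illustrated simply: residual heat decays over time, so among the touched keys the coolest trace corresponds to the oldest press. A toy sketch (the temperatures are invented; a real attack must first locate and measure the traces in a thermal image, e.g. by contour fitting):

```python
def recover_entry_order(heat_traces):
    """Given residual temperatures per touched key, guess the entry order:
    heat decays over time, so cooler traces were pressed earlier.
    (Only the ordering step of the attack; trace detection is not shown.)"""
    return [key for key, _temp in sorted(heat_traces.items(),
                                         key=lambda kv: kv[1])]

# Hypothetical residual temperatures (deg C) on a PIN pad after entry:
traces = {"7": 29.1, "3": 30.4, "9": 28.5, "1": 31.0}
recover_entry_order(traces)   # ["9", "7", "3", "1"] -> PIN entered 9-7-3-1
```

Ambiguity arises when keys are pressed twice or in quick succession, which is one place a learned model could outperform this naive sort.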

These threats underline the importance of understanding the impact of thermal imaging on users' privacy and security. The aim of this work is to 1) understand the impact of thermal imaging on user privacy and security through literature review and empirical evaluations, and 2) develop and evaluate methods for resisting these attacks.

This project will build on previous work on thermal attacks against PINs and patterns on mobile devices by 1) employing a machine learning approach to improve the detection of users' inputs, and 2) covering a broader range of user input beyond authentication on mobile devices, including but not limited to authentication on other platforms (e.g., desktop devices) and text entry.

This project is suitable for students interested in Image Processing, Machine Learning and Human-Computer Interaction.

Related paper: http://www.mkhamis.com/data/papers/abdelrahman2017chi.pdf
Related video: https://youtu.be/a2Q64XmZpc4

Contact: email, web


Multimodality in Urban Interaction - Dr. Julie Williamson

 

Interactive technologies for public spaces play a significant role in the experience of living and working in an urban environment.  For example, information displays can increase civic engagement, whole body interaction can create playful encounters at shop windows, and digital interactive art can augment public spaces.  The rapid development of novel input techniques means that urban interaction is no longer limited to public displays, mobile devices, and formally curated experiences.  New interaction techniques allow interactivity to be embedded into the built environment for users of all kinds to walk up and enjoy interactive content.

This PhD research will explore how multimodal interaction can be embedded in urban installations, comparing gesture, touch, and proxemic interaction techniques in real world settings.  This interactivity will utilise new forms of input and output, moving away from display focused interaction by adding interactivity to new objects and surfaces in the urban environment.  For example, possible projects could include making railings, benches, or trees interactive using multimodal input and output.

There are four main challenges that this research will address.

  1. Perception – What are the usability parameters of different modalities in an urban context?  This challenge will survey possible input techniques and evaluate the communication potential of different interactions.
  2. Action – How do newly interactive objects and surfaces communicate interactivity to passers-by?  What kinds of cues or outputs are successful without requiring a traditional visual display?
  3. Control – What feedback and output capabilities are required to maximise control and usability?  This challenge will explore a variety of feedback techniques for continuous and discrete interactions.
  4. Context – How are these modalities used in real world settings, exploring the social interaction and social acceptability of these new input techniques?

This research will be completed through a combination of lab-based studies and in the wild deployments.  The installations will include elements of playful interaction and more complex interaction with information.  For example, installations could present information about events in the city, visualise urban data sets, or present other contextually relevant information.  

Contact: email, web


In-car haptics - Professor Stephen Brewster

 

The physical controls in cars are being replaced by touchscreens: pressing physical buttons and turning dials are giving way to tapping on touchscreens and performing gestures. These work well on smartphones but can be problematic in cars. For example, they require more visual attention, taking the driver’s eyes off the road; and they are smooth and flat, making it hard to feel where to interact.

The aim of this PhD project will be to use haptics (touch) to improve in-car interaction, adding back some of the missing feedback and creating new interactions that take advantage of the touchscreen without its problems. We will investigate solutions such as pressure-based input – by detecting how hard a person is pressing, we can allow richer input on the steering wheel or other controls. Free-hand gesture input can allow the driver to control car systems without reaching for controls. For output, we will study solutions such as tactile feedback, ultrasound haptics and thermal displays to create a rich set of different ways of providing feedback. We will also combine these with audio and visual displays to create effective multimodal interactions that allow fast and accurate input without distracting the driver from driving. We will test these in the lab in a driving simulator and on the road to ensure our new interactions are usable.
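As a sketch of how pressure-based input might work, a continuous force reading can be quantised into a small number of discrete input levels, e.g. light, medium, and hard presses on a steering-wheel control. The thresholds below are illustrative, not calibrated values from any study:

```python
def pressure_level(force_newtons, thresholds=(0.5, 2.0, 4.0)):
    """Map a continuous pressure reading to a discrete input level.
    Thresholds are purely illustrative; a real system would calibrate
    them per user and add hysteresis to suppress sensor noise."""
    level = 0
    for t in thresholds:
        if force_newtons >= t:
            level += 1
    return level

pressure_level(0.2)   # 0 -> no press registered
pressure_level(1.0)   # 1 -> light press
pressure_level(3.0)   # 2 -> medium press
pressure_level(5.0)   # 3 -> hard press
```

Each level can then be bound to a different action, giving one physical control several input states without the driver looking at it.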

Contact: email, web


Artificial Intelligence for Psychiatry and Mental Health - Professor Alessandro Vinciarelli

 

The goal of the project is to develop automatic approaches for the detection of psychiatric issues in children and adults. In particular, the project will use methodologies inspired by Social Signal Processing – the AI domain aimed at the modelling, analysis and synthesis of nonverbal behaviour – to automatically detect the behavioural cues associated with the presence of common psychiatric issues such as depression and non-secure attachment.

The project will be highly interdisciplinary and will contribute to both computing science (making computers capable of analysing the behaviour of people involved in clinical interviews) and psychiatry (identifying the behavioural traces of psychiatric issues). Furthermore, the project will involve extensive experiments revolving around the interaction between humans and interactive systems designed to deliver psychiatric tests. The ideal candidate has a solid background in computing, in particular machine learning and artificial intelligence, but is open to collaborating with colleagues active in the human sciences (social psychology, anthropology, etc.).

Contact: email, web


Information, Data and Analytics (IDA)
Information, Data, Events, Analytics at Scale


Bridging the gap between big data batch and streaming data management - Dr. Nikos Ntarmos

 

MapReduce and related technologies revolutionised the area of scale-out big data processing when they were first introduced, as they allowed easy large-scale parallelisation of batch data processing. On the other hand, their inherent latencies led to the introduction of scale-out stream processing solutions, purpose-built to handle large amounts of high-speed data. Yet our current information needs require a mix of these two paradigms. This has led to the rise of the so-called "lambda" architecture, comprising both batch and stream processing components, and combining the accuracy and comprehensiveness of scale-out batch processing frameworks with the low latency and responsiveness of near-real-time distributed stream processing, in a fault-tolerant and highly scalable ensemble. In this new environment, several data management research questions, solved in traditional systems, remain open. We are looking for an excellent candidate who will pursue a PhD on answering some of these questions, pertaining mainly to supporting high-quality approximate query answering and appropriate data consistency notions in this setting. This is a challenging research endeavour, aiming to advance the state of the art closer to the elusive goal of "the one" data management infrastructure: one capable of providing all of the guarantees we have come to rely on in centralised settings, but in a highly scalable, decentralised, fault-tolerant, efficient and high-performance package capable of dealing with very large amounts of both at-rest and streaming data.
The PhD candidate will need to identify classes of data processing workflows which benefit from deployment in the lambda architecture, propose novel data consistency notions appropriate to this loosely coupled distributed setting, revisit and revamp statistics and data summaries for use by both components of the architecture, and produce appropriate indexing and query processing methods and data structures providing low-latency but highly accurate approximate query answers.
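A minimal sketch of the lambda-architecture idea described above, for a simple counting query (all class and method names here are illustrative, not any real framework's API): a batch layer periodically recomputes an accurate view from the full log, a speed layer keeps low-latency increments for data the last batch run has not seen, and queries merge both views.

```python
from collections import Counter

class LambdaCounter:
    def __init__(self):
        self.batch_view = Counter()   # accurate but stale
        self.speed_view = Counter()   # fresh increments since the last batch run
        self.log = []                 # append-only master dataset

    def ingest(self, key):
        self.log.append(key)
        self.speed_view[key] += 1     # visible to queries immediately

    def run_batch(self):
        # Recompute the accurate view from the full log, then discard
        # the speed-layer increments it now subsumes.
        self.batch_view = Counter(self.log)
        self.speed_view.clear()

    def query(self, key):
        # Serving layer: merge the stale-but-accurate and fresh views.
        return self.batch_view[key] + self.speed_view[key]
```

The open research questions mentioned above (approximate answers, consistency between the two views) live precisely in the `run_batch`/`query` boundary, where a real system cannot clear the speed view atomically.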

Contact: email, web


Edge-Centric Adaptive Inferential Analytics - Dr. Christos Anagnostopoulos

 

Research Fields: wireless sensor networks, pervasive computing, edge computing, network-centric information processing systems.

Description: We are looking for an excellent candidate who will pursue a PhD on the development of energy-efficient distributed inferential analytics methods and algorithms at the network edge. We focus on distributed and mobile computing environments where a network of sensing and computing devices is responsible for locally processing contextual data, reasoning, and collaboratively inferring knowledge. Pushing processing and inference to the network edge allows the complexity of the reasoning process to be distributed into many smaller and more manageable pieces, physically located at the source of the contextual information it needs to work on. This enables a huge amount of rich contextual data to be processed in real time that would be prohibitively complex and costly to deliver to a traditional centralised cloud/back-end processing system. Emerging intelligent and adaptive applications based on knowledge derived from streaming contextual information include emergency situation awareness, smart city applications, remote sensing and environmental monitoring.

Challenges: We envisage a mobile computing environment, where things at the edge of the network convey locally inferred knowledge to the applications. We focus on a setting that involves networks of adaptive distributed wireless devices (e.g., sensor nodes and actuators) capable of sensing and locally processing and reasoning about events. Each node performs measurements and locally extracts and infers knowledge over these measurements using predictive-model reasoning. The fundamental requirement for materialising predictive intelligence at the network edge is the autonomous ability of nodes to locally perform data sensing and inference, and to disseminate only the inferred knowledge (e.g., minimal sufficient statistics) to their neighbours for further processing.
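To make the "minimal sufficient statistics" idea concrete, here is a small sketch (function names are ours, for illustration only): each node summarises its local measurements as the triple (count, sum, sum of squares); neighbours can merge these triples and still recover the global mean and variance exactly, without ever exchanging raw readings.

```python
def local_stats(measurements):
    """Sufficient statistics of a node's local readings."""
    n = len(measurements)
    s = sum(measurements)
    s2 = sum(x * x for x in measurements)
    return (n, s, s2)

def merge(a, b):
    """Combine two nodes' statistics; associative, so any gossip order works."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def mean_var(stats):
    """Recover the global mean and (population) variance from merged stats."""
    n, s, s2 = stats
    mean = s / n
    return mean, s2 / n - mean * mean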

Enrolment & Opportunity: The successful candidate will enrol as a PhD student at the School of Computing Science, University of Glasgow, under the supervision of Dr Christos Anagnostopoulos, and will join the Information, Data and Analytics (IDA) Section and the Networked Systems Research Laboratory (NETLAB) of the University of Glasgow. Our labs explore several different issues, such as distributed sensor networks, mobile computing, statistical learning, scalable & adaptive information processing, intelligent systems, and bio-mimetic and bio-inspired data processing algorithms.

Skills: The ideal candidate will have a background in Computer Science and some background in Mathematics and/or Statistics. Special areas of interest include in-network processing, basic statistics, and/or mathematical modelling/optimisation. A good understanding of basic machine learning and adaptation algorithms, as well as an MSc in one of the above areas, will be a considerable plus. Programming skills, a good command of English and the capacity for teamwork are required.

Contact: email, web


Bio-Inspired In-Network Pervasive System - Dr. Christos Anagnostopoulos

 

Research Fields: In-network processing, mobile computing, computational and swarm intelligence, bio-inspired information processing, pervasive computing.

Description: We are looking for an excellent candidate who will pursue a PhD on the development of new large-scale, in-network processing methods for distributed streaming/contextual data and/or time series generated in the context of the Internet of Things (IoT). Such methods will become the basis for building intelligent and adaptive applications over IoT data. IoT is part of the future Internet and comprises many billions of devices (‘things’) that sense, compute, communicate, share knowledge, and actuate. Such devices incorporate machine intelligence, physical/virtual identities, contextual sensors, RFIDs, social media, etc. The vision of IoT is to allow ‘things’ to be connected any-time, any-place, with anything and anyone. Emerging Big Data applications based on knowledge derived from streaming contextual information include emergency situation awareness, smart city applications, remote sensing and environmental monitoring.

Challenges: In-network processing of contextual data and bio-inspired adaptation to changes in the context of IoT set forth several challenges that have to do with the nature of the contextual data and the processes that generate them. Contextual information, including time series coming from IoT devices, has a strong spatio-temporal dimension, which needs to be considered during the data modelling and learning process to achieve reliable knowledge inference/reasoning and context awareness in pervasive computing environments. Moreover, research challenges relate to in-network contextual data and knowledge fusion, and to localised adaptive bio-inspired decision making, which deals with the redundancies and interactions that exist among the distributed contextual data sources. In addition, IoT devices regularly fail, e.g. through limited battery lifetime or loss of connectivity, resulting in incomplete contextual data availability. It is challenging for in-network context prediction and adaptation algorithms to cope with incomplete and missing contextual information, concept drift and changing data distributions.
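As a deliberately simple illustration of coping with missing readings (a toy stand-in for the prediction methods the project would actually develop), a node can maintain an exponentially smoothed estimate of a neighbour's stream and substitute it whenever a reading is absent:

```python
# Hypothetical sketch: fill gaps (None) in a sensor stream with the running
# exponentially smoothed estimate, so downstream inference still has a value.

def smooth_stream(readings, alpha=0.5):
    """Return the stream with gaps filled by the smoothed prediction."""
    estimate = None
    out = []
    for r in readings:
        if r is None:
            r = estimate              # fall back to the prediction
        if estimate is None:
            estimate = r              # first observed value seeds the estimate
        elif r is not None:
            estimate = alpha * r + (1 - alpha) * estimate
        out.append(r)
    return out
```

Real in-network prediction would also track the estimate's uncertainty and detect when the substitution itself starts masking genuine drift.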

Enrolment & Opportunity: The successful candidate will enrol as a PhD student at the School of Computing Science, University of Glasgow, under the supervision of Dr Christos Anagnostopoulos, and will join the Information, Data and Analytics (IDA) Section and the Networked Systems Research Laboratory (NETLAB) of the University of Glasgow. Our labs explore several different issues, such as distributed sensor networks, mobile computing, statistical learning, scalable & adaptive information processing, intelligent systems, and bio-mimetic and bio-inspired data processing algorithms.

Skills: The ideal candidate will have a background in Computer Science and some background in either Mathematics/Statistics or Computational/Swarm Intelligence. Special areas of interest include in-network processing, basic statistics, and/or swarm intelligence. A good understanding of basic adaptation and swarm intelligence algorithms (e.g., PSO, ACO), as well as an MSc in one of the above areas, will be a considerable plus. Programming skills, a good command of English and the capacity for teamwork are required.

Contact: email, web


Adaptive data management for NoSQL data stores - Dr. Nikos Ntarmos

 

The current state-of-the-art in decentralised NoSQL databases comprises a large number of highly scalable, elastic, fault-tolerant, decentralised data management systems, often coupled with data processing frameworks with matching characteristics. As these systems are purpose-built to deal with large amounts of data, a core component of the paradigm is to "move the processing to the data". In this setting, the data are sharded and replicated across a large number of storage nodes, with the database/storage tier taking care of data persistence and consistency. The processing framework then strives to assign the processing of any part of the data to the same (or nearby) nodes as those storing replicas of said part. What is still missing, however, is the ability of these systems to sense changes in their access patterns and to adapt their inner workings appropriately, so as to accommodate and expedite their users' changing data needs without sacrificing any of the safety guarantees provided by the underlying system. We are looking for an excellent PhD candidate to perform research into mechanisms to identify access pattern changes online and to appropriately modify core design parameters of the system -- such as the degree of replication, the replica placement policy, the sharding criteria, etc. -- so as to expedite data processing via lower access costs and a fairer load distribution across the system nodes. On top of that, the proposed solutions should maintain, if not improve upon, the data consistency guarantees provided by the data management infrastructure, all while allowing the system to scale out to a large number of nodes. To accomplish these goals, the candidate will have to draw upon a large body of research, comprising statistical and indexing structures, access monitoring mechanisms, appropriate replication and consistency notions, and decentralised query processing frameworks.
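The adaptation loop described above can be sketched in miniature (all names and thresholds are our own illustrative assumptions, not any real system's API): monitor per-shard access counts online and raise a shard's replication factor once its share of the total load crosses a threshold.

```python
from collections import Counter

class AdaptiveReplicator:
    """Toy online adaptation of a shard's replication degree."""

    def __init__(self, base_replicas=2, hot_fraction=0.5, hot_replicas=4):
        self.accesses = Counter()
        self.base = base_replicas
        self.hot_fraction = hot_fraction
        self.hot = hot_replicas

    def record_access(self, shard):
        self.accesses[shard] += 1

    def replicas_for(self, shard):
        total = sum(self.accesses.values())
        if total and self.accesses[shard] / total >= self.hot_fraction:
            return self.hot       # hot shard: replicate more widely
        return self.base
```

A real mechanism would decay old counts, damp oscillation between the two levels, and coordinate the change with the consistency protocol rather than apply it unilaterally.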

Contact: email, web


Decentralised graph isomorphism testing - Dr. Nikos Ntarmos

 

Graph-structured data are prevalent in many modern big data applications, ranging from chemical and bioinformatics databases and other scientific datasets, to social networking and social-based applications such as recommendation systems. Central to high-performance graph analytics over such data is the ability to locate occurrences of pattern graphs in dataset graphs -- an operation entailing the NP-complete problem of subgraph isomorphism (sub-iso). Relevant theoretical research has produced a large number of fast and efficient algorithms for this problem, but a thorough treatment of fully distributing these algorithms is still lacking. On the other hand, the data management community has produced a number of index-based techniques to reduce the number of sub-iso tests required for datasets consisting of a large number of small-to-medium-sized graphs, but they all still require a final "verification" stage of sub-iso testing, which is largely serial on a per-stored-graph basis. Lately, researchers have started looking into the problem of sub-iso testing against a single very large graph, producing some very promising scale-out solutions, although usually for a limited set of queries or query types. We are looking for an excellent candidate who will pursue a PhD on scale-out/scale-up subgraph isomorphism testing of arbitrary query pattern graphs, supporting graph node/edge labels, borrowing and augmenting ideas and intuitions from the leading state-of-the-art centralised sub-iso solutions, as well as the latest index-based and index-less subgraph matching solutions and scalable graph processing frameworks.
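To make the core operation concrete, here is a minimal, unoptimised backtracking sub-iso check for small labelled undirected graphs (our own illustration; real algorithms such as Ullmann's and VF2 add heavy pruning and candidate ordering on top of exactly this scheme):

```python
# Graphs are given as {node: label} plus a set of frozenset edges (undirected).

def sub_iso(p_nodes, p_edges, g_nodes, g_edges):
    """Return True if the pattern embeds in the data graph, respecting labels."""
    pattern = list(p_nodes)

    def extend(mapping):
        if len(mapping) == len(pattern):
            return True
        u = pattern[len(mapping)]          # next pattern node to map
        for v in g_nodes:
            if v in mapping.values() or p_nodes[u] != g_nodes[v]:
                continue                   # injective, label-preserving only
            # every already-mapped pattern edge incident to u must exist in g
            if all(frozenset((mapping[w], v)) in g_edges
                   for w in mapping if frozenset((w, u)) in p_edges):
                if extend({**mapping, u: v}):
                    return True
        return False

    return extend({})
```

The exponential candidate loop over `g_nodes` is exactly the "verification" cost the index-based techniques above try to avoid invoking.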

Contact: email, web


Intelligence over Distributed Time Series: Learn to Adapt - Dr. Christos Anagnostopoulos and Dr. Kostas Kolomvatsos

Research Fields: multidimensional data streams, time-series, sensor networks, pervasive computing.

Description: The main aim of this PhD research is the intelligent management of distributed time series. The main focus will be on the management of heterogeneous streams of dynamically changing data and the provision of intelligent analytics techniques that build knowledge over multiple streams. The study involves the spatio-temporal aspects of the data as well as contextual information, to support solutions fully adapted to the application domain and the underlying infrastructure. Novel techniques for distribution adaptation, model inconsistency checking, distributed time series correlation and decentralised concept drift identification will be proposed and evaluated. The implementation will adopt widely known frameworks for streaming environments (e.g., Apache Storm).
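As a toy illustration of decentralised concept drift identification (our own simplification, not the project's method), a node can compare the mean of a recent window against a frozen reference window and flag drift when they diverge; real detectors such as ADWIN or DDM replace the fixed threshold with a statistically grounded test:

```python
from collections import deque

class WindowDriftDetector:
    """Flag drift when the recent-window mean departs from the reference mean."""

    def __init__(self, window=5, threshold=2.0):
        self.reference = deque(maxlen=window)   # frozen once full
        self.recent = deque(maxlen=window)      # sliding window
        self.threshold = threshold

    def update(self, x):
        """Feed one observation; return True if drift is flagged."""
        if len(self.reference) < self.reference.maxlen:
            self.reference.append(x)
            return False
        self.recent.append(x)
        if len(self.recent) < self.recent.maxlen:
            return False
        ref = sum(self.reference) / len(self.reference)
        cur = sum(self.recent) / len(self.recent)
        return abs(cur - ref) > self.threshold
```

In a distributed setting, each node would run such a detector locally and only disseminate the (rare) drift events, not the raw stream.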
Enrolment & Opportunity: The successful candidate will enrol as a PhD student at the School of Computing Science, University of Glasgow, under the supervision of Dr Christos Anagnostopoulos and Dr Kostas Kolomvatsos, and will join the Information, Data and Analytics (IDA) Section, specifically the Pervasive Distributed & Edge Intelligence (Essence) research team of the University of Glasgow. Our team explores several different issues, such as distributed sensor networks, mobile computing, statistical learning, scalable & adaptive information processing, intelligent systems, and bio-inspired processing algorithms.
Skills: The ideal candidate will have a background in Computer Science and some background in Mathematics and/or Statistics. Special areas of interest include basic statistics and/or mathematical modelling/optimisation. A good understanding of basic adaptation algorithms, as well as an MSc in one of the above areas, will be a considerable plus. Programming skills, a good command of English and the capacity for teamwork are required.

Contact: email, web


Knowledge Management in Edge Computing: Dealing with Uncertainty - Dr. Christos Anagnostopoulos and Dr. Kostas Kolomvatsos

 

Research Fields: knowledge management, uncertainty reasoning, fusion, edge computing.

Description: Edge computing offers an infrastructure that can reduce the latency end users experience when communicating with the back-end network infrastructure. Various processing schemes can be proposed for the management of data present at the edge of the network, with the aim of extracting knowledge and supporting applications across a wide range of domains. This PhD research studies potential innovations in knowledge management at the edge, focusing on uncertainty. The study covers distributed solutions to manage and reason over the uncertainty about the knowledge that other nodes in the network may hold. Statistical and computational intelligence models for the aggregation/fusion of knowledge extracted by edge nodes should incorporate the local view of each node and the inherent uncertainty of the application domain.
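One classical, simple instance of uncertainty-aware fusion (offered only as a baseline illustration of the problem, not the project's intended method): each node reports a local estimate together with its variance, and the aggregate is the inverse-variance weighted mean, so more confident nodes count for more.

```python
def fuse(estimates):
    """Inverse-variance fusion.

    estimates: list of (value, variance) pairs from different edge nodes.
    Returns the fused value and its (reduced) variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total
```

Note that the fused variance is always smaller than any single node's variance, which is exactly why aggregating many uncertain local views can still yield reliable knowledge.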

Enrolment & Opportunity: The successful candidate will enrol as a PhD student at the School of Computing Science, University of Glasgow, under the supervision of Dr Christos Anagnostopoulos and Dr Kostas Kolomvatsos, and will join the Information, Data and Analytics (IDA) Section, specifically the Pervasive Distributed & Edge Intelligence (Essence) research team of the University of Glasgow. Our team explores several different issues, such as distributed sensor networks, mobile computing, statistical learning, scalable & adaptive information processing, intelligent systems, and bio-inspired processing algorithms.

Skills: The ideal candidate will have a background in Computer Science and some background in Mathematics and/or Statistics. Special areas of interest include basic statistics and/or mathematical modelling/optimisation. A good understanding of basic adaptation algorithms, as well as an MSc in one of the above areas, will be a considerable plus. Programming skills, a good command of English and the capacity for teamwork are required.

Contact: email, web

 


Information, Data and Analytics (IDA)
Information, Dynamics and Interaction


Bayesian methods for Metabolomics - Dr. Simon Rogers

 

Metabolomics is the large-scale study of the molecules involved in the chemical reactions that sustain life (metabolites). Measuring the abundance of metabolites in complex samples (e.g. human blood or urine) using Mass Spectrometry (MS) is potentially very useful in understanding disease and developing drugs. However, it is very challenging due to various artefacts of the MS systems. This project will address the following particular problems:

  1. The large number of mass peaks produced by each metabolite: each metabolite results in many peaks in the spectrum. Developing clustering methods to group these peaks together will reduce the number of false positive identifications.
  2. Absolute Quantitation: only charged molecules can be detected by the MS. Different molecules ionise with different levels of efficiency, making it impossible to compare measured intensities between molecules in a particular sample. Using advanced regression techniques we will attempt to predict ionisation efficiency, enabling us to correct measured intensities to values that can be compared.
  3. Text models for fragment analysis: Data in which molecules have been fragmented is useful for performing identification. Mining this data for patterns (e.g. co-occurring groups of fragments) has the potential to uncover novel metabolites and understand the effect of drugs.
  4. Calibrating identification scores: When fragment data is available, metabolites can be annotated by passing the observed fragments to external identification servers. However, each server returns a score on a different scale. We will build statistical models to enable us to map scores from one tool onto another.
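For problem 1 above, a deliberately naive baseline makes the task concrete (the project itself proposes Bayesian clustering; the tolerance value here is purely illustrative): greedily group peaks whose retention times fall within a tolerance of the first peak of the current cluster.

```python
def group_peaks(retention_times, tol=0.5):
    """Greedy one-pass grouping of peaks by retention-time proximity."""
    clusters = []
    for rt in sorted(retention_times):
        if clusters and rt - clusters[-1][0] <= tol:
            clusters[-1].append(rt)     # close to the cluster's anchor peak
        else:
            clusters.append([rt])       # start a new cluster
    return clusters
```

The weakness of this baseline (a hard tolerance, no use of peak shape or m/z relationships) is precisely what a probabilistic clustering model would address.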

Contact: email, web


Casual Interaction: creating novel styles of human-computer interaction that span a range of engagement levels - Professor Roderick Murray-Smith

 

The focused–casual continuum is a framework for describing interaction techniques according to the degree to which they allow users to adapt how much attention and effort they choose to invest in an interaction, conditioned on their current situation. Casual interactions are particularly appropriate in scenarios where full engagement with devices is socially frowned upon, unsafe, physically challenging or too mentally taxing.

This thesis will involve the design of new interaction approaches which use a range of novel sensing and feedback mechanisms to go beyond direct touch and enable wider use of casual interactions. This will include ‘around device’ interactions and ‘Internet of Things’-style interaction with specific objects. Making these systems work will require the development of systems based on signal processing and machine learning.

To test the performance of the system we will use motion capture systems and biomechanical models of the human body, wearable eye trackers and models of attention to infer the mental and physical effort required to interact with the system.

Related work:

  • H. Pohl, R. Murray-Smith, Focused and Casual Interactions: Allowing Users to Vary Their Level of Engagement ACM SIG CHI 2013 pdf
  • The BeoMoment from B&O is an example of Casual interaction which was designed together with our group in a recent Ph.D. thesis - Boland, Daniel (2015) Engaging with music retrieval.

Contact: email, web


Information, Data and Analytics (IDA)
Information Retrieval


Renting the Right Room: Improving Airbnb Recommendation with Deep learning - Dr Richard McCreadie and Professor Iadh Ounis

Description: The Glasgow Information Retrieval group is looking for motivated students interested in our doctoral program. In particular, in collaboration with the Adam Smith Business School, we are looking for a PhD student to work on retrieval challenges in the emerging ‘shared economy’ (e.g. online room renting/sharing websites). A successful student taking this opportunity will be provided access to Big Data from Airbnb, including sales information and dates, property descriptions and images, as well as property reviews.

The broad aim of this PhD programme is to examine how to use and extend state-of-the-art machine learning and artificial intelligence algorithms (e.g. new neural network architectures) to better satisfy tenants on platforms like Airbnb. This will involve learning about how such platforms recommend properties to people, analysing the influential factors that lead to a good customer experience, and ultimately designing new approaches that produce better recommendations than current solutions.
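The underlying principle of such recommenders can be sketched with classic matrix factorisation (only a toy stand-in: the project would use neural architectures, and every name and hyperparameter here is our own assumption): score each property for a user as the dot product of learned latent vectors, trained by stochastic gradient descent on observed ratings.

```python
import random

def factorise(ratings, n_users, n_items, k=2, lr=0.05, epochs=200, seed=0):
    """Learn user/item latent factors from (user, item, rating) triples."""
    rng = random.Random(seed)
    U = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(a * b for a, b in zip(U[u], V[i]))
            err = r - pred
            for f in range(k):
                uf = U[u][f]                 # use pre-update value for V's step
                U[u][f] += lr * err * V[i][f]
                V[i][f] += lr * err * uf
    return U, V

def score(U, V, u, i):
    """Predicted preference of user u for item i."""
    return sum(a * b for a, b in zip(U[u], V[i]))
```

Highly rated properties should end up scoring above poorly rated ones for the same user; the research challenge is extending this with the rich side information (descriptions, images, reviews) the Airbnb data provides.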

Environment: The successful candidate will enrol as a PhD student at the School of Computing Science (Information, Data and Analytics Section), University of Glasgow, under the supervision of Prof Iadh Ounis & Dr Richard McCreadie, and will be co-supervised by Dr Bowei Chen from the Adam Smith Business School. The successful candidate will be based in the Glasgow Information Retrieval Group, and will be expected to collaborate with experts in Big Data processing and Machine Learning from across the IDA Section. The successful candidate will have access to a state-of-the-art cluster of machines, including new GPU servers.

Skills: The ideal candidate will have a strong background in Computer Science and some background in Statistics. In particular, the student is expected to have strong programming skills, some prior experience of machine learning, a good command of English and teamwork skills.

Eligibility: Full funding is provided for EU/UK students (standard EU/UK fees and stipend rates included). Non-EU/UK students can apply; however, they are subject to higher international fees, which will not be fully covered by the scholarship.

Contact Information: For further information, interested candidates can contact Richard McCreadie (richard.mccreadie@glasgow.ac.uk) or Iadh Ounis (iadh.ounis@glasgow.ac.uk)


Information, Data and Analytics (IDA)
Computer Vision and Autonomous Systems

Continuous visual perception for dexterous clothing manipulation - Dr. Gerardo Aragon Camarasa

Description: This project is about investigating and developing continuous visual perception algorithms to recognise and track rigid and deformable objects for robotic hand-eye manipulation tasks. The key aim is to leapfrog the outputs of a major European research project (CloPeMa, www.clopema.eu) in robotic clothing manipulation by exploiting current developments of Deep-Learning technologies for visual recognition, registration and segmentation tasks.

Automated clothing manipulation is of significant global importance for tasks such as on-shore production, mass-market custom clothing, clothing recycling and automated laundry services (commercial and home, to name a few). However, automated clothing manipulation is notoriously difficult to achieve since clothing is non-rigid and can take an almost infinite number of possible configurations. We humans can continuously see and understand the environment, and hence can recognise deformable objects accurately and manipulate them intelligently. Robots, on the other hand, see, then think about what to do with the acquired information, making the inference of actions challenging for clothing manipulation under the sense-plan-act robot control paradigm (as developed during CloPeMa). Therefore, our aim is to develop Bayesian and Deep Learning architectures that continuously survey the state of a deformable object in order to adjust manipulation actions accordingly.

This PhD project will side-step the above limitations by developing deep learning methods for closed-loop visual perception: recognition, segmentation and pose estimation of objects (deformable and rigid), the robot’s workspace, and its surroundings. By developing the proposed closed-loop visual perception and action, it will be possible to increase the commercial readiness level of the proposed system for the homecare and textile manufacturing sectors, using GPU-accelerated vision algorithms to achieve a viable degree of clothing perception and manipulation dexterity close to human-speed performance.

How: It is envisaged that recognition and segmentation of clothing in a pile will be bootstrapped by adopting current deep learning architectures, e.g. Mask R-CNN, YOLO, etc. These will allow us to implement a fast, real-time vision pipeline using a dedicated GPU for this task. Hence, the system will constantly survey the scene and provide visual object hypotheses which can then be refined as the robot interacts with the objects. For this, Bayesian architectures will be devised to aggregate hypotheses to reduce uncertainty in the detection, and consequently increase the reliability of detection in real time. This project will accommodate a wide variety of use cases based on state-of-the-art benchmarks in robotic grasping and extrapolate these benchmarks to deformable objects.
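The aggregation step can be illustrated with the simplest possible Bayesian filter (our own sketch under an independence assumption, not the architecture the project would develop): fuse per-frame detector outcomes for the same object hypothesis into a posterior via recursive Bayes updates in odds form.

```python
def update_posterior(prior, likelihood_ratio):
    """One Bayes update in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

def aggregate(prior, frame_outcomes, tp_rate=0.8, fp_rate=0.2):
    """Fuse per-frame detector outcomes (True = detector fired).

    tp_rate/fp_rate are assumed detector characteristics; frames are
    treated as independent observations of the same hypothesis.
    """
    p = prior
    for fired in frame_outcomes:
        lr = (tp_rate / fp_rate) if fired else ((1 - tp_rate) / (1 - fp_rate))
        p = update_posterior(p, lr)
    return p
```

A few consistent detections quickly drive the posterior towards certainty, which is the mechanism by which repeated surveying of the scene reduces uncertainty before the robot commits to a grasp.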

 

Contact: email web


Reinforcement learning in robotics manipulation using visual feedback - Dr J. Paul Siebert and Dr Simon Rogers

The current state-of-the-art in robotics has reached the stage where affordable standard hardware platforms, such as arms and manipulators, controlled by powerful computers, are now available at relatively low cost. What is holding back the development of general-purpose robots as the next mass-market consumer product is the current approach of painstakingly designing every behaviour control algorithm necessary to operate the robot for each task it is required to perform.

The objective of this research is to investigate methods that allow the robot to discover by itself how to undertake tasks, such as manipulating objects, by observing the outcome of hand-eye-object interactions. In this approach we propose to combine visual representations of the scene observed by the robot with manipulator interaction to allow the machine to discover, i.e. learn, how to manipulate objects in specific situations: e.g. how to manipulate cloth to flatten it, how to unfold or fold clothing, how to grasp rigid objects of different shapes, or how to grasp non-rigid objects or objects of widely differing shapes.

The project would comprise investigating:

  • Appropriate visual representations for surfaces and objects which can be extracted from images captured by the robot's cameras using computer vision techniques – we already have appropriate 3D vision systems able to represent clothing and certain classes of object.
  • The relationship between the physical action applied by a manipulator and the outcome of this action, as determined by an algorithm that ranks to what degree the applied action has brought the observed world-state towards a desired world-state, e.g. by how much did this flattening action make the cloth flatter, or did this grasping action allow the gripper to grab the cup?
  • Action sequences and their overall consequences for carrying out a task: for example, in non-monotonic learning strategies, early actions may actually reduce any intuitive measure of task progress prior to achieving larger gains later in the task (thereby achieving greater gains overall than a behaviour that attempts to achieve the same goal by means of monotonic, incremental improvements).
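The ranking idea in the bullets above can be sketched as follows (a toy illustration: the "flatness" metric in [0, 1] and all function names are hypothetical): reward an action by the improvement it produces in an observed task-progress metric, and judge a whole sequence by its overall gain, so non-monotonic sequences are not penalised for temporary setbacks.

```python
def action_gain(before, after):
    """Reward signal: improvement in the observed task-progress metric."""
    return after - before

def sequence_gain(states):
    """Overall gain of an action sequence over its observed states."""
    return states[-1] - states[0]

def best_action(before, candidates):
    """Pick the candidate whose (simulated) outcome state gains most.

    candidates: dict mapping an action name to its predicted outcome state.
    """
    return max(candidates, key=lambda a: action_gain(before, candidates[a]))
```

Under this scoring, a sequence like [0.2, 0.1, 0.9] (an early setback, then a large gain) is correctly preferred over a cautious monotonic sequence ending at 0.5.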

Our research group has excellent robot facilities on which this project will be based, including Dexterous Blue (housed in its own laboratory on Level 7 of the Boyd Orr Building) – a large two-armed robot richly sensorised with a high-resolution (sub-millimetre) binocular vision system, lower-resolution Xtion RGBD sensors, in-gripper tactile sensing and microphonic sensing for clothing and surface texture perception.

Our research using Dexterous Blue can be seen in action at: www.clopema.eu. Access to the Baxter robot, situated in the Computing Science foyer, will also be available, and an example of a student project using Baxter can be viewed at: https://www.youtube.com/watch?v=zyzaY4ur8As

Contact: email web


Cognitive Visual Architectures for Autonomous and Robotic Systems - Dr J. Paul Siebert

 

How well a robot senses the world in effect defines the types of activity it is able to undertake. Vision is the richest of the senses, and recent advances in computer vision have enabled robots to perceive the world sufficiently well to map their environment, localise their position and recognise objects, other robots and the robot's own actuators. As a result, robots are now capable of navigating autonomously and interacting with specific objects (whose appearance has been learned) or classes of object, such as people or cars, again learned from large image databases.

Despite this progress, the state-of-the-art in robot vision is still incomplete in terms of being able to represent objects richly, whereby a full set of visual characteristics is extracted and represented to feed higher reasoning processes. The class of an object gives a noun (e.g. human, car, cup, etc.) while object attributes such as colour, texture, spatial location and motion provide adjectives (blue car, distant person). Shape, geometry, motion and other visual properties discovered through interaction can provide affordances (verbs from the robot's perspective) – what the robot can do with the object (move it by pushing, knock it over, lift it, etc.). These object properties discovered by means of vision and interaction can then be used in reasoning processes, the classic example being the ability of a machine to respond to the command “grasp the red box closest to the green bottle”. Therefore, to be truly autonomous, a robot must be capable of understanding a scene and be able to reason about it. To achieve such cognitive ability it is necessary to couple sensed visual attributes (and potentially attributes obtained from other sensing modalities) to a reasoning engine, allowing action plans to be deduced that let the robot achieve goals without the need for wholly pre-programmed actions.

There are potentially a number of projects that could be based on the above theme of vision for robots and other autonomous systems, based on the following investigations:

  • Visual processing architectures to support the extraction of a rich set of visual attributes (2D and 3D) from images captured by the robot's camera systems. This might include space-variant (foveated) visual architectures, as found in the mammalian visual system, supporting attention control and real-time execution.
  • Reasoning and planning systems for controlling interaction of the robot with the environment coupled with visual attention and perception, binding visual perception to action and learning.

Our research group has excellent robot facilities on which this project can be based, including Dexterous Blue (housed in its own laboratory on Level 7 of the Boyd Orr Building) – a large two-armed robot richly sensorised with a high-resolution (sub-millimetre) binocular vision system, lower-resolution Xtion RGBD sensors, in-gripper tactile sensing and microphonic sensing for clothing and surface texture perception. Our research using Dexterous Blue can be seen in action at: www.clopema.eu. Access to the Baxter robot, situated in the Computing Science foyer, will also be available, and an example of a student project using Baxter can be viewed at: https://www.youtube.com/watch?v=zyzaY4ur8As

Contact: email web