Computer Vision & Autonomous Systems

Computer-based analysis of images, to extract information and classify their contents, is becoming increasingly important in all walks of life. For example, by combining the science of 'photogrammetry' (measurement using cameras) with digital camera technology, it becomes possible to capture 3D models of people, animals and objects that are metrically accurate and photo-realistic in appearance. These models can then be analysed and animated for applications such as virtual actors or sports science.

The Computer Vision and Autonomous Systems (CVAS) group in the School of Computing Science investigates fundamental issues of how to analyse images and how to apply this knowledge within practical applications. Our projects cover all aspects of 3D human body modelling, including animation and skin-surface modelling, robotic perception and manipulation, materials, the informatics of perception, and distributed systems. This work opens up a broad range of application areas, such as creative media, engineering, medicine, textiles & clothing, military & security, internet & communications, forensics and fine art. A key objective of the group is to combine 3D measurement and modelling techniques with image-understanding approaches to construct cognitive robot vision systems that actively search their operating environments using digital cameras.

Current Activities

  • Robotics@Glasgow Forum - University of Glasgow
  • A study into robotic coat hanger manipulation, Spring/Summer 2016
  • Object Learning, reasoning and scene understanding in an active robot vision system - PhD Studentship, 2013-2016
  • Registration of cross-modal tomography images of the lung - EngD with Toshiba Medical Visualisation Systems, 2013-2016
  • Single Image Based Removal of Dynamic Weather Effects for Outdoor Image Enhancement - Singapore Institute of Technology, 2016-2019
  • An Investigation of Deep Convolutional Neural Networks and Boundary Detection for Automatic Image Segmentation - PhD studentship, 2016-2019 
  • Large-scale reliable robotics - ongoing collaboration with Glasgow Systems Section


Past Activities

  • Automated Tuber Quality Assessment by Computer Vision - EPSRC (IAA), 2015
  • Dexterous Robotic Manipulation Systems for Clothing and Flexible Materials Workshop at euRobotics Forum - 2014 
  • Glasgow Knowledge Exchange: A proof-of-concept demonstration of a Cognitive Vision System within the sEnglish® programming environment - EPSRC (IAA), 2013
  • Integrated Visual Perception Architecture for Robotic Clothes Perception and Manipulation - SICSA PhD Studentship, 2012-2016 
  • CLOPEMA (Clothes Perception and Manipulation) - EU FP7 STREP 2012-2015 (www.clopema.eu)
  • The analysis of three-dimensional facial dysmorphology - Wellcome Trust, 2009-2012

Laboratories and Facilities

CVAS hosts two full-size humanoid robots. Dexterous Blue is a large industrial robot comprising two arms, supplied by Kawasaki Motoman, mounted on a rotating plinth. The robot has been equipped with a steerable binocular vision head and processing system developed in our group. This powerful and precise 750 kg machine is housed in a custom laboratory on the 7th floor of the Boyd Orr Building, with a control/viewing gallery for safe operation. Dexterous Blue is fitted with specialised end effectors developed for clothing manipulation within the CLOPEMA project.

We also have a Baxter Research Robot, used for undergraduate and MSc projects, in a showcase laboratory in the foyer of the Sir Alwyn Williams Building. Unlike Dexterous Blue, which is operated in isolation from humans under a strict safety plan, Baxter is inherently safe and can be operated in close proximity to operators and bystanders. We have now mounted a two-fingered SAKE gripper on one of Baxter’s arms, and this is functioning and available for projects.

Information about research using these robots is available on the CloPeMa website and our YouTube channel.

Academic Staff

Researchers

  • Mr Aamir Khan
  • Mr Finlay McCourt
  • Ms Xiaomeng Wang
  • Mr Lai Meng Tang
  • Mr Long Chen
  • Mr James Sloan (Toshiba Medical Visualisation Systems)

Affiliate/Associate members



Past Events

Understanding Capsule Networks (16 May, 2018)

Speaker: Piotr Ozimek

Abstract:

In recent years convolutional neural networks (CNNs) have revolutionized the fields of computer vision and machine learning. On multiple occasions they have achieved state-of-the-art performance on a variety of vision tasks, such as object detection, classification and segmentation. In spite of this, CNNs suffer from a variety of problems: they require large and diverse datasets that may be expensive to obtain, they do not have an explicit and easy-to-interpret internal object representation, and they are easy to fool by manipulating spatial relationships between visual features in the input image. To address these issues, Hinton et al. have devised a new neural network architecture called the Capsule Network (CapsNet), which consists of explicit and encapsulated neural structures whose output represents the detected object or feature in a richer and more interpretable format. CapsNets are a new concept that is still being researched and developed, but they have already achieved state-of-the-art performance on the MNIST dataset without any data augmentation. In this talk I will give a brief overview of the current state of CapsNets, explaining the motivation behind them as well as their architecture.
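
For readers new to the architecture, the minimal NumPy sketch below illustrates the two operations at the heart of CapsNets: the 'squash' non-linearity and routing-by-agreement between capsule layers, following Sabour, Frosst and Hinton (2017). The shapes, the three routing iterations and all names are illustrative assumptions, not code from the talk.

# Minimal sketch of CapsNet's squash non-linearity and dynamic routing
# (routing-by-agreement). Shapes and iteration count are illustrative.
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Shrink vector length into (0, 1) while preserving orientation."""
    sq_norm = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * v / np.sqrt(sq_norm + eps)

def route(u_hat, n_iters=3):
    """Route predictions u_hat of shape (n_in, n_out, dim_out):
    each lower-level capsule's 'vote' for each higher-level capsule."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum of votes
        v = squash(s)                                         # candidate output capsules
        b += (u_hat * v[None]).sum(axis=-1)                   # reward agreeing votes
    return v

# Example: 8 lower-level capsules voting for 3 output capsules of dimension 4.
votes = np.random.randn(8, 3, 4)
print(route(votes).shape)  # (3, 4)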


Recognition of Grasp Points for Clothes Manipulation under Unconstrained Conditions (12 October, 2017)

Speaker: Luz Martinez

Abstract: I will talk about a system for recognizing grasp points in RGB-D images. The system is intended to be used by domestic robots when deploying clothes lying at random positions on a table, and it takes into consideration that grasp points are usually near key parts of clothing, such as the waist of pants or the neck of a shirt. I will also cover my recent work on clothing simulators, which I use to obtain images for training deep learning networks.
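
To make the idea concrete, here is a hedged Python sketch of one plausible scoring scheme: candidate grasp points on a segmented garment are ranked by height above the table combined with proximity to detected key parts. The key-part detector is assumed to exist and simply returns (row, column) positions; the Gaussian proximity score, parameter values and function names are our own illustrative choices, not the speaker's system.

# Hypothetical grasp-point scoring on a segmented garment in a depth image.
import numpy as np

def grasp_candidates(mask, depth, key_parts, sigma=30.0, top_k=5):
    """Rank garment pixels as grasp points.

    mask:      boolean array, True where the garment lies on the table.
    depth:     depth image in metres (smaller = closer to the camera).
    key_parts: list of (row, col) positions of detected key parts.
    """
    rows, cols = np.nonzero(mask)
    # Height above the table: higher (closer) pixels are easier to pinch.
    height = depth[mask].max() - depth[rows, cols]
    # Proximity to the nearest key part, as a Gaussian score.
    pts = np.stack([rows, cols], axis=1).astype(float)
    kp = np.asarray(key_parts, dtype=float)
    d2 = ((pts[:, None, :] - kp[None, :, :]) ** 2).sum(-1).min(axis=1)
    proximity = np.exp(-d2 / (2 * sigma ** 2))
    # Combine the two cues and return the top-k pixel coordinates.
    best = np.argsort(height * proximity)[::-1][:top_k]
    return list(zip(rows[best], cols[best]))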

Short-bio: Luz is a PhD student in Electrical Engineering at the University of Chile and currently a visiting research student in the Computer Vision and Autonomous Systems group. She has worked with service robots for four years and has expertise in computer vision, computational intelligence, voice recognition and high-level behaviour design. Her PhD thesis focuses on clothing recognition using active vision.

Simple Rules from Chaos: Towards Socially Aware Robotics using Agent-Local Cellular Automata (08 May, 2017)

Speaker: Alexander Hallgren

Controlling robotic agents requires complex control methods. This study aims to exploit emergent behaviours to reduce that complexity. Cellular automata (CA) are employed as a means of generating emergent behaviour at low computational cost. A novel architecture, based on the subsumption architecture, is developed in which an agent-local CA influences the selection of a behaviour. The architecture is tested by measuring the time the robot takes to navigate through a maze, and two different models are used to evaluate the system. The results indicate that the current configuration is ineffective, but a number of tasks are proposed as future work.
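
One hypothetical reading of the approach, sketched in Python below: a binary sensor vector seeds a one-dimensional cellular automaton, the CA is stepped synchronously with an elementary rule, and the resulting live-cell density selects among prioritised behaviours in a subsumption-style stack. Rule 110, the step count and the density thresholds are illustrative assumptions rather than details taken from the study.

# Sketch of an agent-local 1-D CA driving behaviour selection.
import numpy as np

RULE = 110  # elementary CA rule; an assumption for illustration

def step(cells, rule=RULE):
    """One synchronous update of a 1-D binary CA with wraparound."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    idx = (left << 2) | (cells << 1) | right  # neighbourhood code 0..7
    return (rule >> idx) & 1                  # look up the rule table

def select_behaviour(sensors, n_steps=8):
    """Map a binary sensor vector to one of three prioritised behaviours."""
    cells = np.array(sensors, dtype=np.uint8)
    for _ in range(n_steps):
        cells = step(cells)
    density = cells.mean()
    if density > 0.66:
        return "avoid_obstacle"  # highest-priority layer subsumes the rest
    if density > 0.33:
        return "follow_wall"
    return "wander"

print(select_behaviour([1, 0, 0, 1, 0, 1, 1, 0]))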

Integrating a Biologically Inspired Software Retina with Convolutional Neural Networks (08 May, 2017)

Speaker: Piotr Ozimek

Abstract:

Convolutional neural networks are the state-of-the-art machine learning model for a wide range of computer vision tasks; however, a major drawback of the method is that there is rarely enough memory or computational power for ConvNets to operate directly on large, high-resolution images. We present a biologically inspired method for pre-processing images provided to ConvNets, the benefits of which are:
1) a visual attention mechanism that preserves high-frequency information around the foveal focal point by the use of space-variant subsampling;
2) a conforming and inherently scale- and rotation-invariant mapping for presenting images to the ConvNet;
3) a highly parameterizable image compression process.
The method is based on the mammalian retino-cortical transform, and this is the first attempt at integrating such a process with ConvNets. To evaluate the method, a dataset was built from ImageNet and a set of ConvNets with identical architectures was trained on raw, partially pre-processed and fully pre-processed images. The ConvNets achieved comparable results, suggesting an untapped potential in drawing inspiration from natural vision systems.
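
As a concrete illustration of the space-variant subsampling in point 1, the Python sketch below resamples an image onto a log-polar grid, a common simplified model of the mammalian retino-cortical transform. It is a stand-in for the group's software retina, which uses a tessellation of overlapping receptive fields rather than this nearest-neighbour grid; the ring and wedge counts are illustrative simplifications.

# Log-polar, retina-like subsampling around a fixation point.
import numpy as np

def log_polar_sample(img, center, n_rings=64, n_wedges=128):
    """Resample a grey-scale image onto a log-polar grid around `center`.

    Rings are spaced exponentially, so resolution is high at the fovea
    (the fixation point) and falls off towards the periphery.
    """
    h, w = img.shape
    cy, cx = center
    r_max = np.hypot(max(cy, h - cy), max(cx, w - cx))
    radii = np.exp(np.linspace(0, np.log(r_max), n_rings))        # ring radii
    thetas = np.linspace(0, 2 * np.pi, n_wedges, endpoint=False)  # wedge angles
    ys = cy + radii[:, None] * np.sin(thetas[None, :])
    xs = cx + radii[:, None] * np.cos(thetas[None, :])
    ys = np.clip(np.round(ys).astype(int), 0, h - 1)
    xs = np.clip(np.round(xs).astype(int), 0, w - 1)
    return img[ys, xs]  # (n_rings, n_wedges) "cortical" image

# The (rings x wedges) output is what would be fed to the ConvNet: rotation
# about the fixation point becomes a shift along the wedge axis, and scaling
# becomes a shift along the ring axis, giving the invariances of point 2.
img = np.random.rand(480, 640)
print(log_polar_sample(img, center=(240, 320)).shape)  # (64, 128)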
