Cognitive binocular vision of space robotics
The expense and hazards of space travel, and the sheer distances involved (with the associated communications lags to Earth), have driven the need to launch wholly autonomous, robot-controlled missions.
Real-time visual perception by machine is a key ingredient in controlling planetary rovers and other time-critical operations such as spacecraft docking, allowing these autonomous systems to "see what they are doing".
Our current work in the School of Computing Science exploits a combination of algorithms that mimic aspects of human visual perception, including space-variant visual sensing (foveation), to model feature extraction and to construct visual representations that drive visual search behaviours based on both reactive and deliberative (cognitive) control mechanisms.
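To illustrate the idea of space-variant (foveated) sensing, the sketch below samples an image on a log-polar grid: resolution is densest at the centre (the "fovea") and falls off logarithmically towards the periphery, loosely mimicking the human retina. This is an illustrative sketch only, not the group's actual implementation; the function and parameter names (`logpolar_foveate`, `n_rings`, `n_wedges`) are our own.

```python
import numpy as np

def logpolar_foveate(image, n_rings=32, n_wedges=64):
    """Sample a 2-D image on a log-polar grid centred on its midpoint.

    Returns an (n_rings, n_wedges) array: sampling is dense near the
    centre and exponentially sparser towards the edge, so most of the
    data budget is spent on the region of interest.
    (Illustrative sketch; names and parameters are assumptions.)
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # Ring radii grow exponentially: fine near the fovea, coarse outside.
    radii = r_max ** (np.arange(1, n_rings + 1) / n_rings)
    angles = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    # Nearest-neighbour lookup of each (ring, wedge) sample point.
    ys = np.clip(np.round(cy + radii[:, None] * np.sin(angles)), 0, h - 1).astype(int)
    xs = np.clip(np.round(cx + radii[:, None] * np.cos(angles)), 0, w - 1).astype(int)
    return image[ys, xs]

# Usage: foveate a synthetic 128x128 gradient image.
img = np.linspace(0.0, 1.0, 128 * 128).reshape(128, 128)
fov = logpolar_foveate(img)
```

The pay-off is data reduction: the 128x128 image (16,384 pixels) is summarised by a 32x64 foveated map (2,048 samples) while keeping near-full resolution at the point of fixation.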
Why is this research important?
Reliable and sophisticated cognitive vision systems will underpin advanced robotic systems capable of undertaking complex missions that require flexible responses to unpredictable circumstances: for example, learning the appearance of a landing site so that a rover can navigate it reliably, or controlling a manipulator so that it can visually interact with samples identified and collected during a mission.
Similarly, in-transit tasks such as robotic inspection of spacecraft exteriors, and subsequent repair or adjustment, become possible only when supported by advanced visual sensing.