cSCAN is a uniquely interdisciplinary Research Centre in the School of Psychology and Neuroscience at the University of Glasgow. It brings together researchers from complementary disciplines, including Psychology, Neuroscience, Cognitive Science, Computer Science, and Engineering, to examine fundamental questions in social perception, cognition, and interaction, and to harness the answers for meaningful applications in social contexts.
Our research on social perception examines how humans process social signals in multiple channels, principally vision (faces, bodies) and audition (voices, language), and how the cultural, neural, and biological underpinnings of this processing affect interactions with artificial agents (e.g., social robots, as well as avatars encountered in real or virtual environments). Our work on social cognition and interaction examines how emotions, language, social coordination, and culture shape social communication. Finally, we bring fundamental insights from these areas to bear on important societal issues in the digital economy, such as human-technology interactions.
Our complementary research teams also develop and apply a wide variety of innovative behavioural methods, brain imaging technologies, virtual reality environments, and computational modelling techniques. As part of our interdisciplinary environment, cSCAN members participate in the UKRI Centre for Doctoral Training (CDT) in Socially Intelligent Artificial Agents in partnership with the School of Computing Science and collaborate with the Centre for Cognitive Neuroimaging (CCNi).
cSCAN members also lead international research networks, holding Associate Editor roles and editorial board memberships at prominent journals (e.g., Psychological Science, JEP: General, Cognition, PNAS), and attract substantial national and international funding (e.g., ERC, RCUK, ONR MURI, DARPA, and a range of UK trusts and foundations) alongside industrial partnerships (e.g., FurHat Robotics, Dimensional Imaging).
- The 20th ACM International Conference on Intelligent Virtual Agents (IVA), hosted by cSCAN (October 2020).
- The 7th Consortium of European Research on Emotion (CERE) Conference, hosted by cSCAN (April 2018).
- Face Facts: Revealing the information hidden in faces. Interactive exhibition at the Royal Society of London’s 2015 Summer Science Exhibition.
Equipment & Facilities
cSCAN facilities include a wide range of state-of-the-art equipment, including:
- Head-mounted eye-tracker with real-world tracking
- Vicon-based motion capture lab
- Qualisys Motion Capture lab
- Xsens motion capture suit for portable motion capture
- VR labs with Vive Pro Eye and Oculus Quest headsets
- Character animation system for fully articulated, interactive characters
- DI3D face capture system with software to analyse images
- Di4D stereo photogrammetry and facial motion capture system
- Real-time 3D facial animation and rendering system
- 3D facial identity database and synthetic facial identity generation system
- Suite of humanoid and non-humanoid social robots
Major Grants
- 2022 – 25 Minerva Fast-Track Award, Max-Planck-Gesellschaft (€1,000,000) "Language Evolution and Adaptation in Diverse Situations" Principal Investigator RAVIV
- 2018 – 24 European Research Council Starting Grant (£1,878,815) "Computing the Face Syntax of Social Communication" Principal Investigator JACK (2 PDRAs, 1 PhD student)
- 2016 – 23 European Research Council Starting Grant (€1,809,000) "Mechanisms and Consequences of Attributing Socialness to Artificial Agents" Principal Investigator CROSS (3 PDRAs, 2 PhD students, 1 programmer)
- 2018 – 23 Marie Curie Innovative Training Network (€4,091,824) "ENTWINE: The European Network on Informal Care" Co-Investigator CROSS (1 PhD student)
- 2018 – 22 Philip Leverhulme Prize in Psychology (£100,000) "Using Art & Neuroscience to Inform Next Generation AI" Principal Investigator CROSS (1 Postdoc)
Selected Publications
Liu, M., Duan, Y., Ince, R. A. A., Chen, C., Garrod, O. G. B., Schyns, P. G., & Jack, R. E. (2022). Facial expressions elicit multiplexed perceptions of emotion categories and dimensions. Current Biology, 32(1), 200-209. https://doi.org/10.1016/j.cub.2021.10.035
Raviv, L., & Arnon, I. (2018b). The developmental trajectory of children's auditory and visual statistical learning abilities: Modality-based differences in the effect of age. Developmental Science, 21(4), e12593. https://doi.org/10.1111/desc.12593
Raviv, L., Jacobson, S. L., Plotnik, J. M., Bowman, J., Lynch, V., & Benítez-Burraco, A. (2023). Elephants as an animal model for self-domestication. Proceedings of the National Academy of Sciences, 120(15), e2208607120. https://doi.org/10.1073/pnas.2208607120
Snoek, L., Jack, R. E., Schyns, P. G., Garrod, O. G. B., Mittenbühler, M., Chen, C., Oosterwijk, S., & Scholte, H. S. (2023). Testing, explaining, and exploring models of facial expressions of emotions. Science Advances, 9(6), eabq8421. https://doi.org/10.1126/sciadv.abq8421