Centres for Doctoral Training

High-Fidelity 4D Facial Reconstruction from Video for Social Signal Understanding

Supervisors:

Hui Lu, MVLS, School of Psychology and Neuroscience

Tanya Guhu, CoSE, School of Computing Science

Rachael E Jack, MVLS, School of Psychology and Neuroscience


PhD Project Summary: 

Human faces convey a wealth of social and emotional information: facial expressions communicate our internal emotional states, while the shape, colour, and texture of a face can reveal age, sex, and ethnicity. As a highly salient source of social information, human faces are integral to shaping social communication and interaction. Faces in videos can be viewed as temporal sequences of facial images with intrinsic dynamic changes; establishing correspondences between faces across frames is therefore essential for tracking and reconstructing faces from video.
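As a minimal illustration of cross-frame correspondence (a toy sketch, not the project's method), one can match facial landmarks in consecutive frames by nearest-neighbour search; the function name and the 2D landmark representation below are assumptions for the example:

```python
import numpy as np

def match_landmarks(prev, curr):
    """Match each landmark in the current frame to its nearest
    neighbour in the previous frame (Euclidean distance).

    prev, curr: (N, 2) arrays of 2D landmark positions.
    Returns an index array idx such that prev[idx[i]] is the
    closest previous-frame landmark to curr[i].
    """
    # Pairwise squared distances between current and previous landmarks.
    d2 = ((curr[:, None, :] - prev[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# Toy example: the second frame is the first one slightly shifted,
# so each landmark should match its own predecessor.
prev = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
curr = prev + 0.05  # small inter-frame motion
print(match_landmarks(prev, curr))  # → [0 1 2]
```

Real systems replace this nearest-neighbour step with learned, appearance-aware tracking, but the principle of exploiting small inter-frame motion is the same.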

Jointly modelling fine facial geometry, appearance, and temporal dynamics in a data-driven manner allows a model to learn the mapping from 2D video frames to a corresponding 4D facial model, leveraging the capacity of deep neural networks to reconstruct high-quality dynamic facial models. This project will develop computational methods for high-fidelity 4D facial tracking from video for social signal analysis in social interaction scenarios. It involves developing computational models that reconstruct 4D facial detail, capturing both geometric facial expression changes and temporally coherent facial dynamics, and analysing the social signals conveyed by these dynamic facial behaviours.
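To make the idea of temporally coherent 4D fitting concrete, the sketch below (illustrative only; the linear blendshape basis, the variable names, and the smoothness weight `lam` are assumptions, not the project's actual objective) combines a per-frame geometric data term with a penalty on frame-to-frame jitter in the model coefficients:

```python
import numpy as np

def reconstruction_loss(pred_coeffs, target_verts, blendshapes, mean_shape, lam=0.1):
    """Toy objective for temporally coherent 4D face fitting.

    pred_coeffs:  (T, K) per-frame blendshape coefficients.
    target_verts: (T, V, 3) per-frame target vertex positions.
    blendshapes:  (K, V, 3) linear shape basis.
    mean_shape:   (V, 3) neutral face.
    Returns data term + lam * temporal smoothness term.
    """
    # Linear blendshape model: shape_t = mean + sum_k c_{t,k} * B_k
    shapes = mean_shape[None] + np.einsum('tk,kvd->tvd', pred_coeffs, blendshapes)
    # Geometric data term: how well each frame's shape fits its target.
    data = ((shapes - target_verts) ** 2).mean()
    # Temporal term: penalise frame-to-frame jitter in the coefficients.
    smooth = ((pred_coeffs[1:] - pred_coeffs[:-1]) ** 2).mean()
    return data + lam * smooth

# Sanity check: zero coefficients fitting the neutral face give zero loss.
T, K, V = 4, 2, 5
mean_shape = np.zeros((V, 3))
basis = np.ones((K, V, 3))
coeffs = np.zeros((T, K))
targets = np.tile(mean_shape, (T, 1, 1))
print(reconstruction_loss(coeffs, targets, basis, mean_shape))  # → 0.0
```

In a deep-learning pipeline, `pred_coeffs` would be regressed from video frames by a network and this kind of objective would be minimised by gradient descent; the temporal term is one simple way to encourage the coherent facial dynamics the project targets.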