High-Fidelity 3D Facial Reconstruction for Social Signal Understanding
Supervisors:
Professor Hui Yu, MVLS, School of Psychology and Neuroscience
Dr Tanya Guha, CoSE/School of Computing Science
Professor Rachael E. Jack, MVLS, School of Psychology and Neuroscience
Project Summary:
Human faces convey a wealth of social and emotional information: facial expressions often signal our internal emotional states, while the shape, colour, and texture of a face can betray age, sex, and ethnicity. As a highly salient source of social information, faces are integral to shaping social communication and interaction. Faces in video can be viewed as a temporal sequence of facial images with intrinsic dynamic changes, and establishing correspondences between faces in different frames is essential for tracking and reconstructing faces from video. Jointly modelling fine facial geometry and appearance in a data-driven manner allows a model to learn the relationship between a single 2D face image and its corresponding 3D face model, and thus, by leveraging the high capacity of deep neural networks, to reconstruct a high-quality 3D face model from that image.
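The data-driven mapping described above is typically built on a linear 3D morphable model (3DMM), in which a face shape is a mean shape plus offsets along identity and expression bases, and a network regresses the basis coefficients from an image. A minimal sketch of the model itself, using synthetic bases and fitting coefficients from observed vertices by least squares rather than from pixels (all names, sizes, and bases here are illustrative assumptions, not the project's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3DMM: n_verts vertices, small identity/expression bases (illustrative sizes).
n_verts, n_id, n_exp = 50, 8, 4
mean_shape = rng.normal(size=(3 * n_verts,))      # flattened (x, y, z) mean face
id_basis = rng.normal(size=(3 * n_verts, n_id))   # identity principal components
exp_basis = rng.normal(size=(3 * n_verts, n_exp)) # expression blendshape directions

def reconstruct(alpha, beta):
    """Linear 3DMM: shape = mean + identity offsets + expression offsets."""
    return mean_shape + id_basis @ alpha + exp_basis @ beta

# Ground-truth coefficients for a synthetic target face.
alpha_true = rng.normal(size=n_id)
beta_true = rng.normal(size=n_exp)
target = reconstruct(alpha_true, beta_true)

# Fitting: recover the coefficients from observed vertices by linear least
# squares (a deep network would instead regress them from image pixels).
A = np.hstack([id_basis, exp_basis])
coeffs, *_ = np.linalg.lstsq(A, target - mean_shape, rcond=None)
alpha_hat, beta_hat = coeffs[:n_id], coeffs[n_id:]
```

In a learned pipeline the least-squares step is replaced by a CNN that maps pixels directly to `(alpha, beta)` plus pose and appearance parameters; the linear decoder above stays the same.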
This project will investigate computational methods for high-fidelity 3D facial tracking in video for social signal analysis in social interaction scenarios. It involves developing computational models that reconstruct fine 3D facial detail, capture geometric changes in facial expression, and analyse the resulting social signals.
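When tracking across frames, the temporal correlation between faces can be exploited to stabilise independently fitted per-frame model coefficients. A minimal sketch of one common choice, exponential smoothing of coefficient trajectories (the decay value and constant-face test signal are illustrative assumptions):

```python
import numpy as np

def smooth_coefficients(per_frame, decay=0.8):
    """Exponentially smooth per-frame model coefficients to reduce jitter.

    per_frame: (n_frames, n_coeffs) array of independently fitted coefficients.
    decay: weight on the previous smoothed estimate (illustrative default).
    """
    smoothed = np.empty_like(per_frame)
    smoothed[0] = per_frame[0]
    for t in range(1, len(per_frame)):
        smoothed[t] = decay * smoothed[t - 1] + (1.0 - decay) * per_frame[t]
    return smoothed

# Noisy per-frame estimates scattered around a constant true coefficient vector.
rng = np.random.default_rng(1)
true_coeffs = np.array([0.5, -1.0, 2.0])
noisy = true_coeffs + 0.3 * rng.normal(size=(200, 3))
smoothed = smooth_coefficients(noisy)
```

In practice a Kalman filter or a learned temporal model would replace this fixed filter, trading some expression responsiveness for stability.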