A video prediction approach for animating single face image

Yong Zhao, Meshia Cédric Oveneke, Dongmei Jiang, Hichem Sahli

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)


Generating dynamic 2D image-based facial expressions is a challenging task in facial animation. Much research has focused on performance-driven facial animation from given videos or images of a target face, whereas animating a single face image driven by emotion labels is a less explored problem. In this work, we treat animating a single face image from emotion labels as a conditional video prediction problem and propose a novel framework combining a factored conditional restricted Boltzmann machine (FCRBM) and a reconstruction contractive auto-encoder (RCAE). A modified RCAE, with an associated efficient training strategy, extracts low-dimensional features and reconstructs face images. The FCRBM serves as the animator, predicting the facial expression sequence in the feature space given discrete emotion labels and a frontal neutral face image as input. Quantitative and qualitative evaluations on two facial expression databases, together with comparisons to the state of the art, demonstrate the effectiveness of the proposed framework for animating a frontal neutral face image from given emotion labels.
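The abstract's pipeline can be sketched at a high level: encode a neutral face into a low-dimensional feature vector, roll out a feature sequence conditioned on an emotion label, and decode each feature back to an image. The following is a minimal illustrative sketch, not the authors' implementation: all weights are random stand-ins for the trained RCAE and FCRBM, the mean-field one-step predictor replaces proper FCRBM inference, and every name, dimension, and function here is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
IMG_DIM, FEAT_DIM, N_EMOTIONS, N_FRAMES = 64 * 64, 32, 6, 10

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# --- Auto-encoder (random stand-in for the trained RCAE) ---
W_enc = rng.normal(scale=0.01, size=(FEAT_DIM, IMG_DIM))
W_dec = rng.normal(scale=0.01, size=(IMG_DIM, FEAT_DIM))

def encode(image):
    # Map a flattened face image to a low-dimensional feature vector.
    return sigmoid(W_enc @ image)

def decode(feature):
    # Reconstruct a flattened face image from a feature vector.
    return sigmoid(W_dec @ feature)

# --- Animator (random stand-in for the trained FCRBM) ---
# One linear map per conditioning input: previous feature and emotion label.
A_past = rng.normal(scale=0.1, size=(FEAT_DIM, FEAT_DIM))
A_emo = rng.normal(scale=0.1, size=(FEAT_DIM, N_EMOTIONS))

def predict_next(feature, emotion_onehot):
    # Simplified mean-field prediction of the next feature vector,
    # conditioned on the previous feature and the emotion label.
    return sigmoid(A_past @ feature + A_emo @ emotion_onehot)

def animate(neutral_image, emotion_id, n_frames=N_FRAMES):
    """Generate a sequence of predicted face images from one neutral face."""
    emotion = np.eye(N_EMOTIONS)[emotion_id]
    feat = encode(neutral_image)
    frames = []
    for _ in range(n_frames):
        feat = predict_next(feat, emotion)
        frames.append(decode(feat))
    return np.stack(frames)

frames = animate(rng.random(IMG_DIM), emotion_id=3)
print(frames.shape)  # (10, 4096)
```

The key design point the sketch mirrors is that prediction happens entirely in the learned feature space: the animator never touches pixels, and the decoder turns each predicted feature vector back into a frame.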

Original language: English
Pages (from-to): 16389–16410
Number of pages: 22
Journal: Multimedia Tools and Applications
Issue number: 12
Publication status: Published - Jun 2019


Keywords:
  • Emotion
  • Facial expression animation
  • Image-based
  • Reconstruction contractive auto-encoder

