Iterative human facial expression modeling

  • Antai Peng
  • Monson H. Hayes
Session CG3b — Human Models
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1024)


Human facial expression modeling and synthesis has recently become a very active area of research, driven in part by its potential applications in model-based image coding and by the possibility of using it to enhance human-computer interaction. Most work in this area has focused on facial expression analysis, modeling, and synthesis; although good results have been obtained in analysis and synthesis, relatively little effort has been devoted to synthesizing facial images that look natural. In this paper, we describe our research on facial expression modeling and synthesis and propose an iterative framework that uses a genetic algorithm to synthesize natural-looking facial images. Facial expression representation and distortion measures are also discussed, and preliminary results are presented.
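The abstract describes an iterative framework in which a genetic algorithm searches for expression parameters that minimize a distortion measure. As an illustration of that general idea only — the paper's actual facial-expression parameterization and distortion measure are not given here, so a toy sum-of-squared-differences distortion against a stand-in target vector is assumed — a generic genetic search might be sketched as:

```python
import random

def distortion(candidate, target):
    # Stand-in distortion measure: sum of squared parameter differences.
    # (The paper's actual distortion measure is not specified here.)
    return sum((c - t) ** 2 for c, t in zip(candidate, target))

def genetic_search(target, pop_size=40, generations=100,
                   mutation_rate=0.2, seed=0):
    """Minimize distortion(·, target) over parameter vectors in [0, 1]."""
    rng = random.Random(seed)
    dim = len(target)
    # Initial population: random parameter vectors.
    population = [[rng.random() for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness (lower distortion is better).
        population.sort(key=lambda ind: distortion(ind, target))
        survivors = population[: pop_size // 2]   # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, dim)           # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(dim):                  # per-gene Gaussian mutation
                if rng.random() < mutation_rate:
                    child[i] += rng.gauss(0, 0.05)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda ind: distortion(ind, target))
```

In the paper's setting, the candidate vector would encode facial expression parameters and the distortion would compare a synthesized face against the target image; the iteration above simply shows how selection, crossover, and mutation drive that distortion down over generations.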





Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Antai Peng ¹
  • Monson H. Hayes ¹
  1. School of Electrical and Computer Engineering, Georgia Tech, Atlanta
