Facial Expression Transformations for Expression-Invariant Face Recognition

  • Hyung-Soo Lee
  • Daijin Kim
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4291)


This paper presents a method of expression-invariant face recognition that transforms an input face image with an arbitrary expression into its corresponding neutral-expression image. When a new face image with an arbitrary expression is queried, it is represented as a feature vector using the active appearance model (AAM). The facial expression state of the queried feature vector is then identified by a facial expression recognizer, and the vector is transformed into a neutral-expression vector according to the identified state via either a direct or an indirect facial expression transformation: the former applies the bilinear transformation directly to the feature vector, whereas the latter uses the bilinear transformation to obtain relative expression parameters and transforms the expression indirectly through them. The neutral-expression vector is then converted into a neutral-expression image via AAM reconstruction, and face recognition is finally performed by distance-based matching. Experimental results show that the proposed expression-invariant face recognition method is robust under a variety of facial expressions.
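The indirect route rests on a style/content bilinear model: the expression acts as a style factor and the identity as a content factor, so the identity parameters recovered from an expressive face can be re-synthesized under the neutral expression and matched against a gallery. The following is a minimal NumPy sketch of that idea under a symmetric bilinear model; the interaction tensor `W`, the dimensions, the one-hot expression parameters, and the least-squares identity estimate are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_expr, n_id_dim, n_feat = 3, 4, 10  # expressions (styles), identity-parameter dim, AAM feature dim
# Hypothetical learned bilinear interaction tensor: y_k = sum_ij a_i * b_j * W[i, j, k]
W = rng.standard_normal((n_expr, n_id_dim, n_feat))
A = np.eye(n_expr)  # one-hot expression (style) parameters; row 0 = neutral

def synthesize(a, b):
    """Symmetric bilinear synthesis of an AAM-style feature vector."""
    return np.einsum('i,ijk,j->k', a, W, b)

# Query: a face whose expression the recognizer has identified, identity unknown.
b_true = rng.standard_normal(n_id_dim)
a_query = A[2]                       # expression recognizer output: expression 2
y_query = synthesize(a_query, b_true)

# Recover identity (content) parameters by least squares for the known expression,
# then re-synthesize the feature vector under the neutral expression.
M = np.einsum('i,ijk->jk', a_query, W)          # (n_id_dim, n_feat) linear map b -> y
b_est, *_ = np.linalg.lstsq(M.T, y_query, rcond=None)
y_neutral = synthesize(A[0], b_est)             # neutral-expression feature vector

# Distance-based matching against a neutral-expression gallery.
gallery = rng.standard_normal((6, n_feat))
gallery[3] = synthesize(A[0], b_true)           # the true identity's neutral face
match = int(np.argmin(np.linalg.norm(gallery - y_neutral, axis=1)))
print(match)  # → 3
```

In practice the identity estimate would come from the paper's ridge-regressive bilinear fitting rather than plain least squares, and `y_neutral` would be passed through AAM reconstruction to produce an image before matching.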


Keywords: Facial Expression, Face Recognition, Near Neighbor, Facial Expression Recognition, Bilinear Model
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




References

  1. Kanade, T.: Picture processing by computer complex and recognition of human face. PhD thesis, Kyoto University (1973)
  2. Liu, Y., Schmidt, K., Cohn, J., Mitra, S.: Facial asymmetry quantification for expression invariant human identification. Computer Vision and Image Understanding 91, 138–159 (2003)
  3. Elad, A., Kimmel, R.: On bending invariant signatures for surfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence 25, 1285–1295 (2001)
  4. Wang, H., Ahuja, N.: Facial expression decomposition. In: Proc. of IEEE International Conference on Computer Vision, pp. 958–964 (2003)
  5. Li, X., Mori, G., Zhang, H.: Expression-invariant face recognition with expression classification. In: Third Canadian Conference on Computer and Robot Vision (to appear, 2006)
  6. Abboud, B., Davoine, F.: Face appearance factorization for expression analysis and synthesis. In: Proc. of Workshop on Image Analysis for Multimedia Interactive Services (2004)
  7. Zhou, C., Lin, X.: Facial expressional image synthesis controlled by emotional parameters. Pattern Recognition Letters 26, 2611–2627 (2005)
  8. Tenenbaum, J., Freeman, W.: Separating style and content with bilinear models. Neural Computation 12, 1247–1283 (2000)
  9. Lee, H.S., Shin, D., Kim, D.: Ridge regressive bilinear model for robust face recognition. In: International Conference on Ubiquitous Robots and Ambient Intelligence (submitted, 2006)
  10. Sung, J.W., Kim, D.: A real-time facial expression recognition using the STAAM. In: Proc. of International Conference on Pattern Recognition (2006)
  11. Kanade, T., Cohn, J., Tian, Y.L.: Comprehensive database for facial expression analysis. In: Proc. of International Conference on Automatic Face and Gesture Recognition, pp. 46–53 (2000)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Hyung-Soo Lee (1)
  • Daijin Kim (1)
  1. Department of Computer Science and Engineering, Pohang University of Science and Technology
