Reconstruction and Recognition of Occluded Facial Expressions Using PCA

  • Howard Towner
  • Mel Slater
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4738)

Abstract

Three methods for reconstructing incomplete facial expressions using principal component analysis are described: projection to the model plane, single component projection, and replacement by the conditional mean, with the facial expressions represented by feature points. One method is shown to give better reconstruction accuracy than the others. This method is then applied to a systematic reconstruction problem: reconstructing the occluded top and bottom halves of faces. The results indicate that occluded-top expressions can be reconstructed with little loss of expression recognition accuracy, while occluded-bottom expressions are reconstructed less accurately but still support recognition rates comparable to human facial expression recognition.
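The replacement-by-the-conditional-mean method mentioned in the abstract can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' feature points or experimental setup: the dimensions, occlusion pattern, and data model are all hypothetical. Missing coordinates are filled with their Gaussian conditional mean given the observed coordinates, using the mean and covariance estimated from complete training vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for facial feature-point vectors: 200 training
# samples of dimension 10, generated from a low-rank (correlated) model
# so that observed coordinates carry information about missing ones.
n, d = 200, 10
latent = rng.normal(size=(n, 3))
W = rng.normal(size=(3, d))
X = latent @ W + 0.1 * rng.normal(size=(n, d))

mu = X.mean(axis=0)            # training mean
Sigma = np.cov(X, rowvar=False)  # training covariance

def conditional_mean_fill(x, observed):
    """Fill missing entries of x with their conditional mean given the
    observed entries, under the Gaussian model (mu, Sigma):
    x_m = mu_m + S_mo @ S_oo^{-1} @ (x_o - mu_o)."""
    obs = np.asarray(observed, dtype=bool)
    miss = ~obs
    S_oo = Sigma[np.ix_(obs, obs)]
    S_mo = Sigma[np.ix_(miss, obs)]
    x_filled = x.copy()
    x_filled[miss] = mu[miss] + S_mo @ np.linalg.solve(S_oo, x[obs] - mu[obs])
    return x_filled

# Simulate an "occluded bottom half": hide the last d//2 coordinates
# of a test vector, then reconstruct them.
x_true = X[0]
observed = np.zeros(d, dtype=bool)
observed[: d // 2] = True
x_rec = conditional_mean_fill(x_true, observed)

err_cond = np.linalg.norm(x_rec[~observed] - x_true[~observed])
err_mean = np.linalg.norm(mu[~observed] - x_true[~observed])
```

Because the coordinates are correlated, the conditional mean typically reconstructs the hidden half far better than simply substituting the unconditional training mean, which is the intuition behind comparing this method against simpler projection-based alternatives.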

Keywords

Support Vector Machine, Facial Expression, Feature Point, Asperger Syndrome, Expression Recognition

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Howard Towner¹
  • Mel Slater¹
  1. Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, UK