Facial Expression Recognition Using Spatiotemporal Boosted Discriminatory Classifiers

  • Stephen Moore
  • Eng Jon Ong
  • Richard Bowden
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6111)

Abstract

This paper introduces a novel approach to facial expression recognition in video sequences. Low-cost contour features are introduced to effectively describe the salient features of the face. TemporalBoost is used to build classifiers that allow temporal information to be exploited for more robust recognition. Weak classifiers are formed by assembling edge fragments with chamfer scores, and detection is efficient because each weak classifier is evaluated with a simple look-up into a chamfer image. An ensemble framework with all-pairs binary classifiers is presented, and an error-correcting support vector machine (SVM) is used for final classification. The result of this research is a six-class classifier (joy, surprise, fear, sadness, anger and disgust) with recognition rates of up to 95%. Extensive experiments on the Cohn-Kanade database illustrate that this approach is effective for facial expression analysis.
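The chamfer-scoring step described above can be illustrated with a minimal sketch: a distance transform of the image's edge map is precomputed once, and each edge-fragment weak classifier is then scored by cheap per-pixel look-ups into that chamfer image. The grid size, edge positions, fragment and threshold below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of chamfer-score evaluation via a precomputed chamfer image.
# All concrete values (grid size, edges, threshold) are hypothetical.

def distance_transform(edges, h, w):
    """Brute-force Euclidean distance transform: for every pixel, the
    distance to the nearest edge pixel. Computed once per image."""
    return [[min(((r - er) ** 2 + (c - ec) ** 2) ** 0.5
                 for er, ec in edges)
             for c in range(w)]
            for r in range(h)]

def chamfer_score(fragment, dt):
    """Mean distance from each fragment pixel to the nearest image edge.
    Evaluating a weak classifier reduces to this look-up and average."""
    return sum(dt[r][c] for r, c in fragment) / len(fragment)

# Toy 8x8 example: image edges along the main diagonal,
# and a short edge fragment offset one pixel from it.
image_edges = [(i, i) for i in range(8)]
dt = distance_transform(image_edges, 8, 8)
fragment = [(2, 3), (3, 4), (4, 5)]

score = chamfer_score(fragment, dt)   # each pixel is distance 1 from an edge
fires = score < 1.5                   # weak classifier fires below a learned threshold
```

A low chamfer score means the fragment aligns well with edges in the image; boosting then selects and weights such fragments into a strong classifier.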


References

  1. Barrow, H.G., Tenenbaum, J.M., Bolles, R.C., Wolf, H.C.: Parametric correspondence and chamfer matching: two new techniques for image matching. In: IJCAI 1977: Proceedings of the 5th International Joint Conference on Artificial Intelligence, pp. 659–663. Morgan Kaufmann Publishers Inc., San Francisco (1977)
  2. Bassili, J.N.: Emotion recognition: the role of facial movement and the relative importance of upper and lower areas of the face. Journal of Personality and Social Psychology 37(11), 2049–2058 (1979)
  3. Canny, J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986)
  4. Chang, Y., Hu, C., Turk, M.: Probabilistic expression analysis on manifolds. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 520–527 (2004)
  5. Cohen, I., Garg, A., Huang, T.S.: Emotion recognition from facial expressions using multilevel HMM. In: Neural Information Processing Systems (2000)
  6. Crammer, K., Singer, Y.: On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research 2, 265–292 (2001)
  7. Dietterich, T.G.: Ensemble methods in machine learning. In: Kittler, J., Roli, F. (eds.) MCS 2000. LNCS, vol. 1857, pp. 1–15. Springer, Heidelberg (2000)
  8. Ekman, P., Friesen, W.V.: Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 124–129 (1971)
  9. Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: International Conference on Machine Learning, pp. 148–156 (1996)
  10. Gavrila, D.: Pedestrian detection from a moving vehicle. In: Vernon, D. (ed.) ECCV 2000. LNCS, vol. 1843, pp. 37–49. Springer, Heidelberg (2000)
  11. Kanade, T., Tian, Y., Cohn, J.F.: Comprehensive database for facial expression analysis. In: FG 2000: Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition 2000, Washington, DC, USA, p. 46. IEEE Computer Society, Los Alamitos (2000)
  12. Mignault, A., Chaudhuri, A.: The many faces of a neutral face: Head tilt and perception of dominance and emotion. Journal of Nonverbal Behavior 27(2), 111–132 (2003)
  13. Moore, S., Bowden, R.: Automatic facial expression recognition using boosted discriminatory classifiers. In: Zhou, S.K., Zhao, W., Tang, X., Gong, S. (eds.) AMFG 2007. LNCS, vol. 4778, pp. 71–83. Springer, Heidelberg (2007)
  14. Oliver, N., Pentland, A., Bérard, F.: LAFTER: Lips and face real time tracker with facial expression recognition. In: Proc. CVPR, pp. 123–129 (1997)
  15. Petridis, S., Pantic, M.: Audiovisual laughter detection based on temporal features. In: IMCI 2008: Proceedings of the 10th International Conference on Multimodal Interfaces, pp. 37–44. ACM, New York (2008)
  16. Shan, C.F., Gong, S.G., McOwan, P.W.: Dynamic facial expression recognition using a Bayesian temporal manifold model. In: BMVC 2006, pp. 297–306 (2006)
  17. Sheerman-Chase, T., Ong, E.-J., Bowden, R.: Feature selection of facial displays for detection of non verbal communication in natural conversation. In: IEEE International Workshop on Human-Computer Interaction, Kyoto (October 2009)
  18. Smith, P., da Vitoria Lobo, N., Shah, M.: TemporalBoost for event recognition. In: ICCV 2005: Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV 2005), Washington, DC, USA, vol. 1, pp. 733–740. IEEE Computer Society, Los Alamitos (2005)
  19. Yang, P., Liu, Q.S., Cui, X.Y., Metaxas, D.N.: Facial expression recognition using encoded dynamic features, pp. 1–8 (2008)
  20. Tian, Y., Kanade, T., Cohn, J.: Facial expression analysis. In: Handbook of Face Recognition, ch. 11, pp. 247–275. Springer, Heidelberg (2005)
  21. Zhao, G., Pietikainen, M.: Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Trans. Pattern Anal. Mach. Intell. 29(6), 915–928 (2007)

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Stephen Moore 1
  • Eng Jon Ong 1
  • Richard Bowden 1

  1. Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
