Dynamic Visual Time Context Descriptors for Automatic Human Expression Classification

  • Conference paper
Foundations and Practical Applications of Cognitive Systems and Information Processing

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 215))

Abstract

In this paper, we propose two fast dynamic descriptors, Vertical-Time-Backward (VTB) and Vertical-Time-Forward (VTF), defined on the spatio-temporal domain to capture the cues of essential facial movements. These dynamic descriptors are used in a two-step system that recognizes human facial expressions in image sequences: the system first classifies individual static images and then identifies the whole sequence. By combining the visual-time context features with the popular LBP features, the system can efficiently recognize the expression in a single image, which is especially helpful for highly ambiguous ones. In the second step, the class of the whole sequence is predicted from the weighted probabilities of all frames. The experiments were performed on 348 sequences from 95 subjects in the Cohn–Kanade database and achieved recognition rates as high as 97.6% in seven-class recognition for frames and 95.7% in six-class recognition for sequences.
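The second step described above, predicting the sequence class from the weighted probabilities of all frames, can be sketched as follows. The abstract does not specify the exact weighting scheme, so this minimal illustration assumes a linear ramp that gives later frames (nearer the expression apex) more weight; the function name `classify_sequence` and the ramp are assumptions, not the authors' implementation.

```python
import numpy as np

def classify_sequence(frame_probs, weights=None):
    """Fuse per-frame class probabilities into one sequence-level label.

    frame_probs: (T, C) array of per-frame class probabilities,
                 e.g. classifier posteriors for T frames and C classes.
    weights:     optional (T,) array of per-frame weights; if omitted,
                 an assumed linear ramp toward the apex frame is used.
    Returns (predicted class index, fused probability vector).
    """
    frame_probs = np.asarray(frame_probs, dtype=float)
    T = frame_probs.shape[0]
    if weights is None:
        # Assumed scheme: later frames weigh more (expression apex).
        weights = np.linspace(1.0, 2.0, T)
    weights = weights / weights.sum()          # normalize to sum to 1
    seq_prob = weights @ frame_probs           # weighted average over frames
    return int(np.argmax(seq_prob)), seq_prob
```

For example, with three frames whose probabilities drift toward the second class, the fused vector favors that class because the later (heavier) frames dominate the weighted average.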



Acknowledgments

This work was supported by the National Natural Science Foundation of China (61170124 and 61272258), the Jiangsu Province Natural Science Foundation (BK2009116), and the Basic and Applied Research Program of Suzhou City (SYG201116).

Author information

Corresponding author

Correspondence to Yi Ji.


Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ji, Y., Gong, S., Liu, C. (2014). Dynamic Visual Time Context Descriptors for Automatic Human Expression Classification. In: Sun, F., Hu, D., Liu, H. (eds) Foundations and Practical Applications of Cognitive Systems and Information Processing. Advances in Intelligent Systems and Computing, vol 215. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37835-5_28

  • DOI: https://doi.org/10.1007/978-3-642-37835-5_28

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-37834-8

  • Online ISBN: 978-3-642-37835-5

  • eBook Packages: Engineering (R0)
