
Hidden Markov Models for Modeling Occurrence Order of Facial Temporal Dynamics

  • Khadoudja Ghanem
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8192)

Abstract

The analysis of facial expression temporal dynamics is of great importance for many real-world applications. Moreover, due to variability among individuals and across contexts, the dynamic relationships among facial features are stochastic. Systematically capturing such temporal dependencies among facial features and incorporating them into the facial expression recognition process is especially important for interpreting and understanding facial behavior. The base system in this paper uses Hidden Markov Models (HMMs) and a new set of features derived from geometrical distances between detected and automatically tracked facial points. We propose to transform the numerical representation, which takes the form of multiple time series, into a symbolic representation in order to reduce dimensionality, extract the most pertinent information, and provide a representation that is meaningful to humans. Experiments show that the proposed approach yields new and interesting results.
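To make the described pipeline concrete, the following is a minimal sketch, not the authors' implementation, of the two ideas in the abstract: a geometric-distance time series from tracked facial points is discretized into a symbolic sequence, and the resulting symbols are scored with a discrete HMM. The quantile-based discretization, the number of symbols, and the 2-state HMM parameters are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation): symbolize a distance
# time series and score the symbol sequence with a discrete HMM.
import numpy as np

def symbolize(series, n_levels=3):
    """Quantize one distance time series into symbols 0..n_levels-1
    using equal-frequency (quantile) bins -- an assumed discretization."""
    edges = np.quantile(series, np.linspace(0, 1, n_levels + 1)[1:-1])
    return np.digitize(series, edges)

def forward_log_likelihood(obs, log_pi, log_A, log_B):
    """Log-likelihood of a symbol sequence under a discrete HMM
    (pi: initial probs, A: transitions, B: emissions) via the forward algorithm."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

# Toy usage: one tracked distance (e.g., mouth-corner distance) over 20 frames,
# rising from a neutral level to an apex level, with a little noise.
distance = np.concatenate([np.full(8, 1.0), np.linspace(1.0, 2.0, 6), np.full(6, 2.0)])
distance += 0.02 * np.random.default_rng(0).standard_normal(distance.size)
symbols = symbolize(distance)  # onset/apex pattern encoded as a short symbol string

# Hypothetical 2-state HMM (neutral -> apex) over 3 symbols; parameters are made up.
pi = np.array([0.9, 0.1])
A = np.array([[0.80, 0.20],
              [0.05, 0.95]])
B = np.array([[0.70, 0.25, 0.05],
              [0.05, 0.25, 0.70]])
score = forward_log_likelihood(symbols, np.log(pi), np.log(A), np.log(B))
print(symbols, score)
```

In a full system along these lines, one symbol stream would presumably be produced per tracked distance, one model trained per expression (or per temporal segment), and a new sequence labeled by the highest-likelihood model.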

Keywords

Facial expression · HMM · Occurrence order · Time series



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Khadoudja Ghanem
  1. MISC Laboratory, University Constantine 2, Constantine, Algeria
