
Summary

Speech is a common input modality in current dialogue systems, but it is only one of the modalities humans use. In human-human interaction, people also point with gestures and show their mood through facial expressions. To give modern systems access to the information carried by all of these modalities, such systems need multimodal user interfaces. The SmartKom system has such a multimodal interface, analyzing facial expression, speech, and gesture simultaneously. Here we present the module that performs facial expression analysis in order to identify the internal state of the user.

In the following, we first describe the state of the art in emotion and user-state recognition from facial expressions. Next, we describe the facial expression recognition module. We then present experiments and results for the recognition of user states, and summarize our findings in the final section.
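To make the classification task concrete, the sketch below shows one common approach to assigning user-state labels to face images: projecting each face onto a low-dimensional eigenface (PCA) space and labeling it with a nearest-neighbour classifier. This is only an illustrative assumption for this summary; the synthetic data, dimensions, and classifier are placeholders, and the module described in the chapter may use different features, preprocessing, and classification.

    # Illustrative sketch only: a generic eigenface-style pipeline for
    # classifying face crops into user states (e.g., neutral, joy, anger).
    # All data here is synthetic; a real system would use face crops
    # extracted from video frames by a face detector.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    n_train, n_test, h, w = 200, 20, 64, 64

    # Grayscale face images flattened to feature vectors.
    X_train = rng.random((n_train, h * w))
    y_train = rng.integers(0, 3, n_train)   # three hypothetical user-state labels
    X_test = rng.random((n_test, h * w))

    # Project faces onto a low-dimensional eigenface space (PCA).
    pca = PCA(n_components=30).fit(X_train)
    Z_train = pca.transform(X_train)
    Z_test = pca.transform(X_test)

    # Label each projected face by its nearest neighbours in eigenface space.
    clf = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
    predicted_states = clf.predict(Z_test)
    print(predicted_states)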




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Frank, C. et al. (2006). The Facial Expression Module. In: Wahlster, W. (eds) SmartKom: Foundations of Multimodal Dialogue Systems. Cognitive Technologies. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36678-4_11


  • DOI: https://doi.org/10.1007/3-540-36678-4_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-23732-7

  • Online ISBN: 978-3-540-36678-2

  • eBook Packages: Computer Science, Computer Science (R0)
