
Multiple Classifier Systems for the Recognition of Human Emotions

  • Friedhelm Schwenker
  • Stefan Scherer
  • Miriam Schmidt
  • Martin Schels
  • Michael Glodek
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5997)

Abstract

Research in human-computer interaction (HCI) increasingly addresses the integration of some form of emotional intelligence into the system. Such systems must be able to recognize, interpret, and create emotions. Although human emotions are expressed through different modalities such as speech, facial expressions, and hand or body gestures, most research in affective computing has concentrated on unimodal emotion recognition. In principle, a multimodal approach to emotion recognition should be more accurate and more robust against missing or noisy data. In this study we consider multiple classifier systems for the classification of facial expressions, and additionally present a prototype of an audio-visual laughter detection system. Finally, a novel implementation of a Java process engine for pattern recognition and information fusion is described.
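
The fusion idea at the heart of a multiple classifier system can be illustrated with decision-level (late) fusion: each modality-specific classifier emits a vector of class posteriors, and the final decision is taken over a weighted combination of those vectors. The following is a minimal Java sketch of such a combiner under that assumption; the class and method names are hypothetical illustrations and do not correspond to the process engine or APIs described in the paper.

    import java.util.List;

    // Minimal sketch of decision-level (late) fusion for a multiple
    // classifier system. Each modality-specific classifier is assumed to
    // emit a vector of class posteriors; the fused decision is the argmax
    // of their weighted average. All names are hypothetical and do not
    // correspond to the process engine described in the paper.
    public final class LateFusion {

        // Weighted average of per-modality posterior vectors,
        // renormalized so the result is again a probability vector.
        public static double[] fuse(List<double[]> posteriors, double[] weights) {
            int numClasses = posteriors.get(0).length;
            double[] fused = new double[numClasses];
            double weightSum = 0.0;
            for (int m = 0; m < posteriors.size(); m++) {
                weightSum += weights[m];
                for (int c = 0; c < numClasses; c++) {
                    fused[c] += weights[m] * posteriors.get(m)[c];
                }
            }
            for (int c = 0; c < numClasses; c++) {
                fused[c] /= weightSum;
            }
            return fused;
        }

        // Index of the most probable class in the fused vector.
        public static int decide(double[] fused) {
            int best = 0;
            for (int c = 1; c < fused.length; c++) {
                if (fused[c] > fused[best]) best = c;
            }
            return best;
        }

        public static void main(String[] args) {
            // Hypothetical posteriors over three emotion classes from an
            // audio classifier and a facial-expression classifier; the
            // video channel is weighted slightly higher here.
            double[] audio = {0.2, 0.5, 0.3};
            double[] video = {0.1, 0.3, 0.6};
            double[] fused = fuse(List.of(audio, video), new double[]{0.4, 0.6});
            System.out.println("fused decision: class " + decide(fused));
        }
    }

A weighted average is only one of many possible combination rules, but it illustrates why a multimodal approach can be robust against missing or noisy data: a degraded modality can be down-weighted rather than forcing the whole observation to be discarded.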

Keywords

Facial Expression · Hidden Markov Model · Gaussian Mixture Model · Emotion Recognition · Emotional Intelligence



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Friedhelm Schwenker¹
  • Stefan Scherer¹
  • Miriam Schmidt¹
  • Martin Schels¹
  • Michael Glodek¹

  1. Institute of Neural Information Processing, University of Ulm, Ulm, Germany
