
A Systematic Comparison of Different HMM Designs for Emotion Recognition from Acted and Spontaneous Speech

  • Johannes Wagner
  • Thurid Vogt
  • Elisabeth André
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4738)

Abstract

In this work we elaborate on the use of hidden Markov models (HMMs) for speech emotion recognition as a dynamic alternative to static modelling approaches. Since previous work in this field has not yet established which HMM design should be preferred for this task, we run a systematic analysis of different HMM configurations. Furthermore, experiments are carried out on an acted and a spontaneous emotion corpus, since little is known about the suitability of HMMs for spontaneous speech. Additionally, we consider two different segmentation levels, namely words and utterances. Results are compared with the outcome of a support vector machine classifier trained on global statistics features. While similar performance was observed for both databases at the utterance level, the HMM-based approach outperformed static classification at the word level. However, deriving general guidelines on which kinds of models are best suited proved rather difficult.
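To make the contrast between the two modelling schemes concrete, the following is a minimal sketch of a per-class GMM-HMM classifier on frame-level feature sequences versus an SVM on global statistics, assuming hmmlearn and scikit-learn. It is not the authors' implementation; feature extraction (e.g. MFCCs, pitch) is assumed to happen elsewhere, and names such as train_seqs, n_states and n_mix are illustrative.

```python
# Illustrative sketch, not the paper's code: dynamic HMM modelling vs.
# static SVM classification on global statistics features.
import numpy as np
from hmmlearn import hmm          # assumed available
from sklearn.svm import SVC       # assumed available

def train_hmms(train_seqs, n_states=5, n_mix=2):
    """Train one GMM-HMM per emotion class.

    train_seqs: dict mapping emotion label -> list of (T_i, D) frame-level
    feature arrays (hypothetical input format).
    """
    models = {}
    for label, seqs in train_seqs.items():
        X = np.vstack(seqs)                   # concatenate all sequences
        lengths = [len(s) for s in seqs]      # per-sequence frame counts
        m = hmm.GMMHMM(n_components=n_states, n_mix=n_mix,
                       covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify_hmm(models, seq):
    """Pick the class whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(seq))

def global_statistics(seq):
    """Static feature vector: simple statistics over the frame-level features."""
    return np.concatenate([seq.mean(0), seq.std(0), seq.min(0), seq.max(0)])

def train_svm(train_seqs):
    """Train an SVM on word- or utterance-level global statistics."""
    X = [global_statistics(s) for seqs in train_seqs.values() for s in seqs]
    y = [label for label, seqs in train_seqs.items() for _ in seqs]
    clf = SVC(kernel="rbf")
    clf.fit(X, y)
    return clf
```

The segmentation level studied in the paper corresponds here to how the input sequences are cut: one feature sequence per word or per utterance.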

Keywords

Support Vector Machine, Hidden Markov Model, Gaussian Mixture Model, Emotion Recognition, Word Level



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Johannes Wagner (1)
  • Thurid Vogt (1)
  • Elisabeth André (1)
  1. Multimedia Concepts and Applications, Augsburg University, Germany
