Recognizing emotions from human speech

  • Preeti Khanna
  • M. Sasikumar
Conference paper


Emotions colour language and are a necessary ingredient of natural two-way human-to-human communication and interaction. As listeners, we react to the speaker's emotive state and adapt our behaviour depending on the emotions the speaker transmits. Recent technological advances have enabled humans to interact with computers through modalities beyond the traditional keyboard and mouse, such as voice, gesture and facial expression. This interaction, however, still lacks the component of emotion. It is argued that to truly achieve affective human-computer intelligent interaction, the computer must be able to interact naturally with the user, much as human-human interaction takes place. Several studies of classical human interaction and of human-computer interaction have concluded that emotion is an important ingredient of intelligent interaction. A baby learns to recognize emotional information before understanding the semantic information in his or her mother's utterances. We present some basic research in the field of emotion recognition from speech. First we give a brief survey of research on emotive speech; then we discuss a rule-based approach to detecting and classifying human emotion in speech. Five basic emotional categories are considered: anger, happiness, fear, sadness and neutral.
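The abstract does not give the paper's actual rules, but a rule-based classifier of the kind described typically maps coarse prosodic features (pitch, energy, speech rate) to emotion labels with hand-set thresholds. The sketch below is purely illustrative: the feature names and every threshold value are hypothetical assumptions, not taken from the paper.

```python
# Illustrative rule-based emotion classifier over prosodic features.
# All feature names and threshold values are hypothetical, chosen only
# to show the shape of such a rule set; the paper's actual rules are
# not given in the abstract.

def classify_emotion(pitch_mean_hz, pitch_range_hz, energy_db, speech_rate):
    """Map coarse prosodic cues to one of five emotion labels."""
    # High pitch, high energy, fast speech: typical anger cues.
    if pitch_mean_hz > 220 and energy_db > 70 and speech_rate > 5.0:
        return "anger"
    # High pitch and raised energy without the fast rate: happiness.
    if pitch_mean_hz > 220 and energy_db > 65:
        return "happiness"
    # Wide pitch excursions with fast speech: fear.
    if pitch_range_hz > 150 and speech_rate > 5.5:
        return "fear"
    # Low, quiet, slow speech: sadness.
    if pitch_mean_hz < 160 and energy_db < 55 and speech_rate < 3.5:
        return "sadness"
    # Nothing matched: fall back to neutral.
    return "neutral"

print(classify_emotion(250, 80, 75, 5.5))   # high pitch, loud, fast -> anger
print(classify_emotion(140, 40, 50, 3.0))   # low pitch, quiet, slow -> sadness
print(classify_emotion(180, 60, 60, 4.0))   # mid-range cues -> neutral
```

In practice such features would be extracted with a tool like PRAAT (reference 1 in the paper's bibliography), and the thresholds tuned per speaker or corpus.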


Keywords: recognition rate, speech signal, emotion recognition, emotional category, human speech




Copyright information

© Springer India Pvt. Ltd 2011

Authors and Affiliations

  • Preeti Khanna, MPSTME, NMIMS, Mumbai, India
  • M. Sasikumar, CDAC, Kharghar, Navi Mumbai, India
