A Study of Prosodic Features of Emotional Speech

  • X. Arputha Rathina
  • K. M. Mehata
  • M. Ponnavaikko
Part of the Advances in Intelligent and Soft Computing book series (AISC, volume 166)


Speech is a rich source of information, conveying not only what a speaker says but also the speaker’s attitude toward the listener and toward the topic under discussion, as well as the speaker’s current state of mind. Recently, increasing attention has been directed to the study of the emotional content of speech signals, and many systems have accordingly been proposed to identify the emotional content of a spoken utterance.

The focus of this research work is to enhance the man-machine interface by attending to the emotion in a user’s speech. This paper presents the results of a basic analysis of prosodic features and compares the prosodic features of various types and degrees of emotional expression in Tamil speech, based on the auditory impressions of speakers and listeners of both genders. The speech samples consist of “neutral” speech as well as speech with three types of emotion (“anger”, “joy”, and “sadness”) at three degrees (“light”, “medium”, and “strong”). A listening test was also conducted using 300 speech samples uttered by students aged 19–22. The prosodic parameters of the emotional speech, classified according to the auditory impressions of the subjects, are analyzed. The results suggest that the prosodic features that identify emotions and their degrees depend not only on the speaker’s gender but also on the listener’s gender.
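The paper does not include code, but the kind of prosodic parameters it analyzes (fundamental frequency, or pitch, and intensity) can be illustrated with a minimal sketch. The following is not the authors’ method; it is a generic autocorrelation-based F0 estimate and an RMS energy measure, applied here to a synthetic 200 Hz tone standing in for a voiced speech frame:

```python
import math

def estimate_f0_autocorr(samples, sample_rate, f0_min=80.0, f0_max=400.0):
    """Estimate fundamental frequency (Hz) of a short frame via autocorrelation."""
    n = len(samples)
    # Search only lags corresponding to the plausible pitch range.
    lag_min = int(sample_rate / f0_max)
    lag_max = int(sample_rate / f0_min)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

def rms_energy(samples):
    """Root-mean-square energy, a crude correlate of perceived loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Synthetic 200 Hz sine wave at a 16 kHz sampling rate (hypothetical values).
sr = 16000
frame = [math.sin(2 * math.pi * 200 * t / sr) for t in range(1024)]
f0 = estimate_f0_autocorr(frame, sr)   # close to 200 Hz
energy = rms_energy(frame)             # close to 1/sqrt(2) for a unit sine
```

In practice, per-frame F0 and energy values like these are tracked over an utterance to form the pitch and intensity contours whose shapes distinguish emotions such as anger, joy, and sadness.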


Keywords: Vocal Fold · Emotional Content · Speech Sample · Pitch Contour · Emotional Speech
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag GmbH Berlin Heidelberg 2012

Authors and Affiliations

  • X. Arputha Rathina (1)
  • K. M. Mehata (1)
  • M. Ponnavaikko (2)
  1. Department of Computer Science and Engineering, B.S. Abdur Rahman University, Chennai, India
  2. SRM University, Chennai, India
