Motivation and Emotion, Volume 15, Issue 2, pp 123–148

Vocal cues in emotion encoding and decoding

  • Klaus R. Scherer
    • University of Geneva, FPSE
  • Rainer Banse
    • University of Geneva, FPSE
  • Harald G. Wallbott
    • University of Giessen
  • Thomas Goldbeck
    • University of Giessen

DOI: 10.1007/BF00995674

Cite this article as:
Scherer, K.R., Banse, R., Wallbott, H.G., & Goldbeck, T. (1991). Vocal cues in emotion encoding and decoding. Motivation and Emotion, 15(2), 123–148. doi:10.1007/BF00995674

This research examines the correspondence between theoretical predictions about vocal expression patterns in naturally occurring emotions (based on the component process theory of emotion; Scherer, 1986) and empirical data on the acoustic characteristics of actors' portrayals. Two male and two female professional radio actors portrayed anger, sadness, joy, fear, and disgust based on realistic scenarios of emotion-eliciting events. A series of judgment studies was conducted to assess the degree to which judges were able to recognize the intended emotion expressions. Disgust was relatively poorly recognized; average recognition accuracy for the other emotions reached 62.8% across studies. A set of portrayals that attained a satisfactory level of recognition accuracy underwent digital acoustic analysis. The acoustic parameters extracted from the speech signal showed a number of significant differences between emotions, generally confirming the theoretical predictions.

Copyright information

© Plenum Publishing Corporation 1991