Behavior Research Methods, Volume 42, Issue 1, pp. 74–81

Recognizing emotions in spoken language: A validated set of Portuguese sentences and pseudosentences for research on emotional prosody

  • São Luís Castro
  • César F. Lima


A set of semantically neutral sentences and derived pseudosentences was produced by two native European Portuguese speakers, who varied emotional prosody to portray anger, disgust, fear, happiness, sadness, surprise, and neutrality. Accuracy rates and reaction times in a forced-choice identification of these emotions, as well as intensity judgments, were collected from 80 participants, and a database was constructed from the utterances that reached satisfactory accuracy (190 sentences and 178 pseudosentences). High accuracy (mean of 75% correct for sentences and 71% for pseudosentences), rapid recognition, and high intensity judgments were obtained for all the portrayed emotional qualities. Sentences and pseudosentences elicited similar accuracy and intensity ratings, but participants responded to pseudosentences faster than to sentences. This database is a useful tool for research on emotional prosody, including cross-language studies and studies involving Portuguese-speaking participants, and it may also serve clinical purposes in the assessment of brain-damaged patients. The database is available for download.
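The validation step described above (keeping only utterances that reach satisfactory recognition accuracy in a seven-alternative forced-choice task) can be sketched as follows. This is a hypothetical illustration, not the authors' exact procedure: the trial data layout, the `select_utterances` helper, and the threshold of three times chance level are all assumptions for the example.

```python
# Hypothetical sketch: compute per-utterance recognition accuracy from
# forced-choice trials and keep items above a chance-based threshold.
from collections import defaultdict

CHANCE = 1 / 7  # seven response options: six emotions plus neutrality

def select_utterances(trials, threshold=3 * CHANCE):
    """trials: iterable of (utterance_id, intended_emotion, response)."""
    hits = defaultdict(int)
    counts = defaultdict(int)
    for utt, intended, response in trials:
        counts[utt] += 1
        hits[utt] += (response == intended)
    # Retain utterances whose proportion correct meets the threshold.
    return {utt: hits[utt] / counts[utt]
            for utt in counts
            if hits[utt] / counts[utt] >= threshold}

trials = [
    ("s01", "anger", "anger"),
    ("s01", "anger", "anger"),
    ("s01", "anger", "fear"),
    ("s02", "fear", "happiness"),
    ("s02", "fear", "sadness"),
]
print(select_utterances(trials))  # only s01 (2/3 correct) passes
```

The exact inclusion criterion used for the published set (190 sentences, 178 pseudosentences) is specified in the full article; the threshold here merely illustrates the chance-corrected logic.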



Supplementary material

Castro-BRM-2010 (18.6 MB)
Supplementary material, approximately 340 KB.
Castro-BRM-2010 (19.8 MB)
Supplementary material, approximately 340 KB.



Copyright information

© Psychonomic Society, Inc. 2010

Authors and Affiliations

  1. Faculty of Psychology and Education, University of Porto, Porto, Portugal
