Abstract
The Montreal Affective Voices consist of 90 nonverbal affect bursts corresponding to the emotions of anger, disgust, fear, pain, sadness, surprise, happiness, and pleasure (plus a neutral expression), recorded by 10 different actors (5 male, 5 female). Ratings of valence, arousal, and intensity for the eight emotions were collected for each vocalization from 30 participants. Analyses revealed high recognition accuracy for most of the emotional categories (mean of 68%). They also revealed significant effects of both the actors' and the participants' gender: The highest hit rates (75%) were obtained for female participants rating female vocalizations, and the lowest hit rates (60%) for male participants rating male vocalizations. Interestingly, the mixed situations (male participants rating female vocalizations, or female participants rating male vocalizations) yielded similar, intermediate hit rates. The Montreal Affective Voices are available for download at vnl.psy.gla.ac.uk/ (Resources section).
Additional information
This research was supported by grants from the Canadian Institutes of Health Research, the Canadian Foundation for Innovation, and France-Telecom to P.B.
Cite this article
Belin, P., Fillion-Bilodeau, S. & Gosselin, F. The Montreal Affective Voices: A validated set of nonverbal affect bursts for research on auditory affective processing. Behavior Research Methods 40, 531–539 (2008). https://doi.org/10.3758/BRM.40.2.531