Cultural Specific Effects on the Recognition of Basic Emotions: A Study on Italian Subjects

  • Anna Esposito
  • Maria Teresa Riviello
  • Nikolaos Bourbakis
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5889)


The present work reports the results of perceptual experiments aimed at investigating whether some of the basic emotions are perceptually privileged and whether the cultural environment and the perceptual mode play a role in this preference. To this aim, Italian subjects were asked to assess emotional stimuli extracted from Italian and American English movies in the single (either video or audio alone) and the combined audio/video modes. Results showed that anger, fear, and sadness are better perceived than surprise and happiness in both cultural environments (irony, instead, strongly depends on the language), that emotional information is affected by the communication mode, and that language plays a role in assessing emotional information. Implications for the implementation of emotionally colored interactive systems are discussed.
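The comparison described above amounts to tallying, for each emotion and each presentation mode (audio alone, video alone, combined audio/video), the fraction of stimuli that subjects identified correctly. A minimal sketch of that tally, using entirely hypothetical response records rather than the study's actual data, might look like:

```python
from collections import defaultdict

# Hypothetical response records: (emotion presented, presentation mode, recognized correctly?)
# Labels and outcomes below are illustrative only, not the study's data.
responses = [
    ("anger", "audio", True), ("anger", "video", True), ("anger", "audio+video", True),
    ("fear", "audio", True), ("fear", "video", False), ("fear", "audio+video", True),
    ("happiness", "audio", False), ("happiness", "video", True), ("happiness", "audio+video", False),
    ("surprise", "audio", False), ("surprise", "video", False), ("surprise", "audio+video", True),
]

def recognition_rates(records):
    """Return the fraction of correct identifications per (emotion, mode) pair."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for emotion, mode, ok in records:
        totals[(emotion, mode)] += 1
        if ok:
            correct[(emotion, mode)] += 1
    return {key: correct[key] / totals[key] for key in totals}

rates = recognition_rates(responses)
print(rates[("anger", "audio")])  # 1.0 for this toy data
```

Comparing such per-emotion, per-mode rates across the two stimulus languages is what supports conclusions like "anger, fear, and sadness are better perceived than surprise and happiness".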


Keywords: perceptually privileged emotions · cultural specificity · effect of communication modes





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Anna Esposito 1, 3
  • Maria Teresa Riviello 3
  • Nikolaos Bourbakis 2

  1. Department of Psychology, Second University of Naples, Italy
  2. Wright State University, Dayton, Ohio, USA
  3. International Institute for Advanced Scientific Studies (IIASS), Vietri sul Mare, Italy
