Validating a Multilingual and Multimodal Affective Database

  • Juan Miguel López
  • Idoia Cearreta
  • Inmaculada Fajardo
  • Nestor Garay
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4560)


This paper summarizes the process of validating RekEmozio, a multilingual (Spanish and Basque) and multimodal (audio and video) affective database. Fifty-seven participants validated a sample of 2,618 videos of facial expressions and 102 utterances from the database. The results replicated previous findings of no significant differences in recognition rates among emotions. This validation allowed the audio and video material in the database to be classified in terms of the emotional category expressed. These normative data have proven useful both for training affective recognizers and synthesizers and for carrying out empirical studies on emotions by psychologists.


Affective computing · affective resources · user validation · multilingual and multimodal resources · semantics





Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Juan Miguel López (1)
  • Idoia Cearreta (1)
  • Inmaculada Fajardo (2)
  • Nestor Garay (1)
  1. Laboratory of Human-Computer Interaction for Special Needs (LHCISN), Computer Science Faculty, University of the Basque Country, Manuel Lardizabal 1, Donostia - San Sebastián
  2. Cognitive Ergonomics Group, Department of Experimental Psychology, University of Granada, Cartuja Campus, Granada
