Emotional Intelligence: Constructing User Stereotypes for Affective Bi-modal Interaction

  • Efthymios Alepis
  • Maria Virvou
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4251)


Incorporating stereotypes concerning users’ emotional characteristics into affective multi-modal user interfaces is an important need. To this end, we present user stereotypes that we have constructed concerning the emotional behavior of users while they interact with computers. The construction of these stereotypes has been based on an empirical study conducted among different kinds of computer users, examining how they express emotions while interacting with a computer through a keyboard and a microphone in a bi-modal kind of interaction. The stereotypes have been incorporated into a user modeling component underlying an affective bi-modal user interface that uses a microphone and a keyboard. Users’ actions, in terms of what they said and/or typed in the context of typical interaction situations, are interpreted in terms of their feelings so that appropriate affective messages may be generated.
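To illustrate the kind of mechanism the abstract describes, the sketch below shows one plausible way a stereotype's trigger conditions could map bi-modal cues (keyboard and voice) to an inferred emotion. All names, cues, and weights here are invented for illustration; the paper's actual stereotypes and trigger conditions come from its empirical study and are not reproduced here.

```python
# Hypothetical sketch of stereotype-based affect inference.
# Cue names and weights are invented, not taken from the paper.
from dataclasses import dataclass, field


@dataclass
class Stereotype:
    """A user stereotype: trigger conditions mapping observed cues
    to weighted emotion hypotheses."""
    name: str
    triggers: dict = field(default_factory=dict)


def infer_emotion(stereotype, keyboard_cues, voice_cues):
    """Combine evidence from both modalities and return the
    highest-scoring emotion, or 'neutral' if no trigger fires."""
    scores = {}
    for cue in list(keyboard_cues) + list(voice_cues):
        for emotion, weight in stereotype.triggers.get(cue, {}).items():
            scores[emotion] = scores.get(emotion, 0.0) + weight
    return max(scores, key=scores.get) if scores else "neutral"


# Example stereotype for a hypothetical "novice" user category.
novice = Stereotype("novice", triggers={
    "many_backspaces": {"frustration": 0.6, "confusion": 0.4},
    "raised_voice": {"anger": 0.7, "frustration": 0.3},
    "slow_typing": {"confusion": 0.5},
})

print(infer_emotion(novice, ["many_backspaces", "slow_typing"], []))
# -> "confusion" (0.4 + 0.5 outweighs frustration's 0.6)
```

An inferred emotion like this could then drive the selection of an appropriate affective message, as the abstract outlines.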


Keywords: Emotion Recognition, Emotional Intelligence, Trigger Condition, Facial Emotion Recognition, Keyboard Input





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Efthymios Alepis¹
  • Maria Virvou¹
  1. Department of Informatics, University of Piraeus, Piraeus, Greece
