Object Oriented Design for Multiple Modalities in Affective Interaction

  • Efthimios Alepis
  • Maria Virvou
Part of the Intelligent Systems Reference Library book series (ISRL, volume 64)


The purpose of this chapter is to investigate how an object-oriented (OO) architecture can be adapted to cope with multimodal emotion recognition applications with mobile interfaces. A major obstacle in this direction is that mobile phones differ from desktop computers: they cannot perform the demanding processing that emotion recognition requires. To overcome this limitation, in our approach mobile phones transmit all collected data to a server, which is responsible for, among other tasks, performing emotion recognition. The object-oriented architecture that we have created combines evidence from multiple modalities of interaction, namely the mobile device’s keyboard and microphone, as well as data from user stereotypes. All collected information is classified into well-structured objects that have their own properties and methods. The resulting emotion detection platform is capable of processing and re-transmitting information from different mobile sources of multimodal data during human–computer interaction. The interface used as a test bed for the affective mobile interaction is that of an educational m-learning application.
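The architecture described above can be illustrated with a minimal sketch. All class names, heuristics, and scores below are hypothetical illustrations, not the chapter's actual implementation: each modality's evidence is wrapped in its own object (properties plus a method reporting per-emotion scores), and a server-side recognizer fuses the objects it receives from the mobile client.

```java
import java.util.*;

// Hypothetical modality objects: each encapsulates raw evidence (properties)
// and a scoring method, mirroring the chapter's "well-structured objects" idea.
abstract class ModalityEvidence {
    abstract Map<String, Double> emotionScores();
}

class KeyboardEvidence extends ModalityEvidence {
    private final int backspaces, keystrokes;
    KeyboardEvidence(int backspaces, int keystrokes) {
        this.backspaces = backspaces;
        this.keystrokes = keystrokes;
    }
    @Override Map<String, Double> emotionScores() {
        // Toy heuristic: a high correction rate hints at frustration.
        double rate = (double) backspaces / keystrokes;
        Map<String, Double> m = new HashMap<>();
        m.put("frustrated", rate);
        m.put("neutral", 1.0 - rate);
        return m;
    }
}

class AudioEvidence extends ModalityEvidence {
    private final double pitchVariance; // assumed normalised to [0, 1]
    AudioEvidence(double pitchVariance) { this.pitchVariance = pitchVariance; }
    @Override Map<String, Double> emotionScores() {
        Map<String, Double> m = new HashMap<>();
        m.put("frustrated", pitchVariance);
        m.put("neutral", 1.0 - pitchVariance);
        return m;
    }
}

// Server-side fusion: the mobile client ships the evidence objects here,
// and the server combines them (plain summation in this sketch).
class ServerSideRecognizer {
    static String classify(List<ModalityEvidence> evidence) {
        Map<String, Double> sums = new HashMap<>();
        for (ModalityEvidence e : evidence)
            e.emotionScores().forEach((k, v) -> sums.merge(k, v, Double::sum));
        return sums.entrySet().stream()
                   .max(Map.Entry.comparingByValue()).get().getKey();
    }
    public static void main(String[] args) {
        List<ModalityEvidence> batch = List.of(
            new KeyboardEvidence(12, 40),  // 30% corrections
            new AudioEvidence(0.8));       // agitated voice
        System.out.println(classify(batch)); // prints "frustrated"
    }
}
```

Because each modality hides its raw data behind a common scoring method, new modalities (e.g. user-stereotype data) can be added as further subclasses without changing the server-side fusion code.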


Keywords: Affective Interaction, Emotion Recognition System, Emotion Detection, Well-structured Objects, Actual User Input



Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. Department of Informatics, University of Piraeus, Piraeus, Greece
