Affect recognition for interactive companions: challenges and design in real world scenarios

  • Ginevra Castellano
  • Iolanda Leite
  • André Pereira
  • Carlos Martinho
  • Ana Paiva
  • Peter W. McOwan
Original Paper


Affect sensitivity is an important requirement for artificial companions to engage in social interaction with human users. This paper provides a general overview of some of the issues arising in the design of an affect recognition framework for artificial companions. Limitations and challenges are discussed with respect to other capabilities of companions, and a real-world scenario in which an iCat robot plays chess with children is presented. In this scenario, the affective states that a robot companion should be able to recognise are identified, and the non-verbal behaviours that the occurrence of these states elicits in the children are investigated. The experimental results aim to provide a foundation for the design of an affect recognition system for a game companion: in this interaction scenario, children tend to look at the iCat and smile more when they experience a positive feeling and when they are engaged with the iCat.
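The abstract's key empirical finding, that gaze directed at the iCat and smiling co-occur with positive feeling and engagement, suggests how such cues could feed a recogniser. The sketch below is purely illustrative and is not the paper's system: the feature names, weights, and thresholds are assumptions invented for this example, not values reported in the study.

```python
from dataclasses import dataclass


@dataclass
class NonverbalFeatures:
    """Per-turn behavioural features (hypothetical names; the paper's
    actual feature set is described in the full text, not here)."""
    gaze_at_robot: float  # fraction of the turn spent looking at the iCat, in [0, 1]
    smile_rate: float     # smiles per minute during the turn


def engagement_score(f: NonverbalFeatures,
                     w_gaze: float = 0.7,
                     w_smile: float = 0.3,
                     smile_cap: float = 5.0) -> float:
    """Toy linear score: more gaze at the robot and more smiling map to
    higher predicted engagement. Weights are illustrative only."""
    smile_norm = min(f.smile_rate, smile_cap) / smile_cap
    return w_gaze * f.gaze_at_robot + w_smile * smile_norm


# An attentive, smiling child should score higher than an inattentive one.
engaged = NonverbalFeatures(gaze_at_robot=0.8, smile_rate=4.0)
disengaged = NonverbalFeatures(gaze_at_robot=0.1, smile_rate=0.0)
print(engagement_score(engaged), engagement_score(disengaged))
```

A real system would replace this hand-weighted rule with a trained classifier over task- and interaction-based features, as the paper's related work on engagement detection does.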

Interactive companions · Affective and social robotics · Affect recognition · Affective cues · Socially intelligent behaviour





Copyright information

© OpenInterface Association 2009

Authors and Affiliations

  • Ginevra Castellano (1)
  • Iolanda Leite (2)
  • André Pereira (2)
  • Carlos Martinho (2)
  • Ana Paiva (2)
  • Peter W. McOwan (1)
  1. Department of Computer Science, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
  2. INESC-ID, Instituto Superior Técnico, Porto Salvo, Portugal
