Modeling Emotions in Robotic Socially Believable Behaving Systems

  • Anna Esposito
  • Lakhmi C. Jain
Part of the Intelligent Systems Reference Library book series (ISRL, volume 105)


This book investigates the features at the core of human interaction in order to model the emotional processes involved, and to design and develop autonomous systems and algorithms able to detect early signs of changes in moods and emotional states. Attention is focused on emotional social features and on the human ability to encode and decode emotional social cues while interacting. To this end, the book proposes a series of investigations that gather behavioral data from speech, handwriting, and facial, vocal, and gestural expressions, through behavioral tasks defined to produce changes in the perception of emotional social cues. Specific scenarios are designed to assess users' empathic and social competencies. The collected data are used to gain knowledge of how behavioral and interactional features are affected by individuals' moods and emotional states. This information can be exploited to devise multidimensional models of multimodal interactional features, which will serve to measure the degree of empathic relationship developed between individuals and to enable the design and development of cost-effective emotion-aware technologies for applicative contexts such as remote health-care services and robotic assistance.


Keywords: Socially believable robotic interfaces · Mood changes · Social and emotional interactional features · Speech · Gestures · Faces · Emotional expressions



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Dipartimento di Psicologia and IIASS, Seconda Università di Napoli, Caserta, Italy
  2. Bournemouth University, Bournemouth, UK
  3. University of Canberra, Canberra, Australia
