Human-Inspired Socially-Aware Interfaces

  • Conference paper
  • In: Theory and Practice of Natural Computing (TPNC 2019)

Abstract

Social interactions shape our human life and are inherently emotional. Human conversational partners usually try to interpret – consciously or unconsciously – the speaker’s or listener’s affective cues and to respond to them accordingly. With the objective of contributing to more natural and intuitive ways of communicating with machines, an increasing number of research projects have started to investigate how to simulate similar affective behaviors in socially-interactive agents. In this paper we present an overview of the state of the art in socially-interactive agents that expose a socially-aware interface, including mechanisms to recognize a user’s emotional state, to respond to it appropriately, and to continuously learn how to adapt to the needs and preferences of a human user. To this end, we focus on three essential properties of socially-aware interfaces: Social Perception, Socially-Aware Behavior Synthesis, and Learning Socially-Aware Behaviors. We also analyze the limitations of current approaches and discuss directions for future development.
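
To make the three properties above concrete, the sketch below wires them into a single interaction turn: a perception component estimates the user’s affective state from incoming social cues, a synthesis component turns that state into a socially appropriate response, and a simple bandit-style learner adapts the agent’s interaction style from turn to turn. This is a minimal sketch under our own assumptions; the class names, the input signals, and the valence-based reward are hypothetical and do not reproduce an interface defined in the paper.

```python
# Illustrative sketch only: a minimal loop over the three properties
# named in the abstract. All class and method names, input signals,
# and the valence-based reward are hypothetical assumptions.

import random
from dataclasses import dataclass


@dataclass
class AffectiveState:
    """Coarse user state, e.g. valence/arousal estimated from social cues."""
    valence: float = 0.0  # negative .. positive
    arousal: float = 0.0  # calm .. excited


class SocialPerception:
    """Stands in for multimodal affect recognition (voice, face, posture)."""

    def perceive(self, observation: dict) -> AffectiveState:
        # A real system would fuse classifier outputs across modalities;
        # here we simply read precomputed estimates.
        return AffectiveState(observation.get("valence", 0.0),
                              observation.get("arousal", 0.0))


class BehaviorSynthesis:
    """Maps the perceived state to a socially appropriate response."""

    def respond(self, state: AffectiveState, style: str) -> str:
        if state.valence < -0.3:
            return f"[{style}] This seems frustrating. Shall we slow down?"
        return f"[{style}] Great, let's keep going."


class StylePreferenceLearner:
    """Epsilon-greedy bandit over candidate interaction styles, rewarded
    by the change in estimated user valence after each agent turn."""

    def __init__(self, styles, epsilon=0.2):
        self.values = {s: 0.0 for s in styles}
        self.counts = {s: 0 for s in styles}
        self.epsilon = epsilon

    def choose(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(list(self.values))  # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, style: str, reward: float) -> None:
        # Incremental mean of the rewards observed for this style.
        self.counts[style] += 1
        self.values[style] += (reward - self.values[style]) / self.counts[style]


# One interaction turn: perceive, act, observe again, learn.
perception = SocialPerception()
synthesis = BehaviorSynthesis()
learner = StylePreferenceLearner(styles=["formal", "casual"])

before = perception.perceive({"valence": -0.5, "arousal": 0.4})
style = learner.choose()
print(synthesis.respond(before, style))
after = perception.perceive({"valence": -0.1, "arousal": 0.3})
learner.update(style, reward=after.valence - before.valence)
```

The bandit formulation is deliberately simple; socially-aware learning systems of this kind typically derive the reward from richer social signals than a single valence estimate.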

Acknowledgments

This work has been partially funded by the Bundesministerium für Bildung und Forschung (BMBF) within the project VIVA, Grant Number 16SV7960.

Author information

Correspondence to Elisabeth André.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Schiller, D., Weitz, K., Janowski, K., André, E. (2019). Human-Inspired Socially-Aware Interfaces. In: Martín-Vide, C., Pond, G., Vega-Rodríguez, M. (eds) Theory and Practice of Natural Computing. TPNC 2019. Lecture Notes in Computer Science, vol 11934. Springer, Cham. https://doi.org/10.1007/978-3-030-34500-6_2

  • DOI: https://doi.org/10.1007/978-3-030-34500-6_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-34499-3

  • Online ISBN: 978-3-030-34500-6

  • eBook Packages: Computer Science, Computer Science (R0)
