
Enabling Embodied Conversational Agents to Respond to Nonverbal Behavior of the Communication Partner

  • Conference paper
Human-Computer Interaction. User Experience and Behavior (HCII 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13304)


Abstract

Humans communicate on three levels: words, paralanguage, and nonverbal behavior. While conversational agents focus mainly on interpreting the words being spoken, attention has recently also shifted to how we say those words, through tone, pace, and intonation. Nonverbal communication, including facial expression, eye contact, posture, and proximity, has been largely ignored in human-agent interaction.

In this work, we propose incorporating nonverbal behavior into conversations between humans and agents by displaying a human-like embodied agent on a large screen and having it respond appropriately to the interlocutor's nonverbal cues. In a user study with 19 volunteers, we investigated how different behaviors of the embodied conversational agent (mimicry, positively biased mimicry, negatively biased mimicry, random) influence participants' perceptions. The results indicate that goal-directed behavior is perceived as significantly better than random behavior in terms of likability, social competence, attitude, and responsiveness. This suggests that even simple nonverbal rapport-building methods can improve the perceived conversational quality of an embodied conversational agent.



Notes

  1.

    www.mixamo.com.

  2.

    The Body Tracking SDK for Azure Kinect provides body segmentation and both observed and estimated 3D joints and landmarks for fully articulated, uniquely identified skeleton tracking (www.azure.microsoft.com/en-us/services/kinect-dk).

  3.

    Kinetic Space is an open-source tool that enables training, analysis, and recognition of individual gestures with a depth camera like Microsoft’s Kinect family [32].
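As a concrete illustration of how tracked joint data can feed such gesture recognition, the sketch below detects a nod from a series of head-pitch samples. The function name, threshold, and window convention are illustrative assumptions, not part of Kinetic Space or the Azure Kinect SDK:

```python
def detect_nod(pitch_series: list[float], threshold: float = 8.0) -> bool:
    """Detect a nod as a downward-and-back head-pitch swing (in degrees).

    pitch_series holds recent head-pitch samples, e.g. derived from the
    head joint of a body-tracking skeleton. A nod is assumed when the
    pitch dips below the initial baseline by more than the threshold and
    then returns close to it by the end of the window.
    """
    if not pitch_series:
        return False
    baseline = pitch_series[0]
    went_down = any(p - baseline > threshold for p in pitch_series)
    returned = abs(pitch_series[-1] - baseline) < threshold / 2
    return went_down and returned
```

A real recognizer would additionally smooth the signal and handle head shakes and tilts separately, but the baseline-deviation idea is the same.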

References

  1. AbuShawar, B., Atwell, E.: Alice chatbot: trials and outputs. Computación y Sistemas 19(4), 625–632 (2015)


  2. Albrecht, I., Schröder, M., Haber, J., Seidel, H.P.: Mixed feelings: expression of non-basic emotions in a muscle-based talking head. Virt. Real. 8(4), 201–212 (2005)


  3. Argyle, M.: Bodily Communication, 2nd edn, pp. 1–111. Routledge, London (1986)


  4. Bickmore, T., Cassell, J.: Small talk and conversational storytelling in embodied conversational interface agents. In: AAAI Fall Symposium on Narrative Intelligence, pp. 87–92 (1999)


  5. Bull, P.: Gesture and Posture. Pergamon Press, Oxford (1987)


  6. Butz, M., Hepperle, D., Wölfel, M.: Influence of visual appearance of agents on presence, attractiveness, and agency in virtual reality. In: Wölfel, M., Bernhardt, J., Thiel, S. (eds.) 10th EAI International Conference, ArtsIT 2021. LNICS, vol. 422, p. 44. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-95531-1_4

  7. Cassell, J., et al.: Embodiment in conversational interfaces: Rea. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 520–527 (1999)


  8. Cerekovic, A., Aran, O., Gatica-Perez, D.: Rapport with virtual agents: what do human social cues and personality explain? IEEE Trans. Affect. Comput. 8(3), 382–395 (2017)


  9. Chartrand, T.L., Bargh, J.A.: The chameleon effect: the perception-behavior link and social interaction. J. Pers. Soc. Psychol. 76(6), 893–910 (1999)


  10. Clark, L., et al.: What makes a good conversation? Challenges in designing truly conversational agents. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–12. Association for Computing Machinery, New York (2019)


  11. Das, A., Datta, S., Gkioxari, G., Lee, S., Parikh, D., Batra, D.: Embodied question answering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–10 (2018)


  12. Duffy, K.A., Chartrand, T.L.: Mimicry: causes and consequences. Curr. Opin. Behav. Sci. 3, 112–116 (2015)


  13. Duncan, S., Fiske, D.W.: Face to Face Interaction Research. Methods and Theory (1977)


  14. Gong, L.: Is happy better than sad even if they are both non-adaptive? Effects of emotional expressions of talking-head interface agents. Int. J. Hum. Comput. Stud. 65(3), 183–191 (2007)


  15. Gratch, J., et al.: Virtual Rapport. In: Gratch, J., Young, M., Aylett, R., Ballin, D., Olivier, P. (eds.) IVA 2006. LNCS (LNAI), vol. 4133, pp. 14–27. Springer, Heidelberg (2006). https://doi.org/10.1007/11821830_2


  16. Harrigan, J.A., Oxman, T.E., Rosenthal, R.: Rapport expressed through nonverbal behavior. J. Nonverbal Behav. 9(2), 95–110 (1985)


  17. Harrigan, J.A., Rosenthal, R.: Physicians’ head and body positions as determinants of perceived rapport. J. Appl. Soc. Psychol. 13(6), 496–509 (1983)


  18. Heylen, D.: Challenges ahead: head movements and other social acts in conversations. Virt. Soc. Agents, 45–52 (2005)


  19. Hoque, M.E., Courgeon, M., Martin, J.C., Mutlu, B., Picard, R.W.: MACH: my automated conversation coach. In: Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp 2013, pp. 697–706. Association for Computing Machinery, New York (2013)


  20. Huang, L., Morency, L.-P., Gratch, J.: Virtual Rapport 2.0. In: Vilhjálmsson, H.H., Kopp, S., Marsella, S., Thórisson, K.R. (eds.) IVA 2011. LNCS (LNAI), vol. 6895, pp. 68–79. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23974-8_8


  21. Lester, J., Branting, K., Mott, B.: Conversational Agents. The Practical Handbook of Internet Computing, pp. 220–240 (2004)


  22. Lucas, G.M., et al.: Getting to know each other: the role of social dialogue in recovery from errors in social robots. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 344–351. ACM, Chicago (2018)


  23. Mara, M., Appel, M.: Effects of lateral head tilt on user perceptions of humanoid and android robots. Comput. Hum. Behav. 44, 326–334 (2015)


  24. Massaro, D.W., Cohen, M.M., Daniel, S., Cole, R.A.: Developing and evaluating conversational agents. In: Human Performance and Ergonomics, pp. 173–194. Elsevier (1999)


  25. Nass, C., Moon, Y.: Machines and mindlessness: social responses to computers. J. Soc. Issues 56(1), 81–103 (2000)


  26. Pentland, A.: Socially aware, computation and communication. Computer 38(3), 33–40 (2005)


  27. Poggi, I., D’Errico, F., Vincze, L.: Types of nods. The polysemy of a social signal. In: Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010) (2010)


  28. Salem, B., Earle, N.: Designing a non-verbal language for expressive avatars. In: Proceedings of the Third International Conference on Collaborative Virtual Environments, CVE 2000, pp. 93–101. Association for Computing Machinery, New York (2000)


  29. Schiaffino, S., Amandi, A.: User-interface agent interaction: personalization issues. Int. J. Hum. Comput. Stud. 60(1), 129–148 (2004)


  30. Unity Technologies: Unity Platform: 2D/3D Game Creator & Editor, Augmented/Virtual Reality Software, Game Engine. https://unity.com/products/unity-platform

  31. Wang, F.Y., Carley, K.M., Zeng, D., Mao, W.: Social computing: from social informatics to social intelligence. IEEE Intell. Syst. 22(2), 79–83 (2007)


  32. Wölfel, M.: Kinetic Space - 3D Gestenerkennung für Dich und Mich. Konturen 32, 58–63 (2012)


  33. Zakharov, E., Shysheya, A., Burkov, E., Lempitsky, V.: Few-shot adversarial learning of realistic neural talking head models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9459–9468 (2019)



Author information

Corresponding author

Correspondence to Matthias Wölfel.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wölfel, M., Purps, C.F., Percifull, N. (2022). Enabling Embodied Conversational Agents to Respond to Nonverbal Behavior of the Communication Partner. In: Kurosu, M. (ed.) Human-Computer Interaction. User Experience and Behavior. HCII 2022. Lecture Notes in Computer Science, vol 13304. Springer, Cham. https://doi.org/10.1007/978-3-031-05412-9_40


  • DOI: https://doi.org/10.1007/978-3-031-05412-9_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05411-2

  • Online ISBN: 978-3-031-05412-9

  • eBook Packages: Computer Science (R0)
