Social Gaze Model for an Interactive Virtual Character

  • Bram van den Brink
  • Christyowidiasmoro
  • Zerrin Yumak
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10498)

Abstract

This paper describes a live demo of our autonomous social gaze model for an interactive virtual character situated in the real world. We are interested in estimating which user has an intention to interact, in other words, which user is engaged with the virtual character. The model takes into account behavioral cues such as proximity, velocity, posture, and sound, estimates an engagement score for each user, and drives the gaze behavior of the virtual character accordingly. Initially, we assign equal weights to these features. Using data collected in a real setting, we analyze which features have higher importance and find that the model with weighted features correlates better with the ground-truth data.
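Below is a minimal Python sketch of the weighted-sum idea behind the engagement score. The four cue names follow the abstract, but the normalization to [0, 1], the equal-weight default, the example weights, and all numeric values are illustrative assumptions rather than the paper's actual implementation.

```python
# Sketch of an engagement score as a weighted sum of behavioral cues.
# Cue values are assumed to be normalized to [0, 1]; all numbers here
# are made up for illustration.

FEATURES = ("proximity", "velocity", "posture", "sound")

def engagement_score(cues, weights=None):
    """Combine per-user cue values into a single engagement score."""
    if weights is None:
        # Initial model from the abstract: equal weights for all features.
        weights = {f: 1.0 / len(FEATURES) for f in FEATURES}
    return sum(weights[f] * cues.get(f, 0.0) for f in FEATURES)

def most_engaged_user(users, weights=None):
    """Pick the user with the highest score, i.e. the gaze target."""
    return max(users, key=lambda u: engagement_score(u["cues"], weights))

users = [
    {"id": "A", "cues": {"proximity": 0.9, "velocity": 0.2,
                         "posture": 0.8, "sound": 0.7}},
    {"id": "B", "cues": {"proximity": 0.3, "velocity": 0.6,
                         "posture": 0.4, "sound": 0.1}},
]

# Equal weights (the model's starting point) ...
print(most_engaged_user(users)["id"])           # -> "A"

# ... versus hypothetical learned weights favoring proximity and sound.
learned = {"proximity": 0.4, "velocity": 0.1, "posture": 0.2, "sound": 0.3}
print(most_engaged_user(users, learned)["id"])  # -> "A"
```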

Keywords

Gaze model · Engagement · Situated interaction

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Bram van den Brink (1)
  • Christyowidiasmoro (1)
  • Zerrin Yumak (1)

  1. Utrecht University, Utrecht, Netherlands
