
Estimating a User’s Conversational Engagement Based on Head Pose Information

  • Ryota Ooko
  • Ryo Ishii
  • Yukiko I. Nakano
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6895)

Abstract

With the goal of building an intelligent conversational agent that can recognize the user’s engagement, this paper proposes a method for judging a user’s conversational engagement based on head pose data. First, we analyzed how head pose information is correlated with the user’s conversational engagement and found that the amplitudes of head movement and rotation have a moderate positive correlation with the level of conversational engagement. We then built an engagement estimation model by applying a decision tree learning algorithm to 19 head pose parameters. The results showed that the proposed model based on head pose information performs quite well.
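
The abstract does not enumerate the 19 parameters or the exact decision tree setup, so the following is only a minimal illustrative sketch of the general approach it describes: summarizing a window of head position and rotation samples into amplitude-style features and training a decision tree classifier on engaged/not-engaged labels. The feature choices, window size, and toy data below are assumptions, not the authors' configuration.

```python
# Hedged sketch of engagement estimation from head pose windows.
# Feature set and data here are illustrative assumptions, not the paper's 19 parameters.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def head_pose_features(pose_window: np.ndarray) -> np.ndarray:
    """Summarize one window of head pose samples.

    pose_window: array of shape (n_frames, 6) holding
    (x, y, z) head position and (pitch, yaw, roll) rotation per frame.
    Returns a flat feature vector (amplitude, mean, std per channel).
    """
    amplitude = pose_window.max(axis=0) - pose_window.min(axis=0)  # movement/rotation range
    mean = pose_window.mean(axis=0)
    std = pose_window.std(axis=0)
    return np.concatenate([amplitude, mean, std])  # 18 features in this sketch

# Toy data: 200 labelled 30-frame windows (engaged = 1, not engaged = 0).
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 30, 6))
labels = rng.integers(0, 2, size=200)

X = np.stack([head_pose_features(w) for w in windows])
clf = DecisionTreeClassifier(max_depth=4, random_state=0)
print("Cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

In practice the windows would come from a head tracker synchronized with annotated engagement labels, and the tree depth would be tuned on held-out conversation data rather than fixed as above.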

Keywords

conversational engagement, head pose, eye gaze



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Ryota Ooko (1)
  • Ryo Ishii (2, 3)
  • Yukiko I. Nakano (4)
  1. Graduate School of Science and Technology, Seikei University, Musashino-shi, Japan
  2. Graduate School of Informatics, Kyoto University, Kyoto, Japan
  3. NTT Cyber Space Laboratories, NTT Corporation, Japan
  4. Dept. of Computer and Information Science, Seikei University, Japan
