Modeling Gaze Behavior for Virtual Demonstrators

  • Yazhou Huang
  • Justin L. Matthews
  • Teenie Matlock
  • Marcelo Kallmann
Conference paper

DOI: 10.1007/978-3-642-23974-8_17

Part of the Lecture Notes in Computer Science book series (LNCS, volume 6895)
Cite this paper as:
Huang Y., Matthews J.L., Matlock T., Kallmann M. (2011) Modeling Gaze Behavior for Virtual Demonstrators. In: Vilhjálmsson H.H., Kopp S., Marsella S., Thórisson K.R. (eds) Intelligent Virtual Agents. IVA 2011. Lecture Notes in Computer Science, vol 6895. Springer, Berlin, Heidelberg

Abstract

Achieving autonomous virtual humans with coherent and natural motions is key to their effectiveness in many educational, training, and therapeutic applications. Among the several aspects to be considered, gaze behavior is an important non-verbal communication channel that plays a vital role in the quality of the resulting animations. This paper focuses on analyzing gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations. We address temporal information and coordination with targets and observers at varied positions.
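The coordination the abstract describes, with attention alternating between the demonstrated object and the listener over the course of a task, can be pictured with a small sketch. The following is a minimal, hypothetical illustration and not the authors' model: a head-orientation solver plus a toy schedule that glances at the observer near the start and end of a demonstration. All function names, axis conventions, and timing constants are assumptions for illustration only.

```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def look_at_angles(eye: Vec3, target: Vec3) -> tuple[float, float]:
    """Yaw/pitch (radians) that orient a head at `eye` toward `target`.
    Assumes y is up and z is forward; both are conventions chosen here,
    not taken from the paper."""
    dx, dy, dz = target.x - eye.x, target.y - eye.y, target.z - eye.z
    yaw = math.atan2(dx, dz)                    # rotation about the up axis
    pitch = math.atan2(dy, math.hypot(dx, dz))  # elevation toward the target
    return yaw, pitch

def gaze_phase(t: float, duration: float, glance: float = 0.2) -> str:
    """Toy schedule: glance at the observer near the start and end of the
    demonstration, fixate the manipulated object in between. The `glance`
    fraction is a placeholder value, not a result from the paper."""
    phase = t / duration
    return "observer" if phase < glance or phase > 1.0 - glance else "target"

# Example: where should the demonstrator look 1.5 s into a 5 s demonstration?
eye = Vec3(0.0, 1.6, 0.0)
targets = {"target": Vec3(0.5, 0.9, 0.8), "observer": Vec3(-1.0, 1.6, 2.0)}
who = gaze_phase(1.5, 5.0)
print(who, look_at_angles(eye, targets[who]))
```

A data-driven model such as the one the paper derives from motion capture would replace the fixed `glance` fraction with timing distributions measured from human demonstrators at varied target and observer positions.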

Keywords

gaze model · motion synthesis · virtual humans · virtual reality

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Yazhou Huang (1)
  • Justin L. Matthews (1)
  • Teenie Matlock (1)
  • Marcelo Kallmann (1)
  1. University of California, Merced, USA
