Modeling Gaze Behavior for Virtual Demonstrators
Cite this paper as: Huang Y., Matthews J.L., Matlock T., Kallmann M. (2011) Modeling Gaze Behavior for Virtual Demonstrators. In: Vilhjálmsson H.H., Kopp S., Marsella S., Thórisson K.R. (eds) Intelligent Virtual Agents. IVA 2011. Lecture Notes in Computer Science, vol 6895. Springer, Berlin, Heidelberg.
Achieving autonomous virtual humans with coherent and natural motions is key to effectiveness in many educational, training, and therapeutic applications. Among the several aspects to be considered, gaze behavior is an important non-verbal communication channel that plays a vital role in the effectiveness of the resulting animations. This paper focuses on analyzing gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations. We address temporal information and coordination with targets and observers at varied positions.
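To make the setting concrete, the sketch below illustrates the kind of problem the paper addresses: a demonstrator whose gaze alternates between a target object and an observer at arbitrary 3D positions. This is a minimal hypothetical illustration, not the authors' model; the function names, the fixed dwell durations, and the simple alternating schedule are placeholder assumptions, whereas the paper derives such timing and coordination from motion-captured demonstrations.

```python
import math

def normalize(v):
    """Return the unit vector of a 3-tuple v."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def gaze_direction(eye_pos, focus_pos):
    """Unit direction from the demonstrator's eyes toward a focus point."""
    return normalize(tuple(f - e for e, f in zip(eye_pos, focus_pos)))

def gaze_schedule(t, target_dwell=1.2, observer_dwell=0.8):
    """Pick 'target' or 'observer' at time t (seconds).

    The dwell durations here are hypothetical placeholders; a data-driven
    model would estimate them from captured demonstrations."""
    cycle = target_dwell + observer_dwell
    return "target" if (t % cycle) < target_dwell else "observer"

if __name__ == "__main__":
    eye = (0.0, 1.6, 0.0)        # demonstrator eye position
    target = (0.5, 0.9, 0.6)     # object being demonstrated
    observer = (-1.0, 1.6, 2.0)  # listener position
    for t in (0.0, 1.0, 1.5):
        focus = target if gaze_schedule(t) == "target" else observer
        print(t, gaze_schedule(t), gaze_direction(eye, focus))
```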
Keywords: gaze model · motion synthesis · virtual humans · virtual reality