Abstract
Autonomous virtual humans with coherent and natural motion are key to the effectiveness of many educational, training, and therapeutic applications. Among the several aspects to be considered, gaze behavior is an important non-verbal communication channel that plays a vital role in the effectiveness of the resulting animations. This paper focuses on analyzing gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations, and it addresses the timing of gaze shifts and their coordination with targets and observers at varied positions.
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Huang, Y., Matthews, J.L., Matlock, T., Kallmann, M. (2011). Modeling Gaze Behavior for Virtual Demonstrators. In: Vilhjálmsson, H.H., Kopp, S., Marsella, S., Thórisson, K.R. (eds) Intelligent Virtual Agents. IVA 2011. Lecture Notes in Computer Science, vol 6895. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23974-8_17
DOI: https://doi.org/10.1007/978-3-642-23974-8_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-23973-1
Online ISBN: 978-3-642-23974-8