Modeling Gaze Behavior for Virtual Demonstrators

  • Yazhou Huang
  • Justin L. Matthews
  • Teenie Matlock
  • Marcelo Kallmann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6895)

Abstract

Achieving autonomous virtual humans with coherent and natural motions is key to their effectiveness in many educational, training, and therapeutic applications. Among the several aspects to be considered, gaze behavior is an important non-verbal communication channel that plays a vital role in the effectiveness of the resulting animations. This paper focuses on analyzing gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations. We address temporal information and coordination with targets and observers at varied positions.
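Purely as an illustration of the kind of quantities such an analysis involves (and not the paper's actual model, which is derived from captured motion data), the sketch below computes gaze yaw/pitch angles from a demonstrator's eye position toward a target object and toward an observer, and alternates fixations between them on a fixed schedule. All names, positions, and fixation durations are hypothetical placeholders.

```python
# Illustrative sketch only: gaze directions toward a demonstration target and an
# observer, with fixations alternating between them. Not the paper's model;
# positions and durations are made-up example values.

import math
from dataclasses import dataclass


@dataclass
class Vec3:
    x: float
    y: float
    z: float


def gaze_angles(eye: Vec3, point: Vec3) -> tuple[float, float]:
    """Yaw and pitch (degrees) of the gaze direction from `eye` to `point`.

    Assumes a right-handed frame with -z as the 'forward' direction.
    """
    dx, dy, dz = point.x - eye.x, point.y - eye.y, point.z - eye.z
    yaw = math.degrees(math.atan2(dx, -dz))
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return yaw, pitch


def gaze_schedule(eye: Vec3, target: Vec3, observer: Vec3,
                  t_target: float = 1.2, t_observer: float = 0.6,
                  duration: float = 6.0):
    """Alternate fixations between the target object and the observer.

    Returns a list of (start_time, yaw, pitch, label) tuples. The fixation
    durations are arbitrary illustrative values, not measured data.
    """
    schedule, t, look_at_target = [], 0.0, True
    while t < duration:
        point, label, dt = ((target, "target", t_target) if look_at_target
                            else (observer, "observer", t_observer))
        yaw, pitch = gaze_angles(eye, point)
        schedule.append((t, yaw, pitch, label))
        t += dt
        look_at_target = not look_at_target
    return schedule


if __name__ == "__main__":
    eye = Vec3(0.0, 1.6, 0.0)         # demonstrator's eye position
    obj = Vec3(0.5, 0.9, -0.8)        # object being demonstrated
    listener = Vec3(-1.0, 1.6, -2.0)  # observer/listener position
    for start, yaw, pitch, label in gaze_schedule(eye, obj, listener):
        print(f"t={start:4.1f}s  look at {label:8s}  yaw={yaw:6.1f}  pitch={pitch:5.1f}")
```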

Keywords

Gaze model · Motion synthesis · Virtual humans · Virtual reality



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Yazhou Huang (1)
  • Justin L. Matthews (1)
  • Teenie Matlock (1)
  • Marcelo Kallmann (1)
  1. University of California, Merced, USA
