
Automatic Generation of Conversational Behavior for Multiple Embodied Virtual Characters: The Rules and Models behind Our System

  • Conference paper
Intelligent Virtual Agents (IVA 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5208)

Abstract

In this paper, we present the rules and algorithms we use to automatically generate non-verbal behavior, such as gestures and gaze, for two embodied virtual agents. They allow us to transform a dialogue in text format into an agent behavior script enriched with eye gaze and conversational gestures. The agents’ gaze behavior is informed by theories of human face-to-face gaze behavior, while gestures are generated from an analysis of the linguistic and contextual information in the input text. Since all behaviors are generated automatically, our system offers content creators a convenient way to compose multimodal presentations, a task that would otherwise be very cumbersome and time-consuming.
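The abstract describes a pipeline that turns a plain-text dialogue into a behavior script annotated with gaze and gesture tags. The toy sketch below illustrates what a rule-based annotation step of that kind could look like; the function names, the keyword-triggered deictic-gesture rule, and the speaker-looks-at-listener gaze rule are illustrative assumptions for exposition only, not the rules or models used in the authors' system.

```python
# Illustrative sketch only: a toy rule-based annotator that converts a text
# dialogue into a behavior script with gaze and gesture tags. The specific
# rules (keyword-triggered deictic gestures, turn-based gaze shifts) are
# simplified assumptions, not the system described in the paper.

DEICTIC_WORDS = {"this", "that", "here", "there"}

def annotate_turn(speaker: str, listener: str, text: str) -> dict:
    """Attach gaze and gesture annotations to a single dialogue turn."""
    words = text.lower().split()

    # Gesture rule (assumed): mark a deictic gesture on demonstrative words.
    gestures = [
        {"type": "deictic", "word_index": i}
        for i, w in enumerate(words)
        if w.strip(".,!?") in DEICTIC_WORDS
    ]

    # Gaze rule (assumed): the speaker looks at the listener at turn start
    # and averts gaze mid-utterance for longer turns.
    gaze = [{"time": "turn_start", "target": listener}]
    if len(words) > 8:
        gaze.append({"time": "mid_turn", "target": "away"})

    return {"speaker": speaker, "text": text, "gaze": gaze, "gestures": gestures}

def build_script(dialogue):
    """dialogue: list of (speaker, listener, text) tuples -> behavior script."""
    return [annotate_turn(s, l, t) for s, l, t in dialogue]

if __name__ == "__main__":
    demo = [
        ("Agent_A", "Agent_B", "Look at this diagram over there."),
        ("Agent_B", "Agent_A", "I see, that explains the whole process quite well indeed."),
    ]
    for turn in build_script(demo):
        print(turn)
```

Running the sketch prints one annotated dictionary per dialogue turn; a real system would render such a script through an animation engine rather than printing it.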



Editor information

Helmut Prendinger, James Lester, Mitsuru Ishizuka

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Breitfuss, W., Prendinger, H., Ishizuka, M. (2008). Automatic Generation of Conversational Behavior for Multiple Embodied Virtual Characters: The Rules and Models behind Our System. In: Prendinger, H., Lester, J., Ishizuka, M. (eds) Intelligent Virtual Agents. IVA 2008. Lecture Notes in Computer Science, vol 5208. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-85483-8_49

  • DOI: https://doi.org/10.1007/978-3-540-85483-8_49

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-85482-1

  • Online ISBN: 978-3-540-85483-8

  • eBook Packages: Computer Science, Computer Science (R0)
