Multimodal Plan Representation for Adaptable BML Scheduling

  • Dennis Reidsma
  • Herwin van Welbergen
  • Job Zwiers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6895)


In this paper we show how behavior scheduling for Virtual Humans can be viewed as a constraint optimization problem, and how Elckerlyc uses this view to maintain an extensible behavior plan representation that allows one to make micro-adjustments to behaviors while keeping the constraints between them intact. These capabilities make it possible to implement tight mutual behavior coordination between a Virtual Human and a user, without having to re-schedule every time an adjustment needs to be made.
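The core idea — keeping synchrony constraints intact under micro-adjustment — can be illustrated with a minimal sketch. This is a hypothetical illustration, not Elckerlyc's actual API: behaviors reference shared, mutable time points (here called `TimePeg`, a name borrowed from the realizer's plan representation), so moving one such point shifts every attached behavior at once and the constraint between them holds without re-running the scheduler.

```python
# Hypothetical sketch of constraint-preserving micro-adjustment.
# Behaviors do not store absolute times; their sync points resolve
# through shared pegs, so adjusting a peg adjusts all behaviors
# attached to it and their mutual timing constraint stays satisfied.

class TimePeg:
    """A mutable point on the global timeline."""
    def __init__(self, time: float):
        self.time = time

class Behavior:
    """A planned behavior whose start/end resolve via shared pegs."""
    def __init__(self, name: str, start_peg: TimePeg, end_peg: TimePeg):
        self.name = name
        self.start = start_peg
        self.end = end_peg

# Two behaviors constrained to start together: they share one peg.
start = TimePeg(1.0)
speech = Behavior("speech1", start, TimePeg(3.0))
gesture = Behavior("gesture1", start, TimePeg(2.5))

# Micro-adjustment: the user acts earlier than predicted, so the
# shared start peg is moved; both behaviors shift, no re-scheduling.
start.time = 0.8
assert speech.start.time == gesture.start.time == 0.8
```

In the actual realizer the plan representation is richer (constraints come from BML sync references and the scheduler resolves them), but the design choice is the same: represent timing symbolically and share the symbols, rather than baking resolved times into each behavior.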





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Dennis Reidsma
  • Herwin van Welbergen
  • Job Zwiers

All authors are affiliated with Human Media Interaction, University of Twente, The Netherlands.
