An Incremental Multimodal Realizer for Behavior Co-Articulation and Coordination

  • Herwin van Welbergen
  • Dennis Reidsma
  • Stefan Kopp
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7502)


Human conversations are highly dynamic, responsive interactions. To enter into flexible interactions with humans, a conversational agent must be capable of fluent incremental behavior generation. New utterance content must be integrated seamlessly with ongoing behavior, requiring dynamic application of co-articulation. The timing and shape of the agent's behavior must be adapted on the fly to the interlocutor, resulting in natural interpersonal coordination. We present AsapRealizer, a BML 1.0 behavior realizer that achieves these capabilities by building upon and extending two existing state-of-the-art realizers, as the result of a collaboration between two research groups.
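To illustrate the kind of input a BML realizer such as AsapRealizer consumes, the following is a minimal sketch of a BML 1.0 block; the ids, sync-point names, and spoken text are illustrative, and the element and attribute names follow the BML 1.0 specification. The `stroke="speech1:s1"` constraint aligns the gesture's stroke phase with a synchronization point inside the speech, which is the cross-modal coordination mechanism the realizer must satisfy:

```xml
<bml xmlns="http://www.bml-initiative.org/bml/bml-1.0" id="bml1">
  <!-- Speech behavior with an internal sync point (id "s1") -->
  <speech id="speech1">
    <text>Welcome to <sync id="s1"/> our lab.</text>
  </speech>
  <!-- Gesture whose stroke is constrained to coincide with speech1:s1 -->
  <gesture id="gesture1" lexeme="BEAT" stroke="speech1:s1"/>
</bml>
```

An incremental realizer must be able to receive such blocks while earlier blocks are still being executed, merging the new behaviors with ongoing ones (co-articulation) rather than starting from a rest pose.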


Keywords: Interactional Coordination · Virtual Human · Subsiding State · Conversational Agent · Synchronization Point





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Herwin van Welbergen (1)
  • Dennis Reidsma (2)
  • Stefan Kopp (1)
  1. Sociable Agents Group, CITEC, Faculty of Technology, Bielefeld University, Germany
  2. Human Media Interaction, University of Twente, The Netherlands
