Autonomous Agents and Multi-Agent Systems, Volume 27, Issue 2, pp 305–327

Multimodal plan representation for adaptable BML scheduling

  • Herwin van Welbergen
  • Dennis Reidsma
  • Job Zwiers

Abstract

Natural human interaction is characterized by interpersonal coordination: interlocutors converge in their speech rates, smoothly switch speaking turns with virtually no delay, provide each other with verbal and nonverbal backchannel feedback, wait for and react to such feedback, execute physical tasks in tight synchrony, and so on. If virtual humans are to achieve such interpersonal coordination, they require very flexible behavior plans that can be adjusted on the fly. In this paper we discuss how such plans are represented, maintained, and constructed in our BML realizer Elckerlyc. We argue that behavior scheduling for virtual humans can be viewed as a constraint satisfaction problem, and show how Elckerlyc uses this view in its flexible behavior plan representation, which allows one to make on-the-fly adjustments to behaviors while keeping the specified constraints between them intact.
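
The abstract's central claim is that multimodal behavior scheduling can be treated as constraint satisfaction over the sync points of behaviors. As a rough illustration of that view only (this is not Elckerlyc's actual API; the class names, the rigid-shift resolution strategy, and the example timings are all assumptions made for this sketch), the Python fragment below keeps a cross-modal "at" constraint between a speech sync point and a gesture stroke intact while the speech timing changes on the fly:

    # Illustrative sketch only (not Elckerlyc's actual API): BML-style
    # scheduling viewed as constraint satisfaction over behavior sync points.
    # All names and timings below are assumptions made up for this example.
    from dataclasses import dataclass

    @dataclass
    class Behavior:
        """A behavior with named sync points mapped to tentative times (s)."""
        bml_id: str
        sync: dict  # e.g. {"start": 0.0, "stroke": 0.6, "end": 1.2}

    @dataclass
    class AtConstraint:
        """Requires a sync point of behavior a and one of b to coincide."""
        a: Behavior
        a_sync: str
        b: Behavior
        b_sync: str

        def satisfied(self, eps: float = 1e-6) -> bool:
            return abs(self.a.sync[self.a_sync] - self.b.sync[self.b_sync]) < eps

        def enforce(self) -> None:
            # Resolve by rigidly shifting behavior b in time; a real
            # scheduler could also stretch or re-plan the behavior instead.
            delta = self.a.sync[self.a_sync] - self.b.sync[self.b_sync]
            for name in self.b.sync:
                self.b.sync[name] += delta

    def reschedule(constraints: list) -> None:
        """Re-enforce all constraints after an on-the-fly timing change."""
        for c in constraints:
            if not c.satisfied():
                c.enforce()

    # A gesture stroke constrained to coincide with a speech sync point:
    speech = Behavior("speech1", {"start": 0.0, "s1": 0.8, "end": 2.0})
    gesture = Behavior("gesture1", {"start": 0.2, "stroke": 0.9, "end": 1.5})
    plan = [AtConstraint(speech, "s1", gesture, "stroke")]
    reschedule(plan)

    # Later the speech synthesizer reports revised timing; the plan is
    # adjusted on the fly while the cross-modal constraint stays intact.
    speech.sync = {"start": 0.0, "s1": 1.1, "end": 2.4}
    reschedule(plan)
    assert plan[0].satisfied()

A full realizer would of course support richer constraints (before/after relations, offsets) and negotiate timing with the modality engines, but the representation of a plan as behaviors plus constraints over their sync points is the idea the sketch is meant to convey.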

Keywords

Virtual Humans · Behavior Markup Language · SAIBA · Multimodal plan representation · Interpersonal coordination

Acknowledgments

This research has been supported by the GATE project, funded by the Dutch Organization for Scientific Research (NWO), and by the GATE KTP project.

Copyright information

© The Author(s) 2013

Authors and Affiliations

  • Herwin van Welbergen (1, 2)
  • Dennis Reidsma (2)
  • Job Zwiers (2)

  1. Sociable Agents Group, CITEC, University of Bielefeld, Bielefeld, Germany
  2. Human Media Interaction Group, University of Twente, Enschede, The Netherlands
