A Mark-Up Language and Interpreter for Interactive Scenes for Embodied Conversational Agents

  • David Novick
  • Mario Gutierrez
  • Ivan Gris
  • Diego A. Rivera
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9179)

Abstract

Our research seeks to provide embodied conversational agents (ECAs) with behaviors that enable them to build and maintain rapport with human users. To conduct this research, we need agents and systems that can sustain high levels of engagement with humans over multiple interaction sessions, potentially extending over longer periods so that we can examine the long-term effects of the virtual agent’s behaviors. Our current ECA interacts with humans in a game called “Survival on Jungle Island.” Throughout this game, users interact with our agent across several scenes, each composed of a collection of speech input, speech output, gesture input, gesture output, scenery, triggers, and decision points. Our prior system was implemented in procedural code, which did not lend itself to rapid extension to new game scenes. To enable effective authoring of scenes for the “Jungle” game, we therefore adopted a declarative approach: we developed ECA middleware that parses, interprets, and executes XML files that define the scenes. This paper presents the XML coding scheme and its implementation and describes the functional back-end enabled by the scene scripts.
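The paper's actual XML coding scheme is not reproduced on this page. Purely as an illustration of the declarative style the abstract describes, a scene script combining speech, gestures, scenery, triggers, and a decision point might look like the following sketch (every element and attribute name here is hypothetical, not the authors' schema):

```xml
<!-- Hypothetical sketch of a declarative scene script.
     All names are illustrative; the authors' schema may differ. -->
<scene id="beach_arrival">
  <scenery model="jungle_beach"/>
  <agent-output>
    <speech file="welcome_01.wav"/>      <!-- recorded speech output -->
    <gesture name="wave"/>               <!-- gesture output -->
  </agent-output>
  <user-input>
    <speech grammar="yes_no"/>           <!-- speech input -->
    <gesture name="point"/>              <!-- gesture input -->
  </user-input>
  <trigger event="user_says_yes" goto="build_shelter"/>
  <decision default="explore_coast" timeout="10"/>
</scene>
```

A middleware interpreter of the kind the abstract describes would parse such a file, load the scenery, play the agent's outputs, and wait on the declared inputs and triggers to select the next scene, so that authors can add scenes without writing procedural code.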

Keywords

Embodied conversational agents · Scene · Interpreter · Parser


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • David Novick¹
  • Mario Gutierrez¹
  • Ivan Gris¹
  • Diego A. Rivera¹

  1. Department of Computer Science, The University of Texas at El Paso, El Paso, USA
