
Say Hi to Eliza

An Embodied Conversational Agent on the Web
  • Gerard Llorach
  • Josep Blat
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10498)

Abstract

Creating and supporting Embodied Conversational Agents (ECAs) has been challenging, as the required features are not straightforward to implement and to integrate into a single application. Furthermore, ECAs built as desktop applications present drawbacks for both developers and users: the former must develop for each device and operating system, and the latter must install additional software, which limits widespread use. In this paper we demonstrate how recent advances in web technologies enable capable web-based ECAs using off-the-shelf technologies, in particular the Web Speech API, Web Audio API, WebGL and Web Workers. We describe their integration into a simple, fully functional web-based 3D ECA accessible from any modern device, with special attention to our novel work on the creation and support of the embodiment aspects.
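To make the kind of integration the abstract refers to concrete, the following TypeScript sketch wires the browser's built-in Web Speech API (recognition and synthesis) to a placeholder dialogue function. This is an illustrative sketch under stated assumptions, not the authors' implementation: generateReply is a hypothetical stand-in for the agent's dialogue logic, and the recognition constructor may still be vendor-prefixed in some browsers.

  // Minimal sketch: listen for a user utterance with the Web Speech API,
  // pass the transcript to a placeholder dialogue function, and speak the reply.

  // The recognition constructor is still vendor-prefixed in some browsers.
  const SpeechRecognitionCtor =
    (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

  // Hypothetical stand-in for the agent's dialogue logic (e.g. an ELIZA-style
  // pattern matcher); not part of the paper.
  function generateReply(userText: string): string {
    return `You said: "${userText}". Tell me more.`;
  }

  function listenAndRespond(): void {
    const recognition = new SpeechRecognitionCtor();
    recognition.lang = "en-US";
    recognition.interimResults = false;

    recognition.onresult = (event: any) => {
      const transcript: string = event.results[0][0].transcript;
      const reply = generateReply(transcript);

      // Speak the reply with the browser's built-in speech synthesis.
      const utterance = new SpeechSynthesisUtterance(reply);
      utterance.onboundary = () => {
        // Per-word boundary events could drive lip-sync or gesture timing.
      };
      window.speechSynthesis.speak(utterance);
    };

    recognition.start();
  }

  listenAndRespond();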

Keywords

Embodied conversational agents · Web technologies · Virtual characters



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Interactive Technologies Group, Universitat Pompeu Fabra, Barcelona, Spain
  2. Medizinische Physik and Cluster of Excellence ‘Hearing4all’, Universität Oldenburg, Oldenburg, Germany
  3. Hörzentrum Oldenburg GmbH, Oldenburg, Germany
