
Enhancing Human-Computer Interaction with Embodied Conversational Agents

  • Mary Ellen Foster
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4555)

Abstract

We survey recent research in which the impact of an embodied conversational agent on human-computer interaction has been assessed through a human evaluation. In some studies, the evaluation compared different versions of the agent against one another in the context of a full interactive system; in others, it measured how specific aspects of the embodied agent’s behaviour affected users’ perception of spoken output. In almost all of the studies, an embodied agent that displays appropriate non-verbal behaviour was found to enhance the interaction.

Keywords

Facial Expression · Body Language · Synthesised Speech · Social Dialogue · Interface Agent

References

  1. Cassell, J., Sullivan, J., Prevost, S., Churchill, E. (eds.): Embodied Conversational Agents. MIT Press, Cambridge (2000)
  2. Reeves, B., Nass, C.: The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press, Cambridge (1996)
  3. Bavelas, J.B., Chovil, N.: Visible acts of meaning: An integrated message model of language in face-to-face dialogue. Journal of Language and Social Psychology 19(2), 163–194 (2000), doi:10.1177/026192700019002001
  4. Kendon, A.: Gesture: Visible Action as Utterance. Cambridge University Press, Cambridge (2004)
  5. McNeill, D. (ed.): Language and Gesture: Window into Thought and Action. Cambridge University Press, Cambridge (2000)
  6. Bickmore, T., Cassell, J.: Social dialogue with embodied conversational agents. In: van Kuppevelt, J., Dybkjær, L., Bernsen, N.O. (eds.) Advances in Natural, Multimodal Dialogue Systems. Kluwer, New York (2005)
  7. Ruttkay, Z., André, E., Johnson, W.L., Pelachaud, C. (eds.): Evaluating Embodied Conversational Agents. Dagstuhl Seminar Proceedings 04121 (2006)
  8. Dybkjær, L., Bernsen, N.O., Minker, W.: Evaluation and usability of multimodal spoken language dialogue systems. Speech Communication 43(1–2), 33–54 (2004), doi:10.1016/j.specom.2004.02.001
  9. Rehm, M., André, E.: Catch me if you can – exploring lying agents in social settings. In: Proceedings of the International Conference on Autonomous Agents and Multiagent Systems, pp. 937–944 (2005)
  10. Cassell, J., Bickmore, T., Campbell, L., Vilhjálmsson, H., Yan, H.: Human conversation as a system framework: Designing embodied conversational agents. In: [1], pp. 29–63
  11. Poggi, I., Pelachaud, C.: Performative facial expressions in animated faces. In: [1], pp. 154–188
  12. Berry, D.C., Butler, L., de Rosis, F., Laaksolathi, J., Pelachaud, C., Steedman, M.: Final evaluation report. Deliverable 4.6, MagiCster project (2004)
  13. Buisine, S., Abrilian, S., Martin, J.-C.: Evaluation of individual multimodal behaviour of 2D embodied agents in presentation tasks. In: [25], pp. 217–238
  14. White, M., Foster, M.E., Oberlander, J., Brown, A.: Using facial feedback to enhance turn-taking in a multimodal dialogue system. In: Proceedings of HCI International 2005 Thematic Session on Universal Access in Human-Computer Interaction (2005)
  15. Sidner, C.L., Lee, C., Kidd, C.D., Lesh, N., Rich, C.: Explorations in engagement for humans and robots. Artificial Intelligence 166(1–2), 140–164 (2005), doi:10.1016/j.artint.2005.03.005
  16. Rehm, M., André, E.: Where do they look? Gaze behaviors of multiple users interacting with an embodied conversational agent. In: Panayiotopoulos, T., Gratch, J., Aylett, R., Ballin, D., Olivier, P., Rist, T. (eds.) IVA 2005. LNCS (LNAI), vol. 3661, pp. 241–252. Springer, Heidelberg (2005), doi:10.1007/11550617_21
  17. Bickmore, T.W., Picard, R.W.: Establishing and maintaining long-term human-computer relationships. ACM Transactions on Computer-Human Interaction 12(2), 293–327 (2005), doi:10.1145/1067860.1067867
  18. de Ruyter, B., Saini, P., Markopoulos, P., van Breemen, A.: Assessing the effects of building social intelligence in a robotic interface for the home. Interacting with Computers 17(5), 522–541 (2005), doi:10.1016/j.intcom.2005.03.003
  19. Prendinger, H., Ma, C., Yingzi, J., Nakasone, A., Ishizuka, M.: Understanding the effect of life-like interface agents through users’ eye movements. In: Proceedings of the 7th International Conference on Multimodal Interfaces (ICMI 2005), pp. 108–115 (2005), doi:10.1145/1088463.1088484
  20. Prendinger, H., Mori, J., Ishizuka, M.: Using human physiology to evaluate subtle expressivity of a virtual quizmaster in a mathematical game. International Journal of Human-Computer Studies 62(2), 231–245 (2005), doi:10.1016/j.ijhcs.2004.11.009
  21. Swerts, M., Krahmer, E.: On the perception of audiovisual cues to prominence (in press)
  22. Foster, M.E.: Evaluating the impact of variation in the generation of multimodal object descriptions. Ph.D. thesis, School of Informatics, University of Edinburgh (in submission) (2007)
  23. DeCarlo, D., Stone, M., Revilla, C., Venditti, J.J.: Specifying and animating facial signals for discourse in embodied conversational agents. Computer Animation and Virtual Worlds 15(1), 27–38 (2004), doi:10.1002/cav.5
  24. Marsi, E., van Rooden, F.: Expressing uncertainty with a talking head. In: Proceedings of the Workshop on Multimodal Generation (MOG 2007) (2007)
  25. Pelachaud, C., Ruttkay, Z. (eds.): From Brows to Trust: Evaluating Embodied Conversational Agents. Springer, Heidelberg (2004), doi:10.1007/1-4020-2730-3

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Mary Ellen Foster 1
  1. Robotics and Embedded Systems Group, Department of Informatics, Technische Universität München, Boltzmannstraße 3, 85748 Garching bei München, Germany
