
Comparing Modes of Information Presentation: Text versus ECA and Single versus Two ECAs

  • Svetlana Stoyanchev
  • Paul Piwek
  • Helmut Prendinger
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6895)

Abstract

In this short paper, we evaluate the prospects of automatic dialogue script generation from text for presentation by a team of Embodied Conversational Agents (ECAs). We describe an experiment comparing user perception and preference between plain-text and video ECA presentation modes, and between monologue and dialogue presentation styles. Our results show that most users are not indifferent to the presentation mode, and that users' preferences are guided by their perceived understanding and enjoyment of the presentation.
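To make the idea of dialogue script generation concrete, the sketch below shows, in Python, how a pair of monologue sentences might be recast as a short two-agent exchange for presentation by ECAs. This is a minimal illustrative sketch only: the rule, the agent roles, the function name, and the example sentences are assumptions for illustration and are not taken from the system evaluated in the paper.

```python
# Illustrative sketch: a toy rule that turns two related monologue clauses
# into a scripted exchange between two presentation agents. All names and
# content here are hypothetical, chosen only to illustrate the concept.

def contrast_to_dialogue(first_clause: str, second_clause: str) -> list[tuple[str, str]]:
    """Map a pair of contrasting monologue clauses onto a two-agent script.

    One agent states the first clause, the other asks a clarifying question,
    and the first agent answers with the second clause, so the same content
    is presented as expository dialogue rather than monologue.
    """
    return [
        ("Presenter", first_clause),
        ("Companion", "But is that always the case?"),
        ("Presenter", second_clause),
    ]

if __name__ == "__main__":
    # Example input clauses are invented for illustration.
    script = contrast_to_dialogue(
        "Regular exercise lowers the risk of heart disease.",
        "However, overtraining can increase the risk of injury.",
    )
    for speaker, line in script:
        print(f"{speaker}: {line}")
```

A dialogue script of this form could then be rendered either as plain text or as a video of two ECAs speaking the turns, which is the kind of contrast the experiment described above examines.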

Keywords

Presentation Mode · Conversational Agent · Text Presentation · Natural Language Generation · Presentation Style



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Svetlana Stoyanchev (1)
  • Paul Piwek (1)
  • Helmut Prendinger (2)
  1. NLG Group, Centre for Research in Computing, The Open University, Milton Keynes, UK
  2. National Institute of Informatics, Chiyoda-ku, Japan
