
Development and Evaluation of Spoken Dialog Systems with One or Two Agents through Two Domains

  • Yuki Todo
  • Ryota Nishimura
  • Kazumasa Yamamoto
  • Seiichi Nakagawa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8082)

Abstract

Almost all current spoken dialog systems treat dialog as an interaction in which a single user talks to a single agent. In contrast, we investigate a multi-party dialog system in which a single user talks with two agents. We developed a three-person (one user and two agents) dialog system and a two-person (one user and one agent) dialog system for the same dialog tasks, namely “Which do you prefer, udon or ramen (Japanese noodles or Chinese noodles)?” and “Which would you rather travel to, Hokkaido or Okinawa (a snowy region or a tropical region)?”, and compared them with respect to user behavior and satisfaction. According to the experimental results, the three-person dialog system produced livelier conversation, and users could talk with the agents in a more chat-like manner.

Keywords

spoken dialog system · multi-party dialogue · two agents · chat



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Yuki Todo (1)
  • Ryota Nishimura (2)
  • Kazumasa Yamamoto (1)
  • Seiichi Nakagawa (1)

  1. Department of Computer Sciences and Engineering, Toyohashi University of Technology, Japan
  2. Nagoya Institute of Technology, Japan
