Fast Reinforcement Learning of Dialogue Policies Using Stable Function Approximation

  • Matthias Denecke
  • Kohji Dohsaka
  • Mikio Nakano
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3248)

Abstract

We propose a method to speed up reinforcement learning of policies for spoken dialogue systems. This is achieved by combining a coarse-grained abstract representation of states and actions with learning restricted to frequently visited states. The values of unsampled states are approximated by linear interpolation between known states. Experiments show that the proposed method effectively optimizes dialogue strategies for frequently visited dialogue states.
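
To make the interpolation step concrete, the sketch below is a minimal illustration rather than the authors' implementation: it assumes each abstract dialogue state can be mapped to a single numeric coordinate (for example, the fraction of filled slots), that values are learned only for frequently visited states, and that every other state is assigned a value by piecewise-linear interpolation between its sampled neighbours. Such an interpolator is an averager in Gordon's sense, which is what keeps the value updates stable. The names interpolate_value, sampled_states, and sampled_values are hypothetical.

    import numpy as np

    def interpolate_value(state, sampled_states, sampled_values):
        """Approximate the value of an unsampled abstract state.

        sampled_states : sorted 1-D array of coordinates of frequently
                         visited states whose values were learned directly
        sampled_values : the learned values of those states
        States outside the sampled range are clamped to the nearest endpoint.
        """
        # np.interp performs piecewise-linear interpolation, i.e. a convex
        # combination of known values, so the approximation never overshoots.
        return np.interp(state, sampled_states, sampled_values)

    # Example: values learned for three frequently visited abstract states,
    # indexed here by the fraction of task slots already filled.
    sampled_states = np.array([0.0, 0.5, 1.0])
    sampled_values = np.array([-1.0, 0.2, 1.0])

    print(interpolate_value(0.25, sampled_states, sampled_values))  # -0.4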

Keywords

Reinforcement Learning · Action Space · Reward Function · Dialogue System · Dialogue Strategy

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Matthias Denecke
  • Kohji Dohsaka
  • Mikio Nakano

  1. Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa, Japan
