An Empirical Analysis of the Impact of Prioritised Sweeping on the DynaQ’s Performance

  • Marek Grześ
  • Daniel Kudenko
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5097)

Abstract

Reinforcement learning tackles the problem of how to act optimally given observations of the current world state. Agents that learn from reinforcements execute actions in an environment and receive feedback (reward) that can be used to guide the learning process. The distinguishing feature of reinforcement learning is that the model of the environment (i.e., the effects of actions and the reward function) is not known in advance. Model-based approaches form a class of reinforcement learning algorithms that learn a model of the environment dynamics; this model can then be used by the learning agent to simulate interactions with the environment. DynaQ and its extension with prioritised sweeping are the most popular examples of model-based approaches. This paper shows that, contrary to common belief, DynaQ with prioritised sweeping may perform worse than pure DynaQ in domains where the agent can easily be misled by a sub-optimal solution.
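
To make the mechanisms described in the abstract concrete, the following minimal Python sketch shows a tabular Dyna-Q loop in which a flag switches between the two variants the paper compares: uniform replay of stored model transitions (pure DynaQ) and replay driven by a priority queue ordered by the magnitude of the Bellman error (prioritised sweeping). The environment interface env_step, the parameter values, and the omission of predecessor propagation are illustrative assumptions, not details taken from the paper.

    import random
    import heapq
    from collections import defaultdict


    def dyna_q(env_step, states, actions, episodes=100, planning_steps=10,
               alpha=0.1, gamma=0.95, epsilon=0.1, theta=1e-4, prioritised=False):
        """Tabular Dyna-Q with optional prioritised sweeping over planning updates."""
        Q = defaultdict(float)   # Q[(state, action)] -> value estimate
        model = {}               # model[(state, action)] -> (reward, next_state, done)
        pqueue = []              # max-priority queue (priorities stored negated)
        push_count = 0           # tie-breaker so heap entries never compare states

        def policy(s):
            # epsilon-greedy action selection
            if random.random() < epsilon:
                return random.choice(actions)
            return max(actions, key=lambda a: Q[(s, a)])

        for _ in range(episodes):
            s = random.choice(states)            # illustrative: random start state
            done = False
            while not done:
                a = policy(s)
                r, s2, done = env_step(s, a)     # one real step in the environment

                # Direct reinforcement learning update from the real transition.
                target = r + gamma * max(Q[(s2, b)] for b in actions) * (not done)
                delta = target - Q[(s, a)]
                Q[(s, a)] += alpha * delta

                # Record the transition in the (deterministic, tabular) model.
                model[(s, a)] = (r, s2, done)

                if prioritised and abs(delta) > theta:
                    heapq.heappush(pqueue, (-abs(delta), push_count, (s, a)))
                    push_count += 1

                # Planning: replay simulated experience drawn from the model.
                for _ in range(planning_steps):
                    if prioritised:
                        if not pqueue:
                            break
                        _, _, (ps, pa) = heapq.heappop(pqueue)
                    else:
                        ps, pa = random.choice(list(model.keys()))
                    pr, ps2, pdone = model[(ps, pa)]
                    ptarget = pr + gamma * max(Q[(ps2, b)] for b in actions) * (not pdone)
                    Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
                    # Full prioritised sweeping would now push predecessors of ps
                    # whose priority exceeds theta; omitted to keep the sketch short.
                s = s2
        return Q

Calling dyna_q(..., prioritised=False) replays model transitions uniformly, while prioritised=True concentrates planning updates where the current error is largest; as the abstract notes, such concentration can hurt in domains where the agent is easily misled by a sub-optimal solution.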

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Marek Grześ 1
  • Daniel Kudenko 1
  1. Department of Computer Science, University of York, York, UK
