Enhanced Temporal Difference Learning Using Compiled Eligibility Traces

  • Peter Vamplew
  • Robert Ollington
  • Mark Hepburn
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4304)

Abstract

Eligibility traces, which maintain a record of recently experienced states, have been shown to substantially improve the convergence speed of temporal difference learning algorithms. This paper presents compiled traces, an extension of conventional eligibility traces that retains additional information about the agent’s experience of the environment. Empirical results show that compiled traces outperform conventional traces on policy evaluation tasks using a tabular representation of state values.
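For context, the sketch below shows the conventional baseline that compiled traces extend: tabular TD(λ) policy evaluation with accumulating eligibility traces. It is a minimal illustration only; the environment interface (env.reset/env.step returning integer state indices), the policy callable, and all parameter values are assumptions for the example, and the compiled-trace extension described in the paper is not reproduced here.

```python
import numpy as np

def td_lambda_evaluate(env, policy, n_states, n_episodes=100,
                       alpha=0.1, gamma=1.0, lam=0.9):
    """Tabular TD(lambda) policy evaluation with accumulating traces.

    Assumes states are integer indices in [0, n_states), and that
    env.step(a) returns (next_state, reward, done). These interface
    details are illustrative, not taken from the paper.
    """
    V = np.zeros(n_states)            # tabular state-value estimates
    for _ in range(n_episodes):
        e = np.zeros(n_states)        # eligibility trace, reset each episode
        s = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)
            # TD error for the current transition
            delta = r + (0.0 if done else gamma * V[s_next]) - V[s]
            # Accumulating trace: recently visited states stay "eligible"
            e[s] += 1.0
            # Credit the TD error to every state in proportion to its trace
            V += alpha * delta * e
            # Traces decay by gamma * lambda per step
            e *= gamma * lam
            s = s_next
    return V
```

The trace vector e is what lets a single TD error update all recently visited states at once; the paper's compiled traces augment this record with further information about past experience.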

Keywords

Reinforcement Learning · Previous Episode · Policy Iteration · Current Episode · Eligibility Trace

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Peter Vamplew¹
  • Robert Ollington²
  • Mark Hepburn²
  1. School of Information Technology and Mathematical Sciences, University of Ballarat, Ballarat, Australia
  2. School of Computing, University of Tasmania, Hobart, Australia
