
Adaptive Fuzzy Watkins: A New Adaptive Approach for Eligibility Traces in Reinforcement Learning

Published in: International Journal of Fuzzy Systems

Abstract

Reinforcement learning is one of the most reliable approaches for solving a wide range of problems, and temporal-difference methods are among the strongest members of the reinforcement learning family. Their most important weakness is a slow convergence rate, and many studies have been devoted to this problem; one proposed solution is eligibility traces. Owing to the nature of off-policy methods, combining eligibility traces with them requires special attention. In Watkins's method, one of the dominant eligibility-trace methods, the traces are cut whenever an exploratory action is taken, which diminishes the benefit of eligibility traces early in the learning process. In this study, we propose a framework for combining eligibility traces with off-policy methods. The aim is to make proper use of the information gathered during the agent's exploratory actions; to this end, the decision about whether to apply the eligibility traces during exploratory actions is made by means of fuzzy adaptation. We apply this method to finding the goal state in static and dynamic grid worlds, compare our approach against state-of-the-art techniques, and show that it outperforms them in terms of both average achieved reward and convergence time.
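To make the mechanism concrete, the sketch below shows a standard tabular Watkins's Q(λ) learner on a small grid world in which the hard trace cut after an exploratory action is replaced by a soft keep factor. It is only an illustration of the idea under stated assumptions: the grid layout, reward scheme, hyperparameters, and the simple keep_factor schedule are placeholders for the fuzzy adaptation used in the paper, not the authors' implementation.

```python
# A minimal sketch, not the authors' exact algorithm: Watkins's Q(lambda) on a
# small grid world, where the usual hard cut of the eligibility traces after an
# exploratory action is replaced by a soft, schedule-dependent "keep factor".
# The grid layout, rewards, and keep_factor() rule are illustrative assumptions;
# the paper makes this decision with a fuzzy inference system instead.
import numpy as np

N = 5                                           # 5x5 grid, goal at bottom-right
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
ALPHA, GAMMA, LAM, EPSILON = 0.1, 0.95, 0.9, 0.1
EPISODES = 300

Q = np.zeros((N * N, len(ACTIONS)))             # tabular action values
rng = np.random.default_rng(0)

def step(state, action):
    """Deterministic grid-world transition with a small step cost."""
    r, c = divmod(state, N)
    dr, dc = ACTIONS[action]
    r, c = min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1)
    nxt = r * N + c
    done = (nxt == N * N - 1)
    return nxt, (1.0 if done else -0.01), done

def keep_factor(progress):
    """Stand-in for the fuzzy decision: early in learning, keep most of the
    trace after an exploratory action; later, behave like classic Watkins."""
    return max(0.0, 1.0 - progress)

for ep in range(EPISODES):
    e = np.zeros_like(Q)                        # eligibility traces
    s, done = 0, False
    while not done:
        greedy = rng.random() >= EPSILON
        a = int(np.argmax(Q[s])) if greedy else int(rng.integers(len(ACTIONS)))
        s2, reward, done = step(s, a)
        a_star = int(np.argmax(Q[s2]))
        delta = reward + GAMMA * Q[s2, a_star] * (not done) - Q[s, a]
        e[s, a] += 1.0                          # accumulating trace
        Q += ALPHA * delta * e                  # propagate the TD error
        if greedy:
            e *= GAMMA * LAM                    # normal decay after a greedy action
        else:
            # Classic Watkins would set e[:] = 0 here; the adaptive variant
            # only attenuates the traces, so exploratory experience can still
            # contribute early in learning.
            e *= GAMMA * LAM * keep_factor(ep / EPISODES)
        s = s2
```

In this sketch the keep factor shrinks linearly with training progress, so the learner behaves like Peng-style soft retention at the start and converges toward the classic Watkins cut at the end; the paper replaces this fixed schedule with a fuzzy rule base that adapts the decision online.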



Author information

Correspondence to Seyed Hossein Khasteh.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


About this article


Cite this article

Shokri, M., Khasteh, S.H. & Aminifar, A. Adaptive Fuzzy Watkins: A New Adaptive Approach for Eligibility Traces in Reinforcement Learning. Int. J. Fuzzy Syst. 21, 1443–1454 (2019). https://doi.org/10.1007/s40815-019-00633-x

