
Using Control Theory for Analysis of Reinforcement Learning and Optimal Policy Properties in Grid-World Problems

Conference paper

Emerging Intelligent Computing Technology and Applications. With Aspects of Artificial Intelligence (ICIC 2009)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5755)


Abstract

The Markov Decision Process (MDP) has numerous applications in science, engineering, economics, and management. Many decision processes have the Markov property and can be modeled as MDPs. Reinforcement Learning (RL) is an approach to solving MDPs, and RL methods are based on Dynamic Programming (DP) algorithms such as Policy Evaluation, Policy Iteration, and Value Iteration. In this paper, the policy evaluation algorithm is represented in the form of a discrete-time dynamical system. Hence, using discrete-time control methods, the behavior of the agent and the properties of various policies can be analyzed. The general case of grid-world problems is addressed, and some important results for this class of problems are obtained and stated as a theorem. For example, the system equivalent to an optimal policy for a grid-world problem is dead-beat.
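The abstract's central construction can be illustrated numerically: for a fixed policy π, one synchronous policy-evaluation sweep is the linear discrete-time system V_{k+1} = R_π + γ P_π V_k, whose system matrix is γ P_π. The sketch below is not the authors' code; the three-state chain, the per-step reward of -1, and γ = 1 are illustrative assumptions. It shows that when a deterministic optimal policy drives every state toward a terminal state, the system matrix is nilpotent, so the iteration is dead-beat and reaches its fixed point in finitely many sweeps.

```python
import numpy as np

# Minimal sketch (assumed toy example, not the paper's code): one
# synchronous policy-evaluation sweep for a fixed policy pi is the
# linear discrete-time system
#   V_{k+1} = R_pi + gamma * P_pi @ V_k
# with system matrix gamma * P_pi.

gamma = 1.0  # illustrative choice; any gamma in (0, 1] behaves the same here

# Hypothetical 1-D grid with three non-terminal states 0, 1, 2 and a
# terminal state to the right of state 2. The optimal policy always
# moves right; transitions into the terminal state leave the matrix,
# so row 2 of P_pi is all zeros.
P_pi = np.array([[0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [0.0, 0.0, 0.0]])
R_pi = np.full(3, -1.0)  # reward of -1 per step (shortest-path task)

A = gamma * P_pi
# A is nilpotent (A @ A @ A is the zero matrix), so all eigenvalues are
# at the origin: the discrete-time dead-beat condition.
print("eigenvalues of system matrix:", np.linalg.eigvals(A))

V = np.zeros(3)
for k in range(1, 6):
    V = R_pi + A @ V
    print(f"sweep {k}: V = {V}")
# V reaches the exact fixed point [-3, -2, -1] at sweep 3 (the
# nilpotency index of A) and stays there.
```

All eigenvalues of the system matrix sit at the origin, which is exactly the dead-beat property the abstract attributes to optimal grid-world policies: the value function settles in a finite number of sweeps rather than converging only asymptotically.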





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kalami Heris, S.M., Naghibi Sistani, M.B., Pariz, N. (2009). Using Control Theory for Analysis of Reinforcement Learning and Optimal Policy Properties in Grid-World Problems. In: Huang, D.S., Jo, K.H., Lee, H.H., Kang, H.J., Bevilacqua, V. (eds) Emerging Intelligent Computing Technology and Applications. With Aspects of Artificial Intelligence. ICIC 2009. Lecture Notes in Computer Science (LNAI), vol 5755. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04020-7_30


  • DOI: https://doi.org/10.1007/978-3-642-04020-7_30

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04019-1

  • Online ISBN: 978-3-642-04020-7

  • eBook Packages: Computer Science (R0)
