Using Control Theory for Analysis of Reinforcement Learning and Optimal Policy Properties in Grid-World Problems

  • S. Mostapha Kalami Heris
  • Mohammad-Bagher Naghibi Sistani
  • Naser Pariz
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5755)

Abstract

Markov Decision Processes (MDPs) have numerous applications in science, engineering, economics, and management; many decision processes exhibit the Markov property and can be modeled as MDPs. Reinforcement Learning (RL) is an approach to solving MDPs, and RL methods build on Dynamic Programming (DP) algorithms such as Policy Evaluation, Policy Iteration, and Value Iteration. In this paper, the policy evaluation algorithm is represented as a discrete-time dynamical system, so that discrete-time control methods can be used to analyze the behavior of the agent and the properties of various policies. The general case of grid-world problems is addressed, and some important results for this class of problems are stated as a theorem. For example, the equivalent system of an optimal policy for a grid-world problem is dead-beat.
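To make the dead-beat claim concrete, here is a minimal numerical sketch. It is not taken from the paper: the one-dimensional four-state grid, the discount factor, and all variable names are illustrative assumptions. It writes iterative policy evaluation as the linear discrete-time system V_{k+1} = R_π + γ P_π V_k; for a deterministic optimal policy that drives every state toward an absorbing goal, the restriction of P_π to the non-terminal states is nilpotent, so every eigenvalue of the system matrix γ P_π is zero and the iteration reaches its fixed point in at most n steps, regardless of the initial value function.

```python
# Minimal sketch (illustrative, not the paper's code): policy evaluation
# as a linear discrete-time system, V_{k+1} = R_pi + gamma * P_pi @ V_k.
import numpy as np

gamma = 0.9                       # discount factor (assumed value)
n = 4                             # non-terminal states 0..3; the goal lies past state 3

# Optimal policy: always move right. A transition into the absorbing goal
# leaves the non-terminal state space, so that row of P_pi is zero.
P_pi = np.zeros((n, n))
for s in range(n - 1):
    P_pi[s, s + 1] = 1.0          # deterministic step toward the goal

R_pi = np.full(n, -1.0)           # reward of -1 per step until the goal is reached

# P_pi is nilpotent (P_pi^n = 0), so every eigenvalue of gamma * P_pi is
# zero: the equivalent discrete-time system is dead-beat.
assert np.allclose(np.linalg.matrix_power(P_pi, n), 0)

# Iterative policy evaluation from an arbitrary initial value function.
V = np.random.randn(n)
for _ in range(n):
    V = R_pi + gamma * P_pi @ V

# After n steps the iterate equals the exact fixed point
# V_pi = (I - gamma * P_pi)^(-1) R_pi, independent of the starting point.
V_exact = np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)
assert np.allclose(V, V_exact)
print(V)   # [-3.439 -2.71  -1.9   -1.  ]
```

Intuitively, a stochastic policy or stochastic transitions would leave P_π merely substochastic rather than nilpotent, so finite-step (dead-beat) convergence of this kind is tied to deterministic, goal-directed policies; this is an interpretation offered here, not a restatement of the paper's proof.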

Keywords

Dynamic Programming · Discrete-Time Control Systems · Markov Decision Process · Reinforcement Learning · Stochastic Control

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • S. Mostapha Kalami Heris¹
  • Mohammad-Bagher Naghibi Sistani²
  • Naser Pariz²
  1. Control Engineering Department, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
  2. Electrical Engineering Department, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
