Geometry of Policy Improvement

  • Guido Montúfar
  • Johannes Rauh
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10589)

Abstract

We investigate the geometry of optimal memoryless, time-independent decision making in relation to the amount of information that the acting agent has about the state of the system. We show that the expected long-term reward, discounted or per time step, is maximized by policies that randomize among at most k actions whenever at most k world states are consistent with the agent’s observation. Moreover, we show that the expected reward per time step can be studied in terms of the expected discounted reward. Our main tool is a geometric version of the policy improvement lemma, which identifies a polyhedral cone of policy changes in which the state value function increases for all states.
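For orientation, the abstract refers to the standard discounted and average-reward objectives. The following is a minimal reminder in generic MDP notation (our notation, not necessarily the paper's), together with the classical policy improvement inequality that the paper's geometric lemma refines.

```latex
% Generic notation (an assumption of this sketch, not taken from the paper):
% a policy \pi acting in a Markov decision process with reward signal R_t.
% Discounted state value and, under suitable ergodicity assumptions, the
% expected reward per time step:
\[
  V^{\pi}_{\gamma}(s) = \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t} R_{t} \,\Big|\, S_{0}=s\Big],
  \qquad 0 \le \gamma < 1,
  \qquad
  \bar{R}^{\pi} = \lim_{T\to\infty} \tfrac{1}{T}\, \mathbb{E}_{\pi}\Big[\sum_{t=0}^{T-1} R_{t}\Big].
\]
% Classical policy improvement: if a stochastic policy \pi' satisfies
% \sum_{a} \pi'(a \mid s)\, Q^{\pi}_{\gamma}(s,a) \ge V^{\pi}_{\gamma}(s) for every state s,
% then V^{\pi'}_{\gamma}(s) \ge V^{\pi}_{\gamma}(s) for all s. The "polyhedral cone
% of policy changes" in the abstract can be read as a set of directions \pi' - \pi
% along which this improvement holds simultaneously for all states.
```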

Keywords

Partially Observable Markov Decision Process · Reinforcement learning · Memoryless stochastic policy · Policy gradient theorem

Notes

Acknowledgment

We thank Nihat Ay for support and insightful comments.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
  2. Departments of Mathematics and Statistics, UCLA, Los Angeles, USA
