Reinforcement Learning through Global Stochastic Search in N-MDPs

  • Matteo Leonetti
  • Luca Iocchi
  • Subramanian Ramamoorthy
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6912)

Abstract

Reinforcement Learning (RL) in either fully or partially observable domains usually poses a requirement on the knowledge representation in order to be sound: the underlying stochastic process must be Markovian. In many applications, including those involving interactions between multiple agents (e.g., humans and robots), sources of uncertainty affect rewards and transition dynamics in such a way that a Markovian representation would be computationally very expensive. An alternative formulation of the decision problem involves partially specified behaviors with choice points. While this reduces the complexity of the policy space that must be explored - something that is crucial for realistic autonomous agents that must bound search time - it does render the domain Non-Markovian. In this paper, we present a novel algorithm for reinforcement learning in Non-Markovian domains. Our algorithm, Stochastic Search Monte Carlo, performs a global stochastic search in policy space, shaping the distribution from which the next policy is selected by estimating an upper bound on the value of each action. We experimentally show how, in challenging domains for RL, high-level decisions in Non-Markovian processes can lead to a behavior that is at least as good as the one learned by traditional algorithms, and can be achieved with significantly fewer samples.
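The abstract's description of Stochastic Search Monte Carlo suggests the overall loop: sample a complete policy from a distribution shaped by optimistic upper bounds on each action's value at the choice points, evaluate it with a Monte-Carlo rollout, and refine the bounds from the observed episodic return. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation; the class and method names (StochasticPolicySearch, sample_policy, update), the user-supplied run_episode evaluator, and the specific UCB-style bound are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' code) of a global stochastic search
# over complete policies, with the sampling distribution shaped by optimistic
# upper bounds on per-action returns at each choice point.
import math
import random
from collections import defaultdict


class StochasticPolicySearch:
    def __init__(self, choice_points, exploration=1.0):
        # choice_points: dict mapping choice-point id -> list of admissible actions
        self.choice_points = choice_points
        self.exploration = exploration
        self.counts = defaultdict(int)     # (choice_point, action) -> #episodes used
        self.returns = defaultdict(float)  # (choice_point, action) -> sum of returns
        self.episodes = 0

    def _upper_bound(self, cp, action):
        """Optimistic (UCB-style) estimate of the value of `action` at `cp`."""
        n = self.counts[(cp, action)]
        if n == 0:
            return float("inf")  # untried actions are maximally optimistic
        mean = self.returns[(cp, action)] / n
        return mean + self.exploration * math.sqrt(math.log(self.episodes + 1) / n)

    def sample_policy(self):
        """Draw a complete deterministic policy, one action per choice point,
        with probabilities proportional to the (positively shifted) bounds."""
        policy = {}
        for cp, actions in self.choice_points.items():
            bounds = [self._upper_bound(cp, a) for a in actions]
            if any(math.isinf(b) for b in bounds):
                # Prefer untried actions, chosen uniformly at random.
                untried = [a for a, b in zip(actions, bounds) if math.isinf(b)]
                policy[cp] = random.choice(untried)
            else:
                lo = min(bounds)
                weights = [b - lo + 1e-6 for b in bounds]
                policy[cp] = random.choices(actions, weights=weights)[0]
        return policy

    def update(self, policy, episode_return):
        """Credit the whole episodic return to every action the policy used."""
        self.episodes += 1
        for cp, action in policy.items():
            self.counts[(cp, action)] += 1
            self.returns[(cp, action)] += episode_return


# Hypothetical usage, assuming a domain-specific evaluator run_episode(policy) -> return:
#   searcher = StochasticPolicySearch({"cp1": ["a", "b"], "cp2": ["x", "y", "z"]})
#   for _ in range(1000):
#       pi = searcher.sample_policy()
#       searcher.update(pi, run_episode(pi))
```

Because the process at the choice points is Non-Markovian, the sketch evaluates whole policies by their episodic return rather than bootstrapping per-state values, which mirrors the global-search flavour described in the abstract.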

Keywords

Reinforcement Learning 


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Matteo Leonetti (1)
  • Luca Iocchi (1)
  • Subramanian Ramamoorthy (2)

  1. Department of Computer and System Sciences, Sapienza University of Rome, Rome, Italy
  2. School of Informatics, University of Edinburgh, Edinburgh, United Kingdom