Abstract
We propose a new approach to reinforcement learning in problems with continuous actions. Actions are sampled by means of a diffusion tree, which generates samples in the continuous action space and organizes them in a hierarchical tree structure. In this tree, each subtree holds a subset of the action samples and thus represents a subregion of the action space. In addition, the root of each subtree stores the expected long-term return of the samples it contains. The diffusion tree therefore integrates both a sampling technique and a hierarchical representation of the acquired knowledge. New action samples are generated by recursively walking down the tree: at each branching point, the information stored in the roots of the subtrees directs the search and concentrates new samples in promising regions. This gives control over the sample distribution and allows informed sampling based on the acquired knowledge, e.g., the expected return of a region of the action space. In simulation experiments, we show conceptually how this can be used to explore the state-action space efficiently.
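To make the mechanism described above concrete, the following is a minimal sketch of such a tree, assuming a one-dimensional action space. The Node class, the softmax-style descent rule, and the Gaussian diffusion step at the leaf are illustrative assumptions, not the authors' exact construction; it is meant only to show how per-subtree return estimates can bias sampling toward promising regions.

```python
import math
import random

class Node:
    """One subtree of the diffusion tree. Stores an action sample and an
    estimate of the expected long-term return of its subtree's samples."""
    def __init__(self, action, value=0.0, parent=None):
        self.action = action        # action sample held at this node
        self.value = value          # expected return of this subtree
        self.count = 1              # number of samples in this subtree
        self.parent = parent
        self.children = []

def sample_action(root, step=0.5, temperature=1.0):
    """Walk down the tree; at each branching point choose a subtree with
    probability proportional to exp(value / temperature), so the walk is
    biased toward regions with high expected return. At a leaf, diffuse:
    perturb the stored action with Gaussian noise and attach the new
    sample as a child."""
    node = root
    while node.children:
        weights = [math.exp(c.value / temperature) for c in node.children]
        node = random.choices(node.children, weights=weights)[0]
    new_action = random.gauss(node.action, step / math.sqrt(node.count))
    child = Node(new_action, value=node.value, parent=node)
    node.children.append(child)
    return child

def update_return(leaf, observed_return, lr=0.1):
    """After executing the action, propagate the observed return up the
    path, so every subtree root keeps an estimate of the expected return
    of the samples it contains (a simple running average here)."""
    node = leaf
    while node is not None:
        node.value += lr * (observed_return - node.value)
        node.count += 1
        node = node.parent

# Hypothetical usage: grow the tree one sample per episode.
root = Node(action=0.0)
leaf = sample_action(root)
# ... execute leaf.action in the environment, observe a return G ...
update_return(leaf, observed_return=1.0)
```

In this sketch, lowering the temperature makes sampling greedier with respect to the stored return estimates, while raising it recovers broad, near-uniform exploration of the action space.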
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Vollmer, C., Schaffernicht, E., Gross, H.-M. (2010). Exploring Continuous Action Spaces with Diffusion Trees for Reinforcement Learning. In: Diamantaras, K., Duch, W., Iliadis, L.S. (eds.) Artificial Neural Networks – ICANN 2010. Lecture Notes in Computer Science, vol. 6353. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15822-3_24
DOI: https://doi.org/10.1007/978-3-642-15822-3_24
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-15821-6
Online ISBN: 978-3-642-15822-3