An Online Kernel-Based Clustering Approach for Value Function Approximation
Value function approximation is a critical task in solving Markov decision processes and in accurately modeling reinforcement learning agents. A significant issue is how to construct efficient feature spaces from samples collected from the environment in order to obtain an optimal policy. This study addresses the challenge by proposing an online kernel-based clustering approach that builds appropriate basis functions during the learning process. The method uses a kernel function over state-action pairs, applied to the samples sequentially generated by the agent. At each time step, the procedure either adds a new cluster or adjusts the parameters of the winning cluster. The value function is represented as a linear combination of the constructed basis functions, and the weights are optimized within a temporal-difference framework so as to minimize the Bellman approximation error. The proposed method is evaluated on a number of well-known simulated environments.
Keywords: Function Approximation · Markov Decision Process · Policy Iteration · Stochastic Gradient Descent · Eligibility Trace
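To make the described pipeline concrete, the sketch below illustrates one plausible reading of the abstract: a Gaussian kernel over concatenated state-action vectors, an online clustering rule that either adds a new cluster or nudges the winning cluster toward the current sample, and a TD(λ) weight update with eligibility traces. All class and parameter names (`OnlineKernelTD`, `sigma`, `nu`, `eta`, and so on), the novelty threshold rule, and the specific update equations are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of online kernel-based clustering for value function
# approximation with TD(lambda) weight updates. Names, thresholds, and update
# rules are assumptions made for illustration, not the paper's method.
import numpy as np

class OnlineKernelTD:
    def __init__(self, sigma=0.5, nu=0.3, alpha=0.1, gamma=0.99, lam=0.8, eta=0.05):
        self.sigma = sigma    # Gaussian kernel width (assumed kernel choice)
        self.nu = nu          # novelty threshold for spawning a new cluster
        self.alpha = alpha    # TD step size (stochastic gradient descent)
        self.gamma = gamma    # discount factor
        self.lam = lam        # eligibility-trace decay
        self.eta = eta        # learning rate for moving the winning cluster
        self.centers = []     # cluster centers in state-action space
        self.w = np.zeros(0)  # linear weights, one per basis function
        self.z = np.zeros(0)  # eligibility traces, one per basis function

    def _kernel(self, x, c):
        # Gaussian kernel on a concatenated (state, action) vector.
        return np.exp(-np.sum((x - c) ** 2) / (2.0 * self.sigma ** 2))

    def features(self, x):
        # Kernel activations of all current clusters form the basis functions.
        return np.array([self._kernel(x, c) for c in self.centers])

    def value(self, x):
        # Q(s, a) as a linear combination of the constructed basis functions.
        return float(self.features(x) @ self.w) if self.centers else 0.0

    def _cluster_step(self, x):
        # Online clustering: add a new cluster if the sample is novel,
        # otherwise adjust the winning cluster's center toward the sample.
        sims = self.features(x)
        if len(self.centers) == 0 or sims.max() < self.nu:
            self.centers.append(x.copy())
            self.w = np.append(self.w, 0.0)
            self.z = np.append(self.z, 0.0)
        else:
            k = int(np.argmax(sims))
            self.centers[k] += self.eta * (x - self.centers[k])

    def update(self, x, reward, x_next, done):
        # One learning step: grow/adapt the basis, then move the weights
        # along the eligibility traces to reduce the one-step Bellman error.
        self._cluster_step(x)
        phi = self.features(x)
        target = reward + (0.0 if done else self.gamma * self.value(x_next))
        delta = target - float(phi @ self.w)       # TD error
        self.z = self.gamma * self.lam * self.z + phi
        self.w += self.alpha * delta * self.z
```

In use, `update` would be called once per transition with `x` and `x_next` as concatenated state-action vectors, e.g. inside a standard policy-iteration loop; the basis set grows only while novel regions of the state-action space are visited, which keeps the linear architecture compact.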