Q Learning Based on Self-organizing Fuzzy Radial Basis Function Network

  • Xuesong Wang
  • Yuhu Cheng
  • Wei Sun
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)


A fuzzy Q-learning scheme based on a self-organizing fuzzy radial basis function (FRBF) network is proposed in this paper to solve the ‘curse of dimensionality’ problem caused by state-space generalization. The FRBF network represents continuous actions and the corresponding Q-values. An interpolation technique is adopted to represent the appropriate utility value for the winning local action of every fuzzy rule. The FRBF network organizes its neurons by itself: the structure- and parameter-learning methods, based on new neuron-adding and neuron-merging techniques and a gradient descent algorithm, are simple and effective, yielding high accuracy with a compact network structure. Simulation results on the balancing control of an inverted pendulum illustrate the performance and applicability of the proposed fuzzy Q-learning scheme to real-world problems with continuous states and continuous actions.
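The scheme described in the abstract can be sketched in simplified form. The class below is an illustrative reconstruction, not the authors' implementation: it assumes Gaussian membership functions with a fixed width `sigma`, a raw-membership threshold for adding rules, a center-distance criterion for merging them, and a discrete set of local actions; the names `add_thresh` and `merge_dist` are hypothetical.

```python
import numpy as np

class FRBFQ:
    """Minimal sketch of fuzzy Q-learning over a self-organizing Gaussian
    RBF network (simplified assumptions; see lead-in)."""

    def __init__(self, n_actions, sigma=0.5, add_thresh=0.3,
                 merge_dist=0.05, alpha=0.1, gamma=0.95):
        self.n_actions = n_actions
        self.sigma, self.add_thresh, self.merge_dist = sigma, add_thresh, merge_dist
        self.alpha, self.gamma = alpha, gamma
        self.centers = []   # rule centers (state prototypes)
        self.q = []         # per-rule utility of each local action

    def _raw_mu(self, s):
        # Gaussian firing strength of every rule for state s
        c = np.asarray(self.centers)
        return np.exp(-((c - s) ** 2).sum(axis=1) / (2 * self.sigma ** 2))

    def organize(self, s):
        # structure learning: add a rule when the state is poorly covered
        s = np.asarray(s, dtype=float)
        if not self.centers or self._raw_mu(s).max() < self.add_thresh:
            self.centers.append(s.copy())
            self.q.append(np.zeros(self.n_actions))
        # merge rules whose centers have drifted close together
        i = 0
        while i < len(self.centers) - 1:
            j = i + 1
            while j < len(self.centers):
                if np.linalg.norm(self.centers[i] - self.centers[j]) < self.merge_dist:
                    self.centers[i] = 0.5 * (self.centers[i] + self.centers[j])
                    self.q[i] = 0.5 * (self.q[i] + self.q[j])
                    del self.centers[j], self.q[j]
                else:
                    j += 1
            i += 1

    def q_value(self, s, a):
        # fuzzy inference: Q(s, a) is the normalized-firing-strength-weighted
        # sum of the rules' local utilities for action a
        mu = self._raw_mu(np.asarray(s, dtype=float))
        w = mu / (mu.sum() + 1e-12)
        return float(sum(w[i] * self.q[i][a] for i in range(len(self.q))))

    def update(self, s, a, r, s_next):
        # parameter learning: gradient-descent step on the squared TD error;
        # dQ(s, a)/dq_i[a] is the normalized firing strength w_i
        s = np.asarray(s, dtype=float)
        mu = self._raw_mu(s)
        w = mu / (mu.sum() + 1e-12)
        target = r + self.gamma * max(self.q_value(s_next, b)
                                      for b in range(self.n_actions))
        delta = target - self.q_value(s, a)
        for i in range(len(self.q)):
            self.q[i][a] += self.alpha * delta * w[i]
```

In this sketch `organize` performs the structure learning (adding a rule whenever no existing rule fires strongly enough, and merging near-duplicate rules), while `update` performs the parameter learning by distributing the temporal-difference error over the rules in proportion to their firing strengths.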


Keywords: Membership Function · Fuzzy Rule · Fuzzy Inference System · Inverted Pendulum · Continuous Action




Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Xuesong Wang¹
  • Yuhu Cheng¹
  • Wei Sun¹

  1. School of Information and Electrical Engineering, China University of Mining and Technology, Xuzhou, P.R. China
