Incremental Rule Base Creation with Fuzzy Rule Interpolation-Based Q-Learning

  • Dávid Vincze
  • Szilveszter Kovács
Part of the Studies in Computational Intelligence book series (SCI, volume 313)


Reinforcement Learning (RL) is a widely known topic in computational intelligence. In the RL concept, the problem to be solved is encoded in the feedback of the environment, called rewards. Using these rewards the system can learn which action is the best choice in a given state. One of the most frequently used RL methods is Q-learning, which was originally introduced for discrete states and actions. By applying fuzzy reasoning, the method can be adapted to continuous environments; this variant is called Fuzzy Q-learning. An extension of Fuzzy Q-learning capable of handling sparse fuzzy rule bases has already been introduced by the authors. The latter applies a Fuzzy Rule Interpolation (FRI) method as the reasoning method used with Q-learning, and is called FRIQ-learning. The main goal of this paper is to introduce a method which can construct the requested FRI fuzzy model from scratch in a reduced size. The reduction is achieved by the incremental creation of an intentionally sparse fuzzy rule base. Moreover, an application example (a cart-pole problem simulation) demonstrates the promising results of the proposed rule base reduction method.
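The idea sketched in the abstract can be illustrated with a toy agent: a sparse rule base stores (state, action) → Q rules, Q-values at unobserved points are interpolated from the stored rules, and a new rule is inserted only where interpolation misses the Q-learning target. This is an illustrative sketch, not the authors' FRIQ-learning implementation; the class name `SparseQ`, the insertion threshold, and the use of Shepard (inverse-distance-weighted) interpolation as the FRI stand-in are assumptions made for the example.

```python
import math

class SparseQ:
    """Toy sketch of incremental, interpolation-based Q-learning
    over a sparse rule base (illustrative only)."""

    def __init__(self, alpha=0.5, gamma=0.9, threshold=0.1):
        self.rules = []            # list of ((state..., action), q) pairs
        self.alpha = alpha         # learning rate
        self.gamma = gamma         # discount factor
        self.threshold = threshold # rule-insertion threshold (arbitrary)

    def value(self, state, action):
        """Shepard-style inverse-distance-weighted interpolation of Q."""
        point = (*state, action)
        num = den = 0.0
        for pt, q in self.rules:
            d = math.dist(point, pt)
            if d == 0.0:
                return q           # exact rule hit, no interpolation needed
            w = 1.0 / d ** 2
            num += w * q
            den += w
        return num / den if den else 0.0

    def update(self, state, action, reward, next_state, actions):
        # Standard Q-learning target, computed on interpolated values.
        best_next = max(self.value(next_state, a) for a in actions)
        target = reward + self.gamma * best_next
        old = self.value(state, action)
        delta = target - old
        # Incremental rule base creation: insert a new rule only where
        # the interpolated value deviates notably from the target.
        if abs(delta) > self.threshold:
            self.rules.append(((*state, action), old + self.alpha * delta))
```

Because rules are added only where interpolation fails, the rule base stays intentionally sparse; the FRI reasoning fills the gaps between rules.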


Keywords: reinforcement learning, fuzzy Q-learning, fuzzy rule interpolation, fuzzy rule base reduction




  1. Appl, M.: Model-based Reinforcement Learning in Continuous Environments. Ph.D. thesis, Technical University of München, München, Germany, Verlag im Internet (2000)
  2. Baranyi, P., Kóczy, L.T., Gedeon, T.D.: A Generalized Concept for Fuzzy Rule Interpolation. IEEE Trans. on Fuzzy Systems 12(6), 820–837 (2004)
  3. Bellman, R.E.: Dynamic Programming. Princeton University Press, Princeton (1957)
  4. Berenji, H.R.: Fuzzy Q-Learning for Generalization of Reinforcement Learning. In: Proc. of the 5th IEEE International Conference on Fuzzy Systems, pp. 2208–2214 (1996)
  5. Bonarini, A.: Delayed Reinforcement, Fuzzy Q-Learning and Fuzzy Logic Controllers. In: Herrera, F., Verdegay, J.L. (eds.) Genetic Algorithms and Soft Computing, Studies in Fuzziness, 8, pp. 447–466. Physica-Verlag, Berlin (1996)
  6. Horiuchi, T., Fujino, A., Katai, O., Sawaragi, T.: Fuzzy Interpolation-based Q-learning with Continuous States and Actions. In: Proc. of the 5th IEEE International Conference on Fuzzy Systems, vol. 1, pp. 594–600 (1996)
  7. Johanyák, Z.C.: Sparse Fuzzy Model Identification Matlab Toolbox – RuleMaker Toolbox. In: IEEE 6th International Conference on Computational Cybernetics, Stará Lesná, Slovakia, November 27-29, pp. 69–74 (2008)
  8. Johanyák, Z.C., Tikk, D., Kovács, S., Wong, K.W.: Fuzzy Rule Interpolation Matlab Toolbox – FRI Toolbox. In: Proc. of the IEEE World Congress on Computational Intelligence (WCCI 2006), 15th Int. Conf. on Fuzzy Systems (FUZZ-IEEE 2006), Vancouver, BC, Canada, July 16-21, pp. 1427–1433. Omnipress (2006)
  9. Klawonn, F.: Fuzzy Sets and Vague Environments. Fuzzy Sets and Systems 66, 207–221 (1994)
  10. Kovács, S., Kóczy, L.T.: Approximate Fuzzy Reasoning Based on Interpolation in the Vague Environment of the Fuzzy Rule Base as a Practical Alternative of the Classical CRI. In: Proceedings of the 7th International Fuzzy Systems Association World Congress, Prague, Czech Republic, pp. 144–149 (1997)
  11. Kovács, S.: Extending the Fuzzy Rule Interpolation FIVE by Fuzzy Observation. In: Reusch, B. (ed.) Advances in Soft Computing, Computational Intelligence, Theory and Applications, pp. 485–497. Springer, Germany (2006)
  12. Kovács, S.: New Aspects of Interpolative Reasoning. In: Proceedings of the 6th International Conference on Information Processing and Management of Uncertainty in Knowledge-based Systems, Granada, Spain, pp. 477–482 (1996)
  13. Kovács, S.: SVD Reduction in Continuous Environment Reinforcement Learning. In: Reusch, B. (ed.) Fuzzy Days 2001. LNCS, vol. 2206, pp. 719–738. Springer, Heidelberg (2001)
  14. Kovács, S., Kóczy, L.T.: The Use of the Concept of Vague Environment in Approximate Fuzzy Reasoning. In: Fuzzy Set Theory and Applications, Tatra Mountains Mathematical Publications, Mathematical Institute Slovak Academy of Sciences, vol. 12, pp. 169–181. Bratislava, Slovak Republic (1997)
  15. Krizsán, Z., Kovács, S.: Gradient-based Parameter Optimisation of FRI FIVE. In: Proceedings of the 9th International Symposium of Hungarian Researchers on Computational Intelligence and Informatics, Budapest, Hungary, November 6-8, pp. 531–538 (2008) ISBN 978-963-7154-82-9
  16. Kovács, S.: Interpolative Fuzzy Reasoning in Behaviour-based Control. In: Reusch, B. (ed.) Computational Intelligence, Theory and Applications, Advances in Soft Computing, vol. 2, pp. 159–170. Springer, Germany (2005)
  17. Martin H., J.A., De Lope, J.: A Distributed Reinforcement Learning Architecture for Multi-Link Robots. In: 4th International Conference on Informatics in Control, Automation and Robotics, ICINCO 2007 (2007)
  18. Rummery, G.A., Niranjan, M.: On-line Q-learning Using Connectionist Systems. Technical Report CUED/F-INFENG/TR 166, Cambridge University, UK (1994)
  19. Shepard, D.: A Two Dimensional Interpolation Function for Irregularly Spaced Data. In: Proc. 23rd ACM National Conference, pp. 517–524 (1968)
  20. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
  21. Tikk, D., Joó, I., Kóczy, L.T., Várlaki, P., Moser, B., Gedeon, T.D.: Stability of Interpolative Fuzzy KH-Controllers. Fuzzy Sets and Systems 125(1), 105–119 (2002)
  22. Vincze, D., Kovács, S.: Using Fuzzy Rule Interpolation-based Automata for Controlling Navigation and Collision Avoidance Behaviour of a Robot. In: IEEE 6th International Conference on Computational Cybernetics, Stará Lesná, Slovakia, November 27-29, pp. 79–84 (2008) ISBN 978-1-4244-2875-5
  23. Vincze, D., Kovács, S.: Fuzzy Rule Interpolation-based Q-learning. In: SACI 2009, 5th International Symposium on Applied Computational Intelligence and Informatics, Timisoara, Romania, May 28-29, pp. 55–59 (2009) ISBN 978-1-4244-4478-6
  24. Vincze, D., Kovács, S.: Reduced Rule Base in Fuzzy Rule Interpolation-based Q-learning. In: Proceedings of the 10th International Symposium of Hungarian Researchers on Computational Intelligence and Informatics, November 12-14, pp. 533–544. Budapest Tech, Hungary (2009)
  25. Watkins, C.J.C.H.: Learning from Delayed Rewards. Ph.D. thesis, Cambridge University, Cambridge, England (1989)
  26. The FRI Toolbox is available at:
  28. The cart-pole example for discrete space can be found at:

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Dávid Vincze (1)
  • Szilveszter Kovács (1)

  1. Department of Information Technology, University of Miskolc, Miskolc, Hungary
