Continuous-State Reinforcement Learning with Fuzzy Approximation

  • Lucian Buşoniu
  • Damien Ernst
  • Bart De Schutter
  • Robert Babuška
Conference paper

DOI: 10.1007/978-3-540-77949-0_3

Part of the Lecture Notes in Computer Science book series (LNCS, volume 4865)
Cite this paper as:
Buşoniu L., Ernst D., De Schutter B., Babuška R. (2008) Continuous-State Reinforcement Learning with Fuzzy Approximation. In: Tuyls K., Nowe A., Guessoum Z., Kudenko D. (eds) Adaptive Agents and Multi-Agent Systems III. Adaptation and Multi-Agent Learning. Lecture Notes in Computer Science, vol 4865. Springer, Berlin, Heidelberg

Abstract

Reinforcement learning (RL) is a widely used learning paradigm for adaptive agents. Several convergent and consistent RL algorithms have been proposed and intensively studied. In their original form, these algorithms require the environment states and agent actions to take values in a relatively small discrete set. For the more difficult case where the state-action space is continuous, fuzzy representations for approximate, model-free RL have been proposed in the literature. In this work, we propose a fuzzy approximation architecture similar to those previously used for Q-learning, but combine it with the model-based Q-value iteration algorithm. We prove that the resulting algorithm converges. We also give a modified, asynchronous variant of the algorithm that converges at least as fast as the original version. An illustrative simulation example is provided.
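
The abstract describes combining a fuzzy approximator with model-based Q-value iteration: Q-values are stored at the centers of fuzzy membership functions and interpolated elsewhere, and a Bellman backup is applied through this approximator. Below is a minimal Python sketch of a synchronous fuzzy Q-iteration of this kind, under assumed specifics not taken from the paper: a 1-D state space, triangular membership functions forming a partition of unity, and a toy known model `f`/`rho`. All names, dimensions, and parameter values are illustrative.

```python
import numpy as np

# --- Problem model (assumed known, since the algorithm is model-based) ---
# Hypothetical 1-D example: state x in [-1, 1], two discrete actions.
gamma = 0.95                      # discount factor
actions = np.array([-0.1, 0.1])   # discrete action set
centers = np.linspace(-1, 1, 11)  # triangular membership function centers

def f(x, u):
    """Deterministic transition model (illustrative only)."""
    return np.clip(x + u, -1.0, 1.0)

def rho(x, u):
    """Reward model (illustrative): drive the state toward the origin."""
    return -x**2

def phi(x):
    """Triangular fuzzy membership degrees over the state space.
    Normalized so they form a partition of unity: sum_i phi_i(x) = 1."""
    width = centers[1] - centers[0]
    m = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    return m / m.sum()

# --- Synchronous fuzzy Q-iteration ---
# theta[i, j] approximates Q(centers[i], actions[j]); for any state x,
# Q(x, u_j) is interpolated as sum_i phi_i(x) * theta[i, j].
theta = np.zeros((len(centers), len(actions)))
for _ in range(200):
    new_theta = np.empty_like(theta)
    for i, x in enumerate(centers):
        for j, u in enumerate(actions):
            x_next = f(x, u)
            q_next = phi(x_next) @ theta  # Q(x_next, ·) for all actions
            # Bellman backup through the fuzzy approximator
            new_theta[i, j] = rho(x, u) + gamma * q_next.max()
    if np.abs(new_theta - theta).max() < 1e-8:  # stop at (near) convergence
        theta = new_theta
        break
    theta = new_theta

# Greedy action at an arbitrary continuous state:
x = 0.37
best_action = actions[np.argmax(phi(x) @ theta)]
```

The asynchronous variant mentioned in the abstract would, in this sketch, write each updated `theta[i, j]` in place and use it immediately in subsequent backups within the same sweep, rather than buffering a full `new_theta`.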

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Lucian Buşoniu (1)
  • Damien Ernst (2)
  • Bart De Schutter (1)
  • Robert Babuška (1)

  1. Delft University of Technology, The Netherlands
  2. Supélec, Rennes, France