Learning classifier systems from a reinforcement learning perspective

Abstract

We analyze learning classifier systems in the light of tabular reinforcement learning. We note that although genetic algorithms are the most distinctive feature of learning classifier systems, it is not clear whether genetic algorithms are actually essential to them. In fact, there are models which are strongly based on evolutionary computation (e.g., Wilson's XCS) and others which do not exploit evolutionary computation at all (e.g., Stolzmann's ACS). To clarify this issue, we try to develop learning classifier systems "from scratch", i.e., starting from one of the best-known reinforcement learning techniques, Q-learning. We first consider the basics of reinforcement learning: a problem modeled as a Markov decision process and tabular Q-learning. We introduce a formal framework to define a general-purpose rule-based representation, which we use to implement tabular Q-learning. We formally define generalization within rules and discuss the possible approaches to extend our rule-based Q-learning with generalization capabilities. We suggest that genetic algorithms are probably the most general approach for adding generalization, although they might not be the only solution.
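To make the ideas summarized above concrete, the sketch below shows rule-based tabular Q-learning with the standard update Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)), where rule conditions may contain '#' "don't care" symbols that let one rule cover several states. This is an illustrative assumption-laden toy, not the paper's exact formulation: the 4-state corridor problem, the 2-bit state encoding, and the fixed hand-written rule population (which in a learning classifier system would normally be discovered by a genetic algorithm) are all made up for the example.

import random

ACTIONS = ("left", "right")
GOAL = "11"                      # reaching state 3 ("11") yields reward 1.0

def encode(i):                   # integer state -> 2-bit string
    return format(i, "02b")

def step(i, action):
    j = min(max(i + (1 if action == "right" else -1), 0), 3)
    return j, (1.0 if encode(j) == GOAL else 0.0), encode(j) == GOAL

def matches(condition, state):
    # A condition matches a state if every non-'#' symbol agrees.
    return all(c == "#" or c == s for c, s in zip(condition, state))

# Fixed rule population: (condition, action) -> Q-value.
# With fully specific conditions this degenerates to a plain Q-table;
# the '#' in "0#" lets one rule cover states "00" and "01".
rules = {("0#", "right"): 0.0, ("0#", "left"): 0.0,
         ("10", "right"): 0.0, ("10", "left"): 0.0}

ALPHA, GAMMA, EPSILON = 0.2, 0.9, 0.1

def q(state, action):
    vals = [v for (c, a), v in rules.items() if a == action and matches(c, state)]
    return max(vals) if vals else 0.0

for episode in range(300):
    i, done = 0, False
    while not done:
        s = encode(i)
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda act: q(s, act))
        j, r, done = step(i, a)
        target = r + GAMMA * max(q(encode(j), act) for act in ACTIONS)
        # Q-learning update applied to every rule matching (s, a).
        for (c, act) in rules:
            if act == a and matches(c, s):
                rules[(c, act)] += ALPHA * (target - rules[(c, act)])
        i = j

print(rules)

Running the sketch shows the "right" rules converging to higher values than the "left" rules; replacing the hand-written conditions with an evolving population is where the genetic algorithm discussed in the paper would come in.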

Cite this article

Lanzi, P. Learning classifier systems from a reinforcement learning perspective. Soft Computing 6, 162–170 (2002). https://doi.org/10.1007/s005000100113

Keywords: Genetic algorithms, Reinforcement learning, XCS, Q-learning