Abstract
MACS (Modular Anticipatory Classifier System) is a new Anticipatory Classifier System. Compared with its predecessors ACS, ACS2, and YACS, the latent learning process in MACS can take advantage of a new class of regularities: instead of anticipating all attributes of a perceived situation within a single classifier, MACS anticipates only one attribute per classifier. In this paper we describe how the model of the environment represented by the classifiers can be used to perform active exploration, and how this exploration policy is aggregated with the exploitation policy. The architecture is validated experimentally. We then draw more general principles from the architectural choices giving rise to MACS, showing that building a model of the environment can be seen as a function approximation problem, one that can be solved with Anticipatory Classifier Systems such as MACS, but also with accuracy-based systems like XCS or XCSF, organized into a Dyna architecture.
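To make the per-attribute anticipation idea concrete, the following is a minimal, hypothetical sketch, not the paper's actual algorithm or data structures: each classifier matches a situation–action pair and anticipates the value of a single attribute of the next situation, and the full anticipated next situation is assembled attribute by attribute from the matching classifiers. All names here (`Classifier`, `anticipate`, the `'#'` don't-care symbol) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Classifier:
    condition: tuple   # per-attribute condition; '#' matches any value
    action: int
    attribute: int     # index of the ONE attribute this classifier anticipates
    prediction: str    # anticipated value of that attribute

    def matches(self, situation, action):
        # A classifier matches when the action agrees and every condition
        # attribute either is '#' or equals the corresponding situation value.
        return self.action == action and all(
            c == '#' or c == s for c, s in zip(self.condition, situation)
        )

def anticipate(classifiers, situation, action):
    """Assemble the anticipated next situation attribute by attribute."""
    nxt = list(situation)  # default: attributes assumed unchanged
    for cl in classifiers:
        if cl.matches(situation, action):
            nxt[cl.attribute] = cl.prediction
    return tuple(nxt)

if __name__ == "__main__":
    # One classifier anticipating only attribute 0: "if attribute 0 is '0'
    # and action 1 is taken, attribute 0 becomes '1'".
    cl = Classifier(condition=('0', '#'), action=1, attribute=0, prediction='1')
    print(anticipate([cl], ('0', '1'), 1))  # attribute 0 flips, attribute 1 kept
```

The point of the modular decomposition is visible here: a regularity affecting only one attribute can be captured by one small classifier, instead of forcing every classifier to commit to a full next-situation prediction.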
References
M. V. Butz, D. E. Goldberg, and W. Stolzmann. Introducing a genetic generalization pressure to the Anticipatory Classifier System part I: Theoretical approach. In Proceedings of the 2000 Genetic and Evolutionary Computation Conference (GECCO 2000), pages 34–41, 2000.
M. V. Butz. An algorithmic description of ACS2. In P. L. Lanzi, W. Stolzmann, and S. W. Wilson, editors, Advances in Learning Classifier Systems, volume 2321 of Lecture Notes in Artificial Intelligence, pages 211–229. Springer-Verlag, Berlin, 2002.
M. V. Butz. Biasing exploration in an anticipatory learning classifier system. In P. L. Lanzi, W. Stolzmann, and S. W. Wilson, editors, Advances in Learning Classifier Systems, volume 2321 of Lecture Notes in Artificial Intelligence, pages 3–22. Springer-Verlag, Berlin, 2002.
P. Gérard, J.-A. Meyer, and O. Sigaud. Combining latent learning with dynamic programming. European Journal of Operational Research, to appear, 2003.
P. Gérard and O. Sigaud. YACS: Combining Anticipation and Dynamic Programming in Classifier Systems. In P. L. Lanzi, W. Stolzmann, and S.W. Wilson, editors, Advances in Learning Classifier Systems, volume 1996 of Lecture Notes in Artificial Intelligence, pages 52–69. Springer-Verlag, Berlin, 2001.
P. Gérard, W. Stolzmann, and O. Sigaud. YACS: a new Learning Classifier System with Anticipation. Journal of Soft Computing: Special Issue on Learning Classifier Systems, 6(3–4):216–228, 2002.
J. H. Holland. Properties of the bucket brigade algorithm. In J. J. Grefenstette, editor, Proceedings of the 1st International Conference on Genetic Algorithms and their Applications (ICGA85), pages 1–7. Lawrence Erlbaum Associates, July 1985.
P. L. Lanzi. Learning Classifier Systems from a reinforcement learning perspective. Technical Report 00-03, Dip. di Elettronica e Informazione, Politecnico di Milano, 2000.
P. L. Lanzi, W. Stolzmann, and S.W. Wilson, editors. Advances in Learning Classifier Systems, volume 2321 of Lecture Notes in Artificial Intelligence. Springer-Verlag, Berlin, 2002.
R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
W. Stolzmann. Anticipatory Classifier Systems. In J.R. Koza, W. Banzhaf, K. Chellapilla, K. Deb, M. Dorigo, D. B. Fogel, M. H. Garzon, D. E. Goldberg, H. Iba, and R. Riolo, editors, Genetic Programming, pages 658–664. Morgan Kaufmann Publishers, Inc., San Francisco, CA, 1998.
W. Stolzmann. An introduction to Anticipatory Classifier Systems. In P. L. Lanzi, W. Stolzmann, and S. W. Wilson, editors, Learning Classifier Systems: from Foundations to Applications, pages 175–194. Springer-Verlag, Heidelberg, 2000.
R. S. Sutton. Integrating architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning ICML’90, pages 216–224, San Mateo, CA, 1990. Morgan Kaufmann.
R. S. Sutton. Reinforcement learning architectures for animats. In J.-A. Meyer and S. W. Wilson, editors, From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior, pages 288–296, Cambridge, MA, 1991. MIT Press.
C. J. Watkins. Learning from Delayed Rewards. PhD thesis, Psychology Department, University of Cambridge, England, 1989.
S. W. Wilson. ZCS, a zeroth level Classifier System. Evolutionary Computation, 2(1):1–18, 1994.
S. W. Wilson. Classifier fitness based on accuracy. Evolutionary Computation, 3(2):149–175, 1995.
S. W. Wilson. Function approximation with a classifier system. In L. Spector, E. D. Goodman, A. Wu, W. B. Langdon, H. M. Voigt, and M. Gen, editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO01), pages 974–981. Morgan Kaufmann, 2001.
S. W. Wilson. Classifiers that approximate functions. Natural Computing, 1(2–3):211–234, 2002.
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Gérard, P., Sigaud, O. (2003). Designing Efficient Exploration with MACS: Modules and Function Approximation. In: Cantú-Paz, E., et al. Genetic and Evolutionary Computation — GECCO 2003. GECCO 2003. Lecture Notes in Computer Science, vol 2724. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45110-2_85
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-40603-7
Online ISBN: 978-3-540-45110-5