Abstract
A persistent challenge in data mining is matching an applicable and effective machine learner to a target problem. One approach to facilitate this process is to develop algorithms that avoid modeling assumptions and seek to adapt to the problem at hand. Learning classifier systems (LCSs) have proven themselves to be a flexible, interpretable, and powerful approach to classification problems. They are particularly advantageous with respect to multivariate, complex, or heterogeneous patterns of association. While LCSs have been successfully adapted to handle continuous-valued endpoint (i.e. regression) problems, they still show key performance deficits in prediction accuracy and model simplicity compared with other machine learners. In the present study we propose a strategy for improving LCS performance on supervised learning problems with continuous-valued endpoints. Specifically, we hypothesize that if an LCS population includes and co-evolves two disparate representations (i.e. LCS rules and genetic programming trees), then the system can adopt whichever representation best captures meaningful patterns of association, regardless of the complexity of that association or the nature of the endpoint (i.e. discrete vs. continuous). To integrate these modeling representations, we rely on a multi-objective fitness (i.e. accuracy and instance coverage) and an information exchange mechanism between the two representation 'species'. This paper lays out the reasoning for this approach, introduces the proposed methodology, and presents preliminary results supporting its potential as an area for further evaluation and development.
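To make the co-evolution idea concrete, the minimal Python sketch below illustrates the general scheme described above: a single population holding two classifier 'species' (interval-based LCS rules and GP-style trees), each scored on the same two objectives named in the abstract, prediction accuracy and instance coverage. This is an illustrative assumption, not the authors' implementation; the names Rule, Tree, and objectives, the toy dataset, and the scalarised selection step are all hypothetical simplifications.

```python
import math
import random

random.seed(0)

# Toy training data: two features, continuous endpoint with a heterogeneous effect.
DATA = []
for _ in range(100):
    x = (random.random(), random.random())
    y = x[0] + (2.0 * x[1] if x[0] > 0.5 else 0.0)
    DATA.append((x, y))


class Rule:
    """LCS-style rule: an interval condition on one attribute, constant prediction."""
    def __init__(self):
        attr = random.randrange(2)
        lo, hi = sorted((random.random(), random.random()))
        self.intervals = {attr: (lo, hi)}
        self.prediction = random.uniform(0.0, 3.0)

    def matches(self, x):
        return all(lo <= x[a] <= hi for a, (lo, hi) in self.intervals.items())

    def predict(self, x):
        return self.prediction


class Tree:
    """GP-style model (reduced here to a linear expression) that matches every instance."""
    def __init__(self):
        self.w = [random.uniform(-2.0, 2.0) for _ in range(3)]  # w0 + w1*x0 + w2*x1

    def matches(self, x):
        return True

    def predict(self, x):
        return self.w[0] + self.w[1] * x[0] + self.w[2] * x[1]


def objectives(clf):
    """Return (accuracy, coverage): negative MAE over matched instances, fraction matched."""
    matched = [(x, y) for x, y in DATA if clf.matches(x)]
    coverage = len(matched) / len(DATA)
    if not matched:
        return (-math.inf, 0.0)
    mae = sum(abs(clf.predict(x) - y) for x, y in matched) / len(matched)
    return (-mae, coverage)


# One population containing both 'species'; a simple scalarisation stands in for the
# paper's multi-objective fitness and information-exchange mechanism, which this
# sketch does not reproduce.
population = [Rule() for _ in range(20)] + [Tree() for _ in range(20)]
best = max(population, key=lambda clf: sum(objectives(clf)))
print(type(best).__name__, objectives(best))
```

The point of the two objectives is visible even in this toy setting: a rule can trade coverage for accuracy by matching only a niche of instances, while a tree covers every instance, so neither representation dominates on a single score and both can persist in the same population.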
Keywords
- Learning Classifier Systems (LCS)
- GP Trees
- Supervised Learning (SL)
- Rule-based Machine Learning
- Endpoint Prediction
Acknowledgements
This work was supported by NIH grants AI11679, LM009012, AI116794, DK112217, ES013508, and LM010098.
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
About this paper
Cite this paper
Urbanowicz, R.J., Yang, B., Moore, J.H. (2018). Problem Driven Machine Learning by Co-evolving Genetic Programming Trees and Rules in a Learning Classifier System. In: Banzhaf, W., Olson, R., Tozier, W., Riolo, R. (eds) Genetic Programming Theory and Practice XV. Genetic and Evolutionary Computation. Springer, Cham. https://doi.org/10.1007/978-3-319-90512-9_4
DOI: https://doi.org/10.1007/978-3-319-90512-9_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-90511-2
Online ISBN: 978-3-319-90512-9