Empirical Evaluation of Ensemble Techniques for a Pittsburgh Learning Classifier System

  • Jaume Bacardit
  • Natalio Krasnogor
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4998)

Abstract

Ensemble techniques have proved to be very successful in boosting the performance of several types of machine learning methods. In this paper, we illustrate their usefulness in combination with GAssist, a Pittsburgh-style Learning Classifier System. Two types of ensembles are tested. First we evaluate an ensemble for consensus prediction. In this case several rule sets learnt using GAssist with different initial random seeds are combined using a flat voting scheme, in a fashion similar to bagging. The second type, a hierarchical ensemble, is intended to deal more efficiently with ordinal classification problems, that is, problems where the classes have an intrinsic order and, in case of misclassification, it is preferable to predict a class that is close to the correct one within that order. The ensemble for consensus prediction is evaluated using 25 datasets from the UCI repository. The hierarchical ensemble is evaluated using a bioinformatics dataset. Both methods significantly improve the performance and behaviour of GAssist in all the tested domains.
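
The consensus-prediction scheme described above amounts to an unweighted majority vote over rule sets learnt from different random seeds. The following is a minimal Python sketch of that voting logic only; since no public GAssist wrapper is assumed here, scikit-learn decision trees trained with different seeds stand in for the GAssist rule sets, and the function name consensus_predict is illustrative rather than taken from the paper.

  from collections import Counter

  import numpy as np
  from sklearn.datasets import load_iris
  from sklearn.tree import DecisionTreeClassifier

  X, y = load_iris(return_X_y=True)

  # One ensemble member per random seed (stand-in for independent GAssist runs).
  seeds = range(10)
  members = [DecisionTreeClassifier(random_state=s).fit(X, y) for s in seeds]

  def consensus_predict(models, samples):
      """Flat (unweighted) majority vote over the member predictions."""
      votes = np.array([m.predict(samples) for m in models])  # shape: (n_models, n_samples)
      return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])

  print(consensus_predict(members, X[:5]))

In the paper's setting each member would be a full rule set produced by one GAssist run; the vote itself is what the ensemble adds on top of the base learner.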


Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Jaume Bacardit (1, 2)
  • Natalio Krasnogor (1)
  1. Automated Scheduling, Optimization and Planning research group, School of Computer Science, University of Nottingham, Nottingham, UK
  2. Multidisciplinary Centre for Integrative Biology, School of Biosciences, University of Nottingham, Sutton Bonington, UK
