Stacking for Misclassification Cost Performance

  • Mike Cameron-Jones
  • Andrew Charman-Williams
Conference paper

DOI: 10.1007/3-540-45153-6_21

Part of the Lecture Notes in Computer Science book series (LNCS, volume 2056)
Cite this paper as:
Cameron-Jones M., Charman-Williams A. (2001) Stacking for Misclassification Cost Performance. In: Stroulia E., Matwin S. (eds) Advances in Artificial Intelligence. AI 2001. Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence), vol 2056. Springer, Berlin, Heidelberg

Abstract

This paper investigates the application of the multiple classifier technique known as “stacking” [23] to the task of classifier learning for misclassification cost performance, by straightforwardly adapting a technique successfully developed by Ting and Witten [20] for the task of classifier learning for accuracy performance. Experiments are reported comparing the performance of the stacked classifier with that of its component classifiers, and of other proposed cost-sensitive multiple classifier methods: a variation of “bagging”, and two “boosting” style methods. These experiments confirm that stacking is competitive with the other methods that have previously been proposed. Some further experiments examine the performance of stacking methods with different numbers of component classifiers, including the case of stacking a single classifier, and provide the first demonstration that stacking a single classifier can be beneficial for many data sets.
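The cost-sensitive decision step that sits on top of a stacked classifier can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: it assumes the level-1 (meta) learner produces class-probability estimates, and it then predicts the class minimizing expected misclassification cost under a given cost matrix, rather than the most probable class. All names and the toy numbers below are assumptions for illustration.

```python
def min_expected_cost_class(probs, cost):
    """Pick the prediction with minimal expected misclassification cost.

    probs: dict mapping class -> P(class | x), e.g. from a meta-level model.
    cost:  dict mapping (actual, predicted) -> cost of predicting `predicted`
           when the true class is `actual` (zero on the diagonal).
    """
    classes = list(probs)

    def expected_cost(pred):
        # Expected cost of predicting `pred`, averaged over the true class.
        return sum(probs[actual] * cost[(actual, pred)] for actual in classes)

    return min(classes, key=expected_cost)


# Toy example: "pos" is the less probable class, but failing to predict a
# true "pos" is ten times as costly as a false alarm, so the cost-sensitive
# rule flips the accuracy-optimal decision.
probs = {"pos": 0.3, "neg": 0.7}
cost = {("pos", "pos"): 0.0, ("pos", "neg"): 10.0,
        ("neg", "pos"): 1.0, ("neg", "neg"): 0.0}

print(min_expected_cost_class(probs, cost))  # -> pos
```

Here the expected cost of predicting "pos" is 0.3·0 + 0.7·1 = 0.7, versus 0.3·10 + 0.7·0 = 3.0 for "neg", so the cheaper-in-expectation class wins even though it is less probable.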


Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Mike Cameron-Jones (1)
  • Andrew Charman-Williams (1)
  1. University of Tasmania, Launceston, Australia
