Advances in Artificial Intelligence

Volume 2056 of the series Lecture Notes in Computer Science pp 215-224


Stacking for Misclassification Cost Performance

  • Mike Cameron-Jones (University of Tasmania)
  • Andrew Charman-Williams (University of Tasmania)



This paper investigates the application of the multiple classifier technique known as “stacking” [23] to the task of classifier learning for misclassification cost performance, by straightforwardly adapting a technique successfully developed by Ting and Witten [20] for the task of classifier learning for accuracy performance. Experiments are reported comparing the performance of the stacked classifier with that of its component classifiers, and with that of other proposed cost-sensitive multiple classifier methods: a variation of “bagging”, and two “boosting”-style methods. These experiments confirm that stacking is competitive with the other methods that have previously been proposed. Some further experiments examine the performance of stacking methods with different numbers of component classifiers, including the case of stacking a single classifier, and provide the first demonstration that stacking a single classifier can be beneficial for many data sets.
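To illustrate the two ingredients the abstract combines, here is a minimal sketch in plain Python. It is not the authors' exact method: the level-1 combiner is replaced by a simple average of the base classifiers' class-probability estimates (a stand-in for a learned meta-classifier such as the one Ting and Witten use), and the cost matrix and probabilities are invented toy values. The cost-sensitive step is the standard minimum-expected-cost decision rule applied to the combined probabilities.

```python
# Toy sketch of cost-sensitive stacking: combine base-classifier
# probability estimates, then pick the class with minimum expected
# misclassification cost. All data here is hypothetical.

def stack_probs(base_probs):
    """Toy level-1 combiner: average the base classifiers' class
    probabilities (a stand-in for a learned meta-classifier)."""
    classes = base_probs[0].keys()
    n = len(base_probs)
    return {c: sum(p[c] for p in base_probs) / n for c in classes}

def predict_min_cost(probs, cost):
    """Choose the class minimizing expected misclassification cost.
    probs: dict class -> estimated probability of that class
    cost:  cost[true][pred] = cost of predicting `pred` when the
           true class is `true` (zero on the diagonal)
    """
    classes = list(probs)
    def expected_cost(pred):
        return sum(probs[true] * cost[true][pred] for true in classes)
    return min(classes, key=expected_cost)

# Hypothetical cost matrix: a false negative (true=1, pred=0) is
# five times as costly as a false positive.
cost = {0: {0: 0.0, 1: 1.0},
        1: {0: 5.0, 1: 0.0}}

# Two base classifiers' probability estimates for one test instance.
base_probs = [{0: 0.8, 1: 0.2},
              {0: 0.6, 1: 0.4}]

combined = stack_probs(base_probs)        # roughly {0: 0.7, 1: 0.3}
print(predict_min_cost(combined, cost))   # -> 1
```

Note that an accuracy-oriented rule would predict class 0 here (it has the higher probability), but because misclassifying a true class-1 instance costs 5, the expected cost of predicting 0 is about 0.3 × 5 = 1.5 versus about 0.7 × 1 = 0.7 for predicting 1, so the cost-sensitive rule flips the decision.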