Abstract

In recent years a number of authors have suggested that combining classifiers within local regions of the measurement space might yield better classification performance than rigid global weighting schemes. In this paper we describe a modified version of the CART algorithm, called ARPACC, that performs local classifier combination. One obstacle to such combination is that the ‘optimal’ covariance combination results originally assumed only two classes and unbiased classifiers. Here we adopt an approach based on minimizing the Brier score and introduce a generalized matrix inverse solution for cases where the error matrix is singular. We also report preliminary experimental results on simulated data.
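The ARPACC algorithm itself is not reproduced here, but the core idea the abstract describes, covariance-based linear combination of classifier outputs with a generalized inverse as a fallback when the error matrix is singular, can be sketched as follows. This is a hedged illustration, not the paper's method: the function name `combination_weights`, the use of the error cross-product matrix as the "error matrix", and the Moore-Penrose pseudoinverse (`numpy.linalg.pinv`) as the generalized inverse are all assumptions for the sake of the example.

```python
import numpy as np

def combination_weights(errors):
    """Weights for a linear combination of classifiers that minimizes
    squared error (a Brier-score-style criterion).

    errors : (n_samples, n_classifiers) array of per-classifier errors,
             e.g. predicted class probability minus the true 0/1 label.

    Returns weights that sum to 1. The Moore-Penrose pseudoinverse is
    used in place of an ordinary inverse, so the computation still goes
    through when the error matrix is singular (for instance when two
    classifiers produce identical outputs).
    """
    # Error cross-product matrix (sample estimate of the error covariance
    # structure when the classifiers are treated as unbiased).
    S = errors.T @ errors / len(errors)
    S_pinv = np.linalg.pinv(S)          # generalized inverse handles singular S
    ones = np.ones(S.shape[0])
    w = S_pinv @ ones
    return w / (ones @ w)               # normalize so the weights sum to 1
```

When `S` is nonsingular this reduces to the classical minimum-variance combination of Bates and Granger; when two columns of `errors` coincide, `S` is singular, an ordinary inverse would fail, but the pseudoinverse still yields a valid set of weights.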

Keywords

Local Combination, Brier Score, CART

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Ross A. McDonald (1)
  • Idris A. Eckley (2)
  • David J. Hand (1)

  1. Imperial College London
  2. Shell Research Ltd