Combining Methods for Dynamic Multiple Classifier Systems

  • Amber Tomas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5064)

Abstract

Most of what we know about multiple classifier systems is based on empirical findings rather than theoretical results. Although some theoretical results exist for simple and weighted averaging, it is difficult to gain an intuitive feel for classifier combination. In this paper we derive a bound on the region of the feature space in which the decision boundary can lie for several methods of classifier combination using non-negative weights. This includes simple and weighted averaging of classifier outputs, and allows for a more intuitive understanding of the influence of the classifiers combined. We then apply this result to the design of a multiple logistic model for classifier combination in dynamic scenarios, and discuss its relevance to the concept of diversity amongst a set of classifiers. We consider the use of pairs of classifiers trained on label-swapped data, and deduce that although non-negative weights may be beneficial in stationary classification scenarios, for dynamic problems it is often necessary to use unconstrained weights for the combination.
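The intuition behind the bound can be sketched numerically: when classifier outputs are combined with non-negative weights summing to one, the combined posterior is a convex combination of the individual posteriors, so the combined decision boundary can only fall in the region where the base classifiers disagree. The snippet below illustrates this with two hypothetical one-dimensional logistic classifiers (illustrative stand-ins, not the classifiers analysed in the paper).

```python
import numpy as np

# Two hypothetical base classifiers returning P(class 1 | x) for a 1-D feature.
def clf_a(x):
    return 1.0 / (1.0 + np.exp(-(x - 1.0)))   # decision boundary at x = 1

def clf_b(x):
    return 1.0 / (1.0 + np.exp(-(x + 1.0)))   # decision boundary at x = -1

x = np.linspace(-3.0, 3.0, 601)
p_a, p_b = clf_a(x), clf_b(x)

# Non-negative weights summing to one: the combined output is a convex
# combination, so it always lies between min(p_a, p_b) and max(p_a, p_b).
w = np.array([0.3, 0.7])
p_comb = w[0] * p_a + w[1] * p_b
assert np.all(p_comb >= np.minimum(p_a, p_b) - 1e-12)
assert np.all(p_comb <= np.maximum(p_a, p_b) + 1e-12)

# Consequently the combined boundary (where p_comb crosses 0.5) must lie
# between the individual boundaries, here inside [-1, 1].
boundary = x[np.argmin(np.abs(p_comb - 0.5))]
assert -1.0 <= boundary <= 1.0
print("combined boundary near x =", round(float(boundary), 2))
```

With unconstrained (possibly negative) weights this containment no longer holds, which is the freedom the abstract argues is often needed in dynamic problems.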

Keywords

Dynamic Classification · Multiple Classifier Systems · Classifier Diversity

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Amber Tomas
    Department of Statistics, The University of Oxford, Oxford, United Kingdom