Probabilistic Aggregation of Classifiers for Incremental Learning

  • Patricia Trejo
  • Ricardo Ñanculef
  • Héctor Allende
  • Claudio Moraga
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4507)


We work with a recently proposed algorithm in which an ensemble of base classifiers, combined by weighted majority voting, is used for incremental classification of data. To accommodate novel information without compromising previously acquired knowledge, this algorithm requires an adequate strategy for determining the voting weights. Given an instance to classify, we propose to define each voting weight as the posterior probability of the corresponding hypothesis given the instance. By operating with priors and likelihood models, the obtained weights take into account not only the location of the instance in the different class-specific feature spaces but also the coverage of each class by the classifier and the quality of the learned hypothesis. This approach can provide important improvements in the generalization performance of the resulting classifier and in its ability to control the stability/plasticity tradeoff. Experiments are carried out on three real classification problems previously used to test incremental algorithms.
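The weighting scheme outlined in the abstract can be sketched in a few lines. This is an illustrative reading, not the authors' exact formulation: the Gaussian (Mahalanobis-distance) likelihood, the per-classifier summary statistics (`mean`, `cov`), and the function names are all assumptions introduced here for clarity.

```python
import numpy as np

def mahalanobis_likelihood(x, mean, cov):
    # Gaussian likelihood of x under a model (mean, cov), driven by the
    # squared Mahalanobis distance d2 = (x - m)^T C^{-1} (x - m).
    d = x - mean
    d2 = d @ np.linalg.inv(cov) @ d
    norm = np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(cov))
    return np.exp(-0.5 * d2) / norm

def posterior_weights(x, models, priors):
    # models[t] = (mean, cov) summarising the data classifier t was trained on
    # (a hypothetical summary; any likelihood model could stand in here).
    # The voting weight of hypothesis t is its normalised posterior given x.
    likes = np.array([mahalanobis_likelihood(x, m, c) for m, c in models])
    post = likes * priors
    return post / post.sum()

def weighted_majority_vote(x, predictions, weights, n_classes):
    # Accumulate each classifier's posterior weight on the class it predicts,
    # then return the class with the largest total score.
    scores = np.zeros(n_classes)
    for pred, w in zip(predictions, weights):
        scores[pred] += w
    return int(np.argmax(scores))
```

For example, an instance lying close to the region covered by the first classifier receives nearly all of its posterior mass from that classifier, so the ensemble follows that classifier's vote even if others disagree.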


Keywords: Mahalanobis Distance, Likelihood Model, Incremental Learning, Probabilistic, Uniform, Concept Drift




References

  1. Blake, C.L., Merz, C.J.: UCI repository of machine learning databases (1998)
  2. Freund, Y., Schapire, R.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–137 (1997)
  3. Gangardiwala, A., Polikar, R.: Dynamically weighted majority voting for incremental learning and comparison of three boosting based approaches. In: Int. Joint Conf. on Neural Networks (IJCNN 2005), pp. 1131–1136 (2005)
  4. Grossberg, S.: Nonlinear neural networks: principles, mechanisms and architectures. Neural Networks 1(1), 17–61 (1988)
  5. Kuncheva, L.: Combining Pattern Classifiers: Methods and Algorithms. Wiley InterScience, Chichester (2004)
  6. Littlestone, N., Warmuth, M.: The weighted majority algorithm. Information and Computation 108(2), 212–261 (1994)
  7. Muhlbaier, M.D., Topalis, A., Polikar, R.: Learn++.MT: A new approach to incremental learning. In: Roli, F., Kittler, J., Windeatt, T. (eds.) MCS 2004. LNCS, vol. 3077, pp. 52–61. Springer, Heidelberg (2004)
  8. Polikar, R., Udpa, L., Udpa, S., Honavar, V.: Learn++: An incremental learning algorithm for supervised neural networks. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 31(4), 497–508 (2001)
  9. Vijayakumar, S., Ogawa, H.: RKHS-based functional analysis for exact incremental learning. Neurocomputing 29, 85–113 (1999)
  10. Widmer, G., Kubat, M.: Learning in the presence of concept drift and hidden contexts. Machine Learning 23, 69–101 (1996)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Patricia Trejo ¹
  • Ricardo Ñanculef ¹
  • Héctor Allende ¹
  • Claudio Moraga ²,³

  1. Universidad Técnica Federico Santa María, Departamento de Informática, CP 110-V, Valparaíso, Chile
  2. European Centre for Soft Computing, 33600 Mieres, Asturias, Spain
  3. Dortmund University, 44221 Dortmund, Germany
