
Can AdaBoost.M1 Learn Incrementally? A Comparison to Learn++ Under Different Combination Rules

  • Hussein Syed Mohammed
  • James Leander
  • Matthew Marbach
  • Robi Polikar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4131)

Abstract

We had previously introduced Learn++, inspired in part by the ensemble-based AdaBoost algorithm, for incrementally learning from new data, including data that introduce new concept classes, without forgetting what had previously been learned. In this effort, we compare the incremental learning performance of Learn++ and AdaBoost under several combination schemes, including their native weighted majority voting. We show on several databases that changing AdaBoost's distribution update rule from a hypothesis-based update to an ensemble-based update allows significantly more efficient incremental learning, regardless of the rule used to combine the classifiers.
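
To make the distinction concrete, the sketch below contrasts the two update rules. It is a minimal illustration written for this page, not code from the paper: the function name, its arguments, and the use of NumPy are all assumptions. A hypothesis-based update reduces the weights of instances that the latest weak hypothesis classifies correctly, while an ensemble-based update reduces the weights of instances that the current composite hypothesis classifies correctly, so that later classifiers concentrate on what the ensemble as a whole still gets wrong, such as instances of a newly introduced class.

    import numpy as np

    def update_distribution(D, y_true, h_pred, H_pred, error, mode="ensemble"):
        # beta in (0, 1) shrinks the weights of correctly classified
        # instances; 'error' is the weighted error of whichever
        # hypothesis drives the update.
        beta = error / (1.0 - error)
        if mode == "hypothesis":
            # AdaBoost.M1-style update: driven by the latest weak hypothesis.
            correct = (h_pred == y_true)
        else:
            # Ensemble-based (Learn++-style) update: driven by the composite
            # hypothesis, so instances the whole ensemble misclassifies,
            # including those from a brand-new class, keep high weight.
            correct = (H_pred == y_true)
        D = D * np.where(correct, beta, 1.0)
        return D / D.sum()  # renormalize to a proper distribution

For instance, calling update_distribution(D, y, h_pred, H_pred, error=0.2, mode="ensemble") leaves full weight on every instance the composite hypothesis misclassifies while scaling the correctly classified ones by 0.25, which is what steers newly trained classifiers toward still-unlearned data.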

Keywords

Generalization Performance · Incremental Learning · Combination Rule · Weak Classifier · Narrow Confidence Interval



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Hussein Syed Mohammed (1)
  • James Leander (1)
  • Matthew Marbach (1)
  • Robi Polikar (1)

  1. Electrical and Computer Engineering, Rowan University, Glassboro, USA
