Adaptive Classifier Selection Based on Two-Level Hypothesis Tests for Incremental Learning

  • Haixia Chen
  • Senmiao Yuan
  • Kai Jiang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4109)


Recently, the importance of incremental learning in changing environments has been acknowledged. This paper proposes a new ensemble learning method based on two-level hypothesis tests for incremental learning in concept-changing environments. We analyze the classification error as a stochastic variable and introduce hypothesis testing as a mechanism for adaptively selecting classifiers. Hypothesis tests are used to distinguish useful from useless individual classifiers and to identify the classifier to be updated. Classifiers deemed useful by the hypothesis test are integrated to form the final prediction. Experiments with simulated concept-changing scenarios show that the proposed method can adaptively choose proper classifiers and adapt quickly to different concept changes, maintaining its performance level.
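The core idea of treating classification error as a stochastic variable and testing it can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it assumes a one-sided z-test on each classifier's observed error proportion against a chance baseline `p0` (here 0.5 for a two-class problem), at a fixed significance level of 0.05; the function names and thresholds are illustrative.

```python
import math

def is_useful(errors: int, trials: int, p0: float = 0.5) -> bool:
    """One-sided z-test on the error proportion (illustrative sketch).

    H0: true error rate >= p0 (classifier no better than chance).
    Rejecting H0 at alpha = 0.05 deems the classifier useful.
    """
    p_hat = errors / trials                       # observed error rate
    se = math.sqrt(p0 * (1 - p0) / trials)        # std. error under H0
    z = (p_hat - p0) / se
    z_crit = -1.6449                              # one-sided critical value, alpha = 0.05
    return z < z_crit

def select_useful(error_counts, trials):
    """Keep only classifiers whose error is significantly below chance."""
    return [i for i, e in enumerate(error_counts) if is_useful(e, trials)]

# Example: three classifiers evaluated on 100 recent instances.
# 30/100 errors is significantly below chance; 48/100 is not.
kept = select_useful([30, 48, 25], 100)
```

A second test of the same form, run only on the retained classifiers, could then flag the weakest one for updating, mirroring the paper's two-level structure.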


Keywords: Concept Change · Base Classifier · Incremental Learning · Concept Drift · Target Concept



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Haixia Chen (1)
  • Senmiao Yuan (1)
  • Kai Jiang (2)
  1. College of Computer Science and Technology, Jilin University, Changchun, China
  2. China Electronics Technology Group Corporation No. 45 Research Institute, Beijing, China
