Any change in the classification problem in the course of online classification is termed a changing environment. Examples of changing environments include a change in the underlying data distribution, a change in the class definitions, and the addition or removal of a feature. The two general strategies for handling changing environments are (i) constant updating of the classifier and (ii) re-training of the classifier after change detection. The former strategy is useful for gradual changes, while the latter is useful for abrupt changes. If the type of change is not known in advance, a combination of the two strategies may be advantageous. We propose a classifier ensemble combined using Winnow. For the constant-update strategy we used the nearest neighbour classifier with a fixed-size window and two methods with a learning rate: the online perceptron and an online version of the linear discriminant classifier (LDC). For the detect-and-retrain strategy we used the nearest neighbour classifier and the online LDC. Experiments were carried out on 28 data sets under 3 different scenarios: no change, gradual change and abrupt change. The results indicate that the combination works better than either strategy on its own.
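The Winnow-based combination described above can be illustrated with a minimal sketch. This is not the authors' exact implementation; it only shows the core Winnow idea from Littlestone (1988) applied to an ensemble: each base classifier's vote carries a weight, and classifiers that disagree with the true label are demoted multiplicatively, so members suited to the current environment quickly dominate the vote. The class name, the demotion factor `beta`, and the 0/1 label encoding are illustrative assumptions.

```python
import numpy as np

class WinnowEnsemble:
    """Sketch of a Winnow-weighted ensemble for two-class online learning.

    Assumptions (not from the paper): labels are 0/1, mistaken members are
    demoted by a factor beta < 1, and the ensemble predicts class 1 when the
    weighted mass for class 1 reaches half the total weight.
    """

    def __init__(self, n_classifiers, beta=0.5):
        self.weights = np.ones(n_classifiers)  # start all members equal
        self.beta = beta                       # multiplicative penalty

    def predict(self, votes):
        # votes: iterable of 0/1 labels proposed by each base classifier
        votes = np.asarray(votes)
        score = self.weights @ votes           # weighted mass for class 1
        return int(score >= self.weights.sum() / 2)

    def update(self, votes, true_label):
        # Demote every member whose vote disagreed with the true label
        votes = np.asarray(votes)
        self.weights[votes != true_label] *= self.beta
```

Because demotion is multiplicative, a base classifier trained for an outdated environment loses influence exponentially fast in the number of its mistakes, which is what makes the combination responsive to both gradual and abrupt change.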





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Juan J. Rodríguez, Lenguajes y Sistemas Informáticos, Universidad de Burgos, Spain
  • Ludmila I. Kuncheva, School of Computer Science, Bangor University, UK
