Mining Concept-Drifting Data Streams

  • Haixun Wang
  • Philip S. Yu
  • Jiawei Han


Knowledge discovery from infinite data streams is an important and difficult task. It poses two challenges: the overwhelming volume of the streaming data and the drifting of its underlying concepts. In this chapter, we introduce a general framework for mining concept-drifting data streams using weighted ensemble classifiers. We train an ensemble of classification models, such as C4.5, RIPPER, naive Bayes, etc., from sequential chunks of the data stream. The classifiers in the ensemble are judiciously weighted based on their expected classification accuracy on the test data under the time-evolving environment. Thus, the ensemble approach improves both the efficiency of learning the model and the accuracy of classification. Our empirical study shows that the proposed methods have a substantial advantage over single-classifier approaches in prediction accuracy, and that the ensemble framework is effective for a variety of classification models.
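The chunk-and-weight idea above can be sketched in a few lines. This is a minimal illustration, not the chapter's implementation: the toy `CentroidClassifier` stands in for C4.5/RIPPER/naive Bayes, and each model's weight is taken as the error of random guessing minus that model's 0-1 error on the most recent chunk, clipped at zero (the chapter derives weights from expected mean-square error; this simplified variant captures the same intuition that models contradicted by the current concept get weight zero).

```python
from collections import defaultdict

class CentroidClassifier:
    """Toy per-chunk model: predicts the class whose feature centroid is
    nearest to the input. A stand-in for any base learner (C4.5, RIPPER, ...)."""
    def fit(self, X, y):
        sums = defaultdict(lambda: [0.0] * len(X[0]))
        counts = defaultdict(int)
        for x, label in zip(X, y):
            counts[label] += 1
            for j, v in enumerate(x):
                sums[label][j] += v
        self.centroids = {c: [s / counts[c] for s in sums[c]] for c in sums}
        return self

    def predict(self, x):
        return min(self.centroids,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(x, self.centroids[c])))

def chunk_weight(clf, X, y, n_classes=2):
    """Weight a model by how much it beats random guessing on the most
    recent chunk; models worse than random get weight 0."""
    err = sum(clf.predict(x) != label for x, label in zip(X, y)) / len(y)
    baseline = 1.0 - 1.0 / n_classes  # error of a uniform random classifier
    return max(0.0, baseline - err)

def ensemble_predict(models_and_weights, x):
    """Weighted vote over the ensemble's member predictions."""
    votes = defaultdict(float)
    for clf, w in models_and_weights:
        votes[clf.predict(x)] += w
    return max(votes, key=votes.get)
```

In use, one model is trained per incoming chunk, all weights are recomputed on the newest chunk, and the K highest-weighted models are kept; after a concept drift, models trained on the old concept fall at or below the random-guess baseline and stop influencing the vote.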

Key words

Data mining, concept learning, classifier design and evaluation





We thank Wei Fan of IBM T. J. Watson Research Center for providing us with a revised version of the C4.5 decision tree classifier and running some experiments.



Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. IBM T. J. Watson Research Center, New York, USA
  2. University of Illinois at Urbana-Champaign, Urbana, USA
