Neural Computing and Applications, Volume 20, Issue 7, pp 935–944

On-line learning from streaming data with delayed attributes: a comparison of classifiers and strategies

  • Mónica Millán-Giraldo
  • J. Salvador Sánchez
  • V. Javier Traver


In many real applications, data are not all available at once, or processing them in a single batch is not affordable; instead, instances arrive sequentially in a stream. Streaming data introduces new challenges to the machine learning community, since decisions must be made on-line and under uncertainty. The problem addressed in this paper is that of classifying incoming instances for which one attribute arrives only after a given delay. In this formulation, many open issues arise, such as how to classify the incomplete instance, whether to wait for the delayed attribute before performing any classification, and when and how to update a reference set. Three different strategies are proposed which address these issues differently. Orthogonally to these strategies, three classifiers with different characteristics are used. Keeping on-line learning strategies independent of the classifiers facilitates system design and contrasts with the common alternative of carefully crafting an ad hoc classifier. To assess learning performance under these different strategies and classifiers, they are compared using learning curves and final classification errors on fifteen data sets. Results indicate that learning in this stringent context of streaming data and delayed attributes can successfully take place even with simple on-line strategies. Furthermore, active strategies generally behave better than more conservative passive ones. Regarding the classifiers, simple instance-based classifiers such as the well-known nearest neighbor may outperform more elaborate classifiers such as support vector machines, especially if some measure of classification confidence is considered in the process.
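The setting described above can be illustrated with a minimal sketch: each incoming instance is classified by a nearest-neighbor rule using only the attributes available at that moment, and once the delayed attribute arrives, the completed instance is added to the reference set with its predicted label (a simple self-training, "active" update). The function names, the fixed-delay queue, and the update rule are illustrative assumptions, not the paper's exact strategies:

```python
import numpy as np

def nn_classify(ref_X, ref_y, x, mask):
    """1-NN over the attributes selected by the boolean array `mask`."""
    dists = np.linalg.norm(ref_X[:, mask] - x[mask], axis=1)
    i = np.argmin(dists)
    return ref_y[i], dists[i]

def stream_learn(ref_X, ref_y, stream_X, delayed_attr, delay=1):
    """Classify each streaming instance without its delayed attribute,
    then grow the reference set once the attribute becomes available,
    labeling the completed instance with its own prediction."""
    mask = np.ones(ref_X.shape[1], dtype=bool)
    mask[delayed_attr] = False   # attribute not yet available at test time
    preds, pending = [], []      # pending: instances awaiting the attribute
    for x in stream_X:
        y_hat, _ = nn_classify(ref_X, ref_y, x, mask)
        preds.append(y_hat)
        pending.append((x, y_hat))
        if len(pending) > delay:  # delayed attribute has now arrived
            x_full, y_pred = pending.pop(0)
            ref_X = np.vstack([ref_X, x_full])
            ref_y = np.append(ref_y, y_pred)
    return preds
```

The distance `dists[i]` returned by `nn_classify` could serve as a crude classification-confidence measure, e.g. to skip the reference-set update when the nearest neighbor is far away.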


Keywords: Streaming data · On-line classification · Delayed attributes · Semi-supervised learning



This work has been supported in part by the Spanish Ministry of Education and Science under grants CSD2007-00018 Consolider Ingenio 2010 and TIN2009-14205, and by Fundació Caixa Castelló—Bancaixa under grant P1-1B2009-04.



Copyright information

© Springer-Verlag London Limited 2010

Authors and Affiliations

  • Mónica Millán-Giraldo (1)
  • J. Salvador Sánchez (1)
  • V. Javier Traver (1)

  1. Dept. Lenguajes y Sistemas Informáticos, Institute of New Imaging Technologies, Universitat Jaume I, Castellón, Spain
