Voting Massive Collections of Bayesian Network Classifiers for Data Streams

  • Remco R. Bouckaert
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4304)


We present a new method for voting sets of Bayesian classifiers whose size is exponential in the number of attributes, in polynomial time and with polynomial memory requirements. Training is linear in the number of instances in the dataset and can be performed incrementally, which allows the collection to learn from massive data streams. The method allows flexibility in balancing computational complexity, memory requirements and classification performance. Unlike many other incremental Bayesian methods, all statistics kept in memory are used directly in classification.

Experimental results show that the classifiers perform well on both small and very large data sets, and that classification performance can be weighed against computational and memory costs.
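The core idea, voting an implicitly defined ensemble of simple Bayesian models by sharing one set of incrementally updated counts, is in the same family as averaged one-dependence estimators (Webb et al.). The sketch below is an illustrative AODE-style example, not the paper's exact algorithm: each training instance updates joint counts in a single pass (so training is linear and incremental), and classification averages one one-dependence model per attribute without ever materialising the ensemble. All names and the add-one smoothing are assumptions for illustration.

```python
from collections import defaultdict

class AODESketch:
    """Incremental averaged one-dependence estimators over discrete attributes.

    Illustrative sketch only: every count kept in memory is used directly at
    classification time, and each update is a single incremental pass, which
    mirrors the properties described in the abstract (not the paper's method).
    """

    def __init__(self, n_attrs):
        self.n = n_attrs
        self.total = 0
        self.class_counts = defaultdict(int)  # N(c)
        self.pair = defaultdict(int)          # N(c, i, x_i)
        self.triple = defaultdict(int)        # N(c, i, x_i, j, x_j)

    def update(self, x, c):
        """Incorporate one labelled instance; O(n^2) in the attribute count."""
        self.total += 1
        self.class_counts[c] += 1
        for i, xi in enumerate(x):
            self.pair[(c, i, xi)] += 1
            for j, xj in enumerate(x):
                if i != j:
                    self.triple[(c, i, xi, j, xj)] += 1

    def predict(self, x):
        """Average one one-dependence model per 'super-parent' attribute."""
        best, best_score = None, -1.0
        for c in self.class_counts:
            score = 0.0
            for i, xi in enumerate(x):
                # P(c, x_i) * prod_j P(x_j | c, x_i), with add-one smoothing.
                s = (self.pair[(c, i, xi)] + 1) / (self.total + 1)
                for j, xj in enumerate(x):
                    if i != j:
                        s *= (self.triple[(c, i, xi, j, xj)] + 1) / \
                             (self.pair[(c, i, xi)] + 1)
                score += s
            if score > best_score:
                best, best_score = c, score
        return best
```

Because the counts are plain sufficient statistics, the same structure supports streaming data: an instance is seen once, folded into the counts, and discarded.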


Keywords: Bayesian network, memory requirements, conditional probability table, Wisconsin breast cancer, tree augmented naive Bayes. (Keywords were machine-generated, not supplied by the authors.)





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Remco R. Bouckaert, Computer Science Department, University of Waikato, New Zealand
