Averaged Naive Bayes Trees: A New Extension of AODE

  • Conference paper
Advances in Machine Learning (ACML 2009)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5828)

Abstract

Naive Bayes (NB) is a simple Bayesian classifier that assumes conditional independence among attributes given the class; augmented NB (ANB) models extend NB by relaxing this independence assumption. Averaged one-dependence estimators (AODE) is a classifier that averages ODEs, which are ANB models. However, the expressiveness of AODE is still limited by the restricted structure of ODE. In this paper, we propose a model averaging method for NB Trees (NBTs), which have more flexible structures, and present experimental results on classification accuracy. Comparative experiments show that the proposed method outperforms AODE in classification accuracy.
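To make the baseline the abstract builds on concrete, the following is a minimal illustrative sketch of AODE-style scoring for discrete attributes: each attribute in turn acts as a "super-parent", and the per-parent joint estimates are summed. The class name, the Laplace smoothing, and the omission of AODE's usual minimum-frequency threshold are simplifications of mine, not the paper's implementation.

```python
from collections import Counter

class AODESketch:
    """Sketch of Averaged One-Dependence Estimators (AODE).

    Score(y | x) is proportional to
        sum over attributes i of  P(y, x_i) * prod_{j != i} P(x_j | y, x_i),
    i.e. an average over one-dependence estimators, one per super-parent i.
    Laplace smoothing is used throughout; AODE's frequency threshold is omitted.
    """

    def fit(self, X, y):
        self.n = len(X)
        self.d = len(X[0])
        self.classes = sorted(set(y))
        # Value domain of each attribute, for smoothing denominators.
        self.vals = [sorted({row[j] for row in X}) for j in range(self.d)]
        self.c_yi = Counter()   # counts of (class, attr i, value v_i)
        self.c_yij = Counter()  # counts of (class, i, v_i, j, v_j)
        for row, c in zip(X, y):
            for i, vi in enumerate(row):
                self.c_yi[(c, i, vi)] += 1
                for j, vj in enumerate(row):
                    self.c_yij[(c, i, vi, j, vj)] += 1
        return self

    def predict(self, x):
        best, best_score = None, -1.0
        for c in self.classes:
            score = 0.0
            for i, vi in enumerate(x):
                # P(y, x_i) with Laplace smoothing
                p = (self.c_yi[(c, i, vi)] + 1.0) / (
                    self.n + len(self.classes) * len(self.vals[i]))
                for j, vj in enumerate(x):
                    if j == i:
                        continue
                    # P(x_j | y, x_i) with Laplace smoothing
                    p *= (self.c_yij[(c, i, vi, j, vj)] + 1.0) / (
                        self.c_yi[(c, i, vi)] + len(self.vals[j]))
                score += p
            if score > best_score:
                best, best_score = c, score
        return best
```

The paper's proposal replaces the rigid one-dependence structure of each averaged component with an NB Tree (a decision tree with NB leaf classifiers), keeping the averaging idea while allowing more flexible structures.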



Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kurokawa, M., Yokoyama, H., Sakurai, A. (2009). Averaged Naive Bayes Trees: A New Extension of AODE. In: Zhou, Z.-H., Washio, T. (eds.) Advances in Machine Learning. ACML 2009. Lecture Notes in Computer Science (LNAI), vol. 5828. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-05224-8_16

  • DOI: https://doi.org/10.1007/978-3-642-05224-8_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-05223-1

  • Online ISBN: 978-3-642-05224-8

  • eBook Packages: Computer Science (R0)
