Averaged Naive Bayes Trees: A New Extension of AODE

  • Mori Kurokawa
  • Hiroyuki Yokoyama
  • Akito Sakurai
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5828)

Abstract

Naive Bayes (NB) is a simple Bayesian classifier that assumes conditional independence among attributes given the class; augmented NB (ANB) models extend NB by relaxing this independence assumption. Averaged one-dependence estimators (AODE) is a classifier that averages one-dependence estimators (ODEs), which are ANB models. However, the expressiveness of AODE is still limited by the restricted structure of ODEs. In this paper, we propose a model-averaging method for naive Bayes trees (NBTs), which have flexible structures, and present experimental results in terms of classification accuracy. Comparative experiments show that the proposed method outperforms AODE in classification accuracy.
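To make the baseline concrete: AODE forms, for each attribute x_j, a one-dependence estimator in which every other attribute is conditioned on both the class and x_j (the "super-parent"), and then averages the resulting joint estimates P(c, x). The sketch below is an illustration of that averaging scheme, not the paper's proposed NBT-averaging method; it is simplified in that it averages over all attributes as super-parents (full AODE applies a minimum-frequency threshold to candidate parents) and uses plain Laplace smoothing. All class and method names are our own.

```python
from collections import defaultdict


class AODE:
    """Minimal averaged one-dependence estimators over discrete attributes.

    One ODE per attribute j: P(c, x) ~ P(c, x_j) * prod_i P(x_i | c, x_j).
    The final estimate averages these ODEs over all j. Laplace smoothing
    with parameter alpha; no super-parent frequency threshold (a
    simplification relative to full AODE).
    """

    def __init__(self, alpha=1.0):
        self.alpha = alpha

    def fit(self, X, y):
        self.n = len(y)
        self.n_attrs = len(X[0])
        self.classes = sorted(set(y))
        # Observed value set per attribute, for smoothing denominators.
        self.values = [sorted({row[i] for row in X}) for i in range(self.n_attrs)]
        self.c_cj = defaultdict(int)    # counts of (class, attr j, value of j)
        self.c_cji = defaultdict(int)   # counts of (class, j, x_j, i, x_i)
        for row, c in zip(X, y):
            for j, xj in enumerate(row):
                self.c_cj[(c, j, xj)] += 1
                for i, xi in enumerate(row):
                    self.c_cji[(c, j, xj, i, xi)] += 1
        return self

    def _joint(self, c, x):
        """Averaged estimate of P(c, x) over all super-parent choices j."""
        total = 0.0
        for j, xj in enumerate(x):
            # P(c, x_j), Laplace-smoothed over all (class, value) cells.
            p = (self.c_cj[(c, j, xj)] + self.alpha) / (
                self.n + self.alpha * len(self.classes) * len(self.values[j]))
            for i, xi in enumerate(x):
                if i == j:
                    continue
                # P(x_i | c, x_j), smoothed over the values of attribute i.
                p *= (self.c_cji[(c, j, xj, i, xi)] + self.alpha) / (
                    self.c_cj[(c, j, xj)] + self.alpha * len(self.values[i]))
            total += p
        return total / self.n_attrs

    def predict(self, x):
        return max(self.classes, key=lambda c: self._joint(c, x))
```

Because each estimator conditions the remaining attributes on one super-parent, the averaged model can represent pairwise interactions that plain NB cannot, e.g. an XOR-like relation between two attributes.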

Keywords

naive Bayes · augmented naive Bayes · averaged one-dependence estimators · naive Bayes trees · model averaging

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Mori Kurokawa (1)
  • Hiroyuki Yokoyama (1)
  • Akito Sakurai (2)
  1. KDDI R&D Laboratories, Inc., Saitama, Japan
  2. Keio University, Kanagawa, Japan