Learning Naïve Bayes Tree for Conditional Probability Estimation

  • Han Liang
  • Yuhong Yan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4013)

Abstract

Naïve Bayes Tree uses a decision tree as its general structure and deploys naïve Bayesian classifiers at the leaves. The intuition is that naïve Bayesian classifiers work better than decision trees when the sample set is small. Therefore, after several attribute splits in constructing a decision tree, it is better to place naïve Bayesian classifiers at the leaves than to continue splitting on attributes. In this paper, we propose a learning algorithm that improves conditional probability estimation within the Naïve Bayes Tree paradigm. The motivation for this work is that, in cost-sensitive learning where costs are associated with conditional probabilities, the score function is optimized when the estimates of the conditional probabilities are accurate. An additional benefit is that both classification accuracy and the Area Under the Curve (AUC) can be improved. On a large suite of benchmark sample sets, our experiments show that the CLL tree significantly outperforms state-of-the-art learning algorithms, such as Naïve Bayes Tree and naïve Bayes, in yielding accurate conditional probability estimates while also improving classification accuracy and AUC.
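
The first sentences describe the model's structure: decision-tree splits partition the data, and each leaf holds a local naïve Bayes model whose posteriors serve as the conditional probability estimates. Below is a minimal sketch of that idea (a hypothetical illustration, not the authors' implementation; the names NaiveBayesLeaf and conditional_log_likelihood are ours), assuming a decision tree has already routed a subset of the training instances to the leaf:

```python
# A minimal, hypothetical sketch of the Naive Bayes Tree idea (illustrative
# only; not the authors' implementation). A decision tree is assumed to have
# already routed a subset of the training data to this leaf; the leaf fits a
# Laplace-smoothed naive Bayes model whose posteriors serve as the
# conditional probability estimates P(class | attributes).
from collections import Counter, defaultdict
import math

class NaiveBayesLeaf:
    def __init__(self, rows, labels, n_values):
        # rows: discrete attribute vectors reaching this leaf
        # n_values[j]: number of values attribute j can take (for smoothing)
        self.n_values = n_values
        self.classes = sorted(set(labels))
        self.class_counts = Counter(labels)
        self.n = len(labels)
        self.attr_counts = defaultdict(int)  # (attr index, value, class) -> count
        for row, c in zip(rows, labels):
            for j, v in enumerate(row):
                self.attr_counts[(j, v, c)] += 1

    def predict_proba(self, row):
        """Laplace-smoothed posterior P(c | row) for each class c."""
        log_post = {}
        for c in self.classes:
            # Laplace estimation: add one to every count.
            lp = math.log((self.class_counts[c] + 1) / (self.n + len(self.classes)))
            for j, v in enumerate(row):
                num = self.attr_counts[(j, v, c)] + 1
                den = self.class_counts[c] + self.n_values[j]
                lp += math.log(num / den)
            log_post[c] = lp
        z = sum(math.exp(lp) for lp in log_post.values())
        return {c: math.exp(lp) / z for c, lp in log_post.items()}

def conditional_log_likelihood(leaf, rows, labels):
    """CLL score: sum of log P(true class | attributes) over a sample."""
    return sum(math.log(leaf.predict_proba(r)[c]) for r, c in zip(rows, labels))

# Tiny usage example with two binary attributes:
leaf = NaiveBayesLeaf([(0, 1), (1, 1), (1, 0)], ["+", "+", "-"], n_values=[2, 2])
print(leaf.predict_proba((1, 1)))  # P('+' | (1,1)) is about 0.72 here
print(conditional_log_likelihood(leaf, [(1, 1)], ["+"]))
```

Laplace smoothing keeps every leaf posterior strictly positive, which matters here because the conditional log likelihood score takes logarithms of those estimates.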

Keywords

Decision Tree · Conditional Probability · Conditional Independence · Class Probability · Laplace Estimation

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Han Liang (1)
  • Yuhong Yan (2)
  1. Faculty of Computer Science, University of New Brunswick, Fredericton, Canada
  2. National Research Council of Canada, Fredericton, Canada
