
Probabilistic Inference Trees for Classification and Ranking

  • Jiang Su
  • Harry Zhang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4013)

Abstract

In many applications, an accurate ranking of instances is as important as accurate classification. However, it has been observed that traditional decision trees perform well in classification but poorly in ranking. In this paper, we point out that there is an inherent obstacle that prevents traditional decision trees from achieving both accurate classification and accurate ranking. We propose to understand decision trees from a probabilistic perspective, and to use probability theory to compute probability estimates and perform classification and ranking. The new model is called probabilistic inference trees (PITs). Our experiments show that the PIT learning algorithm performs well in both ranking and classification. More precisely, it significantly outperforms the state-of-the-art decision tree learning algorithms designed for ranking, such as C4.4 [10] and Ling and Yan’s algorithm [6], and performs competitively with traditional decision tree learning algorithms, such as C4.5, in classification. Our research provides a novel algorithm for applications in which both accurate classification and ranking are desired.
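
The ranking problem the abstract refers to stems from how class probability estimates are computed at tree leaves: raw training-instance frequencies often yield coarse 0/1 estimates that give poor rankings (low AUC) even when classification accuracy is good. The sketch below illustrates this with Laplace-corrected leaf estimates, the standard remedy used by C4.4 [10], which the paper takes as a baseline; it is not the PIT algorithm itself, and the class labels and counts are purely hypothetical.

```python
# Illustrative sketch only: Laplace-corrected class probability estimates at a
# decision-tree leaf, as in C4.4 [10]. This is a baseline the paper compares
# against, not the PIT algorithm; labels and counts below are hypothetical.

def leaf_class_probabilities(class_counts, laplace=True):
    """Return P(class | leaf) from the training-instance counts at a leaf.

    class_counts: dict mapping class label -> number of training instances
                  of that class reaching the leaf.
    With laplace=True, each count is smoothed by adding 1, avoiding the
    coarse 0/1 estimates that hurt ranking (AUC) in traditional trees.
    """
    k = len(class_counts)                 # number of classes
    total = sum(class_counts.values())    # instances reaching this leaf
    if laplace:
        return {c: (n + 1) / (total + k) for c, n in class_counts.items()}
    return {c: (n / total if total else 1.0 / k) for c, n in class_counts.items()}


if __name__ == "__main__":
    # A pure leaf: 8 positive and 0 negative training instances.
    counts = {"+": 8, "-": 0}
    print(leaf_class_probabilities(counts, laplace=False))  # {'+': 1.0, '-': 0.0}
    print(leaf_class_probabilities(counts, laplace=True))   # {'+': 0.9, '-': 0.1}
```

Both versions classify the leaf identically, but only the smoothed estimates differentiate between leaves of differing purity, which is what ranking by class probability requires.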

Keywords

Decision Tree, Class Probability, Training Instance, Conditional Probability Distribution, Path Attribute

References

  1. Bauer, E., Kohavi, R.: An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants. Machine Learning 36(1-2), 105–139 (1999)
  2. Buntine, W.: Learning Classification Trees. In: Artificial Intelligence Frontiers in Statistics, pp. 182–201. Chapman & Hall, London (1991)
  3. Ferri, C., Flach, P.A., Hernández-Orallo, J.: Improving the AUC of Probabilistic Estimation Trees. In: Proceedings of the 14th European Conference on Machine Learning, pp. 121–132. Springer, Heidelberg (2003)
  4. Hand, D.J., Till, R.J.: A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems. Machine Learning 45, 171–186 (2001)
  5. Kohavi, R.: Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-1996), pp. 202–207. AAAI Press, Menlo Park (1996)
  6. Ling, C.X., Yan, R.J.: Decision Tree with Better Ranking. In: Proceedings of the 20th International Conference on Machine Learning, pp. 480–487. Morgan Kaufmann, San Francisco (2003)
  7. Pazzani, M., Merz, C., Murphy, P., Ali, K., Hume, T., Brunk, C.: Reducing Misclassification Costs. In: Proceedings of the 11th International Conference on Machine Learning, pp. 217–225. Morgan Kaufmann, San Francisco (1994)
  8. Provost, F., Fawcett, T.: Analysis and Visualization of Classifier Performance: Comparison under Imprecise Class and Cost Distributions. In: Proceedings of the Third International Conference on Knowledge Discovery and Data Mining, pp. 43–48. AAAI Press, Menlo Park (1997)
  9. Provost, F., Fawcett, T., Kohavi, R.: The Case Against Accuracy Estimation for Comparing Induction Algorithms. In: Proceedings of the Fifteenth International Conference on Machine Learning, pp. 445–453. Morgan Kaufmann, San Francisco (1998)
  10. Provost, F.J., Domingos, P.: Tree Induction for Probability-Based Ranking. Machine Learning 52(3), 199–215 (2003)
  11. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo (1993)
  12. Smyth, P., Gray, A., Fayyad, U.: Retrofitting Decision Tree Classifiers Using Kernel Density Estimation. In: Proceedings of the Twelfth International Conference on Machine Learning, pp. 506–514. Morgan Kaufmann, San Francisco (1996)
  13. Su, J., Zhang, H.: Representing Conditional Independence Using Decision Trees. In: Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-2005), pp. 874–879. AAAI Press, Menlo Park (2005)
  14. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, San Francisco (2000)
  15. Zhang, H., Su, J.: Conditional Independence Trees. In: Boulicaut, J.-F., Esposito, F., Giannotti, F., Pedreschi, D. (eds.) ECML 2004. LNCS (LNAI), vol. 3201, pp. 513–524. Springer, Heidelberg (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Jiang Su (1)
  • Harry Zhang (1)

  1. Faculty of Computer Science, University of New Brunswick, Fredericton, Canada
