Probabilistic Inference Trees for Classification and Ranking
In many applications, an accurate ranking of instances is as important as accurate classification. However, it has been observed that traditional decision trees perform well in classification but poorly in ranking. In this paper, we point out that there is an inherent obstacle that prevents traditional decision trees from achieving both accurate classification and accurate ranking. We propose to understand decision trees from a probabilistic perspective and to use probability theory to compute probability estimates and perform classification and ranking. The new model is called probabilistic inference trees (PITs). Our experiments show that the PIT learning algorithm performs well in both ranking and classification. More precisely, it significantly outperforms state-of-the-art decision tree learning algorithms designed for ranking, such as C4.4 and Ling and Yan's algorithm, and performs competitively in classification with traditional decision tree learning algorithms, such as C4.5. Our research provides a novel algorithm for applications in which both accurate classification and accurate ranking are desired.
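To see why leaf-based class probability estimates rank poorly, consider how a traditional decision tree scores instances: every instance falling into the same leaf receives the same frequency-based estimate, so pure leaves of very different sizes produce indistinguishable scores. The sketch below (an illustration of this well-known problem and of the Laplace correction used by ranking-oriented trees such as C4.4, not of the paper's PIT algorithm; the counts are made up) contrasts the two estimates:

```python
# Illustration only: raw leaf-frequency estimates vs. the Laplace
# correction used by ranking-oriented trees such as C4.4.
# This is NOT the paper's PIT algorithm; counts are invented examples.

def leaf_frequency(pos, neg):
    """Raw frequency estimate: P(+ | leaf) = pos / (pos + neg)."""
    return pos / (pos + neg)

def laplace_estimate(pos, neg, num_classes=2):
    """Laplace-corrected estimate: (pos + 1) / (pos + neg + num_classes)."""
    return (pos + 1) / (pos + neg + num_classes)

# Two pure positive leaves with very different support get identical raw
# estimates, so all their instances tie in any ranking by score:
print(leaf_frequency(2, 0), leaf_frequency(100, 0))      # 1.0 1.0

# The Laplace correction separates them by the strength of the evidence:
print(laplace_estimate(2, 0))    # 0.75
print(laplace_estimate(100, 0))  # 101/102 ≈ 0.9902
```

A tree with few leaves can therefore emit only a handful of distinct scores under raw frequencies, which is exactly the coarse-ranking behavior the paper's probabilistic treatment is designed to overcome.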
Keywords: Decision Tree, Class Probability, Training Instance, Conditional Probability Distribution, Path Attribute
- 2. Buntine, W.: Learning Classification Trees. In: Artificial Intelligence Frontiers in Statistics, pp. 182–201. Chapman & Hall, London (1991)
- 3. Ferri, C., Flach, P.A., Hernández-Orallo, J.: Improving the AUC of Probabilistic Estimation Trees. In: Proceedings of the 14th European Conference on Machine Learning, pp. 121–132. Springer, Heidelberg (2003)
- 5. Kohavi, R.: Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-1996), pp. 202–207. AAAI Press, Menlo Park (1996)
- 6. Ling, C.X., Yan, R.J.: Decision Tree with Better Ranking. In: Proceedings of the 20th International Conference on Machine Learning, pp. 480–487. Morgan Kaufmann, San Francisco (2003)
- 7. Pazzani, M., Merz, C., Murphy, P., Ali, K., Hume, T., Brunk, C.: Reducing Misclassification Costs. In: Proceedings of the 11th International Conference on Machine Learning, pp. 217–225. Morgan Kaufmann, San Francisco (1994)
- 8. Provost, F., Fawcett, T.: Analysis and Visualization of Classifier Performance: Comparison under Imprecise Class and Cost Distributions. In: Proceedings of the Third International Conference on Knowledge Discovery and Data Mining, pp. 43–48. AAAI Press, Menlo Park (1997)
- 9. Provost, F., Fawcett, T., Kohavi, R.: The Case Against Accuracy Estimation for Comparing Induction Algorithms. In: Proceedings of the Fifteenth International Conference on Machine Learning, pp. 445–453. Morgan Kaufmann, San Francisco (1998)
- 11. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo (1993)
- 12. Smyth, P., Gray, A., Fayyad, U.: Retrofitting Decision Tree Classifiers Using Kernel Density Estimation. In: Proceedings of the Twelfth International Conference on Machine Learning, pp. 506–514. Morgan Kaufmann, San Francisco (1996)
- 13. Su, J., Zhang, H.: Representing Conditional Independence Using Decision Trees. In: Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-2005), pp. 874–879. AAAI Press, Menlo Park (2005)
- 14. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, San Francisco (2000)