Decision Tree Modeling for Ranking Data

Abstract

Ranking (preference) data arise in many applications in marketing, psychology, and politics. We establish a new decision tree model for the analysis of ranking data by adopting the concept of classification and regression trees. The existing splitting criteria are modified so that they precisely measure the impurity of a set of ranking data. Two types of impurity measures for ranking data are introduced, namely g-wise and top-k measures. Theoretical results show that the new measures exhibit the properties of impurity functions. For model assessment, the area under the ROC curve (AUC) is applied to evaluate tree performance. Experiments investigating the predictive performance of the tree model on complete and partially ranked data yield promising results. Finally, a real-world application of the proposed methodology to a set of political ranking data is presented.
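The chapter's exact impurity definitions are not reproduced in this excerpt, but the idea of a g-wise measure can be illustrated for the simplest case g = 2: treat each pair of items as a binary "i preferred to j" outcome and average a Gini-style impurity over all pairs. The function below is a minimal sketch under that assumption; the name `pairwise_gini_impurity` and the encoding of rankings as tuples of ranks are illustrative, not the chapter's notation.

```python
from itertools import combinations

def pairwise_gini_impurity(rankings):
    """Pairwise (g = 2) Gini-style impurity for a set of complete rankings.

    Each ranking is a tuple where rankings[r][i] is the rank assigned to
    item i (lower = more preferred). For every pair of items (i, j), the
    event "i preferred to j" is treated as a binary label, and the binary
    Gini impurities are averaged over all pairs.
    """
    n_items = len(rankings[0])
    n = len(rankings)
    pairs = list(combinations(range(n_items), 2))
    total = 0.0
    for i, j in pairs:
        # Fraction of rankings in this node that prefer item i to item j.
        p = sum(1 for r in rankings if r[i] < r[j]) / n
        total += 2.0 * p * (1.0 - p)  # binary Gini impurity for this pair
    return total / len(pairs)

# A node where every ranking agrees is pure (impurity 0); maximal
# disagreement on every pair gives the binary-Gini maximum of 0.5.
agree = [(1, 2, 3), (1, 2, 3), (1, 2, 3)]
split = [(1, 2, 3), (3, 2, 1)]
print(pairwise_gini_impurity(agree))  # 0.0
print(pairwise_gini_impurity(split))  # 0.5
```

A splitting criterion of the kind the abstract describes would then choose the split of a node that maximizes the reduction in such an impurity, exactly as Gini-based splitting does in ordinary classification trees.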



Acknowledgements

The research of Philip L.H. Yu was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. HKU 7473/05H).

Author information

Correspondence to Philip L. H. Yu.

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Yu, P.L.H., Wan, W.M., Lee, P.H. (2010). Decision Tree Modeling for Ranking Data. In: Fürnkranz, J., Hüllermeier, E. (eds) Preference Learning. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14125-6_5


  • DOI: https://doi.org/10.1007/978-3-642-14125-6_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-14124-9

  • Online ISBN: 978-3-642-14125-6

  • eBook Packages: Computer Science (R0)
