Learning Preference Models from Data: On the Problem of Label Ranking and Its Variants

  • Eyke Hüllermeier
  • Johannes Fürnkranz
Part of the CISM International Centre for Mechanical Sciences book series (CISM, volume 504)

Abstract

The term “preference learning” refers to the application of machine learning methods for inducing preference models from empirical data. In the recent literature, corresponding problems appear in various guises. After a brief overview of the field, this work focuses on a particular learning scenario called label ranking, where the problem is to learn a mapping from instances to rankings over a finite number of labels. Our approach for learning such a ranking function, called ranking by pairwise comparison (RPC), first induces a binary preference relation from suitable training data, using a natural extension of pairwise classification. A ranking is then derived from this relation by means of a ranking procedure. This paper elaborates on a key advantage of such an approach, namely the fact that our learner can be adapted to different loss functions by using different ranking procedures on the same underlying order relations. In particular, risk with respect to the Spearman rank correlation is minimized by using a simple weighted voting procedure. Moreover, we discuss a loss function suitable for settings where candidate labels must be tested successively until a target label is found. In this context, we propose the idea of “empirical conditioning” of class probabilities. A related ranking procedure, called “ranking through iterated choice”, is investigated experimentally.
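The weighted voting step of RPC described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pairwise preference values are hard-coded stand-ins, whereas in practice each value would be produced by a binary classifier trained, as in pairwise classification, on examples involving the two labels.

```python
# Sketch of RPC's weighted voting ranking procedure: each label accumulates
# the predicted probabilities of being preferred in all its pairwise
# comparisons, and labels are sorted by total vote mass.

def rpc_weighted_voting(labels, pairwise_pref):
    """Rank labels by summed weighted votes.

    pairwise_pref[(a, b)] is the predicted probability that label a is
    preferred to label b; we assume pairwise_pref[(b, a)] equals
    1 - pairwise_pref[(a, b)].
    """
    scores = {a: 0.0 for a in labels}
    for a in labels:
        for b in labels:
            if a != b:
                scores[a] += pairwise_pref[(a, b)]
    # Descending total score yields the predicted ranking.
    return sorted(labels, key=lambda a: scores[a], reverse=True)

# Hypothetical pairwise predictions for three labels (illustrative values).
labels = ["a", "b", "c"]
prefs = {
    ("a", "b"): 0.9, ("b", "a"): 0.1,
    ("a", "c"): 0.7, ("c", "a"): 0.3,
    ("b", "c"): 0.8, ("c", "b"): 0.2,
}
print(rpc_weighted_voting(labels, prefs))  # ['a', 'b', 'c']
```

Here label "a" collects 1.6 votes, "b" 0.9, and "c" 0.5, so the induced ranking is a ≻ b ≻ c. Note that the same underlying preference relation could be fed to a different ranking procedure if a different loss function were targeted, which is the flexibility the abstract highlights.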

Copyright information

© CISM, Udine 2008

Authors and Affiliations

  • Eyke Hüllermeier (1)
  • Johannes Fürnkranz (2)
  1. FB Mathematik und Informatik, Philipps-Universität Marburg, Germany
  2. FB Informatik, TU Darmstadt, Germany
