Learning to rank order — a distance-based approach

  • Maria Dobrska
  • Hui Wang
  • William Blackburn
Conference paper


Learning to rank order is a machine learning paradigm different from the common paradigms of learning to classify, cluster, or approximate. It has the potential to reveal more hidden knowledge in data than classification. Cohen, Schapire and Singer were early investigators of this problem; they took a preference-based approach in which pairwise preferences are combined into a total ordering. Knowledge of pairwise preferences, however, is not always available. In this paper we consider a distance-based approach to ordering, in which the ordering of alternatives is predicted on the basis of their distances to a query. To learn such an ordering function we consider two orderings: the actual ordering and the predicted ordering. We aim to maximise the agreement between the two orderings by varying the parameters of a distance function; the resulting trained distance function is taken to be the ordering function. We evaluated this work by comparing the trained distance with the untrained distance in an experiment on public data. Results show that the trained distance leads, in general, to a higher degree of agreement than the untrained distance.
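The approach outlined in the abstract can be sketched in code. The following is a minimal illustration under assumptions, not the paper's implementation: it uses an attribute-weighted Euclidean distance, measures agreement between two orderings as the fraction of concordant pairs (a Kendall-style statistic), and tunes the attribute weights with a simple random hill-climber standing in for the simple genetic algorithm the keywords suggest. All function names and parameters here are assumed for illustration.

```python
import random
from itertools import combinations

def weighted_distance(x, q, w):
    """Attribute-weighted Euclidean distance between an alternative x and a query q."""
    return sum(wi * (xi - qi) ** 2 for wi, xi, qi in zip(w, x, q)) ** 0.5

def predicted_ordering(alternatives, query, w):
    """Order alternative indices by increasing distance to the query."""
    return sorted(range(len(alternatives)),
                  key=lambda i: weighted_distance(alternatives[i], query, w))

def pairwise_agreement(order_a, order_b):
    """Fraction of item pairs ranked in the same relative order (1.0 = identical)."""
    pos_a = {item: r for r, item in enumerate(order_a)}
    pos_b = {item: r for r, item in enumerate(order_b)}
    pairs = list(combinations(order_a, 2))
    concordant = sum((pos_a[i] - pos_a[j]) * (pos_b[i] - pos_b[j]) > 0
                     for i, j in pairs)
    return concordant / len(pairs)

def train_weights(alternatives, query, actual_order, n_dims, iters=2000, seed=0):
    """Hill-climb the attribute weights to maximise agreement between the
    predicted and actual orderings (a stand-in for a simple genetic algorithm)."""
    rng = random.Random(seed)
    w = [1.0] * n_dims  # untrained distance: uniform weights
    best = pairwise_agreement(actual_order,
                              predicted_ordering(alternatives, query, w))
    for _ in range(iters):
        # Perturb the weights and keep the candidate only if agreement improves.
        cand = [max(0.0, wi + rng.gauss(0.0, 0.2)) for wi in w]
        score = pairwise_agreement(actual_order,
                                   predicted_ordering(alternatives, query, cand))
        if score > best:
            w, best = cand, score
    return w, best
```

Because the search starts from uniform weights and accepts only improvements, the trained distance can never agree with the actual ordering less than the untrained one on the training query, mirroring the comparison reported in the abstract.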


Keywords: Attribute Weight, Simple Genetic Algorithm, Actual Ranking, Ranking Distance, Query Vector
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




References

  1. Asuncion, A., Newman, D.: UCI machine learning repository (2007)
  2. Chen, J., Zhao, Z., Ye, J., Liu, H.: Nonlinear adaptive distance metric learning for clustering. In: Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-07), pp. 123–132. ACM (2007)
  3. Cohen, W.W., Schapire, R.E., Singer, Y.: Learning to order things. In: Advances in Neural Information Processing Systems, vol. 10. The MIT Press (1998)
  4. Fentress, S.W.: Exaptation as a means of evolving complex solutions. MSc thesis (2005)
  5. Hirschberg, D.S.: Algorithms for the longest common subsequence problem. Journal of the ACM 24(4), 664–675 (1977)
  6. Hochberg, Y., Rabinovitch, R.: Ranking by pairwise comparisons with special reference to ordering portfolios. American Journal of Mathematical and Management Sciences 20 (2000)
  7. Paredes, R., Vidal, E.: Learning prototypes and distances: a prototype reduction technique based on nearest neighbor error minimization. Pattern Recognition 39(2), 180–188 (2006)
  8. Paredes, R., Vidal, E.: Learning weighted metrics to minimize nearest-neighbor classification error. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(7), 1100–1110 (2006)
  9. Schultz, M., Joachims, T.: Learning a distance metric from relative comparisons. In: Proceedings of Neural Information Processing Systems (NIPS-04) (2004)
  10. Wang, H.: All common subsequences. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-07), pp. 635–640 (2007)
  11. Wilson, D.R., Martinez, T.R.: Improved heterogeneous distance functions. Journal of Artificial Intelligence Research 6, 1–34 (1997)
  12. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann (2005)

Copyright information

© Springer-Verlag London Limited 2009

Authors and Affiliations

  • Maria Dobrska (1)
  • Hui Wang (1)
  • William Blackburn (1)

  1. School of Computing and Mathematics, University of Ulster at Jordanstown, Northern Ireland, UK
