Information Retrieval, Volume 13, Issue 4, pp 346–374

LETOR: A benchmark collection for research on learning to rank for information retrieval

DOI: 10.1007/s10791-009-9123-y

Cite this article as:
Qin, T., Liu, T., Xu, J. et al. Inf Retrieval (2010) 13: 346. doi:10.1007/s10791-009-9123-y

Abstract

LETOR is a benchmark collection for research on learning to rank for information retrieval, released by Microsoft Research Asia. In this paper, we describe the details of the LETOR collection and show how it can be used in different kinds of research. Specifically, we describe how the document corpora and query sets in LETOR were selected, how the documents were sampled, how the learning features and meta information were extracted, and how the datasets were partitioned for comprehensive evaluation. We then compare several state-of-the-art learning to rank algorithms on LETOR, report their ranking performance, and discuss the results. After that, we discuss possible new research topics that can be supported by LETOR, in addition to algorithm comparison. We hope that this paper helps people gain a deeper understanding of LETOR, and enables more interesting research projects on learning to rank and related topics.
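As a concrete illustration of how a learning-to-rank benchmark of this kind is consumed, the sketch below parses one line of an SVMlight-style feature file (a relevance label, a query id, and numbered feature values, optionally followed by a comment). The exact line shown is illustrative, not taken from the paper; the function name is our own.

```python
# Minimal sketch of parsing one line of an SVMlight-style learning-to-rank
# feature file (the format used by LETOR-like benchmarks). The sample line
# and function name below are illustrative assumptions, not from the paper.

def parse_letor_line(line):
    """Parse 'label qid:Q 1:v1 2:v2 ... # comment' into its parts."""
    body = line.split("#", 1)[0].split()   # drop any trailing comment
    label = int(body[0])                   # graded relevance judgment
    qid = body[1].split(":", 1)[1]         # query identifier
    features = {}
    for tok in body[2:]:
        idx, val = tok.split(":", 1)
        features[int(idx)] = float(val)    # feature id -> feature value
    return label, qid, features

label, qid, feats = parse_letor_line(
    "2 qid:10 1:0.031 2:0.666 3:0.500 #docid = GX008-86"
)
```

Grouping parsed lines by `qid` then yields the per-query document lists that ranking algorithms train and evaluate on.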

Keywords

Learning to rank · Information retrieval · Benchmark datasets · Feature extraction

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. Microsoft Research Asia, Beijing, China