Information Retrieval, Volume 13, Issue 4, pp. 346–374

LETOR: A benchmark collection for research on learning to rank for information retrieval

  • Tao Qin (Microsoft Research Asia)
  • Tie-Yan Liu (Microsoft Research Asia)
  • Jun Xu (Microsoft Research Asia)
  • Hang Li (Microsoft Research Asia)


LETOR is a benchmark collection for research on learning to rank for information retrieval, released by Microsoft Research Asia. In this paper, we describe the LETOR collection in detail and show how it can be used in different kinds of research. Specifically, we describe how the document corpora and query sets in LETOR were selected, how the documents were sampled, how the learning features and meta information were extracted, and how the datasets were partitioned for comprehensive evaluation. We then compare several state-of-the-art learning to rank algorithms on LETOR, report their ranking performance, and discuss the results. Finally, we outline possible new research topics that LETOR can support beyond algorithm comparison. We hope that this paper helps readers gain a deeper understanding of LETOR and enables more interesting research projects on learning to rank and related topics.
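The abstract mentions that LETOR packages extracted learning features per query–document pair. As an illustration, the following is a minimal sketch of parsing one line of an SVMlight-style feature file of the kind commonly used for learning-to-rank data; the exact field layout and the example values are assumptions, not taken from the paper.

```python
def parse_letor_line(line):
    """Parse one line of an assumed "<label> qid:<id> <idx>:<val> ... # <comment>" format
    into a (relevance, query_id, features) triple."""
    # Strip the trailing comment (e.g. document metadata), if present.
    body = line.split('#', 1)[0].strip()
    tokens = body.split()
    relevance = int(tokens[0])              # graded relevance label
    query_id = tokens[1].split(':', 1)[1]   # "qid:10" -> "10"
    features = {}
    for tok in tokens[2:]:                  # remaining tokens are index:value pairs
        idx, val = tok.split(':', 1)
        features[int(idx)] = float(val)
    return relevance, query_id, features

# Hypothetical example line in the assumed format:
example = "2 qid:10 1:0.031 2:0.666 3:0.5 # docid = GX000-00-0000000"
rel, qid, feats = parse_letor_line(example)
```

Grouping parsed lines by `query_id` recovers the per-query document lists that ranking algorithms train on.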


Keywords: Learning to rank · Information retrieval · Benchmark datasets · Feature extraction