Computing Information Retrieval Performance Measures Efficiently in the Presence of Tied Scores

  • Frank McSherry
  • Marc Najork
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4956)

Abstract

The Information Retrieval community uses a variety of performance measures to evaluate the effectiveness of scoring functions. In this paper, we show how to adapt six popular measures (precision, recall, F1, average precision, reciprocal rank, and normalized discounted cumulative gain) to cope with scoring functions that are likely to assign many tied scores to the results of a search. Tied scores impose only a partial ordering on the results, so there are multiple possible orderings of the result set, each of which may perform differently. One way to cope with ties is to average the performance value over all orderings consistent with the scores; unfortunately, enumerating all permutations of the result set requires super-exponential time. The approach presented in this paper computes exactly the same performance value as averaging over all permutations, but does so as efficiently as the original, tie-oblivious measures.
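
The key idea can be illustrated on a single measure. As a minimal sketch (not the paper's exact formulation), the following Python function computes tie-aware precision at rank k as the expected precision over all orderings consistent with the scores: within a group of tied documents every permutation is equally likely, so by linearity of expectation each slot occupied by the group contributes rel_in_group/group_size relevant results in expectation. The function name and the example data are invented for illustration.

    from itertools import groupby

    def expected_precision_at_k(scored_docs, relevant, k):
        """scored_docs: list of (doc_id, score); relevant: set of relevant doc_ids."""
        # Sort by descending score; documents with equal scores stay grouped.
        ranked = sorted(scored_docs, key=lambda d: d[1], reverse=True)
        expected_relevant = 0.0
        slots_left = k
        for _, group in groupby(ranked, key=lambda d: d[1]):
            group = list(group)
            rel_in_group = sum(1 for doc_id, _ in group if doc_id in relevant)
            if slots_left >= len(group):
                # The whole tie group fits above the cutoff.
                expected_relevant += rel_in_group
                slots_left -= len(group)
            else:
                # The group straddles the cutoff: each remaining slot is filled
                # by a uniformly random member of the group.
                expected_relevant += slots_left * rel_in_group / len(group)
                break
        return expected_relevant / k

    # Example: documents b, c, d are tied at score 0.5 and straddle the cutoff k = 2.
    docs = [("a", 0.9), ("b", 0.5), ("c", 0.5), ("d", 0.5)]
    print(expected_precision_at_k(docs, relevant={"a", "c"}, k=2))  # 0.666...

In the example, each of the three tied documents is equally likely to occupy rank 2, so the expected precision at 2 is (1 + 1/3)/2 = 2/3, exactly what a brute-force average over the six permutations of the tie group would give, but computed in a single pass over the ranking.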

Keywords

Relevant Result, Average Precision, Ranking Algorithm, Test Collection, Gain Function

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Frank McSherry, Microsoft Research, Mountain View, USA
  • Marc Najork, Microsoft Research, Mountain View, USA
