Computing Information Retrieval Performance Measures Efficiently in the Presence of Tied Scores
The Information Retrieval community uses a variety of performance measures to evaluate the effectiveness of scoring functions. In this paper, we show how to adapt six popular measures — precision, recall, F1, average precision, reciprocal rank, and normalized discounted cumulative gain — to cope with scoring functions that are likely to assign many tied scores to the results of a search. Tied scores impose only a partial ordering on the results, so multiple orderings of the result set are consistent with the scores, and each may perform differently. One way to cope with ties would be to average the performance values over all orderings consistent with the ties; unfortunately, enumerating these permutations of the result set requires super-exponential time. The approach presented in this paper computes precisely the same performance value as averaging over all permutations, but does so as efficiently as the original, tie-oblivious measures.
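To make the idea concrete for the simplest of the six measures, the sketch below compares the brute-force average of precision@k over all orderings consistent with the tie groups against a closed-form expectation: in a uniformly random ordering of a tie group, each slot drawn from the group is relevant with probability r/g, where r is the number of relevant documents in the group and g its size. The function names and the binary-relevance example are illustrative assumptions, not taken from the paper.

```python
import itertools
from fractions import Fraction

def brute_force_expected_precision(groups, k):
    """Average precision@k over every ordering consistent with the tie groups.

    `groups` is a list of tie groups in descending score order; each group is a
    list of 0/1 relevance labels whose internal order is arbitrary.
    """
    total, count = Fraction(0), 0
    # Enumerate all permutations within each group, combined across groups.
    for combo in itertools.product(*(itertools.permutations(g) for g in groups)):
        rels = [r for g in combo for r in g]
        total += Fraction(sum(rels[:k]), k)
        count += 1
    return total / count

def tie_aware_precision(groups, k):
    """Same value in linear time: a group contributing t of its g documents to
    the top k contributes t * (r/g) relevant documents in expectation."""
    expected_relevant, pos = Fraction(0), 0
    for g in groups:
        if pos >= k:
            break
        take = min(len(g), k - pos)
        expected_relevant += Fraction(take * sum(g), len(g))
        pos += len(g)
    return expected_relevant / k
```

For example, with tie groups [[1], [1, 0, 0], [0, 1]] and k = 3, both functions return 5/9: the top-scoring document contributes 1 relevant document, and the two slots drawn from the second group contribute 2/3 in expectation.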
Keywords: Relevant Result, Average Precision, Ranking Algorithm, Test Collection, Gain Function