How Precise Does Document Scoring Need to Be?
We explore the implications of tied scores that arise in the document similarity scoring regimes used when a retrieval engine processes queries. Our investigation has two parts: first, we examine past TREC runs to determine the prevalence and impact of tied scores, and to understand the alternative treatments that might be used to handle them; second, we explore the implications of what might be thought of as “deliberate” tied scores, introduced in order to allow faster search. The first part of our investigation shows that while tied scores had the potential to disrupt TREC evaluations, in practice their effect was relatively minor. The second part helps explain why that was so, and shows that quite marked levels of score rounding can be tolerated without greatly affecting the ability to compare systems. The latter finding opens the door to approximate scoring regimes that provide faster query processing with little or no loss of effectiveness.
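To make the idea of “deliberate” ties concrete, the following sketch (with hypothetical scores and document IDs, not data from the paper) shows how rounding similarity scores to a few decimal places creates tied scores, and how a deterministic tie-break on document ID can then reorder documents relative to the full-precision ranking:

```python
def rank(scores, decimals=None):
    """Rank documents by score (optionally rounded), breaking ties by docid.

    `scores` maps a document ID to its similarity score. When `decimals`
    is given, scores are rounded first, so near-equal scores collapse
    into ties that the docid tie-break then resolves deterministically.
    """
    def key(item):
        docid, score = item
        if decimals is not None:
            score = round(score, decimals)
        return (-score, docid)  # higher score first, then smaller docid
    return [docid for docid, _ in sorted(scores.items(), key=key)]

# Hypothetical scores: d1 and d2 differ only in the third decimal place.
scores = {"d1": 2.4102, "d2": 2.4137, "d3": 1.9981, "d4": 1.9976}

exact = rank(scores)               # full-precision ranking: d2 ahead of d1
coarse = rank(scores, decimals=2)  # rounded scores tie; docid order decides
```

Here `exact` ranks d2 first, while rounding to two decimals ties d1 and d2 (and d3 and d4), so `coarse` places d1 first. The paper's finding is that such reorderings, even at quite coarse rounding levels, have little effect on system comparisons.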
Keywords: Average Precision · Discrimination Ratio · Normalized Discounted Cumulative Gain · Reciprocal Rank