How Precise Does Document Scoring Need to Be?

  • Ziying Yang
  • Alistair Moffat
  • Andrew Turpin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9994)


We explore the implications of tied scores arising in the document similarity scoring regimes that are used when queries are processed in a retrieval engine. Our investigation has two parts: first, we evaluate past TREC runs to determine the prevalence and impact of tied scores, and to understand the alternative treatments that might be used to handle them; and second, we explore the implications of what might be thought of as “deliberate” tied scores, introduced in order to allow for faster search. In the first part of our investigation we show that while tied scores had the potential to be disruptive to TREC evaluations, in practice their effect was relatively minor. The second part of our exploration helps explain why that was so, and shows that quite marked levels of score rounding can be tolerated without greatly affecting the ability to compare systems. The latter finding offers the potential for approximate scoring regimes that provide faster query processing with little or no loss of effectiveness.
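The rounding effect described above can be illustrated with a small sketch (the scores and document ids below are hypothetical, not the paper's data): rounding similarity scores to fewer decimal places collapses near-equal scores into tied groups, and a deterministic tie-breaking rule, such as ordering tied documents by document id, keeps the ranking well defined.

```python
from collections import Counter

# Hypothetical retrieval scores for five documents.
scores = {
    "d1": 12.40731, "d2": 12.40695, "d3": 11.89012,
    "d4": 11.88977, "d5": 10.05213,
}

def rank_with_rounding(scores, digits):
    """Round each score to `digits` decimal places, then rank documents by
    (rounded score descending, docid ascending) so ties break deterministically.
    Returns the ranking and the number of documents involved in a tie."""
    rounded = {d: round(s, digits) for d, s in scores.items()}
    ranking = sorted(rounded, key=lambda d: (-rounded[d], d))
    counts = Counter(rounded.values())
    tied = sum(c for c in counts.values() if c > 1)
    return ranking, tied

full, tied_full = rank_with_rounding(scores, 5)      # full precision: no ties
coarse, tied_coarse = rank_with_rounding(scores, 1)  # one decimal: two tied pairs
```

With five decimal places all five scores are distinct; at one decimal place `d1`/`d2` and `d3`/`d4` collapse into tied pairs, yet the tie-broken ordering here happens to match the full-precision one, which is the kind of robustness to rounding the abstract reports.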


  • Average Precision
  • Discrimination Ratio
  • Normalized Discounted Cumulative Gain
  • Reciprocal Rank
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Department of Computing and Information Systems, The University of Melbourne, Melbourne, Australia
