Optimizing Base Rankers Using Clicks

A Case Study Using BM25
  • Anne Schuth
  • Floor Sietsma
  • Shimon Whiteson
  • Maarten de Rijke
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8416)

Abstract

We study the problem of optimizing an individual base ranker using clicks. Surprisingly, while considerable attention has been paid to using clicks to optimize linear combinations of base rankers, the problem of optimizing an individual base ranker from clicks has been ignored. This problem differs from optimizing a linear combination of base rankers because the scoring function of a base ranker may be highly non-linear. For the sake of concreteness, we focus on the optimization of a specific base ranker, viz. BM25. We first show that significant performance improvements can be obtained by optimizing the parameters of BM25 for individual datasets. We then show that these parameters can also be optimized from clicks, i.e., without manually annotated data, reaching or even beating manually tuned parameters.
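
To make the tunable quantities concrete: the BM25 scoring function has two free parameters, k1 (term-frequency saturation) and b (document-length normalization), and it is these parameters that are tuned per dataset and, ultimately, from clicks. The following is a minimal, illustrative sketch of the standard Okapi BM25 formula; the function and variable names, default values, and the toy example are ours and are not taken from the paper's implementation.

```python
import math
from collections import Counter


def bm25_score(query_terms, doc_terms, doc_freqs, num_docs, avg_doc_len,
               k1=1.2, b=0.75):
    """Okapi BM25 score of one document for one query.

    k1 controls term-frequency saturation and b controls document-length
    normalization; these are the free parameters that can be tuned per
    dataset (set here to the commonly used defaults k1=1.2, b=0.75).
    """
    tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        df = doc_freqs.get(term, 0)
        if df == 0 or tf[term] == 0:
            continue
        # IDF component, kept non-negative by the +1 inside the log.
        idf = math.log(1.0 + (num_docs - df + 0.5) / (df + 0.5))
        # Length-normalized, saturating term-frequency component.
        denom = tf[term] + k1 * (1.0 - b + b * doc_len / avg_doc_len)
        score += idf * tf[term] * (k1 + 1.0) / denom
    return score


# Toy usage: with weak length normalization (b=0.75) the long, term-heavy
# document ranks first; with full length normalization (b=1.0) the short
# document wins, so the best setting depends on the collection.
if __name__ == "__main__":
    docs = {
        "d1": ("clicks clicks clicks optimizing base rankers using "
               "online evaluation and implicit feedback").split(),
        "d2": "clicks and rankers".split(),
    }
    query = ["clicks"]
    doc_freqs = Counter(t for terms in docs.values() for t in set(terms))
    avg_len = sum(len(terms) for terms in docs.values()) / len(docs)
    for k1, b in [(1.2, 0.75), (1.2, 1.0)]:
        scores = {d: bm25_score(query, terms, doc_freqs, len(docs),
                                avg_len, k1, b)
                  for d, terms in docs.items()}
        print((k1, b), sorted(scores, key=scores.get, reverse=True))
```

Because the top-ranked document can flip as (k1, b) change, the choice of these parameters is dataset dependent, which is what makes learning them, for instance from click feedback rather than from manual annotations, worthwhile.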

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Anne Schuth (1)
  • Floor Sietsma (1)
  • Shimon Whiteson (1)
  • Maarten de Rijke (1)

  1. ISLA, University of Amsterdam, The Netherlands
