Optimizing Ranking Models in an Online Setting

  • Conference paper

Advances in Information Retrieval (ECIR 2019)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11437)

Abstract

Online Learning to Rank (OLTR) methods optimize ranking models by directly interacting with users, which allows them to be very efficient and responsive. All OLTR methods introduced during the past decade have extended the original OLTR method: Dueling Bandit Gradient Descent (DBGD). Recently, a fundamentally different approach was introduced with the Pairwise Differentiable Gradient Descent (PDGD) algorithm. To date, the only comparisons of the two approaches are limited to simulations with cascading click models and low levels of noise. The main outcome so far is that PDGD converges at higher levels of performance and learns considerably faster than DBGD-based methods. However, the PDGD algorithm assumes cascading user behavior, potentially giving it an unfair advantage. Furthermore, the robustness of both methods to high levels of noise has not been investigated. Therefore, it is unclear whether the reported advantages of PDGD over DBGD generalize to different experimental conditions. In this paper, we investigate whether the previous conclusions about the PDGD and DBGD comparison generalize from ideal to worst-case circumstances. We do so in two ways. First, we compare the theoretical properties of PDGD and DBGD by taking a critical look at previously proven properties in the context of ranking. Second, we estimate an upper and lower bound on the performance of methods by simulating both ideal user behavior and extremely difficult behavior, i.e., almost-random non-cascading user models. Our findings show that the theoretical bounds of DBGD do not apply to any common ranking model and, furthermore, that the performance of DBGD is substantially worse than PDGD in both ideal and worst-case circumstances. These results reproduce previously published findings about the relative performance of PDGD vs. DBGD and generalize them to extremely noisy and non-cascading circumstances.
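
To make the contrast between the two algorithms concrete, below is a minimal sketch of one update step of each, assuming a linear ranking model f(d) = w · x_d. This is an illustration rather than the authors' implementation (their code is linked under Notes): the interleaved_comparison callable, the step sizes delta, alpha, and lr, and the fixed rho weight are all assumptions made for this sketch.

```python
import numpy as np

def dbgd_step(w, interleaved_comparison, delta=1.0, alpha=0.01, rng=np.random):
    """One DBGD update: perturb the current weights, compare the two
    rankers via an interleaved result list, and step toward the
    perturbation only if clicks prefer it."""
    u = rng.randn(w.shape[0])
    u /= np.linalg.norm(u)      # random direction on the unit sphere
    candidate = w + delta * u   # exploratory ranker
    # interleaved_comparison is assumed to return True when clicks on
    # the interleaving of the two rankings prefer the candidate.
    if interleaved_comparison(w, candidate):
        w = w + alpha * u
    return w

def pdgd_step(w, X, clicked, unclicked, lr=0.1):
    """One PDGD-style update for a linear scorer: every inferred
    preference of a clicked document k over an unclicked document l
    contributes the gradient of the pairwise softmax probability
    P(k > l). PDGD also weights each pair by a ratio of ranking
    probabilities to correct for position bias; that weight is fixed
    to 1 here for brevity."""
    scores = X @ w               # X holds one feature row per document
    grad = np.zeros_like(w)
    for k in clicked:
        for l in unclicked:
            p = 1.0 / (1.0 + np.exp(scores[l] - scores[k]))  # P(k > l)
            rho = 1.0  # placeholder for PDGD's debiasing weight
            grad += rho * p * (1.0 - p) * (X[k] - X[l])
    return w + lr * grad
```

Even this sketch shows why the two methods behave so differently: DBGD extracts a single binary preference from each interaction, whereas PDGD turns every clicked/unclicked document pair into a gradient contribution.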

Notes

  1. The resources for reproducing the experiments in this paper are available at https://github.com/HarrieO/OnlineLearningToRank.
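
As a companion to that repository, the following is a minimal sketch of the kind of cascading click simulation the abstract describes. The probability tables are assumed, illustrative values, not the paper's exact configurations; those live in the linked code.

```python
import numpy as np

def simulate_cascade(relevance, p_click, p_stop, rng=np.random):
    """Cascading user model: scan the ranking top-down, click with a
    relevance-conditioned probability, and stop examining further
    results with some probability after each click."""
    clicks = []
    for rank, rel in enumerate(relevance):
        if rng.rand() < p_click[rel]:
            clicks.append(rank)
            if rng.rand() < p_stop[rel]:
                break
    return clicks

# Illustrative settings (assumed values, not the paper's exact tables).
# Near-uniform click probabilities with zero stopping probability
# approximate the worst-case almost-random, non-cascading condition.
perfect = dict(p_click={0: 0.0, 1: 0.5, 2: 1.0},
               p_stop={0: 0.0, 1: 0.5, 2: 1.0})
almost_random = dict(p_click={0: 0.4, 1: 0.5, 2: 0.6},
                     p_stop={0: 0.0, 1: 0.0, 2: 0.0})

clicks = simulate_cascade([2, 0, 1, 0], **almost_random)
```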


Acknowledgements

This research was supported by Ahold Delhaize, the Association of Universities in the Netherlands (VSNU), the Innovation Center for Artificial Intelligence (ICAI), and the Netherlands Organization for Scientific Research (NWO) under project nr 612.001.551. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.

Author information

Correspondence to Harrie Oosterhuis.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Oosterhuis, H., de Rijke, M. (2019). Optimizing Ranking Models in an Online Setting. In: Azzopardi, L., Stein, B., Fuhr, N., Mayr, P., Hauff, C., Hiemstra, D. (eds) Advances in Information Retrieval. ECIR 2019. Lecture Notes in Computer Science, vol 11437. Springer, Cham. https://doi.org/10.1007/978-3-030-15712-8_25

  • DOI: https://doi.org/10.1007/978-3-030-15712-8_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-15711-1

  • Online ISBN: 978-3-030-15712-8

  • eBook Packages: Computer Science, Computer Science (R0)
