Discounted Cumulative Gain and User Decision Models

  • Georges Dupret
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7024)

Abstract

We propose to explain Discounted Cumulative Gain (DCG) as the consequence of a set of hypotheses, embodied in a generative probabilistic model, about how users browse the ranked result list of a search engine. This exercise of reconstructing a user model from a metric shows that the numerical values of the discount factors can be estimated from data. It also allows us to compare candidate user models in terms of their ability to describe the observed data, and hence to select the best one. It is generally not possible to relate the performance of a ranking function measured in DCG to the clicks observed after the function is deployed in a production environment. We show in this paper that a user model makes this possible. Finally, we show that DCG can be interpreted as a measure of the utility a user gains per unit of effort she is ready to allocate. This contrasts nicely with a recent interpretation of average precision (AP), another popular Information Retrieval metric, as a measure of the effort needed to achieve a unit of utility [7].
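As a minimal illustration of the metric discussed above, the following sketch computes DCG for a ranked list of graded relevance judgments. The exponential gain and logarithmic discount used here are the conventional textbook choices, not values taken from this paper; the paper's point is precisely that the discount factors need not be fixed a priori but can be estimated from click data via a user model, which the `discount` parameter is meant to suggest.

```python
from math import log2

def dcg(relevances, discount=None):
    """Discounted Cumulative Gain for a ranked list of graded relevance labels.

    `discount` maps a 1-based rank to a discount factor. The default,
    1 / log2(rank + 1), is the conventional choice; a learned discount
    function estimated from observed user behavior can be passed instead.
    """
    if discount is None:
        discount = lambda rank: 1.0 / log2(rank + 1)
    # Gain (2^rel - 1) rewards highly relevant documents; each position's
    # gain is weighted by how likely a user is to examine that rank.
    return sum((2 ** rel - 1) * discount(rank)
               for rank, rel in enumerate(relevances, start=1))

# Example: judgments 3 (perfect), 2 (good), 0 (bad) for the top 3 results.
print(round(dcg([3, 2, 0]), 3))
```

Swapping in a discount function fitted to click logs, rather than the logarithmic default, is what connects the offline metric to the online behavior the paper analyzes.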

Keywords

Discount Factor, User Model, User Behavior, Average Precision, Ranking Function


References

  1. Bollmann, P., Raghavan, V.V.: A utility-theoretic analysis of expected search length. In: SIGIR 1988, pp. 245–256. ACM, New York (1988)
  2. Buckley, C., Voorhees, E.M.: Retrieval evaluation with incomplete information. In: SIGIR 2004, pp. 25–32. ACM, New York (2004)
  3. Carterette, B., Jones, R.: Evaluating search engines by modeling the relationship between relevance and clicks. Advances in Neural Information Processing Systems 20, 217–224 (2008)
  4. Craswell, N., Zoeter, O., Taylor, M., Ramsey, B.: An experimental comparison of click position-bias models. In: WSDM 2008: First ACM International Conference on Web Search and Data Mining (2008)
  5. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. J. R. Statist. Soc. B 39, 1–38 (1977)
  6. Dupret, G.: User models to compare and evaluate web IR metrics. In: Proceedings of the SIGIR 2009 Workshop on the Future of IR Evaluation (2009), http://staff.science.uva.nl/~kamps/ireval/papers/georges.pdf
  7. Dupret, G., Piwowarski, B.: A user behavior model for average precision and its generalization to graded judgments. In: Proceedings of the 33rd ACM SIGIR Conference (2010)
  8. Fuhr, N.: A probability ranking principle for interactive information retrieval. In: Information Retrieval. Springer, Heidelberg (2008)
  9. Granka, L., Joachims, T., Gay, G.: Eye-tracking analysis of user behavior in WWW search. In: ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pp. 478–479 (2004)
  10. Guo, F., Liu, C., Wang, Y.M.: Efficient multiple-click models in web search. In: WSDM 2009: Proceedings of the Second ACM International Conference on Web Search and Data Mining, pp. 124–131. ACM, New York (2009)
  11. Järvelin, K., Kekäläinen, J.: Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (ACM TOIS) 20(4), 222–246 (2002)
  12. Kelly, D.: Methods for Evaluating Interactive Information Retrieval Systems with Users. Foundations and Trends in Information Retrieval, vol. 3 (2009)
  13. Moffat, A., Zobel, J.: Rank-biased precision for measurement of retrieval effectiveness. ACM Trans. Inf. Syst. 27(1), 1–27 (2008)
  14. Robertson, S.: A new interpretation of average precision. In: SIGIR 2008, pp. 689–690. ACM, New York (2008)
  15. Voorhees, E.M., Harman, D. (eds.): TREC: Experiment and Evaluation in Information Retrieval. MIT Press, Cambridge (2005)
  16. Yilmaz, E., Aslam, J.A., Robertson, S.: A new rank correlation coefficient for information retrieval. In: SIGIR 2008: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 587–594. ACM, New York (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Georges Dupret
    1. Yahoo!, USA
