Using Clicks as Implicit Judgments: Expectations Versus Observations

  • Falk Scholer
  • Milad Shokouhi
  • Bodo Billerbeck
  • Andrew Turpin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4956)

Abstract

Clickthrough data has become increasingly popular as an implicit indicator of user feedback. Previous analysis has suggested that user click behaviour is subject to a quality bias—that is, users click at different rank positions when viewing effective search results than when viewing less effective search results. Based on this observation, it should be possible to use click data to infer the quality of the underlying search system. In this paper we carry out a user study to systematically investigate how click behaviour changes for different levels of search system effectiveness, as measured by information retrieval performance metrics. Our results show that click behaviour does not vary systematically with the quality of search results. However, click behaviour does vary significantly between individual users and between search topics. This suggests that using direct click behaviour—click rank and click frequency—to infer the quality of the underlying search system is problematic. Further analysis of our user click data indicates that the correspondence between clicks in a search result list and subsequent confirmation that the clicked resource is actually relevant is low. Using clicks as an implicit indication of relevance should therefore be done with caution.


References

  1. Agichtein, E., Brill, E., Dumais, S.: Improving web search ranking by incorporating user behavior information. In: Efthimiadis et al. (eds.) [7], pp. 19–26
  2. Agichtein, E., Brill, E., Dumais, S., Ragno, R.: Learning user interaction models for predicting web search result preferences. In: Efthimiadis et al. (eds.) [7], pp. 3–10
  3. Allan, J., Carterette, B., Lewis, J.: When will information retrieval be “good enough”? In: Marchionini et al. (eds.) [15], pp. 433–440
  4. Bailey, P., Craswell, N., Hawking, D.: Engineering a multi-purpose test collection for web retrieval experiments. Information Processing and Management 39(6), 853–871 (2003)
  5. Buckley, C., Voorhees, E.M.: Retrieval system evaluation. In: TREC: experiment and evaluation in information retrieval [21]
  6. Craswell, N., Szummer, M.: Random walks on the click graph. In: Kraaij et al. (eds.) [14], pp. 239–246
  7. Efthimiadis, E., Dumais, S., Hawking, D., Järvelin, K. (eds.): Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, WA (2006)
  8. Fox, S., Karnawat, K., Mydland, M., Dumais, S., White, T.: Evaluating implicit measures to improve web search. ACM Transactions on Information Systems 23(2), 147–168 (2005)
  9. Harman, D.K.: The TREC test collection. In: TREC: experiment and evaluation in information retrieval [21]
  10. Joachims, T.: Optimizing search engines using clickthrough data. In: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Edmonton, Alberta, Canada, pp. 133–142. ACM Press, New York (2002)
  11. Joachims, T., Granka, L., Pan, B., Hembrooke, H., Gay, G.: Accurately interpreting clickthrough data as implicit feedback. In: Marchionini et al. (eds.) [15], pp. 154–161
  12. Joachims, T., Granka, L., Pan, B., Hembrooke, H., Radlinski, F., Gay, G.: Evaluating the accuracy of implicit feedback from clicks and query reformulations in web search. ACM Transactions on Information Systems 25(2), 7 (2007)
  13. Kemp, C., Ramamohanarao, K.: Long-term learning for web search engines. In: Proceedings of the 6th European Conference on Principles of Data Mining and Knowledge Discovery, London, UK, pp. 263–274. Springer, Heidelberg (2002)
  14. Kraaij, W., de Vries, A., Clarke, C., Fuhr, N., Kando, N. (eds.): Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, The Netherlands (2007)
  15. Marchionini, G., Moffat, A., Tait, J., Baeza-Yates, R., Ziviani, N. (eds.): Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Salvador, Brazil (2005)
  16. Radlinski, F., Joachims, T.: Query chains: learning to rank from implicit feedback. In: Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, Chicago, Illinois, USA, pp. 239–248 (2005)
  17. Radlinski, F., Joachims, T.: Active exploration for learning rankings from clickthrough data. In: Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Jose, California, pp. 570–579 (2007)
  18. Turpin, A., Scholer, F.: User performance versus precision measures for simple search tasks. In: Efthimiadis et al. (eds.) [7], pp. 11–18
  19. Turpin, A., Scholer, F., Billerbeck, B., Abel, L.: Examining the pseudo-standard web search engine results page. In: Proceedings of the 11th Australasian Document Computing Symposium, Brisbane, Australia, pp. 9–16 (2006)
  20. Turpin, A., Tsegay, Y., Hawking, D., Williams, H.E.: Fast generation of result snippets in web search. In: Kraaij et al. (eds.) [14], pp. 127–134
  21. Voorhees, E.M., Harman, D.K.: TREC: experiment and evaluation in information retrieval. MIT Press, Cambridge (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Falk Scholer (1)
  • Milad Shokouhi (2)
  • Bodo Billerbeck (2)
  • Andrew Turpin (1)
  1. School of Computer Science and IT, RMIT University, Melbourne, Australia
  2. Microsoft Research, Cambridge, UK
