Information Retrieval, Volume 11, Issue 3, pp 251–265

A probability ranking principle for interactive information retrieval
Abstract

The classical Probability Ranking Principle (PRP) forms the theoretical basis for probabilistic Information Retrieval (IR) models, which have dominated IR theory for about two decades. However, the assumptions underlying the PRP often do not hold, and its view is too narrow for interactive information retrieval (IIR). In this article, a new theoretical framework for interactive retrieval is proposed: The basic idea is that during IIR, a user moves between situations. In each situation, the system presents the user with a list of choices, about which s/he has to decide, and the first positive decision moves the user to a new situation. Each choice is associated with a number of cost and probability parameters. Based on these parameters, an optimum ordering of the choices can then be derived—the PRP for IIR. The relationship of this rule to the classical PRP is described, and issues for further research are pointed out.
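The ordering rule itself is derived in the body of the paper; as a rough illustration of the idea sketched in the abstract, the following Python snippet ranks the choices of a situation by expected benefit per unit of effort. The parameter names (`effort`, `p_accept`, `benefit`) and the specific ratio are illustrative assumptions for this sketch, not the paper's derived formula.

```python
from dataclasses import dataclass

@dataclass
class Choice:
    """One choice offered to the user in a situation (illustrative model)."""
    label: str
    effort: float    # assumed: expected cost of inspecting this choice
    p_accept: float  # assumed: probability of a positive user decision
    benefit: float   # assumed: benefit gained on a positive decision

def rank_choices(choices):
    # Order by expected benefit per unit of effort (an assumed rule, not
    # the paper's exact derivation). With uniform effort and benefit this
    # reduces to the classical PRP: ranking by probability of relevance.
    return sorted(choices, key=lambda c: c.p_accept * c.benefit / c.effort,
                  reverse=True)

ranked = rank_choices([
    Choice("A", effort=1.0, p_accept=0.9, benefit=1.0),  # score 0.9
    Choice("B", effort=1.0, p_accept=0.5, benefit=3.0),  # score 1.5
    Choice("C", effort=2.0, p_accept=0.8, benefit=2.0),  # score 0.8
])
print([c.label for c in ranked])  # → ['B', 'A', 'C']
```

Note how a choice with a lower acceptance probability ("B") can still rank first when its expected payoff per unit of effort is higher, which is exactly where such a rule departs from ranking by probability alone.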

Keywords

Probabilistic retrieval · Interactive retrieval · Optimum retrieval rule

References

  1. Bates, M. J. (1989). The design of browsing and berrypicking techniques for the online search interface. Online Review, 13(5), 407–424. http://www.gseis.ucla.edu/faculty/bates/berrypicking.html.
  2. Belkin, N., Oddy, R., & Brooks, H. (1982). ASK for information retrieval: Part I. Background and theory. The Journal of Documentation, 38(2), 61–71.
  3. Bookstein, A. (1983). Outline of a general probabilistic retrieval model. The Journal of Documentation, 39(2), 63–72.
  4. Borlund, P., & Ingwersen, P. (1998). Measures of relative relevance and ranked half-life: Performance indicators for interactive IR. In SIGIR ’98: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 324–331). New York, NY: ACM Press.
  5. Campbell, I. (2000). Interactive evaluation of the ostensive model using a new test collection of images with multiple relevance assessments. Information Retrieval, 2(1), 89–114.
  6. Carbonell, J., & Goldstein, J. (1998). The use of MMR, diversity-based reranking for reordering documents and producing summaries. In W. B. Croft, A. Moffat, C. J. van Rijsbergen, R. Wilkinson, & J. Zobel (Eds.), Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 335–336). New York: ACM.
  7. Chen, H., & Karger, D. R. (2006). Less is more: Probabilistic models for retrieving fewer relevant documents. In E. N. Efthimiadis, S. T. Dumais, D. Hawking, & K. Järvelin (Eds.), SIGIR 2006: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, Washington, USA, August 6–11, 2006 (pp. 429–436). New York: ACM.
  8. Fuhr, N. (1992). Probabilistic models in information retrieval. The Computer Journal, 35(3), 243–255.
  9. Goldin, D. Q., Smolka, S. A., & Wegner, P. (2006). Interactive computation: The new paradigm. Springer.
  10. Ingwersen, P. (1996). Cognitive perspectives of information retrieval. The Journal of Documentation, 52(1), 3–50.
  11. Joachims, T., Granka, L., Pan, B., Hembrooke, H., Radlinski, F., & Gay, G. (2007). Evaluating the accuracy of implicit feedback from clicks and query reformulations in Web search. ACM Transactions on Information Systems, 25(2), 7.
  12. Malik, S., Klas, C.-P., Fuhr, N., Larsen, B., & Tombros, A. (2006). Designing a user interface for interactive retrieval of structured documents—Lessons learned from the INEX interactive track. In Proceedings of the European Conference on Digital Libraries.
  13. Nottelmann, H., & Fuhr, N. (2003a). Evaluating different methods of estimating retrieval quality for resource selection. In J. Callan, G. Cormack, C. Clarke, D. Hawking, & A. Smeaton (Eds.), Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM.
  14. Nottelmann, H., & Fuhr, N. (2003b). From retrieval status values to probabilities of relevance for advanced IR applications. Information Retrieval, 6(4), 263–388.
  15. O’Day, V. L., & Jeffries, R. (1993). Orienting in an information landscape: How information seekers get from here to there. In Proceedings of the INTERCHI ’93 (pp. 438–445). IOS Press.
  16. Page, L., Brin, S., Motwani, R., & Winograd, T. (1998). The PageRank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Technologies Project.
  17. Robertson, S. E. (1977). The probability ranking principle in IR. The Journal of Documentation, 33, 294–304.
  18. Stirling, K. H. (1975). The effect of document ranking on retrieval system performance: A search for an optimal ranking rule. In Proceedings of the American Society for Information Science 12 (pp. 105–106).
  19. Turpin, A. H., & Hersh, W. (2001). Why batch and user evaluations do not give the same results. In W. B. Croft, D. Harper, D. H. Kraft, & J. Zobel (Eds.), Proceedings of the 24th Annual International Conference on Research and Development in Information Retrieval (pp. 225–231). New York: ACM Press.
  20. Turpin, A., & Scholer, F. (2006). User performance versus precision measures for simple search tasks. In E. N. Efthimiadis, S. T. Dumais, D. Hawking, & K. Järvelin (Eds.), SIGIR 2006: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, Washington, USA, August 6–11, 2006 (pp. 11–18). New York: ACM.
  21. Voorhees, E., & Harman, D. (2000). Overview of the Eighth Text REtrieval Conference (TREC-8). In The Eighth Text REtrieval Conference (TREC-8) (pp. 1–24). Gaithersburg, MD: NIST.
  22. White, R., & Drucker, S. (2007). Investigating behavioral variability in web search. In Proceedings of WWW (pp. 21–30).
  23. White, R. W., Jose, J. M., Ruthven, I., & van Rijsbergen, C. J. (2005). Evaluating implicit feedback models using searcher simulations. ACM Transactions on Information Systems, 23(3), 325–361.
  24. Williamson, J. (2006). Continuous uncertain interaction. PhD thesis, University of Glasgow, Computer Science.
  25. Williamson, J., & Murray-Smith, R. (2004). Granular synthesis for the display of time-varying probability densities. In International Workshop on Interactive Sonification, Bielefeld. http://www.dcs.gla.ac.uk/jhw/ison.pdf.

Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  1. Faculty of Engineering Sciences, Department of Computational and Cognitive Sciences, University of Duisburg-Essen, Duisburg, Germany