A probability ranking principle for interactive information retrieval
The classical Probability Ranking Principle (PRP) forms the theoretical basis for probabilistic Information Retrieval (IR) models, which have dominated IR theory for about 20 years. However, the assumptions underlying the PRP often do not hold, and its view is too narrow for interactive information retrieval (IIR). In this article, a new theoretical framework for interactive retrieval is proposed. The basic idea is that during IIR, a user moves between situations. In each situation, the system presents the user with a list of choices, about which he or she has to decide, and the first positive decision moves the user to a new situation. Each choice is associated with a number of cost and probability parameters. Based on these parameters, an optimum ordering of the choices can be derived: the PRP for IIR. The relationship of this rule to the classical PRP is described, and issues for further research are pointed out.
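The abstract does not state the ranking rule itself, only that each choice carries cost and probability parameters from which an optimum ordering follows. As an illustrative sketch only (not the paper's actual rule), assume each choice has an acceptance probability `p`, an evaluation effort `e`, and a benefit `b` realized on a positive decision; one plausible ordering then ranks choices by expected net benefit. The names `Choice` and `rank_choices` and the three parameters are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Choice:
    label: str
    p: float  # probability of a positive user decision (hypothetical parameter)
    e: float  # effort/cost of evaluating this choice (hypothetical parameter)
    b: float  # benefit if the decision is positive (hypothetical parameter)

def rank_choices(choices):
    """Order choices by decreasing expected net benefit p*b - e (illustrative model only)."""
    return sorted(choices, key=lambda c: c.p * c.b - c.e, reverse=True)
```

For example, a choice with `p=0.9, e=1.0, b=2.0` (expected net benefit 0.8) would be presented before one with `p=0.2, e=1.0, b=2.0` (expected net benefit -0.6); under the classical PRP, by contrast, only the probability term would matter.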
Keywords: Probabilistic retrieval · Interactive retrieval · Optimum retrieval rule
I wish to thank the Glasgow IR group, especially Keith van Rijsbergen, for their hospitality and fruitful discussions during my stay with them in August 2007, while I was writing this article. The suggestions of the three anonymous reviewers were very helpful in improving the initial version of this paper.