Effective Pre-retrieval Query Performance Prediction Using Similarity and Variability Evidence

  • Ying Zhao
  • Falk Scholer
  • Yohannes Tsegay
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4956)

Abstract

Query performance prediction aims to estimate the quality of answers that a search system will return in response to a particular query. In this paper we propose a new family of pre-retrieval predictors based on information at both the collection and document level. Pre-retrieval predictors are important because they can be calculated from information that is available at indexing time; they are therefore more efficient than predictors that incorporate information obtained from actual search results. Experimental evaluation of our approach shows that the new predictors give more consistent performance than previously proposed pre-retrieval methods across a variety of data types and search tasks.
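
As an illustration of what a pre-retrieval predictor computes, the sketch below scores a query using only statistics that are available at indexing time (collection frequency and document frequency), without running the query against the index. The particular weighting used here (a log collection-frequency component multiplied by an inverse-document-frequency component), the toy collection, and the name preretrieval_score are illustrative assumptions for this sketch only; the similarity and variability predictors actually proposed are defined in the full text of the paper.

    # A minimal, self-contained sketch of a pre-retrieval predictor in the
    # spirit described by the abstract: the score is computed from index-time
    # statistics only, with no retrieval run. The weighting below is an
    # illustrative assumption, not the paper's scoring function.

    import math
    from collections import Counter

    # Toy collection: in practice these statistics come from the inverted index.
    documents = [
        "query performance prediction for web search",
        "indexing and compression of large document collections",
        "predicting the difficulty of a query before retrieval",
        "web search engines rank documents for a user query",
    ]

    N = len(documents)                                  # number of documents
    doc_terms = [doc.split() for doc in documents]
    collection_freq = Counter(t for terms in doc_terms for t in terms)
    document_freq = Counter(t for terms in doc_terms for t in set(terms))


    def preretrieval_score(query: str) -> float:
        """Sum, over query terms, a similarity-style weight built purely from
        collection statistics (a hypothetical stand-in for the predictors)."""
        score = 0.0
        for term in query.split():
            cf = collection_freq.get(term, 0)
            df = document_freq.get(term, 0)
            if cf == 0 or df == 0:
                continue                                # unseen terms add no evidence
            tf_component = 1.0 + math.log(cf)           # collection-level similarity
            idf_component = math.log(1.0 + N / df)      # rarity of the term
            score += tf_component * idf_component
        return score


    if __name__ == "__main__":
        for q in ["query performance prediction", "web search", "unrelated zebra"]:
            print(f"{q!r}: {preretrieval_score(q):.3f}")

A higher score suggests the query terms are well represented in the collection and therefore that the search is more likely to retrieve relevant answers; because everything above depends only on index statistics, it can be evaluated before any documents are ranked.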

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Ying Zhao
  • Falk Scholer
  • Yohannes Tsegay
  1. School of Computer Science and IT, RMIT University, Melbourne, Australia
