Modeling the Score Distributions of Relevant and Non-relevant Documents

  • Evangelos Kanoulas
  • Virgil Pavlu
  • Keshi Dai
  • Javed A. Aslam
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5766)

Abstract

Empirical modeling of the score distributions associated with retrieved documents is an essential task for many retrieval applications. In this work, we propose modeling the scores of relevant documents by a mixture of Gaussians and the scores of non-relevant documents by a Gamma distribution. Applying variational inference, we automatically trade off goodness of fit against model complexity. We test our model on traditional retrieval functions and on actual search engines submitted to TREC, and we demonstrate its utility in inferring precision-recall curves. In all experiments our model outperforms the dominant exponential-Gaussian model.
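The following is a minimal, self-contained sketch of the kind of model described above, not the authors' implementation: it fits a Gamma density to non-relevant scores with SciPy and a Gaussian mixture to relevant scores with scikit-learn's BayesianGaussianMixture (used here as a stand-in for the paper's variational inference procedure), then combines the two fitted densities to estimate precision and recall at a score threshold. The helper names and synthetic data are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' code): fit the two score
# densities and use them to estimate precision/recall at a score threshold.
import numpy as np
from scipy import stats
from sklearn.mixture import BayesianGaussianMixture


def fit_score_model(rel_scores, nonrel_scores, max_components=5):
    """Fit a Gamma density to non-relevant scores and a Gaussian mixture
    to relevant scores (hypothetical helper, not from the paper)."""
    # Maximum-likelihood Gamma fit; location fixed at 0 for stability.
    gamma_params = stats.gamma.fit(nonrel_scores, floc=0.0)  # (shape, loc, scale)
    # Variational Bayesian mixture: superfluous components receive
    # near-zero weight, loosely mirroring the paper's model selection.
    gmm = BayesianGaussianMixture(n_components=max_components, random_state=0)
    gmm.fit(np.asarray(rel_scores).reshape(-1, 1))
    return gamma_params, gmm


def mixture_sf(t, gmm):
    """P(score > t) under a one-dimensional Gaussian mixture."""
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_.ravel())  # (k, 1, 1) covariances for 1-D data
    return float(np.sum(gmm.weights_ * stats.norm.sf(t, loc=means, scale=stds)))


def precision_at_threshold(t, gamma_params, gmm, n_rel, n_nonrel):
    """Expected precision of the set of documents scoring above t."""
    exp_rel = n_rel * mixture_sf(t, gmm)
    exp_nonrel = n_nonrel * stats.gamma.sf(t, *gamma_params)
    total = exp_rel + exp_nonrel
    return exp_rel / total if total > 0 else 0.0


if __name__ == "__main__":
    # Synthetic scores: two Gaussian clumps of relevant documents and a
    # Gamma-shaped mass of non-relevant documents.
    rng = np.random.default_rng(0)
    rel = np.concatenate([rng.normal(3.0, 0.5, 80), rng.normal(5.0, 0.7, 40)])
    nonrel = rng.gamma(2.0, 1.0, 2000)
    gamma_params, gmm = fit_score_model(rel, nonrel)
    for t in (2.0, 3.0, 4.0):
        recall = mixture_sf(t, gmm)  # fraction of relevant docs above t
        prec = precision_at_threshold(t, gamma_params, gmm, len(rel), len(nonrel))
        print(f"threshold={t:.1f}  recall~{recall:.2f}  precision~{prec:.2f}")
```

Sweeping the threshold over the score range traces out an estimated precision-recall curve from the fitted densities, which is the application highlighted in the abstract.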



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Evangelos Kanoulas (1)
  • Virgil Pavlu (1)
  • Keshi Dai (1)
  • Javed A. Aslam (1)
  1. College of Computer and Information Science, Northeastern University, Boston, USA
