Iterative Estimation of Document Relevance Score for Pseudo-Relevance Feedback

  • Mozhdeh Ariannezhad
  • Ali Montazeralghaem
  • Hamed Zamani
  • Azadeh Shakery
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10193)


Pseudo-relevance feedback (PRF) is an effective technique for improving retrieval performance by updating the query model using the top retrieved documents. Previous work shows that estimating the effectiveness of feedback documents can substantially affect PRF performance. Following recent studies on the theoretical analysis of PRF models, in this paper we introduce a new constraint which states that documents containing more informative terms for PRF should receive higher relevance scores. Furthermore, we provide a general iterative algorithm that can be applied to any PRF model to ensure that the proposed constraint is satisfied. To this end, the algorithm computes the feedback weights of terms and the relevance scores of feedback documents simultaneously. To study the effectiveness of the proposed algorithm, we modify the log-logistic feedback model, a state-of-the-art PRF model, as a case study. Our experiments on three TREC collections demonstrate that the modified log-logistic model significantly outperforms competitive baselines, with up to \(12\%\) MAP improvement over the original log-logistic model.
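The abstract's iterative algorithm can be pictured as a mutual-reinforcement loop: term weights are aggregated from document scores, and document scores are recomputed from term weights, until convergence. The sketch below is a hypothetical illustration of that general idea, not the authors' exact formulation (their update rules and the log-logistic weighting are defined in the paper itself); the function name and the simple linear updates are assumptions made for this example.

```python
# Hedged sketch (NOT the paper's exact algorithm): an iterative scheme in the
# spirit described in the abstract, where feedback term weights and feedback
# document relevance scores reinforce each other until convergence, similar in
# flavor to HITS-style mutual reinforcement.

def iterative_prf(doc_term_weights, iterations=20, tol=1e-6):
    """doc_term_weights: one dict per feedback document, mapping each term to
    an initial feedback weight (e.g. produced by a base PRF model such as the
    log-logistic model). Returns (doc_scores, term_weights)."""
    n_docs = len(doc_term_weights)
    doc_scores = [1.0 / n_docs] * n_docs  # start from uniform relevance

    for _ in range(iterations):
        # Term weights: documents with higher relevance contribute more.
        term_weights = {}
        for score, doc in zip(doc_scores, doc_term_weights):
            for term, w in doc.items():
                term_weights[term] = term_weights.get(term, 0.0) + score * w

        # Document scores: documents containing more informative (heavier)
        # terms receive higher scores -- the proposed constraint.
        new_scores = [
            sum(term_weights[t] * w for t, w in doc.items())
            for doc in doc_term_weights
        ]
        norm = sum(new_scores) or 1.0
        new_scores = [s / norm for s in new_scores]

        # Stop once scores have stabilized.
        if max(abs(a - b) for a, b in zip(new_scores, doc_scores)) < tol:
            doc_scores = new_scores
            break
        doc_scores = new_scores

    return doc_scores, term_weights
```

In this toy setup, a document whose terms are shared with other highly scored documents (or whose terms carry large base weights) ends up with a higher relevance score, which in turn boosts the expansion weights of its terms.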


Keywords: Pseudo-relevance feedback · Document effectiveness · Axiomatic analysis · Query expansion



This work was supported in part by the Center for Intelligent Information Retrieval. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Mozhdeh Ariannezhad (1)
  • Ali Montazeralghaem (1)
  • Hamed Zamani (2)
  • Azadeh Shakery (1)
  1. School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
  2. Center for Intelligent Information Retrieval, College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, USA
