Improving Ranking and Robustness of Search Systems by Exploiting the Popularity of Documents

  • Ashraf Bah
  • Ben Carterette
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9460)

Abstract

In building Information Retrieval systems, much research is geared towards optimizing a specific aspect of the system. Consequently, many systems improve the effectiveness of search results by striving to outperform a baseline system. Other systems instead focus on improving robustness, minimizing the risk of obtaining, for any topic, a result worse than the baseline's. Both tasks were part of the TREC 2013 and 2014 Web tracks and were undertaken by the track participants. Our work proposes two re-ranking approaches, based on exploiting the popularity of documents with respect to a general topic, that improve effectiveness while also improving the robustness of baseline systems. We use each of the runs submitted to the TREC 2013 and 2014 Web tracks as a baseline, and we show empirically that our algorithms improve both the effectiveness and the robustness of the systems in an overwhelming number of cases, even though the systems that produced those runs employ a variety of retrieval models.
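The abstract does not spell out the two re-ranking algorithms, so the Python fragment below is only a minimal sketch of one plausible reading: treat the "popularity" of a document for a topic as the fraction of submitted runs that return it near the top of their rankings, then interpolate that signal with a baseline run's own normalized retrieval score. The function names, the interpolation weight lam, and the cutoff top_k are illustrative assumptions, not the authors' method.

    # Minimal sketch, NOT the paper's algorithm: popularity = fraction of
    # runs that rank a document in their top-k for a topic; the final score
    # interpolates this with the baseline's min-max-normalized score.
    from collections import defaultdict

    def popularity(runs, topic, top_k=20):
        """Fraction of runs whose top-k for `topic` contains each document.
        `runs` maps run_id -> {topic: [(doc_id, score), ...]}, best first."""
        counts = defaultdict(int)
        for run in runs.values():
            for doc, _ in run.get(topic, [])[:top_k]:
                counts[doc] += 1
        n = max(len(runs), 1)
        return {doc: c / n for doc, c in counts.items()}

    def rerank(baseline, pop, lam=0.5):
        """Re-rank one (doc_id, score) list, best first, by mixing its
        normalized score with document popularity."""
        if not baseline:
            return []
        scores = [s for _, s in baseline]
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0
        rescored = [(doc, (1 - lam) * ((s - lo) / span) + lam * pop.get(doc, 0.0))
                    for doc, s in baseline]
        return sorted(rescored, key=lambda x: x[1], reverse=True)

    # Toy usage with two hypothetical runs for one topic:
    runs = {"runA": {"201": [("d1", 9.1), ("d2", 7.4), ("d3", 5.0)]},
            "runB": {"201": [("d2", 3.2), ("d1", 2.9), ("d4", 1.1)]}}
    pop = popularity(runs, "201", top_k=3)
    print(rerank(runs["runA"]["201"], pop, lam=0.5))

Under this reading, lam would control the effectiveness/robustness trade-off: a larger weight pulls every run toward the consensus of all submitted runs, which tends to cap per-topic losses against the baseline.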


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Computer Sciences, University of Delaware, Newark, USA
