Top-k Retrieval Using Facility Location Analysis

  • Guido Zuccon
  • Leif Azzopardi
  • Dell Zhang
  • Jun Wang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7224)


The top-k retrieval problem aims to find the optimal set of k documents from a number of relevant documents given the user’s query. The key issue is to balance the relevance and diversity of the top-k search results. In this paper, we address this problem using Facility Location Analysis taken from Operations Research, where the locations of facilities are optimally chosen according to some criteria. We show how this analysis technique is a generalization of state-of-the-art retrieval models for diversification (such as the Modern Portfolio Theory for Information Retrieval), which treat the top-k search results like “obnoxious facilities” that should be dispersed as far as possible from each other. However, Facility Location Analysis suggests that the top-k search results could be treated like “desirable facilities” to be placed as close as possible to their customers. This leads to a new top-k retrieval model where the best representatives of the relevant documents are selected. In a series of experiments conducted on two TREC diversity collections, we show that significant improvements can be made over the current state-of-the-art through this alternative treatment of the top-k retrieval problem.
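The "desirable facilities" view described above can be sketched as a greedy k-median selection: rather than dispersing results MMR-style, pick the k documents that best cover (are most similar to) the relevant set, weighted by relevance. The sketch below is illustrative only, not the paper's exact model; all names (`greedy_k_median`, `cosine`) and the coverage objective are assumptions made for this example.

```python
# Hypothetical sketch of "desirable facility" selection: greedily pick k
# documents maximizing relevance-weighted coverage of the candidate set,
#   sum_i rel[i] * max_{s in S} sim(doc_i, doc_s).
# This is a standard greedy heuristic for the k-median objective, not the
# authors' published formulation.

import math

def cosine(a, b):
    """Cosine similarity between two term-weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def greedy_k_median(docs, rel, k):
    """Return indices of k representatives ("facilities") chosen greedily."""
    selected = []
    # similarity of each document to its nearest selected facility so far
    best_sim = [0.0] * len(docs)
    for _ in range(min(k, len(docs))):
        best_gain, best_j = -1.0, None
        for j in range(len(docs)):
            if j in selected:
                continue
            # marginal coverage gain of adding document j as a facility
            gain = sum(rel[i] * max(0.0, cosine(docs[i], docs[j]) - best_sim[i])
                       for i in range(len(docs)))
            if gain > best_gain:
                best_gain, best_j = gain, j
        selected.append(best_j)
        best_sim = [max(best_sim[i], cosine(docs[i], docs[best_j]))
                    for i in range(len(docs))]
    return selected

# Two near-duplicate documents and one distinct one: the greedy pick keeps
# one representative of the duplicate pair plus the distinct document.
print(greedy_k_median([[1, 0], [1, 0.1], [0, 1]], [1, 1, 1], 2))  # → [1, 2]
```

Because the objective rewards proximity to *all* relevant documents rather than distance from already-selected ones, near-duplicates are collapsed into a single representative, which is the behavioral contrast with dispersion-based models such as MMR.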


Keywords: Facility Location, Relevant Document, Facility Location Problem, Heuristic Function, Capacitated Facility Location Problem





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Guido Zuccon (1)
  • Leif Azzopardi (1)
  • Dell Zhang (2)
  • Jun Wang (3)

  1. School of Computing Science, University of Glasgow, UK
  2. DCSIS, Birkbeck, University of London, UK
  3. Department of Computing Science, University College London, UK
