Encyclopedia of Database Systems

Living Edition
Editors: Ling Liu, M. Tamer Özsu

Advanced Information Retrieval Measures

Living reference work entry
DOI: https://doi.org/10.1007/978-1-4899-7993-3_80705-1

Definition

Advanced information retrieval measures are effectiveness measures for information access tasks that go beyond traditional document retrieval. Traditional document retrieval measures target either set retrieval (evaluated with precision, recall, F-measure, etc.) or ad hoc ranked retrieval, the task of ranking documents by relevance (evaluated with average precision, etc.). In contrast, advanced information retrieval measures handle diversified search (the task of retrieving relevant and diverse documents), aggregated search (the task of retrieving from multiple sources or media and merging the results), one-click access (the task of returning a textual multidocument summary instead of a list of URLs in response to a query), and multiquery sessions (information-seeking activities that involve query reformulations), among other tasks. Some advanced measures are based on user models that arguably reflect real user behaviors better than standard measures do.
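
To make the contrast concrete, the following minimal Python sketch (an illustration, not part of the original entry) computes a traditional measure, average precision (AP), alongside a user-model-based measure, expected reciprocal rank (ERR), as defined by Chapelle et al. in the Recommended Reading. The function names, the binary relevance labels for AP, and the 0–3 grade scale for ERR are assumptions made for this example.

```python
def average_precision(ranked_rel, num_relevant):
    """AP over a ranked list of binary relevance labels (1 = relevant)."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(ranked_rel, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at this rank cutoff
    return precision_sum / num_relevant if num_relevant else 0.0


def expected_reciprocal_rank(ranked_grades, max_grade=3):
    """ERR (Chapelle et al.): a simulated user scans the list top-down and
    stops at rank i with probability R_i = (2**g_i - 1) / 2**max_grade."""
    score, p_reach = 0.0, 1.0  # p_reach: probability the user reaches this rank
    for rank, g in enumerate(ranked_grades, start=1):
        r_stop = (2 ** g - 1) / 2 ** max_grade
        score += p_reach * r_stop / rank
        p_reach *= 1.0 - r_stop
    return score


# The same ranking viewed through the two measures (labels are made up):
print(average_precision([1, 0, 1, 0], num_relevant=2))  # 0.8333...
print(expected_reciprocal_rank([3, 0, 1, 0]))           # about 0.880
```

Under ERR's cascade user model, a highly relevant document at rank 1 satisfies the user with high probability, so documents further down the list contribute little; AP, by contrast, credits every relevant document regardless of what the user has already seen. This difference is one sense in which user-model-based measures better reflect real user behavior.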

Historical Background


Recommended Reading

  1. Allan J, Croft B, Moffat A, Sanderson M, editors. Frontiers, challenges and opportunities for information retrieval: report from SWIRL 2012. SIGIR Forum. 2012;46(1):2–32.
  2. Chapelle O, Metzler D, Zhang Y, Grinspan P. Expected reciprocal rank for graded relevance. In: ACM CIKM 2009, Hong Kong; 2009. p. 621–30.
  3. Chapelle O, Ji S, Liao C, Velipasaoglu E, Lai L, Wu SL. Intent-based diversification of web search results: metrics and algorithms. Inf Retr. 2011;14(6):572–92.
  4. Clarke CLA, Craswell N, Soboroff I, Ashkan A. A comparative analysis of cascade measures for novelty and diversity. In: ACM WSDM 2011, Hong Kong; 2011. p. 75–84.
  5. Järvelin K, Kekäläinen J. Cumulated gain-based evaluation of IR techniques. ACM TOIS. 2002;20(4):422–46.
  6. Kanoulas E, Carterette B, Clough PD, Sanderson M. Evaluating multi-query sessions. In: ACM SIGIR 2011, Beijing; 2011. p. 1053–62.
  7. Moffat A, Zobel J. Rank-biased precision for measurement of retrieval effectiveness. ACM TOIS. 2008;27(1):2:1–2:27.
  8. Pollock SM. Measures for the comparison of information retrieval systems. Am Doc. 1968;19(4):387–97.
  9. Robertson SE, Kanoulas E, Yilmaz E. Extending average precision to graded relevance judgments. In: ACM SIGIR 2010, Geneva; 2010. p. 603–10.
  10. Sakai T. Statistical reform in information retrieval? SIGIR Forum. 2014;48(1):3–12.
  11. Sakai T. Topic set size design. Inf Retr J. 2016;19(3):256–83. https://doi.org/10.1007/s10791-015-9273-z
  12. Sakai T, Dou Z. Summaries, ranked retrieval and sessions: a unified framework for information access evaluation. In: ACM SIGIR 2013, Dublin; 2013. p. 473–82.
  13. Sakai T, Song R. Evaluating diversified search results using per-intent graded relevance. In: ACM SIGIR 2011, Beijing; 2011. p. 1043–52.
  14. Sakai T, Kato MP, Song YI. Click the search button and be happy: evaluating direct and immediate information access. In: ACM CIKM 2011, Glasgow; 2011. p. 621–30.
  15. Sakai T. Metrics, statistics, tests. In: PROMISE winter school 2013: bridging between information retrieval and databases, Bressanone. LNCS, vol. 8173; 2014.
  16. Smucker MD, Clarke CLA. Time-based calibration of effectiveness measures. In: ACM SIGIR 2012, Portland; 2012. p. 95–104.
  17. Zhai C, Cohen WW, Lafferty J. Beyond independent relevance: methods and evaluation metrics for subtopic retrieval. In: ACM SIGIR 2003, Toronto; 2003. p. 10–7.

Copyright information

© Springer Science+Business Media LLC 2018

Authors and Affiliations

  1. Waseda University, Tokyo, Japan

Section editors and affiliations

  • Weiyi Meng
  1. Dept. of Computer Science, State University of New York at Binghamton, Binghamton, USA