Advanced Information Retrieval Measures

Living reference work entry in: Encyclopedia of Database Systems

Definition

Advanced information retrieval measures are effectiveness measures for various types of information access tasks that go beyond traditional document retrieval. Traditional document retrieval measures are designed either for set retrieval (evaluated with precision, recall, F-measure, etc.) or for ad hoc ranked retrieval, the task of ranking documents by relevance (evaluated with average precision, etc.). In contrast, advanced information retrieval measures cover tasks such as diversified search (retrieving documents that are both relevant and diverse), aggregated search (retrieving from multiple sources or media and merging the results), one-click access (returning a textual multidocument summary rather than a list of URLs in response to a query), and multiquery sessions (information-seeking activities that involve query reformulations). Some advanced measures are based on user models that arguably reflect real user behavior better than standard measures do.
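To make the contrast concrete, the sketch below (not part of the original entry; the ranking and relevance grades are invented for illustration) computes a traditional ranked-retrieval measure, average precision over binary relevance, alongside expected reciprocal rank (ERR), a user-model-based measure for graded relevance in the spirit of Chapelle et al. (see Recommended Reading).

```python
# Minimal illustration (not from the entry): a traditional measure (average
# precision, binary relevance) next to a user-model-based measure (expected
# reciprocal rank, ERR, graded relevance under a cascade user model).
# The ranking and relevance grades below are hypothetical.

from typing import Sequence


def average_precision(rels: Sequence[int], num_relevant: int) -> float:
    """Average precision over a ranked list of binary relevance labels (1/0).

    `num_relevant` is the total number of relevant documents for the query,
    including any that the system failed to retrieve.
    """
    hits = 0
    precision_sum = 0.0
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / num_relevant if num_relevant else 0.0


def expected_reciprocal_rank(grades: Sequence[int], max_grade: int) -> float:
    """ERR under a cascade user model: the user scans the ranking top-down
    and stops at a rank with a probability determined by that document's grade."""
    err = 0.0
    prob_reach = 1.0  # probability the user reaches the current rank
    for rank, grade in enumerate(grades, start=1):
        # Probability that a document with this grade satisfies the user.
        prob_stop = (2 ** grade - 1) / (2 ** max_grade)
        err += prob_reach * prob_stop / rank
        prob_reach *= 1.0 - prob_stop
    return err


if __name__ == "__main__":
    # Hypothetical graded relevance of the top five retrieved documents
    # (0 = nonrelevant, ..., 3 = highly relevant).
    grades = [3, 0, 1, 2, 0]
    binary = [1 if g > 0 else 0 for g in grades]

    # Assume one additional relevant document exists that was not retrieved,
    # so the query has 4 relevant documents in total.
    print("AP :", round(average_precision(binary, num_relevant=4), 4))
    print("ERR:", round(expected_reciprocal_rank(grades, max_grade=3), 4))
```

ERR models a user who scans the ranking from the top and stops as soon as a document satisfies them, so a highly relevant document at rank 1 sharply discounts the contribution of everything ranked below it; average precision, by contrast, credits every relevant document regardless of what precedes it.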


Recommended Reading

  1. Allan J, Croft B, Moffat A, Sanderson M, editors. Frontiers, challenges and opportunities for information retrieval: report from SWIRL 2012. SIGIR Forum. 2012;46(1):2–32.

  2. Chapelle O, Metzler D, Zhang Y, Grinspan P. Expected reciprocal rank for graded relevance. In: ACM CIKM 2009, Hong Kong, 2009. p. 621–30.

  3. Chapelle O, Ji S, Liao C, Velipasaoglu E, Lai L, Wu SL. Intent-based diversification of web search results: metrics and algorithms. Inf Retr. 2011;14(6):572–92.

  4. Clarke CLA, Craswell N, Soboroff I, Ashkan A. A comparative analysis of cascade measures for novelty and diversity. In: ACM WSDM 2011, Hong Kong, 2011. p. 75–84.

  5. Järvelin K, Kekäläinen J. Cumulated gain-based evaluation of IR techniques. ACM TOIS. 2002;20(4):422–46.

  6. Kanoulas E, Carterette B, Clough PD, Sanderson M. Evaluating multi-query sessions. In: ACM SIGIR 2011, Beijing, 2011. p. 1026–53.

  7. Moffat A, Zobel J. Rank-biased precision for measurement of retrieval effectiveness. ACM TOIS. 2008;27(1):2:1–2:27.

  8. Pollock SM. Measures for the comparison of information retrieval systems. Am Doc. 1968;19(4):387–97.

  9. Robertson SE, Kanoulas E, Yilmaz E. Extending average precision to graded relevance judgments. In: ACM SIGIR 2010, Geneva, 2010. p. 603–10.

  10. Sakai T. Statistical reform in information retrieval? SIGIR Forum. 2014;48(1):3–12.

  11. Sakai T. Inf Retr J. 2016;19:256. https://doi.org/10.1007/s10791-015-9273-z

  12. Sakai T, Dou Z. Summaries, ranked retrieval and sessions: a unified framework for information access evaluation. In: ACM SIGIR 2013, Dublin, 2013. p. 473–82.

  13. Sakai T, Song R. Evaluating diversified search results using per-intent graded relevance. In: ACM SIGIR 2011, Beijing, 2011. p. 1043–52.

  14. Sakai T, Kato MP, Song YI. Click the search button and be happy: evaluating direct and immediate information access. In: ACM CIKM 2011, Glasgow, 2011. p. 621–30.

  15. Sakai T. Metrics, statistics, tests. In: PROMISE winter school 2013: bridging between information retrieval and databases, Bressanone. LNCS, vol 8173. 2014.

  16. Smucker MD, Clarke CLA. Time-based calibration of effectiveness measures. In: ACM SIGIR 2012, Portland, 2012. p. 95–104.

  17. Zhai C, Cohen WW, Lafferty J. Beyond independent relevance: methods and evaluation metrics for subtopic retrieval. In: ACM SIGIR 2003, Toronto, 2003. p. 10–7.

Author information

Correspondence to Tetsuya Sakai.

Copyright information

© 2018 Springer Science+Business Media LLC

About this entry

Cite this entry

Sakai, T. (2018). Advanced Information Retrieval Measures. In: Liu, L., Özsu, M. (eds) Encyclopedia of Database Systems. Springer, New York, NY. https://doi.org/10.1007/978-1-4899-7993-3_80705-1
