
Part of the book series: The Information Retrieval Series ((INRE,volume 29))

Abstract

An important property of information retrieval (IR) system performance is its effectiveness at finding and ranking relevant documents in response to a user query. Research and development in IR requires rapid evaluation of effectiveness in order to test new approaches. This chapter covers the test collections required to evaluate effectiveness as well as traditional and newer measures of effectiveness.
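As a concrete illustration of the kind of effectiveness measures the chapter discusses, the sketch below computes precision at rank k and average precision for a single query, given a system's ranked result list and a set of judged-relevant documents. The function names and the toy data are illustrative, not taken from the chapter.

```python
def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k ranked documents that are judged relevant."""
    return sum(1 for doc in ranking[:k] if doc in relevant) / k

def average_precision(ranking, relevant):
    """Average of precision@k taken at each rank k where a relevant
    document is retrieved, normalized by the total number of relevant
    documents (unretrieved relevant documents contribute zero)."""
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

# Toy example: a ranked list for one query and its relevance judgments.
ranking = ["d3", "d1", "d7", "d2", "d5"]
relevant = {"d3", "d2", "d9"}

print(precision_at_k(ranking, relevant, 3))  # 1 of the top 3 is relevant
print(average_precision(ranking, relevant))
```

Averaging `average_precision` over a set of queries gives mean average precision (MAP), one of the traditional measures evaluated over the test collections this chapter describes.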



Author information


Corresponding author

Correspondence to Ben Carterette.



Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Carterette, B., Voorhees, E.M. (2011). Overview of Information Retrieval Evaluation. In: Lupu, M., Mayer, K., Tait, J., Trippe, A. (eds) Current Challenges in Patent Information Retrieval. The Information Retrieval Series, vol 29. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-19231-9_3


  • DOI: https://doi.org/10.1007/978-3-642-19231-9_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-19230-2

  • Online ISBN: 978-3-642-19231-9

  • eBook Packages: Computer Science (R0)
