Evaluation in Information Retrieval

Lectures on Information Retrieval (ESSIR 2000)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1980)

Abstract

In this talk I summarize the components of a traditional laboratory-style evaluation experiment in information retrieval (as exemplified by TREC) and discuss some of the issues surrounding this form of experiment. Some kinds of research questions fit very well into this framework; others much less easily. The major area of difficulty for the framework concerns the user interface and user information-seeking behaviour. I go on to discuss a series of experiments conducted at City University with the Okapi system, both of the traditional form and of a more user-oriented type. I then discuss the current TREC filtering track, which does not present quite such severe problems, but is nevertheless based on a simple model of how users might interact with the system; this has some effect on the experimental methodology.
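
To make the laboratory framework concrete, the following minimal Python sketch (illustrative only, not taken from the talk) shows the core computation in a TREC-style batch experiment: scoring one system's ranked output for a topic against a set of relevance judgments, here via non-interpolated average precision. The run and judgment data are hypothetical.

    def average_precision(ranked_docs, relevant):
        """Non-interpolated average precision for one topic: the mean of the
        precision values at each rank where a relevant document appears."""
        hits = 0
        precision_sum = 0.0
        for rank, doc in enumerate(ranked_docs, start=1):
            if doc in relevant:
                hits += 1
                precision_sum += hits / rank
        return precision_sum / len(relevant) if relevant else 0.0

    # Hypothetical data: a ranked run that retrieves two of three relevant documents.
    run = ["d3", "d1", "d7", "d2"]        # system output, best first
    qrels = {"d1", "d2", "d9"}            # relevance judgments for the topic
    print(average_precision(run, qrels))  # (1/2 + 2/4) / 3 = 0.333...

In a full experiment this per-topic score would be averaged over the whole topic set and compared across systems.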


References

  1. Sparck Jones, K. (ed.): Information retrieval experiment. Butterworths, London (1981). Also available at http://www.itl.nist.gov/iad/894.02/projects/irlib/pubs/ire/iretoc.html

  2. Robertson, S.E.: The methodology of information retrieval experiment. In: Sparck Jones, K. (ed.) [1], 2–31


  3. Robertson, S.E. and Hancock-Beaulieu, M.: On the evaluation of IR systems. Information Processing & Management 28 (1992) 457–466


  4. A special issue devoted to work with the Okapi system at City University: Journal of Documentation 53(1) (1997). Overview article: Robertson, S.E.: Overview of the Okapi projects, 3–7. User interface evaluation: Beaulieu, M.: Experiments on interfaces to support query expansion, 8–19


  5. Hull, D. and Robertson, S.E.: The TREC-8 Filtering Track Final Report. Available at http://trec.nist.gov/pubs/trec8/t8_proceedings.html

  6. Buckley, C. and Voorhees, E.: Evaluating evaluation measure stability. In: Belkin, N.J., Ingwersen, P. and Leong, M.-K. (eds): SIGIR 2000. ACM Press, New York (2000) 33–40


Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Robertson, S. (2000). Evaluation in Information Retrieval. In: Agosti, M., Crestani, F., Pasi, G. (eds) Lectures on Information Retrieval. ESSIR 2000. Lecture Notes in Computer Science, vol 1980. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45368-7_4

  • DOI: https://doi.org/10.1007/3-540-45368-7_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-41933-4

  • Online ISBN: 978-3-540-45368-0
