CLEF Methodology and Metrics

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2406)

Abstract

We describe the organization of the CLEF 2001 evaluation campaign, outline the guidelines given to participants, and explain the techniques and measures used in CLEF campaigns for result calculation and analysis.
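
The measures referred to in the abstract are the standard recall/precision figures computed from ranked result lists against relevance judgments, as in TREC-style evaluations. The full paper is not reproduced in this excerpt, so the sketch below is only an illustration under that assumption; the function names and sample data are hypothetical and not taken from the paper.

# Illustrative sketch: (mean) average precision over ranked result lists,
# the kind of measure reported in TREC/CLEF-style evaluations.
# All identifiers and data here are hypothetical examples.

def average_precision(ranked_doc_ids, relevant_ids):
    """Mean of the precision values at the rank of each relevant document retrieved."""
    relevant_ids = set(relevant_ids)
    hits = 0
    precision_sum = 0.0
    for rank, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(runs):
    """runs: list of (ranked_doc_ids, relevant_ids) pairs, one per topic."""
    return sum(average_precision(ranked, relevant) for ranked, relevant in runs) / len(runs)

# Hypothetical topic with two relevant documents found at ranks 1 and 3:
# (1/1 + 2/3) / 2 ≈ 0.833
print(average_precision(["d3", "d7", "d1"], {"d3", "d1"}))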





Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Braschler, M., Peters, C. (2002). CLEF Methodology and Metrics. In: Peters, C., Braschler, M., Gonzalo, J., Kluck, M. (eds) Evaluation of Cross-Language Information Retrieval Systems. CLEF 2001. Lecture Notes in Computer Science, vol 2406. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45691-0_37

  • DOI: https://doi.org/10.1007/3-540-45691-0_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-44042-0

  • Online ISBN: 978-3-540-45691-9

  • eBook Packages: Springer Book Archive
