Abstract

We describe the overall organization of the CLEF 2003 evaluation campaign, with a particular focus on the cross-language ad hoc and domain-specific retrieval tracks. The paper discusses the evaluation approach adopted, describes the tracks and tasks offered and the test collections used, and provides an outline of the guidelines given to the participants. It concludes with an overview of the techniques employed for results calculation and analysis for the monolingual, bilingual, multilingual, and GIRT tasks.

Keywords

Information Retrieval · Document Collection · Information Retrieval System · Test Collection · Relevance Assessment


Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Martin Braschler (1, 2)
  • Carol Peters (3)
  1. Eurospider Information Technology AG, Zürich, Switzerland
  2. Institut interfacultaire d’informatique, Université de Neuchâtel, Neuchâtel, Switzerland
  3. ISTI-CNR, Area di Ricerca, Pisa, Italy