On the Evaluation of Snippet Selection for WebCLEF

  • Arnold Overwijk
  • Dong Nguyen
  • Claudia Hauff
  • Dolf Trieschnigg
  • Djoerd Hiemstra
  • Franciska de Jong
Conference paper

DOI: 10.1007/978-3-642-04447-2_103

Part of the Lecture Notes in Computer Science book series (LNCS, volume 5706)
Cite this paper as:
Overwijk A., Nguyen D., Hauff C., Trieschnigg D., Hiemstra D., de Jong F. (2009) On the Evaluation of Snippet Selection for WebCLEF. In: Peters C. et al. (eds) Evaluating Systems for Multilingual and Multimodal Information Access. CLEF 2008. Lecture Notes in Computer Science, vol 5706. Springer, Berlin, Heidelberg

Abstract

WebCLEF aims to support a user, an expert writing a survey article on a specific topic with a clear goal and audience, by generating a ranked list of relevant snippets. This paper focuses on the evaluation methodology of WebCLEF. We show that the evaluation method and test set used for WebCLEF 2007 cannot be used to evaluate new systems, and we give recommendations on how to improve the evaluation.
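The abstract does not reproduce the metric itself, so the following is only a minimal sketch of the kind of pooled snippet evaluation at stake, assuming a character-overlap recall score against assessed relevant spans; the function name, scoring scheme, and example data are illustrative assumptions, not the official WebCLEF 2007 definition.

    # Minimal sketch of a pooled snippet evaluation (illustrative assumption,
    # not the official WebCLEF 2007 metric): a system's ranked snippets are
    # scored by how much assessed relevant text they recover.

    def snippet_recall(system_snippets, assessed_spans):
        """Fraction of assessed relevant characters covered by the system output.

        system_snippets: ranked list of snippet strings returned by a system.
        assessed_spans:  relevant text spans pooled from assessed submissions.
        """
        returned = " ".join(system_snippets).lower()
        covered = sum(len(span) for span in assessed_spans if span.lower() in returned)
        total = sum(len(span) for span in assessed_spans)
        return covered / total if total else 0.0


    # A system whose snippets fall outside the assessment pool scores zero even
    # if those snippets are relevant -- the kind of reusability issue the
    # abstract alludes to when it says new systems cannot be evaluated.
    pool = ["snippet about topic A", "snippet about topic B"]
    print(snippet_recall(["snippet about topic A"], pool))              # -> 0.5
    print(snippet_recall(["an unjudged but relevant snippet"], pool))   # -> 0.0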

Keywords

Measurement · Performance · Experimentation

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Arnold Overwijk (1)
  • Dong Nguyen (1)
  • Claudia Hauff (1)
  • Dolf Trieschnigg (1)
  • Djoerd Hiemstra (1)
  • Franciska de Jong (1)
  1. University of Twente, The Netherlands