CLEF 2005: Ad Hoc Track Overview

  • Giorgio M. Di Nunzio
  • Nicola Ferro
  • Gareth J. F. Jones
  • Carol Peters
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4022)

Abstract

We describe the objectives and organization of the CLEF 2005 ad hoc track and discuss the main characteristics of the tasks offered to test monolingual, bilingual, and multilingual textual document retrieval. The performance achieved for each task is presented and a statistical analysis of the results is given. The mono- and bilingual tasks followed the pattern of previous years but included target collections for two languages new to CLEF: Bulgarian and Hungarian. The multilingual tasks concentrated on exploring the reuse of existing test collections from an earlier CLEF campaign. The objectives were to measure progress in multilingual information retrieval by comparing the results for CLEF 2005 submissions with those of participants in earlier workshops, and to encourage participants to explore multilingual list-merging techniques.
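The list-merging problem that the multilingual tasks highlight can be made concrete with a small sketch. Given per-language ranked result lists whose retrieval scores are not directly comparable, a common baseline is min-max score normalization followed by a global sort. The data layout and function names below are illustrative assumptions, not the method of any particular CLEF 2005 participant.

```python
# Minimal sketch of score-based multilingual list merging:
# normalize each monolingual run's scores to [0, 1], then
# merge all lists and re-rank by normalized score.

def min_max_normalize(run):
    """run: list of (doc_id, score) pairs from one target collection."""
    scores = [s for _, s in run]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # guard against a constant-score run
    return [(doc, (s - lo) / span) for doc, s in run]

def merge_runs(runs, k=1000):
    """Merge per-language runs into a single multilingual ranking."""
    pooled = []
    for run in runs:
        pooled.extend(min_max_normalize(run))
    pooled.sort(key=lambda pair: pair[1], reverse=True)
    return pooled[:k]

# Hypothetical per-language runs with incomparable raw scores.
english = [("en/001", 14.2), ("en/007", 9.8)]
bulgarian = [("bg/042", 3.1), ("bg/015", 2.5)]
print(merge_runs([english, bulgarian], k=4))
```

Min-max normalization is only one of several merging baselines (round-robin interleaving and raw-score merging are others); its appeal is that it needs no training data, at the cost of assuming each run's top score marks an equally good document.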

Keywords

Average Precision · Test Collection · Relevance Assessment · Topic Language · Target Collection
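The first keyword, average precision, is the per-topic quantity behind mean average precision, the track's primary effectiveness measure. The sketch below implements the standard non-interpolated definition; it is a textbook illustration, not code from the track's evaluation tooling.

```python
def average_precision(ranked_docs, relevant):
    """Non-interpolated average precision for one topic.

    ranked_docs: doc ids in rank order; relevant: set of relevant ids.
    """
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this recall point
    return precision_sum / len(relevant) if relevant else 0.0

# Example: relevant documents retrieved at ranks 1 and 3.
print(average_precision(["d1", "d2", "d3", "d4"], {"d1", "d3"}))
# (1/1 + 2/3) / 2 = 0.833...
```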

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Giorgio M. Di Nunzio (1)
  • Nicola Ferro (1)
  • Gareth J. F. Jones (2)
  • Carol Peters (3)
  1. Department of Information Engineering, University of Padua, Italy
  2. School of Computing, Dublin City University, Ireland
  3. ISTI-CNR, Area di Ricerca, Pisa, Italy