The Grid@CLEF track is a long-term activity whose aim is to run a series of systematic experiments in order to improve our understanding of MLIA systems and to gain a comprehensive picture of their behaviour with respect to languages.

In particular, Grid@CLEF 2009 is a pilot track that took the first steps in this direction by giving participants the opportunity to gain experience with the new way of carrying out experiments that Grid@CLEF requires in order to test all the different combinations of IR components and languages. Grid@CLEF 2009 offered traditional monolingual ad-hoc tasks in five languages (Dutch, English, French, German, and Italian), which made use of consolidated and very well-known collections from CLEF 2001 and 2002 together with a set of 84 topics.

Participants had to conduct their experiments according to the CIRCO framework, an XML-based protocol that allows for distributed, loosely coupled, and asynchronous experimental evaluation of IR systems. We provided a Java library that can be used to implement CIRCO, together with an example implementation based on the Lucene IR system.
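To make the idea concrete, the following is a minimal, hypothetical sketch of what a CIRCO-style component might look like: each IR step (here, tokenization) serializes its intermediate output as XML so the next component in the loosely coupled pipeline can consume it asynchronously. The element names and class are illustrative assumptions, not the actual CIRCO schema or the Java library the track provided.

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;

// Hypothetical sketch: a single pipeline stage that emits its
// intermediate result (token stream) as XML for the next stage.
public class TokenizerComponent {

    // Writes one <document> element containing one <token> element
    // per whitespace/punctuation-delimited, lower-cased token.
    public static void emitTokens(String docId, String text, Writer out)
            throws IOException {
        out.write("<document id=\"" + docId + "\">\n");
        for (String token : text.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) {
                out.write("  <token>" + token + "</token>\n");
            }
        }
        out.write("</document>\n");
    }

    public static void main(String[] args) throws IOException {
        StringWriter buffer = new StringWriter();
        emitTokens("doc-001", "Grid experiments at CLEF", buffer);
        System.out.print(buffer);
    }
}
```

Wrapping every token in markup like this also suggests why the intermediate XML files can grow to many times the size of the raw collection, as noted below.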

Participation proved especially challenging, partly because of the size of the XML files generated by CIRCO, which can grow to 50-60 times the size of the original collection. Of the 9 participants who initially registered, only 2 were able to submit runs in time; we received a total of 18 runs in 3 of the 5 offered languages (English, French, and German). The two participants used different IR systems, or combinations of them, namely Lucene, Terrier, and Cheshire II.


Keywords: Information Retrieval, Application Program Interface, Average Precision, Information Retrieval System, Word Sense Disambiguation




  1. Agirre, E., Di Nunzio, G.M., Ferro, N., Mandl, T., Peters, C.: CLEF 2008: Ad Hoc Track Overview. In: Peters, C., Deselaers, T., Ferro, N., Gonzalo, J., Jones, G.J.F., Kurimo, M., Mandl, T., Peñas, A. (eds.) CLEF 2008. LNCS, vol. 5706, pp. 15–37. Springer, Heidelberg (2009)
  2. Braschler, M.: CLEF 2001 – Overview of Results. In: Peters, C., Braschler, M., Gonzalo, J., Kluck, M. (eds.) CLEF 2001. LNCS, vol. 2406, pp. 9–26. Springer, Heidelberg (2002)
  3. Braschler, M.: CLEF 2002 – Overview of Results. In: Peters, C., Braschler, M., Gonzalo, J. (eds.) CLEF 2002. LNCS, vol. 2785, pp. 9–27. Springer, Heidelberg (2003)
  4. Braschler, M., Peters, C.: CLEF 2003 Methodology and Metrics. In: Peters, C., Gonzalo, J., Braschler, M., Kluck, M. (eds.) CLEF 2003. LNCS, vol. 3237, pp. 7–20. Springer, Heidelberg (2004)
  5. Cleverdon, C.W.: The Cranfield Tests on Index Languages Devices. In: Spärck Jones, K., Willett, P. (eds.) Readings in Information Retrieval, pp. 47–60. Morgan Kaufmann Publishers, Inc., San Francisco (1997)
  6. Cooper, W.S., Gey, F.C., Dabney, D.P.: Probabilistic Retrieval Based on Staged Logistic Regression. In: Belkin, N.J., Ingwersen, P., Mark Pejtersen, A., Fox, E.A. (eds.) Proc. 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 1992), pp. 198–210. ACM Press, New York (1992)
  7. Di Nunzio, G.M., Ferro, N.: Appendix D: Results of the Grid@CLEF Track. In: Borri, F., Nardi, A., Peters, C. (eds.) Working Notes for the CLEF 2009 Workshop (2009) (published online)
  8. Eibl, M., Kürsten, J.: Putting It All Together: The Xtrieval Framework at Grid@CLEF. In: Peters et al. [15] (2009)
  9. Ferro, N.: Specification of the CIRCO Framework, Version 0.10. Technical Report IMS.2009.CIRCO.0.10, Department of Information Engineering, University of Padua, Italy (2009)
  10. Ferro, N., Harman, D.: Dealing with MultiLingual Information Access: Grid Experiments at TrebleCLEF. In: Agosti, M., Esposito, F., Thanos, C. (eds.) Post-proceedings of the Fourth Italian Research Conference on Digital Library Systems (IRCDL 2008), pp. 29–32. ISTI-CNR at Gruppo ALI, Pisa (2008)
  11. Ferro, N., Peters, C.: From CLEF to TrebleCLEF: the Evolution of the Cross-Language Evaluation Forum. In: Kando, N., Sugimoto, M. (eds.) Proc. 7th NTCIR Workshop Meeting on Evaluation of Information Access Technologies: Information Retrieval, Question Answering and Cross-Lingual Information Access, pp. 577–593. National Institute of Informatics, Tokyo (2008)
  12. Ferro, N., Peters, C.: CLEF Ad-hoc: A Perspective on the Evolution of the Cross-Language Evaluation Forum. In: Agosti, M., Esposito, F., Thanos, C. (eds.) Post-proceedings of the Fifth Italian Research Conference on Digital Library Systems (IRCDL 2009), pp. 72–79. DELOS Association and Department of Information Engineering of the University of Padua (2009)
  13. Ferro, N., Peters, C.: CLEF Ad Hoc Track Overview: TEL & Persian Tasks. In: Peters et al. [15] (2009)
  14. Larson, R.R.: Decomposing Text Processing for Retrieval: Cheshire tries GRID@CLEF. In: Peters et al. [15]
  15. Peters, C., Di Nunzio, G.M., Kurimo, M., Mandl, T., Mostefa, D., Peñas, A., Roda, G. (eds.): Multilingual Information Access Evaluation Vol. I: Text Retrieval Experiments – Tenth Workshop of the Cross-Language Evaluation Forum (CLEF 2009). Revised Selected Papers. LNCS. Springer, Heidelberg (2010)
  16. Robertson, S.E.: The methodology of information retrieval experiment. In: Spärck Jones, K. (ed.) Information Retrieval Experiment, pp. 9–31. Butterworths, London (1981)
  17. Robertson, S.E., Spärck Jones, K.: Relevance Weighting of Search Terms. Journal of the American Society for Information Science (JASIS) 27(3), 129–146 (1976)
  18. Robertson, S.E., Walker, S., Beaulieu, M.: Experimentation as a way of life: Okapi at TREC. Information Processing & Management 36(1), 95–108 (2000)
  19. Salton, G., Buckley, C.: Term-weighting Approaches in Automatic Text Retrieval. Information Processing & Management 24(5), 513–523 (1988)
  20. Salton, G., Wong, A., Yang, C.S.: A Vector Space Model for Automatic Indexing. Communications of the ACM (CACM) 18(11), 613–620 (1975)
  21. Savoy, J.: A Stemming Procedure and Stopword List for General French Corpora. Journal of the American Society for Information Science (JASIS) 50(10), 944–952 (1999)
  22. Savoy, J.: Report on CLEF-2001 Experiments: Effective Combined Query-Translation Approach. In: Peters, C., Braschler, M., Gonzalo, J., Kluck, M. (eds.) CLEF 2001. LNCS, vol. 2406, pp. 27–43. Springer, Heidelberg (2002)
  23. W3C: XML Schema Part 1: Structures – W3C Recommendation (October 28, 2004)
  24. W3C: XML Schema Part 2: Datatypes – W3C Recommendation (October 28, 2004)
  25. W3C: Extensible Markup Language (XML) 1.0 (Fifth Edition) – W3C Recommendation (November 26, 2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Nicola Ferro, Department of Information Engineering, University of Padua, Italy
  • Donna Harman, National Institute of Standards and Technology (NIST), USA
