CLEF 2007: Ad Hoc Track Overview

  • Giorgio M. Di Nunzio
  • Nicola Ferro
  • Thomas Mandl
  • Carol Peters
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5152)

Abstract

We describe the objectives and organization of the CLEF 2007 Ad Hoc track and discuss the main characteristics of the tasks offered to test monolingual and cross-language textual document retrieval systems. The track was divided into two streams. The main stream offered mono- and bilingual tasks on target collections in Central European languages (Bulgarian, Czech, and Hungarian). As in the previous year, a bilingual task encouraging system testing with non-European topic languages against English documents was also offered; this year, particular attention was given to Indian languages. The second stream, designed for more experienced participants, offered mono- and bilingual “robust” tasks that rewarded experiments achieving good, stable performance over all queries rather than high average performance. These experiments reused CLEF test collections from previous years in three languages (English, French, and Portuguese). The performance achieved for each task is presented and discussed.
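The “robust” objective of stable performance over all queries is conventionally summarized with the geometric mean average precision (GMAP) alongside the usual arithmetic MAP. The minimal Python sketch below, using hypothetical per-topic average precision scores and an assumed epsilon floor for zero values, illustrates why the geometric mean rewards stability:

    import math

    def map_score(ap_scores):
        # Arithmetic mean of per-topic average precision (MAP):
        # dominated by the topics a system handles well.
        return sum(ap_scores) / len(ap_scores)

    def gmap_score(ap_scores, eps=1e-5):
        # Geometric mean of per-topic average precision (GMAP):
        # a single near-zero topic drags the score down sharply, so
        # stable performance across all queries is rewarded. Zero values
        # are floored at eps (a common convention) to keep log() defined.
        logs = [math.log(max(ap, eps)) for ap in ap_scores]
        return math.exp(sum(logs) / len(logs))

    # Hypothetical per-topic AP scores: system A is strong on average
    # but fails one topic; system B is steadier across all five topics.
    system_a = [0.80, 0.75, 0.70, 0.65, 0.001]
    system_b = [0.55, 0.52, 0.50, 0.48, 0.45]

    print("A: MAP=%.3f GMAP=%.3f" % (map_score(system_a), gmap_score(system_a)))
    print("B: MAP=%.3f GMAP=%.3f" % (map_score(system_b), gmap_score(system_b)))

On these hypothetical scores the steadier system B trails system A under MAP (0.500 vs. 0.580) but overtakes it under GMAP (0.499 vs. 0.194), which is exactly the behavior the robust tasks were designed to reward.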


Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Giorgio M. Di Nunzio ¹
  • Nicola Ferro ¹
  • Thomas Mandl ²
  • Carol Peters ³
  1. Department of Information Engineering, University of Padua, Italy
  2. Information Science, University of Hildesheim, Germany
  3. ISTI-CNR, Pisa, Italy
