Evaluating Systems for Multilingual and Multimodal Information Access

9th Workshop of the Cross-Language Evaluation Forum, CLEF 2008, Aarhus, Denmark, September 17-19, 2008, Revised Selected Papers

  • Carol Peters
  • Thomas Deselaers
  • Nicola Ferro
  • Julio Gonzalo
  • Gareth J. F. Jones
  • Mikko Kurimo
  • Thomas Mandl
  • Anselmo Peñas
  • Vivien Petras
Conference proceedings: CLEF 2008

DOI: 10.1007/978-3-642-04447-2

Part of the Lecture Notes in Computer Science book series (LNCS, volume 5706)

Table of contents (131 papers)

  1. Front Matter
  2. What Happened in CLEF 2008

    1. What Happened in CLEF 2008
      Carol Peters
      Pages 1-14
  3. Part I: Multilingual Textual Document Retrieval (Ad Hoc)

    1. CLEF 2008: Ad Hoc Track Overview
      Eneko Agirre, Giorgio Maria Di Nunzio, Nicola Ferro, Thomas Mandl, Carol Peters
      Pages 15-37
  4. TEL@CLEF

    1. Query Expansion via Library Classification System
      Alessio Bosca, Luca Dini
      Pages 42-49
    2. WikiTranslate: Query Translation for Cross-Lingual Information Retrieval Using Only Wikipedia
      Dong Nguyen, Arnold Overwijk, Claudia Hauff, Dolf R. B. Trieschnigg, Djoerd Hiemstra, Franciska de Jong
      Pages 58-65
    3. UFRGS@CLEF2008: Using Association Rules for Cross-Language Information Retrieval
      André Pinto Geraldo, Viviane P. Moreira
      Pages 66-74
    4. CLEF 2008 Ad-Hoc Track: Comparing and Combining Different IR Approaches
      Jens Kürsten, Thomas Wilhelm, Maximilian Eibl
      Pages 75-82
  5. Persian@CLEF

    1. Improving Persian Information Retrieval Systems Using Stemming and Part of Speech Tagging
      Reza Karimpour, Amineh Ghorbani, Azadeh Pishdad, Mitra Mohtarami, Abolfazl AleAhmad, Hadi Amiri et al.
      Pages 89-96
    2. Fusion of Retrieval Models at CLEF 2008 Ad Hoc Persian Track
      Zahra Aghazade, Nazanin Dehghani, Leili Farzinvash, Razieh Rahimi, Abolfazl AleAhmad, Hadi Amiri et al.
      Pages 97-104
    3. Cross Language Experiments at Persian@CLEF 2008
      Abolfazl AleAhmad, Ehsan Kamalloo, Arash Zareh, Masoud Rahgozar, Farhad Oroumchian
      Pages 105-112
  6. Robust-WSD

    1. Evaluating Word Sense Disambiguation Tools for Information Retrieval Task
      Fernando Martínez-Santiago, José M. Perea-Ortega, Miguel A. García-Cumbreras
      Pages 113-117
    2. SENSE: SEmantic N-levels Search Engine at CLEF2008 Ad Hoc Robust-WSD Track
      Annalina Caputo, Pierpaolo Basile, Giovanni Semeraro
      Pages 126-133
    3. IR-n in the CLEF Robust WSD Task 2008
      Sergio Navarro, Fernando Llopis, Rafael Muñoz
      Pages 134-137
    4. Query Clauses and Term Independence
      José R. Pérez-Agüera, Hugo Zaragoza
      Pages 138-145
    5. Analysis of Word Sense Disambiguation-Based Information Retrieval
      Jacques Guyot, Gilles Falquet, Saïd Radhouani, Karim Benzineb
      Pages 146-154
    6. Crosslanguage Retrieval Based on Wikipedia Statistics
      Andreas Juffinger, Roman Kern, Michael Granitzer
      Pages 155-162

About these proceedings

Introduction

The ninth campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to September 2008. There were seven main evaluation tracks in CLEF 2008 plus two pilot tasks. The aim, as usual, was to test the performance of a wide range of multilingual information access (MLIA) systems or system components. This year, 100 groups, mainly but not only from academia, participated in the campaign. Most of the groups were from Europe, but there was also a good contingent from North America and Asia, plus a few participants from South America and Africa. Full details regarding the design of the tracks, the methodologies used for evaluation, and the results obtained by the participants can be found in the different sections of these proceedings.

The results of the CLEF 2008 campaign were presented at a two-and-a-half-day workshop held in Aarhus, Denmark, September 17–19, and attended by 150 researchers and system developers. The annual workshop, held in conjunction with the European Conference on Digital Libraries, plays an important role by providing the opportunity for all the groups that have participated in the evaluation campaign to get together to compare approaches and exchange ideas. The schedule of the workshop was divided between plenary track overviews and parallel, poster, and breakout sessions presenting this year's experiments and discussing ideas for the future. There were several invited talks.

Keywords

Cross-Language Evaluation Forum · Wiki · answer validation · cross-language · cross-language queries · cross-lingual · data mining · image retrieval · information retrieval · machine learning · medical images · natural language processing · semantic analysis · video retrieval

Editors and affiliations

  • Carol Peters (1)
  • Thomas Deselaers (2)
  • Nicola Ferro (3)
  • Julio Gonzalo (4)
  • Gareth J. F. Jones (5)
  • Mikko Kurimo (6)
  • Thomas Mandl (7)
  • Anselmo Peñas (4)
  • Vivien Petras (8)

  1. Istituto di Scienza e Tecnologie dell'Informazione, CNR, Pisa, Italy
  2. RWTH Aachen University, Aachen, Germany
  3. University of Padua, Padua, Italy
  4. LSI-UNED, Madrid, Spain
  5. Dublin City University, Dublin 9, Ireland
  6. Helsinki University of Technology, Espoo, Finland
  7. University of Hildesheim, Hildesheim, Germany
  8. Humboldt University Berlin, Germany

Bibliographic information

  • Copyright Information Springer-Verlag Berlin Heidelberg 2009
  • Publisher Name Springer, Berlin, Heidelberg
  • eBook Packages Computer Science
  • Print ISBN 978-3-642-04446-5
  • Online ISBN 978-3-642-04447-2
  • Series Print ISSN 0302-9743
  • Series Online ISSN 1611-3349