Multilingual and Multimodal Information Access Evaluation

International Conference of the Cross-Language Evaluation Forum, CLEF 2010, Padua, Italy, September 20-23, 2010. Proceedings

  • Maristella Agosti
  • Nicola Ferro
  • Carol Peters
  • Maarten de Rijke
  • Alan Smeaton
Conference proceedings CLEF 2010

DOI: 10.1007/978-3-642-15998-5

Part of the Lecture Notes in Computer Science book series (LNCS, volume 6360)

Table of contents (16 papers)

  1. Front Matter
  2. Keynote Addresses

  3. Resources, Tools, and Methods

    1. A New Approach for Cross-Language Plagiarism Analysis
      Rafael Corezola Pereira, Viviane P. Moreira, Renata Galante
      Pages 15-26
    2. Creating a Persian-English Comparable Corpus
      Homa Baradaran Hashemi, Azadeh Shakery, Heshaam Faili
      Pages 27-39
  4. Experimental Collections and Datasets (1)

    1. Validating Query Simulators: An Experiment Using Commercial Searches and Purchases
      Bouke Huurnink, Katja Hofmann, Maarten de Rijke, Marc Bron
      Pages 40-51
    2. Using Parallel Corpora for Multilingual (Multi-document) Summarisation Evaluation
      Marco Turchi, Josef Steinberger, Mijail Kabadjov, Ralf Steinberger
      Pages 52-63
  5. Experimental Collections and Datasets (2)

  6. Evaluation Methodologies and Metrics (1)

    1. On the Evaluation of Entity Profiles
      Maarten de Rijke, Krisztian Balog, Toine Bogers, Antal van den Bosch
      Pages 94-99
  7. Evaluation Methodologies and Metrics (2)

    1. Evaluating Information Extraction
      Andrea Esuli, Fabrizio Sebastiani
      Pages 100-111
    2. Tie-Breaking Bias: Effect of an Uncontrolled Parameter on Information Retrieval Evaluation
      Guillaume Cabanac, Gilles Hubert, Mohand Boughanem, Claude Chrisment
      Pages 112-123
    3. Automated Component-Level Evaluation: Present and Future
      Allan Hanbury, Henning Müller
      Pages 124-135
  8. Panels

    1. The Four Ladies of Experimental Evaluation
      Donna Harman, Noriko Kando, Mounia Lalmas, Carol Peters
      Pages 136-139
    2. A PROMISE for Experimental Evaluation
      Martin Braschler, Khalid Choukri, Nicola Ferro, Allan Hanbury, Jussi Karlgren, Henning Müller et al.
      Pages 140-144
  9. Back Matter

About these proceedings

Introduction

In its first ten years of activities (2000-2009), the Cross-Language Evaluation Forum (CLEF) played a leading role in stimulating investigation and research in a wide range of key areas in the information retrieval domain, such as cross-language question answering, image and geographic information retrieval, interactive retrieval, and many more. It also promoted the study and implementation of appropriate evaluation methodologies for these diverse types of tasks and media. As a result, CLEF has been extremely successful in building a wide, strong, and multidisciplinary research community, which covers and spans the different areas of expertise needed to deal with the spread of CLEF tracks and tasks. This constantly growing and almost completely voluntary community has dedicated an incredible amount of effort to making CLEF happen and is at the core of the CLEF achievements.

CLEF 2010 represented a radical innovation of the "classic CLEF" format and an experiment aimed at understanding how "next generation" evaluation campaigns might be structured. We had to face the problem of how to innovate CLEF while still preserving its traditional core business, namely the benchmarking activities carried out in the various tracks and tasks. The consensus, after lively and community-wide discussions, was to make CLEF an independent four-day event, no longer organized in conjunction with the European Conference on Research and Advanced Technology for Digital Libraries (ECDL), where CLEF had been running as a two-and-a-half-day workshop. CLEF 2010 thus consisted of two main parts: a peer-reviewed conference – the first two days – and a series of laboratories and workshops – the second two days.

Keywords

Design, MapReduce, comparable corpora, corpus, cross-language, cross-language queries, cross-lingual, data mining, evaluation, image retrieval, information retrieval, intrinsic plagiarism analysis, medical images, meta learning, natural language processing

Editors and affiliations

  • Maristella Agosti (1)
  • Nicola Ferro (2)
  • Carol Peters (3)
  • Maarten de Rijke (4)
  • Alan Smeaton (5)
  1. Department of Information Engineering, University of Padua, Padova, Italy
  2. University of Padua, Padua, Italy
  3. ISTI-CNR, Area Ricerca CNR, Pisa, Italy
  4. ISLA, University of Amsterdam, Amsterdam, The Netherlands
  5. Dublin City University, Dublin, Ireland

Bibliographic information

  • Copyright Information Springer-Verlag Berlin Heidelberg 2010
  • Publisher Name Springer, Berlin, Heidelberg
  • eBook Packages Computer Science
  • Print ISBN 978-3-642-15997-8
  • Online ISBN 978-3-642-15998-5
  • Series Print ISSN 0302-9743
  • Series Online ISSN 1611-3349