Overview of the CLEF 2008 Multilingual Question Answering Track

  • Pamela Forner
  • Anselmo Peñas
  • Eneko Agirre
  • Iñaki Alegria
  • Corina Forăscu
  • Nicolas Moreau
  • Petya Osenova
  • Prokopis Prokopidis
  • Paulo Rocha
  • Bogdan Sacaleanu
  • Richard Sutcliffe
  • Erik Tjong Kim Sang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5706)

Abstract

The QA campaign at CLEF 2008 [1] was largely the same as the one proposed the previous year. The results and analyses reported by last year's participants suggested that the changes introduced in that campaign had led to a drop in system performance, so for this year's competition it was decided to essentially replicate last year's exercise. Following last year's experience, some question–answer pairs were grouped into clusters. Each cluster was characterized by a topic (not given to participants), and the questions in a cluster contained co-references between one of them and the others. Moreover, as last year, systems were given the possibility to search for answers in Wikipedia as a document corpus beside the usual newswire collection. In addition to the main task, three additional exercises were offered: the Answer Validation Exercise (AVE) and Question Answering on Speech Transcriptions (QAST), which continued last year's successful pilots, together with the new Word Sense Disambiguation for Question Answering (QA-WSD) exercise. As a general remark, the main task still proved to be very challenging for participating systems. As a shallow comparison with last year's results, the best overall accuracy dropped significantly from 42% to 19% in the multilingual subtasks, but increased somewhat in the monolingual subtasks, going from 54% to 63%.
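In CLEF QA evaluations, each returned answer is judged as Right (R), Wrong (W), ineXact (X), or Unsupported (U), and the accuracy figures quoted above are the fraction of questions whose answer was judged Right. A minimal sketch of this scoring (the function name and the sample run are illustrative, not taken from the track's actual evaluation scripts):

```python
from collections import Counter

def accuracy(judgments):
    """Fraction of answers judged Right ('R') out of all judged answers."""
    counts = Counter(judgments)
    return counts["R"] / len(judgments)

# Hypothetical judgments for a 10-question run:
# R = Right, W = Wrong, X = ineXact, U = Unsupported
run = ["R", "W", "R", "X", "W", "U", "R", "W", "W", "R"]
print(f"accuracy = {accuracy(run):.0%}")  # 4 of 10 judged Right -> 40%
```

Note that X and U answers count against accuracy just like W answers, which is one reason the measure is so strict for cross-lingual runs.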

Keywords

Correct Answer · Target Language · Document Collection · Question Answering · Word Sense Disambiguation
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. QA@CLEF Website, http://clef-qa.itc.it/
  2.
  3.
  4.
  5.
  6. Hartrumpf, S., Glöckner, I., Leveling, J.: University of Hagen at QA@CLEF 2007: Coreference Resolution for Questions and Answer Merging. In: Peters, C., et al. (eds.) CLEF 2008. LNCS, vol. 5706. Springer, Heidelberg (2009)
  7. Herrera, J., Peñas, A., Verdejo, F.: Question Answering Pilot Task at CLEF 2004. In: Peters, C., Clough, P., Gonzalo, J., Jones, G.J.F., Kluck, M., Magnini, B. (eds.) CLEF 2004. LNCS, vol. 3491, pp. 581–590. Springer, Heidelberg (2005)
  8. Landis, J.R., Koch, G.G.: The measurement of observer agreement for categorical data. Biometrics 33, 159–174 (1977)
  9. Magnini, B., Giampiccolo, D., Forner, P., Ayache, C., Jijkoun, V., Osenova, P., Peñas, A., Rocha, P., Sacaleanu, B., Sutcliffe, R.: Overview of the CLEF 2006 Multilingual Question Answering Track. In: Peters, C., Clough, P., Gey, F.C., Karlgren, J., Magnini, B., Oard, D.W., de Rijke, M., Stempfhuber, M. (eds.) CLEF 2006. LNCS, vol. 4730, pp. 223–256. Springer, Heidelberg (2007)
  10. Vallin, A., Magnini, B., Giampiccolo, D., Aunimo, L., Ayache, C., Osenova, P., Peñas, A., de Rijke, M., Sacaleanu, B., Santos, D., Sutcliffe, R.: Overview of the CLEF 2005 Multilingual Question Answering Track. In: Peters, C., Gey, F.C., Gonzalo, J., Müller, H., Jones, G.J.F., Kluck, M., Magnini, B., de Rijke, M., Giampiccolo, D. (eds.) CLEF 2005. LNCS, vol. 4022, pp. 307–331. Springer, Heidelberg (2006)
  11. Voorhees, E.: Overview of the TREC 2002 Question Answering Track. In: NIST Special Publication 500-251: The Eleventh Text REtrieval Conference (TREC 2002). National Institute of Standards and Technology, USA (2002)
  12. Agirre, E., Lopez de Lacalle, O.: UBC-ALM: Combining k-NN with SVD for WSD. In: Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval 2007), Prague, Czech Republic, pp. 341–345 (2007)
  13. Chan, Y.S., Ng, H.T., Zhong, Z.: NUS-PT: Exploiting Parallel Texts for Word Sense Disambiguation in the English All-Words Tasks. In: Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval 2007), Prague, Czech Republic, pp. 253–256 (2007)

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Pamela Forner (1)
  • Anselmo Peñas (2)
  • Eneko Agirre (3)
  • Iñaki Alegria (4)
  • Corina Forăscu (5)
  • Nicolas Moreau (6)
  • Petya Osenova (7)
  • Prokopis Prokopidis (8)
  • Paulo Rocha (9)
  • Bogdan Sacaleanu (10)
  • Richard Sutcliffe (11)
  • Erik Tjong Kim Sang (12)
  1. CELCT, Trento, Italy
  2. Departamento de Lenguajes y Sistemas Informáticos, UNED, Madrid, Spain
  3. Computer Science Department, University of the Basque Country, Spain
  4. University of the Basque Country, Spain
  5. UAIC and RACAI, Romania
  6. ELDA/ELRA, Paris, France
  7. BTB, Bulgaria
  8. ILSP, Athena Research Center, Greece
  9. Linguateca, DEI UC, Portugal
  10. DFKI, Germany
  11. DLTG, University of Limerick, Ireland
  12. University of Groningen, Netherlands
