Overview of the CLEF 2007 Multilingual Question Answering Track

  • Danilo Giampiccolo
  • Pamela Forner
  • Jesús Herrera
  • Anselmo Peñas
  • Christelle Ayache
  • Corina Forascu
  • Valentin Jijkoun
  • Petya Osenova
  • Paulo Rocha
  • Bogdan Sacaleanu
  • Richard Sutcliffe
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5152)

Abstract

The fifth QA campaign at CLEF [1], whose first edition was held in 2003, offered not only a main task but also the Answer Validation Exercise (AVE) [2], which continued the previous year’s pilot, and a new pilot, Question Answering on Speech Transcripts (QAST) [3, 15]. The main task kept its focus on cross-linguality while covering as many European languages as possible. As a novelty, some question–answer pairs were grouped into clusters, each characterized by a topic that was not given to participants; the questions within a cluster may contain co-references to one another. Finally, the need to search for answers in web-style documents was addressed by introducing Wikipedia as a document corpus. The results and the analyses reported by the participants suggest that the introduction of Wikipedia and of the topic-related questions led to a drop in systems’ performance.

Keywords

Correct Answer · Target Language · Question Type · Question Answering · Anaphora Resolution

References

  1. QA@CLEF Website, http://clef-qa.itc.it/
  2.
  3.
  4. QA@CLEF 2007 Organizing Committee: Guidelines (2007), http://clef-qa.itc.it/2007/download/QACLEF07_Guidelines-for-Participants.pdf
  5. Hartrumpf, S., Glöckner, I., Leveling, J.: University of Hagen at QA@CLEF 2007: Coreference Resolution for Questions and Answer Merging. In: Peters, C., et al. (eds.) CLEF 2007. LNCS, vol. 5152. Springer, Heidelberg (2008)
  6. Herrera, J., Peñas, A., Verdejo, F.: Question Answering Pilot Task at CLEF 2004. In: Peters, C., Clough, P., Gonzalo, J., Jones, G.J.F., Kluck, M., Magnini, B. (eds.) CLEF 2004. LNCS, vol. 3491, pp. 581–590. Springer, Heidelberg (2005)
  7. Ion, R.: Word Sense Disambiguation Methods Applied to English and Romanian. PhD thesis, Romanian Academy, Bucharest (2007)
  8. Ion, R., Mititelu, V.B.: Constrained Lexical Attraction Models. In: Nineteenth International Florida Artificial Intelligence Research Society Conference, pp. 297–302. AAAI Press, Menlo Park (2006)
  9. Jijkoun, V., de Rijke, M.: Overview of the WiQA Task at CLEF 2006. In: Peters, C., Clough, P., Gey, F.C., Karlgren, J., Magnini, B., Oard, D.W., de Rijke, M., Stempfhuber, M. (eds.) CLEF 2006. LNCS, vol. 4730, pp. 265–274. Springer, Heidelberg (2007)
  10. Landis, J.R., Koch, G.G.: The measurement of observer agreement for categorical data. Biometrics 33, 159–174 (1977)
  11. Laurent, D., Séguéla, P., Nègre, S.: Cross Lingual Question Answering using QRISTAL for CLEF 2007. In: Peters, C., et al. (eds.) CLEF 2007. LNCS, vol. 5152. Springer, Heidelberg (2008)
  12. Magnini, B., Giampiccolo, D., Forner, P., Ayache, C., Jijkoun, V., Osenova, P., Peñas, A., Rocha, P., Sacaleanu, B., Sutcliffe, R.: Overview of the CLEF 2006 Multilingual Question Answering Track. In: Peters, C., Clough, P., Gey, F.C., Karlgren, J., Magnini, B., Oard, D.W., de Rijke, M., Stempfhuber, M. (eds.) CLEF 2006. LNCS, vol. 4730, pp. 223–256. Springer, Heidelberg (2007)
  13. Peñas, A., Rodrigo, Á., Verdejo, F.: Overview of the Answer Validation Exercise 2007. In: Peters, C., et al. (eds.) CLEF 2007. LNCS, vol. 5152. Springer, Heidelberg (2008)
  14. Tufiş, D., Ştefănescu, D., Ion, R., Ceauşu, A.: RACAI’s Question Answering System at QA@CLEF 2007. In: Peters, C., et al. (eds.) CLEF 2007. LNCS, vol. 5152. Springer, Heidelberg (2008)
  15. Turmo, J., Comas, P., Ayache, C., Mostefa, D., Rosset, S., Lamel, L.: Overview of QAST 2007. In: Peters, C., et al. (eds.) CLEF 2007. LNCS, vol. 5152. Springer, Heidelberg (2008)
  16. Vallin, A., Magnini, B., Giampiccolo, D., Aunimo, L., Ayache, C., Osenova, P., Peñas, A., de Rijke, M., Sacaleanu, B., Santos, D., Sutcliffe, R.: Overview of the CLEF 2005 Multilingual Question Answering Track. In: Peters, C., Gey, F.C., Gonzalo, J., Müller, H., Jones, G.J.F., Kluck, M., Magnini, B., de Rijke, M., Giampiccolo, D. (eds.) CLEF 2005. LNCS, vol. 4022, pp. 307–331. Springer, Heidelberg (2006)
  17. Voorhees, E.: Overview of the TREC 2002 Question Answering Track. In: The Eleventh Text REtrieval Conference (TREC 2002), National Institute of Standards and Technology, USA. NIST Special Publication 500-251 (2002)

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Danilo Giampiccolo (1)
  • Pamela Forner (1)
  • Jesús Herrera (2)
  • Anselmo Peñas (3)
  • Christelle Ayache (4)
  • Corina Forascu (5)
  • Valentin Jijkoun (6)
  • Petya Osenova (7)
  • Paulo Rocha (8)
  • Bogdan Sacaleanu (9)
  • Richard Sutcliffe (10)

  1. CELCT, Trento, Italy
  2. Departamento de Ingeniería del Software e Inteligencia Artificial, Universidad Complutense de Madrid, Spain
  3. Departamento de Lenguajes y Sistemas Informáticos, UNED, Madrid, Spain
  4. ELDA/ELRA, Paris, France
  5. Faculty of Computer Science, University “Al. I. Cuza” of Iaşi, and Institute for Computer Science, Romanian Academy, Iaşi, Romania
  6. Informatics Institute, University of Amsterdam, The Netherlands
  7. BTB, Bulgaria
  8. Linguateca, SINTEF ICT, Norway and Portugal
  9. DFKI, Germany
  10. DLTG, University of Limerick, Ireland