Are Passages Enough? The MIRACLE Team Participation in QA@CLEF2009

  • María Teresa Vicente-Díez
  • César de Pablo-Sánchez
  • Paloma Martínez
  • Julián Moreno Schneider
  • Marta Garrote Salazar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6241)

Abstract

This paper summarizes the participation of the MIRACLE team in the Multilingual Question Answering Track at CLEF 2009. In this campaign we took part in the monolingual Spanish task at ResPubliQA and submitted two runs. We adapted our QA system to the new JRC-Acquis collection and to the legal domain, and tested answer filtering and ranking techniques against a baseline system based on passage retrieval alone, without success: the run using only question analysis and passage retrieval obtained a global accuracy of 0.33, while adding an answer filtering module lowered it to 0.29. We analyze the results by question type to investigate why it is difficult to leverage previous QA techniques in this setting. Another line of our work was the application of temporal information management to QA. Finally, we discuss the problems encountered with the new collection and the complexities of the domain.
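To illustrate the two configurations compared above, the sketch below contrasts a passage-retrieval-only run with one that adds an answer-filtering step. It is a minimal, self-contained example with made-up passages and a hypothetical answer_filter heuristic; it is not the MIRACLE system or the ResPubliQA setup, only an assumed simplification of that pipeline.

    # Minimal sketch (assumed names and data): a passage-retrieval baseline
    # versus the same retrieval followed by an answer-filtering step.
    from collections import Counter
    import math

    def score(query_terms, passage_terms, df, n_docs):
        # Simple TF-IDF overlap score between a question and a passage.
        tf = Counter(passage_terms)
        return sum(tf[t] * math.log(1 + n_docs / (1 + df.get(t, 0))) for t in query_terms)

    def retrieve(question, passages, k=5):
        # Rank passages of the collection by lexical similarity to the question.
        n = len(passages)
        df = Counter(t for p in passages for t in set(p.lower().split()))
        q = question.lower().split()
        return sorted(passages, key=lambda p: score(q, p.lower().split(), df, n), reverse=True)[:k]

    def answer_filter(question, candidates, expected_type_cues):
        # Hypothetical filtering step: keep only candidates containing a cue for
        # the expected answer type (e.g. a year prefix for temporal questions);
        # fall back to the retrieval order when the filter removes everything.
        kept = [c for c in candidates if any(cue in c.lower() for cue in expected_type_cues)]
        return kept or candidates

    if __name__ == "__main__":
        passages = [
            "The regulation entered into force in 1995 after its publication.",
            "Member States shall adopt the necessary measures.",
            "The committee shall deliver its opinion within two months.",
        ]
        question = "When did the regulation enter into force?"
        baseline = retrieve(question, passages)                      # baseline run
        filtered = answer_filter(question, baseline, ["19", "20"])   # run with answer filtering
        print(filtered[0])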

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • María Teresa Vicente-Díez (1)
  • César de Pablo-Sánchez (1)
  • Paloma Martínez (1)
  • Julián Moreno Schneider (1)
  • Marta Garrote Salazar (1)
  1. Universidad Carlos III de Madrid, Leganés, Madrid, Spain
