
Information Retrieval, Volume 13, Issue 5, pp 507–533

FIDJI: using syntax for validating answers in multiple documents

  • Véronique Moriceau
  • Xavier Tannier
Focused Retrieval and Result Aggregation

Abstract

This article presents FIDJI, a question-answering (QA) system for French. FIDJI combines syntactic information with traditional QA techniques such as named entity recognition and term weighting; it does not require any pre-processing other than classical search-engine indexing. Among other uses of syntax, we experiment with validating answers across multiple documents, as well as with specific techniques for answering particular types of questions (e.g., yes/no or list questions). We present several experiments that show the benefits of syntactic analysis and of multi-document validation. Different types of questions and corpora are tested, and their specificities are analysed. Links with result aggregation are also discussed.
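To make the idea of multi-document validation more concrete, here is a minimal sketch that is not taken from FIDJI itself: the names (Candidate, validate_across_documents), the dependency-triple representation of the question, and the scoring by number of supporting documents are all assumptions introduced for illustration. It shows one simple way a candidate answer can be accepted when the syntactic relations extracted from the question are matched across several documents rather than within a single passage.

```python
# Illustrative sketch only: a simplified cross-document answer validation step.
# Names, data structures and scoring are assumptions for this example;
# they are not FIDJI's actual implementation.
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Candidate:
    answer: str              # candidate answer string (e.g., a named entity)
    doc_id: str              # document the supporting passage comes from
    matched_deps: frozenset  # question dependency relations found in the passage


def validate_across_documents(candidates, question_deps, min_docs=2):
    """Keep answers whose supporting passages, taken together over several
    documents, cover all dependency relations extracted from the question."""
    deps_covered = defaultdict(set)
    supporting_docs = defaultdict(set)

    for cand in candidates:
        deps_covered[cand.answer] |= set(cand.matched_deps)
        supporting_docs[cand.answer].add(cand.doc_id)

    validated = [
        answer
        for answer, deps in deps_covered.items()
        if question_deps <= deps and len(supporting_docs[answer]) >= min_docs
    ]
    # Rank by number of distinct supporting documents (a simple proxy score).
    return sorted(validated, key=lambda a: len(supporting_docs[a]), reverse=True)


if __name__ == "__main__":
    # Toy question: "Who invented the telephone?", decomposed into two relations.
    question_deps = {("invent", "subject", "?"), ("invent", "object", "telephone")}
    cands = [
        Candidate("Bell", "doc1", frozenset({("invent", "subject", "?")})),
        Candidate("Bell", "doc2", frozenset({("invent", "object", "telephone")})),
        Candidate("Gray", "doc3", frozenset({("invent", "object", "telephone")})),
    ]
    print(validate_across_documents(cands, question_deps))  # -> ['Bell']
```

In this toy setting, "Bell" is validated because, across two different documents, its supporting passages cover all the question's relations, whereas "Gray" is supported by a single document covering only part of them.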

Keywords

Focused retrieval · Question answering · Syntactic analysis · Multi-document validation · Result aggregation

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Université Paris Sud 11, LIMSI-CNRS, Orsay, France
