Information Retrieval, Volume 15, Issue 5, pp 413–432

Architecture and evaluation of BRUJA, a multilingual question answering system

  • M. Á. García-Cumbreras
  • F. Martínez-Santiago
  • L. A. Ureña-López

Abstract

Given a user question, the goal of a Question Answering (QA) system is to retrieve answers rather than full documents or even best-matching passages, as most Information Retrieval systems currently do. In this paper we present BRUJA, a QA system for the management of multilingual collections. BRUJA works with questions in three languages (English, Spanish and French). The BRUJA architecture is not built from three monolingual QA systems; instead, it uses English as an interlingua to carry out the usual QA tasks, such as question classification and answer extraction. In addition, BRUJA uses Cross Language Information Retrieval (CLIR) techniques to retrieve relevant documents from a multilingual collection. On the one hand, there are more documents in which to find answers; on the other hand, the translations into the interlingua (English) and the CLIR module introduce noise into the system. The question is whether managing three languages is worth the added difficulty, or whether a monolingual QA system delivers better results. We report on in-depth experimentation and show that our multilingual QA system obtains better results than its monolingual counterpart whenever it uses good translation resources and, especially, state-of-the-art CLIR techniques.
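The abstract describes a pipeline in which a Spanish or French question is first translated into English (the interlingua), classified there, used to retrieve passages from the multilingual collection via CLIR, and finally passed to answer extraction. The Python sketch below only illustrates that flow under those stated assumptions; every function and class name (translate_to_english, clir_retrieve, Answer, and so on) is a hypothetical placeholder and does not correspond to the actual BRUJA implementation.

```python
# Illustrative sketch of an interlingua-based multilingual QA pipeline:
# question translation -> English question classification -> CLIR retrieval
# over a multilingual collection -> answer extraction.
# All names are hypothetical placeholders, not the BRUJA code base.

from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    score: float
    source_language: str


def translate_to_english(question: str, source_lang: str) -> str:
    """Placeholder MT step: map a Spanish/French question to the interlingua."""
    return question  # a real system would call a machine translation resource


def classify_question(english_question: str) -> str:
    """Placeholder question classifier run on the English (interlingua) form."""
    return "PERSON" if english_question.lower().startswith("who") else "OTHER"


def clir_retrieve(english_question: str, collections: dict) -> list:
    """Placeholder CLIR step: retrieve candidate passages from every
    monolingual sub-collection and merge them into one ranked list.
    (A real system would score passages against the translated query.)"""
    merged = []
    for lang, passages in collections.items():
        for passage in passages:
            merged.append((passage, lang))
    return merged


def extract_answers(question_type: str, passages: list) -> list:
    """Placeholder answer extraction over the merged passage list."""
    return [Answer(text=p, score=1.0 / (rank + 1), source_language=lang)
            for rank, (p, lang) in enumerate(passages)]


def answer_question(question: str, source_lang: str, collections: dict) -> Answer:
    english_q = translate_to_english(question, source_lang)
    q_type = classify_question(english_q)
    passages = clir_retrieve(english_q, collections)
    candidates = extract_answers(q_type, passages)
    return max(candidates, key=lambda a: a.score)


if __name__ == "__main__":
    docs = {"en": ["Alan Turing proposed the Turing test."],
            "es": ["Alan Turing propuso el test de Turing."]}
    print(answer_question("Who proposed the Turing test?", "en", docs))
```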

Keywords

Question answering · Multilingual question answering · Cross Language Information Retrieval

Acknowledgments

This work has been partially supported by a grant from the Spanish Government, project TEXT-COOL 2.0 (TIN2009-13391-C04-02) and FEDER, a grant from the Andalusian Government, project GeOasis (P08-TIC-41999), and a grant from the University of Jaén, project UJA2009/12/14.


Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • M. Á. García-Cumbreras¹
  • F. Martínez-Santiago¹
  • L. A. Ureña-López¹

  1. SINAI Research Group, Computer Science Department, University of Jaén, Jaén, Spain
