A Trainable Multi-factored QA System
This paper reports on the construction and testing of a new Question Answering (QA) system, implemented as a workflow that builds on several web services developed at the Research Institute for Artificial Intelligence (RACAI). The system was evaluated independently by the organizers of the Romanian–Romanian task of the ResPubliQA 2009 exercise and was rated the best-performing system, with the highest improvement due to NLP technology over a baseline state-of-the-art IR system. We describe a principled way of combining different relevance measures to obtain a general relevance (to the user’s question) score that serves as the sort key for the returned paragraphs. The system was trained on a specific corpus, but its functionality is independent of the linguistic register of the training data. The trained QA system that participated in the ResPubliQA shared task is available as a web application at http://www2.racai.ro/sir-resdec/ .
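The combination scheme the abstract describes can be sketched as a weighted linear mixture of per-paragraph relevance measures, with the combined score used as the sort key. This is a minimal illustrative sketch, not the authors' implementation: the factor names and weight values below are hypothetical, and in the actual system the weights are learned from training data.

```python
# Hypothetical sketch: combine several relevance measures into one score
# that serves as the sort key for returned paragraphs. Factor names and
# weights are illustrative, not taken from the paper.

def combined_relevance(factor_scores, weights):
    """Weighted linear combination of individual relevance measures."""
    return sum(weights[name] * score for name, score in factor_scores.items())

def rank_paragraphs(paragraphs, weights):
    """Sort candidate paragraphs by combined relevance to the question."""
    return sorted(
        paragraphs,
        key=lambda p: combined_relevance(p["scores"], weights),
        reverse=True,
    )

# Illustrative data: two candidate paragraphs, three relevance factors each.
weights = {"lexical_overlap": 0.5, "ir_score": 0.3, "question_class_match": 0.2}
candidates = [
    {"id": "p1", "scores": {"lexical_overlap": 0.4, "ir_score": 0.9,
                            "question_class_match": 0.0}},
    {"id": "p2", "scores": {"lexical_overlap": 0.8, "ir_score": 0.6,
                            "question_class_match": 1.0}},
]
ranked = rank_paragraphs(candidates, weights)  # p2 outscores p1 here
```

Because the mixture is linear, the weights can be tuned with standard minimum-error-rate training against a held-out question set, which matches the "trainable" framing of the title.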
Keywords: Content Word · Question Answering · Statistical Machine Translation · Query Algorithm · Question Class