Experiments on Robust NL Question Interpretation and Multi-layered Document Annotation for a Cross–Language Question/Answering System
This report describes the work done by the QA group of the Language Technology Lab at DFKI for the 2004 edition of the Cross-Language Evaluation Forum (CLEF). Based on the experience gained through our participation at QA@CLEF-2003 with our initial cross-lingual QA prototype system BiQue (cf. ), the extension of the system for this year's task focused on a) robust NL question interpretation using advanced linguistics-based components, b) flexible interface strategies to IR search engines, and c) strategies for off-line annotation of the document collection that support query-specific indexing and answer selection.
The overall architecture of the extended system, as well as the results obtained in the CLEF–2004 Monolingual German and Bilingual German/English QA tracks will be presented and discussed throughout the paper.