Coreference Resolution for Questions and Answer Merging by Validation

  • Sven Hartrumpf
  • Ingo Glöckner
  • Johannes Leveling
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5152)

Abstract

For its fourth participation in QA@CLEF, the German question answering (QA) system InSicht was improved for CLEF 2007 in three main areas: questions containing pronominal or nominal anaphors are handled by a coreference resolver; the shallow QA methods were improved; and a specialized module for answer merging was added. Results showed a performance drop compared to the previous year, mainly due to problems in handling the newly added Wikipedia corpus. However, the dialog treatment by coreference resolution delivered very accurate results, so that follow-up questions can be handled much like isolated questions.
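The dialog treatment described above rewrites follow-up questions so that they become self-contained and can be answered like isolated questions. As a toy illustration of that rewriting idea only (InSicht's actual resolver operates on semantic network representations, not on surface strings; the function and names below are hypothetical), a pronominal anaphor in a follow-up question can be replaced by an antecedent mentioned in the preceding question:

```python
# Toy sketch of anaphor substitution in follow-up questions.
# InSicht's real coreference resolver works on semantic networks;
# this string-level version only illustrates the rewriting step.

PRONOUNS = {"he", "she", "it", "they"}

def resolve_follow_up(previous_entities, follow_up):
    """Replace the first pronoun in `follow_up` with the most
    recently mentioned entity, yielding a standalone question."""
    tokens = follow_up.split()
    for i, tok in enumerate(tokens):
        if tok.lower().rstrip("?") in PRONOUNS:
            antecedent = previous_entities[-1]  # naive: last mention wins
            suffix = "?" if tok.endswith("?") else ""
            tokens[i] = antecedent + suffix
            break
    return " ".join(tokens)

print(resolve_follow_up(["Goethe"], "When did he die?"))
```

Once the pronoun is substituted, the rewritten question can be fed to the QA pipeline unchanged, which is why accurate coreference resolution lets follow-up questions be processed like isolated ones.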

Keywords

Question Answering · Performance Drop · Coreference Resolution · Question Answering System · Answer Candidate

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Sven Hartrumpf¹
  • Ingo Glöckner¹
  • Johannes Leveling¹

  1. Intelligent Information and Communication Systems (IICS), University of Hagen (FernUniversität in Hagen), Hagen, Germany