The LIMSI Participation in the QAst Track

  • Sophie Rosset
  • Olivier Galibert
  • Gilles Adda
  • Eric Bilinski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5152)

Abstract

In this paper, we present two question-answering systems for speech transcripts that participated in the QAst 2007 evaluation. Both systems are based on a complete, multi-level analysis of both queries and documents. The first system uses handcrafted rules to select small text fragments (snippets) and extract answers. The second replaces the handcrafted rules with an automatically generated search descriptor; a score based on this descriptor is used to select documents and snippets. Candidate answers are extracted and scored using proximity measurements among the search descriptor elements, together with a number of secondary factors. The evaluation results range from 17% to 39% accuracy, depending on the task.
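The proximity-based scoring described above can be illustrated with a minimal sketch. All names and the exact weighting scheme here are assumptions for illustration (the paper does not specify them): a candidate answer receives a higher score when the elements of the search descriptor occur close to it in the snippet.

```python
def proximity_score(candidate_pos, element_positions, weights=None):
    """Illustrative proximity scoring (hypothetical formulation).

    candidate_pos: token position of the candidate answer in the snippet.
    element_positions: one list of token positions per descriptor element.
    weights: optional per-element importance weights.

    Each descriptor element contributes more when its nearest occurrence
    is close to the candidate; contributions are summed over elements.
    """
    score = 0.0
    for i, positions in enumerate(element_positions):
        if not positions:
            continue  # element absent from this snippet
        distance = min(abs(p - candidate_pos) for p in positions)
        weight = weights[i] if weights is not None else 1.0
        score += weight / (1.0 + distance)
    return score
```

Under this sketch, a candidate adjacent to several descriptor elements outscores one far from all of them; secondary factors mentioned in the abstract (not modeled here) would then adjust the ranking.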



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Sophie Rosset (1)
  • Olivier Galibert (1)
  • Gilles Adda (1)
  • Eric Bilinski (1)

  1. Spoken Language Processing Group, LIMSI-CNRS, Orsay cedex, France
