Overview of the INEX 2011 Question Answering Track (QA@INEX)

  • Eric SanJuan
  • Véronique Moriceau
  • Xavier Tannier
  • Patrice Bellot
  • Josiane Mothe
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7424)

Abstract

The INEX QA track aimed to evaluate a complex question-answering task in which answers are short texts generated from Wikipedia by extracting relevant short passages and aggregating them into a coherent summary. Such a task combines question answering, XML/passage retrieval, and automatic summarization in order to get closer to real information needs. Building on the groundwork carried out in the 2009–2010 editions to define the sub-tasks and a novel evaluation methodology, the 2011 edition experimented with contextualizing tweets using a recent cleaned dump of Wikipedia. Participants had to contextualize 132 tweets from the New York Times (NYT). Answers were evaluated for both informativeness and readability. 13 teams from 6 countries actively participated in this track. The tweet contextualization task will continue in 2012 as part of the CLEF INEX lab, with the same methodology and baseline but on a much wider range of tweet types.
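The informativeness evaluation mentioned above compares the n-gram content of a submitted summary against pooled reference passages (in the spirit of refs. [8] and [10] below). The sketch that follows is a minimal illustration of that idea only, assuming a smoothed unigram KL divergence; the tokenization, smoothing scheme, and function names are our assumptions, not the official QA@INEX measure.

```python
"""Illustrative sketch of an n-gram informativeness score.

This is NOT the official QA@INEX metric: the track used its own
n-gram based measures. Here we assume a simple KL divergence over
smoothed unigram distributions, purely for demonstration.
"""
import math
import re
from collections import Counter


def unigram_dist(text: str) -> Counter:
    """Lowercased word counts; regex tokenization is a simplification."""
    return Counter(re.findall(r"\w+", text.lower()))


def informativeness(reference: str, summary: str) -> float:
    """KL(reference || summary) with add-one smoothing; lower is better."""
    ref, summ = unigram_dist(reference), unigram_dist(summary)
    vocab = set(ref) | set(summ)
    ref_total = sum(ref.values()) + len(vocab)
    summ_total = sum(summ.values()) + len(vocab)
    kl = 0.0
    for w in vocab:
        p = (ref[w] + 1) / ref_total    # smoothed reference probability
        q = (summ[w] + 1) / summ_total  # smoothed summary probability
        kl += p * math.log(p / q)
    return kl


if __name__ == "__main__":
    ref = "The INEX 2011 track asked systems to contextualize tweets with Wikipedia passages."
    good = "Systems contextualize tweets using passages extracted from Wikipedia."
    bad = "Completely unrelated text about cooking recipes."
    # A topically relevant summary should diverge less from the reference.
    print(informativeness(ref, good) < informativeness(ref, bad))  # True
```

A pooled set of passages judged relevant by assessors would play the role of `reference` here; readability, the track's other criterion, was assessed by humans and is not modeled in this sketch.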

Keywords

Question Answering · Automatic Summarization · Focused Information Retrieval · XML · Natural Language Processing · Wikipedia · Text Readability · Text Informativeness

References

  1. Aït-Mokhtar, S., Chanod, J.P., Roux, C.: Robustness beyond shallowness: Incremental deep parsing. Natural Language Engineering 8, 121–144 (2002)
  2. Chen, C., Ibekwe-Sanjuan, F., Hou, J.: The structure and dynamics of cocitation clusters: A multiple-perspective cocitation analysis. JASIST 61(7), 1386–1409 (2010)
  3. Dang, H.: Overview of the TAC 2008 Opinion Question Answering and Summarization Tasks. In: Proc. of the First Text Analysis Conference (2008)
  4. Geva, S., Kamps, J., Trotman, A. (eds.): INEX 2009. LNCS, vol. 6203. Springer, Heidelberg (2010)
  5. Louis, A., Nenkova, A.: Performance confidence estimation for automatic summarization. In: EACL, pp. 541–548. The Association for Computer Linguistics (2009)
  6. Metzler, D., Croft, W.B.: Combining the language model and inference network approaches to retrieval. Inf. Process. Manage. 40(5), 735–750 (2004)
  7. Moriceau, V., SanJuan, E., Tannier, X., Bellot, P.: Overview of the 2009 QA Track: Towards a Common Task for QA, Focused IR and Automatic Summarization Systems. In: Geva et al. [4], pp. 355–365
  8. Nenkova, A., Passonneau, R.: Evaluating content selection in summarization: The pyramid method. In: Proceedings of HLT-NAACL 2004 (2004)
  9. Pitler, E., Louis, A., Nenkova, A.: Automatic evaluation of linguistic quality in multi-document summarization. In: ACL, pp. 544–554 (2010)
  10. Saggion, H., Torres-Moreno, J.M., da Cunha, I., SanJuan, E., Velázquez-Morales, P.: Multilingual summarization evaluation without human models. In: Huang, C.R., Jurafsky, D. (eds.) COLING (Posters), pp. 1059–1067. Chinese Information Processing Society of China (2010)
  11. SanJuan, E., Bellot, P., Moriceau, V., Tannier, X.: Overview of the INEX 2010 Question Answering Track (QA@INEX). In: Geva, S., Kamps, J., Schenkel, R., Trotman, A. (eds.) INEX 2010. LNCS, vol. 6932, pp. 269–281. Springer, Heidelberg (2011)
  12. SanJuan, E., Ibekwe-Sanjuan, F.: Combining language models with NLP and interactive query expansion. In: Geva et al. [4], pp. 122–132

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Eric SanJuan (1)
  • Véronique Moriceau (2)
  • Xavier Tannier (2)
  • Patrice Bellot (3)
  • Josiane Mothe (4)

  1. LIA, Université d’Avignon et des Pays de Vaucluse, France
  2. LIMSI-CNRS, University Paris-Sud, France
  3. LSIS, Aix-Marseille University, France
  4. IRIT, Toulouse University, France
