
Overview of the INEX 2010 Question Answering Track (QA@INEX)

  • Eric SanJuan
  • Patrice Bellot
  • Véronique Moriceau
  • Xavier Tannier
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6932)

Abstract

The INEX Question Answering track (QA@INEX) aims to evaluate a complex question-answering task over Wikipedia. The question set comprises precise factoid questions that expect short answers, as well as more complex questions that can be answered by several sentences or by an aggregation of texts from different documents.

Long answers were evaluated by the Kullback-Leibler (KL) divergence between n-gram distributions, which allowed summarization systems to participate. Most of them generated a readable extract of sentences from the documents ranked highest by a state-of-the-art retrieval engine. Participants also tested several methods of question disambiguation.
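
As a rough illustration of this kind of evaluation (a minimal sketch, not the track's official scoring script: the bigram order, whitespace tokenization, the epsilon smoothing constant, and the toy strings are all assumptions made here for the example), KL divergence between two n-gram distributions can be computed as follows:

    import math
    from collections import Counter

    def ngram_dist(tokens, n=2):
        """Relative frequencies of n-grams in a token sequence."""
        grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
        total = sum(grams.values())
        return {g: c / total for g, c in grams.items()}

    def kl_divergence(p, q, epsilon=1e-9):
        """KL(P || Q), with additive smoothing over the joint support so
        that n-grams unseen in Q do not yield an infinite divergence.
        The epsilon value is an illustrative choice, not the track's."""
        support = set(p) | set(q)
        ps = {g: p.get(g, 0.0) + epsilon for g in support}
        qs = {g: q.get(g, 0.0) + epsilon for g in support}
        zp, zq = sum(ps.values()), sum(qs.values())
        return sum((ps[g] / zp) * math.log((ps[g] / zp) / (qs[g] / zq))
                   for g in support)

    # Lower divergence = the answer's n-gram profile is closer to the
    # reference passages' profile (hypothetical toy strings below).
    reference = "the inex track evaluates complex question answering over wikipedia".split()
    answer = "complex question answering over wikipedia is evaluated by the track".split()
    print(kl_divergence(ngram_dist(reference), ngram_dist(answer)))

Smoothing matters in this setting because a legitimate extract can contain n-grams absent from the reference pool; without it, a single unseen n-gram would make the divergence infinite.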

Evaluation was carried out on a pool of real questions from OverBlog and Yahoo! Answers. Results tend to show that the baseline restricted-focus IR system minimizes KL divergence but lacks readability, whereas summarization systems tend to use longer, stand-alone sentences, improving readability at the cost of higher KL divergence.

Keywords

Kullback-Leibler · Short Answer · Answer Task · Relevant Passage · Jensen-Shannon Divergence


References

  1. Moriceau, V., SanJuan, E., Tannier, X., Bellot, P.: Overview of the 2009 QA track: Towards a common task for QA, focused IR and automatic summarization systems. In: [7], pp. 355–365 (2009)
  2. Schenkel, R., Suchanek, F.M., Kasneci, G.: YAWN: A semantically annotated Wikipedia XML corpus. In: Kemper, A., Schöning, H., Rose, T., Jarke, M., Seidl, T., Quix, C., Brochhaus, C. (eds.) BTW 2007. LNI, vol. 103, pp. 277–291. GI (2007)
  3. Pitler, E., Louis, A., Nenkova, A.: Automatic evaluation of linguistic quality in multi-document summarization. In: ACL, pp. 544–554 (2010)
  4. Nenkova, A., Passonneau, R.: Evaluating content selection in summarization: The pyramid method. In: Proceedings of HLT-NAACL 2004 (2004)
  5. Dang, H.: Overview of the TAC 2008 opinion question answering and summarization tasks. In: Proceedings of the First Text Analysis Conference (2008)
  6. Louis, A., Nenkova, A.: Performance confidence estimation for automatic summarization. In: EACL, pp. 541–548. The Association for Computational Linguistics (2009)
  7. Geva, S., Kamps, J., Trotman, A. (eds.): Focused Retrieval and Evaluation: 8th International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2009, Brisbane, Australia, December 7–9, 2009. Revised and selected papers. LNCS, vol. 6203. Springer, Heidelberg (2010)
  8. Moriceau, V., Tannier, X.: FIDJI: Using syntax for validating answers in multiple documents. Information Retrieval, Special Issue on Focused Information Retrieval 13, 507–533 (2010)
  9. SanJuan, E., Ibekwe-Sanjuan, F.: Combining language models with NLP and interactive query expansion. In: [7], pp. 122–132
  10. Ibekwe-Sanjuan, F., SanJuan, E.: Use of multiword terms and query expansion for interactive information retrieval. In: Geva, S., Kamps, J., Trotman, A. (eds.) INEX 2008. LNCS, vol. 5631, pp. 54–64. Springer, Heidelberg (2009)
  11. Allan, J., Aslam, J., Belkin, N.J., Buckley, C., Callan, J.P., Croft, W.B., Dumais, S.T., Fuhr, N., Harman, D., Harper, D.J., Hiemstra, D., Hofmann, T., Hovy, E.H., Kraaij, W., Lafferty, J.D., Lavrenko, V., Lewis, D.D., Liddy, L., Manmatha, R., McCallum, A., Ponte, J.M., Prager, J.M., Radev, D.R., Resnik, P., Robertson, S.E., Rosenfeld, R., Roukos, S., Sanderson, M., Schwartz, R.M., Singhal, A., Smeaton, A.F., Turtle, H.R., Voorhees, E.M., Weischedel, R.M., Xu, J., Zhai, C.: Challenges in information retrieval and language modeling: Report of a workshop held at the Center for Intelligent Information Retrieval, University of Massachusetts Amherst. SIGIR Forum 37(1), 31–47 (2002)
  12. Gao, J., Qi, H., Xia, X., Nie, J.Y.: Linear discriminant model for information retrieval, pp. 290–297. ACM, New York (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Eric SanJuan (1)
  • Patrice Bellot (1)
  • Véronique Moriceau (2)
  • Xavier Tannier (2)
  1. LIA, Université d’Avignon et des Pays de Vaucluse, France
  2. LIMSI-CNRS, University Paris-Sud 11, France
