Improving Question Answering by Combining Multiple Systems Via Answer Validation

  • Alberto Téllez-Valero
  • Manuel Montes-y-Gómez
  • Luis Villaseñor-Pineda
  • Anselmo Peñas
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4919)


Several kinds of question answering systems exist today. Recent evaluation results show that most of these systems are complementary: each one outperforms the others on some specific type of question. This suggests that an appropriate combination of several systems could improve on the best individual result. This paper addresses that problem. It proposes using an answer validation method to carry out the combination. The main advantage of this approach is that it relies neither on the internal features of the individual systems nor on the redundancy of their answers. Experimental results confirm the appropriateness of the proposal: they show that it outperforms the results of the individual systems as well as the precision obtained by a redundancy-based combination strategy.
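The contrast the abstract draws can be illustrated with a minimal sketch. The snippet below is not the paper's actual pipeline; the function names and the toy validation scorer are hypothetical, with the scorer standing in for the entailment-style answer validation module the paper proposes. It shows how a validation-based combiner can select a correct answer proposed by only one system, where a redundancy-based combiner would be misled by agreement among the others.

```python
from collections import Counter

def redundancy_combine(candidates):
    """Redundancy-based baseline: keep the answer proposed most often.

    `candidates` is a list of (system_name, answer) pairs; hypothetical
    interface, chosen for illustration only.
    """
    counts = Counter(answer for _, answer in candidates)
    return counts.most_common(1)[0][0]

def validation_combine(candidates, validate):
    """Validation-based combination: score every candidate answer with an
    answer-validation function (e.g., a textual-entailment check of the
    answer against its support passage) and keep the best-scoring one,
    regardless of how many systems proposed it.
    """
    return max(candidates, key=lambda pair: validate(pair[1]))[1]

if __name__ == "__main__":
    # Three hypothetical QA systems; two agree on a wrong answer.
    candidates = [("sysA", "Paris"), ("sysB", "Lyon"), ("sysC", "Lyon")]
    # Toy validation scores standing in for a real entailment module.
    scores = {"Paris": 0.9, "Lyon": 0.4}
    print(redundancy_combine(candidates))              # picks "Lyon"
    print(validation_combine(candidates, scores.get))  # picks "Paris"
```

The design point is that `validation_combine` needs no access to the systems' internals and no agreement among their outputs, which is exactly the advantage the abstract claims for answer validation over redundancy.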


Ensemble Method · Question Answering · Longest Common Subsequence · Answer Validation





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Alberto Téllez-Valero (1)
  • Manuel Montes-y-Gómez (1)
  • Luis Villaseñor-Pineda (1)
  • Anselmo Peñas (2)
  1. Instituto Nacional de Astrofísica, Óptica y Electrónica, Grupo de Tecnologías del Lenguaje, Sta. María Tonantzintla, Mexico
  2. Depto. Lenguajes y Sistemas Informáticos, Universidad Nacional de Educación a Distancia, Madrid, Spain
