Improving Robustness Using Query Expansion

  • Angel F. Zazo
  • José L. Alonso Berrocal
  • Carlos G. Figuerola
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5152)

Abstract

This paper describes our work at the CLEF 2007 Robust Task. We applied local query expansion using windows of terms, but considered different measures of robustness during the training phase in order to optimize performance: MAP, GMAP, MRR, GS@10, P@10, the number of failed topics, the number of topics with MAP below 0.1, and the number of topics with P@10=0. The results were not disappointing, but no settings were found that improved all measures simultaneously. A key issue for us was to decide which set of measures to select for optimization.
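As a minimal sketch (not taken from the paper), the following Python fragment illustrates how several of the robustness measures named above might be computed from per-topic scores. The per-topic values, the epsilon constant, and all variable names are illustrative assumptions; MRR and GS@10 are omitted because they require per-topic rank information.

    import math

    # Hypothetical per-topic scores: 'ap' is average precision, 'p10' is precision at 10 documents.
    topics = {
        "301": {"ap": 0.32, "p10": 0.5},
        "302": {"ap": 0.05, "p10": 0.1},
        "303": {"ap": 0.00, "p10": 0.0},
    }

    EPS = 1e-5  # small constant so zero-AP topics do not drive the geometric mean to zero

    aps = [t["ap"] for t in topics.values()]
    p10s = [t["p10"] for t in topics.values()]

    map_score = sum(aps) / len(aps)                      # MAP: arithmetic mean of per-topic AP
    gmap_score = math.exp(                               # GMAP: geometric mean of per-topic AP
        sum(math.log(ap + EPS) for ap in aps) / len(aps))
    failed = sum(1 for ap in aps if ap == 0.0)           # number of failed topics
    below_01 = sum(1 for ap in aps if ap < 0.1)          # number of topics below 0.1 MAP
    p10_zero = sum(1 for p in p10s if p == 0.0)          # number of topics with P@10 = 0

    print(map_score, gmap_score, failed, below_01, p10_zero)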

This year all our runs again achieved good rankings, both the base runs and the expanded ones. However, our expansion technique does not significantly improve retrieval performance. Other expansion techniques have been used at the TREC and CLEF Robust Tasks to improve robustness, but the results were not uniform. In conclusion, with regard to robustness the objective must be to build good information retrieval systems rather than to tune particular query expansion techniques.

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Angel F. Zazo (1)
  • José L. Alonso Berrocal (1)
  • Carlos G. Figuerola (1)

  1. REINA Research Group, University of Salamanca, Salamanca, Spain
