In this poster we present an approach to better understand the interactions between search tasks, test collections, and the components and configurations of retrieval systems, by testing a large set of experiment configurations against standard ad-hoc test collections.
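The core idea, exhaustively evaluating the cross-product of retrieval-component choices, can be sketched as follows. The component names and value sets here are purely illustrative assumptions, not the actual components or framework used in the experiments:

```python
from itertools import product

# Illustrative component choices (hypothetical; the real experiment grid
# would cover the framework's actual components and parameters).
STEMMERS = ["none", "porter", "snowball"]
RANKING_MODELS = ["bm25", "lm_dirichlet", "tfidf"]
QUERY_EXPANSION = [False, True]

def enumerate_configurations():
    """Enumerate the cross-product of component choices as experiment configs."""
    return [
        {"stemmer": s, "model": m, "expansion": e}
        for s, m, e in product(STEMMERS, RANKING_MODELS, QUERY_EXPANSION)
    ]

configs = enumerate_configurations()
print(len(configs))  # 3 stemmers * 3 models * 2 expansion settings = 18
```

Each resulting configuration would then be run against each test collection, and the per-topic effectiveness scores compared across components.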


Keywords: Ad-hoc Retrieval · Component-based Evaluation



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Jens Kürsten (1)
  • Maximilian Eibl (1)

  1. Dept. of Computer Science, Chemnitz University of Technology, Chemnitz, Germany
