Automated Component–Level Evaluation: Present and Future

  • Allan Hanbury
  • Henning Müller
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6360)


Automated component-level evaluation of information retrieval (IR) systems is the main focus of this paper. We review the current state of web-based and component-level evaluation. Based on these systems, we propose a comprehensive framework for web service-based component-level evaluation of IR systems, and consider both the advantages of such an approach and the requirements for implementing it. Acceptance of such systems by the researchers who develop components and systems is crucial for impact, and requires that a clear benefit be demonstrated.


Keywords: Information Retrieval · Level Evaluation · Information Retrieval System · Relevance Judgement · Evaluation Campaign





Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Allan Hanbury (1)
  • Henning Müller (2)
  1. Information Retrieval Facility, Vienna, Austria
  2. University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
