Automated Component-Level Evaluation: Present and Future

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6360)

Abstract

This paper focuses on the automated component-level evaluation of information retrieval (IR) systems. We review the current state of web-based and component-level evaluation and, building on these systems, propose a comprehensive framework for web service-based component-level IR system evaluation. We consider the advantages of such an approach as well as the requirements for implementing it. Acceptance by the researchers who develop components and systems is crucial for such a framework to have an impact, and requires that a clear benefit be demonstrated.




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hanbury, A., Müller, H. (2010). Automated Component-Level Evaluation: Present and Future. In: Agosti, M., Ferro, N., Peters, C., de Rijke, M., Smeaton, A. (eds.) Multilingual and Multimodal Information Access Evaluation. CLEF 2010. Lecture Notes in Computer Science, vol. 6360. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15998-5_14

  • DOI: https://doi.org/10.1007/978-3-642-15998-5_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15997-8

  • Online ISBN: 978-3-642-15998-5
