Bringing the Algorithms to the Data: Cloud-Based Benchmarking for Medical Image Analysis

  • Allan Hanbury
  • Henning Müller
  • Georg Langs
  • Marc André Weber
  • Bjoern H. Menze
  • Tomas Salas Fernandez
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7488)


Benchmarks have proven to be an important tool for advancing science in the fields of information analysis and retrieval. The problems of running a benchmark include obtaining large amounts of data, annotating them, and distributing them to the participants. Distribution is currently mostly done via data download, which can take hours for large data sets and, in countries with slow Internet connections, even days. Shipping physical hard disks has also been used to distribute very large data sets (for example by TRECVid), but this too becomes infeasible once data sets reach sizes of 5–10 TB. With cloud computing it is possible to make very large data sets available in a central place at limited cost. Instead of distributing the data to the participants, the participants run their algorithms on virtual machines of the cloud providers. This text presents reflections and ideas of a concrete project on using cloud-based benchmarking paradigms for medical image analysis and retrieval. Two evaluation campaigns using the proposed technology are planned for 2013 and 2014.
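The claim that downloads take hours to days can be checked with a back-of-the-envelope calculation. The sketch below is illustrative only: the bandwidth figures are assumptions, not values from the paper, but they show why shipping 5–10 TB over the network is impractical compared with running participant algorithms on virtual machines located next to the data.

```python
# Rough transfer-time estimates for benchmark data sets of various sizes.
# Bandwidth values below are illustrative assumptions (typical fast home
# link vs. institutional link), not figures taken from the paper.

def transfer_days(size_tb: float, bandwidth_mbit_s: float) -> float:
    """Days needed to move size_tb terabytes at bandwidth_mbit_s Mbit/s."""
    bits = size_tb * 1e12 * 8                # decimal terabytes -> bits
    seconds = bits / (bandwidth_mbit_s * 1e6)
    return seconds / 86400                   # seconds -> days

for size in (1, 5, 10):
    for bw in (100, 1000):
        print(f"{size:>2} TB at {bw:>4} Mbit/s: "
              f"{transfer_days(size, bw):5.1f} days")
```

At 100 Mbit/s a 10 TB data set takes over nine days of uninterrupted transfer, which is consistent with the "even days" observation above and motivates moving the computation to the data instead.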


Keywords: benchmark; medical image analysis; anatomy detection; case-based medical information retrieval; cloud computing




References

  1. Alonso, O., Rose, D.E., Stewart, B.: Crowdsourcing for relevance evaluation. ACM SIGIR Forum 42(2), 9–15 (2008)
  2. Buyya, R., Yeo, C.S., Venugopal, S.: Market-oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities. In: 10th IEEE International Conference on High Performance Computing and Communications, pp. 5–13. IEEE (2008)
  3. Everingham, M., Zisserman, A., Williams, C.K.I., Van Gool, L., Allan, M., Bishop, C.M., Chapelle, O., Dalal, N., Deselaers, T., Dorkó, G., Duffner, S., Eichhorn, J., Farquhar, J.D.R., Fritz, M., Garcia, C., Griffiths, T., Jurie, F., Keysers, D., Koskela, M., Laaksonen, J., Larlus, D., Leibe, B., Meng, H., Ney, H., Schiele, B., Schmid, C., Seemann, E., Shawe-Taylor, J., Storkey, A.J., Szedmak, S., Triggs, B., Ulusoy, I., Viitaniemi, V., Zhang, J.: The 2005 PASCAL Visual Object Classes Challenge. In: Quiñonero-Candela, J., Dagan, I., Magnini, B., d'Alché-Buc, F. (eds.) MLCW 2005. LNCS (LNAI), vol. 3944, pp. 117–176. Springer, Heidelberg (2006)
  4. Harman, D.: Overview of the first Text REtrieval Conference (TREC-1). In: Proceedings of the First Text REtrieval Conference (TREC-1), Washington DC, USA, pp. 1–20 (1992)
  5. Müller, H., Clough, P., Deselaers, T., Caputo, B. (eds.): ImageCLEF – Experimental Evaluation in Visual Information Retrieval. The Springer International Series on Information Retrieval, vol. 32. Springer, Heidelberg (2010)
  6. Rowe, B.R., Wood, D.W., Link, A.N., Simoni, D.A.: Economic impact assessment of NIST's Text REtrieval Conference (TREC) program. Tech. Rep. Project Number 0211875, RTI International (2010)
  7. Smeaton, A.F., Kraaij, W., Over, P.: TRECVID 2003: An overview. In: Proceedings of the TRECVID 2003 Conference (December 2003)
  8. Thornley, C.V., Johnson, A.C., Smeaton, A.F., Lee, H.: The scholarly impact of TRECVid (2003–2009). JASIST 62(4), 613–627 (2011)
  9. Tsikrika, T., Seco de Herrera, A.G., Müller, H.: Assessing the scholarly impact of ImageCLEF. In: Forner, P., Gonzalo, J., Kekäläinen, J., Lalmas, M., de Rijke, M. (eds.) CLEF 2011. LNCS, vol. 6941, pp. 95–106. Springer, Heidelberg (2011)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Allan Hanbury (1)
  • Henning Müller (2)
  • Georg Langs (3)
  • Marc André Weber (4)
  • Bjoern H. Menze (5)
  • Tomas Salas Fernandez (6)

  1. Vienna University of Technology, Austria
  2. University of Applied Sciences Western Switzerland (HES-SO), Switzerland
  3. CIR, Department of Radiology, Medical University of Vienna, Austria
  4. Radiology Department, University of Heidelberg, Germany
  5. ETH Zürich, Switzerland
  6. Gencat, Spain