This work introduces a new multimodal image dataset designed to capture the interplay between the visual elements and the semantic relations present in radiology images. To this end, all image-caption pairs were retrieved from the open-access biomedical literature database PubMed Central, as these captions describe the visual content in its semantic context. Compound, multi-panel, and non-radiology images were removed with a binary classifier based on a fine-tuned deep convolutional neural network. The resulting Radiology Objects in COntext (ROCO) dataset contains over 81,000 radiology images spanning several medical imaging modalities, including Computed Tomography, Ultrasound, X-ray, Fluoroscopy, Positron Emission Tomography, Mammography, Magnetic Resonance Imaging, and Angiography. Each image in ROCO is accompanied by a caption, keywords, and Unified Medical Language System (UMLS) Concept Unique Identifiers and Semantic Types. In addition, an out-of-class set of 6,000 images, ranging from synthetic radiology figures to digital art, is provided to improve prediction and classification performance. With ROCO, systems for caption and keyword generation can be trained, enabling multimodal representations for datasets that lack textual annotations. ROCO can also be used to build systems for image structuring and semantic information tagging, which support image and information retrieval.
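The per-image annotations described above (caption, keywords, UMLS Concept Unique Identifiers) suggest a simple record structure. The following sketch, with entirely hypothetical field names and example values (the real ROCO distribution format may differ), shows how such a record could be parsed and how naive keywords might be derived from a caption; the actual ROCO keyword generation is more involved than this stopword filter:

```python
import csv
import io
import re

# Hypothetical ROCO-style CSV row: image id, caption, UMLS CUIs (semicolon-separated).
# Field names and values are illustrative only, not the official dataset format.
SAMPLE = """id,caption,cuis
roco_0001,Axial computed tomography of the chest showing a nodule,C0040405;C0034079
"""

STOPWORDS = {"a", "an", "the", "of", "showing"}

def parse_rows(text):
    """Parse a ROCO-style CSV into dicts with caption, CUI list, and naive keywords."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        tokens = re.findall(r"[a-z]+", row["caption"].lower())
        row["keywords"] = [t for t in tokens if t not in STOPWORDS]
        row["cuis"] = row["cuis"].split(";")
        rows.append(row)
    return rows

records = parse_rows(SAMPLE)
print(records[0]["keywords"])
# ['axial', 'computed', 'tomography', 'chest', 'nodule']
```

Such records pair each image with both free text (the caption) and structured semantics (the CUIs), which is what makes the multimodal training setups described above possible.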


Keywords: Deep learning · Image retrieval · Image captioning · Multimodal representation · Natural language processing · Radiology



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Obioma Pelka (1, 3), corresponding author
  • Sven Koitka (1, 2, 4)
  • Johannes Rückert (1)
  • Felix Nensa (4)
  • Christoph M. Friedrich (1, 5)
  1. Department of Computer Science, University of Applied Sciences and Arts Dortmund (FHDO), Dortmund, Germany
  2. Department of Computer Science, TU Dortmund University, Dortmund, Germany
  3. University of Duisburg-Essen, University Hospital Essen, Essen, Germany
  4. Department of Diagnostic and Interventional Radiology and Neuroradiology, Essen, Germany
  5. Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), Essen, Germany
