Improving Pathological Structure Segmentation via Transfer Learning Across Diseases

  • Barleen Kaur
  • Paul Lemaître
  • Raghav Mehta
  • Nazanin Mohammadi Sepahvand
  • Doina Precup
  • Douglas Arnold
  • Tal Arbel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11795)

Abstract

One of the biggest challenges in developing robust machine learning techniques for medical image analysis is the lack of access to the large-scale annotated image datasets needed for supervised learning. When the task is to segment pathological structures (e.g. lesions, tumors) from patient images, training on a dataset with few samples is particularly difficult due to the large class imbalance and inter-subject variability. In this paper, we explore how to best leverage a segmentation model that has been pre-trained on a large dataset of patient images with one disease in order to train a deep learning pathology segmentation model for a different disease, for which only a relatively small patient dataset is available. Specifically, we train a UNet model on a large-scale, proprietary, multi-center, multi-scanner Multiple Sclerosis (MS) clinical trial dataset containing over 3500 multi-modal MRI samples with expert-derived lesion labels. We explore several transfer learning approaches that leverage the learned MS model for the task of multi-class brain tumor segmentation on the BraTS 2018 dataset. Our results indicate that adapting and fine-tuning the encoder and decoder of the network trained on the larger MS dataset improves brain tumor segmentation when few instances are available. This type of transfer learning outperforms training and testing the network on the BraTS dataset from scratch, as well as several other transfer learning approaches, particularly when only a small subset of the dataset is available.
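
The transfer recipe described above can be sketched in code. The following is a minimal, illustrative PyTorch-style example and not the authors' implementation: the `UNet3D` class, the checkpoint name `ms_pretrained.pt`, the layer widths, the number of target classes, and the learning rates are all assumptions made for the sketch. Only the overall recipe follows the abstract: load encoder/decoder weights pre-trained on the MS lesion task, replace the classification head for the multi-class tumor task, then fine-tune the whole network.

```python
# Minimal sketch (assumptions: a PyTorch-style 3D U-Net with separate encoder
# and decoder modules, a hypothetical checkpoint "ms_pretrained.pt", and
# 4 output classes for the BraTS sub-regions).
import torch
import torch.nn as nn


class UNet3D(nn.Module):
    """Placeholder 3D U-Net; the paper's exact architecture may differ."""

    def __init__(self, in_channels=4, num_classes=2, base_filters=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, base_filters, 3, padding=1),
            nn.InstanceNorm3d(base_filters),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Conv3d(base_filters, base_filters, 3, padding=1),
            nn.InstanceNorm3d(base_filters),
            nn.ReLU(inplace=True),
        )
        # Final 1x1x1 classification layer; this is the part that must be
        # re-initialized when moving from binary MS lesion labels to
        # multi-class tumor labels.
        self.classifier = nn.Conv3d(base_filters, num_classes, 1)

    def forward(self, x):
        return self.classifier(self.decoder(self.encoder(x)))


# 1. Build the model with the source-task (MS) head and load pretrained weights
#    (hypothetical checkpoint containing a state dict).
model = UNet3D(in_channels=4, num_classes=2)
model.load_state_dict(torch.load("ms_pretrained.pt", map_location="cpu"))

# 2. Swap the head for the target task (assumed 4 BraTS classes) while keeping
#    the MS-pretrained encoder and decoder weights.
model.classifier = nn.Conv3d(16, 4, kernel_size=1)

# 3. Fine-tune encoder, decoder, and the new head; a smaller learning rate on
#    the pretrained parts and a larger one on the fresh head is one common choice.
optimizer = torch.optim.Adam(
    [
        {"params": model.encoder.parameters(), "lr": 1e-5},
        {"params": model.decoder.parameters(), "lr": 1e-5},
        {"params": model.classifier.parameters(), "lr": 1e-4},
    ]
)
criterion = nn.CrossEntropyLoss()
```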

Keywords

Transfer learning · Brain tumor segmentation · MRI

Acknowledgments

The MS dataset was provided through an award from the International Progressive MS Alliance (PA-1603-08175). The authors would also like to thank Nicholas J. Tustison for his guidance on using the ANTs tool.

Supplementary material

Supplementary material 1 (PDF, 911 KB)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Barleen Kaur (1, 2, 4)
  • Paul Lemaître (2)
  • Raghav Mehta (2)
  • Nazanin Mohammadi Sepahvand (2)
  • Doina Precup (1, 4)
  • Douglas Arnold (3, 5)
  • Tal Arbel (2)

  1. School of Computer Science, McGill University, Montreal, Canada
  2. Centre for Intelligent Machines, McGill University, Montreal, Canada
  3. Montreal Neurological Institute, McGill University, Montreal, Canada
  4. Mila Quebec AI Institute, Montreal, Canada
  5. NeuroRx Research, Montreal, Canada