Extending 2D Deep Learning Architectures to 3D Image Segmentation Problems

  • Alberto Albiol
  • Antonio Albiol
  • Francisco Albiol
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11384)

Abstract

Several deep learning architectures are combined for brain tumor segmentation. All of the architectures are inspired by recent 2D models in which the 2D convolutions have been replaced by 3D convolutions. The key differences between the architectures are the size of the receptive field and the number of feature maps in the final layers. The results obtained are comparable to the top methods of previous BraTS challenges when the median is used to aggregate the results. Further investigation is still needed to analyze the outlier patients.
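
To make the 2D-to-3D conversion described in the abstract concrete, below is a minimal sketch of a VGG-style block with its 2D convolutions and pooling swapped for their 3D equivalents, assuming a Keras-style TensorFlow implementation. The helper name vgg_block_3d, the filter counts, and the input patch shape are illustrative assumptions, not the authors' exact configuration.

# A minimal sketch of converting a VGG-style 2D block into its 3D counterpart.
# Filter counts, kernel sizes and the input patch shape are illustrative
# assumptions, not the authors' exact configuration.
from tensorflow.keras import layers, models

def vgg_block_3d(x, filters):
    # The usual VGG block (two 3x3 convolutions + 2x2 max pooling) with
    # Conv2D/MaxPooling2D swapped for Conv3D/MaxPooling3D.
    x = layers.Conv3D(filters, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.Conv3D(filters, kernel_size=3, padding="same", activation="relu")(x)
    return layers.MaxPooling3D(pool_size=2)(x)

# Example input: a 64^3 patch with the four MRI modalities stacked as channels.
inputs = layers.Input(shape=(64, 64, 64, 4))
x = vgg_block_3d(inputs, 32)
x = vgg_block_3d(x, 64)
# Per-voxel class scores (here 4 labels) at the reduced resolution; a full
# segmentation network would also restore the original resolution, which is
# omitted from this sketch.
outputs = layers.Conv3D(4, kernel_size=1, activation="softmax")(x)
model = models.Model(inputs, outputs)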

Keywords

Brain segmentation · BraTS · 3D Inception · 3D VGG · 3D densely connected · 3D Xception

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. iTeam, Universitat Politècnica de València, Valencia, Spain
  2. Instituto de Física Corpuscular (IFIC), Universitat de València, Consejo Superior de Investigaciones Científicas, Valencia, Spain