
Residual Semantic Segmentation of the Prostate from Magnetic Resonance Images

  • Md Sazzad Hossain
  • Andrew P. Paplinski
  • John M. Betts
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11307)

Abstract

The diagnosis and treatment of prostate cancer require the accurate segmentation of the prostate in Magnetic Resonance Images (MRI). Manual segmentation is currently the most accurate method of performing this task; however, it requires specialist knowledge and is time-consuming. To overcome these limitations, we demonstrate automatic segmentation of the prostate region in MR images using a VGG19-based fully convolutional neural network. This new network, VGG19RSeg, identifies a region of interest in the image using semantic segmentation, that is, a pixel-wise classification of the content of the input image. Although several studies have applied fully convolutional neural networks to medical image segmentation tasks, our study introduces two new forms of residual connections (remote and neighbouring) which increase the accuracy of segmentation over the basic architecture. Our results using this new architecture show that the proposed VGG19RSeg can achieve a mean Dice Similarity Coefficient of 94.57%, making it more accurate than comparable methods reported in the literature.
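
As a point of reference for the accuracy figure above, the Dice Similarity Coefficient is the standard overlap measure DSC = 2|A ∩ B| / (|A| + |B|) between a predicted mask A and a ground-truth mask B. The following minimal NumPy sketch (the function name and the empty-mask convention are illustrative, not taken from the paper) shows how it is typically computed for a pair of binary segmentation masks:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    total = pred.sum() + true.sum()
    if total == 0:
        return 1.0  # convention assumed here: two empty masks count as perfect agreement
    return 2.0 * intersection / total
```

A mean DSC of 94.57% therefore corresponds to an average per-image overlap of roughly 0.95 between the network's predicted prostate region and the manual reference segmentation.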

Keywords

Prostate MRI images · Semantic segmentation · Deep convolutional neural networks

Notes

Acknowledgements

The datasets used in this study were obtained from the PROMISE12 grand challenge for prostate segmentation. The authors wish to thank the Monash University Massive-HPC facility for the provision of high-performance computing resources.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Md Sazzad Hossain (1)
  • Andrew P. Paplinski (1)
  • John M. Betts (1)
  1. Faculty of Information Technology, Monash University, Melbourne, Australia
