Prostate segmentation in transrectal ultrasound using magnetic resonance imaging priors
In the current standard of care, real-time transrectal ultrasound (TRUS) is commonly used for prostate brachytherapy guidance. Because TRUS provides limited soft tissue contrast, segmenting the prostate gland in TRUS images is challenging and subject to inter-observer and intra-observer variability, especially at the base and apex, where the gland boundary is hard to define. Magnetic resonance imaging (MRI) offers higher soft tissue contrast, allowing the prostate to be contoured more easily. In this paper, we aim to show that prostate segmentation in TRUS images informed by MRI priors can improve on segmentation that relies on TRUS images alone.
First, we compare the TRUS-based prostate segmentation used in the treatment of 598 patients with a high-quality MRI prostate atlas and observe inconsistencies at the apex and base. Second, motivated by this finding, we propose an alternative TRUS segmentation technique that is fully automatic and uses MRI priors. The algorithm uses a convolutional neural network to segment the prostate in TRUS images at mid-gland, where the gland boundary can be clearly seen. It then reconstructs the gland boundary at the apex and base with the aid of a statistical shape model built from an MRI atlas of 78 patients.
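The abstract gives no implementation details, so the following Python sketch illustrates one plausible form of the shape-model completion step: a PCA statistical shape model built from point-correspondent atlas surfaces is fitted to sparse mid-gland boundary points, and the model's learned statistics fill in the apex and base. All function names, array shapes, and the regularized least-squares fit are illustrative assumptions, not the authors' code.

```python
import numpy as np

def build_shape_model(training_shapes):
    """PCA shape model from an atlas of aligned, point-correspondent surfaces.

    training_shapes: (n_subjects, n_points * 3) array, e.g., prostate
    surfaces derived from the MRI atlas. Returns the mean shape, the
    principal modes of variation, and the per-mode variances.
    """
    mean_shape = training_shapes.mean(axis=0)
    centered = training_shapes - mean_shape
    _, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
    variances = singular_values ** 2 / (len(training_shapes) - 1)
    return mean_shape, modes, variances

def complete_from_midgland(mean_shape, modes, variances, observed,
                           observed_idx, n_modes=10, reg=1.0):
    """Fit the shape model to sparse mid-gland boundary coordinates (e.g.,
    from the CNN segmentation) and return a complete surface; the apex and
    base are reconstructed from the model's learned shape statistics.
    """
    P = modes[:n_modes].T              # (n_points * 3, n_modes)
    P_obs = P[observed_idx]            # rows for the observed coordinates
    residual = observed - mean_shape[observed_idx]
    # Ridge-regularized least squares; penalizing coefficients by the
    # inverse training variance keeps the reconstruction plausible.
    A = P_obs.T @ P_obs + reg * np.diag(1.0 / variances[:n_modes])
    coeffs = np.linalg.solve(A, P_obs.T @ residual)
    return mean_shape + P @ coeffs

# Toy usage with synthetic data standing in for the 78-patient MRI atlas.
rng = np.random.default_rng(0)
atlas = rng.normal(size=(78, 300))      # 100 surface points in 3D, flattened
mean_shape, modes, variances = build_shape_model(atlas)
mid_idx = np.arange(120, 180)           # stand-in for mid-gland coordinates
target = atlas[0]                       # reuse a training shape as a toy target
recon = complete_from_midgland(mean_shape, modes, variances,
                               target[mid_idx], mid_idx)
unobserved = np.setdiff1d(np.arange(atlas.shape[1]), mid_idx)
rms = np.sqrt(np.mean((recon[unobserved] - target[unobserved]) ** 2))
print(f"RMS error at unobserved coordinates: {rms:.3f}")
```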
Compared to the clinical TRUS segmentation, our method achieves similar mid-gland segmentation results in the 598-patient database. For the seven patients who had both TRUS and MRI, our method achieves more accurate segmentation of the base and apex when the MRI segmentation is used as ground truth.
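The abstract does not name the accuracy metric; a standard choice for this kind of volume-overlap comparison against MRI-derived ground truth is the Dice similarity coefficient, sketched below. Whether the paper uses Dice, surface distances, or another measure is an assumption here.

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary label volumes."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy example: two overlapping cuboid masks standing in for the
# MRI-derived ground truth and a TRUS-based segmentation.
gt = np.zeros((64, 64, 64), dtype=bool)
gt[20:44, 20:44, 20:44] = True
pred = np.zeros_like(gt)
pred[22:46, 20:44, 20:44] = True
print(f"Dice = {dice_coefficient(gt, pred):.3f}")
```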
Our results suggest that using MRI priors in TRUS prostate segmentation could improve performance at the base and apex.
Keywords: Prostate segmentation · Statistical shape model · Magnetic resonance imaging prior · Convolutional neural network
This work was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canadian Institutes of Health Research (CIHR), and Prostate Cancer Canada (PCC). We gratefully acknowledge the support of the Charles Laszlo Chair in Biomedical Engineering held by Professor Salcudean. We also thank the physicians and staff at the Vancouver Cancer Centre who contributed to this project.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent was obtained from all individual participants included in the study.