Towards Automatic Semantic Segmentation in Volumetric Ultrasound

  • Xin Yang
  • Lequan Yu
  • Shengli Li
  • Xu Wang
  • Na Wang
  • Jing Qin
  • Dong Ni
  • Pheng-Ann Heng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10433)


Abstract

3D ultrasound is rapidly emerging as a viable imaging modality for routine prenatal examinations. However, the lack of efficient tools to decompose the volumetric data greatly limits its widespread use. In this paper, we address the problem of volumetric segmentation in ultrasound to promote volume-based, precise maternal and fetal health monitoring. Our contribution is threefold. First, we propose the first fully automatic framework for the simultaneous segmentation of multiple objects, including the fetus, gestational sac and placenta, in ultrasound volumes, which remains a rarely studied but challenging problem. Second, on top of our customized 3D Fully Convolutional Network, we propose to inject a Recurrent Neural Network (RNN) to flexibly explore 3D semantic knowledge from a novel, sequential perspective, and thereby significantly refine local segmentation results that are initially corrupted by the ubiquitous boundary uncertainty in ultrasound volumes. Third, considering the sequence hierarchy, we introduce a hierarchical deep supervision mechanism to effectively boost the information flow within the RNN and further improve the semantic segmentation results. Extensively validated on our large in-house datasets, our approach achieves superior performance and is promising for boosting the interpretation of prenatal ultrasound volumes. Our framework is general and can be easily extended to other volumetric ultrasound segmentation tasks.
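The two architectural ideas in the abstract can be illustrated schematically: (a) treating the slices of a volume as a sequence and letting a recurrent state propagate 3D context that refines each slice's FCN prediction, and (b) hierarchical deep supervision, where auxiliary losses from shallower outputs are added to the final loss with decaying weights. The following is a minimal NumPy sketch, not the authors' implementation: the recurrent update, the weight matrices (randomly initialized here), and the function names `rnn_refine` and `deep_supervised_loss` are all hypothetical stand-ins for the paper's learned modules.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_refine(logit_slices, hidden=16, seed=0):
    """Toy sketch of RNN-based refinement: per-slice segmentation logits
    (e.g. from a 3D FCN) are scanned as a sequence, and a recurrent hidden
    state carries context from previous slices into each slice's output.
    Weights are random placeholders, not trained parameters."""
    rng = np.random.default_rng(seed)
    T, H, W = logit_slices.shape
    D = H * W
    W_in = rng.standard_normal((hidden, D)) * 0.01   # slice -> hidden
    U = rng.standard_normal((hidden, hidden)) * 0.01  # hidden -> hidden
    W_out = rng.standard_normal((D, hidden)) * 0.01   # hidden -> slice

    h = np.zeros(hidden)
    refined = np.empty_like(logit_slices)
    for t in range(T):
        x = logit_slices[t].reshape(D)
        h = np.tanh(W_in @ x + U @ h)       # recurrent state accumulates 3D context
        refined[t] = sigmoid(x + W_out @ h).reshape(H, W)  # context-corrected probs
    return refined

def deep_supervised_loss(losses, decay=0.5):
    """Hierarchical deep supervision sketch: losses[0] is the final output's
    loss; later entries are auxiliary losses from intermediate outputs,
    combined with geometrically decaying weights."""
    weights = [decay ** i for i in range(len(losses))]
    return sum(w * l for w, l in zip(weights, losses))
```

For instance, `rnn_refine(np.zeros((4, 8, 8)))` returns a `(4, 8, 8)` array of per-voxel probabilities, and `deep_supervised_loss([1.0, 1.0, 1.0])` combines a main loss with two auxiliary losses weighted 0.5 and 0.25. In the paper the recurrent module is a learned RNN operating on deep feature maps rather than raw logits; this sketch only shows the data flow.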



The work in this paper was supported by a grant from the National Natural Science Foundation of China (Grant 61571304), and by grants from the Hong Kong Research Grants Council (Project No. CUHK 14202514) and the Hong Kong Innovation and Technology Fund (Project No. GHP/002/13SZ).



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Xin Yang (1)
  • Lequan Yu (1)
  • Shengli Li (2)
  • Xu Wang (3)
  • Na Wang (3)
  • Jing Qin (4)
  • Dong Ni (3), corresponding author
  • Pheng-Ann Heng (1, 5)

  1. Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
  2. Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, Shenzhen, China
  3. National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
  4. School of Nursing, Centre for Smart Health, The Hong Kong Polytechnic University, Hong Kong, China
  5. Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
