FetusMap: Fetal Pose Estimation in 3D Ultrasound

  • Xin Yang
  • Wenlong Shi
  • Haoran Dou
  • Jikuan Qian
  • Yi Wang
  • Wufeng Xue
  • Shengli Li
  • Dong Ni (corresponding author)
  • Pheng-Ann Heng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

The advent of 3D ultrasound (US) has inspired a multitude of automated prenatal examinations. However, studies on structured descriptions of the whole fetus in 3D US remain rare. In this paper, we propose to estimate the 3D pose of the fetus in US volumes to facilitate quantitative analyses at both global and local scales. Given the great challenges of 3D US, including high volume dimensionality, poor image quality, symmetric ambiguity among anatomical structures, and large variations in fetal pose, our contribution is three-fold. (1) This is the first work on 3D fetal pose estimation in the literature. We aim to extract the skeleton of the whole fetus and assign its segments/joints the correct torso/limb labels. (2) We propose a self-supervised learning (SSL) framework that fine-tunes the deep network to produce visually plausible pose predictions. Specifically, we leverage landmark-based registration to effectively encode case-adaptive anatomical priors and generate evolving proxy labels for supervision. (3) To enable our 3D network to perceive richer contextual cues from higher-resolution input under limited computing resources, we further adopt a gradient check-pointing (GCP) strategy to save GPU memory and improve prediction. Extensively validated on a large 3D US dataset, our method handles varying fetal poses and achieves promising results. 3D fetal pose estimation has the potential to serve as a map that provides navigation for many advanced studies.
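
To make contribution (1) concrete: pose/landmark networks are commonly trained to regress one Gaussian heatmap channel per joint. The abstract does not specify FetusMap's exact target encoding, so the Gaussian formulation, the `sigma` value, and the example joint coordinates below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def gaussian_heatmap_3d(shape, center, sigma=3.0):
    """Isotropic 3D Gaussian blob centered at `center` (z, y, x)."""
    zz, yy, xx = np.meshgrid(*(np.arange(s) for s in shape), indexing="ij")
    d2 = (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2)).astype(np.float32)

def pose_targets(shape, joints, sigma=3.0):
    """Stack one heatmap channel per fetal joint/landmark (torso, limbs, ...)."""
    return np.stack([gaussian_heatmap_3d(shape, j, sigma) for j in joints])

# Two hypothetical joints in a 64^3 volume.
targets = pose_targets((64, 64, 64), [(20, 30, 30), (44, 32, 28)])
print(targets.shape)  # (2, 64, 64, 64)
```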
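For contribution (2), the abstract describes registering anatomical priors to the network's own predictions to produce evolving proxy labels. The loop below is a hedged sketch of that idea under a simplifying assumption: a least-squares 3D affine registration of template landmarks onto predicted landmarks, with the warped template used as the case-adaptive proxy for the next fine-tuning round. The transform model and helper names are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 3D affine transform mapping src landmarks onto dst (N x 3)."""
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # 4 x 3 solution
    return A

def proxy_labels(template_pts, predicted_pts):
    """Warp anatomically plausible template landmarks onto the current prediction."""
    A = fit_affine(template_pts, predicted_pts)
    src_h = np.hstack([template_pts, np.ones((len(template_pts), 1))])
    return src_h @ A  # case-adaptive proxy labels for supervision

template = np.array([[10., 10., 10.], [20., 12., 11.],
                     [30., 15., 13.], [25., 30., 20.]])
predicted = template + 0.5 * np.random.randn(4, 3)  # stand-in for network output
proxy = proxy_labels(template, predicted)
print(proxy.round(1))
```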
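Contribution (3) uses gradient check-pointing, a standard compute-for-memory trade: activations inside checkpointed blocks are discarded in the forward pass and recomputed during backward, so larger US volumes fit in GPU memory. A minimal PyTorch sketch with `torch.utils.checkpoint` follows; the tiny network is illustrative, not FetusMap's architecture.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Checkpointed3DNet(nn.Module):
    """Tiny 3D conv net whose blocks recompute activations in backward."""
    def __init__(self, ch=16):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(1, ch, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU())
        self.head = nn.Conv3d(ch, 2, 1)  # e.g. two landmark heatmap channels

    def forward(self, x):
        # Only block boundaries are stored; intermediate activations are
        # recomputed during backward, cutting memory for large volumes.
        x = checkpoint(self.block1, x, use_reentrant=False)
        x = checkpoint(self.block2, x, use_reentrant=False)
        return self.head(x)

net = Checkpointed3DNet()
volume = torch.randn(1, 1, 64, 64, 64, requires_grad=True)
net(volume).mean().backward()  # backward recomputes checkpointed activations
```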

Acknowledgments

The work described in this paper was supported by grants from the Research Grants Council of Hong Kong SAR (Project No. CUHK14225616), the National Natural Science Foundation of China (Project No. U1813204), and the Shenzhen Peacock Plan (Nos. KQTD2016053112051497 and KQJSCX20180328095606003).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Xin Yang (1)
  • Wenlong Shi (2, 3)
  • Haoran Dou (2, 3)
  • Jikuan Qian (2, 3)
  • Yi Wang (2, 3)
  • Wufeng Xue (2, 3)
  • Shengli Li (4)
  • Dong Ni (2, 3) (corresponding author)
  • Pheng-Ann Heng (1)
  1. Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
  2. National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
  3. Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China
  4. Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, Shenzhen, China
