Integrating 3D Geometry of Organ for Improving Medical Image Segmentation

  • Jiawen Yao
  • Jinzheng Cai
  • Dong Yang
  • Daguang Xu
  • Junzhou Huang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

Prior knowledge of organ shape and location plays an important role in medical image segmentation. However, traditional 2D/3D segmentation methods usually operate as pixel-wise/voxel-wise classifiers whose training objectives cannot incorporate 3D shape knowledge explicitly. In this paper, we propose an efficient deep shape-aware network to learn the 3D geometry of the organ. More specifically, the network uses a 3D mesh representation in a graph-based CNN, which can handle shape inference and accuracy propagation effectively. After integrating the shape-aware module into the backbone FCNs and jointly training the full model in a multi-task framework, the discriminative capability of the intermediate feature representations is increased by both geometry and segmentation regularizations, which disentangle the subtly correlated tasks. Experimental results show that the proposed network not only outputs accurate segmentations but also generates smooth 3D meshes that can be used for further 3D shape analysis.
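The abstract describes two ingredients: a graph-based CNN that operates on a 3D mesh representation of the organ, and a multi-task objective that jointly regularizes geometry and segmentation. A minimal sketch of both is given below, assuming a standard GCN-style vertex update (symmetrically normalized adjacency, as in common graph-CNN formulations) and a simple weighted-sum loss; the function names (`graph_conv`, `total_loss`) and the weighting factor `lam` are illustrative and not the authors' implementation.

```python
import numpy as np

def normalize_adjacency(adj):
    """Symmetrically normalize A + I (a common GCN preprocessing step)."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_conv(h, a_norm, w):
    """One graph-convolution layer on mesh vertices:
    aggregate neighbor features along mesh edges, project, apply ReLU."""
    return np.maximum(a_norm @ h @ w, 0.0)

def total_loss(seg_loss, mesh_loss, lam=0.5):
    """Multi-task objective: segmentation loss plus weighted geometry loss."""
    return seg_loss + lam * mesh_loss

# Tiny example mesh: a tetrahedron (4 vertices, every pair connected by an edge).
adj = np.ones((4, 4)) - np.eye(4)
a_norm = normalize_adjacency(adj)

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))   # per-vertex features (e.g., pooled from the FCN)
w = rng.standard_normal((8, 3))   # project features to per-vertex 3D offsets

vertex_offsets = graph_conv(h, a_norm, w)
print(vertex_offsets.shape)       # one 3D deformation vector per mesh vertex
```

In practice the per-vertex features would be sampled from the backbone FCN's intermediate feature maps at the vertex locations, so that gradients from the geometry loss also shape the segmentation features.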

Acknowledgements

This work was partially supported by US National Science Foundation grant IIS-1718853 and NSF CAREER grant IIS-1553687.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Jiawen Yao (1)
  • Jinzheng Cai (2)
  • Dong Yang (3)
  • Daguang Xu (3)
  • Junzhou Huang (1)

  1. Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, USA
  2. Department of Biomedical Engineering, University of Florida, Gainesville, USA
  3. NVIDIA Corporation, Bethesda, USA