
Harnessing 2D Networks and 3D Features for Automated Pancreas Segmentation from Volumetric CT Images

  • Huai Chen
  • Xiuying Wang
  • Yijie Huang
  • Xiyi Wu
  • Yizhou Yu
  • Lisheng Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

Segmenting the pancreas from abdominal CT scans is an important prerequisite for pancreatic cancer diagnosis and precise treatment planning. However, automated pancreas segmentation faces challenges posed by variations in shape and size, low contrast with adjacent tissues and, in particular, the negligibly small proportion of the pancreas within the whole abdominal volume. Current coarse-to-fine frameworks, whether using tri-planar schemes or stacking 2D pre-segmentations as priors to 3D networks, have limitations in effectively capturing 3D information. While iteratively updating the region of interest (ROI) in the refinement stage alleviates accumulated errors caused by coarse segmentation, it introduces extra computational burden. In this paper, we harness 2D networks and 3D features to improve segmentation accuracy and efficiency. Firstly, in the 3D coarse segmentation network, a new bias-dice loss function is defined to increase ROI recall rates, improving efficiency by avoiding iterative ROI refinements. Secondly, for full utilization of 3D information, a dimension adaptation module (DAM) is introduced to bridge 2D networks and 3D information. Finally, a fusion decision module and a parallel training strategy are proposed to fuse multi-source feature cues extracted from three sub-networks into final predictions. The proposed method is evaluated on the NIH dataset and outperforms the state-of-the-art methods in comparison, achieving a mean Dice-Sørensen coefficient (DSC) of 85.22% with an average runtime of 0.4 min per instance.
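
The abstract does not spell out the bias-dice formula, but a plausible reading is a Tversky-style soft Dice that weights false negatives more heavily than false positives, so that the coarse-stage ROI tends to fully cover the pancreas. Below is a minimal PyTorch sketch under that assumption; the weighting parameter beta is hypothetical (beta = 1 recovers the ordinary soft Dice loss), not a value taken from the paper.

    import torch

    def bias_dice_loss(pred, target, beta=2.0, eps=1e-6):
        """Recall-weighted (Tversky-style) soft Dice loss.

        Sketch of a 'bias-dice' that penalizes false negatives more
        heavily than false positives. `beta` (hypothetical) controls
        the bias: beta > 1 trades precision for recall, which keeps
        the coarse ROI from cropping away pancreas voxels.
        pred: foreground probabilities in [0, 1]; target: binary mask.
        """
        pred = pred.flatten()
        target = target.flatten()
        tp = (pred * target).sum()            # soft true positives
        fp = (pred * (1 - target)).sum()      # soft false positives
        fn = ((1 - pred) * target).sum()      # soft false negatives
        return 1 - (2 * tp + eps) / (2 * tp + fp + beta * fn + eps)

Likewise, the DAM is described only as bridging 2D networks and 3D information. One way such a bridge could look is a small stack of 3D convolutions that collapses the depth axis of a CT sub-volume into learned channels consumable by a 2D network; the class below is a hypothetical sketch of that idea, not the authors' architecture.

    import torch
    import torch.nn as nn

    class DimensionAdaptationModule(nn.Module):
        """Hypothetical DAM sketch: compress a 3D sub-volume along the
        depth axis with 3D convolutions so a 2D segmentation network
        can consume the result as a multi-channel image."""

        def __init__(self, depth, mid_channels=16, out_channels=3):
            super().__init__()
            self.conv3d = nn.Sequential(
                nn.Conv3d(1, mid_channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                # collapse the depth dimension into a learned projection
                nn.Conv3d(mid_channels, out_channels,
                          kernel_size=(depth, 1, 1)),
            )

        def forward(self, x):
            # x: (B, 1, D, H, W) CT sub-volume with D == depth
            y = self.conv3d(x)      # (B, out_channels, 1, H, W)
            return y.squeeze(2)     # (B, out_channels, H, W)

    # Usage: feed a 16-slice sub-volume to a 2D network as 3 channels.
    # dam = DimensionAdaptationModule(depth=16)
    # x2d = dam(torch.randn(1, 1, 16, 256, 256))  # -> (1, 3, 256, 256)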

Keywords

Pancreas segmentation · Bias-dice · Dimension adaptation module · Fusion decision module


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, People’s Republic of China
  2. School of Computer Science, The University of Sydney, Sydney, Australia
  3. Deepwise AI Lab, Beijing, People’s Republic of China
