Hyper-Pairing Network for Multi-phase Pancreatic Ductal Adenocarcinoma Segmentation

  • Yuyin Zhou (corresponding author)
  • Yingwei Li
  • Zhishuai Zhang
  • Yan Wang
  • Angtian Wang
  • Elliot K. Fishman
  • Alan L. Yuille
  • Seyoun Park
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11765)

Abstract

Pancreatic ductal adenocarcinoma (PDAC) is one of the most lethal cancers, with an overall five-year survival rate of 8%. Because the texture changes caused by PDAC are subtle, dual-phase pancreatic imaging is recommended for better diagnosis of pancreatic disease. In this study, we aim to enhance automatic PDAC segmentation by integrating multi-phase information (i.e., the arterial and venous phases). To this end, we present the Hyper-Pairing Network (HPN), a 3D fully convolutional neural network that effectively integrates information from different phases. The proposed approach consists of a dual-path network in which the two parallel streams are interconnected with hyper-connections for intensive information exchange. Additionally, a pairing loss encourages commonality between the high-level feature representations of the different phases. Compared to prior art using single-phase data, HPN achieves a significant absolute improvement of 7.73% (from 56.21% to 63.94%) in DSC.
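The two mechanisms named in the abstract can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's implementation: it assumes the hyper-connection is a channel-wise exchange of feature maps between the arterial and venous streams, and that the pairing loss is an L2 penalty on the disagreement between the two phases' high-level features; the paper's exact wiring and loss form may differ.

```python
import numpy as np

def hyper_connect(feat_a, feat_v):
    # Hyper-connection (sketch): each stream receives the other phase's
    # feature maps via channel-wise concatenation, so information is
    # exchanged between the arterial and venous paths.
    return (np.concatenate([feat_a, feat_v], axis=1),
            np.concatenate([feat_v, feat_a], axis=1))

def pairing_loss(feat_a, feat_v):
    # Pairing loss (assumed L2 form): penalizes disagreement between the
    # high-level representations of the two phases.
    return float(np.mean((feat_a - feat_v) ** 2))

# Toy high-level features, shape (batch, channels, depth, height, width).
rng = np.random.default_rng(0)
feat_arterial = rng.standard_normal((1, 8, 4, 4, 4))
feat_venous = rng.standard_normal((1, 8, 4, 4, 4))

a_out, v_out = hyper_connect(feat_arterial, feat_venous)
print(a_out.shape)  # (1, 16, 4, 4, 4): each stream now sees both phases
loss = pairing_loss(feat_arterial, feat_venous)
```

In training, such a pairing term would be added to the per-phase segmentation losses, pulling the two streams' high-level representations toward a common embedding while the hyper-connections let low- and mid-level evidence flow between phases.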

Acknowledgements

This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Yuyin Zhou (1) (corresponding author)
  • Yingwei Li (1)
  • Zhishuai Zhang (1)
  • Yan Wang (1)
  • Angtian Wang (2)
  • Elliot K. Fishman (3)
  • Alan L. Yuille (1)
  • Seyoun Park (3)
  1. The Johns Hopkins University, Baltimore, USA
  2. Huazhong University of Science and Technology, Wuhan, China
  3. The Johns Hopkins University School of Medicine, Baltimore, USA