CTUNet: automatic pancreas segmentation using a channel-wise transformer and 3D U-Net

Original article · The Visual Computer

Abstract

Diabetes, pancreatic cancer, and pancreatitis are all diseases of the pancreas that seriously threaten people’s lives. The pancreas has a distinctive anatomical structure: its size, shape, and position vary considerably, and it closely resembles the surrounding deep abdominal tissues, so accurate segmentation remains one of the most challenging tasks in medical image segmentation. We propose CTUNet, a new network that combines a Transformer with 3D U-Net to achieve high-precision automatic segmentation of the pancreas. We deploy the Transformer on the skip connections to coordinate global explicit features and guide network learning. In view of the pancreas’s variability in shape and position, we design a Pancreas Attention module and add it to each encoder stage to further strengthen the extraction of contextual information and the learning of discriminative features. In addition, in the decoder we use a novel Feature Concatenation module with an attention mechanism to further promote the fusion of features from different levels and to alleviate the loss of feature information caused by down-sampling. We train and test our model on the NIH dataset and evaluate it with the Dice Similarity Coefficient, Jaccard Index, Precision, and Recall. Experimental results show that our proposed model outperforms most existing pancreas segmentation methods.
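The abstract names four evaluation metrics but does not state their formulas. As a reference, the following is a minimal NumPy sketch (not the authors' code) of how the Dice Similarity Coefficient, Jaccard Index, Precision, and Recall are typically computed on binary segmentation masks; the function and variable names are illustrative.

```python
import numpy as np

def segmentation_metrics(pred, target, eps=1e-8):
    """Dice, Jaccard, Precision, and Recall for binary masks.

    pred, target: 0/1 or boolean arrays of the same shape,
    e.g. a predicted 3D pancreas mask and its ground truth.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)

    tp = np.logical_and(pred, target).sum()   # true positive voxels
    fp = np.logical_and(pred, ~target).sum()  # false positive voxels
    fn = np.logical_and(~pred, target).sum()  # false negative voxels

    dice = 2.0 * tp / (2.0 * tp + fp + fn + eps)  # Dice Similarity Coefficient
    jaccard = tp / (tp + fp + fn + eps)           # Jaccard Index (IoU)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, jaccard, precision, recall


# Example on random masks; real use would pass the network's thresholded
# output and the corresponding NIH ground-truth annotation for one CT volume.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.random((64, 64, 64)) > 0.5
    target = rng.random((64, 64, 64)) > 0.5
    print(segmentation_metrics(pred, target))
```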



Author information


Corresponding author

Correspondence to Lifang Chen.

Ethics declarations

Ethical approval

The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence this work, and no professional or other personal interest of any nature or kind in any product, service, or company that could be construed as influencing the position presented in, or the review of, this manuscript. The work does not violate any ethical guidelines, and all pancreas CT scans used in the experiments are taken from the publicly available NIH pancreas dataset (https://wiki.cancerimagingarchive.net/display/Public/Pancreas-CT).

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (rar 8098 KB)

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chen, L., Wan, L. CTUNet: automatic pancreas segmentation using a channel-wise transformer and 3D U-Net. Vis Comput 39, 5229–5243 (2023). https://doi.org/10.1007/s00371-022-02656-2

