Abstract
Automatic segmentation of multiple organs is a challenging task. Most existing approaches rely on purely 2D or 3D networks, which limits the contextual information available for organ segmentation. In recent years, many automatic segmentation methods based on fully supervised deep learning have been proposed; however, annotating large numbers of voxels is expensive and time-consuming for experienced medical practitioners. In this paper, we propose a 2.5D multi-slice semi-supervised method for abdominal organ segmentation. The network exploits information along the z-axis of CT volumes, preserving the useful contextual information in adjacent slices. In addition, we combine Cross-Entropy Loss and Dice Loss as the training objective to improve performance. To leverage unlabeled data, we apply a teacher-student framework with an Exponential Moving Average (EMA) strategy: the student model is trained on labeled data, the teacher model is obtained by smoothing the student model's weights via EMA, and the pseudo-labels the teacher predicts on unlabeled images are then used to train the student, which serves as the final model. On the validation set, the mean DSC over all cases was 0.5684, the mean NSD was 0.5971, and the total running time was 783.14 s.
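A minimal sketch of the two training ingredients named in the abstract, the combined Dice + cross-entropy objective and the EMA smoothing of teacher weights, using NumPy on a binary mask for illustration; the function names, the binary setting, and the decay value are assumptions for this sketch, not the paper's implementation:

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Smooth teacher weights toward student weights (EMA).
    alpha is the decay factor; 0.99 is a common choice, not taken from the paper."""
    return {k: alpha * teacher_w[k] + (1.0 - alpha) * student_w[k] for k in teacher_w}

def dice_ce_loss(probs, targets, eps=1e-6):
    """Combined Dice + cross-entropy loss for a binary foreground mask.
    probs: predicted foreground probabilities in [0, 1]; targets: {0, 1} mask."""
    probs = probs.ravel()
    targets = targets.ravel().astype(float)
    # Soft Dice term: 1 - 2|P∩T| / (|P| + |T|)
    inter = (probs * targets).sum()
    dice = 1.0 - (2.0 * inter + eps) / (probs.sum() + targets.sum() + eps)
    # Binary cross-entropy term, averaged over voxels
    ce = -np.mean(targets * np.log(probs + eps)
                  + (1.0 - targets) * np.log(1.0 - probs + eps))
    return dice + ce
```

In the full pipeline, `ema_update` would be applied to the student's parameter dictionary after each training step, and `dice_ce_loss` would be computed per class for the multi-organ setting.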
Acknowledgements
The authors of this paper declare that the segmentation method implemented for participation in the FLARE 2022 challenge used neither pre-trained models nor any additional datasets beyond those provided by the organizers. The proposed solution is fully automatic, without any manual intervention.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Chen, H. et al. (2022). Multi-organ Segmentation Based on 2.5D Semi-supervised Learning. In: Ma, J., Wang, B. (eds) Fast and Low-Resource Semi-supervised Abdominal Organ Segmentation. FLARE 2022. Lecture Notes in Computer Science, vol 13816. Springer, Cham. https://doi.org/10.1007/978-3-031-23911-3_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-23910-6
Online ISBN: 978-3-031-23911-3
eBook Packages: Computer Science; Computer Science (R0)