Semi-supervised learning towards automated segmentation of PET images with limited annotations: application to lymphoma patients

  • Scientific Paper
  • Published in: Physical and Engineering Sciences in Medicine

Abstract

Manual segmentation poses a time-consuming challenge for disease quantification, therapy evaluation, treatment planning, and outcome prediction. Convolutional neural networks (CNNs) hold promise for accurately identifying tumor locations and boundaries in PET scans. However, a major hurdle is the extensive amount of supervised, annotated data necessary for training. To overcome this limitation, this study explores semi-supervised approaches utilizing unlabeled data, specifically focusing on PET images of diffuse large B-cell lymphoma (DLBCL) and primary mediastinal large B-cell lymphoma (PMBCL) obtained from two centers. We considered 2-[18F]FDG PET images of 292 patients with PMBCL (n = 104) or DLBCL (n = 188) (n = 232 for training and validation, and n = 60 for external testing). We harnessed classical wisdom embedded in traditional segmentation methods, such as the fuzzy c-means (FCM) clustering loss function, to tailor the training strategy for a 3D U-Net model, incorporating both supervised and unsupervised learning approaches. Various supervision levels were explored, including fully supervised methods with labeled FCM and unified focal/Dice loss, unsupervised methods with robust FCM (RFCM) and Mumford-Shah (MS) loss, and semi-supervised methods combining MS with supervised Dice loss (MS + Dice) or RFCM with labeled FCM (RFCM + FCM). The unified loss function yielded higher Dice scores (0.73 ± 0.11; 95% CI 0.67–0.80) than Dice loss (p < 0.01). Among the semi-supervised approaches, RFCM + αFCM (α = 0.3) showed the best performance, with a Dice score of 0.68 ± 0.10 (95% CI 0.45–0.77), outperforming MS + αDice at every supervision level (all α) (p < 0.01). The other semi-supervised approach, MS + αDice (α = 0.2), achieved a Dice score of 0.59 ± 0.09 (95% CI 0.44–0.76), surpassing the other supervision levels (p < 0.01). Given the time-consuming nature of manual delineation and the inconsistencies it may introduce, semi-supervised approaches hold promise for automating medical imaging segmentation workflows.
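
To make the loss weighting concrete, the sketch below shows how an unsupervised term and a supervised term can be combined into a single training objective of the form L = L_unsupervised + α·L_supervised, in the spirit of the MS + αDice variant described above. It is a minimal PyTorch illustration under assumed tensor shapes (batch, class, depth, height, width); the function names, the simplified Mumford-Shah formulation, and the default α are illustrative assumptions rather than the authors' implementation.

```python
import torch

def soft_dice_loss(probs, target, eps=1e-6):
    # Supervised soft Dice loss on foreground probabilities.
    # probs, target: (B, 1, D, H, W) tensors with values in [0, 1].
    inter = (probs * target).sum(dim=(2, 3, 4))
    denom = probs.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

def mumford_shah_loss(probs, image, tv_weight=1e-4):
    # Unsupervised Mumford-Shah-style loss: a piecewise-constant fit of the
    # image under soft class memberships, plus a total-variation term that
    # acts as a boundary-length penalty.
    # probs: (B, C, D, H, W) softmax outputs; image: (B, 1, D, H, W).
    num = (probs * image).sum(dim=(2, 3, 4), keepdim=True)
    den = probs.sum(dim=(2, 3, 4), keepdim=True) + 1e-6
    means = num / den                                   # class-wise mean intensity
    fit = (probs * (image - means) ** 2).mean()
    tv = ((probs[:, :, 1:] - probs[:, :, :-1]).abs().mean()
          + (probs[:, :, :, 1:] - probs[:, :, :, :-1]).abs().mean()
          + (probs[:, :, :, :, 1:] - probs[:, :, :, :, :-1]).abs().mean())
    return fit + tv_weight * tv

def semi_supervised_loss(logits, image, target=None, alpha=0.2):
    # Unsupervised term on every case; supervised term only when a manual
    # delineation is available, weighted by the supervision level alpha.
    probs = torch.softmax(logits, dim=1)                # (B, C, D, H, W)
    loss = mumford_shah_loss(probs, image)
    if target is not None:
        loss = loss + alpha * soft_dice_loss(probs[:, 1:2], target)
    return loss
```

In the same spirit, the RFCM + αFCM variant would swap the Mumford-Shah term for a robust fuzzy c-means clustering loss computed on the image intensities, and the Dice term for an FCM loss computed against the manual labels, with α again setting the supervision level.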

Data availability

The data used for model development (autoPET) is publicly available. To protect study participant privacy, we cannot share the testing data from BC Cancer and Seoul St. Mary’s Hospital.

Funding

This research was supported by the Canadian Institutes of Health Research (CIHR) Project Grant PJT-173231, in part through computational resources and services provided by Microsoft for Health, and by the Swiss National Science Foundation under Grant SNSF 320030_176052.

Author information

Contributions

Fereshteh Yousefirizi and Arman Rahmim contributed to the study conception, design, and preparation of the manuscript. They also revised the manuscript based on input from co-authors. Isaac Shiri and Habib Zaidi provided support in the evaluation design, implementation, and text editing. Joo Hyun O, Laurie H. Sehn, Kerry J. Savage, and Carlos F. Uribe contributed to the provision of patient data, clinical study design, and overall support. Joo Hyun O, Ingrid Bloise, Patrick Martineau, Don Wilson, Francois Benard, and Carlos Uribe assisted with manual delineations and text revisions.

Corresponding author

Correspondence to Fereshteh Yousefirizi.

Ethics declarations

Competing interests

Carlos Uribe and Arman Rahmim are co-founders of Ascinta Technologies Inc.

Ethical approval

This retrospective study was conducted under the following ethics approvals: H19-01611 for the PMBCL study and H19-01866 for the DLBCL study at BC Cancer (UBC BC Cancer Research Ethics Board approval H19-001611), and KC11EISI0293 for the DLBCL study at Seoul St. Mary’s Hospital. The study was performed in accordance with the approved guidelines of Seoul St. Mary’s Hospital’s institutional review board, approved on 10 November 2020 (KC20RASI0867). Since these were retrospective studies, the ethics committees waived the requirement for obtaining informed consent from patients.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 726 kb)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Yousefirizi, F., Shiri, I., O, J.H. et al. Semi-supervised learning towards automated segmentation of PET images with limited annotations: application to lymphoma patients. Phys Eng Sci Med (2024). https://doi.org/10.1007/s13246-024-01408-x
