
Inter-fractional portability of deep learning models for lung target tracking on cine imaging acquired in MRI-guided radiotherapy

  • AI Notes
  • Published in Physical and Engineering Sciences in Medicine

Abstract

MRI-guided radiotherapy systems enable beam gating by tracking the target on planar, two-dimensional cine images acquired during treatment. This study aims to evaluate how deep-learning (DL) models for target tracking that are trained on data from one fraction can be translated to subsequent fractions. Cine images were acquired for six patients treated on an MRI-guided radiotherapy platform (MRIdian, Viewray Inc.) with an onboard 0.35 T MRI scanner. Three DL models (U-net, attention U-net and nested U-net) for target tracking were trained using two training strategies: (1) uniform training, using data obtained only from the first fraction with testing performed on data from subsequent fractions, and (2) adaptive training, in which training was updated each fraction by adding 20 samples from the current fraction with testing performed on the remaining images from that fraction. Tracking performance was compared between algorithms, models and training strategies by evaluating the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD95) between automatically generated and manually specified contours. The mean DSC across all six patients comparing manual contours and contours generated by the onboard algorithm (OBT) was 0.68 ± 0.16. Compared to OBT, the DSC values improved 17.0–19.3% for the three DL models with uniform training, and 24.7–25.7% for the models based on adaptive training. The HD95 values improved 50.6–54.5% for the models based on adaptive training. DL-based techniques achieved better tracking performance than the onboard, registration-based tracking approach. DL-based tracking performance improved when implementing an adaptive strategy that augments training data fraction-by-fraction.
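The two metrics used above are standard for contour comparison. As a rough illustration only (not the authors' implementation), the DSC and HD95 between a predicted and a manual 2-D binary mask can be sketched with NumPy and SciPy; the function names and the assumption of isotropic pixel spacing are illustrative:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between mask surfaces."""
    def surface(m):
        # surface = mask voxels removed by a one-step erosion
        return m & ~binary_erosion(m)
    sa, sb = surface(a), surface(b)
    # distance from each surface voxel of one mask to the nearest
    # surface voxel of the other mask (in physical units via `sampling`)
    d_ab = distance_transform_edt(~sb, sampling=spacing)[sa]
    d_ba = distance_transform_edt(~sa, sampling=spacing)[sb]
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```

A perfect overlap yields DSC = 1 and HD95 = 0; a mask shifted by two pixels against a 10 × 10 square yields DSC = 0.8 and an HD95 of at most two pixel widths.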



Author information


Correspondence to Thomas R. Mazur or Bin Cai.

Ethics declarations

Competing interests

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Peng, J., Stowe, H.B., Samson, P.P. et al. Inter-fractional portability of deep learning models for lung target tracking on cine imaging acquired in MRI-guided radiotherapy. Phys Eng Sci Med (2024). https://doi.org/10.1007/s13246-023-01371-z

