Abstract
MRI-guided radiotherapy systems enable beam gating by tracking the target on planar, two-dimensional cine images acquired during treatment. This study aims to evaluate how deep-learning (DL) models for target tracking that are trained on data from one fraction can be ported to subsequent fractions. Cine images were acquired for six patients treated on an MRI-guided radiotherapy platform (MRIdian, ViewRay Inc.) with an onboard 0.35 T MRI scanner. Three DL models (U-net, attention U-net and nested U-net) for target tracking were trained using two training strategies: (1) uniform training, in which data were obtained only from the first fraction and testing was performed on data from subsequent fractions, and (2) adaptive training, in which the training set was updated each fraction by adding 20 samples from the current fraction and testing was performed on the remaining images from that fraction. Tracking performance was compared between algorithms, models and training strategies by evaluating the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD95) between automatically generated and manually specified contours. The mean DSC across all six patients between manual contours and contours generated by the onboard algorithm (OBT) was 0.68 ± 0.16. Compared to OBT, the DSC values improved by 17.0–19.3% for the three DL models with uniform training, and by 24.7–25.7% for the models based on adaptive training. The HD95 values improved by 50.6–54.5% for the models based on adaptive training. DL-based techniques achieved better tracking performance than the onboard, registration-based tracking approach. DL-based tracking performance improved further when implementing an adaptive strategy that augments the training data fraction by fraction.
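As a concrete illustration of the two evaluation metrics named above, the sketch below computes DSC and HD95 between a predicted and a manually drawn binary mask on a single 2D cine frame. This is not the authors' implementation; the function names, the pixel-spacing argument and the HD95 convention used here (95th percentile of the pooled symmetric boundary distances) are assumptions made for illustration.

```python
# Minimal sketch of DSC and HD95 between two binary masks on one 2D cine frame.
# Assumes non-empty masks; spacing converts pixel distances to physical units.
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import cdist


def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0


def _boundary_points(mask: np.ndarray) -> np.ndarray:
    """Pixel coordinates of the mask boundary (mask minus its erosion)."""
    mask = mask.astype(bool)
    boundary = mask & ~ndimage.binary_erosion(mask)
    return np.argwhere(boundary)


def hd95(pred: np.ndarray, ref: np.ndarray, spacing: float = 1.0) -> float:
    """95% Hausdorff distance, in the same units as `spacing`."""
    p, r = _boundary_points(pred), _boundary_points(ref)
    d = cdist(p, r) * spacing                   # pairwise boundary distances
    directed = np.concatenate([d.min(axis=1),   # predicted -> manual
                               d.min(axis=0)])  # manual -> predicted
    return float(np.percentile(directed, 95))
```

In practice the predicted mask would come from thresholding a DL model's output on each cine frame and the manual mask from the reference contour, with the spacing taken from the cine image geometry.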
Ethics declarations
Competing interests
The authors have no relevant financial or non-financial interests to disclose.
Cite this article
Peng, J., Stowe, H.B., Samson, P.P. et al. Inter-fractional portability of deep learning models for lung target tracking on cine imaging acquired in MRI-guided radiotherapy. Phys Eng Sci Med (2024). https://doi.org/10.1007/s13246-023-01371-z