Background Subtraction Angiography with Deep Learning Using Multi-frame Spatiotemporal Angiographic Input

Journal of Imaging Informatics in Medicine

Abstract

Catheter Digital Subtraction Angiography (DSA) is markedly degraded by any voluntary, respiratory, or cardiac motion that occurs during image acquisition. Prior efforts to improve DSA images with machine learning have focused on extracting vessels from individual, isolated 2D angiographic frames. In this work, we introduce improved 2D + t deep learning models that leverage the rich temporal information in angiographic time series. A total of 516 cerebral angiograms, comprising 8784 individual series, were collected. We used feature-based computer vision algorithms to separate the database into "motionless" and "motion-degraded" subsets. Motion measured from the "motion-degraded" subset was then used to create a realistic, but synthetic, motion-augmented dataset suitable for training 2D U-Net, 3D U-Net, SegResNet, and UNETR models. Quantitative results on a hold-out test set demonstrate that the 3D U-Net outperforms competing 2D U-Net architectures, with substantially reduced motion artifacts compared to DSA. Relative to the single-frame 2D U-Net, the 3D U-Net utilizing 16 input frames achieves a lower RMSE (35.77 ± 15.02 vs 23.14 ± 9.56, p < 0.0001; mean ± std dev) and a higher Multi-Scale SSIM (0.86 ± 0.08 vs 0.93 ± 0.05, p < 0.0001). The 3D U-Net also compares favorably with alternative convolutional and transformer-based architectures (3D U-Net RMSE 23.20 ± 7.55 vs SegResNet 23.99 ± 7.81, p < 0.0001, and UNETR 25.42 ± 7.79, p < 0.0001; mean ± std dev). These results demonstrate that multi-frame temporal information can boost the performance of motion-resistant Background Subtraction deep learning algorithms, and we present a neuroangiography domain-specific synthetic affine motion augmentation pipeline that can generate suitable datasets for supervised training of 3D (2D + t) architectures.
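The abstract describes a synthetic affine motion augmentation pipeline without specifying its details. As an illustration only, the sketch below shows one way such an augmentation could work: each frame of a "motionless" series is warped by a small, independently sampled rigid (rotation + translation) transform, producing a synthetic motion-degraded series paired with its motion-free target. All function names and parameter ranges (`max_shift`, `max_angle`) are assumptions, not the authors' implementation.

```python
import numpy as np

def random_affine(max_shift=4.0, max_angle=2.0, rng=None):
    """Sample a small rigid (rotation + translation) transform as a 2x3
    matrix, mimicking inter-frame patient motion. Ranges are illustrative."""
    if rng is None:
        rng = np.random.default_rng()
    theta = np.deg2rad(rng.uniform(-max_angle, max_angle))
    tx, ty = rng.uniform(-max_shift, max_shift, size=2)
    return np.array([[np.cos(theta), -np.sin(theta), tx],
                     [np.sin(theta),  np.cos(theta), ty]])

def warp_frame(frame, A):
    """Nearest-neighbour affine warp of a 2D frame via inverse mapping:
    each output pixel samples the source at R^-1 (p - t)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    R, t = A[:, :2], A[:, 2]
    Rinv = np.linalg.inv(R)
    src = np.einsum('ij,jhw->ihw', Rinv,
                    np.stack([xs - t[0], ys - t[1]]).astype(float))
    sx = np.clip(np.rint(src[0]), 0, w - 1).astype(int)
    sy = np.clip(np.rint(src[1]), 0, h - 1).astype(int)
    return frame[sy, sx]

def augment_series(frames, rng=None):
    """Apply an independent small affine to every frame of a motionless
    series, yielding a synthetic motion-degraded training input."""
    if rng is None:
        rng = np.random.default_rng(0)
    return np.stack([warp_frame(f, random_affine(rng=rng)) for f in frames])
```

A training pair would then consist of the warped series as network input and the original motionless subtraction as the supervision target.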

Data Availability

Due to the risk of an inadvertent leak of Protected Health Information, our Institutional Review Board has not allowed us to make the raw angiographic data publicly available.

Abbreviations

RMSE: Root Mean Squared Error
SSIM: Structural Similarity Index Measure
MS-SSIM: Multi-Scale Structural Similarity Index Measure
ORB: Oriented FAST and Rotated BRIEF
DSA: Digital Subtraction Angiography
BSA: Background Subtraction Angiography
GAN: Generative Adversarial Network
NIfTI: Neuroimaging Informatics Technology Initiative
ReLU: Rectified Linear Unit

Funding

We are grateful for funding and support from the American Heart Association Career Development Award 933248, from the NVIDIA Academic Hardware Grant and Applied Research Accelerator Program, and from the NIH National Heart, Lung, and Blood Institute under Award Number 1R41HL164298.

Author information

Contributions

All authors contributed to study design, manuscript preparation, and editing. Angiographic data collection and deidentification were performed by DRC. Software development and data analysis were performed by DRC and LC.

Corresponding author

Correspondence to Donald R. Cantrell.

Ethics declarations

Ethics Approval

This work was performed on retrospective data obtained and managed in compliance with the Northwestern University Institutional Review Board (STU00212923).

Consent to Participate

Informed consent was waived. Consent to participate was not applicable based on Institutional Review Board determinations.

Consent for Publication

Consent for publication was not applicable based on Institutional Review Board determinations.

Competing Interests

Portions of the work described in this article have been included in a related patent filed by Northwestern University (PCT/US2021/037936), with DR Cantrell, SA Ansari, and L Cho listed as co-inventors. DR Cantrell, SA Ansari, and L Cho are founders and have shares in Cleavoya, LLC, which was awarded a Phase 1 Small Business Technology Transfer Grant from the NIH (1R41HL164298) to further develop portions of the work described in this article.

Disclaimer

The content of this report is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file1 (AVI 31700 KB)

Supplementary file2 (MP4 14141 KB)

Supplementary file3 (AVI 35390 KB)

Supplementary file4 (AVI 32493 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Cantrell, D.R., Cho, L., Zhou, C. et al. Background Subtraction Angiography with Deep Learning Using Multi-frame Spatiotemporal Angiographic Input. J Imaging Inform Med 37, 134–144 (2024). https://doi.org/10.1007/s10278-023-00921-x
