
Workflow Phase Detection in Fluoroscopic Images Using Convolutional Neural Networks

  • Nikolaus Arbogast
  • Tanja Kurzendorfer
  • Katharina Breininger
  • Peter Mountney
  • Daniel Toth
  • Srinivas A. Narayan
  • Andreas Maier
Conference paper
Part of the Informatik aktuell book series (INFORMAT)

Abstract

In image-guided interventions, the radiation dose to the patient and personnel can be reduced by positioning the blades of a collimator to block unnecessary X-rays and restrict the irradiated area to a region of interest. Detecting the current phase of the operation workflow makes it possible to define phase-specific objects of interest and thereby enable automatic collimation; it can also support clinical time management and the operating room of the future. In this work, we propose a learning-based approach for the automatic classification of three surgical workflow phases. Our data consist of 24 congenital cardiac interventions with a total of 2985 fluoroscopic 2D X-ray images. We compare two different convolutional neural network architectures and investigate their per-phase performance. Using a residual network, a class-wise averaged accuracy of 86.14% was achieved. The predictions of the trained models can then be used for context-specific collimation.
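
To make the described setup concrete, below is a minimal sketch (not the authors' implementation) of a three-phase classifier built on a residual network, together with the class-wise averaged accuracy metric. The use of a torchvision ResNet-18, the single-channel adaptation, the 224x224 input size, and all function names are illustrative assumptions; the abstract does not specify these details.

# Hedged sketch: a 3-class workflow-phase classifier on a ResNet backbone.
# Architecture details (ResNet-18, input size) are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

NUM_PHASES = 3  # three surgical workflow phases, as in the paper

def build_phase_classifier() -> nn.Module:
    model = models.resnet18(weights=None)  # residual network, trained from scratch
    # Fluoroscopic frames are grayscale: swap the RGB stem for a 1-channel conv.
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    # Replace the 1000-way ImageNet head with a 3-way phase head.
    model.fc = nn.Linear(model.fc.in_features, NUM_PHASES)
    return model

def class_wise_averaged_accuracy(preds: torch.Tensor, labels: torch.Tensor) -> float:
    # Mean of per-class recalls (balanced accuracy): a plausible reading of
    # the "class-wise averaged accuracy" reported as 86.14% in the paper.
    per_class = []
    for c in range(NUM_PHASES):
        mask = labels == c
        if mask.any():
            per_class.append((preds[mask] == c).float().mean().item())
    return sum(per_class) / len(per_class)

if __name__ == "__main__":
    model = build_phase_classifier().eval()
    frames = torch.randn(4, 1, 224, 224)  # dummy batch of 2D X-ray frames
    with torch.no_grad():
        preds = model(frames).argmax(dim=1)
    print(class_wise_averaged_accuracy(preds, torch.tensor([0, 1, 2, 0])))

Averaging accuracy per class rather than over all frames prevents a frequent phase from dominating the score, which matters when the three phases are unevenly represented across the 2985 images.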



Copyright information

© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019

Authors and Affiliations

  • Nikolaus Arbogast (1, 2)
  • Tanja Kurzendorfer (1, 2)
  • Katharina Breininger (1)
  • Peter Mountney (3)
  • Daniel Toth (4, 5)
  • Srinivas A. Narayan (4)
  • Andreas Maier (1)
  1. Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  2. Siemens Healthcare GmbH, Forchheim, Germany
  3. Siemens Healthineers, Medical Imaging Technologies, Princeton, USA
  4. School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
  5. Siemens Healthineers, Frimley, UK
