Learning to Avoid Poor Images: Towards Task-aware C-arm Cone-beam CT Trajectories

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)


Metal artifacts in computed tomography (CT) arise from a mismatch between the physics of image formation and the idealized assumptions made during tomographic reconstruction. These artifacts are particularly strong around metal implants, inhibiting widespread adoption of 3D cone-beam CT (CBCT) despite a clear opportunity for intra-operative verification of implant positioning, e.g., in spinal fusion surgery. On synthetic and real data, we demonstrate that much of the artifact can be avoided by acquiring better data for reconstruction in a task-aware and patient-specific manner, and we describe the first step towards the envisioned task-aware CBCT protocol. The traditional short-scan CBCT trajectory is planar, with little room for scene-specific adjustment. We extend this trajectory by autonomously adjusting out-of-plane angulation. This enables C-arm source trajectories that are scene-specific in that they avoid acquiring "poor images", characterized by beam hardening, photon starvation, and noise. The recommendation of the ideal out-of-plane angulation is performed on-the-fly using a deep convolutional neural network that regresses a detectability rank derived from imaging physics.
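The abstract's core mechanism, that is, scoring candidate out-of-plane angulations with a CNN that regresses a detectability rank and steering the C-arm toward the best one, can be sketched roughly as follows. This is a minimal illustration, not the paper's architecture: the network `DetectabilityNet`, the renderer interface `render_drr`, and the candidate-angle loop are all hypothetical stand-ins assumed for the example.

```python
import torch
import torch.nn as nn


class DetectabilityNet(nn.Module):
    """Tiny CNN that regresses a scalar detectability score for one
    projection image (a hypothetical stand-in for the paper's regressor)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> (N, 16, 1, 1)
        )
        self.head = nn.Linear(16, 1)   # scalar detectability score

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f).squeeze(-1)


def best_out_of_plane_angle(model, render_drr, candidate_angles):
    """Score a simulated projection at each candidate out-of-plane angulation
    and return the angle with the highest predicted detectability.

    `render_drr` is assumed to map an angle (float, degrees) to a
    (1, H, W) projection tensor, e.g. a DRR of the current scene.
    """
    model.eval()
    with torch.no_grad():
        scores = torch.stack(
            [model(render_drr(a).unsqueeze(0)) for a in candidate_angles]
        ).squeeze(-1)
    return candidate_angles[int(scores.argmax())], scores


if __name__ == "__main__":
    torch.manual_seed(0)
    model = DetectabilityNet()
    # Placeholder renderer: random images standing in for DRRs.
    render = lambda angle: torch.randn(1, 64, 64)
    best, scores = best_out_of_plane_angle(model, render, [-10.0, 0.0, 10.0])
    print(best, scores.shape)
```

In the envisioned protocol, such a scorer would be evaluated on-the-fly at each acquisition step so the trajectory bends away from views dominated by beam hardening, photon starvation, and noise.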


Keywords: Robotic imaging · Deep reinforcement learning



We gratefully acknowledge support of the NVIDIA Corporation for donating GPUs, and Gerhard Kleinzig and Sebastian Vogt from SIEMENS for making an ARCADIS Orbic 3D available. JNZ was supported by a DAAD FITweltweit fellowship.

Supplementary material

Supplementary material 1 (mp4 7730 KB)



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
  2. Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  3. Computer Vision Laboratory, Eidgenössische Technische Hochschule Zürich, Zürich, Switzerland
