X-ray-transform Invariant Anatomical Landmark Detection for Pelvic Trauma Surgery

  • Bastian Bier
  • Mathias Unberath
  • Jan-Nico Zaech
  • Javad Fotouhi
  • Mehran Armand
  • Greg Osgood
  • Nassir Navab
  • Andreas Maier
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11073)


X-ray image guidance enables percutaneous alternatives to complex procedures. Unfortunately, the indirect view onto the anatomy, combined with projective simplification, substantially increases the task load for the surgeon. Additional 3D information, such as knowledge of anatomical landmarks, can benefit surgical decision making in complicated scenarios. Automatic detection of these landmarks in transmission imaging is challenging, since the image-domain features characteristic of a certain landmark change substantially with the viewing direction. Consequently, and to the best of our knowledge, this problem has not yet been addressed. In this work, we present a method to automatically detect anatomical landmarks in X-ray images independent of the viewing direction. To this end, a sequential prediction framework based on convolutional layers is trained on synthetically generated data of the pelvic anatomy to predict 23 landmarks in single X-ray images. View independence is contingent on the training conditions and is here achieved on a spherical segment covering 120° × 90° in LAO/RAO and CRAN/CAUD, respectively, centered around AP. On synthetic data, the proposed approach achieves a mean prediction error of 5.6 ± 4.5 mm. We demonstrate that the proposed network is immediately applicable to clinically acquired data of the pelvis. In particular, we show that our intra-operative landmark detection, together with pre-operative CT, enables X-ray pose estimation which, ultimately, benefits the initialization of image-based 2D/3D registration.
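The pose-estimation step described in the abstract amounts to solving a 2D/3D correspondence problem: given the 23 detected landmark positions in the X-ray image and their known 3D locations in the pre-operative CT, the C-arm projection geometry can be recovered. The paper does not spell out the solver, so the following is only an illustrative sketch of one standard approach, the Direct Linear Transform (DLT), with all data synthetically generated; the function names are our own, not from the paper.

```python
import numpy as np

def estimate_projection_dlt(pts3d, pts2d):
    """Estimate a 3x4 projection matrix P from >= 6 noise-free 2D/3D
    correspondences via the Direct Linear Transform: stack two linear
    constraints per correspondence and take the null space of A by SVD."""
    assert len(pts3d) == len(pts2d) >= 6
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = np.array([X, Y, Z, 1.0])
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    # Right singular vector of the smallest singular value minimizes ||A p||.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def project(P, pts3d):
    """Apply a 3x4 projection matrix to 3D points (perspective division)."""
    Xh = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical setup: 23 landmarks (as in the paper) and an arbitrary
    # pinhole-style projection standing in for the C-arm geometry.
    pts3d = rng.uniform(-100.0, 100.0, (23, 3))
    P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                       [0.0, 800.0, 240.0, 5.0],
                       [0.0, 0.0, 1.0, 500.0]])
    pts2d = project(P_true, pts3d)
    P_est = estimate_projection_dlt(pts3d, pts2d)
    err = np.abs(project(P_est, pts3d) - pts2d).max()
    print(f"max reprojection error: {err:.2e}")
```

In practice, landmark predictions are noisy, so such a linear estimate would typically only initialize a subsequent intensity-based 2D/3D registration, which is exactly the role the abstract assigns to the detected landmarks.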



The authors gratefully acknowledge funding support from NIH 5R01AR065248-03.



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Bastian Bier (1, 2)
  • Mathias Unberath (2)
  • Jan-Nico Zaech (1, 2)
  • Javad Fotouhi (2)
  • Mehran Armand (3)
  • Greg Osgood (4)
  • Nassir Navab (2)
  • Andreas Maier (1)
  1. Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  2. Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
  3. Applied Physics Laboratory, Johns Hopkins University, Baltimore, USA
  4. Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, USA
