Synthesis of fracture radiographs with deep neural networks

Abstract

Purpose

We describe a machine learning system for converting diagrams of fractures into realistic X-ray images. We further present a method for iterative, human-guided refinement of the generated images and show that the resulting synthetic images can be used during training to increase the accuracy of deep classifiers on clinically meaningful subsets of fracture X-rays.

Methods

A neural network was trained to reconstruct images from programmatically created line drawings of those images. The generated images were then further refined with an optimization-based technique. Ten physicians were recruited to assess the realism of the synthetic radiographs: each was presented with mixed sets of real and synthetic images and asked to identify which were synthetic. Two classifiers were trained to detect humeral shaft fractures, one on true fracture images only and one on both true and synthetic images.
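The abstract does not describe how the line drawings were produced programmatically. As a rough, hypothetical sketch of such a step, a gradient-magnitude edge map can stand in for the line-drawing extraction; the function name, the threshold, and the edge-detection method below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def line_drawing(image, threshold=0.2):
    """Approximate a line drawing by thresholding gradient magnitude.

    `image` is a 2-D float array in [0, 1]; returns a binary edge map.
    (Illustrative stand-in only; the paper's exact procedure for
    generating line drawings may differ.)
    """
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude /= magnitude.max()
    return (magnitude > threshold).astype(np.uint8)

# A toy stand-in for a radiograph: a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = line_drawing(img)  # 1s along the square's border, 0s elsewhere
```

A paired dataset of (line drawing, radiograph) tuples built this way is what an image-to-image translation network of the kind cited in the paper would train on.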

Results

Physicians identified images as synthetic or real with 49.63% accuracy, close to the 50% expected from random guessing. A classifier trained only on real images detected fractures with 67.21% sensitivity on radiographs without fracture fixation hardware; a classifier trained on both real and synthetic images reached 75.54% sensitivity.
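Whether 49.63% accuracy is statistically distinguishable from chance can be checked with an exact binomial test. The sketch below uses only the standard library; the trial counts are hypothetical, since the abstract does not report how many physician judgements were collected:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(q for q in probs if q <= observed + 1e-12)

# Hypothetical counts: 270 judgements, 134 correct (~49.6%).
# These numbers are illustrative only, chosen to match the reported rate.
p_value = binom_two_sided_p(134, 270)
```

A p-value near 1 here would indicate the physicians' accuracy is consistent with random guessing, which is the interpretation the abstract draws.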

Conclusion

Our method generates synthetic X-rays that physicians could not reliably distinguish from real ones. We also show that synthetic images generated with this method can be used during training to increase the accuracy of deep classifiers on clinically meaningful subsets of fracture X-rays.

Acknowledgements

The research reported in this publication was supported by the National Institutes of Health under Grant Number T35HL007649 (National Heart, Lung, and Blood Institute). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Author information

Corresponding author

Correspondence to Nicholas Chedid.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Chedid, N., Sadda, P., Gonchigar, A. et al. Synthesis of fracture radiographs with deep neural networks. Health Inf Sci Syst 8, 21 (2020). https://doi.org/10.1007/s13755-020-00111-x

Keywords

  • Deep learning
  • Image synthesis
  • X-ray