TextRay: Mining Clinical Reports to Gain a Broad Understanding of Chest X-Rays

  • Jonathan Laserson
  • Christine Dan Lantsman
  • Michal Cohen-Sfady
  • Itamar Tamir
  • Eli Goz
  • Chen Brestel
  • Shir Bar
  • Maya Atar
  • Eldad Elnekave
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11071)

Abstract

The chest X-ray (CXR) is by far the most commonly performed radiological examination for screening and diagnosis of many cardiac and pulmonary diseases. There is an immense worldwide shortage of physicians capable of providing rapid and accurate interpretation of this study. A radiologist-driven analysis of over two million CXR reports generated an ontology including the 40 most prevalent pathologies on CXR. By manually tagging a relatively small set of sentences, we were able to construct a training set of 959k studies. A deep learning model was trained to predict the findings given the patient's frontal and lateral scans. For 12 of the findings we compare the model's performance against a team of radiologists and show that in most cases the radiologists agree, on average, more with the algorithm than with each other.
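The weak-labeling step described in the abstract (manually tagged sentences propagated into study-level training labels) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the finding names and the any-positive-sentence aggregation rule are assumptions for the example, and the real ontology contains 40 findings rather than three.

```python
# Hypothetical sketch: aggregate per-sentence finding tags from one report
# into a single binary label vector for the study.
# ONTOLOGY is illustrative; the paper's ontology covers 40 findings.
ONTOLOGY = ["cardiomegaly", "pleural_effusion", "pneumothorax"]

def labels_from_sentences(tagged_sentences):
    """tagged_sentences: list of (finding, is_positive) pairs for one report.

    A finding is labeled 1 if any sentence asserts it positively;
    negated mentions and unmentioned findings both map to 0.
    """
    labels = {finding: 0 for finding in ONTOLOGY}
    for finding, is_positive in tagged_sentences:
        if finding in labels and is_positive:
            labels[finding] = 1
    return [labels[finding] for finding in ONTOLOGY]

# One report: cardiomegaly asserted, pleural effusion explicitly negated.
report = [("cardiomegaly", True), ("pleural_effusion", False)]
print(labels_from_sentences(report))  # -> [1, 0, 0]
```

A vector of this shape per study is what a multi-label classifier (one sigmoid output per finding) would be trained against.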

Keywords

Radiology · Chest X-ray · Deep learning

Supplementary material

Supplementary material 1: 473975_1_En_62_MOESM1_ESM.pdf (PDF, 5.6 MB)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Jonathan Laserson (1)
  • Christine Dan Lantsman (2)
  • Michal Cohen-Sfady (1)
  • Itamar Tamir (3)
  • Eli Goz (1)
  • Chen Brestel (1)
  • Shir Bar (4)
  • Maya Atar (5)
  • Eldad Elnekave (1)

  1. Zebra Medical Vision Ltd, Shefayim, Israel
  2. Sheba Medical Center and Tel Aviv University, Ramat Gan, Israel
  3. Rabin Medical Center, Petah Tikva, Israel
  4. Technion, Israel Institute of Technology, Haifa, Israel
  5. Ben Gurion University, Beersheba, Israel