
Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs

  • Tae Kyung Kim
  • Paul H. Yi
  • Jinchi Wei
  • Ji Won Shin
  • Gregory Hager
  • Ferdinand K. Hui
  • Haris I. Sair
  • Cheng Ting Lin (corresponding author)

Abstract

Ensuring correct radiograph view labeling is important for machine learning algorithm development and for quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available database of CXRs performed in adult (106,179 (95%)) and pediatric (5941 (5%)) patients, consisting of 44,810 (40%) AP and 67,310 (60%) PA views. These CXRs were used to train, validate, and test a ResNet-18 DCNN for classification of radiographs into AP or PA views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate the DCNNs' performance on the test dataset. The DCNNs trained on the entire CXR dataset and on the pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracies of 99.6% and 98%, respectively, for distinguishing between AP and PA CXRs. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset, and 98% for both sensitivity and specificity for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs have high accuracy for classifying the AP/PA orientation of frontal CXRs, with only a slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by the DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.
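The evaluation described above (AUC from an ROC analysis, with sensitivity and specificity read off at an operating threshold chosen by Youden's index) can be sketched in plain Python. This is a minimal illustration of the metrics only, not the study's code; the labels and scores below are synthetic stand-ins for a classifier's per-image PA probabilities.

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive outscores the
    negative, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    greater = sum(1 for p in pos for n in neg if p > n)
    ties = sum(1 for p in pos for n in neg if p == n)
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec_at_youden(labels, scores):
    """Sensitivity and specificity at the threshold maximizing
    Youden's J = sensitivity + specificity - 1."""
    n_pos = labels.count(1)
    n_neg = labels.count(0)
    best_j, best = -1.0, (0.0, 0.0)
    for t in sorted(set(scores)):
        preds = [s >= t for s in scores]
        tp = sum(1 for p, y in zip(preds, labels) if p and y == 1)
        tn = sum(1 for p, y in zip(preds, labels) if not p and y == 0)
        sens, spec = tp / n_pos, tn / n_neg
        if sens + spec - 1 > best_j:
            best_j, best = sens + spec - 1, (sens, spec)
    return best

# Synthetic example: perfectly separated scores give AUC = 1.0
# and sensitivity = specificity = 1.0 at the Youden-optimal threshold.
y = [0, 0, 0, 0, 1, 1, 1, 1]
s = [0.10, 0.20, 0.30, 0.35, 0.70, 0.80, 0.90, 0.95]
print(roc_auc(y, s))             # 1.0
print(sens_spec_at_youden(y, s))
```

In the study, AUC differences between the two DCNNs were compared with the DeLong test (reference 16), which accounts for the correlation between ROC curves computed on the same test cases; the simple pairwise AUC above corresponds to a single curve's area only.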

Keywords

Deep learning · Deep convolutional neural networks · Artificial intelligence · PACS

Notes

Compliance with Ethical Standards

All patient data were de-identified and compliant with the Health Insurance Portability and Accountability Act (HIPAA). This retrospective study was approved by the Institutional Review Board.


Copyright information

© Society for Imaging Informatics in Medicine 2019

Authors and Affiliations

  • Tae Kyung Kim (1, 2)
  • Paul H. Yi (1, 2)
  • Jinchi Wei (2)
  • Ji Won Shin (2)
  • Gregory Hager (2)
  • Ferdinand K. Hui (1, 2)
  • Haris I. Sair (1, 2)
  • Cheng Ting Lin (1, 2, corresponding author)
  1. The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, USA
  2. Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, USA
