Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs


Ensuring correct radiograph view labeling is important for machine learning algorithm development and for quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the publicly available NIH ChestX-ray14 database, comprising studies performed in adult (106,179; 95%) and pediatric (5941; 5%) patients, with 44,810 (40%) AP and 67,310 (60%) PA views. These CXRs were used to train, validate, and test a ResNet-18 DCNN for classification of radiographs into AP and PA views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate each DCNN’s performance on the test dataset. The DCNNs trained on the entire CXR dataset and on the pediatric CXR dataset achieved AUCs of 1.0 and 0.997, with accuracies of 99.6% and 98%, respectively, for distinguishing AP from PA CXRs. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset, and both were 98% for the DCNN trained on the pediatric dataset. The difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs classify the AP/PA orientation of frontal CXRs with high accuracy, with only a slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by a DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.
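The evaluation metrics reported above (AUC, plus sensitivity and specificity at an operating point) can be illustrated with a short, self-contained sketch. This is not the authors’ code: the functions below, and the toy labels and scores, are illustrative assumptions. `roc_auc` computes the rank-based AUC, and `youden_threshold` picks the probability cutoff maximizing Youden’s J = sensitivity + specificity − 1, one common way to choose the operating point from which sensitivity and specificity are reported.

```python
def roc_auc(labels, scores):
    """Rank-based AUC: fraction of (positive, negative) pairs the
    classifier orders correctly (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def youden_threshold(labels, scores):
    """Return (threshold, J) maximizing J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < t)
        tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < t)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy example: 1 = AP, 0 = PA; scores are a model's predicted P(AP).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.3, 0.4, 0.2, 0.1]
print(round(roc_auc(labels, scores), 3))   # → 0.889
```

On a real test set, `labels` would hold the ground-truth AP/PA view labels and `scores` the DCNN’s softmax outputs for the AP class; the AUC summarizes ranking across all thresholds, while the Youden threshold fixes the single operating point at which sensitivity and specificity are quoted.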


Fig. 1
Fig. 2



Author information



Corresponding author

Correspondence to Cheng Ting Lin.

Ethics declarations

All patient data were de-identified and compliant with the Health Insurance Portability and Accountability Act (HIPAA). This retrospective study was approved by the Institutional Review Board.



About this article


Cite this article

Kim, T.K., Yi, P.H., Wei, J. et al. Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs. J Digit Imaging 32, 925–930 (2019).



  • Deep learning
  • Deep convolutional neural networks
  • Artificial intelligence
  • PACS