Abstract
Deep learning has great potential to assist with the detection and triage of critical findings such as pneumoperitoneum on medical images. To be clinically useful, however, this technology must be validated for generalizability across different types of imaging systems. This retrospective study included 1,287 chest X-ray images of patients who underwent initial chest radiography at 13 different hospitals between 2011 and 2019. State-of-the-art deep learning models were trained on a subset of this dataset, and automated classification performance was evaluated on the remainder by measuring the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. All deep learning models performed well at identifying radiographs with pneumoperitoneum, with DenseNet161 achieving the highest AUC of 95.7%, a specificity of 89.9%, and a sensitivity of 91.6%. The DenseNet161 model accurately classified radiographs from different imaging systems (accuracy of 90.8%), even though it was trained on images captured by a specific imaging system at a single institution. This result suggests that the model learns salient features of pneumoperitoneum in chest X-ray images that generalize across imaging systems. If verified in clinical settings, this model could assist practitioners in the diagnosis and management of patients with this urgent condition.
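The evaluation metrics named in the abstract (AUC, sensitivity, specificity) can be sketched in a few lines. The helper below is an illustrative, dependency-free implementation, not the authors' evaluation code; it computes AUC via the Mann-Whitney U interpretation (the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case), with all variable names being assumptions.

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).

    labels/preds are parallel lists of 0 (no finding) / 1 (pneumoperitoneum).
    """
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)


def auc(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly
    (ties count as half), per the Mann-Whitney U statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice, thresholded predictions for sensitivity/specificity would come from binarizing the model's output probabilities at an operating point chosen on the training or validation split.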
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Goyal, M. et al. (2021). Sensitivity and Specificity Evaluation of Deep Learning Models for Detection of Pneumoperitoneum on Chest Radiographs. In: Tucker, A., Henriques Abreu, P., Cardoso, J., Pereira Rodrigues, P., Riaño, D. (eds) Artificial Intelligence in Medicine. AIME 2021. Lecture Notes in Computer Science(), vol 12721. Springer, Cham. https://doi.org/10.1007/978-3-030-77211-6_35
DOI: https://doi.org/10.1007/978-3-030-77211-6_35
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-77210-9
Online ISBN: 978-3-030-77211-6
eBook Packages: Computer Science (R0)