
Abdominal Radiology, Volume 43, Issue 5, pp 1120–1127

Detection of high-grade small bowel obstruction on conventional radiography with convolutional neural networks

  • Phillip M. Cheng
  • Tapas K. Tejura
  • Khoa N. Tran
  • Gilbert Whang

Abstract

The purpose of this pilot study is to determine whether a deep convolutional neural network can be trained with limited image data to detect high-grade small bowel obstruction patterns on supine abdominal radiographs. Grayscale images from 3663 clinical supine abdominal radiographs were classified as obstructive or non-obstructive independently by three abdominal radiologists, and the majority classification was used as ground truth; 74 images were found to be consistent with small bowel obstruction. Images were rescaled and randomized, with 2210 images constituting the training set (39 with small bowel obstruction) and 1453 images constituting the test set (35 with small bowel obstruction). Weight parameters for the final classification layer of the Inception v3 convolutional neural network, previously trained on the 2014 Large Scale Visual Recognition Challenge dataset, were retrained on the training set. After training, the neural network achieved an AUC of 0.84 on the test set (95% CI 0.78–0.89). At the maximum Youden index (sensitivity + specificity − 1), the sensitivity of the system for small bowel obstruction was 83.8%, with a specificity of 68.1%. The results demonstrate that transfer learning with convolutional neural networks, even with limited training data, may be used to train a detector for high-grade small bowel obstruction gas patterns on supine radiographs.
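The following is a minimal sketch of the kind of pipeline the abstract describes, assuming a current TensorFlow/Keras stack rather than the original 2017 tooling; the directory layout, batch size, epoch count, and optimizer are illustrative placeholders, not the study's settings. An ImageNet-pretrained Inception v3 network is frozen, only a new binary classification layer is trained, and the held-out test set is then used for ROC analysis, including the operating point at the maximum Youden index.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score, roc_curve

IMG_SIZE = (299, 299)  # Inception v3 input resolution

# ImageNet-pretrained Inception v3 used as a frozen feature extractor;
# only the new final classification layer is trained.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=IMG_SIZE + (3,))
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # Inception v3 expects inputs in [-1, 1]
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),     # probability of obstruction
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Hypothetical directory layout: train/{obstruction,no_obstruction}/*.png and
# test/{obstruction,no_obstruction}/*.png (not the authors' actual data organization).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "test", image_size=IMG_SIZE, batch_size=32, label_mode="binary", shuffle=False)

model.fit(train_ds, epochs=10)

# ROC analysis on the held-out test set: AUC and the operating point at the
# maximum Youden index J = sensitivity + specificity - 1 = tpr - fpr.
y_true = np.concatenate([y.numpy().ravel() for _, y in test_ds])
y_score = model.predict(test_ds).ravel()
fpr, tpr, _ = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}, "
      f"sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")
```

The Youden-optimal threshold weights sensitivity and specificity equally, as in the operating point reported above; in deployment the threshold could instead be shifted toward higher sensitivity if missed obstructions are considered more costly than false positives.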

Keywords

Small bowel obstruction · Machine learning · Artificial neural networks · Digital image processing · Deep learning

Notes

Compliance with ethical standards

Funding

No funding was received for this study.

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. For this type of study formal consent is not required.

Informed consent

This retrospective study was approved by our institutional research ethics board. Informed consent was not required.


Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  • Phillip M. Cheng (1)
  • Tapas K. Tejura (1)
  • Khoa N. Tran (1)
  • Gilbert Whang (1)

  1. Department of Radiology, USC Norris Comprehensive Cancer Center, Keck School of Medicine of USC, Los Angeles, USA
