
Journal of Digital Imaging

Volume 31, Issue 6, pp 923–928

Laterality Classification of Fundus Images Using Interpretable Deep Neural Network

  • Yeonwoo Jang
  • Jaemin Son
  • Kyu Hyung Park
  • Sang Jun Park
  • Kyu-Hwan Jung (corresponding author)

Abstract

In this paper, we aimed to understand and analyze the outputs of a convolutional neural network model that classifies the laterality of fundus images. Our model not only automates the classification process, reducing clinicians' workload, but also highlights the key regions in each image and quantifies the uncertainty of its decisions with appropriate analytic tools. The model was trained and tested with 25,911 fundus images (43.4% macula-centered images and 28.3% each of superior and nasal retinal fundus images). Activation maps were generated to mark the regions of each image that were important for classification, and uncertainties were quantified to help explain why certain images were misclassified by the proposed model. Our model achieved a mean training accuracy of 99%, which is comparable to the performance of clinicians. Strong activations were detected at the optic disc and the retinal blood vessels around it, which matches the regions that clinicians attend to when determining laterality. Uncertainty analysis revealed that misclassified images tend to be accompanied by high prediction uncertainty and are likely ungradable. We believe that visualizing informative regions and estimating uncertainty, along with presenting the prediction result, would enhance the interpretability of neural network models in a way that allows clinicians to benefit from the automatic classification system.
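The two interpretability tools named in the abstract can be sketched numerically. The following is a minimal NumPy illustration, not the authors' implementation: a class activation map in the style of Zhou et al. (CVPR 2016), computed as a class-weighted sum of the final convolutional feature maps, and a predictive-entropy uncertainty score over repeated stochastic forward passes in the spirit of MC dropout (Gal and Ghahramani, ICML 2016). The function names, tensor shapes, and class indices are illustrative assumptions.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Class activation map (Zhou et al., CVPR 2016).

    feature_maps: (C, H, W) activations from the last conv layer.
    fc_weights:   (num_classes, C) weights of the dense layer that
                  follows global average pooling.
    class_idx:    target class (e.g. 0 = left eye, 1 = right eye).
    Returns an (H, W) heatmap rescaled to [0, 1].
    """
    # Weighted sum over channels: sum_c w_c * F_c(x, y)
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

def predictive_entropy(probs):
    """Entropy of the mean prediction over T stochastic forward
    passes (MC dropout, Gal & Ghahramani, ICML 2016).

    probs: (T, num_classes) softmax outputs, dropout kept active.
    High entropy flags uncertain (potentially ungradable) images.
    """
    p = probs.mean(axis=0)
    return float(-(p * np.log(p + 1e-12)).sum())
```

In practice the feature maps and softmax outputs would come from the trained network, with dropout left enabled at inference time for the uncertainty estimate; an image whose entropy approaches log(2) for a two-class laterality problem would be flagged for manual review.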

Keywords

Laterality classification · Fundus images · Deep neural network · Deep learning · Interpretability

Notes

Funder/Sponsor

This study was supported by the Small Grant for Exploratory Research of the National Research Foundation of Korea (NRF), which is funded by the Ministry of Science, ICT, and Future Planning (NRF-2015R1D1A1A02062194). The funding organizations had no role in the design or conduct of this research.

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.


Copyright information

© Society for Imaging Informatics in Medicine 2018

Authors and Affiliations

  • Yeonwoo Jang (1)
  • Jaemin Son (2)
  • Kyu Hyung Park (3)
  • Sang Jun Park (3)
  • Kyu-Hwan Jung (2, corresponding author)
  1. Department of Statistics, University of Oxford, Oxford, UK
  2. VUNO Inc., Seoul, Republic of Korea
  3. Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, South Korea
