
Implementation of model explainability for a basic brain tumor detection using convolutional neural networks on MRI slices

Abstract

Purpose

While neural networks are gaining popularity in medical research, attempts to make a model's decisions explainable are often made only towards the end of the development process, once high predictive accuracy has been achieved.

Methods

To assess the advantages of implementing features that increase explainability early in the development process, we trained a neural network to differentiate between MRI slices containing a vestibular schwannoma, a glioblastoma, or no tumor.

Results

Making the network's decisions more explainable helped to identify potential bias and to choose appropriate training data.

Conclusion

Model explainability should be considered in the early stages of training a neural network for medical purposes, as it may save time in the long run and will ultimately help physicians integrate the network's predictions into clinical decision-making.


Code availability

The trained model is available from the corresponding author upon request.

The Bayesian neural network was implemented based on https://github.com/DanyWind/fastai_bayesian.
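The linked repository adapts Bayesian uncertainty estimation to fastai. As a rough, framework-agnostic illustration, Monte Carlo dropout (the approximation such Bayesian wrappers are commonly built on) can be sketched in plain PyTorch as follows; all names here are chosen for illustration and are not taken from that repository:

    # Minimal Monte Carlo dropout sketch in plain PyTorch (illustrative only).
    # Dropout stays active at inference time; repeated stochastic forward
    # passes yield a mean prediction and a simple uncertainty estimate.
    import torch
    import torch.nn as nn

    def enable_mc_dropout(model: nn.Module) -> None:
        # Re-enable only the dropout layers while the rest stays in eval mode.
        for module in model.modules():
            if isinstance(module, nn.Dropout):
                module.train()

    @torch.no_grad()
    def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
        model.eval()
        enable_mc_dropout(model)
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
        )
        # The mean over samples is the prediction; the per-class standard
        # deviation serves as a crude per-slice uncertainty measure.
        return probs.mean(dim=0), probs.std(dim=0)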

The Grad-CAM implementation can be found at https://github.com/anhquan0412/animation-classification/blob/master/gradcam.py.
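For orientation, the core of Grad-CAM can be sketched in a few lines of PyTorch; this is an illustrative reimplementation, not the code in the file linked above:

    # Illustrative Grad-CAM sketch: weight each feature map of a chosen
    # convolutional layer by the average gradient of the class score,
    # then keep the positive contributions as a heatmap.
    import torch
    import torch.nn.functional as F

    def grad_cam(model, x, target_layer, class_idx):
        activations, gradients = [], []
        h1 = target_layer.register_forward_hook(
            lambda m, inp, out: activations.append(out))
        h2 = target_layer.register_full_backward_hook(
            lambda m, gin, gout: gradients.append(gout[0]))
        try:
            model.eval()
            score = model(x)[0, class_idx]   # score of the class to explain
            model.zero_grad()
            score.backward()
        finally:
            h1.remove()
            h2.remove()
        acts, grads = activations[0], gradients[0]
        weights = grads.mean(dim=(2, 3), keepdim=True)  # GAP over spatial dims
        cam = F.relu((weights * acts).sum(dim=1))       # weighted feature maps
        return cam / (cam.max() + 1e-8)                 # normalise to [0, 1]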

The pretrained resnet50 was imported from the fastai library.
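A minimal sketch of this import with fastai's high-level API is shown below; the folder layout (one subdirectory per class) and all hyperparameters are assumptions for illustration, not the configuration used in the paper:

    # Illustrative fastai setup: load exported 2D slices from a folder whose
    # subdirectories name the classes, then fine-tune an ImageNet-pretrained
    # resnet50 for the three-way classification task.
    from fastai.vision.all import (ImageDataLoaders, cnn_learner, resnet50,
                                   accuracy, Resize)

    dls = ImageDataLoaders.from_folder(
        "mri_slices",            # hypothetical folder of exported slice images
        valid_pct=0.2, seed=42,  # hold out 20% of slices for validation
        item_tfms=Resize(224),
    )
    learn = cnn_learner(dls, resnet50, metrics=accuracy)  # downloads weights
    learn.fine_tune(5)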

The IXI dataset can be downloaded from https://brain-development.org/ixi-dataset/.

The glioblastoma dataset can be downloaded from https://wiki.cancerimagingarchive.net/display/Public/TCGA-GBM#715bed1a14224923b50f1f2e7dae54a1.
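The IXI release consists of NIfTI volumes (the TCIA glioblastoma data is distributed as DICOM and would first need conversion). A hedged sketch of extracting 2D axial slices from such a volume with nibabel, using an example file name:

    # Illustrative slice extraction from a NIfTI volume with nibabel.
    import nibabel as nib
    import numpy as np

    def axial_slices(path: str) -> np.ndarray:
        # Typical scans are stored as (X, Y, Z); move the axial (Z) axis to
        # the front so the result can be iterated slice by slice.
        volume = nib.load(path).get_fdata()
        return np.moveaxis(volume, -1, 0)

    slices = axial_slices("IXI002-Guys-0828-T1.nii.gz")  # example file name
    print(slices.shape)  # (number of slices, height, width)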

Funding

No funding was received for this project.

Author information

Corresponding author

Correspondence to Paul Windisch.

Ethics declarations

Conflict of interest

Christoph Fürweger received speaker honoraria from Accuray outside of the submitted work. All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Windisch, P., Weber, P., Fürweger, C. et al. Implementation of model explainability for a basic brain tumor detection using convolutional neural networks on MRI slices. Neuroradiology 62, 1515–1518 (2020). https://doi.org/10.1007/s00234-020-02465-1


Keywords

  • Deep learning
  • Explainability
  • Machine learning
  • Artificial intelligence
  • Glioblastoma
  • Vestibular schwannoma