Abstract
Complex deep learning models have shown impressive power in analyzing high-dimensional medical image data. To increase trust in applying deep learning models in the medical field, it is essential to understand why a particular prediction was reached. Estimating the importance of input features is a key approach to understanding both the model and the underlying properties of the data. Shapley value explanation (SHAP) is a technique for fairly evaluating the input feature importance of a given model. However, existing SHAP-based explanation methods have limitations: 1) high computational complexity, which hinders their application to high-dimensional medical image data; and 2) sensitivity to noise, which can lead to serious errors. We therefore propose an uncertainty estimation method for the feature importance results calculated by SHAP, and we theoretically justify the method under a Shapley value framework. Finally, we evaluate our method on MNIST and a public neuroimaging dataset, and show its potential to discover disease-related biomarkers from neuroimaging data.
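For context, the standard definition this work builds on (classical background, not a contribution of this paper): the Shapley value of feature $i$ under a coalition value function $v$ over the full feature set $N$ is

\[
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).
\]

The sum runs over all $2^{|N|-1}$ coalitions, which is why exact computation is intractable for image-sized inputs; sampling-based approximations make the computation tractable but random, and that randomness is precisely what an uncertainty estimate for the resulting importance scores must capture.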
This work was supported by NIH grants R01NS035193 and R01MH100028.
Our code is publicly available at: https://github.com/xxlya/DistDeepSHAP/.
Notes
- 1. The investigation of the number of repeated sampling runs is left to the Appendix (see supplementary material); a toy illustration of the idea follows below.
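As a toy illustration only (this is not the authors' DistDeepSHAP implementation; the model, feature dimension, and sample counts below are placeholder assumptions), repeating a Monte Carlo permutation-sampling Shapley estimator yields an empirical uncertainty for each importance score:

import numpy as np

def shapley_mc(model, x, baseline, n_perms, rng):
    # One Monte Carlo Shapley estimate: average marginal contribution of each
    # feature over random feature orderings, moving from `baseline` to `x`.
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perms):
        perm = rng.permutation(d)           # random feature ordering
        z = baseline.copy()
        prev = model(z[None, :])[0]
        for i in perm:
            z[i] = x[i]                     # feature i joins the coalition
            cur = model(z[None, :])[0]
            phi[i] += cur - prev            # marginal contribution of feature i
            prev = cur
    return phi / n_perms

rng = np.random.default_rng(0)
w = rng.normal(size=5)
# Toy nonlinear model (placeholder): the interaction term makes the marginal
# contributions order-dependent, so repeated estimates genuinely vary.
model = lambda X: X @ w + X[:, 0] * X[:, 1]
x, baseline = rng.normal(size=5), np.zeros(5)

# Repeat the stochastic estimator to obtain a distribution of importance
# scores; the per-feature standard deviation quantifies explanation uncertainty.
runs = np.stack([shapley_mc(model, x, baseline, n_perms=50,
                            rng=np.random.default_rng(seed))
                 for seed in range(20)])
print("mean importance:", runs.mean(axis=0))
print("uncertainty    :", runs.std(axis=0))

The per-feature standard deviation shrinks roughly as 1/sqrt(n_perms), which is the trade-off between the number of repeated samplings and explanation stability that the note above refers to.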
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Li, X., Zhou, Y., Dvornek, N.C., Gu, Y., Ventola, P., Duncan, J.S. (2020). Efficient Shapley Explanation for Features Importance Estimation Under Uncertainty. In: Martel, A.L., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol. 12261. Springer, Cham. https://doi.org/10.1007/978-3-030-59710-8_77
DOI: https://doi.org/10.1007/978-3-030-59710-8_77
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-59709-2
Online ISBN: 978-3-030-59710-8