
Efficient Shapley Explanation for Features Importance Estimation Under Uncertainty

Part of the Lecture Notes in Computer Science book series (LNIP, volume 12261)

Abstract

Complex deep learning models have shown impressive power in analyzing high-dimensional medical image data. To increase trust in applying deep learning models in the medical field, it is essential to understand why a particular prediction was reached. Estimating the importance of data features is an important approach to understanding both the model and the underlying properties of the data. Shapley value explanation (SHAP) is a technique for fairly evaluating the importance of a given model's input features. However, existing SHAP-based explanation methods have limitations: (1) high computational complexity, which hinders their application to high-dimensional medical image data, and (2) sensitivity to noise, which can lead to serious errors. Therefore, we propose an uncertainty estimation method for the feature importance results calculated by SHAP. We then theoretically justify the method under a Shapley value framework. Finally, we evaluate our method on MNIST and a public neuroimaging dataset, and show its potential to discover disease-related biomarkers from neuroimaging data.

This work was supported by NIH Grant [R01NS035193, R01MH100028].

Our code is publicly available at: https://github.com/xxlya/DistDeepSHAP/.
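
To make the idea concrete, the following is a minimal, self-contained sketch of how Shapley values can be estimated by Monte Carlo sampling over feature orderings, and how repeating the estimate with randomly drawn reference (baseline) samples yields a per-feature uncertainty estimate (mean and standard deviation). This is an illustrative approximation, not the authors' DistDeepSHAP implementation; the function names (shapley_once, shapley_with_uncertainty), the toy linear model, and the reference-sampling scheme are assumptions made here for demonstration.

```python
# Illustrative sketch only: Monte Carlo Shapley estimation with a simple
# uncertainty estimate. Not the authors' DistDeepSHAP code; function names,
# the toy model, and the reference-sampling scheme are assumptions.
import numpy as np


def shapley_once(model_fn, x, baseline, n_perm=100, rng=None):
    """One Monte Carlo Shapley estimate for a single input x: the average
    marginal contribution of each feature over random feature orderings."""
    rng = rng if rng is not None else np.random.default_rng()
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = baseline.copy()
        prev = float(model_fn(z[None, :])[0])
        for j in order:
            z[j] = x[j]                       # add feature j to the coalition
            cur = float(model_fn(z[None, :])[0])
            phi[j] += cur - prev              # marginal contribution of feature j
            prev = cur
    return phi / n_perm


def shapley_with_uncertainty(model_fn, x, references, n_repeat=30,
                             n_perm=100, seed=0):
    """Repeat the estimator with randomly drawn reference samples; the std
    across repeats serves as a per-feature uncertainty measure."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(n_repeat):
        baseline = references[rng.integers(len(references))]
        runs.append(shapley_once(model_fn, x, baseline, n_perm, rng))
    runs = np.stack(runs)
    return runs.mean(axis=0), runs.std(axis=0)


if __name__ == "__main__":
    # Toy linear "model": attributions track the weights, and the spread
    # across sampled references shows up as per-feature uncertainty.
    w = np.array([2.0, -1.0, 0.0, 0.5])
    model_fn = lambda X: X @ w
    x = np.ones(4)
    references = np.random.default_rng(1).normal(size=(50, 4))
    mean_phi, std_phi = shapley_with_uncertainty(model_fn, x, references)
    print("Shapley mean:", mean_phi)
    print("Shapley std :", std_phi)
```

In this sketch, n_repeat and n_perm roughly play the role of the repeat sampling times mentioned in the note below; larger values tighten the estimate at higher computational cost.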


Notes

  1. The investigation of the repeat sampling times is left to the Appendix (see supplementary material).


Author information


Corresponding author

Correspondence to Xiaoxiao Li.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 437 KB)


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Li, X., Zhou, Y., Dvornek, N.C., Gu, Y., Ventola, P., Duncan, J.S. (2020). Efficient Shapley Explanation for Features Importance Estimation Under Uncertainty. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol. 12261. Springer, Cham. https://doi.org/10.1007/978-3-030-59710-8_77


  • DOI: https://doi.org/10.1007/978-3-030-59710-8_77

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59709-2

  • Online ISBN: 978-3-030-59710-8

  • eBook Packages: Computer Science, Computer Science (R0)