Abstract
The rise of deep neural networks in machine learning has been remarkable, leading to their deployment in algorithmic decision-making. However, this has raised questions about the explainability and interpretability of these models, given their growing importance in society. To address this, the field of interpretability in machine learning has emerged, with the goal of creating frameworks that can explain the decisions of a machine learning model in a way that is comprehensible to humans. Such explanations can be essential for building trust in a system, for debugging models for potential errors, and for meeting legal requirements (e.g., GDPR). Even though the success of deep neural networks is attributed to their ability to capture higher-level feature interactions, most existing frameworks still focus on highlighting important individual features (e.g., words in text or pixels in images). Hence, to further improve interpretability, we propose to quantify the importance of feature interactions in addition to individual features. In this work, we introduce integrated directional gradients (IDG), a game-theory-inspired method for assigning importance scores to higher-level feature interactions. Our experiments with DNN-based text classifiers on the task of sentiment classification demonstrate that IDG effectively captures the importance of feature interactions.
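To make the idea concrete, the sketch below is a minimal, hypothetical illustration of the two ingredients the abstract names: per-feature attribution via integrated gradients [42], which IDG [36] builds on, and a naive comparison of a feature group's joint effect against the sum of its members' individual effects. The toy model, `group_score` heuristic, and all parameter choices are illustrative assumptions, not the authors' IDG implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w):
    """Toy differentiable classifier: sigmoid over a linear score."""
    return sigmoid(np.dot(w, x))

def model_grad(x, w):
    """Analytic gradient of the toy model w.r.t. the input x."""
    p = model(x, w)
    return p * (1.0 - p) * w

def integrated_gradients(x, baseline, w, steps=50):
    """Riemann-sum approximation of integrated gradients [42]:
    IG_i = (x_i - x'_i) * (1/steps) * sum_k grad_i(x' + (k/steps)(x - x'))
    """
    diff = x - baseline
    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        total += model_grad(baseline + (k / steps) * diff, w)
    return diff * total / steps

# Toy usage: three "features" (e.g., word-embedding dimensions).
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros_like(x)  # reference input representing absence of signal

attributions = integrated_gradients(x, baseline, w)
print("per-feature attributions:", attributions)
# Completeness check: attributions should approximately sum to F(x) - F(x').
print("sum:", attributions.sum(), "F(x)-F(x'):", model(x, w) - model(baseline, w))

def group_score(S, x, baseline, w):
    """Naive joint effect of feature group S: keep only S at its input
    value, reset everything else to the baseline. This is a crude
    stand-in for illustration; IDG [36] derives group importance scores
    axiomatically, not this way."""
    mask = np.zeros_like(x)
    mask[S] = 1.0
    x_S = baseline + mask * (x - baseline)
    return model(x_S, w) - model(baseline, w)

# A nonzero gap between the joint effect and the sum of singleton effects
# signals an interaction the per-feature scores alone cannot express.
print("joint effect of {0,1}:", group_score([0, 1], x, baseline, w))
print("sum of singletons:", group_score([0], x, baseline, w) + group_score([1], x, baseline, w))
```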
Notes
1. For detailed proofs, refer to the original paper [36].
References
Bach S, Binder A, Montavon G, Klauschen F, Müller K-R, Samek W (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7):e0130140
Camburu O-M (2020) Explaining deep neural networks. arXiv:2010.01496
Chen H, Zheng G, Ji Y (2020) Generating hierarchical explanations on text classification via feature interaction detection. In: Annual meeting of the association for computational linguistics, pp 5578–5593
Chen J, Jordan M (2020) LS-Tree: model interpretation when the data are linguistic. In: AAAI conference on artificial intelligence, vol 34, pp 3454–3461
Cui T, Marttinen P, Kaski S et al (2020) Learning global pairwise interactions with Bayesian neural networks. In: European conference on artificial intelligence. IOS Press
Devlin J, Chang M-W, Lee K, Toutanova K (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805
Engler J, Sikdar S, Lutz M, Strohmaier M (2022) SensePOLAR: word sense aware interpretability for pre-trained contextual word embeddings. In: Findings of the association for computational linguistics: EMNLP, pp 4607–4619
Frye C, Rowat C, Feige I (2020) Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability. In: Advances in neural information processing systems, vol 33
Ghorbani A, Zou J (2020) Neuron Shapley: discovering the responsible neurons. arXiv:2002.09815
Harsanyi JC (1963) A simplified bargaining model for the n-person cooperative game. Int Econ Rev 4(2):194–220
Hendricks LA, Akata Z, Rohrbach M, Donahue J, Schiele B, Darrell T (2016) Generating visual explanations. In: European conference on computer vision. Springer, Berlin, pp 3–19
Hooker S, Erhan D, Kindermans P-J, Kim B (2019) A benchmark for interpretability methods in deep neural networks. In: Advances in neural information processing systems
Ibrahim M, Louie M, Modarres C, Paisley J (2019) Global explanations of neural networks: Mapping the landscape of predictions. In: Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society, pp 279–287
Janizek JD, Sturmfels P, Lee S-I (2020) Explaining explanations: axiomatic feature interactions for deep networks. arXiv:2002.04138
Jin X, Wei Z, Du J, Xue X, Ren X (2019) Towards hierarchical importance attribution: explaining compositional semantics for neural sequence models. In: International conference on learning representations
Kim B, Wattenberg M, Gilmer J, Cai C, Wexler J, Viegas F et al (2018) Interpretability beyond feature attribution: quantitative testing with concept activation vectors (TCAV). In: International conference on machine learning. PMLR, pp 2668–2677
Kim J, Rohrbach A, Darrell T, Canny J, Akata Z (2018) Textual explanations for self-driving vehicles. In: Proceedings of the European conference on computer vision (ECCV), pp 563–578
Koh PW, Liang P (2017) Understanding black-box predictions via influence functions. In: International conference on machine learning. PMLR, pp 1885–1894
LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436–444
Lei T, Barzilay R, Jaakkola T (2016) Rationalizing neural predictions. In: Proceedings of the 2016 conference on empirical methods in natural language processing, pp 107–117
Lewis M, Liu Y, Goyal N, Ghazvininejad M, Mohamed A, Levy O, Stoyanov V, Zettlemoyer L (2019) BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv:1910.13461
Liu Z, Song Q, Zhou K, Wang T-H, Shan Y, Hu X (2020) Detecting interactions from neural networks via topological analysis. In: Advances in neural information processing systems, vol 33
Lundberg SM, Erion G, Chen H, DeGrave A, Prutkin JM, Nair B, Katz R, Himmelfarb J, Bansal N, Lee S-I (2020) From local explanations to global understanding with explainable AI for trees. Nat Mach Intell 2(1):56–67
Lundberg SM, Lee S-I (2017) A unified approach to interpreting model predictions. In: Advances in neural information processing systems, pp 4765–4774
Maas A, Daly RE, Pham PT, Huang D, Ng AY, Potts C (2011) Learning word vectors for sentiment analysis. In: Annual meeting of the association for computational linguistics: human language technologies, pp 142–150
Mathew B, Sikdar S, Lemmerich F, Strohmaier M (2020) The polar framework: polar opposites enable interpretability of pre-trained word embeddings. In: Proceedings of the web conference, pp 1548–1558
Murdoch WJ, Liu PJ, Yu B (2018) Beyond word importance: contextual decomposition to extract interactions from LSTMs. In: International conference on learning representations
Park DH, Hendricks LA, Akata Z, Rohrbach A, Schiele B, Darrell T, Rohrbach M (2018) Multimodal explanations: justifying decisions and pointing to the evidence. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 8779–8788
Rajani NF, McCann B, Xiong C, Socher R (2019) Explain yourself! leveraging language models for commonsense reasoning. arXiv:1906.02361
Ribeiro MT, Singh S, Guestrin C (2016) "Why should I trust you?" Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp 1135–1144
Ribeiro MT, Wu T, Guestrin C, Singh S (2020) Beyond accuracy: behavioral testing of NLP models with CheckList. arXiv:2005.04118
Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206–215
Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In: IEEE international conference on computer vision, pp 618–626
Şenel LK, Şahinuç F, Yücesoy V, Schütze H, Çukur T, Koç A (2022) Learning interpretable word embeddings via bidirectional alignment of dimensions with semantic concepts. Inf Process Manag 59(3):102925
Shrikumar A, Greenside P, Kundaje A (2017) Learning important features through propagating activation differences. In: International conference on machine learning. PMLR, pp 3145–3153
Sikdar S, Bhattacharya P, Heese K (2021) Integrated directional gradients: feature interaction attribution for neural NLP models. In: Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing, vol 1: Long Papers, pp 865–878
Singh C, Murdoch WJ, Yu B (2018) Hierarchical interpretations for neural network predictions. In: International conference on learning representations
Socher R, Perelygin A, Wu J, Chuang J, Manning CD, Ng AY, Potts C (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In: Conference on empirical methods in natural language processing, pp 1631–1642
Sun C, Qiu X, Xu Y, Huang X (2019) How to fine-tune BERT for text classification? In: China national conference on Chinese computational linguistics. Springer, Berlin, pp 194–206
Sundararajan M, Dhamdhere K, Agarwal A (2020) The Shapley Taylor interaction index. In: International conference on machine learning. PMLR, pp 9259–9268
Sundararajan M, Najmi A (2020) The many Shapley values for model explanation. In: International conference on machine learning. PMLR, pp 9269–9278
Sundararajan M, Taly A, Yan Q (2017) Axiomatic attribution for deep networks. In: International conference on machine learning. PMLR, pp 3319–3328
Tsang M, Rambhatla S, Liu Y (2020) How does this interaction affect me? interpretable attribution for feature interactions. In: Advances in neural information processing systems, vol 33
Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov RR, Le QV (2019) XLNet: generalized autoregressive pretraining for language understanding. In: Advances in neural information processing systems, pp 5753–5763
Yoon J, Jordon J, van der Schaar M (2018) INVASE: instance-wise variable selection using neural networks. In: International conference on learning representations
Zhang X, Zhao J, LeCun Y (2015) Character-level convolutional networks for text classification. In: Advances in neural information processing systems, pp 649–657
Copyright information
© 2023 The Institution of Engineers (India)
About this chapter
Cite this chapter
Sikdar, S., Bhattacharya, P. (2023). Interpretability of Deep Neural Models. In: Mukherjee, A., Kulshrestha, J., Chakraborty, A., Kumar, S. (eds) Ethics in Artificial Intelligence: Bias, Fairness and Beyond. Studies in Computational Intelligence, vol 1123. Springer, Singapore. https://doi.org/10.1007/978-981-99-7184-8_8