Abstract
Machine learning models have long been treated as black boxes, yet high-stakes decisions cannot be made without an explanation, so model explainability and interpretability have become essential. Explainable artificial intelligence (XAI) aims to reveal how a model arrives at its decisions. This paper focuses on understanding the layer-wise contribution of neurons to a model's decision, explaining the correlation between inputs and outputs and how the model arrived at a particular prediction. The proposed application presents its explanation as probabilities for numeric attributes and as superpixel shading for images. Because a wide range of models can be passed to the application, it is useful in any setting where neural networks are used for decision making. For experimental purposes, we consider a model that detects the presence of the malaria parasite in a blood-sample image and classifies it as parasitized or uninfected. In the results section, different input images are examined and the factors contributing to the model output are discussed.
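The abstract describes superpixel shading as the explanation produced for image inputs. The sketch below is a minimal illustration, not the authors' implementation, of how such shading is commonly generated with the LIME library (lime_image) for a Keras image classifier; the file names malaria_cnn.h5 and cell_sample.png and the preprocessing are hypothetical assumptions.

```python
# Minimal sketch: superpixel-based explanation of an image classifier with LIME.
# Model file, image file, and preprocessing are hypothetical placeholders.
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model
from lime import lime_image
from skimage.segmentation import mark_boundaries

model = load_model("malaria_cnn.h5")                  # hypothetical trained classifier
image = np.array(Image.open("cell_sample.png").convert("RGB"))

def predict_fn(images):
    """Return class probabilities for a batch of RGB images."""
    return model.predict(np.asarray(images) / 255.0)  # assumed [0, 1] scaling

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    predict_fn,
    top_labels=2,       # e.g. parasitized vs. uninfected
    num_samples=1000,   # perturbed samples used to fit the local surrogate model
)

# Shade the superpixels that contributed most to the predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img / 255.0, mask)          # visualise with matplotlib, etc.
```

The overlay highlights the image regions (superpixels) the local surrogate model weights most heavily, which is one common way to render the kind of shading the abstract refers to.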