Abstract
eXplainable Artificial Intelligence (XAI) has attracted researchers across various domains over the last few years. XAI refers to AI systems that are capable of explaining their decisions. This study performs a systematic literature review on XAI. In the first phase, we collected 78 high-quality Web of Science research journal papers from the Scopus database. The collection revealed that IEEE Access and Expert Systems with Applications are the journals most frequently targeted by XAI researchers. Our study applies an Apriori algorithm to extract the dominant themes and network analysis to examine the connectivity among methods/techniques. The analysis showed that Robotics, Financial Services, Healthcare, Banking, Security, and Business are the most dominant areas where XAI provides explainability to artificial intelligence (AI) systems. Based on our analysis, this literature review provides future directions for researchers, academicians, and industrialists.
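The abstract describes mining dominant themes with an Apriori algorithm over the collected papers. As a minimal sketch of that idea, the following self-contained frequent-itemset miner runs over hypothetical keyword sets (illustrative data, not the study's actual corpus; the `papers` list and support threshold are assumptions for demonstration):

```python
from itertools import combinations
from collections import Counter

# Hypothetical keyword sets per paper (illustrative only).
papers = [
    {"xai", "healthcare", "deep learning"},
    {"xai", "healthcare", "trust"},
    {"xai", "finance", "credit scoring"},
    {"xai", "healthcare", "deep learning", "trust"},
    {"xai", "finance", "deep learning"},
]

def apriori(transactions, min_support):
    """Return a dict mapping frequent itemsets (frozensets) to their support."""
    n = len(transactions)
    frequent = {}
    # Level 1: frequent single items.
    counts = Counter(item for t in transactions for item in t)
    level = {frozenset([i]) for i, c in counts.items() if c / n >= min_support}
    k = 1
    while level:
        for s in level:
            frequent[s] = sum(1 for t in transactions if s <= t) / n
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets,
        # then keep only those meeting the support threshold.
        k += 1
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = {c for c in candidates
                 if sum(1 for t in transactions if c <= t) / n >= min_support}
    return frequent

freq = apriori(papers, min_support=0.6)
```

On this toy data, `{"xai"}` is frequent in all five papers and `{"xai", "healthcare"}` in three of five; such co-occurring itemsets are what would then feed a co-occurrence network for connectivity analysis.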
Acknowledgements
This article is based on research funded by the Department of Science and Technology (DST) under the ICPS Scheme, sanctioned 7/01/2019. The authors gratefully acknowledge the opportunity given by DST to conduct academic research on tourism in India and to provide inputs for policy making to improve tourism.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Kumar, S., Sarraf, S., Kar, A.K., Ilavarasan, P.V. (2023). A Study of eXplainable Artificial Intelligence: A Systematic Literature Review of the Applications. In: Singh, P.K., Wierzchoń, S.T., Pawłowski, W., Kar, A.K., Kumar, Y. (eds) IoT, Big Data and AI for Improving Quality of Everyday Life: Present and Future Challenges. Studies in Computational Intelligence, vol 1104. Springer, Cham. https://doi.org/10.1007/978-3-031-35783-1_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-35782-4
Online ISBN: 978-3-031-35783-1
eBook Packages: Intelligent Technologies and Robotics (R0)