
A Study of eXplainable Artificial Intelligence: A Systematic Literature Review of the Applications

Chapter
IoT, Big Data and AI for Improving Quality of Everyday Life: Present and Future Challenges

Part of the book series: Studies in Computational Intelligence (SCI, volume 1104)


Abstract

eXplainable Artificial Intelligence (XAI) has attracted researchers across various domains over the last few years. Explainable AI refers to AI systems that are capable of explaining their decisions. This study performs a systematic literature review on XAI. In the first phase, we collected 78 high-quality Web of Science research journal papers retrieved from the Scopus database. The review revealed that IEEE Access and Expert Systems with Applications are the journals most frequently targeted by XAI researchers. Our study applies the Apriori algorithm and network analysis to identify the dominant themes and to examine the connectivity among methods and techniques, respectively. The analysis showed that robotics, financial services, healthcare, banking, security, and business are the most prominent areas in which XAI provides explainability to artificial intelligence (AI) systems. Based on our analysis, this literature review provides future directions for researchers, academicians, and industrialists.
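The theme-extraction step above rests on the Apriori algorithm, which finds sets of keywords that frequently co-occur across papers. As a minimal illustrative sketch (the keyword sets and support threshold below are invented for illustration, not the study's actual corpus or parameters), Apriori over paper-keyword "transactions" might look like:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (as frozensets) mapped to their support."""
    n = len(transactions)
    # Level 1: all single-keyword candidates
    current = {frozenset([item]) for t in transactions for item in t}
    frequent, k = {}, 1
    while current:
        # Count how many transactions contain each candidate
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(level)
        # Join frequent k-itemsets to generate (k+1)-itemset candidates
        current = {a | b for a, b in combinations(level, 2) if len(a | b) == k + 1}
        k += 1
    return frequent

# Hypothetical keyword sets, one per paper
papers = [
    {"xai", "healthcare", "deep learning"},
    {"xai", "healthcare"},
    {"xai", "finance"},
    {"xai", "healthcare", "finance"},
]
freq = apriori([frozenset(p) for p in papers], min_support=0.5)
# {"xai", "healthcare"} co-occurs in 3 of 4 papers, so it survives the
# 0.5 threshold; {"deep learning"} appears only once and is pruned.
```

The surviving frequent pairs can then serve as weighted edges of a keyword co-occurrence network, which is the natural bridge to the network-analysis step the abstract describes.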



Acknowledgements

This article is based on research funded by the Department of Science and Technology (DST) under the ICPS Scheme, sanctioned 7/01/2019. The authors gratefully acknowledge the opportunity provided by DST to conduct academic research on tourism in India and to provide inputs for policy making to improve tourism.

Author information


Corresponding author

Correspondence to Shagun Sarraf.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Kumar, S., Sarraf, S., Kar, A.K., Ilavarasan, P.V. (2023). A Study of eXplainable Artificial Intelligence: A Systematic Literature Review of the Applications. In: Singh, P.K., Wierzchoń, S.T., Pawłowski, W., Kar, A.K., Kumar, Y. (eds) IoT, Big Data and AI for Improving Quality of Everyday Life: Present and Future Challenges. Studies in Computational Intelligence, vol 1104. Springer, Cham. https://doi.org/10.1007/978-3-031-35783-1_14
