Abstract
Explainable AI (XAI) has emerged as an essential field aimed at addressing the opacity of complex AI models and fostering trust in their decisions. This study investigates the foundations, methodologies, and practical applications of XAI. The motivation for XAI lies in the need to demystify the internal workings of AI models and make their decision-making transparent to human stakeholders. XAI methodologies span a range of techniques, including interpretable models, feature-importance analysis, local and global explanations, visualizations, and natural-language explanations. Together, these methods improve the intelligibility and explainability of AI models. This research contributes to the growing body of scholarship by clarifying the core principles of XAI. The survey demonstrates that XAI plays a pivotal role in bridging the gap between complex AI processes and human understanding, paving the way for a more reliable and effective partnership between human intellect and machine intelligence.
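To make the feature-significance analysis mentioned above concrete, the minimal sketch below computes a global, model-agnostic explanation via permutation importance. The choice of scikit-learn, the breast-cancer dataset, and the random-forest model are illustrative assumptions, not methods prescribed by the chapter.

```python
# Minimal sketch: global feature-importance analysis via permutation importance,
# one family of model-agnostic XAI techniques surveyed in this chapter.
# Dataset, model, and library choice are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an otherwise opaque ("black-box") ensemble model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the drop
# in held-out accuracy, yielding a global ranking of which inputs matter most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, score in top[:5]:
    print(f"{name}: {score:.3f}")
```

In practice such global rankings are often paired with local explanation methods that justify individual predictions, but the global view alone already helps stakeholders audit which inputs drive a black-box model.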
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this chapter
Cite this chapter
Ganatra, A. et al. (2024). Introduction to Explainable AI. In: Aluvalu, R., Mehta, M., Siarry, P. (eds) Explainable AI in Health Informatics. Computational Intelligence Methods and Applications. Springer, Singapore. https://doi.org/10.1007/978-981-97-3705-5_1
DOI: https://doi.org/10.1007/978-981-97-3705-5_1
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-3704-8
Online ISBN: 978-981-97-3705-5
eBook Packages: Computer Science, Computer Science (R0)