1 Introduction

Deep learning has become a core component of artificial intelligence (AI) systems, speeding up and improving numerous tasks, including decision-making, prediction, anomaly and pattern detection, and recommendation. Although the accuracy of deep learning models has improved dramatically during the last decade, this gain has often been achieved through increased model complexity. In practice, such complex models may make common-sense mistakes without providing any reasons for them, making it impossible to fully trust their decisions, and targeted model improvement and optimisation also become challenging [1]. Without reliable explanations that accurately represent how an AI system operates, humans still consider AI untrustworthy because of the many dynamics and uncertainties [2] involved in deploying AI applications in real-world environments. This motivates the inherent need and expectation from human users that AI systems should be explainable, so that their decisions can be confirmed.

Explainable AI (XAI) is often considered a set of processes and methods used to describe deep learning models by characterizing model accuracy, transparency, and outcomes in AI systems [3]. XAI methods aim to provide human-readable explanations that help users comprehend and trust the outputs produced by deep learning algorithms. Additionally, regulations such as the European General Data Protection Regulation (GDPR) [4] have been introduced to drive further XAI research, raising important issues of ethics [5], justification [6], trust [7] and bias [8] that motivate the exploration of reliable XAI solutions.

The need for XAI is multi-factorial and depends on the people concerned (Table 1), whether they are end users, AI developers or product engineers. End users need to trust the decisions and be reassured by the explainable process and feedback. AI developers, on the other hand, need to understand the limitations of current models in order to validate and improve future versions. Product engineers in different domains, in turn, need to access and optimise explanations of the decision process when deploying AI systems, especially in real-world environments.

Table 1 Explainable AI: who needs it? Why? For what?

In a more detailed manner, XAI should consider different cultural and contextual factors. For example, in different contexts and cultural backgrounds, XAI may need to provide different interpretations for the same objects and phenomena. To address this, scholars have proposed the Contextual Utility Theory [9], which explains the final decision outcome by assessing the importance and influence of different factors. Additionally, tailoring explanations based on user expertise is another crucial aspect of designing effective XAI systems. By considering the varying levels of technical knowledge and expertise among users, XAI systems can provide explanations that are better suited to individual needs. For example, in healthcare, patients often have varying levels of medical knowledge and technical understanding. When presenting AI-driven diagnoses or treatment recommendations to patients, explanations should be tailored to their level of health literacy. For patients with limited medical expertise, explanations should use plain language and visual aids to help them comprehend the reasoning behind the AI-generated recommendations. For healthcare professionals, who have a deeper understanding of medical concepts, explanations can delve into more technical details and provide insights into the model’s decision-making process [10].

From a long-term perspective, more focus should be placed on usability and maintainability, which requires improved personalisation, evolution over time, and data management [11]. These aspects are essential for ensuring the continued effectiveness and relevance of XAI approaches. One area that requires attention is improved personalisation, where XAI systems can be tailored to individual users or specific application domains [12]. Also, as AI models and data evolve over time, XAI systems need to adapt and evolve as well. Data management is another critical aspect of the long-term usability and maintainability of XAI systems. As data volumes increase and data distributions change, XAI methods should be able to handle shifting data characteristics and adapt accordingly [13].

At present, XAI has gained a great deal of attention across different application domains. Accordingly, an increasing number of XAI tools and approaches are being introduced both in industry and academia. These advancements aim to address the ongoing trade-off between interpretability and predictive power. There is a growing recognition that highly interpretable models might encounter limitations in capturing complex relationships, which can lead to reduced accuracy. On the other hand, complex models often achieve superior accuracy but at the expense of interpretability. Balancing these considerations becomes crucial and is contingent upon the specific requirements of the application domain. In certain contexts, such as healthcare or finance, interpretability and transparency play pivotal roles in ensuring regulatory compliance and addressing ethical considerations [14, 15]. In other domains, such as medical image or signal recognition, where accuracy is paramount, the focus may be more on predictive power than interpretability [16].

The field of XAI has witnessed the emergence of numerous methods and techniques aimed at comprehending the intricate workings of deep learning models, and several survey papers have summarized these methods and the fundamental distinctions among XAI approaches [3, 17, 18]. However, while certain surveys have focused on specific domains such as healthcare [19] or medical applications [20], there remains a substantial gap in the state-of-the-art analysis of existing approaches and their limitations across XAI-enabled application domains. This gap calls for a comprehensive investigation of the differing requirements, suitable XAI approaches, and domain-specific limitations. Such an analysis is crucial because it deepens our understanding of how XAI techniques perform in real-world scenarios and helps identify the challenges and opportunities that arise when applying these approaches in different application domains. Bridging this gap is a significant step towards more effective and reliable XAI systems tailored to specific domains and their unique characteristics.

In this survey, our primary objective is to provide a comprehensive overview of XAI approaches across various application domains by exploring and analysing the different methods and techniques employed in XAI and their application-specific considerations. We achieve this by utilizing three well-defined taxonomies, as depicted in Fig. 1. Unlike many existing surveys that solely focus on reviewing and comparing methods, we go beyond that by providing a domain mapping. This mapping offers insights into how XAI methods are interconnected and utilized across various application domains, and even in cases where domains intersect. Additionally, we provide a detailed discussion of the limitations of existing methods, acknowledging the areas where further improvements are necessary. Lastly, we summarize future directions in XAI research, highlighting potential avenues for advancements and breakthroughs. Our contributions in this survey can be summarized as follows:

  • Develop a new taxonomy for the description of XAI approaches based on three well-defined orientations with a wider range of explanation options;

  • Investigate and examine various XAI-enabled applications to identify the available XAI techniques and domain insights through case studies;

  • Discuss the limitations and gaps in the design of XAI methods for the future directions of research and development.

In order to comprehensively analyze XAI approaches, limitations, and future directions from application perspectives, our survey is structured around two main themes, as depicted in Fig. 1. The first theme focuses on general approaches and limitations in XAI, while the second theme aims to analyze the available XAI approaches and domain-specific insights.

Under each domain, we explore four main sub-themes: problem definition, available XAI approaches, case studies, and domain insights. Before delving into each application domain, it is important to review the general taxonomies of XAI approaches. This provides a foundation for understanding and categorizing the various XAI techniques. In each domain, we discuss the available and suitable XAI approaches that align with the proposed general taxonomies of XAI approaches. Additionally, we examine the domain-specific limitations and considerations, taking into account the unique challenges and requirements of each application area. We also explore cross-disciplinary techniques that contribute to XAI innovations. The findings from these discussions are summarized as limitations and future directions, providing valuable insights into current research trends and guiding future studies in the field of XAI.

Fig. 1
figure 1

The proposed organization to discuss the approaches, limitations and future directions in XAI

2 Taxonomies of XAI Approaches

2.1 Review Scope and Execution

This work is based on a defined scope of review, which refers to the specific boundaries and focus of the research being conducted. In the context of an XAI survey, the scope typically includes the following aspects:

  • XAI approaches: The review will focus on examining and analyzing different XAI approaches and methods that have been proposed in the literature. These include visualization techniques, symbolic explanations, ante-hoc and post-hoc explanations, local and global explanations, and any other relevant techniques.

  • Application domains: The review may consider various application domains where XAI techniques have been applied, including medical and biomedical, healthcare, finance, law, cyber security, education and training, and civil engineering. The scope involves exploring the usage of XAI techniques in these domains and analyzing their effectiveness and limitations across multiple domains.

  • Research papers: The review will involve studying and synthesizing research papers that are relevant to the chosen scope. These papers may include original research articles, survey papers and scholarly publications that contribute to the understanding of XAI approaches and their application in the selected domains through case studies.

  • Limitations and challenges: The scope also encompasses examining the limitations and challenges of existing XAI methods and approaches. This could involve identifying common issues, gaps in the literature, and areas that require further research or improvement.

With the scope of review established, the selected databases and search engines include Scopus, Web of Science, arXiv and Google Scholar, covering publications between 2013 and 2023. The search terms based on the scope are:

  • XAI keywords: explainable, XAI, interpretable.

  • Review keywords: survey, review, overview, literature, bibliometric, challenge, prospect, trend, insight, opportunity, future direction.

  • Domain keywords: medical, biomedical, healthcare, wellness, civil, urban, transportation, cyber security, information security, education, training, learning and teaching, coaching, finance, economics, law, legal system.

With the selected search terms, the two-round search strings were designed to effectively retrieve relevant information and narrow down the search results.

The first round, focusing on general research papers, consisted of the following search string: (explainable OR XAI OR interpretable) AND (survey OR review OR overview OR literature OR bibliometric OR challenge OR prospect OR trend OR opportunity OR "future direction").

The second round, aimed at selecting specific application domains, utilized the following search string: (explainable OR XAI OR interpretable) AND (medical OR biomedical OR healthcare OR wellness OR civil OR urban OR transportation OR "cyber security" OR "information security" OR education OR training OR "learning and teaching" OR coaching OR finance OR economics OR law OR "legal system").
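
For reproducibility, the two-round query strings can be assembled programmatically from the keyword lists above; the short Python sketch below is only an illustration of that composition (the helper `or_group` is ours, not part of any database API), and the resulting strings can be pasted into Scopus, Web of Science, Google Scholar or arXiv.

```python
# Sketch: compose the two-round Boolean search strings from the keyword lists.
xai_terms = ["explainable", "XAI", "interpretable"]
review_terms = ["survey", "review", "overview", "literature", "bibliometric",
                "challenge", "prospect", "trend", "opportunity", '"future direction"']
domain_terms = ["medical", "biomedical", "healthcare", "wellness", "civil", "urban",
                "transportation", '"cyber security"', '"information security"',
                "education", "training", '"learning and teaching"', "coaching",
                "finance", "economics", "law", '"legal system"']

def or_group(terms):
    """Join terms into a parenthesised OR group."""
    return "(" + " OR ".join(terms) + ")"

round_one = f"{or_group(xai_terms)} AND {or_group(review_terms)}"   # general papers
round_two = f"{or_group(xai_terms)} AND {or_group(domain_terms)}"   # domain-specific papers
print(round_one)
print(round_two)
```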

Publications that did not clearly align with the scope, based on their title or abstract, were excluded from this review. Although not all of the retrieved literature stated the relevant information explicitly, the extracted data were organized and served as the foundation for our analysis.

2.2 XAI Approaches

The taxonomies in existing survey papers generally categorise XAI approaches based on scope (local or global) [21], stage (ante-hoc or post-hoc) [17] and output format (numerical, visual, textual or mixed) [22]. The main difference between these existing studies and our survey is that this paper focuses on the human perspective, involving source, representation, and logical reasoning. We summarise the taxonomies used in this survey in Fig. 2:

Fig. 2
figure 2

Taxonomies of XAI approaches in this survey

Source-oriented (SO): the sources that support building explanations can be either subjective (S) or objective (O) cognition, depending on whether the explanations are based on facts or on human experience. For example, in the medical field, if the explanation of a diagnosis is based on the patient’s clinical symptoms and explains the cause and pathology in detail during the AI learning process, it reflects objective cognition. In contrast, subjective cognitive explanations consider the patient’s current physical condition and the doctor’s medical knowledge.

Representation-oriented (RO): the core representation among XAI approaches can generally be classified into visualisation-based (V), symbolic-based (S) or hybrid (H) methods. Visualisation-based methods are the most common form of representation, including input visualisation and model visualisation. Input visualisation methods provide an accessible way to view and understand how input data affect model outputs, while model visualisation methods provide analysis of the layers or features inside the model.

Besides visualisation-based methods, other formats of explanation, including numerical, graphical, rule-based, and textual explanations, are covered by symbolic-based methods. Symbolic-based methods tend to describe the process of deep learning models by extracting insightful information, such as meaning and context, and representing it in different formats. Coding symbolic explanations are derived directly from factual features and include numerical, graphical and textual explanations. For instance, a numerical method [23] monitors the features at every layer of a neural network model and measures their suitability for classification, which helps provide a better understanding of the roles and dynamics of the intermediate layers. Zhang et al. [24] used an explanatory graph to reveal the knowledge hierarchy hidden inside a pre-trained convolutional neural network (CNN) model; in the graph, each node represents a part pattern, and each edge encodes co-activation and spatial relationships between patterns. A phrase-critic model has been developed to generate a candidate textual explanation for the input image [25].

By contrast, qualifying symbolic explanations, such as rule-based and graphical explanations, are produced with human knowledge. Rule explanations usually take the form of "If-Then" rules that capture the inferential process of neural networks. Rule extraction techniques include neural network knowledge extraction (NNKX) [26], rule extraction from neural network ensemble (REFNE) [27], and electric rule extraction (ERE) [28]. Moreover, knowledge graphs (KGs) can also help make AI models more explainable and interpretable and are widely used in explainable methods. Andriy and Mathieu [29] applied rule mining using KGs to reveal semantic bias in neural network models. Visualisation-based methods and coding symbolic explanations generally belong to the objective cognitive category, while qualifying symbolic explanations are subjective cognitive, as they add human knowledge and opinions.
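
As a minimal illustration of rule-style symbolic explanations (a generic surrogate-tree sketch, not a reimplementation of NNKX, REFNE or ERE; the dataset and network are placeholders), the snippet below distils a trained neural network into human-readable "If-Then" rules by fitting a shallow decision tree to the network's own predictions.

```python
# Sketch: extract "If-Then" style rules from a trained neural network via a surrogate tree.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Black-box model whose reasoning we want to approximate.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

# Surrogate tree trained on the network's predictions rather than the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, net.predict(X))

# Readable "If-Then" rules that approximate the network's decision logic.
print(export_text(surrogate, feature_names=list(X.columns)))
```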

Hybrid methods can generate mixed explanations, consisting of visualization explanations and symbolic information, which can be either subjective or objective cognitive. In [30], visual and textual explanations are employed in the visual question answering task. Both subjective and objective cognitive explanations are provided in this work. We summarise the above-mentioned XAI representations with surveyed related work in Table 2.

Table 2 Summarising the XAI approaches from representation-oriented

Logic-oriented (LO): a well-developed XAI method with a specific representation can integrate logical reasoning into deep learning models, covering end-to-end (E-E), middle-to-end (M-E), and correlation (Corr) relationships, as shown in Table 3. End-to-end explanations describe how the AI system proceeds from the first input stage to the final output result. Middle-to-end explanations are formed by considering the internal structure of the AI system. Correlation explanations represent the correlations among sequential inputs or consecutive outputs of deep learning models. Most XAI approaches aim to clarify the relationship between input features and output results, and very little research addresses middle-to-end and correlation relationships.

Table 3 Summarising the XAI approaches from logic-oriented

3 Applications Using XAI Approaches

Nowadays, XAI approaches are being applied across various domains. This section details the different XAI techniques used in each application area. Some of the main applications are summarised in Table 4.

Table 4 Applications of XAI

3.1 Medical and Biomedical

3.1.1 Problem Definition

AI systems increasingly influence medical advice and therapeutic decision-making in medical and biomedical research, especially in medical and biomedical image analysis, one of the most common areas in which deep learning techniques are applied. Typical tasks include image registration and localisation, detection of anatomical and cellular structures, tissue segmentation, and computer-aided disease diagnosis.

Medical and biomedical image analysis refers to the extraction of meaningful information from digital images acquired with a variety of medical imaging techniques, including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), and X-rays, covering important body parts such as the brain, heart, breast, lung and kidney [153]. Owing to the advantages of deep learning in this field, in recent years more and more researchers have adopted deep learning to solve problems of medical image analysis and achieved good performance. Although deep learning-based medical image analysis has made great progress, it still faces some urgent problems in medical practice.

Deep learning methods can now automatically extract abstract features through end-to-end prediction, which produces direct results but is insufficient to provide diagnostic evidence and pathology, so the results cannot be completely trusted or accepted. For example, for glaucoma screening, doctors can use intraocular pressure testing, visual field testing, and manual inspection of the optic disc to diagnose the disease and give the cause and pathology based on the patient’s symptoms and pathological reports [154]. However, it is difficult for a deep learning model to explain the correlation or causality between its inputs and outputs in different contexts; lacking an explanation of the process, it offers little support for reasoning in medical diagnosis and research.

Additionally, due to the data-driven nature of deep learning, models can easily learn biases present in the training data [155]. This phenomenon is common in medical image processing. For example, a deep learning model may identify certain diseases from images during training while the actual diagnosis is another disease, which should not occur at all. If users intend to improve a model, an explanation of the model is a prerequisite, because before a problem can be solved, its existence and cause need to be identified.

3.1.2 XAI Based Proposals

Research on explainable deep learning can enhance the capabilities of AI-assisted diagnosis by integrating with large-scale medical systems, providing an effective and interactive way to promote medical intelligence. Unlike common explainable deep learning methods, explanation research for medical image analysis is affected not only by the data but also by medical experts’ knowledge.

In terms of source-oriented, the objective cognitive explanation is based on visible, measurable findings obtained from medical examinations, tests, or images, while the subjective cognitive explanation needs to consider the medical experts’ knowledge and the patient’s situation. Existing XAI proposals cover both objective and subjective cognitive aspects. For example, the gradient-weighted class activation mapping (Grad-CAM) method proposed in [36] performs explanation by highlighting the important regions in the image, which is objective cognitive. Some researchers also consider subjective sources; for example, in [85], the authors presented explanations by combining time series, histopathological images, knowledge databases as well as patient histories.
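
To make the Grad-CAM idea concrete, the following minimal PyTorch sketch hooks the last convolutional block of a pretrained ResNet-18 and produces a heat map of the regions that most influence the predicted class; the model, layer choice and random input are illustrative placeholders rather than the exact setup of [36].

```python
# Sketch: minimal Grad-CAM over the last convolutional block of a pretrained ResNet-18.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

model.layer4.register_forward_hook(fwd_hook)        # last convolutional block
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                     # placeholder for a preprocessed image
scores = model(x)
target = scores.argmax(dim=1).item()
scores[0, target].backward()                        # gradients of the chosen class score

# Weight each feature map by its average gradient, combine, rectify and normalise.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)            # (1, C, 1, 1)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)               # heat map in [0, 1]
```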

In terms of representation-oriented, visualisation methods emphasise the visualisation of training data rules and the visualisation inside the model, and they are the most popular XAI approaches used in medical image analysis. Typical examples include attribution-based and perturbation-based methods for model-agnostic explanations as well as CAM-based methods and concept attribution for model-specific explanations. Local interpretable model-agnostic explanations (LIME) [86] is utilised to generate explanations for the classification of medical image patches. Zhu et al. [87] used rule-based segmentation and perturbation-based analysis to generate explanations visualising the importance of each feature in the image. Concept attribution [37] is introduced by quantifying the contribution of features of interest to the CNN network’s decision-making. Symbolic methods focus on symbolic information representations that simulate the doctor’s decision-making process with natural language, along with the generated decision results, such as primary diagnosis reports. For example, Kim et al. [66] introduced concept activation vectors (CAVs), which provide textual interpretation of a neural network’s internal state with user-friendly concepts. Lee et al. [73] provided explainable computer-aided diagnoses by combining a visual pointing map and diagnostic sentences based on a predefined knowledge base.
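
As a hedged sketch of such perturbation-based, model-agnostic image explanations (not the exact pipelines of [86] or [87]), the snippet below applies the lime package's image explainer to an arbitrary classifier; `classify_batch` is a dummy placeholder for any model that maps a batch of RGB images to class probabilities.

```python
# Sketch: perturbation-based explanation of an image classifier with LIME.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classify_batch(images: np.ndarray) -> np.ndarray:
    """Placeholder predictor: batch of HxWx3 images -> two-class probabilities."""
    scores = images.mean(axis=(1, 2, 3)) / 255.0          # dummy score per image
    return np.stack([1 - scores, scores], axis=1)

image = np.random.randint(0, 255, size=(224, 224, 3), dtype=np.uint8)  # placeholder patch

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classify_batch, top_labels=1, hide_color=0, num_samples=200)

label = explanation.top_labels[0]
patch, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(patch / 255.0, mask)   # superpixels supporting the prediction
```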

In terms of logic-oriented, explanations focus on end-to-end reasoning; for example, the above-mentioned LIME and other perturbation-based methods are utilised to explain the relationship between input medical images and predicted results. A linear regression model is embedded into LIME [86] to identify relevant regions by plotting heat maps with varying colour scales. Zhang et al. [56] provided a diagnostic reasoning process and translated gigapixel images directly into a series of interpretable predictions. Shen et al. [58] used a hierarchical architecture to present a concept learning-based correlation model, where the model’s intermediate outputs, which anticipate diagnostic elements connected to the final classification, aid the interpretability of the prediction. A correlation-based XAI approach proposed in [59] performs feature selection by merging generalised feature importance obtained with Shapley additive explanations (SHAP) and correlation analysis to achieve the optimal feature vector for classification.
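
The sketch below illustrates, with placeholder data and a placeholder model, the general idea behind such correlation-merged SHAP feature selection as in [59]: rank features by mean absolute SHAP value, then greedily drop features that are strongly correlated with an already-selected, more important feature.

```python
# Sketch: merge SHAP feature importance with correlation analysis for feature selection.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)          # placeholder clinical data
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Generalised feature importance as mean |SHAP value| over the dataset.
shap_values = shap.TreeExplainer(model).shap_values(X)         # (n_samples, n_features)
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)

# Greedily keep important features that are not strongly correlated with kept ones.
corr = X.corr().abs()
selected = []
for feature in importance.sort_values(ascending=False).index:
    if all(corr.loc[feature, kept] < 0.9 for kept in selected):
        selected.append(feature)

print("Selected feature vector:", selected)
```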

3.1.3 Case Studies

Explainable AI applications in the medical field assist healthcare providers in making accurate diagnoses, treatment decisions, risk assessments, and recommendations. The transparency and interpretability of these AI models ensure that clinicians can trust and validate the outputs, leading to improved patient care and outcomes [92].

Lesion classification: The visualisation of lesion areas mainly relies on heat maps [88] and attention mechanisms [89, 90], combined with other diagnostic means such as structural remodeling [91] and language models that represent report text [156], to locate lesion areas. These methods provide visual evidence to explore the basis for medical decision-making. For example, Biffi et al. [91] used a visualisation method on the original images to measure the specificity of pathology, using interpretable, task-specific characteristics to distinguish clinical conditions and make the decision-making process transparent. Garcia-Peraza-Herrera et al. [92] used embedded activation maps to detect early squamous cell tumors, focusing on the interpretability of the results and using it as a constraint to provide a more detailed attention map. Paschali et al. [88] used fine-grained logit heat maps derived from model activations to explain the medical imaging decision-making process. Lee et al. [90] used head CT scan images to detect acute intracranial hemorrhage and proposed an interpretable deep learning framework. Liao et al. [89] provided a visual explanation basis for the automatic detection of glaucoma based on the attention mechanism; during automatic glaucoma detection, the system provides three types of output: the prediction result, an attention map, and the prediction basis, which enhances the interpretability of the result.

Disease diagnosis and treatment: Research on XAI in disease diagnosis and treatment has recently gained much attention. Amoroso et al. [93] applied clustering and dimension reduction to outline the most important clinical features for patients and designed oncological therapies in the proposed XAI framework. In the context of high-risk diagnoses, explainable AI techniques have been applied to provide visual cues and explanations to clinicians. For example, Sarp et al. [94] utilized LIME to generate visual explanations for a CNN-based chronic wound classification model. These visual cues help clinicians understand the model’s decision-making process and provide transparency in the diagnosis. Moreover, Wu et al. [95] proposed counterfactual multi-granularity graph supporting fact extraction (CMGE) for lymphedema diagnosis. CMGE is a graph-based neural network that extracts facts from electronic medical records, providing explanations and causal relationships among features. These explanations assist clinicians in comprehending the reasoning behind the diagnosis and identifying relevant factors contributing to the condition.

In the domain of relatively low-risk screenings, explainable AI research has explored the integration of medical records and natural language processing methods to provide interpretable diagnostic evidence. For instance, Wang et al. [96] and Lucieri et al. [97] have integrated medical records into map and image processing, creating diagnostic reports directly from medical images using multi-modal medical information. This integration enables the generation of interpretable evidence and explanations for clinicians during the screening process.

Additionally, hybrid XAI approaches have been explored, such as Mimir proposed by Hicks et al. [84]. Mimir learns intermediate analysis steps in deep learning models and incorporates these explanations to produce structured and semantically correct reports that include both textual and visual elements. These explanations aid in understanding the screening results and provide insights into the features contributing to the risk assessment, assisting clinicians in making informed decisions and recommendations for further screenings or preventive measures.

3.1.4 Domain-Specific Insights

In the biomedical field, the end-users of XAI are mostly pharmaceutical companies and biomedical researchers. With the use of AI and XAI, they can understand the reasoning behind predictions made for disease diagnosis and the associated diagnostic evidence. This insight into the decision-making process can enhance the transparency and trustworthiness of AI-based predictions, leading to more accurate, reliable and efficient disease diagnosis. Nonetheless, XAI techniques have their limitations. For example, due to the inherent complexity of disease diagnosis and treatment, XAI may struggle to provide full and concise explanations of all factors influencing a prediction. Further, if a situation is novel or significantly different from the past scenarios used to train the AI, the explanations offered by XAI may be insufficient or not entirely accurate. Moreover, given the dynamic and multifaceted nature of public health and disease spread, XAI might struggle to provide real-time explanations or to take into account every single factor influencing disease spread, including socioeconomic conditions, behaviour changes, and environmental changes. Additionally, if the situation changes rapidly, as in the case of a new disease outbreak, the explanations provided by XAI might be outdated or not entirely accurate. Therefore, while XAI provides significant advantages in the biomedical field by enhancing transparency and trust in AI predictions, it also faces challenges related to the complexity and dynamic nature of biomedical data and scenarios. More research and advancements are needed to improve the capability of XAI to handle these challenges and provide clear, concise, and real-time explanations.

3.2 Healthcare

3.2.1 Problem Definition

AI-assisted exploration has broad applications in healthcare, including drug discovery [104, 105] and disease diagnosis [103, 106]. Deep learning techniques have achieved high accuracy on classification problems, e.g., using MRI images to identify different stages of dementia [157]. The healthcare industry can now examine data at extraordinary rates without sacrificing accuracy due to the development of deep learning. Healthcare presents unique challenges, with typically much higher requirements for interpretability, model fidelity, and performance than most other fields. More interpretable solutions are in demand beyond the binary classification of positive-negative testing, and they can benefit clinicians by allowing them to understand the results. In healthcare, critical applications such as predicting a patient’s end-of-life may have more stringent conditions on the fidelity of interpretation than, say, predicting the cost of a procedure [158]. Researchers have analysed the interpretability of deep learning algorithms for the detection of acute intracranial haemorrhage from small datasets [90]. Moreover, some studies focus on explaining how AI improves the use of individual patients’ electronic health records (EHRs) [159]. This is because AI models for treating and diagnosing patients can be complex, these tools are not always sufficient on their own to support the medical community, and medical staff may not have been exposed to these technologies before [159].

3.2.2 XAI Based Proposals

Adding to the extensive research on SARS-CoV-2 mutations, Garvin et al. [160] used iRF-LOOP [161], an XAI algorithm, in combination with random intersection trees (RIT) [162] to analyse the matrix of variable-site mutations. The network output by the iRF-LOOP model includes, for each variable-site mutation, a score of its ability to predict the absence or presence of another variable-site mutation. Ikemura et al. [163] used an unsupervised XAI method, BLSOM (batch-learning self-organizing map), on the oligonucleotide compositions of SARS-CoV-2 genomes, revealing novel characteristics of genome sequences and drivers of species-specific clustering. The BLSOM method presents the contribution levels of the variables at each node by visualising the explanation with a heat map.

In terms of source-oriented, the sources for XAI in healthcare are mainly visible, measurable findings, medical histories, numerical records and reports. This information is derived from a series of medical examinations and is therefore objective rather than dependent on the explanation methods or requirements. For example, SHAP is an explainer that helps to visualise the output of a machine learning model and compute the contribution of each feature to the prediction [164].

In terms of representation-oriented, in the healthcare field, explanations of AI models are more realistically applicable to overall AI processes, whereas individual decisions need to be considered carefully. XAI in the healthcare field mainly involves causality, which is the capacity to identify cause-and-effect connections among the system’s many components, and transferability, which is the capacity to apply the information the XAI provides to different problem domains [107].

In terms of logic-oriented, explanations in healthcare mainly focus on correlation analysis. For example, SHAP has been widely used in the healthcare industry to provide explanations for hospital admission [108], quality of life [109], surgical complications [110], oncology [111] and risk-factor analysis of in-hospital mortality [112].

3.2.3 Case Studies

Pain detection based on facial expressions: Understanding the choices and restrictions of various pain recognition models is essential for the technology’s acceptability in high-risk industries like healthcare. Researchers have provided a method for examining the differences in the learned representations of models trained on experimental pain (the BioVid heat pain dataset) and clinical pain (the UNBC shoulder pain dataset). To do this, they first train two convolutional neural networks, one for each dataset, to automatically recognise pain and the absence of pain. The performance of the heat pain model is then assessed using pictures from the shoulder pain dataset, and vice versa; this is known as a cross-dataset evaluation. Then, to determine which areas of the photographs in the database are most relevant, they employed layer-wise relevance propagation [165]. In this study, they showed that the experimental pain model pays more attention to facial expression.

Prediction of caregiver quality of life: The caregiver’s quality of life may be impacted by the patient’s depression and employment status before the onset of symptoms. Some researchers analysed the quality of life of caregivers, using SHAP to visualize the overall impact of all features and then selecting a subset of the seven most predictive features to establish a simpler model (M7) for a global explanation. Simpler models may be easier to comprehend, less likely to be influenced by unimportant or noisy aspects, more accurate, and more useful. In this study, SHAP was used to provide post-hoc explanations. Studies explored the most predictive features that impact the quality of caregivers’ life, including the weekly caregiving duties, age and health of the caregiver, as well as the patient’s physical functioning and age of onset [109].
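
A rough sketch of this SHAP-based workflow, using synthetic placeholder data rather than the caregiver dataset of [109]: compute post-hoc SHAP values for the full model, visualise the overall impact of all features, then refit a simpler model on the seven most predictive ones.

```python
# Sketch: SHAP post-hoc explanation, then a simpler model on the top-7 features.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for caregiver quality-of-life data.
X, y = make_regression(n_samples=300, n_features=20, n_informative=7, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])

full_model = GradientBoostingRegressor(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(full_model).shap_values(X)

shap.summary_plot(shap_values, X, show=False)        # overall impact of all features

# Keep the seven most predictive features and train a simpler model (M7-style).
mean_abs = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
top7 = mean_abs.sort_values(ascending=False).head(7).index
m7 = GradientBoostingRegressor(random_state=0).fit(X[top7], y)
print("Top-7 features:", list(top7))
```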

3.2.4 Domain-Specific Insights

In healthcare, the end-users of XAI systems range from clinicians and healthcare professionals to patients and their families. Given the high-stakes nature of many medical decisions, explainability is often crucial to ensuring these stakeholders understand and trust AI-assisted decisions or diagnoses. One of the primary benefits of XAI in the healthcare domain is the potential to make complex medical decisions more transparent and interpretable, leading to improved patient care. By providing clear explanations for AI-driven predictions, such as identifying risk factors for a particular disease, XAI can help clinicians make more informed decisions about a patient’s treatment plan. Patients, too, can benefit from clearer explanations about their health status and prognosis, which can lead to better communication with their healthcare providers and a greater sense of agency in their care. For instance, in the context of disease diagnosis, AI models equipped with XAI can interpret complex medical imaging data, such as MRI scans, to accurately diagnose a disease like dementia or cancer. Not only can these models highlight which features are most important in reaching a diagnosis, but they can also provide a visual explanation that assists clinicians in understanding the model’s reasoning. Similarly, in drug discovery, XAI can assist in identifying novel therapeutic targets and predicting the efficacy of potential drugs, improving the speed and accuracy of drug development. The transparency provided by XAI in this process can improve trust and confidence in the AI model’s suggestions, and potentially speed up the regulatory approval process. However, the implementation of XAI in the healthcare domain is not without challenges. Privacy and data security are significant concerns when dealing with sensitive health data. Additionally, the ability of XAI to provide clear, comprehensible explanations in cases where the underlying AI model is extremely complex remains a challenge. Moreover, the healthcare field is inherently dynamic and complex, with countless interacting variables, so the explainability provided by current XAI methods might not be complete or fully accurate. Finally, there are also significant regulatory and ethical considerations that come with the application of AI and XAI in healthcare. Regulators will need to establish clear guidelines for the use of these technologies to ensure that they are used responsibly and ethically.

3.3 Cybersecurity

3.3.1 Problem Definition

Cybersecurity is the use of procedures, protections, and technologies to defend against potential online threats to data, applications, networks, and systems [166]. Maintaining cybersecurity is becoming more and more challenging because of the complexity and sheer number of cyber threats, including viruses, intrusions, and spam [167].

In recent years, intelligent network security services and management have benefited from the use of AI technology, such as ML and DL algorithms.

A variety of AI methods and tools have been developed to defend against threats to network systems and applications, whether these threats originate inside or outside an organisation. However, it is challenging for humans to comprehend how these outcomes are produced, since AI-based decision-making models for network security lack reasons and rational explanations [168]. This is due to the black-box nature of AI models. As a result, the network defence mechanism becomes a black-box system that is susceptible to information leakage and the effects of AI [169]. To combat cyber attacks that take advantage of AI’s flaws, XAI is a solution to the growing black-box issue in AI. Owing to XAI’s logical interpretation and key data proof interoperability, experts and general users can comprehend AI-based models [170].

Zhang et al. [171] divided these applications into three categories: defensive network threats applications using XAI, network security XAI applications in different industries, and defence methods against network threats in XAI applications. This section mainly analyses the defensive applications of XAI against network attacks.

3.3.2 XAI Based Proposals

In terms of source-oriented, the objective explanation is based on the detected system or network data, while the subjective explanation depends on the analysts’ expertise and backgrounds. The existing XAI proposals cover both objective and subjective aspects. For example, Hong et al. [113] proposed a framework that provides explanations from the model and combines them with the analysts’ knowledge to eliminate the false-positive errors of decisions made by the AI models. Amarasinghe and Manic [114] proposed a method that, for each concept learned by the system, identifies the most relevant input features aligned with the domain experts’ knowledge.

In terms of representation-oriented, embedding human-understandable text to interpret the results of the decision-making process is a common approach. For example, Amarasinghe et al. [115] used a text summary to explain the reasoning to the end user of an explainable DNN-based DoS anomaly detection system in process monitoring. Besides, to present the explanation logic, DENAS [116] is a rule-generation approach that extracts knowledge from software-based DNNs; it approximates the nonlinear decision boundary of DNNs by iteratively superimposing a linearized optimization function. Moreover, image visualisation methods have also been used for explanation: Gulmezoglu [117] applied LIME and saliency maps to examine the most dominant features of a website fingerprinting attack on the trained DL model. Feichtner et al. [118] used LIME to calculate an importance score for each word and used heat-map visualisations to interpret the correlation between an application’s text description and its requested permissions based on samples. Another interpretation proposal [119] represented a mobile application as an image and localised the salient parts used by the model with the Grad-CAM algorithm; in this way, the analyst can gain knowledge from the highlighted image regions for a specific prediction.

In terms of logic-oriented, explanations focus on end-to-end reasoning similar to LIME as used in other areas, while the local explanation method using nonlinear approximation (LEMNA) is optimised for security applications based on deep learning, such as PDF malware recognition and binary reverse engineering [120]. In contrast to LIME’s strong assumption that the model is locally linear, the LEMNA scheme can deal with local nonlinearities and takes into account the dependencies between features. LEMNA can be used to interpret a function start-position detection model in binary reverse engineering. Yan et al. [121] proposed a technique for extracting a rule tree, merged from the rule trees generated for the hidden layer and the output layer of the DNN, which shows the more important input features in the prediction task. The middle-to-end explanation is also used in this area: Amarasinghe et al. [115] used the layer-wise relevance propagation (LRP) method [122] to find the relevant features between the input layer and the last layer and thus explain which input features contribute to a decision.

3.3.3 Case Studies

Intrusion detection systems: XAI studies in intrusion detection systems are normally used to provide explanations for different user perspectives. Most approaches use already-developed methods to make the results interpretable, with SHAP being the most widely adopted; LIME, on the other hand, has been adopted in only a few cases. Shraddha et al. [123] proposed a framework with a DNN at its base and applied an XAI method to add transparency at every stage of the deep learning pipeline in an intrusion detection system (IDS). Explanations give users measurable factors describing which features influence the prediction of a cyber-attack and to what degree. Researchers create XAI mechanisms depending on who benefits from them: for data scientists, SHAP and BRCG [124] are proposed, while for analysts Protodash is used; for end-users, where an explanation of a single instance is required, researchers suggest SHAP, LIME, and CEM. Hong et al. [113] proposed a network intrusion detection framework called FAIXID, which makes use of XAI and data-cleaning techniques to enhance the explainability and understandability of intrusion detection alerts. The proposed XAI algorithms included exploratory data analysis (EDA), Boolean rule column generation (BRCG), and the contrastive explanations method (CEM), deployed in different explainability modules to provide cybersecurity analysts with comprehensive and high-quality explanations of the detection decisions made by the framework. Collecting analysts’ feedback through the evaluation module to enhance the explanation models by data cleaning also proved effective in this work.
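
To illustrate the single-instance explanations recommended for end users (a generic sketch with synthetic flow features and a placeholder random forest, not the FAIXID pipeline itself), the snippet below uses local SHAP values to show which features pushed one particular alert towards the attack decision.

```python
# Sketch: explain a single intrusion-detection alert with local SHAP values.
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic network-flow features (stand-ins for duration, byte counts, flags, ...).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)
X = pd.DataFrame(X, columns=[f"flow_feat_{i}" for i in range(X.shape[1])])

detector = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

alert = X.iloc[[0]]                                   # one flagged connection
contrib = shap.TreeExplainer(detector).shap_values(alert)

# Older SHAP versions return one array per class; newer ones return a 3-D array.
attack_contrib = contrib[1] if isinstance(contrib, list) else contrib[..., 1]
print(pd.Series(attack_contrib.reshape(-1), index=X.columns)
        .sort_values(key=abs, ascending=False))
```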

Malware detection: The effectiveness of malware detection increases when AI models are applied to signature-based and anomaly-based detection systems. Heuristic-based methods [172] were proposed to understand the behaviour of an executable file using data mining and deep learning approaches. The development of interpretable techniques for malware detection in mobile environments, particularly on Android platforms, has received a lot of attention. Adversarial attack methods are also used to improve interpretability. Bose et al. [125] provided a framework that interpolates between various classes of samples at various levels, examining weights, gradients and the raw binary bytes, to understand how the MalConv architecture [126] learns and the mechanisms at work.

3.3.4 Domain-Specific Insights

AI-based cybersecurity systems carry two kinds of secondary risk. The first is the possibility of misleading negative results that lead to erroneous conclusions. The second is the chance of receiving inaccurate notifications or erroneous warnings due to false-positive results [173]. In such circumstances, it is imperative to take the required mitigating action so that violations or unusual events are handled more accurately, while keeping the decision-making process understandable and supportable [174]. Additionally, the use of AI by hackers makes it possible for them to circumvent security measures so that data tampering goes unnoticed, which makes it challenging for businesses to correct the data supplied to AI-based security systems. As a result, compared with conventional model-based algorithms, the current difficulty with AI-based systems is that they make decisions that lack reasons [167]. Hence, XAI is required to enhance trust and confidence in AI-based security systems.

In cybersecurity, XAI requires a more structured approach, utilizing various integrated techniques from diverse fields, guided by a dedicated research community focusing on increasing formalism. In particular, for areas like malware detection, there is a need for unified and clearly explained methods, ensuring a coherent understanding for all stakeholders, including users, analysts, and developers, to enhance the analysis and prevention of cyber-attacks. Moreover, there is currently no recognized system for gauging whether one XAI system is more intelligible to users than another; a well-defined evaluation system for selecting the most effective explainability technique is needed. XAI also needs to grapple with substantial security and privacy issues: current XAI models are still vulnerable to adversarial attacks, leading to public concerns about XAI security.

3.4 Finance

3.4.1 Problem Definition

In the finance domain, XAI is mainly applied in financial forecasting [175] and credit risk management [176]. Financial forecasting is the problem of predicting different financial metrics, such as profit and financial health ratios. Credit risk management is the problem of how to manage credit risks from different subjects. For instance, from a bank’s perspective, when a model predicts a client to be at high risk of default, it should give a reason that different stakeholders can understand.

3.4.2 XAI Based Proposals

XAI in finance mainly takes the form of explaining which features are more important and which are less important in the AI model. Under this form, different statistical techniques can be applied to determine feature importance.

In terms of source-oriented, most XAI models in the finance domain are objective cognitive, as they are based on facts rather than on human experience.

In terms of representation-oriented, XAI models in the finance domain vary among visualisation-based, symbolic-based and hybrid approaches. For visualisation-based explanations, some tree-based XAI models [177] can give tree-shaped visualisations. For symbolic-based explanations, SHAP and LIME can give numerical forms; in this case, XAI models aim to quantify the explanation or contribution of different features to the prediction, and symbolic-based XAI in the financial domain emphasises giving the explanation in a quantitative manner, which is applied more in credit risk management. Hybrid approaches combine symbolic-based and visualisation-based explanations. Visualisation in the representation-oriented category has also been adopted in the legal domain to present explainability [132].

3.4.3 Case Studies

Financial data forecasting: AI models are being used to predict financial figures, such as stock prices and company profits. However, sophisticated AI models can only give predictions, not show how those predictions are made. In a typical application of XAI to financial forecasting, local interpretable model-agnostic explanations (LIME) is used to explain the model predictions locally around each example. LIME is a model-agnostic XAI method that simplifies the complex model by fitting a linear regression model locally. For instance, Agarwal et al. [178] applied LIME to a stock price AI prediction model (an AdaBoost model) to explain its predictions. The AdaBoost model used the previous 20 days’ stock prices as features to predict the next day’s stock price. The output of LIME is a two-dimensional bar chart, with the Y-axis showing features and the X-axis showing the importance of each feature. In this way, the bar chart clearly shows how the AI prediction model makes its prediction (i.e., which features play dominant roles in the prediction). Similarly, the SHAP XAI model can also be used to explain financial forecasting models. The principle of SHAP is to compute, based on game theory, the contribution of each feature to the prediction. In the same work, Agarwal et al. [178] used SHAP to explain the AdaBoost and random forest-based stock price prediction models. The output of SHAP is a two-dimensional chart, with the X-axis showing the contribution of features to the model and the Y-axis showing each feature used in the prediction model.
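
The following sketch mirrors that setup in spirit, with a synthetic price series and a plain scikit-learn AdaBoost regressor standing in for the model of [178]: the previous 20 days' prices are the features, and LIME explains one local prediction by ranking which lagged days mattered most.

```python
# Sketch: LIME explanation of an AdaBoost next-day price predictor built on lag features.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1000)) + 100          # synthetic price series

window = 20
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]                                        # next-day price

model = AdaBoostRegressor(random_state=0).fit(X, y)

feature_names = [f"price_t-{window - i}" for i in range(window)]
explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
explanation = explainer.explain_instance(X[-1], model.predict, num_features=5)
print(explanation.as_list())     # top lagged days and their local importance weights
```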

Credit risk management: In addition to the financial forecasting case, XAI has also been applied to credit risk management. Credit risk exposes a financial institution to an expected financial loss. In [177], a SHAP model and a minimal spanning tree model were used to explain an extreme gradient boosting (XGBoost)-based credit default probability estimation model. The features used in the estimation model include financial information from the balance sheet. The minimal spanning tree model presents a clustering tree based on the financial data, and the SHAP model presents the contribution level of each feature. The principle of the minimal spanning tree model is to present the credit risk prediction in a cluster-based visualisation, while the principle of the SHAP model is to present explainability at the level of feature contributions.
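
A minimal sketch of the SHAP part of such a pipeline, assuming synthetic balance-sheet style features and an off-the-shelf XGBoost classifier in place of the original data and tuning of [177]:

```python
# Sketch: SHAP contribution levels for an XGBoost credit-default model.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic balance-sheet style features and default labels.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6, random_state=0)
X = pd.DataFrame(X, columns=[f"ratio_{i}" for i in range(X.shape[1])])

model = xgb.XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)   # one row of contributions per firm

# Global view: average contribution magnitude of each feature to the default score.
shap.summary_plot(shap_values, X, show=False)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
        .sort_values(ascending=False))
```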

3.4.4 Domain-Specific Insights

In terms of credit risk management, the end-users of XAI are mostly financial institutions like banks and insurance companies. XAI provides transparency and explanations for AI-driven decisions, allowing these institutions to understand and validate the factors influencing risk assessments and fraud detection. This empowers financial institutions to make more informed and accountable decisions. XAI techniques may struggle to provide clear and concise explanations for every aspect of the decision, potentially leading to incomplete or partial explanations. In addition, credit risk is a dynamic and evolving field, influenced by various economic, regulatory, and market factors, so the XAI may not be able to provide real-time explanations of the risk management decisions.

In terms of financial data forecasting, the end-users of XAI are mostly financial institutions like fund management companies and asset management companies. With explanations provided by AI models, financial professionals gain insights into the reasoning behind investment recommendations, risk assessments, and other financial decisions. This helps them validate the suggestions and communicate the rationale more effectively to their clients. One limitation of XAI techniques is that they cannot explain events that never happened in the past: when faced with new and unprecedented circumstances, the explanations provided by XAI may not adequately account for these events, leading to potentially inaccurate forecasts. There is limited current research analyzing the specific computational cost of XAI models in finance. The computational cost of XAI in the finance domain depends on the complexity of the underlying AI model, the feature dimensionality, and the hardware. If the underlying AI model is a regression or tree-based model and does not include many factors, the computational cost will be relatively low. However, if the underlying AI model is based on complex neural networks and includes a large number of factors, the computational cost will be high.

3.5 Law

3.5.1 Problem Definition

In the legal domain, one of the major concerns of XAI is whether legal decisions are made fairly towards certain individuals or groups [133], since it is a high-stakes domain and decisions have significant impacts on real human beings. The explainability of AI models provides transparency on how decisions are made. It is a desired feature for decisions, recommendations, and predictions made by AI algorithms in the legal domain. However, few works have been done in the legal domain apart from applying general XAI methods [17].

3.5.2 XAI Based Proposals

In the legal domain, information is represented as natural language texts, so the objective of XAI models is to identify the important words that contribute to the final prediction.

In terms of source-oriented, these XAI models are objective-cognitive, since the highlighted words come directly from the input data. As CNNs have proven useful for text data, general XAI methods for CNNs (e.g. Grad-CAM, LIME, SHAP) have been used to explain AI models trained on legal text [134]. To demonstrate the contribution of each word in a given sentence to the final prediction, XAI models such as LIME can, for instance, indicate the contribution of the word “awesome" to a positive sentiment prediction.
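For concreteness, the following is a minimal sketch of how a word-level LIME explanation of this kind can be produced; the tiny training corpus, labels, and pipeline are invented for illustration and do not reproduce the setup of [134].

```python
# A minimal sketch of word-level LIME attribution for text classification.
# The training sentences and labels below are invented for illustration only.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "the ruling was awesome and fair",
    "the counsel gave awesome support",
    "the judgement was terrible and biased",
    "the advice was terrible",
]
train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(
    "the service of the counsel was awesome",
    pipeline.predict_proba,   # any callable returning class probabilities
    num_features=5,           # report the five most influential words
)
print(exp.as_list())          # word -> signed contribution to the prediction
```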

In terms of logic-oriented, few XAI models in the finance domain fall into this category: finance cases typically concern the prediction of a probability or a numeric value, so the demand for explanation tends to be about which factors contribute most to the final prediction. For the legal domain, the XAI models express end-end relationships, since the explanation is expressed as the relationship between input words and output predictions.

3.5.3 Cases Studies

Legal text classification The authors of [132] provided a case study from a lawyer’s perspective, using Grad-CAM, LIME, and SHAP to explain legal text classification outcomes. The AI model consists of DistilBERT for embedding and a CNN for classification. Explainability is represented through heatmap visualisation, i.e., highlighted words whose intensities correspond to their contributions to the final prediction. In addition to two quantitative evaluation metrics, the responses of lawyers to the given explanations were also collected. The scores on visualisations for the six selected correctly classified sentences range from 4.2 to 6.66 on a scale of 0 (worst) to 10 (best). The key point made by the lawyers is that the explanations produced by XAI should be understandable to users who have no professional knowledge of the legal domain.

3.5.4 Domain-Specific Insights

Explainability is not only necessary for AI applications in law but also required by law (e.g., GDPR). In this subsection, we address the necessity of explainability in AI applications in law. Decisions made in law require explainability by nature [179], as it forms an important part of the output: all judgements need reasons. Lawyers need to explain to their clients, and judges need references to relevant articles or cases to support their decisions [132]. For AI-empowered legal consultation or recommender systems, the more important information is why a result is relevant, rather than merely listing relevant articles or similar cases. For judgement prediction, the output is only helpful to professionals such as lawyers when explainability is provided.

Despite the necessity of explainability for AI applications in law, its adoption faces challenges and difficulties. The explanation can consist of the relevant articles or similar cases but, more importantly, of the analysis linking them to the target case. The heatmap mentioned above may provide a certain extent of explainability by highlighting the key words used to make decisions. However, explainability in legal applications requires richer descriptions in natural language, as most inputs to AI systems in law are texts written in natural language. This requires a certain level of reasoning capability for the explanation to make sense to users.

Another challenge is the linking of evidence. Many legal decisions made by AI systems involve multiple parts of the input text or documents, and explanations that use only one piece of information are incomplete. The advent of large language models (LLMs), such as the GPT series, may facilitate the reasoning behind and explanation of decisions made by AI applications in law. LLMs can be instructed to provide references for the outputs they generate. This creates opportunities for the explainable use of AI in law, although these models require additional computational resources.

3.6 Education and Training

3.6.1 Problem Definition

As one of the essential methods to improve and optimise learning, AI is now widely applied in the field of educational research [180, 181]. The applications of AI in education (AIED) have shown great benefits in several ways, including instructional support, personalised learning systems, and automated assessment systems [182]. At the same time, there are risks associated with the use of AI, given the specific nature of education. For example, bias is a prominent issue in discussions on AI ethics in education. When AI techniques are used for student performance prediction or risk identification, they are likely to produce more biased results for a particular group of students based solely on demographic variables such as gender [183]. Consequently, concerns about AI in relation to fairness, accountability, transparency, and ethics (FATE) have sparked a growing debate [135].

XAI contributes to making the decision-making process of an AI model more transparent and fair by explaining the internal processes and providing a logical chain for generating outcomes. This is essential for human users to build trust and confidence in AIED. Recently, there has been a growing body of research showing the opportunities and demands of applying advanced XAI for education [135].

3.6.2 XAI Based Proposals

In terms of source-oriented, there are different objective factors that can be used to analyse and evaluate the performance of students in the education domain. At the same time, since each student comes from a different family environment and has a distinct personality, subjective perceptions and feelings in learning also have significant impacts on educational outcomes. Alonso and Casalino [136] proposed a method that uses an XAI tool, named ExpliClas, to analyse objective attributes of students’ learning processes. Their method provides both global and local explanations. There are also many studies that use objective characteristics, such as age, gender, and study hours, to predict and explain students’ performance in teaching activities [137].

In terms of representation-oriented, the XAI approaches in education mainly include visualization-based and symbolic-based explanations. In [138], a deep learning-based method was proposed to achieve the automatic classification of online discussion texts in the educational context. Particularly, the authors used gradient-based sensitivity analysis (SA) to visualise the significance of words in conversation texts for recognising the phases of cognitive presence, thus providing the explanation for the deep learning model. Recently, some researchers have also applied the symbolic approach in education, expecting to adopt symbolic knowledge extraction to provide logical reasoning for the AI interpretation. Hooshyar and Yang [139] provided a framework that integrates neural-symbolic computing to address the interpretability of AI models. The framework considered using prior knowledge, such as logic rules and knowledge graphs.

In terms of logic-oriented, XAI in education is primarily required to provide explanations for black-box machine learning models and rule-based models. SHAP was employed in [140] to explain a black-box student dropout classification model in terms of Shapley values. Explanations from rule-based algorithms specialise in showing a clear logic from the input data to the output results. For example, in [141], global explanations were provided to train nursing students by analysing the temporal logic between actions.
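As a simple illustration of how rule-style explanations expose the logic from inputs to outputs, the sketch below extracts “if-then” rules from a shallow decision tree; the student records are fabricated and this generic surrogate is not the specific approach of [140] or [141].

```python
# A generic sketch of a rule-style global explanation via a shallow decision tree.
# The student records below are fabricated for illustration only.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

X = pd.DataFrame({
    "study_hours":  [2, 10, 1, 8, 12, 3],
    "forum_posts":  [0, 5, 1, 4, 6, 0],
    "quiz_average": [40, 85, 35, 70, 90, 50],
})
y = [0, 1, 0, 1, 1, 0]  # 1 = pass, 0 = at risk of dropping out

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned tree as human-readable "if-then" rules,
# giving a global view of the logic from input features to the prediction.
print(export_text(tree, feature_names=list(X.columns)))
```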

3.6.3 Cases Studies

Feedback providing For students, getting timely and formative feedback from educators about performance and assignments is an important way of improving the efficiency and quality of learning. Feedback should include not only the student’s marks and evaluation but also, and more importantly, an explanation of the problems with the assignment and learning. XAI has been applied in this area; the relevant techniques include sequence mining, natural language processing, logic analysis, and machine learning approaches. Take writing analytics as an example, which aims to provide feedback for students to improve their writing. Knight et al. [142] introduced an open-source tool, AcaWriter, which provides informative feedback adapted to different learning contexts. AcaWriter employs Stanford CoreNLP to process each input sentence and then uses a rule-based model to extract the matched patterns. Their research demonstrates the value of XAI in education by explaining their writing to students.

Intelligent tutoring systems In addition to providing feedback to students, XAI can also help give personalised tutoring instructions based on the student’s learning activity performance. Conati et al. [143] attempted to integrate multiple machine learning algorithms into an interactive simulation environment so as to provide hints to students regarding their learning behaviours and needs. The proposed XAI mechanism consists of behaviour discovery, user classification, and hint selection. In the behaviour discovery phase, the authors first apply an unsupervised clustering algorithm to group students and then use association rule mining to analyse student behaviour further. In the user classification phase, they build a supervised classifier to predict students’ learning. In the hint selection phase, the previous classification result and association rules will be used to trigger the corresponding hints.

3.6.4 Domain-Specific Insights

The end-users of XAI in education and training mainly include students and educators. XAI can help students understand and interpret the outcomes of AI-driven systems, such as automated grading or recommendation algorithms, providing them with transparency and insights into the feedback. Additionally, XAI provides educators with a great opportunity to gain a deeper understanding of the AI-powered educational tools they employ in their classrooms. By using XAI, educators can acquire insights into the underlying reasons behind specific recommendations or suggestions generated by these systems. Consequently, they can adapt and customize their teaching strategies based on this understanding.

While XAI holds great potential for application in the field of education, there are currently challenges and limitations that need to be addressed. Many AI algorithms, such as deep neural networks, are complex and difficult to interpret. XAI techniques still struggle to provide clear and comprehensive explanations for the decisions made by these complex models, which can hinder their adoption in educational settings. There is also a lack of standardization for XAI: we need standardized metrics and a framework to evaluate and assess the explanations provided by XAI, particularly when comparing different XAI techniques and approaches. The absence of standardized practices and guidelines can lead to inconsistency and confusion in implementing XAI solutions. Addressing trade-offs is an essential step in developing machine learning models, and XAI is no exception. Finding the right balance between explainability and performance is crucial, especially in educational contexts where accurate feedback and predictions are necessary.

3.7 Civil Engineering

3.7.1 Problem Definition

AI systems used in civil engineering research have a significant impact on decision-making processes in road transport and power systems. In particular, autonomous driving in road transport and power system analysis are common areas that use deep learning techniques, such as navigation and path planning, scene recognition, lane and obstacle detection, as well as planning, monitoring, and controlling the power system [150, 184].

In the field of autonomous driving, deep learning techniques are normally utilised to recognise scenes from digital images [184, 185], while in the field of power system analysis, they are used to extract features from the underlying data for power system management, such as power grid synthesis, state estimation, and photovoltaic (PV) power prediction [150, 186]. These deep learning techniques automatically extract abstract image features or deep non-linear features of the underlying data through end-to-end predictive processing, which is not sufficient to provide the evidence needed to trust and accept the results of autonomous driving and power system management. For example, traffic light and signal recognition is used for driving planning, in which recognising the traffic lights at crosswalks and intersections is essential for following traffic rules and preventing accidents. Deep learning methods have achieved prominence in traffic sign and light recognition, but it is hard for them to explain the correlation between inputs and outputs, and they lack explanations to support reasoning in driving planning studies [187]. In power system management, deep learning methods may produce misleading power stability outputs and unreliable recommendations, so explanations can increase user trust [150].

3.7.2 XAI Based Proposals

XAI can improve the management of autonomous driving and power systems, providing effective interaction to promote smart civil engineering. Interpretation research on deep learning in autonomous driving and power systems is a representative case of interpretable deep learning, because the models are not only influenced by data but also relate to expert knowledge and ethical principles.

In terms of source-oriented, objective interpretability obtains visible or measurable results from 2D and 3D images or underlying datasets, while subjective interpretability requires consideration of the knowledge of automotive or electrical experts and the ethical standards of their fields. Current XAI proposals include both objective and subjective cognitive aspects. For example, CAM, as an objective-cognition method, is used to highlight important regions in 2D or 3D images, while time series, 2D images, 3D images, Lidar images, knowledge databases, and ethical criteria are utilised as sources to explain the model [147, 185, 187].

In terms of representation-oriented, visual interpretation provides the highest-level semantics for understanding which parts of an image impact the model, emphasising the visual structure of the data and the model; it is the primary XAI method used in autonomous driving. These XAI methods can be divided into gradient-based and backpropagation-based methods. Gradient-based interpretation methods include CAM and its enhanced variants such as Guided Grad-CAM, Grad-CAM, Grad-CAM++ and Smooth Grad-CAM++. CAM can highlight the discriminative regions of a scene image used for scene detection [147]. Backpropagation-based methods include guided backpropagation, layer-wise relevance propagation, VisualBackProp and DeepLIFT. VisualBackProp shows which sets of input pixels contribute to the steering of self-driving cars [144]. Symbolic interpretation uses understandable language to provide evidence for recommended results in autonomous driving and power system management. In autonomous driving, the proposed AI methods make decisions according to traffic rules, for example, “the traffic light ahead turned red,” thus “the car stopped” [185]. In power system management, data gathered from occupant actions for resources such as room lighting is used to forecast patterns of energy resource usage [188]. Hybrid interpretation combines visual and symbolic interpretation to support steering decisions in autonomous driving. For example, Berkeley Deep Drive-X (BDD-X) was introduced in autonomous driving and includes descriptions of driving pictures and annotations for textual interpretation [49].
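To make the gradient-based family concrete, the following is a minimal Grad-CAM sketch using a generic torchvision classifier and PyTorch hooks; the backbone, target layer, and random input standing in for a preprocessed road-scene image are illustrative assumptions, not the setup of any specific system cited above.

```python
# A minimal Grad-CAM sketch with PyTorch hooks (illustrative assumptions only).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # in practice, load pretrained/task weights
acts, grads = {}, {}

# Capture activations and gradients of the last convolutional block.
model.layer4.register_forward_hook(lambda m, i, o: acts.update(value=o.detach()))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(value=go[0].detach()))

def grad_cam(x):
    scores = model(x)                           # class logits, shape (1, 1000)
    top = scores.argmax(dim=1).item()
    scores[0, top].backward()                   # gradients of the top-class score
    weights = grads["value"].mean(dim=(2, 3), keepdim=True)  # GAP of gradients
    cam = F.relu((weights * acts["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # stand-in for a scene image
```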

In terms of logic-oriented, end-end explanations are used to explain the relationship between input images, including obstacle and scene images, and the prediction. For example, LIME is utilised to explain the relationship between input radar images and prediction results [189]. Middle-end explanations reveal the reasons behind an autoencoder-based assessment model and how they can help drivers reach a better understanding of, and trust in, the model and its results; for example, a rule-based local surrogate interpretable method, MuRLoS, focuses on the interaction between features [149]. Correlation explanations are used in the risk management of self-driving and power systems; for example, SHAP is used to assess and explain collision risk using real-world driving data for self-driving [190].

3.7.3 Cases Studies

Decisive vehicle actions Decisive vehicle actions in autonomous driving are based on multiple tasks, such as scene recognition, obstacle detection, lane recognition, and path planning. Attention mechanisms, heat maps, diagnostic models and textual descriptions can be used to recognise obstacles, scenes and lanes and to steer the car’s operation [147, 185, 187]. As mentioned before, CAM is used to highlight the main area for recognition [63]. VisualBackProp, unlike CAM-based methods, emphasises pixel-level highlighting to filter features of scene images [144]. Grad-CAM is combined with existing fine-grained visualisations to provide a high-resolution, class-discriminative visualisation [36]. Visual attention heat maps are used to explain the vehicle controller’s behaviour by segmenting and filtering simpler and more accurate maps without degrading control accuracy [145]. A neural motion planner uses 3D detection instances with descriptive information for safe driving [146]. An interpretable tree-based representation, as a hybrid presentation, combines rules, actions, and observations to generate multiple explanations for self-driving [147]. An architecture for joint scene prediction is used to explain object-induced actions [149]. An auto-discern system utilises observations of the surroundings and common-sense reasoning to provide answers for driving decisions [148].

Power system management Power system management normally consists of stability assessment, emergency control, power quality disturbance, and energy forecasting. A CNN classifier, combined with non-intrusive load monitoring (NILM), is utilised to estimate the activation state and provide feedback for the consumer-user [150]. The SHAP method was first used in emergency control for reinforcement learning for grid control (RLGC) under three different output analyses [151]. Deep-SHAP is proposed for the under-voltage load shedding of power systems; it adds feature classification of the inputs and probabilistic analysis of the outputs to increase clarity [152].

3.7.4 Domain-Specific Insights

In terms of transportation systems, operators such as drivers and passengers are the primary end-users in scenarios involving decisive vehicle actions, because they may want to comprehend the reasoning behind the decisions made by the autonomous system. This is particularly important in high-stakes domains where human lives are at risk. XAI can provide explanations for AI decisions, making the system more transparent and fostering trust. Real-time explanation poses a significant challenge for XAI in decisive vehicle actions, because decisions need to be made within fractions of a second. Rapidly changing environments, such as weather conditions, pedestrian movement and the actions of other vehicles, mean that XAI should ideally produce quick and accurate explanations. Moreover, every driving situation can be unique, so XAI needs to suit diverse situations and adapt its explanations in a context-aware manner. As previously mentioned, XAI demands more computational resources because real-time explanations require timely responses. Furthermore, decisive vehicle actions rely on high-dimensional sensor data, such as inputs from LiDAR and stereo cameras, which makes methods like LIME and SHAP, which approximate local decision boundaries, computationally expensive, especially for high-dimensional inputs. The requirement is therefore for XAI that can generate real-time, informative explanations without overburdening the computational resources of the system.

In terms of infrastructure system management, such as power or water system management, the general public, including governments and residents, are the key end-users. Government bodies want to oversee the safe and fair use of AI in power system management, while residents may be curious about the mechanics of the AI used to manage power systems in their city. XAI can be used to evaluate AI systems for safety, fairness, transparency, and adherence to regulatory requirements. Interpretation complexity is a primary challenge for XAI in infrastructure system management due to the multidimensional nature of the data, which includes factors from power generators, transmission lines, and power consumers. Moreover, unlike the case of autonomous driving, power system operations demand more technical expertise and must adhere to various regulatory requirements. Consequently, XAI has not only to provide coherent and insightful interpretations of the system’s operations but also to demonstrate that these operations comply with all relevant regulations. The entire process in infrastructure system management spans from generation and distribution to monitoring consumer usage patterns. The complexity is further amplified by the demands of load balancing and power outages, which influence public life and city operation. Moreover, the system also needs to satisfy various regulations and standards. To evidence such compliance, XAI may need to generate more complex or detailed explanations, thus increasing the computational cost.

3.8 Cross-Disciplinary Techniques for XAI Innovations

Cross-disciplinary XAI innovation refers to advancements and developments in explainable AI (XAI) that span multiple domains and disciplines. It involves the integration and adaptation of XAI techniques and methodologies to address complex problems and challenges that arise in diverse fields.

One aspect of cross-disciplinary XAI innovation is the exploration and utilization of common XAI techniques across different domains. These techniques, such as attention-based models, model-agnostic methods, and rule-based methods, can be applied to various fields to provide transparent and interpretable explanations for AI models. Below are some examples of common XAI techniques:

  1. Regression-based partitioned methods: these can be applied to any black-box model. For example, LIME approximates the decision boundaries of the model locally and generates explanations by highlighting the features that contribute most to the prediction for a specific instance. LIME can be used in domains such as healthcare, cyber security, finance, or education to provide instance-level interpretability and explainability. SHAP is another common technique based on cooperative game theory, which can be applied to different domains to explain the importance of features in the decision-making process. For example, in medical diagnostics, SHAP can help understand which medical parameters or biomarkers have the most impact on a particular diagnosis.

  2. Feature importance: feature importance techniques assess the relevance and contribution of each feature to the model’s predictions. Methods like permutation importance, Gini importance, or gain-based importance are commonly used. Feature importance can be useful in various domains to identify the factors that drive specific outcomes or decisions. For instance, in finance, feature importance can help understand which financial indicators or market factors play a crucial role in investment decisions (see the sketch following this list).

  3. Partial dependence plots: partial dependence plots visualize the relationship between a feature and the model’s output while holding other features constant. These plots show how changing the value of a specific feature affects the model’s predictions. Partial dependence plots can be employed in domains such as healthcare, where they can provide insights into the impact of certain medical treatments or interventions on patient outcomes (see the sketch following this list).

  4. Rule-based models: rule-based models provide transparent and interpretable decision-making processes by expressing decision rules in the form of “if-then” statements. These models can be used in various domains to generate explanations that are easily understandable by humans. In legal applications, rule-based models can help explain legal reasoning by mapping legal principles and regulations to decision rules.
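As referenced in items 2 and 3 above, the sketch below shows how permutation importance and a partial dependence plot can be produced with scikit-learn for a generic tabular model; the synthetic dataset and feature names are purely illustrative assumptions.

```python
# A sketch of permutation importance and partial dependence with scikit-learn.
# The synthetic tabular dataset and feature names are purely illustrative.
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

features, labels = make_classification(n_samples=500, n_features=4, random_state=0)
X = pd.DataFrame(features, columns=["income", "debt_ratio", "age", "tenure"])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: drop in held-out score when one feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_drop in zip(X.columns, result.importances_mean):
    print(f"{name}: {mean_drop:.3f}")

# Partial dependence: average prediction as a single feature is varied.
PartialDependenceDisplay.from_estimator(model, X_test, features=["debt_ratio"])
plt.show()
```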

These are just a few examples of common XAI techniques that can be applied across different domains. The choice of technique depends on the specific requirements and characteristics of each domain; Table 5 summarises some typical suitable XAI approaches for each domain. By leveraging these techniques, domain experts and practitioners can gain insights into the inner workings of AI models and make informed decisions based on understandable and interpretable explanations.

Another aspect of cross-disciplinary XAI innovation involves the development of domain-specific XAI approaches. In Table 5, we summarize some typical suitable XAI approaches for different domains. These approaches can be tailored to the unique characteristics and requirements of specific domains, taking into account the specific challenges and complexities of each field. Domain-specific XAI approaches consider various factors, including domain knowledge, regulations, and ethical considerations, to create an XAI framework that is specifically designed for a particular domain. By incorporating domain expertise and contextual information, these approaches provide explanations that are not only interpretable but also relevant and meaningful within their respective domains.

By tailoring XAI approaches to specific domains, practitioners can gain deeper insights into the behavior of AI models within the context of their field. This not only enhances transparency and trust in AI systems but also enables domain-specific considerations to be incorporated into the decision-making process, ensuring the explanations are relevant and aligned with the requirements and constraints of each domain.

Table 5 XAI suitability analysis for application domains

Furthermore, cross-disciplinary XAI innovation emphasizes the importance of collaboration and the integration of expertise from different fields. This approach recognizes that the challenges and complexities of XAI extend beyond individual domains and require a multidisciplinary perspective. Collaboration and integration of expertise enable a holistic approach to XAI, where insights from different disciplines can inform the development of innovative and effective solutions. For example, in the field of healthcare, collaboration between medical practitioners, data scientists, and AI researchers can lead to the development of XAI techniques that not only provide interpretable explanations but also align with medical guidelines and regulations. This integration of expertise ensures that the explanations generated by XAI systems are not only technically sound but also relevant and meaningful in the specific healthcare context.

Similarly, in the domain of cybersecurity, collaboration between cybersecurity experts, AI specialists, and legal professionals can lead to the development of XAI techniques that address the unique challenges of cybersecurity threats. By combining knowledge from these different fields, XAI systems can provide interpretable explanations that enhance the understanding of AI-based security measures, assist in identifying vulnerabilities, and facilitate decision-making processes for cybersecurity professionals.

The collaboration and integration of expertise from different fields also foster a cross-pollination of ideas and perspectives, driving innovation and the development of novel XAI techniques. By leveraging the diverse knowledge and experiences of experts from various domains, XAI can evolve and adapt to meet the evolving needs and challenges of different industries and societal contexts.

4 Discussion

Given the growing concerns about explainability and the attention paid to XAI, regulations such as GDPR set out transparency rules for data processing. As most modern AI systems are data-driven, these requirements apply to all application domains. Not only is explainability necessary, but the way of explaining is also subject to requirements.

In this section, we will summarize the limitations of existing XAI approaches based on the above review in each application domain, and identify future research directions.

4.1 Limitations

Adaptive integration and explanation: many existing approaches provide explanations in a generic manner, without considering the diverse backgrounds (culture, context, etc.) and knowledge levels of users. This one-size-fits-all approach can lead to challenges in effective comprehension for both novice and expert users. Novice users may struggle to understand complex technical explanations, while expert users may find oversimplified explanations lacking in depth. These limitations hinder the ability of XAI techniques to cater to users with different levels of expertise and may impact the overall trust and usability of the system. Furthermore, the evaluation and assessment of XAI techniques often prioritize objective metrics, such as fidelity or faithfulness, which measure how well the explanations align with the model’s internal workings. While these metrics are important for evaluating the accuracy of the explanations, they may not capture the subjective aspects of user understanding and interpretation. The perceived quality of explanations can vary among users with different expertise levels, as well as under different situations or conditions.

Interactive explanation: in the current landscape of XAI research, there is recognition that a single explanation may not be sufficient to address all user concerns and questions in decision-making scenarios. As a result, the focus has shifted towards developing interactive explanations that allow for a dynamic and iterative process. However, there are challenges that need to be addressed in order to effectively implement interactive explanation systems. One of the key challenges is the ability to handle a wide range of user queries and adapt the explanations accordingly. Users may have diverse information needs and may require explanations that go beyond superficial or generic responses. In particular, addressing queries that involve deep domain knowledge or intricate reasoning processes can be complex and requires sophisticated techniques. Another challenge is striking a balance between providing timely responses to user queries and maintaining computational efficiency. Interactive explanation systems need to respond quickly to user interactions to facilitate a smooth and engaging user experience. However, generating accurate and informative explanations within a short response time can be demanding, and trade-offs may need to be made depending on the specific domain and computational resources available. Moreover, the design and implementation of interactive explanation systems should also consider the context and domain-specific requirements. Different domains may have unique challenges and constraints that need to be taken into account when developing interactive explanations. It is important to ensure that the interactive explanation systems are tailored to the specific domain and can effectively address the needs of users in that context.

Connection and consistency in hybrid explanation: in the context of hybrid explanations in XAI, it is crucial to ensure connection and consistency among different sources of explanations. Hybrid approaches aim to leverage multiple techniques to provide users in various domains with different application purposes, achieving robustness and interpretability. However, it is necessary to address potential conflicts and ensure coordinated integration of different components within these hybrid systems. Currently, many works focus on combining various explanation techniques to complement each other and enhance overall system performance. While this integration is valuable, it is important to acknowledge that different techniques may have inherent differences in their assumptions, methodologies, and outputs. These differences can result in conflicts or inconsistencies when combined within a hybrid explanation system. Therefore, careful attention should be given to the design of complex hybrid explanation systems. The structure and architecture need to be thoughtfully planned to ensure seamless connections between components. This involves identifying potential conflicts early on and developing strategies to resolve them. Additionally, efforts should be made to establish a unified framework that allows for effective coordination and integration of the different techniques used in the hybrid system. Furthermore, the evaluation and validation of hybrid explanation systems should include assessing the consistency of explanations provided by different sources. This evaluation process helps identify any discrepancies or inconsistencies and guides the refinement of the system to ensure a coherent and unified user experience.

Balancing model interpretability with predictive accuracy: currently, researchers are developing hybrid approaches that aim to strike a better balance between interpretability and accuracy, such as using post-hoc interpretability techniques with complex models or designing new model architectures that inherently provide both interpretability and high accuracy. However, these approaches also come with their own limitations. Post-hoc interpretability techniques generate explanations after the model has made its predictions, which means they do not directly influence the model’s decision-making process. As a result, the explanations may not capture the full complexity and nuances of the model’s internal workings. Furthermore, post-hoc techniques can be computationally expensive and may not scale well to large datasets or complex models with high-dimensional inputs. Newly designed architectures, such as rule-based models or attention mechanisms in neural networks, may struggle to capture complex interactions and may require a significant amount of manual rule engineering. It is crucial to recognize that there is no universal solution to the interpretability-accuracy trade-off. The choice of approach depends on the specific requirements of the application, available resources, and acceptable trade-offs in the given context. Researchers and practitioners must carefully consider the limitations and benefits of different techniques to strike an appropriate balance based on their specific use cases.

Long-term usability and maintainability: the current XAI methods face several limitations when deployed in real-world scenarios. One significant limitation is the need for continuous explanation updates. XAI systems generate explanations based on training data, and as the underlying AI models or data evolve, the explanations may become outdated or less accurate. To ensure relevance and usefulness, XAI systems should be designed to incorporate mechanisms for updating explanations to reflect the latest model updates or data changes. Another limitation is the assumption of stationary data distributions. XAI methods are typically trained on historical data, assuming that the future data will follow a similar distribution. However, if the data distribution changes over time, the performance of the XAI system may deteriorate. Adapting XAI methods to handle shifting data distributions is essential for maintaining their effectiveness and ensuring reliable explanations in dynamic environments. Scalability is another crucial consideration, particularly for large-scale AI systems. XAI techniques that work well on small-scale or controlled datasets may face challenges when applied to large-scale AI systems with complex models and massive amounts of data. Efficient algorithms and sufficient computational resources are necessary to handle the increased computational demands of explaining large-scale AI systems without sacrificing performance or usability.

4.2 Future Directions

To address the first limitation, building context-aware XAI is important. We need to explore how to generate explanations by considering mission contexts (surrounding environment, situations, time-series datasets, etc.), mapping user roles (end-user, domain expert, business manager, AI developer, etc.), and targeting goals (refining the model, debugging system errors, detecting bias, understanding the AI learning process, etc.), regardless of the type of AI system. So far, most of these studies are still conceptual with limited consideration; more general context-driven systems and practical implementations will be an important direction for future research.

Secondly, interactive explanations (e.g., conversational system interfaces, games, and the use of audio, visuals, and video) should be explored further. This is a promising approach to building truly human-centred explanations by identifying users’ requirements and providing better human-AI collaboration. Incorporating such theories and frameworks allows an iterative process with humans, which is a crucial aspect of building successful XAI systems.

Finally, hybrid explanation should be pursued with attention to fusing heterogeneous knowledge from different sources and to managing time-sensitive data, inconsistency, uncertainty, etc. Under these conditions, hybrid explanation has become an interesting and growing topic in recent years. This will also involve a wide range of criteria and strategies targeting a clear structure and consensus on what constitutes successful and trustworthy explanations.

5 Conclusion

This paper addresses a wide range of explainable AI topics. XAI is a rapidly growing field of research, as it fills a gap in current AI approaches, allowing people to better understand AI models and therefore trust their outputs. By summarising the current literature, we have proposed a new taxonomy for XAI from the human perspective. The taxonomy considers source-oriented, representation-oriented, and logic-oriented perspectives.

Importantly, we have elaborated on the applications of XAI in multiple areas, including medicine and healthcare, cybersecurity, finance, law, education and training, and civil engineering. We provide a comprehensive review of different XAI approaches and identify the key techniques in case studies. Finally, we discuss the limitations of existing XAI methods and present several corresponding areas for further research: (1) context-aware XAI, (2) interactive explanations, and (3) hybrid explanations.

Overall, this paper provides a clear survey of the current XAI research and application status from the human perspective. We hope this article will provide a valuable reference for XAI-related researchers and practitioners. We believe XAI will build a bridge of trust between humans and AI.