Human-centric and semantics-based explainable event detection: a survey

In recent years, there has been a surge of interest in Artificial Intelligence (AI) systems that can provide human-centric explanations for decisions or predictions. No matter how good and efficient an AI model is, users and practitioners find it difficult to trust it if they cannot understand the model or its behaviour. Incorporating human-centric explainability into event detection systems is significant for building a more trustworthy and sustainable decision-making process. Human-centric and semantics-based explainable event detection can deliver the trustworthiness, explainability, and reliability that are currently lacking in AI systems. This paper provides a survey of human-centric explainable AI, explainable event detection, and semantics-based explainable event detection by answering research questions concerning the characteristics of human-centric explanations, the state of explainable AI, methods for human-centric explanations, the essence of human-centricity in explainable event detection, research efforts towards explainable event solutions, and the benefits of integrating semantics into explainable event detection. The findings of the survey show the current state of human-centric explainability, the potential of integrating semantics into explainable AI, the open problems, and future directions that can guide researchers in the explainable AI domain.


Introduction
Event detection requires automatically answering questions pertaining to events, such as when, where, what, and by whom (Panagiotou et al. 2016). Some of the known applications of event detection from social media include identification of first stories/breaking news, anomaly detection (online abuse, online bullying, hate speech, fake news), emergency detection, security intelligence alerts (crime, terrorism, etc.), and event summarisation (profiling of upcoming events that pertain to a place within a specific time window) (Giatrakos et al. 2017; Sreenivasulu and Sridevi 2018; Win and Aung 2018). The literature has shown a series of event detection models, but fewer attempts have been made to provide human-centric explainable event detection (Evans et al. 2022; Khan et al. 2021). An explainable event contains information that embraces the 5W1H dimensions (who did what, when, where, why, and how) (Chen and Li 2019). An event is considered complete only when all the components of 5W1H can be deduced and answers are provided to who did what, where, when, why, and how (Miller 2019). An explainable event detection system can provide information that covers the 5W1H dimensions of an event in a human-comprehensible way (Chakman et al. 2020). Explainable event detection can be realised through the integration of Explainable AI (XAI) methods and semantics for event detection.
XAI provides relevant explanations that justify the decisions or predictions made by AI systems (Gunning et al. 2019; Hall et al. 2022). More precisely, XAI is a developing area of AI research that supports a collection of instruments, methods, and algorithms that can produce more interpretable, intuitive, and human-comprehensible justifications for AI actions (Das and Rad 2020). XAI has been identified as essential for adopting AI solutions in several real-world domains, including security, healthcare, business, and commerce (Arya et al. 2020). The transparency and explainability of AI results are as important as their accuracy; hence, XAI has received considerable attention recently (Shin 2021). Although the potential of XAI to facilitate explainable event detection has been highlighted in the literature, it is still incapable of facilitating human-centric explanations (Khan et al. 2021).
The synergistic integration of XAI and semantic technologies has been identified as one of the most efficient ways to improve the explainability of AI and machine learning (ML) systems (Pesquita 2021). Studies have shown that human comprehensibility of the results of ML systems can be significantly enhanced by introducing additional knowledge, such as domain knowledge, common-sense knowledge, and case-based knowledge. This type of additional knowledge can be found in external sources such as ontologies, knowledge graphs, and open data/knowledge sources (Ammar and Shaban-Nejad 2020; Confalonieri et al. 2021; Donadello and Dragoni 2021; Ribeiro and Leite 2021). Thus, explainable event detection can be realised by integrating ML and semantic technologies. Integrating semantics into XAI can make human-centric explanations more understandable (Pesquita 2021) and go a long way towards capturing the 5W1H dimensions required for a human-comprehensible explanation. A human-centric explanation is adaptive to the user's context, understandable, appealing, and gives a basis for trust through provenance (Li et al. 2022). As in other critical domains, the advent of human-centric explainable event detection will increase the uptake of event detection solutions by news agencies, security organisations, health and emergency units, and other relevant public organisations (Bhatt et al. 2020).
Many review/survey papers on XAI in various fields exist in the literature (Alicioglu and Sun 2022; Chaddad et al. 2023). However, none has focused on the application of XAI to event detection. In addition, none of the currently available event detection systems has captured the six dimensions of 5W1H to provide explanations while, at the same time, emphasising human-centricity. Thus, an understanding of the approaches, methods, and efforts made so far in the area of XAI for event detection is needed. In this paper, we present a review of human-centric and semantics-based event detection. As a contribution, this paper reveals the state of the art, challenges, and the way forward regarding human-centric explainable AI and semantics-based event detection, which is currently lacking in the literature.
The remaining parts of the paper are arranged as follows. The related work is presented in Sect. 2, while the survey's methodology is described in Sect. 3. Section 4 presents the answers to the survey's research questions, while the survey findings are discussed in Sect. 5. In Sect. 6, future research directions and open issues are discussed. The paper's conclusion is presented in Sect. 7.

Related work
This section summarises previous review papers on explainable AI (XAI).
Many researchers have conducted survey studies on XAI approaches. Islam et al. (2021) surveyed explainable AI approaches; the authors showcased and analysed popular XAI methods to provide meaningful insights into quantifying explainability and offered recommendations towards human-centred AI. Similarly, Chaddad et al. (2023) surveyed explainable AI techniques focusing on healthcare and related medical imaging applications, providing a summary and categorisation of XAI types, algorithms, and challenging XAI problems. Guidotti et al. (2018) examined ways to explain black box models and provided a classification of the problems addressed in the literature concerning the explainability of black box models. Adadi and Berrada (2018) surveyed the existing XAI approaches for black-box models and presented the trends surrounding XAI's sphere and future research directions.
A summary of recent developments in explainable AI concerning supervised learning techniques and their connection with artificial intelligence, along with future research directions, was provided by Dosilovic et al. (2018). For XAI methods, Alicioglu and Sun (2022) presented a survey of visual analytics; the survey discussed visual analytics for XAI methods that can better interpret neural networks, covering the current state, obstacles, and potential future paths. Saeed and Omlin (2023) conducted a meta-survey on XAI's challenges and research directions, focusing on the general difficulties and future directions of AI and XAI research across the machine learning life cycle.
While many research efforts have focused on XAI techniques only, some surveys incorporated human-centricity in explaining AI. Liao and Varshney (2022) conducted a survey on human-centered explainable AI that focused on algorithms for user experience; the survey looked at human-centered approaches for designing and evaluating explainable AI and at providing conceptual and methodological tools for it. Ehsan and Riedl (2020) investigated the perception of non-expert users of automatically generated rationales behind AI systems, focusing on human-centricity, confidence, understandability, and adequate justification. Rong et al. (2022) conducted a survey which provided a foundational overview of user studies in XAI, a summary of XAI design details, an overview of current XAI technology, and potential paradigms for AI systems in understanding human context. Damfeh et al. (2022) examined various theoretical principles and paradigms to investigate human-centered AI concepts; according to the authors, there is an inherent need to strike a balance between increasing XAI systems and human involvement.
Our survey does not focus only on XAI techniques, human-centered XAI, and explainable event detection; we further explore how to integrate semantics into XAI systems. Semantics-based XAI can make human-centric explanations more understandable. Our survey covers three main aspects: human-centric explanations, explainable event detection, and semantics-based explainable event detection. The taxonomy of our survey is presented in Fig. 1, while the comparison of the related work is provided in Table 1.

Methodology
This paper focuses on three main aspects: human-centric explanations, explainable event detection, and semantics-based explainable event detection. We gathered research papers on these topics to realise the study's objectives. The answers to the research questions posed are presented as results in Sect. 4. The research questions used in this survey are presented subsequently.

Research questions
Using the three main aspects of the survey, we asked some research questions, which are presented subsequently.

Results
This section presents the answers to the research questions in Sects. 3.1.1-3.1.3.

Human-centric explanations
Artificial intelligence methods are becoming harder for users to explain due to their complexity (Sejr and Schneider-Kamp 2021). AI systems increasingly mediate our lives algorithmically, and their application has been extended to critical domains such as criminal justice, healthcare, automated driving, finance, and more (Ehsan et al. 2021). AI technology has great potential to provide professionals with results, building the capacity to enhance decision-making (Alsagheer et al. 2021). Despite the swift achievements of AI, the absence of a human-centered approach and the lack of explainability for practitioners while developing AI systems have remained obstacles to AI adoption. As a result, only a small portion of these achievements has been transferred from the laboratory to practice (Abdul et al. 2018; Arya et al. 2020). Take, for instance, autonomous drones that assist farmers: farmers must know when, where, and why a drone decides to spray pesticides or water. Few human-centric applications and studies have been conducted despite the cross-disciplinary challenge of building XAI (Evans et al. 2022). In contrast to real-world scenarios, AI solutions are being implemented in controlled settings (Okolo 2022). There is a need for AI systems to make the mechanics underlying their decisions comprehensible to the affected humans. XAI can speed up the adoption of AI solutions because crucial transparency and trust with potential users are fostered (Adadi and Berrada 2018).
The most commonly identified value propositions of model explanation attributed to different stakeholders, covering decision makers, end-users, and researchers, amongst others (Arrieta et al. 2020; Bhatt et al. 2020), are the following.

Trust and confidence

It is difficult for a user who does not understand how models are trained or evaluated to be confident about the model's predictions. Non-technical users can only develop trust if they can comprehend the model and recognise patterns from their own domain.

Transferability
Technical users should be aware of the environment(s) in which a model has been tested and the patterns the model recognises to make predictions. Such knowledge lets the technical user determine the usable model settings. With trust and transferability, it is possible to identify the primary target of the model.

Informativeness and causality
Explanations can benefit the user's comprehension of the underlying data and decision-making ability. Explanations, coupled with existing domain knowledge, help users understand the causal effects in the domain.

Fair and ethical decision making
Identifying who to hold responsible for a model's decisions is challenging with black box models, because the internal rules of the black box are not visible even to the developer who wrote the code that generated the model. Understanding the model's prediction patterns helps decision-makers determine whether such decisions are ethical and fair.
Model debugging, adjustment, and monitoring

Explanations are useful to end-users and data scientists alike. Comprehension of the model helps data scientists explain its performance level and how to improve it. An explanation can also help the user tune and monitor the deployed model to ensure consistent performance (Burkart and Huber 2021; Sejr and Schneider-Kamp 2021).

Explainable AI
Researchers often confuse explainability with terms such as intelligibility, transparency, comprehensibility, and interpretability, and scholars often disagree on the scope and intersection of these terminologies. An important distinction between explainability and interpretability is that an explanation does not generally elucidate how a model works but provides users and practitioners with useful information in an accessible manner (Ehsan and Riedl 2020). While both interpretability and explainability have human-centric properties, interpretability describes how a model works, whereas explainability concerns what, why, and how the output/decision of the model is made (Hall et al. 2022). Roscher et al. (2020) made a clear distinction between interpretability, explainability, and transparency: transparency takes into account the AI/machine learning strategy, interpretability considers the AI alongside the data, and explainability takes into account the model, the data, and human involvement. Model transparency, design transparency, and algorithm transparency are all subcategories of transparency, which refers to the process of building models. Model transparency refers to the transparency of the model structure, such as the number of layers, the activation function, splitting criteria, and the decision trees in a random forest. Design transparency relates to understandable, replicable, and well-motivated choices made during the AI algorithm's construction. Algorithm transparency relates to the uniqueness of the final solution. Interpretability focuses on the user's comprehension of the AI model. According to Montavon et al.
(2018), interpretability converts an abstract concept, like a predicted class, into an easily comprehensible domain. Interpretability methods reveal the crucial characteristics responsible for a model's prediction. Explainability combines interpretation with additional contextual information, addressing the what, how, why, and causality questions (Miller 2019). One broad definition of explainable AI is a comprehensive comprehension of AI systems and their actions (Vaughan and Wallach 2020). According to Gunning et al. (2019), the true goal of XAI is to ensure that end users can understand the results, which will help them make better decisions. Given this definition of explainability, it can be inferred that existing research has focused on interpretability, and much work is still needed to achieve explainable AI models.
Users must occupy a central place, and development must go beyond technology to comprehend contextual usage when building AI systems (Inkpen et al. 2019). Keeping a human in the loop enables the creation of dynamic AI solutions (Syed et al. 2020). In addition, users must know the capacity of AI systems: what they can and cannot accomplish, the data on which they were trained, and what they have been optimised for (Ontika et al. 2022). Human values such as responsibility, transparency, trustworthiness, and fairness must be integrated into the design of AI solutions (Friedman and Hendry 2019). When humans are engaged in the design process of AI systems, the resulting systems will be safe, useful, ethical, reliable, fair, and adaptable (Bond et al. 2019; Liao and Varshney 2022).

Explainable AI techniques
XAI techniques fall into two main categories: ante-hoc and post-hoc. Ante-hoc explainability incorporates the explanation into the model itself, while post-hoc explainability attempts to generate explanations of the model's results. Examples of ante-hoc explainable models include fuzzy inference systems, decision trees, and linear regression. Post-hoc explainability is usually applied to black box models such as neural networks (Guidotti et al. 2018). The choice between the two techniques is a trade-off between explainability and performance; unfortunately, the highest-performing AI models are the least explainable, and vice versa (Kelly et al. 2019). Guidotti et al. (2018) classify post-hoc approaches by the explanation problem they address; prominent feature-attribution examples include Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) (Lundberg and Lee 2017; Mangalathu et al. 2020). For deep neural network explanations, additional mechanisms include propagation, gradient (Linardatos et al. 2021), and occlusion (Kakogeorgiou and Karantzalos 2021). Counterfactual inspection provides an understanding of model behaviour under alternative inputs; techniques such as the Partial Dependence Plot (PDP) (Szepannek and Lubke 2022) and Individual Conditional Expectation (ICE) (Rai 2020) can be used for this purpose. While any machine learning model's explainability can be addressed with LIME and SHAP, methods such as Deep Learning Important Features (DeepLIFT) (Shrikumar et al. 2017) and Gradient-weighted Class Activation Mapping (Grad-CAM) (Selvaraju et al. 2017) are used specifically for deep learning models.
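As a concrete illustration of the occlusion idea mentioned above, a perturbation-based attribution can be sketched in a few lines: each feature is hidden (here, zeroed) in turn, and the drop in the model's score is taken as that feature's importance. The `score` function below is a toy stand-in for a trained model, not any specific library's API; tools such as LIME and SHAP use more principled perturbation and weighting schemes.

```python
# Occlusion-style attribution (a minimal sketch): the importance of a
# feature is the drop in the model's score when that feature is hidden.

def score(features):
    # Toy stand-in for a trained model: a fixed linear scorer.
    weights = [0.6, 0.1, 0.3]
    return sum(w * f for w, f in zip(weights, features))

def occlusion_importance(features, baseline=0.0):
    full = score(features)
    importances = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = baseline          # hide one feature at a time
        importances.append(full - score(occluded))
    return importances

x = [1.0, 1.0, 1.0]
print(occlusion_importance(x))  # each entry approximates that feature's weight
```

For a linear scorer the recovered importances simply track the weights; for a black box model the same loop probes which inputs the prediction actually depends on.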
Explainability is now more than just an attempt to understand models; it is a crucial requirement for people to trust and use AI solutions in various fields (Liao and Varshney 2022). Value Sensitive Design (VSD) is an approach to XAI which assumes that designers should design all technologies principally and comprehensively, accounting for human values throughout the design process (Friedman and Hendry 2019). It has been argued that VSD will influence how solutions are designed in the future (Friedman et al. 2017; Umbrello and de Bellis 2018). A thorough understanding of XAI and hands-on expertise in XAI techniques are needed to make informed decisions (Gill et al. 2022). Figure 2 provides an overview of XAI approaches, and the description of the XAI approaches is presented in Table 2.

Comparison of XAI techniques
A human-centric explanation is key when measuring or comparing XAI tools and techniques. Future AI must exhibit properties such as transparency, accountability, fairness, performance, trustworthiness, and causality (Shin 2021). Such properties will promote human comprehensibility and the operability of specific applications. Supporting wider acceptance requires explainability, which encompasses more than interpretability (Hall et al. 2022).
Much research has been done to provide explainability, especially for black box models. Some of the existing explainable models include Local Interpretable Model-Agnostic Explanations (LIME), GraphLIME, Deep Taylor Decomposition (DTD), Anchors, SHapley Additive exPlanations (SHAP), Prediction Difference Analysis (PDA), Layer-wise Relevance Propagation (LRP), Asymmetric Shapley Values (ASV), Explainable Graph Neural Network (XGNN), Break-Down, Testing with Concept Activation Vectors (TCAV), Shapley Flow, X-NeSyL, Integrated Gradients, Meaningful Perturbations, Causal Models, Textual Explanations of Visual Models, and more (Holzinger et al. 2022a). This section compares XAI techniques in terms of the extent to which they satisfy human-centric explanation. Figure 3 presents popular XAI toolboxes. Table 3 presents the analysis of different XAI techniques and the extent to which they cover human-centricity properties.
As Table 3 shows, no existing XAI model has captured all the dimensions of human-centric event detection. In addition, most of the explanations provided by these XAI methods are logical axioms understandable by experts rather than common users. More research effort is still needed to provide explanations that are comprehensible to users.
Moreover, more research effort is still needed to invent explainable AI methods that address causal dependencies, as few XAI methods currently capture causal relationships (Holzinger et al. 2020). Research efforts should also be geared towards XAI models that encourage contextual understanding and answer questions and counterfactuals such as "what if". This allows a human-in-the-loop approach where conceptual knowledge and human experience are utilised in the AI process (Holzinger et al. 2021). The existing XAI methods barely scratch the surface of the 'black box' (by stressing features or localities, for instance, within an image) and do not provide explanations understandable to humans. This is quite different from how humans reason, evaluate similarities, make decisions, draw analogies, or make associations (Angelov et al. 2021). The best AI algorithms still lack conceptual understanding, so there is room for the XAI research community to contribute to this open problem.

Evaluation of explainable AI
Explainable AI is still in its infancy. As such, there is no standard agreement on how human-centric explanations should be evaluated (Li et al. 2022), owing to the subjectivity of the explainability concept and the perceptions and interests of users (Carvalho et al. 2019). Because the model's inner workings are unknown, there is no ground truth for evaluating post-hoc explanations (Samek et al. 2019). Clearly defined evaluation goals and metrics are necessary to advance research on explainability, and additional efforts in this area are still required (Ribera and Lapedriza 2019). Most existing systems skip evaluation or provide only an informal one (Danilevsky et al. 2020).

Intrinsic vs. post-hoc

Intrinsic (ante-hoc) explainability incorporates explainability directly into the model's structure; explainability is achieved by finding large-coefficient features that play a significant role in the prediction. Ante-hoc models are more interpretable: decision trees, k-nearest neighbours, and linear and logistic regression are interpretable, and this approach is useful when training a new model for which comprehension is essential. With post-hoc explainability, a second model is required to provide an explanation for the existing one. Support vector machines, ensemble algorithms, and neural networks are examples of models that are intrinsically uninterpretable. Post-hoc explanation can also supply explanations for intrinsically interpretable models and is useful for leveraging already trained or proven machine learning techniques.

Model-specific vs. model-agnostic

Post-hoc explanations can be divided into model-specific and model-agnostic explanations. Model-specific explanations are sometimes called white-box explanations because they are based on the model's internals. Saliency maps are an example of a model-specific explanation: they highlight features perceived to influence classification. However, model-specific explanations are designed for certain types of models. Model-agnostic explanations are decoupled from the model. Model-agnostic methods include partial dependence plots (PDP) and individual conditional expectations (ICE); both provide an explanation for the whole model using visual interactions of the model under investigation.

Surrogate

Another kind of post-hoc explanation that can be used to explain more complicated methods is the surrogate. An example of a surrogate method that explains ensemble models is Combined Multiple Models (CMM).

Global vs. local

Global explanations try to explain the whole model; familiar methods are tree-based models. Local explanations are based on a region around a single prediction, which is much simpler than a global explanation. Examples of popular methods in this category are Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP); however, SHAP is computationally costlier than LIME. DeepLIFT and Grad-CAM are used specifically for explaining deep learning models.

According to Mohseni et al. (2021), three human-centred evaluation methods for XAI are user satisfaction and trust, usefulness, and mental models.
User interviews allow for the measurement of user trust and satisfaction. The user's performance can be used to determine usefulness, for example, event detection with the aid of XAI systems. Mental models show how the user comprehends the system, and they can be measured by asking the user to predict the output of the model. Future research on human-centric XAI evaluation should centre on investigating innovative and effective methods for collecting subjective measures in user experiment designs for explanation evaluation (Zhou et al. 2021).
The only available tool for the quantitative evaluation of explanation methods is Quantus (Hedstrom et al. 2022). Despite the numerous explanation strategies that have been developed, it is necessary to quantify their quality and determine whether or not they achieve the established goals of explainability; more research effort is still needed to fill this gap. The common evaluation metrics for explainable AI are accuracy, fidelity, sparsity, contrastivity, and robustness (Li et al. 2022).
Accuracy refers to the proportion of correct explanations and can be measured in two ways. First, one can use the ratio of the number of important features identified by the explanation method to the number of truly important features (Luo et al. 2020). However, due to the absence of ground-truth explanations in most datasets, this is typically not possible in real-world situations. The accuracy measure is depicted as follows:

Accuracy = (1/N) Σ_i |s_i| / |S_i|_gt

where |s_i| represents the important features identified by the explanation method, |S_i|_gt is the truly important number of features, and N is the total number of samples. Second, accuracy measures can be derived from the perspective of the model's prediction (Yu et al. 2022). Fidelity measures how faithful the provided explanations are to the prediction of the model (Yuan et al. 2021). The main idea behind fidelity is that removing salient features should degrade the performance of AI models, for example by raising prediction error rates or lowering classification accuracy. Formally, fidelity can be expressed as the drop in predictive performance when the identified salient features are removed:

Fidelity = (1/N) Σ_i (1(f(x_i) = y_i) - 1(f(x_i \ s_i) = y_i))

where f(x_i) is the model's prediction on the original input and f(x_i \ s_i) is its prediction after the salient features s_i have been removed. Sparsity measures the proportion of important features identified by the explanation method (Pope et al. 2019), defined as follows:

Sparsity = (1/N) Σ_i |s_i| / |S_i|_total

where |s_i| represents the important features identified by the explanation method, a subset of S_i, |S_i|_total is the total number of features in the model, and N is the total number of samples.
According to Pope et al. (2019), contrastivity is the ratio of the Hamming distance between binarised heat maps for negative and positive classes.Contrastivity is based on the assumption that an explanation method's highlighted features should vary across classes.
Robustness looks at the consistency of explanations despite input perturbation/corruption, model manipulation, and adversarial attack (Zhang et al. 2021).
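Under the definitions above, the set-based metrics reduce to simple averaged ratios. The sketch below is our own illustrative reading of those formulas (variable names are hypothetical, and accuracy is taken as the fraction of truly important features the explainer recovers); fidelity is computed as the drop in classification accuracy after salient features are removed.

```python
# Illustrative implementations of the explanation-evaluation metrics.

def explanation_accuracy(identified, ground_truth):
    """(1/N) * sum over samples of |s_i ∩ S_i_gt| / |S_i_gt|:
    the fraction of truly important features the explainer recovers."""
    n = len(identified)
    return sum(len(s & g) / len(g) for s, g in zip(identified, ground_truth)) / n

def sparsity(identified, total_features):
    """(1/N) * sum of |s_i| / |S_i|_total: how small the explanations
    are relative to the full feature set."""
    n = len(identified)
    return sum(len(s) / total_features for s in identified) / n

def fidelity(preds_full, preds_masked, labels):
    """Drop in classification accuracy once salient features are removed;
    a larger drop means the explanation was more faithful."""
    def acc(preds):
        return sum(int(p == y) for p, y in zip(preds, labels)) / len(labels)
    return acc(preds_full) - acc(preds_masked)

# Two samples over four features (indices 0-3).
identified = [{0, 2}, {1}]          # features flagged by the explainer
ground_truth = [{0, 1, 2}, {1, 3}]  # truly important features
print(explanation_accuracy(identified, ground_truth))  # 7/12 ~ 0.583
print(sparsity(identified, total_features=4))          # 0.375
print(fidelity([1, 0, 1, 1], [1, 1, 0, 1], [1, 0, 1, 0]))  # 0.5
```

In practice the masked predictions would come from re-running the model on inputs with the flagged features occluded, mirroring the verbal definition of fidelity above.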

Explainable event detection
Social media, for example, accounts for a significant portion of human interaction (MacAvaney et al. 2019). Machine learning techniques are frequently the foundation of automatic event detection. Despite the superior performance of deep learning models (also known as black boxes), they lack transparency due to self-learning and intricate algorithms, which leads to a trade-off between explainability and performance (Arrieta et al. 2020). This challenge necessitated the development of XAI to explain black box models without sacrificing performance (Bunde 2021; Gunning and Aha 2019; Machlev et al. 2022).
Anomaly event detection, for instance, is a type of event detection that is context-dependent and can only be achieved through interaction with end users, domain expertise, and algorithmic insight. Explanation, interpretation, and user involvement can provide the missing link to transform complex anomaly detection algorithms into real-life applications (Sejr and Schneider-Kamp 2021).
Event detection is a good medium for reporting breaking news, terror attacks, instant outbreaks of communicable diseases, protests, election campaigns, etc. (Win and Aung 2018; Kolajo and Daramola 2017). An event has largely been defined as a major incident that occurred at a specific location and time (Panagiotou et al. 2016; Kolajo et al. 2022). From this definition, we can infer only four of the dimensions of an explainable event (5W1H): what, who, where, and when. The definition does not capture the other two dimensions (why and how) and is therefore incomplete in the light of providing an explainable event; as established earlier, for an event to be humanly understandable, it must cover all six (5W1H) dimensions. A proper definition that fits the 5W1H concept, as given by Chen and Li (2019), is that an event is "an action or a series of actions or change that happens at a specific time due to specific reasons, with associated entities such as objects, humans, and locations." However, the issue with the social media feeds from which events are detected is incompleteness: since there are no style restrictions on this user-generated content, the information is usually incomplete. As such, it is difficult to infer the six dimensions from social media feeds without using external knowledge sources. Hence, there is a need for semantics and ontologies to complement the information provided by social media (Ai et al. 2018).
A truly explainable event detection system must answer the 5W1H dimension questions with a comprehensible human explanation (Chakman et al. 2020; Chen and Li 2019). A human-comprehensible explanation of events detected from social media streams cannot be achieved without incorporating domain knowledge. More broadly, social media feed characteristics such as short messages, grammatical and spelling errors, mixed languages, ambiguity, and improper sentence structure necessitate harnessing the potential of semantics and semantic web technologies for improved human comprehension (Ai et al. 2018; Cherkassky and Dhar 2015; Kolajo et al. 2020; Islam et al. 2021). Existing event detection systems that have tried to provide explanations used only the limited information in the social media streams; none has captured the six dimensions of 5W1H to provide explanations.

Formal definition of explainable event detection
An event is considered complete when all the 5W1H components can be deduced, answering the questions of who did what, when, where, why, and how. We define these components subsequently, following Muhammed et al. (2021).
Definition 1 (Event e). An event refers to a natural, political, social, or other occurrence phenomenon at a specific location L_e and time T_e, with a semantic textual description S_e and one or more participants P_e, together with the cause C_e and the method M_e. It is depicted as follows:

e = (L_e, T_e, S_e, P_e, C_e, M_e)

where L_e is the spatial information about the location (where) of the event (for example, longitude-latitude pairs); T_e is the temporal information about when the event occurred, such as the content creation time; S_e is the textual semantic description of what occurred, such as a name, title, tag, or content description; P_e stands for the participants (such as a person or organisation) involved in the event; C_e is a causal description that explains why the event occurred (or which event caused it), demonstrating the relationship between two events, with event1 identified as the cause and event2 as the effect; and M_e is the textual detail about how the event was carried out.

Definition 2 (Spatial Dimension (L_e)). An event's spatial dimension L_e defines the location where the event was detected using latitude (φ), longitude (λ), and altitude (h). Formally, it is represented as in Eq. 5:

L_e = (φ, λ, h)     (5)

Definition 3 (Temporal Dimension (T_e)). It indicates the date/time when an event occurred.
We understand that social media platforms usually record several timestamps, such as when an event was shared, uploaded, or modified. However, in this paper we stick to the specific time an event occurred, because this provides the actual time of the event.
Definition 4 (Semantic Dimension (S_e)). The semantic dimension contains the concepts describing the detected event. It is usually represented as a graph with three attributes, as presented in Eq. 6:

S_e = (N, E, R)     (6)

where N is the collection of concepts represented by nodes, E is the collection of edges connecting nodes, and R is the collection of semantic relationships.
Definition 5 (Participant Dimension (P_e)). The event participant dimension P_e refers to an actor (e.g., a person or organisation) participating in the event. Participants can be extracted by applying Named Entity Recognition (NER) to the content description.
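As an illustration of Definition 5, the sketch below extracts candidate participants from a post. A real system would use a trained NER model (e.g., a PERSON/ORG tagger); the capitalised-span heuristic here is only a toy stand-in, and the example post is invented:

```python
import re

def extract_participants(text):
    """Toy participant extractor for P_e: finds runs of capitalised words.

    A stand-in for a real NER model, shown only to illustrate where
    the participant dimension of an event could come from.
    """
    # Two or more consecutive capitalised tokens -> likely a named entity
    pattern = r"\b(?:[A-Z][a-z]+(?:\s+[A-Z][a-z]+)+)\b"
    return re.findall(pattern, text)

post = "World Health Organization declares emergency after talks with John Kamara"
print(extract_participants(post))  # ['World Health Organization', 'John Kamara']
```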
Definition 6 (Causal/Reason Dimension (C_e)). The causal dimension C_e is a set of causal knowledge representing the causes of an effect; it determines the cause-effect relationships among events. The causal dimension is represented in Eq. 7:

C_e = <E_i, E_j, R_n>     (7)

where C_e represents the causal dimension, E_i the causal event, E_j the effect of the causal event E_i, and R_n the relationship between the events.

Definition 7 (Manner Dimension (M_e)). The manner dimension M_e is defined as a set of textual information representing how an event was performed using the method (or how) M_i. The manner dimension is represented in Eq. 8:

M_e = <E_i, M_i>     (8)
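Definitions 1-7 can be summarised as a plain data structure. The sketch below is a minimal illustration of the tuple e = (L_e, T_e, S_e, P_e, C_e, M_e), not a prescribed schema; the field names and the completeness check are our own:

```python
from dataclasses import dataclass, field

@dataclass
class SpatialDim:          # L_e: where (Definition 2)
    lat: float             # latitude (phi)
    lon: float             # longitude (lambda)
    alt: float = 0.0       # altitude (h)

@dataclass
class Event:               # e = (L_e, T_e, S_e, P_e, C_e, M_e) (Definition 1)
    location: SpatialDim               # L_e: where
    time: str                          # T_e: when (e.g., an ISO timestamp)
    semantics: str                     # S_e: what (textual description)
    participants: list = field(default_factory=list)  # P_e: who
    cause: str = ""                    # C_e: why
    manner: str = ""                   # M_e: how

    def is_complete(self):
        """An event is 'complete' when all 5W1H components are present."""
        return all([self.location, self.time, self.semantics,
                    self.participants, self.cause, self.manner])

e = Event(SpatialDim(6.52, 3.37), "2023-05-12T09:00:00",
          "building collapse", ["rescue team"], "structural failure",
          "sudden collapse during renovation")
print(e.is_complete())  # True: all six dimensions are filled
```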

The essence of explainable event detection
No doubt, existing machine learning tools and algorithms for event detection have recorded success. However, most machine learning approaches use latent features to detect events effectively, yet cannot explain why a piece of social media text is classified as an event (Shu et al. 2019). Capturing an explainable event's six dimensions (5W1H) is important because new knowledge and insights, originally hidden from users and practitioners, can be derived from such explanations. In addition, extracting explainable features can further improve event detection performance and gain users' trust.
Explainable event detection can generate human-understandable explanations without requiring detailed knowledge of the underlying event detection models (Ribeiro et al. 2016). It can thereby increase users' comprehension and trust (Adadi and Berrada 2018).
The need for explainable systems, according to Samek et al. (2017), is to (1) verify the system, or comprehend the rules governing the decision-making process, to identify biases; (2) improve the system and avoid failures by comparing different models with the intended model and dataset; (3) extract the distilled knowledge from the system by learning from it; and (4) comply with applicable legislation by responding to legal inquiries and informing those whom the model's decisions will impact. By revealing the logic behind a model decision, an explanation can be used to prevent errors and ascertain the appropriate use of certain criteria. An explanation may, however, force trade secrets to be revealed (Novelli et al. 2023).

Human-centric explainable event detection
Machine learning systems that can provide human-centred explanations for decisions or predictions have recently gained much attention. No matter how good and efficient a model is, it is difficult for users or practitioners to trust the model if they cannot understand the model or its behaviours (Mishima and Yamana 2022). In event detection, explainability is crucial for practitioners and users to ensure that models are widely accepted and trusted. Human-centric explainable event detection will achieve trustworthiness, explainability, and reliability, which are currently lacking. Achieving such a human-centric event detection system will necessitate designing and developing more explainable models. Optimising models or regularisers is only worthwhile if they can solve the human-centric task of providing explanations (Narayanan et al. 2018). Incorporating human-centric explainability into event detection systems is significant for building a decision-making process that is more trustworthy and sustainable (Vemula 2022). Unfortunately, often even the developer who wrote the code for a model does not understand why a decision was made and therefore cannot assure the user that the model can be trusted. This huge gap necessitates the development of human-centric explainable event detection models to promote trust and wider adoption.

Semantics-based explainable AI for event detection
The growing integration of AI capabilities across consumer applications and industries has led to a high demand for explainability. XAI is an emerging approach that promotes accountability, credibility, and trust. It combines machine learning with explainability techniques to show how, where, when, who, what, and why a decision, such as a detected event, was made (Ammar and Shaban-Nejad 2020). One way to achieve this is by leveraging semantic information (Donadello and Dragoni 2021) or formal ontologies to provide contextual knowledge (Confalonieri et al. 2021; Ribeiro and Leite 2021). A language with human-understandable concepts and meaningful relationships between those concepts is required to justify the output of XAI. The result is a comprehensible description of the reasoning behind the outputs provided by the XAI system. In other words, an ontology can be used to define the concepts and relations that convey the justification of XAI outputs. However, presenting the internals of the XAI system in a human-understandable way requires mapping to the existing concepts in the ontology. Justification of XAI output can then be achieved by using logic-based reasoning methods coupled with the ontology and the observations made with respect to each mapped concept. When contextual information provided by an ontology fortifies AI, the result is greater trustworthiness; such AI also becomes easier to train and maintain (Battaglia et al. 2018). The requirements for semantics-based explainable event detection and possible ways semantic technology can be integrated into XAI for event detection are discussed subsequently.

Requirement for semantic-based explainable event detection
Many early AI systems used the expert system (i.e., rule-based) approach to provide explanations addressing the what, why, and how of AI system decisions. However, such systems did not consider user context when generating explanations. Today there are XAI methods (with deep learning approaches) that focus on explaining the underlying mechanisms of these black boxes. While this is appreciable, it is not enough to provide tailored, personalised, and trustworthy explanations to the users or consumers of AI systems (Yang et al. 2022). Moreover, machine learning models use scores for their predictions. While a prediction score may convey some level of confidence, it lacks context and is therefore inadequate as an explanation without additional information. Semantic web and reasoning technology is well suited to fill this gap (Chari et al. 2020). AI systems also need to include provenance to improve confidence and trust. The strength of the semantic web combined with AI will contribute significantly to XAI systems.
A user-centric semantics-based explanation should (1) be understandable: the explanation should include capabilities that define unfamiliar terminologies; (2) appeal to the user: a resourceful explanation should appeal to the user's current need and mental cognition; (3) adapt to the user's context: explanations should be tailored to the current user's context and scenario by leveraging user information; and (4) include provenance: a property that is either absent or has not yet received proper emphasis. Provenance aims to include the domain knowledge utilised along with the method used to obtain that knowledge (Chari et al. 2020). Doran et al. (2017) presented a better perspective on explainability. They opine that rather than focusing solely on mathematical mappings, XAI systems should provide justifications or reasons for their outputs. In addition, they argue that to produce explanations humans can understand, truly explainable AI systems must use reasoning engines that can run on knowledge bases containing explicit semantics. This notion is also supported by Chari et al. (2020). Using logical reasoning on the model's output alone is inadequate, as the explanation is then guided only by the knowledge base axioms; no explicit link is made between the knowledge base concepts and the model's learned features. Hence, logical reasoning must be tied to semantics to yield a human-centric explanation, as this links the model's output to human concepts. Semantics-based XAI can present a sufficient explanation comprehensible by a human. We believe that human-comprehensible explanations cannot be achieved without domain knowledge and that data analysis alone is insufficient for full-fledged explainability. There is a need to integrate semantic web technologies with AI systems to provide explanations in natural language (Ai et al. 2018; Lecue 2020).
Linking explanations to ontologies (i.e., structured knowledge) has multiple advantages: explanations enriched with semantic information, effective knowledge transmission to users, and the ability to customise the level of specificity and generality of explanations to specific user profiles (Hind 2019). Integrating domain background knowledge, such as a knowledge graph, with AI models can provide more insightful, meaningful, and trustworthy explanations (Tiddi and Schlobach 2022). Semantic technologies provide easy access to web knowledge sources, while symbolic representations such as knowledge bases, ontologies, and graph databases formalise and capture data and knowledge for specific or general domains.
XAI systems aim to provide a link between semantic and learned features. The connection between an AI system (or, more specifically, a DNN model) and its semantic features can be formalised by defining the comprehension axiom, presented as follows:

Definition 8 (Comprehension axiom). Given a First-Order Logic (FOL) language with its set of predicate symbols, a comprehension axiom has the form presented in Eq. 9 (Donadello and Dragoni 2021):

∀x (O_i(x) → A_1(x) ∧ A_2(x) ∧ ... ∧ A_m(x)),  i = 1, ..., n     (9)

where {O_i}^n_1 is the set of output symbols of the AI model and {A_j}^m_1 is the corresponding set of semantic attributes or features. Let us see how this applies to an AI system's main tasks: regression, multiclass classification, and multi-label classification.
Regression: the predicates computed by the model represent, for example, the asking price and the real value of a house; the semantic features are the properties of interest when buying a house.
Multiclass classification: the predicate computed by the model represents a class, for example, pounded yam with okra soup; the semantic features are the ingredients contained in the recognised dish.
Multi-label classification: several predicates can be part of the list computed by the model, for example, dinner and party; the semantic features are the objects in the scene, such as tables, pizzas, persons, and balloons.
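A toy propositional reading of the comprehension axiom can make the idea concrete: an instance classified with output symbol O should also exhibit the semantic attributes mapped to O. The attribute mapping below is invented for illustration:

```python
# Toy check of the comprehension axiom: for each output symbol O,
# every attribute A mapped to O must hold for any instance classified as O.

AXIOMS = {  # O -> required semantic attributes (hypothetical mapping)
    "party": {"persons", "balloons"},
    "dinner": {"tables", "food"},
}

def axiom_satisfied(output_label, observed_attributes):
    """True iff O(x) -> A_1(x) and ... and A_m(x) holds for this instance."""
    return AXIOMS[output_label] <= set(observed_attributes)

scene = {"persons", "balloons", "tables", "pizzas"}
print(axiom_satisfied("party", scene))   # True: both required attributes present
print(axiom_satisfied("dinner", scene))  # False: 'food' not observed
```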

Integrating semantics into explainable event detection models
Integrating semantics into XAI aims to facilitate situational understanding by human analysts (Holzinger et al. 2022b). Exploiting semantic relationships between model outputs and human-in-the-loop processes can improve the generated explanations (Harbone et al. 2018). It has already been established that for AI solutions to reach their full potential in terms of usability, they must be explainable, which requires semantic context (Pesquita 2021). Semantic technologies and artefacts such as knowledge graphs and ontologies can provide a human-centric explanation.
While progress is being made in the AI community to address explainability issues, AI systems are still far from self-explainability, which would adapt automatically to any machine learning algorithm, data, model, application, user, and context (Lecue 2020). Semantic representations and connections in the form of knowledge graphs such as DBpedia, OpenCyc, Wikidata, Freebase, YAGO, NELL, ConceptNet, WordNet, and the Google Knowledge Graph can be harnessed to move XAI closer to human comprehension (Tiddi and Schlobach 2022). Knowledge graphs natively and readily expose connections and relations, encode contexts, and support inference and causation (see Fig. 4). Integrating knowledge graphs with XAI systems therefore holds great potential (d'Amato 2020). Knowledge-driven structures can adapt to constraints, variables, and search spaces. Knowledge graphs can also capture knowledge from heterogeneous domains, which makes them a great candidate for explanation (Pakti et al. 2019). In addition, semantic descriptions inspired by knowledge graphs can bridge the gap left by brute-force machine learning approaches to text analysis and improve explainability. This has yielded positive results on natural language processing tasks such as event extraction, relation extraction, and text classification (Ribeiro et al. 2018).
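As a small illustration of how a knowledge graph exposes connections and supports inference for explanation, consider a toy triple store with a transitive locatedIn relation (the facts and relation name are illustrative):

```python
# Toy knowledge graph as (subject, relation, object) triples.
TRIPLES = {
    ("Eiffel Tower", "locatedIn", "Paris"),
    ("Paris", "locatedIn", "France"),
}

def located_in(entity):
    """Follow 'locatedIn' edges transitively to enrich an event's context."""
    places, frontier, changed = [], entity, True
    while changed:
        changed = False
        for s, r, o in TRIPLES:
            if s == frontier and r == "locatedIn":
                places.append(o)   # inferred context for the explanation
                frontier = o       # continue up the chain
                changed = True
                break
    return places

# An event detected at "Eiffel Tower" can be explained as occurring in
# Paris and, by transitivity, in France.
print(located_in("Eiffel Tower"))  # ['Paris', 'France']
```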
With knowledge graph resources, event causality inference can be achieved. The causal relations among events can be inferred in many ways. The simplest approach determines the likelihood that event y occurs after event x has occurred (Zhao 2021). The cause-effect relationship can also be formulated as a classification task, where the cause and effect candidate events are the inputs, enriched with contextual information from knowledge sources (Kruengkrai et al. 2017). Other methods employ NLP techniques to identify causal markers such as causal prepositions, connectives, and verbs (Cekinel and Karagoz 2022). Often, cause-and-effect identification using the above methods results in low generalisation. One way to improve this is to adopt an ontology or external knowledge bases to establish the underlying relationships among event candidates. The similarity between two cause-effect pairs (c_i, e_i) and (c_j, e_j) is computed as:

sim((c_i, e_i), (c_j, e_j)) = (σ(c_i, c_j) + σ(e_i, e_j)) / 2

One method of incorporating explainability involves the following steps: ontology selection, semantic annotation, semantic integration, and semantic explanation. Ontology selection determines the optimal set of ontologies that adequately describe the data. Semantic annotation links the data to the ontologies. Semantic integration establishes links between ontologies, while semantic explanation explores the background knowledge afforded by the knowledge graphs (Pesquita 2021).
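The pair-similarity formula above can be sketched directly. Here σ is instantiated with Jaccard word-set overlap, an illustrative choice rather than the similarity measure prescribed by the cited work:

```python
def jaccard(a, b):
    """Placeholder sigma: word-set overlap between two event descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def pair_similarity(cause_i, effect_i, cause_j, effect_j):
    """sim((c_i, e_i), (c_j, e_j)) = (sigma(c_i, c_j) + sigma(e_i, e_j)) / 2"""
    return (jaccard(cause_i, cause_j) + jaccard(effect_i, effect_j)) / 2

s = pair_similarity("heavy rainfall", "river flooding",
                    "heavy rainfall overnight", "flooding of the river")
print(round(s, 3))  # 0.583: the average of 2/3 (causes) and 1/2 (effects)
```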

Findings from the survey
AI has made considerable strides in the last ten years in solving a wide range of issues. This apparent success, however, has been accompanied by a rise in model complexity and the use of opaque black-box models (Saeed and Omlin 2023). These issues call for human-centric XAI that will enable end users to grasp, trust, and effectively operate the emerging breed of AI systems. So far, findings from the literature reveal that more needs to be done on developing robust and human-centric explainable AI, improving explainability, defining evaluation metrics for XAI, and explainable event detection.

Robust and human-centric explainable AI
Interpretability has received the majority of attention in existing research, and additional work is still required to develop robust explainable AI. In-depth knowledge of XAI and practical experience with XAI methodologies are prerequisites for using AI to make well-informed decisions. So far, the explanations provided by XAI methods are understandable only by experts, not by common users (Holzinger et al. 2021). More research effort is needed to provide human-comprehensible explanations for AI systems.
Improved explainability
AI systems are still far from self-explaining, despite the AI community's progress in addressing explainability issues. According to Lecue (2020), self-explainability can automatically adapt to any machine learning algorithm, data, model, application, user, or context. The synergistic incorporation of XAI and semantic technologies has been recognised as one of the most proficient ways of extending the reasoning of AI and machine learning (ML) systems (Pesquita 2021). Combining semantic layers from knowledge graphs and ontologies could lead to explainability in AI. The integration of data and AI results with knowledge graphs and ontologies can act as background knowledge for XAI applications. According to Pesquita (2021), this kind of integration can provide the semantic contexts necessary for human-centric explanation. So far, few XAI methods have captured causal relationships (Holzinger et al. 2020).
Hence, there is a need for explainable AI methods that address causal dependencies. XAI models that encourage contextual understanding and answer questions and counterfactuals such as "what-if" are currently lacking. More XAI methods are also needed that emulate how humans reason, evaluate similarities, make decisions, draw analogies, and make associations (Angelov et al. 2021). The existing XAI methods barely scratch the surface of the 'black box' (by stressing features or localities, for instance, within an image) and do not provide explanations understandable to humans.

XAI evaluation
Explainable AI is still at an early stage. As a result, there is no universally accepted method for evaluating human-centred explanations (Li et al. 2022), owing to the subjectivity of the notion of explainability and the varying insight and interests of users (Carvalho et al. 2019). Most existing frameworks skip evaluation or give only an informal assessment (Danilevsky et al. 2020). Mohseni et al. (2021) claim that three human-centred assessment techniques for XAI are user satisfaction and trust, usefulness, and mental models. The trust and satisfaction of the user can be measured through user interviews. The user's performance can be used to determine usefulness, for instance, in event detection with the aid of XAI systems. By asking the user to predict the model's output, mental models demonstrate how well the user comprehends the system. The primary focus of research efforts ought to be the creation and standardisation of metrics for evaluating the quality of explanations produced by XAI systems. Future human-centric XAI evaluation research should centre on innovative methods for collecting subjective measures for explanation evaluation and effective user experiment designs.

Explainable event detection
Various event detection models have been described in the literature, but less effort has been made to provide human-centric explainable event detection.
None of these few attempts has covered the 5W1H dimensions required for human-centric explainable event detection (Evans et al. 2022; Khan et al. 2021). Still required is a truly explainable event detection framework that can provide a human-readable response to the 5W1H dimension questions. A blueprint for a human-centric explainable event detection framework is currently unavailable, and there is still no explainable event detection framework that is fair, transparent, trustworthy, reliable, and transferable. A standard human-centric event detection framework must have these essential characteristics as well as the properties of the 5W1H dimensions.

Open issues and future research directions
This section presents the open issues and future research directions for XAI and explainable event detection.

Need for human-centric explanation in event detection
While human-centric explainable AI is promising, theory-driven XAI is only beginning to signal its future promise (Antoniadi et al. 2021). There are still many unexplored topics in social, cognitive, and behavioural theories. Various stakeholders must map out the domain and shape the AI discourse (Langer et al. 2021). Implementing XAI's philosophical, methodological, and technological human-centric views will be challenging. In other words, we still do not know how to reliably translate the systems we design into real-world, socially and culturally situated AI systems (Saeed and Omlin 2023). Social science, psychology, law, and human-computer interaction researchers must collaborate for human-centric explainable AI to succeed (Adadi and Berrada 2018). It has also been contended that VSD will influence how solutions are designed in the future.
Although there are several explanation techniques, the choice of explanation will depend on the purpose and audience of the explanation. The people who create AI systems, the decision-makers who use them, and those who are ultimately affected by the results of those decisions should all be considered (Arrieta et al. 2020). To communicate effectively with a wide range of audiences, precise and relevant explanations must be created (Belle and Papantonis 2021). Future AI must include qualities like transparency, trustworthiness, and comprehensibility to be widely used. A truly human-centric explainable event detection system should provide a human-readable response to the 5W1H dimension questions. Future research should focus on harnessing the human-centric properties required for explainable event detection.

Need for robust explainable approaches
The key issues that current explainability strategies address are those related to model interpretability and post-hoc interpretation of the model's input data or conclusions. More knowledge or intuition is required to fully understand AI systems' inputs, operations, and outputs. Building stronger model-specific techniques is one area that could receive more attention in the future (Belle and Papantonis 2021). Exploring this avenue can produce ways to use a model's unique properties to provide explanations, potentially boosting fidelity and enabling greater examination of the model's internal workings rather than only explaining its conclusions. This would presumably make it easier to develop efficient algorithmic implementations, because expensive approximations would not be needed.
Investigating further linkages between AI and the semantic web or statistics, which would allow a variety of well-researched techniques to be used, is another promising route to overcoming robustness difficulties (Kumar et al. 2020). Additionally, knowledge-graph-inspired semantic descriptions can fill the gaps left by brute-force machine learning techniques and increase explainability. There is a need for robust explainable event detection approaches that exhibit human-in-the-loop properties for better and more understandable explanations.

Need for a human-centric explainable event detection framework
A general framework for explainable event detection is required, since it would direct the creation of comprehensive explanation strategies (Adadi and Berrada 2018). Future research should concentrate on end-to-end frameworks that are explainable from conception to implementation (Saeed and Omlin 2023).
Data quality communication should be considered when creating design elements (Markus et al. 2021). Knowledge infusion (Messina et al. 2022), rule extraction (He et al. 2020), approaches supporting explanation of the training process (Gunning et al. 2019), explainability for models and model comparison (Chatzimparmpas et al. 2020), and interpretability for natural language processing (Madsen et al. 2022) should all be specified during the development stage. Human-machine teaming (Islam et al. 2021), security (Liang et al. 2021), machine-machine explanation (Weller 2019), privacy (Longo et al. 2020), planning (Adadi and Berrada 2018), and improving explanation with ontologies (Burkart and Huber 2021) should all be considered during the deployment stage.

Need for standard evaluation metrics for explanation methods
The only available tool for the quantitative evaluation of explanation methods is Quantus (Hedstrom et al. 2022). Accuracy, fidelity, sparsity, contrastivity, and robustness are the typical assessment criteria for explainable AI (Li et al. 2022). Although many explanation strategies have been created, it is important to evaluate their effectiveness and whether they meet the predetermined standards for explainability. Further research is still required to close this gap.
In the past, much research has focused on creating new explainability techniques without considering whether these methods satisfy the stakeholders (Adadi and Berrada 2018). In reality, only a small percentage of studies discussing explainability approaches evaluated the proposed techniques. Future studies on human-centric XAI assessment should identify novel approaches that efficiently gather subjective measures for evaluation and design effective user experiments (Zhou et al. 2021).

Conclusion
In this paper, we have provided a comprehensive survey on human-centric explainable AI, explainable event detection, and semantics-based explainable event detection by answering research questions that concern the characteristics of human-centric explanations, the state of explainable AI, methods for human-centric explanations, the essence of human-centricity in explainable event detection, research efforts in explainable event solutions, and how semantics can be integrated into explainable event detection to achieve human-comprehensible event detection solutions. We have argued that semantics-based XAI can provide a human-centric explanation in a more comprehensible manner. To realise these objectives, we reviewed the papers relevant to the topics of interest. We found that current explainability techniques mainly focus on model interpretability; additional information or intuition is needed to explain AI systems' inputs, workings, and conclusions. In addition, none of the existing event detection systems has captured the six dimensions of 5W1H to provide explanations while simultaneously emphasising human-centricity.
While human-centric explainable AI is hugely promising, theory-driven XAI is just beginning to display signs of its future potential. Many areas of social, cognitive, and behavioural theories are yet to be explored; this remains open for further research. There is a need to chart the domain and shape the discourse of AI with diverse stakeholders. The major challenge will lie in how to operationalise the human-centric perspectives of XAI at the conceptual, methodological, and technical levels. AI could be explained by combining semantic layers from knowledge graphs and ontologies. The integration of data and AI results with knowledge graphs and ontologies can serve as background knowledge for XAI applications. Such integration can provide the semantic contexts necessary for a human-centric explanation, greatly impacting user and decision-maker adoption of event detection solutions. In other words, future AI must exhibit properties such as trustworthiness, transparency, and explainability to be widely adopted. Making informed decisions with AI will be premised on a thorough understanding of XAI and hands-on expertise in XAI techniques.

Fig. 1
Fig. 1 Taxonomy of the Survey

Fig. 3
Fig. 3 Number of stars for the popular XAI tools (Holzinger et al. 2022a)

Fig. 4
Fig. 4 The role of knowledge graphs in explainable AI (Lecue 2020): augmenting the input, intermediate features, and output relationships with more semantics to capture causal relationships

Table 2
Description of Explainable AI Approaches