1 Introduction

Effective explanations are crucial for making AI more understandable and trustworthy for users [3, 45, 51]. Although different explanation techniques exist [16, 51, 67], creating effective AI explanations for non-experts is still a challenge. Recent research highlights the need for a stronger user-centered approach to AI explainability (XAI), in order to develop explanations tailored to end-users’ needs, backgrounds, and expectations [58, 72]. Design Thinking and user-centered design are well known for their effectiveness in understanding users and developing solutions that satisfy users’ needs in many fields, especially for interactive systems [11, 14, 30, 33, 54]. They can thus help us develop better explanations. Moreover, a user-centered lens helps us realize that the challenge of explainability is not unique to AI, but also appears in other contexts that are complex, opaque, hard to understand and require dealing with uncertainty.

In fact, such challenges can also be observed in attempts to explain scientific knowledge and complex information about the COVID-19 pandemic to non-experts [34]. In both cases, the users (here, the general public) are faced with what they perceive as opaque, complex systems whose reasoning processes are not readily observable or understandable to them: recommendations from AI systems in the one case, and expert recommendations in the COVID-19 pandemic in the other. Just as the understandability of AI results is important for their trustworthiness and acceptance by users, the understandability of information about complex phenomena and processes in the COVID-19 pandemic is an important factor for citizens’ acceptance of containment measures (e.g. vaccination, masks) [20]. Translating insights from AI explainability research (e.g. which types of explanations are best suited to which types of problems and users, and in what form) could thus lead to novel approaches for increasing the understandability and acceptance of explanations of complex issues in crises and emergencies such as COVID-19.

While different AI explanation techniques have been proposed (see overviews in [16, 28, 39, 67]), most of them have been developed from a data-driven rather than a user-centered perspective. In contrast, recent research has highlighted the need for a more interdisciplinary and user-centered approach to AI explainability in order to provide explanations more suitable for and better tailored to different end-users and stakeholders [16, 28, 39, 44, 45, 67, 72]. This work has also suggested how AI explainability can learn from related work in other domains (e.g. psychology, cognitive science, philosophy), sometimes referred to as “explanation sciences” [44]. Different contributions have pointed out the importance of human factors (e.g. cognitive biases, prior beliefs, mental models) for the interpretation, trust and acceptance of AI explanations [2, 37, 72]. However, the importance of adopting a user-centered design process to achieve this has so far received little attention (with some exceptions, e.g. [12, 48]).

In addition, we propose to also consider what we could learn from translating methods and insights from explainable AI to developing effective solutions for related “explainability problems” in other fields of science and society. How can a user-centered perspective help us in doing so? And what can we learn back from that experience to inform new user-centered approaches to explainable AI?

Against this background, in our XAI4COVID project we have been exploring how to translate techniques and insights from AI explainability research into effective explanations of complex issues in the COVID-19 pandemic through a user-centered approach. Here, we describe our user-centered approach, an interdisciplinary conceptual framework and a first prototype developed to this end. The prototype integrates AI explainability techniques with methods from communication science, visualization and HCI to provide a user-centered explanation for a typical question regarding the safety of COVID-19 vaccines. We also discuss results from a first evaluation in an explorative user study with 88 participants and outline future work. By learning how well the selected explanation technique and its extension with methods from other fields worked in this context, we aim to identify suggestions for further work that could also inform new ideas for better user-centered design of AI explanations.

2 AI Explainability and the Explainability of Complex COVID-19 Information

Not being able to understand why an AI system exhibits a certain behavior and why it has produced a certain result can lead to misinterpretation of its results and their reliability, to incorrect attribution of cause-effect relationships, and ultimately to wrong or harmful decisions [17]. The resulting perception of opacity and inconsistency diminishes users’ trust in the system. Similarly, not being able to understand the reasoning behind recommended COVID-19 containment measures undermines their understandability and trustworthiness for citizens.

A case in point is the public communication of information related to vaccine safety and vaccine hesitancy in the COVID-19 pandemic. Although vaccine hesitancy has many influencing factors (especially deep-seated ideological beliefs and attitudes, social and cultural context, and disinformation), the understandability and trustworthiness of information about COVID-19 vaccine safety and its fast development have also been among the challenges for people’s acceptance, especially in the early stages of the pandemic [74]. This has been exacerbated by the high uncertainty and complexity of fast-changing information about what can seem, to non-experts, like opaque processes (e.g. vaccine development and testing).

The resulting feeling of being overwhelmed by the complexity and fast pace of the situation, together with the fear and anxiety induced by high uncertainty, can lead to a preference for inaction when the consequences of taking a decision are perceived as uncertain and complex. Studies of COVID-19 vaccination attitudes (e.g. [20]) and anecdotal evidence from media reports (e.g. [25]), interviews available on YouTube (e.g. [75]) and discussions of vaccine-hesitant people in online forums (e.g. [55]) point in this direction. They suggest that beyond groups of people with deep-seated anti-vaccination beliefs (e.g. of ideological or political origin), many people have been genuinely struggling to make sense of huge amounts of information that was, to them, incomprehensible and contradictory, including widespread misinformation.

Such situations increase the extent to which people construct their own theories about the reality of the situation, often based on what they are directly experiencing and on false attributions of causal relationships. Similar problems occur when people construct their own explanations of systems (mental models, “folk theories”) to interpret and predict their behavior, as evidenced in HCI and AI explainability research [37], building on cognitive science [59]. Users’ understanding of system behavior is often flawed [37], which in the case of AI systems can lead to risks of incorrect interpretation and wrong decisions, misplaced trust in the system and outright societal harms (see [50] for an overview). Explainability techniques can help address such issues (e.g. [1, 51, 72]) and could thus also help address related problems in COVID-19 communication.

Similarly, just as non-experts reason about AI systems differently than experts do, the general public reasons differently than experts about health risks and vaccines in the COVID-19 pandemic. Public risk perceptions are informed by heuristics such as the affect heuristic (‘how do I feel about it’) and the availability heuristic (how easily a risk can be drawn from memory) [24, 56]. These deviate from the risk decision rules used by experts [46]. Accordingly, just as non-experts need different types of AI explanations than experts [28], communicating complex information in crises and emergencies such as the COVID-19 pandemic requires user-centered explanations tailored to the needs of non-experts.

3 Theoretical Background and Conceptual Design Framework

While many different AI explanation techniques have been proposed (see overviews in [3, 7, 16, 28, 45, 67]), most of them have not been developed from a user-centered perspective [7, 28, 45, 72], but rather from a data-driven perspective (e.g. how to identify the importance of different features from input data for a given result [41, 57], or how to identify the most important training examples [35]). Such methods are mostly used by AI experts themselves to e.g. debug or improve their AI models [7, 12].

As pointed out in [39, 44, 45], promising explanation types that could be well-suited for non-expert users include contrastive and counterfactual explanations. Contrastive explanations can enable users to understand possible cause-effect relationships by contrasting the observed outcome with information about why another possible outcome hasn’t occurred (“why not”) [44, 45]. Counterfactual explanations help users understand how the outcome would change with different inputs or other factors (“what if”). Such explanations allow users to understand how the situation would need to change for alternative outcomes to occur [44, 72].
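To make the counterfactual (“what if”) idea concrete, the following minimal sketch (our own illustration, not taken from the cited works) performs a brute-force counterfactual search on a toy scikit-learn classifier: it increases a single input feature until the predicted class flips, i.e. it finds how the input would need to change for an alternative outcome to occur. The data and feature are purely hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: a single numeric feature and a binary outcome (purely illustrative).
X = np.array([[2.0], [4.0], [6.0], [8.0], [10.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.5, max_iter=100):
    """Increase the feature value until the predicted class flips ("what would need to change")."""
    original = model.predict([x])[0]
    cf = np.array(x, dtype=float)
    for _ in range(max_iter):
        cf[0] += step
        if model.predict([cf])[0] != original:
            return cf  # smallest change found that yields the alternative outcome
    return None        # no counterfactual found within the search budget

print(counterfactual([5.0], model))

The returned value can then be phrased contrastively for the user, e.g. “the outcome was P rather than Q because this factor stayed below the identified threshold”.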

Different techniques to construct and present such explanations have been proposed (see overviews in [7, 39, 44, 45, 72]). Visualization has been used successfully to support understanding of which features in the data have the most influence on system results (e.g. [57, 63]). Even simple visualization techniques can support contrastive and counterfactual reasoning for non-experts, e.g. by showing how differences in the factors determining a complex situation relate to different outcomes [7, 28, 72]. While visualizations of statistical data were shown to be less effective in vaccination campaigns [62], the use of metaphorical and symbolic visualizations to support behavioral change has demonstrated promising results in other domains [19, 36, 49, 66].

Applying such methods to develop user-centered explanations addressing COVID-19 vaccine concerns requires their integration with approaches from persuasive communication and related work. Approaches combining storytelling and visualization have shown how complex issues and scientific data can be made better graspable for the general public [6, 23, 27, 60, 68, 69], including health risks [27, 73]. The persuasive communication literature suggests that effective communication requires overcoming common defensive psychological responses and cognitive biases (e.g. counterarguing, selective avoidance, confirmation bias) [47] and stresses the importance of establishing the credibility of the message bearer [4]. Approaches based on elaboration likelihood theory [53, 61] suggest that strategies such as narrative persuasion and parasocial identification can overcome resistance to messages opposing one’s current beliefs (e.g. perceived invulnerability) [22, 24, 47, 70]. Similarly, integrating metaphorical and symbolic visualizations with personalized motivational messages for behavioral recommendations has been successful in stimulating pro-environmental behavior [19, 49].

Motivational interviewing also highlights the need to express empathy and acknowledge a person’s concerns, rather than using confrontation, as part of a strategy for guiding people towards informed decisions by strengthening their motivation to resolve their ambivalences [31]. It has been applied in work with patients hesitant to change their behavior [13] and has been considered as a method to address COVID-19 vaccine hesitancy [29]. Other work has also shown that correcting false beliefs through direct counterarguing risks perpetuating misinformation [38, 43] and can backfire [52]. Instead of directly dismissing people’s concerns and misconceptions, a more promising approach is to acknowledge them [18] and emphasize factually correct information [38]. Increasing anecdotal evidence also highlights the importance of personal dialogue with trusted persons (e.g. [25]). Accordingly, to design effective explanations it is not enough to focus only on the informativeness and understandability of their content. Rather, we also need to consider how they are framed and communicated to users.

A key element thereof is addressing people’s concerns in a way they will recognize. This requires that the scientific definition of a problem be extended with a user-centered problem description. A scientific problem definition frames people’s concerns in terms of scientific categories; in the case of COVID-19 vaccine hesitancy, for example, it would be based on models of the antecedents of vaccination behavior [8, 42], determining the main hesitancy factors from user responses to a standardized questionnaire [10]. Scientifically informed communication campaigns then refer to such abstractions to create messages addressing the main concerns (e.g. vaccine safety). They tend to reflect the terms of the scientific abstractions rather than the language actually used by people, which can adversely affect understanding and acceptance. To create a user-centered problem description (e.g. defining vaccine hesitancy concerns in the way specific groups of users describe them), methods from user-centered analysis can be applied (e.g. user interviews or content analysis of online discussion forums).

This allows the explanations to be constructed closer to the phrasing of the recipients. Mapping user-centered concern descriptions to the scientific definition (e.g. vaccine hesitancy factors) then makes it possible to identify the scientific knowledge from which explanations can be formed. User-centered explanations can then be developed by applying an explainability technique (e.g. contrastive or counterfactual explanation) to validated knowledge addressing a given COVID-19 concern and combining this with communicational and visualization elements. Figure 1 depicts the initial set of conceptual elements we have used to explore this approach by designing a prototype of a user-centered explanation for complex COVID-19 information, described in the next section.

Fig. 1. Initial conceptual design framework for exploring user-centered explanations for complex COVID-19 information
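As a concrete, simplified illustration of the mapping step in this framework, the following sketch (our own hypothetical example, not the project’s implementation) links a user-phrased concern to a scientific hesitancy category (here one of the 5C factors, as an assumed mapping), to validated knowledge sources, and to a chosen explanation type.

from dataclasses import dataclass

@dataclass
class ConcernMapping:
    user_phrasing: str        # how users actually describe the concern
    hesitancy_factor: str     # scientific category, e.g. a 5C factor
    knowledge_sources: list   # validated content addressing the concern
    explanation_type: str     # e.g. "contrastive" or "counterfactual"

mappings = [
    ConcernMapping(
        user_phrasing="The COVID-19 vaccines were developed too fast to be safe",
        hesitancy_factor="confidence",        # assumed 5C category for this concern
        knowledge_sources=["official vaccine development FAQs (placeholder)"],
        explanation_type="contrastive",       # "why fast here, rather than slow as before?"
    ),
]

def select_mapping(concern_text, mappings):
    """Pick the mapping whose user phrasing shares the most words with the stated concern (naive matching)."""
    return max(mappings, key=lambda m: len(set(concern_text.lower().split())
                                           & set(m.user_phrasing.lower().split())))

print(select_mapping("why were the vaccines developed so fast?", mappings).explanation_type)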

4 Designing User-Centered Explanations for Complex COVID-19 Information

The core of our approach is an iterative user-centered process similar to Design Thinking. We first developed an empathic understanding of the problem by combining a scientific literature analysis with a user-centered analysis of how people informed themselves about COVID-19 and of their concerns about COVID-19 vaccines (empathize phase). Based on the insights gained, we chose a frequently occurring concern as an example of a typical explainability problem in the COVID-19 pandemic for which to develop a prototypical solution. This led to the following problem definition based on the users’ point of view: “how could COVID-19 vaccines be safe if they have been developed so quickly – unlike previous vaccines that took much longer?” (definition phase).

We enriched the problem definition with the specific needs of this use case based on insights from the empathize phase. We then aimed to answer this question by applying a contrastive explanation technique, borrowed from explainable AI, and adapting it to the needs of this use case by incorporating specific techniques from persuasive communication, HCI, and visualization (ideation phase). By combining these different techniques, we constructed a prototype of our explanation (prototype phase), which we then tested with experts and with target users (testing phase). A more detailed description of the activities and processes applied in each phase of a typical Design Thinking process during the explanation development is given below.

4.1 Phase 1: Empathize

To address the need for better communicational tools for explaining complex COVID-19 vaccine-related information, we first needed to understand how experts from the scientific community frame and construe the knowledge on vaccine hesitancy, and, more importantly, how they present and explain it. Turning first to the literature, we mapped out the problems identified so far in scientific research, which led us to use the 5C model [8,9,10, 42, 74] of vaccine hesitancy factors and ongoing studies of COVID-19 vaccination attitudes [20, 21] as first reference points. However, beyond the existing formal scientific understanding, great emphasis was put on gaining a better user-centered understanding of the problem.

To develop an empathic understanding of the challenges that people have faced with respect to information related to COVID-19 vaccines, we turned to people directly, asking them about their vaccine-related concerns, the ways they informed themselves, and how they perceived the materials covering these issues that were available to them. To map out a user-centered description of COVID-19 vaccination concerns, we also analyzed how people described their concerns in anecdotal evidence from a sample of media articles (e.g. [25]), discussions in online forums (e.g. [55]), and interviews available on YouTube (e.g. [75]). This analysis covered online content from the beginning of the pandemic to October 2021. In this way, we not only discovered the variety of concerns people had related to COVID-19 vaccines, but also deepened our understanding of how they communicated these concerns, the language they used, which information they were already familiar with, and which of the communication techniques used were effective. We thus developed an empathic understanding of the problems that people experienced and talked about. This allowed us to take a step back from a more formal scientific understanding and to become better aware of the users’ perspective and their explanation needs.

4.2 Phase 2: Define

In synthesizing our findings from multiple sources, we first created personas, each representing a certain group and outlining its most common COVID-19 vaccine concerns. Defining the problem by using personas helped us bridge the gap between our original problem statement (there is a need for better explanation of COVID-19 vaccine information) and a human-centered problem statement worded by people themselves, in terms of specific topics that were not appropriately communicated and that led to feelings of unease and fear. One insight that became particularly apparent at this stage was that one solution does not fit all, because general explanations do not answer the specific concerns people have. Also, by integrating these insights with findings from related scientific work, we were able to identify specific communicational requirements that an explanation should address in order to be effective (e.g. acknowledge the user’s concerns and choose a suitable wording and communicational style to avoid defensive responses such as counterarguing). Such requirements have so far received little consideration in existing work on developing specific types of AI explanations.

To create explanations better tailored to user needs, we first defined our target group to be young people, who were found to be more vaccine hesitant [20]. Further, for the first prototype we decided to focus on a repeatedly mentioned concern: that COVID-19 vaccines have been developed too fast to be safe, implying that there was not enough time for proper testing. This is not a usual vaccine hesitancy factor (other vaccines were developed more slowly), but is very specifically related to COVID-19 vaccines. Moreover, in the online materials of official health-related sources this question either remains unanswered, or the answer uses complex language that is hardly understandable to the general public (see e.g. the readability study of COVID-19 websites [5]).

Additionally, the problem of the understandability and completeness of official information was raised by people themselves, both in the online discussions and interviews we analyzed and in our own interviews (see the testing phase). In all those cases, people often stressed that the concern about COVID-19 vaccines being developed too fast to be safe was addressed in informative materials either only in a generic, reassuring manner (claiming that the COVID-19 vaccines are safe despite the speed of the development process, but not explaining why and how that was possible) or in long texts using difficult scientific language. Indeed, to understand how it was possible for COVID-19 vaccines to be developed so quickly without risking their safety, one needs to understand the scientific process of vaccine development and what impacts it.

Being complex and hard to explain to non-experts are features that the scientific process of vaccine development shares with complex AI systems. This is why using techniques from AI explainability to explain it appeared as a suitable approach in the first place. The choice of the specific explainability technique to be applied (and adapted) in this case was then guided primarily by the users’ framing of the problem, rather than by how an expert would have explained it from their expert perspective. In particular, the users’ problem perception stems from an implicit contrasting of two cases: the vaccines developed in the circumstances of the COVID-19 pandemic (fast) and other previously developed vaccines (slow). Consequently, the core problem requiring an explanation from the users’ point of view was framed as “how could COVID-19 vaccines be safe if they have been developed so quickly – unlike previous vaccines that took much longer?”.

4.3 Phase 3: Ideate

Given this user-centered problem definition, a contrastive explanation technique as also used for AI explanations – i.e. explaining “Why has outcome P occurred, rather than outcome Q?” [44] – naturally lends itself as a suitable way to address the users’ explanation needs. Such contrastive explanations enable people to understand cause-effect relationships by contrasting the observed outcome with information about why another possible outcome hasn’t occurred [44, 45].

Therefore, we made this contrastive explanatory principle the core of our conceptual solution design. To apply it effectively, the user-centered perspective requires us to consider different factors that can impact its effectiveness and user acceptance (as identified in the empathize and define phases). To overcome defensive responses and cognitive biases (e.g. counterarguing, confirmation bias) [13], we chose to apply the strategy of narrative exposition and parasocial identification [53, 61], which can help overcome resistance to messages opposing one’s beliefs [47, 70]. Hence, we presented the contrastive explanation in the form of a dialogue between a user and a scientist character. Furthermore, learning from motivational interviewing techniques that advise acknowledging a person’s concerns in order to help resolve ambivalences [31], an accepting tone was used with the aim of demonstrating empathy rather than confrontation. Additionally, we integrated metaphorical and symbolic visualizations to make the contrastive principle easy to understand [72] and to stimulate and facilitate acceptance of messages that may require a change in behavior [49]. Finally, best practices from HCI and interaction design were applied to make the prototype easy to use and understand (e.g. giving users control of the flow through the explanation steps, and reducing information density by allowing them to control/expand the amount of information shown).

4.4 Phase 4: Prototype

In developing the first prototype we focused on understandability and trustworthiness for the recipients. We aimed at an explanation form that could be easily integrated into websites of official institutions (e.g. FAQs) and shared on social media. The content for the explanation was taken and adapted from official COVID-19-related websites [15, 26, 32, 40, 64, 65, 76]. The accuracy of our text adaptations was verified by several physicians. The process of the prototype development was an iterative one, where we tried out how different elements considered in the ideation phase could be combined to create an interactive explanation (see Fig. 2).

Before presenting the explanation itself, the user’s concern is acknowledged by presenting it in their own terms: “Why have the COVID-19 vaccines been developed so quickly, rather than taking much more time as it was with previously developed vaccines?”. This is done both to elicit an empathic response (a technique adopted from motivational interviewing) and to introduce the contrastive principle already in the user-centered problem statement, reflecting how people commonly phrased their worry. The contrastive explanation principle is then implemented by structuring the explanation around four main challenges that were solved more quickly for the COVID-19 vaccines (funding, volunteers, data, and bureaucracy), allowing them to be developed faster than in the usual process of vaccine development.

The main differences in the factors determining the two different outcomes are explained for each of the four challenges, i.e. contrasted one against the other, in a stepwise structure. This contrasting is done both visually and with a textual explanation, framed as a narrative and presented in the form of a dialogue. Progression through the different explanation steps is controlled by the user. The clickable “Is there more to it?” and “OK, let’s see!” buttons are examples of the interactivity of the explanation, providing the user with self-paced exploration while also conveying the dialogue element (see Fig. 3). The bearer of the message, i.e. the user’s interlocutor, is visualized as a scientist character, to support the trustworthiness of the explanation.
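As a purely hypothetical sketch (our own placeholder, not the prototype’s actual implementation), the stepwise contrastive structure could be represented as data: one entry per challenge named above, each contrasting the usual process with the COVID-19 case and carrying the user-controlled dialogue prompt. All texts are illustrative simplifications, not the prototype’s wording.

EXPLANATION_STEPS = [
    {
        "challenge": "funding",
        "usual_process": "Funding arrives in stages, so development phases run one after another.",
        "covid_process": "Large up-front funding allowed phases to be prepared and run in parallel.",
        "prompt": "Is there more to it?",
    },
    {
        "challenge": "volunteers",
        "usual_process": "Recruiting enough trial volunteers usually takes a long time.",
        "covid_process": "High public attention meant volunteers signed up very quickly.",
        "prompt": "Is there more to it?",
    },
    {
        "challenge": "data",
        "usual_process": "Outcome data accumulates slowly when a disease is rare.",
        "covid_process": "Widespread infections produced trial outcome data much faster.",
        "prompt": "Is there more to it?",
    },
    {
        "challenge": "bureaucracy",
        "usual_process": "Regulatory review steps are queued one after another.",
        "covid_process": "Rolling reviews let regulators assess the data as it came in.",
        "prompt": "OK, let's see!",
    },
]

def render_step(step):
    """Print one self-paced, contrastive step of the dialogue-style explanation."""
    print(f"Challenge: {step['challenge']}")
    print(f"  Usual vaccine development: {step['usual_process']}")
    print(f"  COVID-19 vaccines:         {step['covid_process']}")
    print(f"  [{step['prompt']}]")  # button label; the user decides when to continue

for step in EXPLANATION_STEPS:
    render_step(step)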

Additional visualization elements further help convey the message: two color-coded progress bars visualize the difference between the COVID-19 vaccine development and the usual vaccine development process for each challenge. Visual symbols also depict the relation between the main factors responsible for the two different (contrastive) outcomes. A visual summary is presented at the end of the interactive explanation, serving both as a repetition device and as an element easily shareable on social media.
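The following matplotlib sketch (our own approximation, not the prototype’s code) illustrates the kind of color-coded, side-by-side comparison used in the prototype, contrasting how long each challenge takes in the usual process versus the COVID-19 case; all durations are invented placeholder values.

import matplotlib.pyplot as plt

challenges = ["funding", "volunteers", "data", "bureaucracy"]
usual_years = [4.0, 3.0, 2.0, 2.0]    # hypothetical durations for the usual process
covid_years = [0.5, 0.5, 0.5, 0.5]    # hypothetical durations for the COVID-19 vaccines

fig, ax = plt.subplots(figsize=(6, 3))
positions = range(len(challenges))
ax.barh([p + 0.2 for p in positions], usual_years, height=0.4,
        color="tab:gray", label="usual vaccine development")
ax.barh([p - 0.2 for p in positions], covid_years, height=0.4,
        color="tab:green", label="COVID-19 vaccines")
ax.set_yticks(list(positions))
ax.set_yticklabels(challenges)
ax.set_xlabel("time needed (years, illustrative only)")
ax.legend()
plt.tight_layout()
plt.show()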

Fig. 2. The initial screen of the prototype with elements of the explanation mapped out.

Fig. 3. One of the four explanation steps with elements of the explanation mapped out.

4.5 Phase 5: Test

Formative tests with target users were done in the form of semi-structured think-aloud interviews in December 2021 (four via Zoom, two in person). The goal of these tests was to verify the overall solution concept and to obtain feedback for improving the prototype. A convenience sample of 6 participants (4 female, 2 male, 22–32 years) representing the target groups of our approach took part. The interviews lasted 1–1.5 h and started with eliciting participants’ experience with the pandemic, their information sources, their stance on COVID-19 vaccination and the worries they experienced or encountered. All participants were fully vaccinated, but three had been vaccine hesitant prior to vaccination, and three had experience with persuading vaccine-hesitant people to vaccinate. They ranged from poorly to well informed regarding the COVID-19 pandemic. Participants were then shown an FAQ excerpt from the webpage of a health authority (declared as coming from an official source but without revealing which one) [30] and then our explanation prototype. Both addressed the same concern: that the COVID-19 vaccines were developed too fast. The participants were encouraged to react freely both while reading the text and when exploring the prototype. After each exposure we asked about their impressions of the given example (FAQ excerpt, our explanation prototype) regarding its relevance, suitability, understandability and the likelihood that it would help people resolve the given concern. We also asked whether it could have a soothing effect, whether the participants would share it with a concerned person and how they perceived the individual elements of the prototype. The participants were finally asked to compare our explanation prototype with the FAQ example.

Overall, all interview participants found the explanation prototype easy to understand, well-structured, and written in a user-friendly language. Four of them stated that contrasting the COVID-19 vaccine development process with the usual vaccine development process helped with comprehension. Five pointed out that the stepwise narrative of four challenges gave the explanation a good structure, making it easy to follow. However, five interviewees indicated that they did not recognize that there was a dialogue between a person and a scientist. The interviews also provided valuable insights for improving usability, as all interviewees stated that more interactivity and further visualizations could provide an even better understanding. While the interviews suggest that the information presented was clear and easily understandable, this was characterized as “necessary, but not enough”. Each participant mentioned at least one of the following: that it has the potential to draw attention, start a conversation, spark curiosity, or serve as a resource, while remaining aware that the decision to vaccinate is influenced by many different factors. Three interview participants pointed out that the explanation could backfire, since it isolates one specific worry and explains it in depth, potentially causing feelings of suspicion and skepticism (with respect to why only this specific concern was chosen to be addressed). Other interviewees did not use terms such as “suspicion” or “skepticism”, but did mention that they would like to see such explanations for different concerns causing vaccine hesitancy, instead of just this one.

A valuable insight arises from these remarks – although it is beneficial that the explanation is specific and directed, at the same time it cannot be too narrow, otherwise it may be considered incomplete, or even biased – and thus less trustworthy. This suggests that the perceived completeness of an explanation is important for user acceptance and that completeness needs to be achieved by considering the wider context of the explanation, not just the specific question it is addressing. Designing effective explanations thus requires us to consider which other related issues should also be addressed when explaining a specific point (e.g. based on what other issues users consider to be related). This insight readily translates to AI explanations as well, as many methods provide explanations only of a specific result of an AI system. Although in the case of this prototype the idea was to tackle just one specific COVID-19 vaccine-related concern as a prototypical example, it would have been beneficial to provide users with the option to explore further explanations addressing related issues to avoid the impression of selective exposure and thus reinforce the trustworthiness of the given explanation(s).

5 Evaluation

5.1 Methodology

The developed prototype was evaluated during two interactive online workshops: the first took place in December 2021 with bachelor students in health communication, the second in January 2022 with high-school students. After a short introduction to the project, the students could explore the prototype (without prior explanation) and give feedback through an online survey. A consent form for survey participation was provided at the beginning of the survey, including GDPR-compliant information on how their (anonymized) data would be used for research. The survey contained questions about the suitability of the given explanation for this particular concern, the understandability of the explanation and how different elements of the explanation influence it, and, lastly, the potential impact of the explanation on future behavior. Responses were elicited on a 5-point Likert scale with labels at the endpoints (1 = strongly disagree, 5 = strongly agree). The survey was completed by 45 bachelor students (69% female, 29% male, age 19–28) and 43 high-school students (60% female, 21% male, 9% diverse, age 15–18). The majority of participants were vaccinated against COVID-19 (74%), 14% were not vaccinated, and 8% had recovered from the virus (a further 3% did not want to disclose this information).

5.2 Results

Overall, 82.5% of all participants either partially or strongly agreed that the explanation was comprehensible. 74% partially or strongly agreed that the contrastive element makes the explanation more understandable, while 85% stated that the stepwise explanation process through individual challenges improved understandability. This supports the choice of the contrastive technique and the structured narrative. It also supports suggestions from previous work (e.g. [45]) that more user-centered explainability techniques, such as contrastive explanations, could be well-suited for non-experts and would merit further investigation in this context.

Regarding individual design elements of the explanation, visual elements were found helpful for understanding the explanation by 57% of respondents. The respondents also mostly agreed (55%) that the dialogue format contributed to the explanation being more understandable, although more than a third (34%) were undecided on this matter. Participants’ responses on how specific elements made the explanation more understandable are shown in Fig. 4.

Fig. 4. Perceived impact of the explanation elements on understandability

We performed a non-parametric Mann-Whitney test to check whether bachelor and high-school students differed in how they perceived the different elements to support the understandability of the explanation. High-school students found the dialogue and the visualizations more helpful for understandability than the bachelor students did (Mann-Whitney U = 1371, p < .001 for the dialogue; Mann-Whitney U = 1229, p < .05 for the visualizations). For both the dialogue and the visualizations, bachelor students on average expressed a more neutral stance towards how these elements impact the understandability of the explanation (Median = 3), while high-school students agreed more strongly that these elements help make the explanation more understandable (Median = 4). The source of these differences is not clear, but they suggest that some additional tailoring of the presentation style could be done for these two groups of users.
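For illustration, a group comparison of this kind can be computed with scipy; the sketch below uses made-up placeholder ratings, not the study’s actual responses.

from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert ratings ("the dialogue made the explanation more understandable").
bachelor_ratings = [3, 3, 2, 4, 3, 3, 2, 3]      # placeholder values, not study data
highschool_ratings = [4, 5, 4, 4, 3, 5, 4, 4]    # placeholder values, not study data

u, p = mannwhitneyu(bachelor_ratings, highschool_ratings, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3f}")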

Insights from the free-text feedback field in the survey also confirm that there were difficulties in recognizing the dialogue element. This qualitative part of the survey likewise affirmed the observation from the interviews in the testing phase that addressing only one user concern in the explanation left an impression of incompleteness. Another common comment was that there should be even more visualizations, less text, and that the prototype should be more interactive. On the one hand, this points to a preference for a visual rather than a textual medium of communication for the explanation. On the other hand, the wish for more interactivity could be related to a preference for more user control over the information flow and presentation. Both observations give indications of user preferences that could also play a role in, and inform, the design of more effective user-centered AI explanations for non-experts.

While being careful not to draw overly strong conclusions from this explorative evaluation, the results indicate that the prototype could have a positive impact on users’ openness towards considering the information presented in the explanation, though with caveats. Two-thirds of the survey respondents thought that the worry of the COVID-19 vaccines being developed too fast to be safe was well addressed in the prototype example (68%). Regarding the question of whether the example could increase the willingness to vaccinate, almost half of the respondents were positive in their assessments (48%), although a sizeable proportion were neutral (39%). Only 14% of the respondents thought that the example could backfire and result in decreased willingness to vaccinate, while 26% were neutral about this. Over half of the survey respondents stated that they would share the presented explanation with a vaccine-hesitant person (55%), while around 25% were undecided and 18% would not do so. Overall, these results suggest that the explanation was considered effective by the majority of participants, but could use improvement to reach the undecided ones (especially regarding its extension with other related user issues and concerns). The obtained data do not provide any specific evidence explaining the reasons behind the small but existing proportion of negative responses. These could be due to deep-seated prior beliefs (e.g. “anti-vaxxers”) that cannot be adequately addressed by explanations alone.

Though the results indicate that the prototype could have a positive impact on users’ openness towards considering such explanations and on their propensity to share them with vaccine-hesitant people, the very small number of unvaccinated study participants does not allow conclusions about the potential impact on the most critical users with stronger negative prior beliefs. This is a limitation, albeit a well-known challenge for any work in this area. On the other hand, our study did include participants who had previously been skeptical towards COVID-19 vaccines, as well as participants who had helped vaccine-hesitant people resolve their concerns; their assessments can, to a certain extent, serve as a proxy in the absence of a larger number of vaccine-hesitant participants.

6 Discussion and Lessons Learned

It is a well-known premise and proven experience of Design Thinking and related user-centered approaches that adopting a user-centered perspective helps us understand how a given problem is actually experienced and reasoned about from the user’s point of view. In our case, the insights from the empathize and define stages emphasize the importance of not relying solely on the expert understanding and definition of a problem that needs to be explained, neither content-wise nor language-wise. This readily translates from our specific case of explaining the “black box” of COVID-19 related concerns to explanations of complex AI systems. We should always try to understand how users experience a problem or a system, how they think about it (i.e. interpret it and form a mental model), and how they talk about it (i.e. externalize and update their understanding through communication with others). That should guide the decisions regarding which results and/or parts of the system need explaining, what type of explanation technique could be best suited and how the explanations should be formulated or presented (e.g. text vs. visual, degree of interactivity).

The insights obtained in the ideate and prototype stages emphasize the importance of an interdisciplinary approach to designing effective explanations. The evaluation results suggest that extending the chosen technique from explainable AI (contrastive explanation) with techniques and findings from persuasive communication, HCI and visualization has contributed to making the prototype more understandable and effective for users. The differing attitudes of different groups of users to specific elements (e.g. the visual elements, the dialogue principle) suggest that different users value and need different presentation styles to varying extents. That reflects well-known findings from a long tradition of HCI research, but also from more recent work on motivating and facilitating behavioral change [36, 49, 71].

The integrated approach also made the technique we used as the core of our explanation more flexible: the prototype shows how adaptable it can be while still retaining the basic structure of the contrastive principle. In spite of carefully tailoring our explanation to user needs, the feedback from the testing phase shows that for an explanation to be perceived as complete it should include sufficient context beyond its specific focus (e.g. related issues the users might consider after being confronted with the explanation). Otherwise, there is a risk that the explanation is perceived as insufficiently complete (“not enough”), undermining its trustworthiness.

Moreover, not only the context of the problem addressed by the explanation needs to be considered, but also the user’s context, their prior beliefs, existing knowledge, and expectations. In terms of a common formal framing of contrastive explanations (“Why P, rather than Q?”) [44], we need to know what the “Q” amounts to for different users, i.e. the different alternative outcomes that different users are (often implicitly) considering when questioning an observed situation or result of an AI system. Along the same lines, not only is it important to know the alternative outcomes that users contrast with the observed reality, but perhaps just as importantly, who is asking the question, and what their motivation, prior beliefs, background and values are. The latter aspects are a difficult challenge not only for future work on expanding our own approach, but also for research on AI explainability in general, where they have yet to receive appropriate attention.
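As a small illustration of this point (our own hypothetical example, not part of the project), the same observed outcome P can yield quite different contrastive questions depending on the foil Q that a particular user implicitly has in mind:

def contrastive_question(observed_p, foil_q):
    """Render the formal framing "Why P, rather than Q?" for a given observation and foil."""
    return f"Why {observed_p}, rather than {foil_q}?"

observed = "were the COVID-19 vaccines developed within about a year"
user_foils = {
    "user_a": "taking roughly a decade, as earlier vaccines did",        # hypothetical foil
    "user_b": "waiting for several more years of safety data first",     # hypothetical foil
}
for user, foil in user_foils.items():
    print(f"{user}: {contrastive_question(observed, foil)}")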

7 Conclusions

In this work we have explored how to translate techniques and insights from AI explainability research into effective explanations of complex issues in the COVID-19 pandemic through a user-centered approach. We have discussed how the problem of AI explainability relates to more general explainability problems that can occur in different contexts where people face complex systems and phenomena that they cannot directly observe and readily understand, thus perceiving them as opaque “black boxes” and questioning their validity and trustworthiness. We have shown how explaining complex COVID-19 information is an example of such an explainability problem.

Accordingly, we have discussed how we developed an interdisciplinary conceptual design framework and applied a user-centered approach based on Design Thinking to develop a first prototype demonstrating the adaptation of an AI explainability technique to explain a complex COVID-19 vaccine concern. The developed prototype integrates a contrastive explanation technique with methods from communication science, visualization and HCI to provide a user-centered explanation for a typical question regarding the safety of COVID-19 vaccines.

The first prototype and the results of its evaluation with potential users show that the proposed conceptual approach can inform the design of effective user-centered explanations for complex issues in a way that increases their understandability. Our focus on cognitive aspects such as understandability thereby addresses only one type of factor in vaccine hesitancy; the reasons for hesitancy are manifold across different individual, social and cultural contexts. The presented explanation approach could thus only ever provide one piece of the solution puzzle.

Overall, the results indicate that it is possible to apply methods and insights from explainable AI to explainability problems in other fields and support the suitability of our conceptual framework to inform that. In addition, we have shown how the insights and lessons learned from this work could inform further work on user-centered approaches to explainable AI itself.

Although we have presented a very specific example aimed at helping people resolve a specific concern, we believe that its structural composition could trigger a broader reflection: the narrative of four typical challenges that impact the vaccine development process illustrates some of the broader aspects of how scientific research works, what procedures and challenges are involved, and how they were resolved in this case. To us, this relates our approach to a bigger issue that we need to address: how to effectively communicate the complexities of scientific research without either overwhelming or oversimplifying, while supporting trust-building through increased understanding.

In further work we plan to apply further explanation techniques (e.g. counterfactual explanations) to additional types of concerns and to evaluate the ecological validity of the approach in more realistic settings. From this, we hope to derive a more comprehensive conceptual framework for designing effective user-centered explanations both for COVID-19-related communication and for informing further work on user-centered approaches to AI explainability itself.