This paper presents a user-centered approach to translating techniques and insights from AI explainability research into effective explanations of complex issues in other fields, using COVID-19 as an example. We show how the problem of AI explainability and the explainability problem in the COVID-19 pandemic are related: both are specific instances of a more general explainability problem that occurs when people face opaque, complex systems and processes whose functioning is not readily observable and understandable to them (“black boxes”). Accordingly, we discuss how we applied an interdisciplinary, user-centered approach based on Design Thinking to develop a prototype of a user-centered explanation for a complex issue concerning people’s perception of COVID-19 vaccine development. The prototype demonstrates how AI explainability techniques can be adapted and integrated with methods from communication science, visualization and HCI in this context. We also discuss results from a first evaluation in a user study with 88 participants and outline future work. The results indicate that methods and insights from explainable AI can be effectively applied to explainability problems in other fields, and support the suitability of our conceptual framework for informing such work. In addition, we show how the lessons learned in the process provide new insights for further work on user-centered approaches to explainable AI itself.
- AI explainability
- Design thinking
- User-centered explanations
Effective explanations are crucial for making AI more understandable and trustworthy for users [3, 45, 51]. Although different explanation techniques exist [16, 51, 67], creating effective AI explanations for non-experts remains a challenge. Recent research highlights the need for a stronger user-centered approach to AI explainability (XAI) in order to develop explanations tailored to end-users’ needs, backgrounds, and expectations [58, 72]. Design Thinking and user-centered design are well known for their effectiveness in understanding users and developing solutions that satisfy users’ needs in many fields, especially for interactive systems [11, 14, 30, 33, 54]. They can thus help us develop better explanations. Moreover, a user-centered lens helps us realize that the challenge of explainability is not unique to AI, but also appears in other contexts that are complex, opaque, hard to understand and require dealing with uncertainty.
In fact, such challenges can also be observed in attempts to explain scientific knowledge and complex information in the COVID-19 pandemic to non-experts. In both cases, the users (here, the general public) are faced with what they perceive as opaque, complex systems whose reasoning processes are not readily observable and understandable to them: e.g. recommendations from AI systems in the case of AI, and expert recommendations in the case of the COVID-19 pandemic. Just as the understandability of AI results is important for their trustworthiness and acceptance by users, so is the understandability of information about complex phenomena and processes in the COVID-19 pandemic an important factor for citizens’ acceptance of COVID-19 containment measures (e.g. vaccination, masks). Translating insights from AI explainability research (e.g. which types of explanations are best provided for which types of problems and users, and in what form) could thus lead to novel approaches for increasing the understandability and acceptance of explanations of complex issues in crises and emergencies such as COVID-19.
While different AI explanation techniques have been proposed (see overviews in [16, 28, 39, 67]), most of them have been developed from a data-driven rather than a user-centered perspective. In contrast, recent research has highlighted the need for a more interdisciplinary and user-centered approach to AI explainability in order to provide explanations more suitable for and better tailored to different end-users and stakeholders [16, 28, 39, 44, 45, 67, 72]. This work has also suggested how AI explainability can learn from related work in other domains (e.g. psychology, cognitive science, philosophy), sometimes referred to as “explanation sciences”. Different contributions have pointed out the importance of human factors (e.g. cognitive biases, prior beliefs, mental models) for the interpretation, trust and acceptance of AI explanations [2, 37, 72]. However, the importance of adopting a user-centered design process to achieve this has so far received little attention (with some exceptions, e.g. [12, 48]).
In addition, we propose to also consider what we could learn from translating methods and insights from explainable AI to developing effective solutions for related “explainability problems” in other fields of science and society. How can a user-centered perspective help us in doing so? And what can we learn back from that experience to inform new user-centered approaches to explainable AI?
Against this background, in our XAI4COVID project we have been exploring how to translate techniques and insights from AI explainability research to developing effective explanations of complex issues in the COVID-19 pandemic through a user-centered approach. Here, we describe our user-centered approach, an interdisciplinary conceptual framework and a first prototype developed to achieve that. The prototype integrates AI explainability techniques with methods from communication science, visualization and HCI to provide a user-centered explanation for a typical question regarding the safety of COVID-19 vaccines. We also discuss results from a first evaluation in an explorative user study with 88 participants and outline future work. By learning how well the selected explanation technique and its extension with methods from other fields worked in this context, we aim to identify suggestions for further work that could inform new ideas for better user-centered design of AI explanations as well.
2 AI Explainability and the Explainability of Complex COVID-19 Information
Not being able to understand why an AI system exhibits a certain behavior and why it has produced a certain result can lead to wrong interpretations of its results and their reliability, to incorrect attribution of cause-effect relationships and ultimately to wrong or harmful decisions. The resulting perceived opacity and inconsistencies diminish users’ trust in the system. Similarly, not being able to understand the reasoning behind recommended COVID-19 containment measures undermines their understandability and trustworthiness for citizens.
A case in point is the public communication of information related to vaccine safety and vaccine hesitancy in the COVID-19 pandemic. Although vaccine hesitancy has many influencing factors (especially deep-seated ideological beliefs and attitudes, social and cultural context, and disinformation), the understandability and trustworthiness of information about COVID-19 vaccine safety and the vaccines’ fast development have also been among the challenges for people’s acceptance, especially in the early stages of the pandemic. This has been exacerbated by the high uncertainty and complexity of fast-changing information from what, for non-experts, can seem like opaque processes (e.g. vaccine development, vaccine testing).
The resulting feeling of being overwhelmed by the complexity of the situation and its fast pace, together with the fear and anxiety induced by high uncertainty, can lead to a preference for inaction in the face of perceived uncertainty and the complexity of a decision’s potential consequences. Studies of COVID-19 vaccination attitudes, anecdotal evidence from media reports, interviews available on YouTube and discussions among vaccine-hesitant people in online forums point in this direction. They suggest that, beyond groups of people with deep-seated anti-vaccination beliefs (e.g. of ideological or political origin), many people have been authentically struggling to make sense of huge amounts of information that was, to them, incomprehensible and contradictory, including widespread misinformation.
Such situations increase the extent to which people construct their own theories about the reality of the situation, often based on what they are directly experiencing and on false attributions of causal relationships. Similar problems occur when people construct their own explanations of systems (mental models, “folk theories”) to interpret and predict their behavior, as evidenced in HCI and AI explainability research building on cognitive science. Users’ understanding of system behavior is often flawed, which in the case of AI systems can lead to risks of incorrect interpretation and wrong decisions, misplaced trust in the system and outright societal harms. Explainability techniques can help address such issues (e.g. [1, 51, 72]) and could thus also help address related problems in COVID-19 communication.
Similarly, just as non-experts reason about AI systems differently than experts do, so does the general public reason differently than experts about health risks and vaccines in the COVID-19 pandemic. Public risk perceptions are informed by heuristics such as the affect heuristic (“how do I feel about it?”) and the availability heuristic (how easily a risk can be recalled from memory) [24, 56]. These deviate from the risk decision rules used by experts. Accordingly, just as non-experts need different types of AI explanations than experts do, so too does the communication of complex information in crises and emergencies such as the COVID-19 pandemic require user-centered explanations tailored to the needs of non-experts.
3 Theoretical Background and Conceptual Design Framework
While many different AI explanation techniques have been proposed (see overviews in [3, 7, 16, 28, 45, 67]), most of them have been developed not from a user-centered [7, 28, 45, 72], but rather from a data-driven perspective (e.g. how to identify the importance of different features in the input data for a given result [41, 57], or how to identify the most important training examples). Such methods are mostly used by AI experts themselves, e.g. to debug or improve their AI models [7, 12].
As pointed out in [39, 44, 45], promising explanation types that could be well suited for non-expert users include contrastive and counterfactual explanations. Contrastive explanations can enable users to understand possible cause-effect relationships by contrasting the observed outcome with information about why another possible outcome has not occurred (“why not”) [44, 45]. Counterfactual explanations help users understand how the outcome would change with different inputs or other factors (“what if”). Such explanations allow users to understand how the situation would need to change for alternative outcomes to occur [44, 72].
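To make the distinction between the two explanation types concrete, they can be sketched on a toy rule-based decision model. This is a minimal illustration only; the factor names and thresholds are our own assumptions, not taken from any of the cited techniques.

```python
# Minimal sketch: contrastive ("why P rather than Q?") and counterfactual
# ("what if?") explanations for a toy rule-based decision model.
# All factor names and thresholds are illustrative assumptions.

THRESHOLDS = {"income": 40000, "score": 650}

def decide(applicant):
    """Toy model: approve only if every factor clears its threshold."""
    ok = all(applicant[k] >= v for k, v in THRESHOLDS.items())
    return "approved" if ok else "rejected"

def contrastive(applicant):
    """Why 'rejected' rather than 'approved': list the separating factors."""
    return [k for k, v in THRESHOLDS.items() if applicant[k] < v]

def counterfactual(applicant):
    """What minimal per-factor change would flip the outcome ('what if')."""
    return {k: v for k, v in THRESHOLDS.items() if applicant[k] < v}

applicant = {"income": 35000, "score": 700}
print(decide(applicant))          # rejected
print(contrastive(applicant))     # ['income']
print(counterfactual(applicant))  # {'income': 40000}
```

Real XAI methods derive such contrasts from a learned model rather than fixed rules, but the user-facing structure of the explanation is the same.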
Different techniques to construct and present such explanations have been proposed (see overviews in [7, 39, 44, 45, 72]). Visualization has been used successfully to support understanding of which features in the data have the most influence on system results (e.g. [57, 63]). Even simple visualization techniques can support contrastive and counterfactual reasoning for non-experts, e.g. by showing how differences in the factors determining a complex situation relate to different outcomes [7, 28, 72]. While visualizations of statistical data have been shown to be less effective in vaccination campaigns, the use of metaphorical and symbolic visualizations to support behavioral change has demonstrated promising results in other domains [19, 36, 49, 66].
Applying such methods to develop user-centered explanations addressing COVID-19 vaccine concerns requires their integration with approaches from persuasive communication and related work. Approaches combining storytelling and visualization have shown that this can make complex issues and scientific data better graspable for the general public [6, 23, 27, 60, 68, 69], including health risks [27, 73]. The persuasive communication literature suggests that effective communication requires overcoming common defensive psychological responses and cognitive biases (e.g. counterarguing, selective avoidance, confirmation bias) and stresses the importance of asserting the credibility of the message bearer. Approaches based on the elaboration likelihood model [53, 61] suggest that strategies such as narrative persuasion and parasocial identification can overcome resistance to messages opposing one’s current beliefs (e.g. perceived invulnerability) [22, 24, 47, 70]. Similarly, integrating metaphorical and symbolic visualizations with personalized motivational messages for behavioral recommendations has been successful in stimulating pro-environmental behavior [19, 49].
Motivational interviewing also highlights the need to express empathy and acknowledge a person’s concerns, rather than using confrontation, as part of a strategy for guiding people towards informed decisions by strengthening their motivation to resolve their ambivalences. It has been applied in work with patients hesitant to change their behavior and has been considered as a method to address COVID-19 vaccine hesitancy. Other work has also shown that correcting false beliefs through direct counterarguing risks perpetuating misinformation [38, 43] and can backfire. Instead of directly dismissing people’s concerns and misconceptions, a more promising approach is to acknowledge them and emphasize factually correct information. Increasing anecdotal evidence also highlights the importance of personal dialogue with trusted persons. Accordingly, to design effective explanations it is not enough to focus only on the informativeness and understandability of their content. Rather, we also need to consider how they are framed and communicated to users.
A key element thereof is addressing people’s concerns in a way they will recognize. This requires extending the scientific definition of a problem with a user-centered problem description. A scientific problem definition describes people’s concerns in terms of scientific categories; in the case of COVID-19 vaccine hesitancy, this would be based on models of the antecedents of vaccination behavior [8, 42], determining the main hesitancy factors from user responses to a standardized questionnaire. Scientifically informed communication campaigns then refer to such abstractions to create messages addressing the main concerns (e.g. vaccine safety). They tend to reflect the terms of the scientific abstractions rather than the language actually used by people, which can adversely impact understanding and acceptance. To create a user-centered problem description (e.g. defining vaccine hesitancy concerns as specific groups of users describe them), methods from user-centered analysis can be applied (e.g. user interviews or content analysis of online discussion forums).
This allows explanations to be constructed closer to the phrasing of the recipients. Mapping user-centered concern descriptions to the scientific definition (e.g. vaccine hesitancy factors) then makes it possible to identify scientific knowledge from which explanations can be formed. User-centered explanations can then be developed by applying an explainability technique (e.g. contrastive or counterfactual explanation) to validated knowledge addressing a given COVID-19 concern and combining this with communicational and visualization elements. Figure 1 depicts the initial set of conceptual elements we used to explore this approach by designing a prototype of a user-centered explanation for complex COVID-19 information, described in the next section.
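In its simplest form, the mapping from user-worded concerns to scientific categories and back to validated knowledge can be thought of as a two-step lookup. The sketch below illustrates this structure only; the concern phrasings, factor labels and knowledge entries are hypothetical examples, not the actual content analysis results.

```python
# Hypothetical sketch: map user-worded concerns to scientific hesitancy
# factors (e.g. 5C-style categories) and retrieve validated knowledge
# from which an explanation can be formed. All phrasings, labels and
# entries below are illustrative assumptions.

CONCERN_TO_FACTOR = {
    "the vaccines were developed too quickly to be safe": "confidence",
    "I am healthy, I don't think I need it": "complacency",
}

VALIDATED_KNOWLEDGE = {
    "confidence": ["overlapping trial phases", "prior mRNA research", "parallel funding"],
    "complacency": ["individual and community risk information"],
}

def knowledge_for(user_phrase):
    """Return the validated knowledge addressing a user-worded concern."""
    factor = CONCERN_TO_FACTOR.get(user_phrase)
    return VALIDATED_KNOWLEDGE.get(factor, [])

print(knowledge_for("the vaccines were developed too quickly to be safe"))
```

The point of the indirection is that messages can be phrased in the users’ own words while the underlying content stays anchored in the scientific model.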
4 Designing User-Centered Explanations for Complex COVID-19 Information
The core of our approach is an iterative user-centered process similar to Design Thinking. We first developed an empathic understanding of the problem by combining a scientific literature analysis with a user-centered analysis of how people informed themselves about COVID-19 and their concerns about COVID-19 vaccines (empathize phase). Based on the insights gained, we chose a frequently occurring concern as an example of a typical explainability problem in the COVID-19 pandemic for which to develop a prototypical solution. This led to the following problem definition based on the users’ point of view: “how could COVID-19 vaccines be safe if they have been developed so quickly – unlike previous vaccines that took much longer?” (define phase).
We enriched the problem definition with the specific needs of this use case based on insights from the empathize phase. We then aimed to answer this question by applying a contrastive explanation technique borrowed from explainable AI and adapting it to the needs of this use case by incorporating specific techniques from persuasive communication, HCI, and visualization (ideation phase). By combining these different techniques, we constructed a prototype of our explanation (prototype phase), which we then tested with experts and with target users (testing phase). A more detailed description of the activities and processes applied in the explanation development, in each phase of a typical Design Thinking process, is given below.
4.1 Phase 1: Empathize
To address the need for better communicational tools for explaining complex COVID-19 vaccine-related information, we first needed to understand how experts from the scientific community frame and construe the knowledge on vaccine hesitancy and, more importantly, how they present and explain it. Turning first to the literature, we mapped out the problems identified so far in scientific research, which led us to use the 5C model [8, 9, 10, 42, 74] of vaccine hesitancy factors and ongoing studies of COVID-19 vaccination attitudes [20, 21] as first reference points. However, beyond the existing formal scientific understanding, we placed great emphasis on gaining a better user-centered understanding of the problem.
To develop an empathic understanding of the challenges that people have faced with respect to information related to COVID-19 vaccines, we turned to people directly, asking them about their vaccine-related concerns, the ways they informed themselves, and how they perceived the materials covering these issues that were available to them. To map out a user-centered description of COVID-19 vaccination concerns, we also analyzed how people described their concerns in a sample of media articles, discussions in online forums, and interviews available on YouTube. This analysis covered online content ranging from the beginning of the pandemic to October 2021. In this way, we not only discovered the variety of concerns people had related to COVID-19 vaccines, but also deepened our understanding of how they communicated these concerns, the language they used, which information they were already familiar with, and which of the communication techniques used were effective. This gave us an empathic understanding of the problems that people experienced and talked about, and allowed us to take a step back from the more formal scientific understanding and become better aware of the users’ perspective and their explanation needs.
4.2 Phase 2: Define
In synthesizing our findings from multiple sources, we first created personas, each representing a certain group and outlining its most common COVID-19 vaccine concerns. Defining the problem using personas helped us bridge the gap between our original problem statement (there is a need for better explanation of COVID-19 vaccine information) and a human-centered problem statement worded by people themselves, in terms of specific topics that were not appropriately communicated and that led to feelings of unease and fear. One insight that became particularly apparent at this stage was that one solution does not fit all, because general explanations do not answer the specific concerns people have. Also, by integrating these insights with findings from related scientific work, we were able to identify specific communicational requirements that an explanation should address in order to be effective (e.g. acknowledge the user’s concerns and choose suitable wording and a communicational style that avoids defensive responses such as counterarguing). Such requirements have so far been little considered and rarely pointed out in existing work on developing specific types of AI explanations.
To create explanations better tailored to user needs, we first defined our target group as young people, who were found to be more vaccine hesitant. Further, for the first prototype we decided to focus on a repeatedly mentioned concern: that COVID-19 vaccines have been developed too fast to be safe, implying that the time did not suffice for proper testing. This is not a usual vaccine hesitancy factor (earlier vaccines were developed more slowly), but one very specifically related to COVID-19 vaccines. Moreover, in the online materials of official health-related sources this question either remains unanswered, or the answer uses complex language that is hardly understandable to the general public (see e.g. the readability study of COVID-19 websites).
Additionally, the problem of the understandability and completeness of official information was also raised by people in the interviews and online discussions we analyzed, as well as in our own interviews (see the testing phase). In all those cases, people often stressed that the concern that COVID-19 vaccines were developed too fast to be safe was addressed in informative materials either only in a generic, reassuring manner (claiming that the COVID-19 vaccines are safe despite the speed of the development process, but not explaining why and how that was possible) or in long texts using difficult scientific language. Indeed, to understand how it was possible for COVID-19 vaccines to be developed so quickly without risking their safety, one needs to understand the scientific process of vaccine development and what impacts it.
Being complex and hard to explain to non-experts are features that the scientific process of vaccine development shares with complex AI systems. This is why using techniques from AI explainability to explain it appeared a suitable approach in the first place. The choice of the specific explainability technique to be applied (and adapted) in this case was then guided primarily by the users’ framing of the problem, rather than by how an expert would have explained it from their expert perspective. In particular, the users’ problem perception stems from an implicit contrasting of two cases: that of the vaccines developed in the circumstances of the COVID-19 pandemic (fast) and that of other previously developed vaccines (slow). Consequently, the core problem requiring an explanation from the users’ point of view was framed as “how could COVID-19 vaccines be safe if they have been developed so quickly – unlike previous vaccines that took much longer?”.
4.3 Phase 3: Ideate
Given this user-centered problem definition, a contrastive explanation technique also used for AI explanations – i.e. explaining “Why has outcome P, rather than outcome Q, occurred?” – naturally lends itself as a suitable way to address the users’ explanation needs. Such contrastive explanations enable people to understand cause-effect relationships by contrasting the observed outcome with information about why another possible outcome has not occurred [44, 45].
Therefore, we made this contrastive explanatory principle the core of our conceptual solution design. To apply it effectively, the user-centered perspective requires us to consider different factors that can impact its effectiveness and user acceptance (as identified in the empathize and define phases). To overcome defensive responses and cognitive biases (e.g. counterarguing, confirmation bias), we chose to apply the strategy of narrative exposition and parasocial identification [53, 61], which can help overcome resistance to messages opposing one’s beliefs [47, 70]. Hence, we presented the contrastive explanation in the form of a dialogue between a user and a scientist character. Furthermore, following motivational interviewing techniques that advise acknowledging a person’s concerns in order to help resolve ambivalences, an accepting tone was used to demonstrate empathy rather than confrontation. Additionally, we integrated metaphorical and symbolic visualizations to make the contrastive principle easy to understand and to stimulate and facilitate acceptance of messages that may require a change in behavior. Finally, best practices from HCI and interaction design were applied to make the prototype easy to use and understand (e.g. giving users control of the flow through the explanation steps, and reducing information density by allowing them to control and expand the amount of information shown).
4.4 Phase 4: Prototype
In developing the first prototype we focused on understandability and trustworthiness for the recipients. We aimed at an explanation form that could be easily integrated into websites of official institutions (e.g. FAQs) and shared on social media. The content for the explanation was taken and adapted from official COVID-19-related websites [15, 26, 32, 40, 64, 65, 76]. The accuracy of our text adaptations was verified by several physicians. The process of the prototype development was an iterative one, where we tried out how different elements considered in the ideation phase could be combined to create an interactive explanation (see Fig. 2).
Before presenting the explanation itself, the user’s concern is acknowledged by presenting it in their own terms: “Why have the COVID-19 vaccines been developed so quickly, rather than taking much more time as it was with previously developed vaccines?”. This is done both to elicit an empathic response (a technique adopted from motivational interviewing), as well as to introduce the contrastive principle already in the user-centered problem statement, reflecting how people commonly phrased their worry. The contrastive explanation principle is then implemented by structuring the explanation around four main challenges that were solved more quickly for the COVID-19 vaccine (funding, volunteers, data, and bureaucracy), allowing it to be developed faster than the usual process of vaccine development.
The main differences in the factors determining the two different outcomes are explained for each of the four challenges, i.e. contrasted against one another in a stepwise structure. This contrasting is done both visually and with a textual explanation, framed as a narrative and presented in the form of a dialogue. Progress through the explanation steps is controlled by the user. The clickable “Is there more to it?” and “OK, let’s see!” buttons are examples of the interactivity of the explanation, providing the user with self-paced exploration while at the same time reinforcing the dialogue element (see Fig. 3). The bearer of the message, i.e. the user’s interlocutor, is visualized as a scientist character, intended to strengthen the trustworthiness of the explanation.
Additional visualization elements further help convey the message: two color-coded progress bars visualize the difference between the COVID-19 vaccine development, and the usual vaccine development process for each challenge. Visual symbols also depict the relation between the main factors responsible for the two different (contrastive) outcomes. A visual summary is presented at the end of the interactive explanation, serving both as a repetition device and as an element easily shareable on social media.
4.5 Phase 5: Test
Formative tests with target users were done in the form of semi-structured think-aloud interviews in December 2021 (four via Zoom, two in person). The goal of these tests was to verify the overall solution concept and obtain feedback for improving the prototype. A convenience sample of 6 participants (4 female, 2 male, aged 22–32) representing the target groups of our approach took part. The interviews were 1–1.5 h long and started by eliciting participants’ experience with the pandemic, their information sources, their stance on COVID-19 vaccination and the worries they experienced or encountered. All participants were fully vaccinated, but three had been vaccine hesitant prior to vaccination, and three had experience with persuading vaccine-hesitant people to vaccinate. They ranged from poorly to well informed regarding the COVID-19 pandemic. Participants were then shown an FAQ excerpt from the webpage of a health authority (declared as coming from an official source, but without revealing which one) and then our explanation prototype. Both addressed the same concern: that the COVID-19 vaccines were developed too fast. The participants were encouraged to react freely both while reading the text and while exploring the prototype. After each exposure we asked about their impressions of the given example (FAQ excerpt, our explanation prototype) regarding its relevance, suitability, understandability and the likelihood that it would help people resolve the given concern. We also asked whether it could have a soothing effect, whether the participants would share it with a concerned person, and how they perceived the individual elements of the prototype. The participants were finally asked to compare our explanation prototype with the FAQ example.
Overall, all interview participants found the explanation prototype easy to understand, well structured, and written in user-friendly language. Four of them stated that contrasting the COVID-19 vaccine development process with the usual vaccine development process helped comprehension. Five pointed out that the stepwise narrative of four challenges gave the explanation a good structure, making it easy to follow. However, five interviewees indicated that they did not realize there was a dialogue between a person and a scientist. The interviews also provided valuable insights for improving usability, as all interviewees stated that more interactivity and further visualizations could provide an even better understanding. While the interviews suggest that the information presented was clear and easily understandable, this was characterized as “necessary, but not enough”. All participants stated that the explanation has the potential to draw attention, start a conversation, spark curiosity or serve as a resource, while remaining aware that the decision to vaccinate is influenced by many different factors. Three interview participants pointed out that the explanation could backfire, since it isolates one specific worry and explains it in depth, potentially causing feelings of suspicion and skepticism (with respect to why only this specific concern was chosen to be addressed). Other interviewees did not use terms such as “suspicion” or “skepticism”, but did mention that they would like to see such explanations for the different concerns causing vaccine hesitancy, instead of just this one.
A valuable insight arises from these remarks – although it is beneficial that the explanation is specific and directed, at the same time it cannot be too narrow, otherwise it may be considered incomplete, or even biased – and thus less trustworthy. This suggests that the perceived completeness of an explanation is important for user acceptance and that completeness needs to be achieved by considering the wider context of the explanation, not just the specific question it is addressing. Designing effective explanations thus requires us to consider which other related issues should also be addressed when explaining a specific point (e.g. based on what other issues users consider to be related). This insight readily translates to AI explanations as well, as many methods provide explanations only of a specific result of an AI system. Although in the case of this prototype the idea was to tackle just one specific COVID-19 vaccine-related concern as a prototypical example, it would have been beneficial to provide users with the option to explore further explanations addressing related issues to avoid the impression of selective exposure and thus reinforce the trustworthiness of the given explanation(s).
The developed prototype was evaluated during two interactive online workshops: the first took place in December 2021 with bachelor students in health communication, the second was conducted in January 2022 with high-school students. After a short introduction to the project, the students could explore the prototype (without prior explanation) and give feedback through an online survey. A consent form for survey participation was provided at the beginning of the survey, including GDPR-compliant information on how their (anonymized) data would be used for research. The survey contained questions about the suitability of the given explanation for this particular concern, the understandability of the explanation and how different elements of the explanation influence it, and lastly, the potential impact of the explanation on future behavior. Responses were elicited on a 5-point Likert scale with labels at the endpoints (1 = strongly disagree, 5 = strongly agree). The survey was completed by 45 bachelor students (69% female, 29% male, age 19–28) and 43 high-school students (60% female, 21% male, 9% diverse, age 15–18). The majority of the participants were vaccinated against COVID-19 (74%), 14% were not vaccinated, and 8% had recovered from the virus (an additional 3% did not want to disclose this information).
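The agreement shares reported in the following paragraphs correspond to a simple top-two-box aggregation of such 5-point Likert responses; a minimal Python sketch of this computation (the ratings shown are invented for illustration, not the study data) is:

```python
from collections import Counter

def top_two_box(responses):
    """Percentage of 5-point Likert responses that are 4 (partially agree) or 5 (strongly agree)."""
    counts = Counter(responses)
    return 100 * (counts[4] + counts[5]) / len(responses)

# Invented example ratings (not the study data):
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 4, 1]
print(top_two_box(ratings))  # → 70.0
```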
Overall, 82.5% of all participants either partially or strongly agreed that the explanation is comprehensible. 74% partially or strongly agreed that the contrastive element makes the explanation more understandable, while 85% stated that the stepwise explanation process through individual challenges improved understandability. This supports the choice of the contrastive technique and the structured narrative. It also supports suggestions from previous work that more user-centered explainability techniques, such as contrastive explanations, could be well-suited for non-experts and would merit further investigation in this context.
Regarding individual design elements of the explanation, visual elements were found helpful for understanding the explanation by 57% of respondents. The respondents also mostly agreed (55%) that the dialogue format contributed to the explanation being more understandable, although more than a third (34%) were undecided on this matter. Participants’ responses on how specific elements made the explanation more understandable are shown in Fig. 4.
We performed a non-parametric Mann-Whitney test to check whether there were differences between bachelor and high-school students in how the different elements support the understandability of the explanation. We found that high-school students considered the dialogue and the visualizations more helpful for understandability than the bachelor students did (Mann-Whitney U = 1371, p < .001 for the dialogue; U = 1229, p < .05 for the visualizations). For both the dialogue and the visualizations, bachelor students expressed on average a more neutral stance towards how these elements impact the understandability of the explanation (median = 3), while high-school students exhibited higher agreement with the statements that these elements help make the explanation more understandable (median = 4). The source of these differences is not clear, but they suggest that the presentation style could be further tailored to these two different groups of users.
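The rank-sum computation behind the Mann-Whitney U statistic can be sketched in a few lines of plain Python; the ratings below are invented for illustration (a library routine such as scipy.stats.mannwhitneyu would additionally supply the p-value):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistics (U_x, U_y) via rank sums; tied values receive average ranks."""
    data = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    ranks = [0.0] * len(data)
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j][0] == data[i][0]:
            j += 1                      # [i, j) is a run of tied values
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2  # average of the tied ranks i+1 .. j
        i = j
    r_x = sum(r for r, (_, g) in zip(ranks, data) if g == 0)
    n_x, n_y = len(x), len(y)
    u_x = r_x - n_x * (n_x + 1) / 2     # rank sum minus its minimum possible value
    return u_x, n_x * n_y - u_x

# Invented 5-point Likert ratings from two hypothetical groups (not the study data):
high_school = [4, 5, 4, 3, 5, 4]
bachelor = [3, 3, 4, 2, 3, 3]
print(mann_whitney_u(high_school, bachelor))  # → (31.5, 4.5)
```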
Insights from the free-feedback field in the survey also confirm that there were difficulties in recognizing the dialogue element. This qualitative part of the survey also affirmed the observation from the interviews in the testing phase that addressing only one user concern in the explanation left an impression of incompleteness. Another common comment was that there should be even more visualizations, less text, and that the prototype should be more interactive. On the one hand, this points to a preference for a visual rather than textual medium of communication for the explanation. On the other hand, the wish for more interactivity could be related to a preference for more user control over the information flow and presentation. Both observations indicate user preferences that could also inform the design of more effective user-centered AI explanations for non-experts.
While being careful not to draw too strong conclusions from this explorative evaluation, the results indicate that the prototype could have a positive impact on the openness of users towards considering the information presented in the explanation, though with caveats. Two-thirds of the survey respondents (68%) thought that the worry of the COVID-19 vaccines being developed too fast to be safe was well addressed in the prototype example. Regarding the question whether the example could increase the willingness to vaccinate, almost half of the respondents (48%) were positive in their assessments, although a sizeable proportion (39%) was neutral. Only 14% of the respondents thought that the example could backfire and result in decreased willingness to vaccinate, while 26% were neutral about this. Over half of the survey respondents (55%) stated that they would share the presented explanation with a vaccine-hesitant person, while around 25% were undecided and 18% would not. Overall, these results suggest that the explanation was considered effective by the majority of participants, but could use improvement to reach the undecided ones (especially regarding its extension with other related user issues and concerns). The obtained data do not provide any specific evidence explaining the reasons behind the small but existing proportion of negative responses. These could be due to deep-seated prior beliefs (e.g. “anti-vaxxers”) that cannot be adequately addressed by explanations alone.
Though the results indicate that the prototype could have a positive impact on users’ openness towards considering such explanations and on their propensity to share them with vaccine-hesitant people, the very small number of unvaccinated study participants does not allow conclusions about the potential impact on the most critical users with stronger negative prior beliefs. This is a limitation, albeit a well-known challenge for any work in this area. On the other hand, our study did include participants who were previously skeptical towards COVID-19 vaccines, as well as participants who help vaccine-hesitant people resolve their concerns; their assessments can, to a certain extent, serve as a proxy in the absence of a larger number of vaccine-hesitant participants.
6 Discussion and Lessons Learned
It is a well-known premise, and proven experience, of Design Thinking and related user-centered approaches that adopting a user-centered perspective helps us understand how a given problem is actually experienced and reasoned about from a user’s point of view. In our case, the insights from the empathize and define stages emphasize the importance of not relying solely on expert understanding and definition of a problem that needs to be explained, neither content-wise nor language-wise. This readily translates from our specific case of explaining the “black box” of COVID-19 related concerns to explanations in complex AI systems. We should always try to understand how users experience a problem or a system, how they think about it (i.e. interpret it and form a mental model), and how they talk about it (i.e. externalize and update their understanding through communication with others). That should guide the decisions regarding which results and/or parts of the system need explaining, what type of explanation technique could be best suited, and how the explanations should be formulated or presented (e.g. text vs. visual, degree of interactivity).
The insights obtained in the ideate and prototype stages emphasize the importance of an interdisciplinary approach to designing effective explanations. The evaluation results suggest that extending the chosen technique from explainable AI (contrastive explanation) with techniques and findings from persuasive communication, HCI and visualization has contributed to making the prototype more understandable and effective for users. The differing attitudes of different portions of users to specific elements (e.g. visual elements, dialogue principle) suggest that different users value and need different presentation styles to differing extents. This reflects well-known findings from a long tradition of HCI research, but also from more recent work on motivating and facilitating behavioral change [36, 49, 71].
The integrated approach also made the technique at the core of our explanation more flexible: the prototype clearly shows how adaptable it can be while still retaining the basic structure of the contrastive principle. In spite of carefully tailoring our explanation to user needs, the feedback from the testing phase shows that for an explanation to be perceived as complete, it should include sufficient context beyond its specific focus (e.g. related issues the users might consider after being confronted with the explanation). Otherwise, there is a risk that the explanation is perceived as incomplete (“not enough”), undermining its trustworthiness.
Moreover, not only the context of the problem addressed by the explanation needs to be considered, but also the user’s context: their prior beliefs, existing knowledge, and expectations. In terms of a common formal framing of contrastive explanations (“Why P, rather than Q?”), we need to know what the “Q” amounts to for different users, i.e. the different alternative outcomes that different users are (often implicitly) considering in their questioning of an observed situation or result of an AI system. Along the same lines, it is important to know not only the alternative outcomes that users contrast with the observed reality, but perhaps just as importantly, who is asking the question, what their motivation is, and what their prior beliefs, background and values are. The latter aspects are a difficult challenge to address not only in future work on expanding our own approach, but also in research on AI explainability in general, where they have yet to receive appropriate attention.
In this work we have explored how we could translate techniques and insights from AI explainability research to developing effective explanations of complex issues in the COVID-19 pandemic through a user-centered approach. We have discussed how the problem of AI explainability is related to more general explainability problems that can occur in different contexts, where people face complex systems and phenomena that they cannot directly observe and readily understand, thus perceiving them as in-transparent “black boxes” and questioning their validity and trustworthiness. We have shown how explaining complex COVID-19 information is an example of such an explainability problem.
Accordingly, we have discussed how we developed an interdisciplinary conceptual design framework and applied a user-centered approach based on Design Thinking to develop a first prototype demonstrating the adaptation of an AI explainability technique to explain a complex COVID-19 vaccine concern. The developed prototype integrates a contrastive explanation technique with methods from communication science, visualization and HCI to provide a user-centered explanation for a typical question regarding the safety of COVID-19 vaccines.
The first prototype and the results of its evaluation with potential users show that the proposed conceptual approach can inform the design of effective user-centered explanations for complex issues in a way that increases their understandability. Our focus on cognitive aspects such as understandability thereby addresses only one type of factor in vaccine hesitancy; the reasons for hesitancy are manifold across different individual, social and cultural contexts. The presented explanation approach can thus only ever provide one piece of the solution puzzle.
Overall, the results indicate that it is possible to apply methods and insights from explainable AI to explainability problems in other fields and support the suitability of our conceptual framework to inform that. In addition, we have shown how the insights and lessons learned from this work could inform further work on user-centered approaches to explainable AI itself.
Although we have presented a very specific example aimed at helping people resolve a specific concern, we believe that its structural composition could trigger a broader reflection: the narrative of four typical challenges that impact the vaccine development process illustrates some broader aspects of how scientific research works, which procedures and challenges are involved, and how they were resolved in this case. To us, this relates our approach to a bigger issue that we need to address: how to effectively communicate the complexities of scientific research without either overwhelming or oversimplifying, supporting trust-building through increased understanding.
In further work we plan to apply further explanation techniques (e.g. counterfactual explanations) to additional types of concerns and to evaluate the ecological validity of the approach in more realistic settings. From this, we hope to derive a more comprehensive conceptual framework for designing effective user-centered explanations both for COVID-19-related communication and for informing further work on user-centered approaches to AI explainability itself.
Abdul, A., Vermeulen, J., Wang, D., Lim., B.Y., Kankanhalli, M.: Trends and trajectories for explainable, accountable and intelligible systems: an HCI research agenda. In: CHI 2018: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper No.: 582, pp. 1–18 (2018)
Amershi, S., et al.: Guidelines for human-AI interaction. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–13, May 2019
Arrieta, A.B., et al.: Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020)
Aronson, E., Wilson, T.D., Akert, R.M., Sommers, S.R.: Social Psychology, 9th edn. Pearson Education, Upper Saddle River (2016)
Basch, C., Mohlman, J., Hillyer, G., Garcia, P.: Public health communication in time of crisis: readability of on-line COVID-19 information. Disaster Med. Public Health Prep. 14(5), 635–637 (2020). https://doi.org/10.1017/dmp.2020.151
Bach, B., et al.: Narrative design patterns for data-driven storytelling. In: Riche, N., Hurter, C., Diakopoulos, N., Carpendale, S. (eds.) Data-Driven Storytelling, pp. 107–133. CRC Press, Boca Raton (2018)
Belle, V., Papantonis, I.: Principles and practice of explainable machine learning. Front. Big Data 4, 688969 (2021). https://doi.org/10.3389/fdata.2021.688969
Betsch, C., Böhm, R., Chapman, G.B.: Using behavioral insights to increase vaccination policy effectiveness. Policy Insights Behav Brain Sci 2, 61–73 (2015)
Betsch, C., Schmid, P., Heinemeier, D., Korn, L., Holtmann, C., Böhm, R.: Beyond confidence: development of a measure assessing the 5C psychological antecedents of vaccination. PLoS ONE 13(12), e0208601 (2018). https://doi.org/10.1371/journal.pone.0208601
Betsch, C., et al.: Sample study protocol for adapting and translating the 5C scale to assess the psychological antecedents of vaccination. BMJ Open 10, e034869 (2020). https://doi.org/10.1136/bmjopen-2019-034869
Beyer, H., Holtzblatt, K., Baker, L.: An agile customer-centered method: rapid contextual design. In: Zannier, C., Erdogmus, H., Lindstrom, L. (eds.) XP/Agile Universe 2004. LNCS, vol. 3134, pp. 50–59. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-27777-4_6
Bhatt, U., Xiang, A., Sharma, S., et al.: Explainable machine learning in deployment. In: Proceedings of the ACM FAT* 2020, pp. 648–657 (2020)
Britt, E., Hudson, S.M., Blampied, N.M.: Motivational interviewing in health settings: a review. Patient Educ. Couns. 53(2), 147–155 (2004)
Brown, T.: Design thinking. Harv. Bus. Rev. 86(6), 84 (2008)
Centers for Disease Control and Prevention: Developing COVID-19 Vaccines. https://www.cdc.gov/coronavirus/2019-ncov/vaccines/distributing/steps-ensure-safety.html. Accessed 12 Jan 2022
Chari, S., Seneviratne, O., Gruen, D.M., Foreman, M.A., Das, A.K., McGuinness, D.L.: Explanation ontology: a model of explanations for user-centered AI. In: Pan, J.Z., et al. (eds.) ISWC 2020. LNCS, vol. 12507, pp. 228–243. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-62466-8_15
Chohlas-Wood, A.: Understanding risk assessment instruments in criminal justice. Brookings (2020). https://www.brookings.edu/research/understanding-risk-assessment-instruments-in-criminal-justice/
Chou, W.S., Budenz, A.: Considering emotion in covid-19 vaccine communication: addressing vaccine hesitancy and fostering vaccine confidence. Health Commun. 35(14), 1718–1722 (2020). https://doi.org/10.1080/10410236.2020.1838096
Cominola, A., et al.: Long-term water conservation is fostered by smart meter-based feedback and digital user engagement. NPJ Clean Water 4(1), 1–10 (2021). https://doi.org/10.1038/s41545-021-00119-0
COSMO COVID-19 Snapshot Monitoring: Summaries. https://projekte.uni-erfurt.de/cosmo2020/web/summary/. Accessed 12 Jan 2022
COVIMO - COVID-19 vaccination rate monitoring in Germany. https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Projekte_RKI/covimo_studie.html;jsessionid=052DF2BB3F912EAD0582759BA5BF1B16.internet082?nn=2444038. Accessed 12 Jan 2022
Das, E., De Wit, J.B.F., Stroebe, W.: Fear appeals motivate acceptance of action recommendations: Evidence for a positive bias in the processing of persuasive messages. Pers. Soc. Psychol. Bull. 29(5), 650–664 (2003)
Dahlstrom, M.F.: Using narratives and storytelling to communicate science with nonexpert audiences. Proc. Natl. Acad. Sci. 111(Supplement 4), 13614–13620 (2014)
De Wit, J.B.F., Das, E., Vet, R.: What works best: objective statistics or a personal testimonial? An assessment of the persuasive effects of different types of message evidence on risk perception. Health Psychol. 27(1), 110–115 (2008)
DW: COVID: Why are so many people against vaccination? https://www.dw.com/en/covid-why-are-so-many-people-against-vaccination/a-58264733. Accessed 12 Jan 2022
European Medicines Agency: COVID-19 vaccines: development, evaluation, approval and monitoring. https://www.ema.europa.eu/en/human-regulatory/overview/public-health-threats/coronavirus-disease-covid-19/treatments-vaccines/vaccines-covid-19/covid-19-vaccines-development-evaluation-approval-monitoring. Accessed 12 Jan 2022
Farinella, M.: The potential of comics in science communication. J. Sci. Commun. 17(1), Y01 (2018)
Fernández-Loría, C., Provost, F., Han, X.: Explaining data-driven decisions made by AI systems: the counterfactual approach. arXiv preprint arXiv:2001.07417 (2020)
Gabarda, A., Butterworth, S.W.: Using best practices to address COVID-19 vaccine hesitancy: the case for the motivational interviewing approach. Health Promot. Pract. 22(5), 611-615 (2021)
Gulliksen, J., Göransson, B., Boivie, I., Blomkvist, S., Persson, J., Cajander, Å.: Key principles for user-centred systems design. Behav. Inf. Technol. 22(6), 397–409 (2003)
Limpens, M.: Motivational interviewing. Podosophia 24(3), 65 (2016). https://doi.org/10.1007/s12481-016-0129-2
Infektionsschutz: Entwicklung und Zulassung von COVID-19-Impfstoffen. https://www.infektionsschutz.de/coronavirus/schutzimpfung/entwicklung-und-zulassung/#c15463. Accessed 12 Jan 2022
ISO 13407: Human-centred design processes for interactive systems. International Organization for Standardization, Geneva (1999)
Kelp, N.C., Witt, J.K., Sivakumar, G.: To vaccinate or not? The role played by uncertainty communication on public understanding and behavior regarding COVID-19. Sci. Commun. (2021). https://doi.org/10.1177/10755470211063628
Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: International Conference on Machine Learning, pp. 1885–1894. PMLR (2017)
Koroleva, K., Melenhorst, M., Novak, J., Herrera Gonzalez, S.L., Fraternali, P., Rizzoli, A.E.: Designing an integrated socio-technical behaviour change system for energy saving. Energy Inform. 2(1), 1–20 (2019). https://doi.org/10.1186/s42162-019-0088-9
Kulesza, T., Stumpf, S., Burnett, M., Yang, S., Kwan, I., Wong, W.K.: Too much, too little, or just right? Ways explanations impact end users’ mental models. In: IEEE Symposium on Visual Languages and Human Centric Computing, pp. 3–10 (2013)
Lewandowsky, S., Ecker, U.K., Seifert, C.M., Schwarz, N., Cook, J.: Misinformation and its correction: continued influence and successful debiasing. Psychol. Sci. Public Interest 13(3), 106–131 (2012)
Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: a review of machine learning interpretability methods. Entropy 23(1), 18 (2021). https://doi.org/10.3390/e23010018
London School of Hygiene and Tropical Medicine: Vaccine FAQs. https://www.lshtm.ac.uk/research/centres/vaccine-centre/vaccine-faqs. Accessed 12 Jan 2022
Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems, 30 (2017)
MacDonald, N.E., SAGE Working Group on Vaccine Hesitancy: Vaccine hesitancy: definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Mayo, R., Schul, Y., Burnstein, E.: “I am not guilty” vs “I am innocent”: successful negation may depend on the schema used for its encoding. J. Exp. Soc. Psychol. 40(4), 433–449 (2004)
Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2018)
Mittelstadt, B., Russell, C., Wachter, S.: Explaining explanations in AI. In: FAT* 2019: Conference on Fairness, Accountability, and Transparency (FAT* 2019), Atlanta, GA, USA, 29–31 January 2019. ACM, New York (2019). https://doi.org/10.1145/3287560.3287574
Mollema, L., et al.: Disease detection or public opinion reflection? Content analysis of tweets, other social media, and online newspapers during the measles outbreak in The Netherlands in 2013. J. Med. Internet Res. 17(5), e128 (2015)
Moyer-Gusé, E.: Toward a theory of entertainment persuasion: explaining the persuasive effects of entertainment-education messages. Commun. Theory 18, 407–425 (2008)
Mueller, S.T., et al.: Principles of explanation in human-AI systems. arXiv preprint arXiv:2102.04972 (2021)
Novak, J., Melenhorst, M., Micheel, I., Pasini, C., Fraternali, P., Rizzoli, A.E.: Integrating behavioural change and gamified incentive modelling for stimulating water saving. Environ. Model. Softw. 102, 120–137 (2018)
Novak, J., et al.: Towards reflective AI: needs, challenges and directions for further research. European Institute for Participatory Media, Berlin, Germany (2021). https://doi.org/10.5281/zenodo.5345643
Nunes, I., Jannach, D.: A systematic review and taxonomy of explanations in decision support and recommender systems. User Model. User Adapt. Interact. 27(3–5), 393–444 (2017). https://doi.org/10.1007/s11257-017-9195-0
Nyhan, B., Reifler, J., Richey, S., Freed, G.L.: Effective messages in vaccine promotion: a randomized trial. Pediatrics 133(4), e835–e842 (2014)
Petty, R.E., Cacioppo, J.T.: The elaboration likelihood model of persuasion. In: Petty, R.E., Cacioppo, J.T. (eds.) Communication and Persuasion. Springer, New York (1986). https://doi.org/10.1007/978-1-4612-4964-1_1
Plattner, H., Meinel, C., Leifer, L. (eds.): Design Thinking: Understand–Improve–Apply. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-13757-0
Reddit: r/changemyview. https://www.reddit.com/r/changemyview/comments/p04fzy/cmv_i_am_afraid_to_take_the_covid_vaccine_due_to/. Accessed 12 Jan 2022
Reintjes, R., Das, E., Klemm, C., Richardus, J.H., Keßler, V., Ahmad, A.: “Pandemic Public Health Paradox”: time series analysis of the 2009/10 Influenza A/H1N1 epidemiology, media attention, risk perception and public reactions in 5 European countries. PLoS ONE 11(3), e0151258 (2016)
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should i trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144, August 2016
Ribera, M., Lapedriza, A.: Can we do better explanations? A proposal of user-centered explainable AI. In: IUI Workshops, vol. 2327, p. 38, March 2019
Rozenblit, L., Keil, F.: The misunderstood limits of folk science: an illusion of explanatory depth. Cogn. Sci. 26(5), 521–562 (2002). https://doi.org/10.1207/s15516709cog2605_1
Segel, E., Heer, J.: Narrative visualization: telling stories with data. IEEE TVCG 16(6), 1139–1148 (2010)
Slater, M.D., Rouner, D.: Entertainment — education and elaboration likelihood: understanding the processing of narrative persuasion. Commun. Theory 12(2), 173–191 (2002)
Spiegelhalter, D.: Risk and uncertainty communication. Annu. Rev. Stat. Appl. 4, 31–60 (2017)
Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, August 2017, vol. 70, pp. 3319–3328. JMLR.org (2017)
The COVID-19 Vaccine Communication Handbook: The COVID-19 Vaccine Development Process. https://hackmd.io/@scibehC19vax/vaxprocess#The-COVID-19-Vaccine-Development-Process. Accessed 12 Jan 2022
The Guardian: Ten reasons we got Covid-19 vaccines so quickly without ‘cutting corners’ https://www.theguardian.com/commentisfree/2020/dec/26/ten-reasons-we-got-covid-19-vaccines-so-quickly-without-cutting-corners?CMP=Share_iOSApp_Other. Accessed 12 Jan 2022
Tiefenbeck, V.: Behavioral interventions to reduce residential energy and water consumption: impact, mechanisms, and side effects. Dissertation, Eidgenössische Technische Hochschule ETH Zürich, Nr. 22054 (2014)
Tiddi, I., d’Aquin, M., Motta, E.: An ontology design pattern to define explanations. In: Proceedings of the 8th International Conference on Knowledge Capture, pp. 1–8 (2015)
Tong, C., et al.: Storytelling and visualization: an extended survey. Information 9, 65 (2018)
Tufte, E.R.: Visual Explanations: Images and Quantities, Evidence and Narrative. Graphics Press, Cheshire (1997)
van Koningsbruggen, G.M., Das, E.: Don’t derogate this message! Self-affirmation promotes online type 2 diabetes risk test taking. Psychol. Health 24(6), 635–649 (2009)
Voorheis, P., et al.: Integrating behavioral science and design thinking to develop mobile health interventions: systematic scoping review. JMIR Mhealth Uhealth 10(3), e35799 (2022). https://doi.org/10.2196/35799
Wang, D., Yang, Q., Abdul, A., Lim, B.Y.: Designing Theory-driven user-centric explainable AI. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (601), pp. 1–15. Association for Computing Machinery, New York (2019)
Winterbottom, A., Bekker, H.L., Conner, M., Mooney, A.: Does narrative information bias individual’s decision making? A systematic review. Soc. Sci. Med. 67(12), 2079–2088 (2008)
Wismans, A., Thurik, R., Baptista, R., Dejardin, M., Janssen, F., Franken, I.: Psychological characteristics and the mediating role of the 5C Model in explaining students’ COVID-19 vaccination intention. PLoS ONE 16(8), e0255382 (2021). https://doi.org/10.1371/journal.pone.0255382
YouTube: Covid-19 Vaccine Skeptics Explain Why They Don’t Want The Shot | NBC News NOW. https://www.youtube.com/watch?v=cw0IAAleJxw&ab_channel=NBCNews. Accessed 12 Jan 2022
Zusammen gegen Corona: Impfstoffentwicklung und Zulassung. https://www.zusammengegencorona.de/impfen/impfstoffe/impfstoffentwicklung-und-zulassung/. Accessed 12 Jan 2022
The work presented in this paper has been funded by the Volkswagen Stiftung (grant nr: 97260-1). We also thank Prof. Dr. Enny Das and Prof. Dr. Martha Larson from Radboud University for their feedback and support of the project, as well as Boryana Krasimirova for her work on the visual and interaction design of the prototype.
© 2022 The Author(s)
Novak, J., Maljur, T., Drenska, K. (2022). Transferring AI Explainability to User-Centered Explanations of Complex COVID-19 Information. In: Chen, J.Y.C., Fragomeni, G., Degen, H., Ntoa, S. (eds) HCI International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence. HCII 2022. Lecture Notes in Computer Science, vol 13518. Springer, Cham. https://doi.org/10.1007/978-3-031-21707-4_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-21706-7
Online ISBN: 978-3-031-21707-4