1 Introduction

Artificial intelligence (AI) is an umbrella term for algorithms that aim to deliver task-solving capabilities comparable to those of humans. A dominant sub-field is automatic (or autonomous) machine learning (aML), which aims to develop software that learns fully automatically from previous experience in order to make predictions on new data. One currently very successful family of aML methods is deep learning (DL), which is based on the concepts of neural networks and the insight that the depth of such networks yields surprising capabilities.

Automatic approaches are present in the daily practice of human society, supporting and enhancing our quality of life. A good example is the breakthrough achieved with DL [2] on the task of phonetic classification for automatic speech recognition. In fact, speech recognition was the first commercially successful application of DL [3]. Autonomous software is able today to conduct conversations with clients in call centers; Siri, Alexa and Cortana make suggestions to smartphone users. A further example is automatic game playing without human intervention [4]. Mastering the game of Go has a long tradition and is a good benchmark for progress in automatic approaches, because Go is hard for computers [5]: it is strategic, even though games are a closed environment with clear rules in which a large number of games can be simulated to generate big data.

Even in the medical domain, automatic approaches have recently demonstrated impressive results: automatic image classification algorithms are on par with human experts or even outperform them [6]; automatic detection of pulmonary nodules in tomography scans identified tumoral formations missed by the same human experts who provided the test data [7]; and neural networks outperformed traditional segmentation methods [8]. Consequently, automatic deep learning approaches have quickly become a method of choice for medical image analysis [9].

Undoubtedly, automatic approaches are well motivated for theoretical, practical and commercial reasons. Unfortunately, such approaches also have several disadvantages: they are resource consuming, require much engineering effort and need large amounts of training data (“big data”); most of all, they are often considered black-box approaches that do not foster trust, acceptance and, above all, responsibility. In recent years, international concerns have been raised about the ethical, legal and moral aspects of AI developments, particularly in the medical domain [10]. One example of such an international effort is the Declaration of Montreal.

Lacking transparency means that such approaches do not expose the decision process explicitly [11]. This is because such models have no explicit declarative knowledge representation; hence, they have difficulty generating the required explanatory structures, which considerably limits the achievement of their full potential [12].

Consequently, in the medical domain a human expert involved in the decision process is not only beneficial but may even be mandatory [13]. However, many algorithms, e.g. deep learning, are inherently opaque, which causes difficulties both for the developers of the algorithms and for the human-in-the-loop.

Understanding the reasons behind predictions, queries and recommendations [14] is important for many reasons. Among the most important is trust in the results, which is improved by an explanatory interactive learning framework, where the algorithm is able to explain each step to the user and the user can interactively correct the explanation [15]. The advantage of this approach, called interactive machine learning (iML) [16], is that it includes the strengths of humans in learning and explaining abstract concepts [17].

Current ML algorithms work asynchronously in connection with a human expert who is expected to help with data pre-processing (see [18] for a recent example of the importance of data quality) and with data interpretation, either before or after the learning algorithm is applied. The human expert is supposed to be aware of the problem’s context and to evaluate specific data sets correctly.

The iML-approaches can therefore be effective on problems with scarce and/or complex data sets, when aML methods become inefficient. Moreover, iML enables important mechanisms, including re-traceability, transparency and explainability, which are important characteristics for any future information system [19].

The efficiency and the effectiveness of explanations provided by ML and iML require further study [20]. One approach to the problem examines how people understand explanations from ML by qualitatively rating the effectiveness of three explanatory models [21, 22]. Another approach measures a proxy for utility, such as simplicity [11, 23] or response time in an application [24]. Our contribution is to directly measure the user’s perception of an explanation’s utility, including its causal aspects, by adapting a well-accepted approach from usability [25].

2 Causability and Explainability

2.1 Definitions

A statement s (see Fig. 1) is either made by a human, \(s_h\), or by a machine, \(s_m\). It is a function \(s = f(r, k, c)\) with the following parameters:

r: representations of an unknown (or unobserved) fact \(u_e\) related to an entity;

k: pre-existing knowledge, which for a machine is embedded in an algorithm, and for a human is made up of explicit, implicit and tacit knowledge;

c: context, i.e. for a machine the technical runtime environment, and for humans the physical environment in which the decision was made (pragmatic dimension).

An unknown (or unobserved) fact \(u_e\) represents a ground truth gt that we try to model with machines \(m_m\) or as humans \(m_h\). Unobserved, hidden or latent variables are found in the literature for Bayesian models [26], hidden Markov models [27] and methods like probabilistic latent component analysis [28].

The overall goal is that a statement be congruent with the ground truth and that the explanation of a statement highlight the parts of the model that were applied.

Fig. 1. The process of explanation. Explanations (e) by humans and machines (subscripts h and m) must be congruent with statements (s) and models (m), which in turn are based on the ground truth (gt). Statements are a function of representations (r), knowledge (k) and context (c).
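To make this notation concrete, the following minimal sketch (ours, not part of the original formalization) represents statements and explanations as hypothetical Python dataclasses; all names are illustrative.

```python
from dataclasses import dataclass
from typing import Any, Dict

# Hypothetical containers for the notation above; the field names are
# illustrative and not an implementation prescribed by the paper.

@dataclass
class Statement:
    """A statement s = f(r, k, c) made by a human (s_h) or a machine (s_m)."""
    representation: Any       # r: representation of the unobserved fact u_e
    knowledge: Any            # k: algorithmic or human (explicit/implicit/tacit) knowledge
    context: Dict[str, Any]   # c: runtime or physical environment (pragmatic dimension)
    made_by: str              # "human" or "machine"

@dataclass
class Explanation:
    """An explanation e_h or e_m that highlights the applied parts of a model m."""
    statement: Statement
    highlighted_model_parts: Dict[str, float]  # e.g. model part -> contribution
```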

3 Process of Explanation and the Importance of a Ground Truth

In an ideal world the human and the machine statement are identical, \(s_h = s_m\), and congruent with the ground truth, which is modelled in the same way by machines and humans, \(m_h = m_m\) (for the connection between them, see Fig. 1).

However, in the real world we face two problems:

  (i) ground truth is not always well defined, especially when making a medical diagnosis; and

  (ii) although human (scientific) models are often based on understanding causal mechanisms, today’s successful machine models or algorithms are typically based on correlation or related concepts of similarity and distance.

The latter approach in ML is probabilistic in nature and can be viewed as an intermediate step that only provides a basis for subsequently establishing causal models. When discussing the explainability of a machine statement, we therefore propose to distinguish between:

  • Explainability, which in a technical sense highlights decision relevant parts of machine representations \(r_m\) and machine models \(m_m\)—i.e., parts which contributed to model accuracy in training, or to a specific prediction. It does not refer to a human model \(m_h\).

  • Causability [1] as the extent to which an explanation of a statement to a user achieves a specified level of causal understanding with effectiveness, efficiency and satisfaction in a specified context of use.

As causability is measured in terms of effectiveness, efficiency and satisfaction related to causal understanding and its transparency for a user, it refers to a human understandable model \(m_h\). This is always possible for an explanation of a human statement, as the explanation is per se defined in relation to \(m_h\). To measure the causability of an explanation \(e_m\) of a machine statement \(s_m\), either \(m_h\) has to be based on a causal model (which is not the case for most ML algorithms) or a mapping between \(m_m\) and \(m_h\) has to be defined.
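As a concrete illustration of explainability in this technical sense, the following sketch (ours, assuming a simple linear classifier and a coefficient-times-value attribution, one of many possible explanation techniques) highlights the decision-relevant parts of a machine model \(m_m\) for a single prediction; it says nothing about a human model \(m_h\), which is exactly the gap causability addresses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: rows are cases, columns are named input features (values are made up).
feature_names = ["age", "systolic_bp", "cholesterol"]
X = np.array([[52, 130, 210], [61, 145, 250], [45, 120, 190], [70, 160, 280]])
y = np.array([0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain_prediction(x):
    """Return the decision-relevant contribution of each feature to one prediction.

    For a linear model, coefficient * feature value is a simple attribution;
    this illustrates 'explainability' in the technical sense only (no m_h involved).
    """
    contributions = model.coef_[0] * x
    return dict(zip(feature_names, contributions))

print(explain_prediction(X[1]))
```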

4 Background

The System Usability Scale (SUS) has been in use for three decades and has proved to be a very efficient way to rapidly determine the usability of a newly designed user interface. The SUS measures how usable a system’s user interface is, while our proposed System Causability Scale measures how useful explanations are and how usable the explanation interface is.

The SUS was created by John Brooke in 1986 while he was working at Digital Equipment Corporation (DEC). Ten years later he published it as a book chapter [25], which has received (as of 01.10.2019) 7,949 citations on Google Scholar, with a strong upward trend.

The success factor is simplicity: the SUS consists of a ten-item questionnaire, each item having five response options for the end user. Consequently, it provides a quick and dirty tool for measuring usability, which has proved to be very reliable [29] and is used for a wide variety of products, not only user interfaces [30].

When a SUS is used, participants are asked to score the following ten items with one of five responses that range from strongly agree to strongly disagree:

  1. I think that I would like to use this system frequently.

  2. I found the system unnecessarily complex.

  3. I thought the system was easy to use.

  4. I think that I would need the support of a technical person to be able to use this system.

  5. I found the various functions in this system were well integrated.

  6. I thought there was too much inconsistency in this system.

  7. I would imagine that most people would learn to use this system very quickly.

  8. I found the system very cumbersome to use.

  9. I felt very confident using the system.

  10. I needed to learn a lot of things before I could get going with this system.

Interpreting SUS scores can be difficult, and one big disadvantage is that the scores (since they are on a scale from 0 to 100) are often wrongly interpreted as percentages. The best way to interpret results involves normalizing the scores to produce a percentile ranking. To compute a score, each participant’s response to each question is converted to a new number, the converted values are added together, and the sum is multiplied by 2.5 to convert the original range of 0–40 to 0–100. Although the resulting scores run from 0 to 100, they are not percentages and should be considered only in terms of their percentile ranking.
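As an illustration of this conversion, here is a minimal sketch (ours, not part of Brooke’s publication) of the standard SUS scoring rule: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the resulting 0–40 sum is multiplied by 2.5.

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten responses on a 1-5 Likert scale.

    Odd-numbered items (1, 3, 5, 7, 9) are positively worded: contribution = response - 1.
    Even-numbered items (2, 4, 6, 8, 10) are negatively worded: contribution = 5 - response.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected ten responses, each between 1 and 5.")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5  # scale the 0-40 sum to 0-100


# Hypothetical responses of one participant:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```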

Based on extensive research, a SUS score above 68 is considered above average and anything below 68 below average; however, as noted above, the best way to interpret the results is to normalize the scores into a percentile ranking.

A further disadvantage is that the SUS has been assumed to be unidimensional. However, factor analysis of two independent SUS data sets reveals that the SUS actually has two factors: Usable (8 items) and Learnable (2 items, specifically Items 4 and 10). These new scales have reasonable reliability (coefficient alpha of 0.91 and 0.70, respectively). They correlate highly with the overall SUS (r = 0.985 and 0.784, respectively) and correlate significantly with one another (r = 0.664), but at a low enough level to be used as separate scales [31].
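Given this two-factor structure, the subscales can also be scored separately. The sketch below is our own illustration, assuming each subscale is simply rescaled to 0–100 (the 8-item Usable sum multiplied by 3.125, the 2-item Learnable sum by 12.5); it is not the only possible scaling.

```python
USABLE_ITEMS = [1, 2, 3, 5, 6, 7, 8, 9]   # 8 items
LEARNABLE_ITEMS = [4, 10]                 # 2 items

def sus_subscales(responses):
    """Split ten SUS responses (1-5) into Usable and Learnable subscale scores (0-100).

    Contributions follow the standard SUS rule (odd items: r - 1, even items: 5 - r);
    each subscale sum is then rescaled to 0-100 (x 3.125 for 8 items, x 12.5 for 2 items).
    """
    def contribution(item_no, r):
        return (r - 1) if item_no % 2 == 1 else (5 - r)

    usable = sum(contribution(i, responses[i - 1]) for i in USABLE_ITEMS) * 3.125
    learnable = sum(contribution(i, responses[i - 1]) for i in LEARNABLE_ITEMS) * 12.5
    return usable, learnable

print(sus_subscales([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> (84.375, 87.5)
```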

5 The System Causability Scale

In the following we propose our System Causability Scale (SCS), using a Likert scale similar to that of the SUS. The Likert method [32] is widely used as a standard psychometric scale to measure human responses (see the conclusions regarding its limitations). The purpose of our SCS is to quickly determine whether and to what extent an explainable user interface (human–AI interface), an explanation, or an explanation process itself is suitable for the intended purpose.

  1. I found that the data included all relevant known causal factors with sufficient precision and granularity.

  2. I understood the explanations within the context of my work.

  3. I could change the level of detail on demand.

  4. I did not need support to understand the explanations.

  5. I found the explanations helped me to understand causality.

  6. I was able to use the explanations with my knowledge base.

  7. I did not find inconsistencies between explanations.

  8. I think that most people would learn to understand the explanations very quickly.

  9. I did not need more references in the explanations: e.g., medical guidelines, regulations.

  10. I received the explanations in a timely and efficient manner.

As an illustration, SCS was applied by a medical doctor from the Ottawa Hospital (see the acknowledgement section) to the Framingham Risk Tool (FRT) [33]. FRT was selected as a classic example of a prediction model that is in use today.

The FRT estimates the 10-year risk of coronary artery disease for a patient without diabetes mellitus or clinically evident cardiovascular disease, using data from the Framingham Heart Study [34]. The FRT includes the following input features: sex, age, total cholesterol, smoking, HDL (high-density lipoprotein) cholesterol, systolic blood pressure and hypertension treatment. The ratings for the SCS score are reported in Table 1.

Table 1 Using SCS with the Framingham Model. Ratings are: \(1=\hbox {strongly disagree}\), \(2=\hbox {disagree}\), \(3=\hbox {neutral}\), \(4=\hbox {agree}\), \(5=\hbox {strongly agree}\)
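This section does not spell out how the ten ratings in Table 1 are aggregated; the sketch below is our own minimal illustration, assuming (as one example, not as the authors’ prescribed rule) that the ten Likert ratings are summed and reported as a fraction of the maximum of 50. The ratings used in the example are hypothetical, not those from Table 1.

```python
def scs_score(ratings):
    """Aggregate ten SCS Likert ratings (1 = strongly disagree ... 5 = strongly agree).

    Assumed aggregation: the sum of the ratings divided by the maximum of 50,
    i.e. a value between 0.2 and 1.0, where higher indicates better causability.
    """
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("Expected ten ratings, each between 1 and 5.")
    return sum(ratings) / 50.0


# Hypothetical ratings (NOT the values reported in Table 1):
print(scs_score([5, 4, 3, 5, 4, 4, 5, 4, 3, 5]))  # -> 0.84
```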

6 Conclusions

The purpose of the System Causability Scale is to provide a simple and rapid evaluation tool for measuring the quality of an explanation interface (human–AI interface) or an explanation process itself. We were inspired by the System Usability Scale and by the Framingham model, which is often used in daily routine. A limitation of the SCS is that Likert scales fall within the ordinal level of measurement: the response categories have a rank order, but the intervals between values cannot be presumed equal (it is illegitimate to infer that the intensity of feeling between “strongly disagree” and “disagree” is equivalent to the intensity of feeling between other consecutive categories on the Likert scale). The legitimacy of assuming an interval scale for Likert-type categories is an important issue, because the appropriate descriptive and inferential statistics differ for ordinal and interval variables, and if the wrong statistical technique is used, the researcher increases the chance of coming to a wrong conclusion [35]. We are convinced that our System Causability Scale will be useful for the international machine learning research community. We are currently working on an evaluation study with applications in the medical domain.