
1 Introduction

Interpretable machine learning methods have become very important in recent years for explaining the behavior of black box machine learning (ML) models. Counterfactual explanations are a useful method for explaining single predictions of a model. ML credit risk prediction is a common motivation for counterfactuals. For people whose credit applications have been rejected, it is valuable to know why they were not accepted, either to understand the decision-making process or to assess their actionable options for changing the outcome. Counterfactuals provide these explanations in the form of “if these features had different values, your credit application would have been accepted”. For such explanations to be plausible, they should only suggest small changes in a few features. Therefore, counterfactuals can be defined as close neighbors of an actual data point whose predictions are nevertheless sufficiently close to a (usually quite different) desired outcome. Counterfactuals explain why a certain outcome was not reached, can offer potential grounds to object to an unfair outcome and give guidance on how the desired prediction could be reached in the future [35]. Note that counterfactuals are also valuable for predictive modelers on a more technical level to investigate the pointwise robustness and the pointwise bias of their model.

2 Related Work

Counterfactuals are closely related to adversarial perturbations, which aim to deceive ML models rather than make them interpretable [30]. Attribution methods such as Local Interpretable Model-agnostic Explanations (LIME) [27] and Shapley Values [22] explain a prediction by determining how much each feature contributed to it. Counterfactual explanations differ from feature attributions in that they generate data points with a different, desired prediction instead of attributing a prediction to the features.

Counterfactual methods can be model-agnostic or model-specific. The latter usually exploit the internal structure of the underlying ML model, such as the trained weights of a neural network, while the former rely on general principles that work for arbitrary ML models, often assuming only access to the prediction function of an already fitted model. Several model-agnostic counterfactual methods have been proposed [8, 11, 16, 18, 25, 29, 37]. Apart from Grath et al. [11], these approaches are limited to classification. Unlike the other methods, the method of Poyiadzi et al. [25] can obtain plausible counterfactuals by constructing feasible paths between data points with opposite predictions.

A model-specific approach was proposed by Wachter et al. [35], who also introduced and formalized the concept of counterfactuals in predictive modeling. Like many model-specific methods [15, 20, 24, 28, 33], their approach is limited to differentiable models. The approach of Tolomei et al. [32] generates explanations for tree-based ensemble binary classifiers. As with [35] and [20], it only returns a single counterfactual per run.

3 Contributions

In this paper, we introduce Multi-Objective Counterfactuals (MOC), which to the best of our knowledge is the first method to formalize the counterfactual search as a multi-objective optimization problem. We argue that the mathematical problem behind the search for counterfactuals is naturally multi-objective. Most of the above methods optimize a collapsed, weighted sum of multiple objectives, whose weights are difficult to balance a priori. They carry the risk of arbitrarily reducing the solution set to a single candidate without the option to discuss inherent trade-offs – which is especially relevant for model interpretation, a goal that is by design very hard to capture precisely in a (single) mathematical formulation.

Compared to Wachter et al. [35], we use a distance metric for mixed feature spaces and two additional objectives: one that measures the number of feature changes to obtain sparse and therefore more interpretable counterfactuals, and one that measures the closeness to the nearest observed data points to obtain more plausible counterfactuals. MOC returns a Pareto set of counterfactuals that represents different trade-offs between our proposed objectives and that is constructed to be diverse in feature space. This seems preferable because changes to different features can lead to the desired counterfactual prediction, and a diverse set is more likely to contain counterfactuals that meet the (hidden) preferences of a user. A single counterfactual might even suggest a strategy that is interpretable but not actionable (e.g., ‘reduce your number of pregnancies’) or counterproductive in more general contexts (e.g., ‘increase your age to reduce the risk of diabetes’). In addition, if multiple otherwise quite different counterfactuals suggest changes to the same feature, the user can be more confident that the feature is an important lever to achieve the desired outcome. We refer the reader to Appendix A for two concrete examples illustrating the above.

Compared to other counterfactual methods, MOC is model-agnostic and handles classification, regression and mixed feature spaces, which furthermore increases its practical usefulness in general applications. Together with [16], our paper also includes one of the first benchmark studies that compares multiple counterfactual methods on multiple, heterogeneous datasets.

4 Methodology

Wachter et al. [35] loosely define counterfactuals as follows:

“You were denied a loan because your annual income was \(\pounds \)30,000. If your income had been \(\pounds \)45,000, you would have been offered a loan. Here the statement of decision is followed by a counterfactual, or statement of how the world would have to be different for a desirable outcome to occur. Multiple counterfactuals are possible, as multiple desirable outcomes can exist, and there may be several ways to achieve any of these outcomes.”

We now formalize this statement in terms of four objectives that a counterfactual should adhere to. In the subsequent section, we define these objectives in detail and tie them together into a multi-objective optimization problem in order to generate a diverse set of trade-off solutions.

4.1 Multi-Objective Counterfactuals

Definition 1 (Counterfactual Explanation)

Let \(\hat{f}:\mathcal {X} \rightarrow \mathbb {R}\) be a prediction function, \(\mathcal {X}\) the feature space and \(Y' \subset \mathbb {R}\) a set of desired outcomes. The latter can either be a single value or an interval of values. We define a counterfactual explanation \(\mathbf {x}'\) for an observation \(\mathbf {x}^*\) as a data point fulfilling the following: (1) its prediction \(\hat{f}(\mathbf {x}')\) is close to the desired outcome set \(Y'\), (2) it is close to \(\mathbf {x}^*\) in the \(\mathcal {X}\) space, (3) it differs from \(\mathbf {x}^*\) only in a few features, and (4) it is a plausible data point according to the probability distribution \(\mathbb {P}_{\mathcal {X}}\). For classification models, we assume that \(\hat{f}\) returns the probability for a user-selected class and \(Y'\) has to be the desired probability (range).

This can be translated into a multi-objective minimization task:

$$\begin{aligned} \min _\mathbf {x}\mathbf {o}(\mathbf {x}) := \min _\mathbf {x}\big (o_1(\hat{f}(\mathbf {x}), Y'),\, o_2(\mathbf {x}, \mathbf {x}^*), o_3(\mathbf {x}, \mathbf {x}^*), o_4(\mathbf {x}, \mathbf {X}^{obs})\big ), \end{aligned}$$
(1)

with \(\mathbf {o}:\mathcal {X} \rightarrow \mathbb {R}^4\) and \(\mathbf {X}^{obs}\) as the observed (i.e. training) data. The first component \(o_1\) quantifies the distance between \(\hat{f}(\mathbf {x})\) and \(Y'\). We define it as:

$$\begin{aligned}o_1(\hat{f}(\mathbf {x}), Y') = {\left\{ \begin{array}{ll} 0 &{} \text {if } \hat{f}(\mathbf {x}) \in Y' \\ \inf \limits _{y' \in Y'}|\hat{f}(\mathbf {x}) - y'| &{} \text {else} \end{array}\right. }. \end{aligned}$$

The second component \(o_2\) quantifies the distance between \(\mathbf {x}^*\) and \(\mathbf {x}\) using the Gower distance to account for mixed features [10]:

$$\begin{aligned} o_2(\mathbf {x}, \mathbf {x}^*) = \frac{1}{p}\sum _{j = 1}^{p} \delta _G(x_j, x^*_j)\in [0, 1] \end{aligned}$$

with p being the number of features. The value of \(\delta _G\) depends on the feature type:

$$\begin{aligned} \delta _G(x_j, x^*_j) = {\left\{ \begin{array}{ll} \frac{1}{\widehat{R}_j}|x_j- x^*_j| &{} \text {if } x_j \text { is numerical} \\ \mathbb {I}_{x_j \ne x_j^*} &{} \text {if } x_j \text { is categorical} \end{array}\right. } \end{aligned}$$

with \(\widehat{R}_j\) as the value range of feature j, extracted from the observed dataset.

Since the Gower distance does not take into account how many features have been changed, we introduce objective \(o_3\), which counts the number of changed features using the \(L_0\) norm:

$$\begin{aligned} o_3(\mathbf {x}, \mathbf {x}^*) = ||\mathbf {x}-\mathbf {x}^*||_0 = \sum _{j = 1}^{p}\mathbb {I}_{x_j\ne x^*_j}. \end{aligned}$$

The fourth objective \(o_4\) measures the weighted average Gower distance between \(\mathbf {x}\) and the k nearest observed data points \(\mathbf {x}^{[1]}, ..., \mathbf {x}^{[k]} \in \mathbf{X} ^{obs}\) as an empirical approximation of how likely \(\mathbf {x}\) originates from the distribution of \(\mathcal {X}\):

$$\begin{aligned} o_4(\mathbf {x}, \mathbf{X} ^{obs}) = \sum _{i = 1}^k w^{[i]} \frac{1}{p} \sum _{j = 1}^{p} \delta _G(x_j, x^{[i]}_j) \in [0, 1] \text { where } \sum _{i = 1}^k w^{[i]} = 1. \end{aligned}$$

Throughout this paper, we set k to 1. Further procedures to increase the plausibility of the counterfactuals are integrated into the optimization algorithm and are described in Sect. 4.3.
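To make the four objectives concrete, the following minimal base-R sketch evaluates \(o_1\) to \(o_4\) for purely numerical features and \(k = 1\); the prediction function f_hat, the toy data and the target interval are illustrative assumptions, not the reference implementation.

```r
# Minimal sketch of the four objectives for purely numerical features (k = 1).
# f_hat, the toy data and the target interval [y_lower, y_upper] are
# illustrative assumptions, not the reference implementation.
gower_dist <- function(x, z, ranges) mean(abs(x - z) / ranges)

objectives <- function(x, x_star, f_hat, y_lower, y_upper, X_obs, ranges) {
  pred      <- f_hat(x)
  in_target <- pred >= y_lower && pred <= y_upper
  o1 <- if (in_target) 0 else min(abs(pred - y_lower), abs(pred - y_upper))       # distance to Y'
  o2 <- gower_dist(x, x_star, ranges)                                             # Gower distance to x*
  o3 <- sum(x != x_star)                                                          # number of changed features (L0 norm)
  o4 <- min(apply(X_obs, 1, gower_dist, z = x, ranges = ranges))                  # distance to nearest observed point
  c(o1 = o1, o2 = o2, o3 = o3, o4 = o4)
}

# Toy usage with a linear stand-in for the prediction function:
set.seed(1)
X_obs  <- matrix(runif(20 * 2), ncol = 2)
ranges <- apply(X_obs, 2, function(col) diff(range(col)))
f_hat  <- function(x) sum(0.5 * x)
x_star <- X_obs[1, ]
objectives(c(0.9, 0.9), x_star, f_hat, y_lower = 0.8, y_upper = 1, X_obs, ranges)
```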

Balancing the four objectives is difficult since the objectives contradict each other. For example, minimizing the distance between counterfactual outcome and desired outcome \(Y'\) (\(o_1\)) becomes more difficult when we require counterfactual feature values close to \(\mathbf {x}^*\) (\(o_2\) and \(o_3\)) and to the observed data (\(o_4\)).

4.2 Counterfactual Search

Our proposed method MOC uses the Nondominated Sorting Genetic Algorithm II (NSGA-II) [7] with modifications specific to the problem considered. First, unlike the original NSGA-II, it uses mixed integer evolutionary strategies (MIES) [19] to work with the mixed discrete and continuous search space. Furthermore, a different crowding distance sorting algorithm is used, and we propose some optional adjustments tailored to the counterfactual search in the upcoming section.

For MOC, each candidate is described by its feature vector (the ‘genes’) and the objective values of the candidates are evaluated by Eq. (1). Features of candidates are recombined and mutated with predefined probabilities – some of the control parameters of MOC. Numerical features are recombined by the simulated binary crossover recombinator [6], all other feature types by the uniform crossover recombinator [31]. Based on [19], numerical features are mutated by the scaled Gaussian mutator. Categorical features are altered by uniformly sampling from their admissible levels, while binary and logical features are simply flipped. After recombination and mutation, some feature values are randomly set to the values of \(\mathbf {x}^*\) with a given (low) probability – another control parameter – to prevent all features from deviating from \(\mathbf {x}^*\).
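For illustration, a minimal base-R sketch of this mutation step is given below, with a candidate stored as a named list of mixed features; the probabilities p_mut and p_reset, the per-feature standard deviations and the toy feature names are hypothetical stand-ins for MOC's control parameters, not the reference implementation.

```r
# Sketch of the mutation step for a candidate stored as a named list of mixed features.
mutate_candidate <- function(x, x_star, sds, levels_list, p_mut = 0.2, p_reset = 0.1) {
  for (j in seq_along(x)) {
    if (runif(1) < p_mut) {
      if (is.numeric(x[[j]])) {
        x[[j]] <- x[[j]] + rnorm(1, sd = sds[[j]])       # scaled Gaussian mutation
      } else if (is.logical(x[[j]])) {
        x[[j]] <- !x[[j]]                                # flip binary/logical features
      } else {
        x[[j]] <- sample(levels_list[[j]], 1)            # resample a categorical level
      }
    }
    # with low probability, reset the feature to the value of x* to keep candidates sparse
    if (runif(1) < p_reset) x[[j]] <- x_star[[j]]
  }
  x
}

# Toy usage with hypothetical features:
x      <- list(age = 22, housing = "own", telephone = FALSE)
x_star <- list(age = 22, housing = "own", telephone = FALSE)
mutate_candidate(x, x_star,
                 sds = list(age = 5, housing = NA, telephone = NA),
                 levels_list = list(age = NULL, housing = c("own", "rent", "free"), telephone = NULL))
```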

Contrary to NSGA-II, the crowding distance is computed not only in the objective space \(\mathbb {R}^4\) (\(L_1\) norm) but also in the feature space \(\mathcal {X}\) (Gower distance), and the two distances are summed with equal weighting. As a result, candidates are more likely to be kept if they differ greatly from other candidates in their feature values, even if they are similar in their objective values. Diversity in \(\mathcal {X}\) is desired because it increases the chances of obtaining counterfactuals that meet the (hidden) preferences of users. This approach is based on Avila et al. [2].
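A simplified sketch of this survival criterion is given below: the standard NSGA-II crowding distance is computed once on the objective values and once on the (here purely numerical) feature values, and the two terms are added with equal weight. This is only one plausible reading of the modification for illustration; MOC itself uses the Gower distance to handle mixed feature spaces.

```r
# Sketch: crowding distance in objective space plus an analogous term in feature space.
crowding_distance <- function(M) {
  n <- nrow(M)
  if (n < 3) return(rep(Inf, n))
  cd <- numeric(n)
  for (j in seq_len(ncol(M))) {
    ord <- order(M[, j])
    rng <- diff(range(M[, j]))
    if (rng == 0) next
    cd[ord[c(1, n)]] <- Inf                              # boundary candidates are always kept
    inner <- ord[2:(n - 1)]
    cd[inner] <- cd[inner] + (M[ord[3:n], j] - M[ord[1:(n - 2)], j]) / rng
  }
  cd
}

# Equal-weight sum of the two crowding terms:
total_crowding <- function(objective_values, feature_values) {
  crowding_distance(objective_values) + crowding_distance(feature_values)
}
```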

MOC stops if either a predefined number of generations is reached (default) or the performance no longer improves for a given number of successive generations.

4.3 Further Modifications

Initialization. Naively, we could initialize a population by uniformly sampling some feature values from their full range of possible values, while randomly setting other features to the values of \(\mathbf {x}^*\) to induce sparsity. However, if a feature has a large influence on the prediction, it should be more likely that the counterfactual values differ from \(\mathbf {x}^*\). The importance of a feature for an entire dataset can be measured as the standard deviation of the partial dependence plot [12]. Analogously, we propose to measure the feature importance for a single prediction with the standard deviation of the Individual Conditional Expectation (ICE) curve of \(\mathbf {x}^*\). ICE curves show for one observation and for one feature how the prediction changes when the feature is changed, while other features are fixed to the values of the considered observation [9]. The greater the standard deviation of the ICE curve, the higher we set the probability that the feature value is initialized with a different value than the one of \(\mathbf {x}^*\). Therefore, the standard deviation \( \sigma ^{ICE}_j\) of each feature \(x_j\) is transformed into probabilities within \([p_{min}, p_{max}] \cdot 100\%\):

$$\begin{aligned} P(\textit{value differs}) = \frac{(\sigma ^{ICE}_j - \min (\varvec{\sigma }^{ICE}))\cdot (p_{max} - p_{min})}{ \max (\varvec{\sigma }^{ICE}) - \min (\varvec{\sigma }^{ICE})} + p_{min} \end{aligned}$$

with \(\varvec{\sigma }^{ICE} := (\sigma ^{ICE}_1, ..., \sigma ^{ICE}_p)\). \(p_{min}\) and \(p_{max}\) are control parameters with default values 0.01 and 0.99.
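The following small base-R sketch applies this rescaling; the ICE standard deviations passed to it are assumed to have been computed beforehand by evaluating the prediction function on a grid for each feature while holding the other features fixed at \(\mathbf {x}^*\).

```r
# Sketch of the initialization probabilities: rescale the per-feature ICE curve
# standard deviations sigma_ice to [p_min, p_max].
init_probabilities <- function(sigma_ice, p_min = 0.01, p_max = 0.99) {
  rng <- max(sigma_ice) - min(sigma_ice)
  if (rng == 0) return(rep(p_min, length(sigma_ice)))   # all features equally (un)important
  (sigma_ice - min(sigma_ice)) * (p_max - p_min) / rng + p_min
}

# Example: the third feature has the most variable ICE curve, so it is the most
# likely to be initialized with a value different from x*.
init_probabilities(c(0.05, 0.20, 0.80))
```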

Actionability. To get more actionable counterfactuals, extreme values of numerical features outside a predefined range are capped to the upper or lower bound after recombination and mutation. The ranges can either be derived from the minimum and maximum values of the features in the observed dataset or users can define these ranges. In addition, users can identify non-actionable features such as the country of birth or gender. The values of these features are permanently set to the values of \(\mathbf {x}^*\) for all candidates within MOC.
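A minimal base-R sketch of this step, with purely hypothetical feature names and bounds, could look as follows.

```r
# Sketch of the actionability step: cap numerical features at user-defined or
# data-derived bounds and reset non-actionable features to the values of x*.
enforce_actionability <- function(x, x_star, lower, upper, fixed_features) {
  num <- vapply(x, is.numeric, logical(1))
  x[num] <- Map(function(v, lo, hi) min(max(v, lo), hi), x[num], lower[num], upper[num])
  x[fixed_features] <- x_star[fixed_features]            # e.g. sex or country of birth stay fixed
  x
}

# Toy usage: credit_amount is capped at the upper bound, sex stays unchanged.
x <- list(age = 22, credit_amount = 99999, sex = "female")
enforce_actionability(x,
                      x_star = list(age = 22, credit_amount = 5951, sex = "female"),
                      lower  = c(age = 18, credit_amount = 250,   sex = NA),
                      upper  = c(age = 75, credit_amount = 20000, sex = NA),
                      fixed_features = "sex")
```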

Penalization. Furthermore, candidates whose predictions are further away from the target than a predefined distance \(\epsilon \in \mathbb {R}\) can be penalized. After the candidates have been sorted into fronts \(F_{1}\) to \(F_{K}\) using nondominated sorting, the candidate that violates the constraint least will be reassigned to front \(F_{K+1}\), the candidate with the second smallest violation to \(F_{K+2}\), and so on. The concept is based on Deb et al. [7]. Since the constraint violators are in the last fronts, they are less likely to be selected for the next generation.
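The reassignment can be sketched as follows; the front indices and violation values in the example are purely illustrative.

```r
# Sketch of the penalization step: candidates whose prediction misses the target
# by more than epsilon are moved behind the last front, ordered by the size of
# their violation.
penalize_violators <- function(front_id, violation) {
  K <- max(front_id)                                     # number of fronts from nondominated sorting
  offenders <- which(violation > 0)
  front_id[offenders[order(violation[offenders])]] <- K + seq_along(offenders)
  front_id
}

# The smaller violation (0.1) is reassigned to front K + 1 = 3, the larger (0.3) to front 4:
penalize_violators(front_id = c(1, 2, 1, 2, 1), violation = c(0, 0.3, 0, 0.1, 0))
```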

Mutation. Since the aforementioned mutators do not take the data distribution into account and can potentially generate unlikely new candidates, we suggest a conditional mutator. It generates plausible feature values conditional on the values of the other features. For each input feature, we train a transformation tree [14] on \(\mathbf {X}^{obs}\), which is then used to sample values from the conditional distribution. We mutate the features in randomized order since a feature mutation now depends on the previous changes.

How our proposed strategies for initialization and mutation affect MOC is later examined in a benchmark study (Sects. 6 and 7).

4.4 Evaluation Metric

We use the popular hypervolume indicator (HV) [38] to evaluate the quality of our estimated Pareto front, with reference point \(\mathbf {s} = (\inf \limits _{y' \in Y'}|\hat{f}(\mathbf {x}^*) - y'|, 1, p, 1)\), which represents the maximal values of the objectives. We always compute the HV over the complete archive of evaluated solutions.

4.5 Tuning of Parameters

We also use the HV when tuning MOC’s control parameters – the population size and the probabilities for recombining and mutating a feature of a candidate – with iterated F-racing [21]. Furthermore, we let iterated F-racing decide whether our proposed strategies for initialization and mutation of Sect. 4.3 are preferable. Tuning is performed on six binary classification datasets from OpenML [34], which were not used in the benchmark. A summary of the tuning setup and results can be found in Table 5 in Appendix B. Iterated F-racing found both our initialization and mutation strategy to be advantageous. The tuned parameters were used for the credit data application and the benchmark study.

5 Credit Data Application

This section demonstrates the usefulness of MOC for explaining the prediction of credit risk using the German credit dataset [13]. The dataset has 522 complete observations and nine features containing credit and customer information. Categories with few observations were combined. The binary target indicates whether a customer has a ‘good’ or ‘bad’ credit risk. We chose the first observation of the dataset as \(\mathbf {x}^*\); it has the following feature values:

Age | Sex | Job | Housing | Saving accounts | Checking account | Credit amount | Duration | Purpose
22 | Female | 2 | Own | Little | Moderate | 5951 | 48 | Radio/TV

We tuned a support vector machine (with radial basis function (RBF) kernel) on the remaining data with the same tuning setup as for the benchmark (Appendix C). To obtain a single numerical outcome, only the predicted probability for the class ‘good’ credit risk was returned. Using nested cross-validation (CV) with 5-fold CV in both the outer and the inner loop, the model reached an accuracy of 0.64; the predicted probability of a ‘good’ credit risk for \(\mathbf {x}^*\) was 0.41.

We set the desired outcome interval to \(Y' = [0.5, 1]\), which indicates a change to a ‘good’ credit risk. We generated counterfactuals using MOC with the parameter setting selected by iterated F-racing. Candidates with a prediction below 0.5 were penalized.

A total of 136 counterfactuals were found by MOC. In the following, we focus on the 82 of them with predictions within [0.5, 1]. Credit duration was changed for all counterfactuals, followed by credit amount (86%). Since a user might not want to investigate all returned counterfactuals individually (in feature space), we provide a visual summary of the Pareto set in Fig. 1, either as a parallel coordinate plot or as a response surface plot along two features. All counterfactuals had values equal to or smaller than the values of \(\mathbf {x}^*\) for duration and credit amount. The response surface plot illustrates why these feature changes were recommended. The color gradient and contour lines indicate that either duration alone or both credit amount and duration must be decreased to reach the desired outcome. Due to the fourth objective and the conditional mutator, we obtained counterfactuals in high density areas (indicated by the histograms). Counterfactuals in the lower left corner seem to lie in a less favorable region far from \(\mathbf {x}^*\), but they are close to the training data.

Fig. 1.

Visualization of counterfactuals for the first data point \(\mathbf {x}^*\) of the credit dataset. (a) Feature values of the counterfactuals. Only changed features are shown. The given numbers indicate the minimum and maximum feature values of the counterfactuals. (b) Response surface plot for the model prediction along features duration and credit amount, holding other feature values constant at the value of \(\mathbf {x}^*\). Colors and contour lines indicate the predicted value. The white point is \(\mathbf {x}^*\) and the black points are the counterfactuals that only proposed changes in duration and/or credit amount. The histograms show the marginal distributions of the features in the observed dataset.

6 Experimental Setup

In this section, the performance of MOC is evaluated in a benchmark study for binary classification. The datasets are from the OpenML platform [34] and are briefly described in Table 1. We selected datasets with no missing values, with up to 3500 observations and a maximum of 40 features. We randomly selected ten observed data points per dataset as \(\mathbf {x}^*\) and excluded them from the training data. For each dataset, we tuned and trained the following models: logistic regression, random forest, xgboost, RBF support vector machine and a one-hidden-layer neural network. The tuning parameter set and the performance using nested resampling are given in Table 8 in Appendix C. Each model returned only the probability for one class. The desired target for each \(\mathbf {x}^*\) was set to the opposite of the predicted class.

Table 1. Description of benchmark datasets. Legend: task: OpenML task id; Obs: Number of rows; Cont/Cat: Number of continuous/categorical features.
Table 2. MOC’s coverage rate of the compared methods per dataset, averaged over all models. The number of nondominated counterfactuals for each method is given in parentheses. Higher coverage values indicate that MOC dominates the other method. The \(^*\) indicates that the binomial test with \(H_0: p < 0.5\), where p is the probability that a counterfactual is covered by MOC, is significant at the 0.05 level.

The benchmark study aimed to answer two research questions:

  • Q1) How does MOC perform compared to other state-of-the-art methods for counterfactuals?

  • Q2) How do our proposed strategies for initialization and mutation of Sect. 4.3 influence the performance of MOC?

For the first question, we compared MOC – once with and once without our proposed strategies for initialization and mutation – with ‘DiCE’ by Mothilal et al. [24], ‘Recourse’ by Ustun et al. [33] and ‘Tweaking’ by Tolomei et al. [32]. We chose DiCE, Recourse and Tweaking because they are implemented in general open source code libraries. The methods are only applicable to certain models: DiCE can handle neural networks and logistic regression, Recourse can handle logistic regression, and Tweaking can handle random forests. Since Recourse can only process binary and numerical features, we did not train logistic regression on cmc, tic-tac-toe, kr-vs-kp and plasma_retinol. As a baseline, we selected the closest observed data point to \(\mathbf {x}^*\) (according to the Gower distance) that has a prediction equal to our desired outcome. Since this approach is part of the What-If Tool [36], we call it ‘Whatif’.

The parameters of DiCE, Recourse and Tweaking were set to the default values recommended by the authors (Appendix D). To allow for a fair comparison, we initialized MOC with the parameters selected by iterated F-racing, which were tuned on other binary classification datasets (Appendix B). While MOC can potentially return several hundred counterfactuals, the other methods are designed to return one or only a few. We therefore limited the maximum number of counterfactuals to ten for all approaches. Tweaking and Whatif generate only one counterfactual by design. For MOC, we reduced the number of counterfactuals by preferring the ones that achieved the target prediction \(Y'\) and/or the highest HV contribution.

For all methods, only nondominated counterfactuals were considered for the evaluation. Since we are interested in a diverse set of counterfactuals, we evaluate the methods based on the size of their counterfactual set, its objective values, and the coverage rate derived from the coverage indicator by Zitzler and Thiele [38]. The coverage rate is the relative frequency with which counterfactuals of a method are dominated by MOC’s counterfactuals for a certain model and \(\mathbf {x}^*\). A counterfactual covers another counterfactual if it dominates it, and it does not cover the other if both have the same objective values or the other has lower values in at least one objective. A coverage rate of 1 implies that for each generated counterfactual of a method MOC generated at least one dominating counterfactual. We only computed the coverage rate over counterfactuals that met the desired target \(Y'\).
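For illustration, the coverage rate can be computed as sketched below in base R, assuming all objectives are to be minimized; the toy objective values are hypothetical.

```r
# Sketch of the coverage rate: the share of a competitor's counterfactuals that
# are dominated by at least one counterfactual of MOC (minimization objectives).
dominates <- function(a, b) all(a <= b) && any(a < b)

coverage_rate <- function(moc_objs, other_objs) {
  # moc_objs, other_objs: matrices with one row per counterfactual, one column per objective
  covered <- apply(other_objs, 1, function(o) any(apply(moc_objs, 1, dominates, b = o)))
  mean(covered)
}

# Toy example with two objectives: one of the two competitor points is dominated.
coverage_rate(moc_objs   = rbind(c(0.1, 0.2), c(0.3, 0.1)),
              other_objs = rbind(c(0.2, 0.3), c(0.05, 0.05)))
```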

To answer the second research question, we compared the dominated HV over the generations of MOC with and without our proposed strategies for initialization and mutation. As a baseline, we used a random search approach that has the same population size (20) and number of generations (175) as MOC. In each generation, some feature values were uniformly sampled from their set of possible values derived from the observed data and \(\mathbf {x}^*\), while other features were set to the values of \(\mathbf {x}^*\). The HV for one generation was computed over the newly generated candidates combined with the candidates of the previous generations.

Fig. 2.

Boxplots of the objective values and number of nondominated counterfactuals (count) per model for MOC with our proposed strategies for initialization and mutation (mocmod), MOC without these modifications, Whatif, DiCE, Recourse and Tweaking for the datasets diabetes and no2. Lower values are better except for count.

7 Results

Q1) MOC vs. State-of-the-Art Counterfactual Methods

Table 2 shows, per dataset, the coverage rate of each compared method by the tuned MOC. Some fields are empty because Recourse could not process features with more than two classes and Tweaking never achieved the desired outcome for pc1. MOC’s counterfactuals dominated all counterfactuals of DiCE for all datasets. The same holds for Tweaking except on kr-vs-kp and tic-tac-toe, where the counterfactuals of Tweaking had the same objective values as the ones of MOC. MOC’s coverage rate of Recourse exceeded 90% only for boston and ilpd, since Recourse’s counterfactuals often deviated less from \(\mathbf {x}^*\) (but performed worse in the other objectives).

Figure 2 compares MOC (with (mocmod) and without (moc) our proposed strategies for initialization and mutation) with the other methods for the datasets diabetes and no2 and for each model separately. The resulting boxplots for all other datasets are shown in Figs. 4 and 5 in the Appendix and agree with the results shown here. Compared to the other methods, both versions of MOC found the most nondominated solutions that met the target, while changing the fewest features. DiCE performed worse than MOC in all objectives. Tweaking’s counterfactuals were often closer to \(\mathbf {x}^*\), but they were further away from the nearest training data point and changed more features. Tweaking’s counterfactuals often did not reach the desired outcome because they stayed too close to \(\mathbf {x}^*\). MOC with our proposed modifications found counterfactuals closer to \(\mathbf {x}^*\) and to the observed data, but required more feature changes than MOC without the modifications.

Q2) MOC Strategies for Initialization and Mutation

Figure 3 shows the ranks of the dominated HVs for MOC without modifications, for each modification of MOC, and for random search. Ranks were calculated per dataset, model, \(\mathbf {x}^*\) and generation, and were averaged over all datasets, models and \(\mathbf {x}^*\). We transformed HVs to ranks because the HVs are not comparable across different \(\mathbf {x}^*\). MOC with our proposed modifications clearly outperforms MOC without these modifications. The ranks of the initial population were higher when the ICE curve variance was used to initialize the candidates. The use of the conditional mutator led to higher dominated HVs over the generations. We obtained the best performance over the generations when both modifications were used. At each generation, all versions of MOC outperformed random search. Figure 6 in the Appendix shows the ranks over the generations for each dataset separately; they largely agree with the results shown here. The performance gains of MOC compared to random search were particularly evident for higher-dimensional datasets.

Fig. 3.

Comparison of the ranks w.r.t. the dominated HV (domhv) per generation averaged over all models and datasets. For each approach, the population size of each generation was 20. A higher HV and therefore a higher rank is better. Legend: moc: MOC without our proposed modifications; moccond: MOC with the conditional mutator; mocice: MOC with the ICE curve variance initialization; mocmod: MOC with both modifications; random: random search.

8 Conclusion and Outlook

In this paper, we introduced Multi-Objective Counterfactuals (MOC), which to the best of our knowledge is the first method to formalize the counterfactual search as a multi-objective optimization problem. Compared to state-of-the-art approaches, MOC returns a diverse set of counterfactuals with different trade-offs between our proposed objectives. Furthermore, MOC is model-agnostic and suited for classification, regression and mixed feature spaces. We demonstrated the usefulness of MOC for explaining a prediction on the German credit dataset and showed in a benchmark study that MOC finds more counterfactuals than other counterfactual methods, and that these counterfactuals are closer to the training data and require fewer feature changes. Our proposed initialization strategy (based on ICE curve variances) and our conditional mutator resulted in higher performance in fewer evaluations and in counterfactuals that were closer to the data point of interest and to the observed data.

MOC has only been evaluated on binary classification, and only with respect to the dominated HV and the individual objectives. It is an open question how to let users select the counterfactuals that meet their – a priori unknown – trade-off between the objectives. We leave these investigations to future research.

9 Electronic Submission

The complete code of the algorithm and the code to reproduce the experiments and results of this paper are available at https://github.com/susanne-207/moc. The implementation of MOC is based on our implementation of [19], which we also used for [3]. We will provide an open source R library with our implementation of the method based on the iml package [23].