
1 Introduction

Although supervised machine learning has become the state-of-the-art solution to many predictive problems, there is an emerging discussion on the underspecification of such methods, which manifests as differing model behaviour between the training and practical settings [14]. This is especially crucial when the domain requires proper accountability for the systems supporting decisions [35, 38, 42]. To live with black boxes, several explainability methods have been introduced to help us understand models' behaviour [5, 19, 23, 36, 40]; many are designed specifically for deep neural networks [6, 34, 44, 49]. Explanations are widely used in practice through their (often estimation-based) implementations available to machine learning practitioners in various software contributions [4, 8, 11].

Nowadays, robustness and certainty are crucial when explanations are used in data science practice to understand black-box machine learning models and, thus, to facilitate rationale explanation, knowledge discovery and responsible decision-making [9, 22]. Notably, several studies evaluate explanations [1, 2, 10, 27, 29, 51] and showcase their various flaws, from which we perceive an existing robustness gap; in critical domains, one could call it a security breach. Apart from promoting wrong explanations, this phenomenon can be exploited through adversarial attacks on model explanations to achieve manipulated results. In regulated areas, these types of attacks may be carried out to deceive an auditor.

Fig. 1.

Framework for fooling model explanations via data poisoning. The red color indicates the adversarial route, a potential security breach, which an attacker may use to manipulate the explanation. Researchers could use this method to provide a misleading rationale for a given phenomenon, while auditors may purposely conceal the suspected, e.g. biased or irresponsible, reasoning of a black-box model. (Color figure online)

Figure 1 illustrates a process in which the developer aims to conceal the undesired behaviour of the model by supplying a poisoned dataset for the model audit. Not every explanation is equally good: just as models require proper performance validation, explainability methods need a similar assessment. In this paper, we evaluate the robustness of Partial Dependence (PD) [19] and highlight the possibility of its adversarial manipulation (see Figs. 4 and 5 in the latter part of the paper). We summarize the contributions as follows:

  • (1) We introduce a novel concept of using a genetic algorithm for manipulating model explanations. This allows for a convenient generalization of the attacks in a model-agnostic and explanation-agnostic manner, which is not the case for most of the related work. Moreover, we use a gradient algorithm to perform fooling via data poisoning efficiently for neural networks.

  • (2) We explicitly target PD to highlight the potential of its adversarial manipulation, which has not been done before. Our method provides a sanity check for the future use of PD by responsible machine learning practitioners. Evaluation of the constructed methods in extensive experiments shows that model complexity significantly affects the magnitude of the possible explanation manipulation.

2 Related Work

In the literature, there is a considerable number of attacks on model explanations specific to deep neural networks [16, 21, 25, 30, 53]. At their core, they provide various algorithms for fooling neural network interpretability and explainability, mainly for image-based predictions. Such explanations are commonly presented through saliency maps [45], where each model input is given its attribution to the prediction [6, 43, 44, 49]. Although explanations can be used to improve the adversarial robustness of machine learning models [37], we target the explanations themselves instead. When an explanation is viewed as a function of the model and the data, changing either of these variables can produce a different result [54]. Heo et al. [25] and Dimanov et al. [15] propose fine-tuning a neural network to undermine its explainability capabilities and conceal model unfairness. The assumption is to alter the model's parameters without a drop in performance, which can be achieved with an objective function minimizing the distance between the explanations and an arbitrarily set target. Note that [15] indirectly changes partial dependence while changing the model. Aivodji et al. [3] propose creating a surrogate model that approximates the unfair black-box model and explains its predictions in a fair manner, e.g. with relative variable dependence.

An alternative idea is to fool fairness and explainability via a change of the data, since its (background) distribution greatly affects the interpretation results [28, 30]. Solans et al. [48] and Fukuchi et al. [20] investigate concealing unfairness via data change using gradient methods. Dombrowski et al. [16] propose an algorithm for manipulating saliency explanations using gradient-based data perturbations. In contrast, we introduce a genetic algorithm and focus on other machine learning predictive models trained on tabular data. Slack et al. [46] contributed adversarial attacks on post-hoc, model-agnostic explainability methods for local-level understanding, namely LIME [40] and SHAP [36]. Their framework provides a way to construct a biased classifier whose predictions receive safe explanations.

Since we focus on global-level explanations instead, the results modify a view of the overall model behaviour, not one specific to a single data point or image. Lakkaraju and Bastani [32] conducted a thought-provoking study on the misleading effects of manipulated Model Understanding through Subspace Explanations (MUSE) [33], which provides arguments for why such research is crucial for achieving responsibility in machine learning use. Further, the robustness of neural networks [13, 50] and of counterfactual explanations [47] has become important, as one wants to trust black-box models and extend their use to sensitive tasks. Our experiments extend to global explanations the indication of Jia et al. [29] that there is a correlation between model complexity and explanation quality.

3 Partial Dependence

In this paper, we target one of the most popular explainability methods for tabular data, which at its core presents the expected value of the model's predictions as a function of a selected variable. Partial Dependence, originally introduced as plots by Friedman [19], shows the expected prediction over the marginal joint distribution of the other variables. These values can be relatively easily estimated and are widely incorporated into various tools for model explainability [7, 8, 11, 24, 39]. The theoretical explanation has a practical estimator used to compute the results, later visualized as a line plot showing the expected prediction for a given variable; also called a profile [12]. PD for model f and variable c in a random vector \(\mathcal X\) is defined as \(\mathcal{P}\mathcal{D}_c(\mathcal X,z) \,{:}{=}\, E_{\mathcal X_{-c}}\left[ f(\mathcal X^{c|=z})\right] \), where \(\mathcal X^{c|=z}\) stands for the random vector \(\mathcal X\) with its c-th variable replaced by the value z, and \(\mathcal X_{-c}\) denotes the distribution of the random vector \(\mathcal X\) with the c-th variable set to a constant. Thus, PD at point z is the expected value of model f given that the c-th variable is set to z. The standard estimator of this value for a dataset X is given by \(\widehat{\mathcal{P}\mathcal{D}_c}(X,z) \,{:}{=}\, \frac{1}{N}\sum _{i=1}^N f\left( X_i^{c|=z}\right) ,\) where \(X_i\) is the i-th row of the matrix X and the previously introduced symbols are used accordingly. To simplify the notation, we write \(\mathcal{P}\mathcal{D}\) and omit z and c where the context is clear.
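The estimator can be illustrated with a minimal numpy sketch (the `predict` callable and the function name below are illustrative, not the exact implementation from our repository):

```python
import numpy as np

def partial_dependence(predict, X, c, grid):
    """Estimate PD of the c-th variable at the values in `grid`.

    predict : callable mapping an (N, P) array to a vector of N predictions
    X       : (N, P) reference dataset used for the estimation
    c       : index of the explained variable
    grid    : 1-D array of values z at which PD is evaluated
    """
    pd_values = np.empty(len(grid))
    for i, z in enumerate(grid):
        X_z = X.copy()
        X_z[:, c] = z                        # replace the c-th variable with z in every row
        pd_values[i] = predict(X_z).mean()   # average prediction over all rows
    return pd_values
```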

4 Fooling Partial Dependence via Data Poisoning

Many explanations treat the dataset X as fixed; however, this is precisely the single point of failure on which we aim to conduct the attack. In what follows, we examine the behaviour of \(\mathcal{P}\mathcal{D}\) by looking at it as a function whose argument is an entire dataset. For example, if the dataset has N instances and P variables, then \(\mathcal{P}\mathcal{D}\) is treated as a function over \(N\times P\) dimensions. Moreover, because of the complexity of black-box models, \(\mathcal{P}\mathcal{D}\) becomes a function over an extremely high-dimensional space in which variable interactions cause unpredictable behaviour. Explanations are computed using their estimators, in which a significant simplification may occur; thus, a slight shift of the dataset used to calculate \(\mathcal{P}\mathcal{D}\) may lead to unintended results (for example, see [26] and the references given there).

We aim to change the underlying dataset used to produce the explanation so as to achieve the desired change in \(\mathcal{P}\mathcal{D}\). Figure 1 demonstrates the main threat of an adversarial attack on a model explanation using data poisoning, which is concealing the suspected behaviour of black-box models. The critical assumption is that an adversary has the possibility to modify the dataset arbitrarily, e.g. in a healthcare or financial audit, or a research review. Even if this were not the case, in modern machine learning, where in practice dozens of variables are used to train complex models, such data shifts might be only a minor change that a person looking at the dataset or its distribution will not be able to identify.

We approach fooling \(\mathcal{P}\mathcal{D}\) as an optimization problem for a given criterion of attack efficiency, later called the attack loss. It originates from [16], where a similar loss function was introduced for the manipulation of local-level model explanations in an image-based predictive task. This work instead introduces an attack loss that aims to change the output of a global-level explanation via data poisoning. The explanation's weaknesses concerning data distribution and causal inference are exploited using two ways of optimizing the loss:

  • Genetic-based algorithm that does not make any assumption about the model's structure – the black-box path from data inputs to the output prediction; thus, it is model-agnostic. Further, we posit that for a vast number of explanations, particularly post-hoc global-level ones, the algorithm does not make any assumption about their structure either; thus, it becomes explanation-agnostic.

  • Gradient-based algorithm that is specifically designed for models with differentiable outputs, e.g. neural networks [15, 16].

We discuss and evaluate two possible fooling strategies:

  • Targeted attack changes the dataset to achieve the closest explanation result to the predefined desired function [16, 25].

  • Robustness check aims to achieve the most distant model explanation from the original one by changing the dataset, which refers to the sanity check [1].

For practical reasons, we define the distance between the two calculated \(\mathcal{P}\mathcal{D}\) vectors as \(\Vert x - y\Vert \,{:}{=}\, \frac{1}{I}\sum _{i=1}^{I} (x_i - y_i)^{2}\), yet other distance measures may be used to extend the method.

4.1 Attack Loss

The intuition behind the attacks is to find a modified dataset that minimizes the attack loss. The changed dataset, denoted as \(X\in \mathbb {R}^{N \times P}\), is an argument of that function; hence, an optimal X is the result of the attack. Let \(Z \subset \mathbb {R}\) be the set of points used to calculate the explanation. Let \(T:Z\rightarrow \mathbb {R}\) be the target explanation; we write just T to denote the vector of its values over the whole Z. Let \(g_c^Z:\mathbb {R}^{N \times P}\rightarrow \mathbb {R}^{|Z|}\) be the actual explanation calculated at the points in Z; we write \(g_c\) for simplicity. Finally, let \(X'\in \mathbb {R}^{N \times P}\) be the original (constant) dataset. We define the attack loss as \(\mathcal {L}(X) \,{:}{=}\, \mathcal {L}^{g,\;s}(X)\), where g is the explanation to be fooled and s denotes the strategy of the attack, which determines the minimized objective. The aim is to minimize \(\mathcal {L}\) with respect to the dataset X used to calculate the explanation. We never change the values of the explained variable c in the dataset.

In the targeted attack, we aim to minimize the distance between the target model behaviour T and the result of model explanation calculated on the changed dataset. We denote this strategy by t and define \(\mathcal {L}^{g,\;t}(X) = \Vert g_c(X) - T\Vert .\) Since we focus on a specific model-agnostic explanation, we substitute \(\mathcal{P}\mathcal{D}\) in place of g to obtain \(\mathcal {L}^{\mathcal{P}\mathcal{D},\;t}(X) = \Vert \mathcal{P}\mathcal{D}_c(X) - T\Vert .\) This substitution can be generalized for various global-level model explanations, which rely on using a part of the dataset for computation.

In the robustness check, we aim to maximize the distance between the result of the model explanation calculated on the original dataset, \(g_c(X')\), and the one calculated on the changed dataset; thus, a minus sign is required. We denote this strategy by r and define \(\mathcal {L}^{g,\;r}(X) = -\Vert g_c(X) - g_c(X')\Vert .\) Accordingly, we substitute \(\mathcal{P}\mathcal{D}\) in place of g to obtain \(\mathcal {L}^{\mathcal{P}\mathcal{D},\;r}(X) = -\Vert \mathcal{P}\mathcal{D}_c(X) - \mathcal{P}\mathcal{D}_c(X')\Vert .\) Note that \(\mathcal {L}^{g,\;s}\) may vary depending on the explanation used; specifically for \(\mathcal{P}\mathcal{D}\), it is useful to centre the explanation before calculating the distances, which is the default behaviour in our implementation: \(\mathcal {L}^{\mathcal {\overline{PD}},\;r}(X) = -\Vert \mathcal {\overline{PD}}_c(X) - \mathcal {\overline{PD}}_c(X') \Vert ,\) where \(\mathcal {\overline{PD}}_c(X) \,{:}{=}\, \mathcal{P}\mathcal{D}_c(X) - \frac{1}{|Z|}\sum _{z\in Z} \mathcal{P}\mathcal{D}_c(X,z).\) We prefer this second approach of comparing explanations using the centred \(\mathcal {\overline{PD}}\), as it forces changes in the shape of the explanation instead of promoting a vertical shift of the profile with an insignificant change in shape.
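Both strategies can be expressed in a few lines of numpy on top of the `partial_dependence` sketch from the previous section (again, the function names are illustrative and the exact implementation may differ):

```python
def mse(x, y):
    # squared distance averaged over the |Z| grid points
    return np.mean((x - y) ** 2)

def targeted_loss(predict, X, c, grid, target):
    # L^{PD,t}(X) = || PD_c(X) - T ||
    return mse(partial_dependence(predict, X, c, grid), target)

def robustness_loss(predict, X, X_original, c, grid, centred=True):
    # L^{PD,r}(X) = -|| PD_c(X) - PD_c(X') ||, optionally on centred profiles
    pd_x = partial_dependence(predict, X, c, grid)
    pd_o = partial_dependence(predict, X_original, c, grid)
    if centred:
        pd_x = pd_x - pd_x.mean()
        pd_o = pd_o - pd_o.mean()
    return -mse(pd_x, pd_o)
```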

4.2 Genetic-Based Algorithm

We introduce a novel strategy for fooling explanations based on the genetic algorithm, as it is a simple yet powerful method for real-parameter optimization [52]. We do not encode genes conventionally but deliberately use this term to distinguish the method from other types of evolutionary algorithms [18]. The method is invariant to the model's definition and to the considered explanation; thus, it is model-agnostic and explanation-agnostic. These traits are crucial when working with black-box machine learning, where versatile solutions are convenient.

Fooling \(\mathcal{P}\mathcal{D}\) in both strategies relies on a similar genetic algorithm. The main idea is to define an individual as an instance of the dataset and to iteratively perturb its values to achieve the desired explanation target, or to perform the robustness check and observe the change. The individuals are initialized with the values of the original dataset \(X'\) to form a population. The initialization ends with mutating the individuals using a higher-than-default variance of perturbations. Then, in each iteration, the individuals are randomly crossed, mutated, evaluated with the attack loss, and selected based on the loss values. Crossover swaps columns between individuals to produce new ones, which are then added to the population. The number of swapped columns can be randomized; the number of parents can also be parameterized. Mutation adds Gaussian noise to the individuals using scaled standard deviations of the variables. It is possible to constrain the data change to the original range of variable values, and also to keep some variables unchanged. Evaluation calculates the loss for each individual, which requires computing the explanation for each dataset. Selection reduces the number of individuals using rank selection, with elitism guaranteeing that several of the best individuals remain in the next population. A simplified sketch of this loop is given below.
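The following sketch illustrates the loop under simplifying assumptions (plain truncation selection in place of rank selection with elitism, and no range constraints); the function name and hyperparameters are ours for illustration only:

```python
import numpy as np

def genetic_fooling(loss, X_original, c, pop_size=50, n_iter=200,
                    mutation_scale=0.05, seed=0):
    """Minimize `loss` over datasets by perturbing all variables except column c."""
    rng = np.random.default_rng(seed)
    n, p = X_original.shape
    sd = X_original.std(axis=0) * mutation_scale   # per-variable mutation scale
    sd[c] = 0.0                                    # never change the explained variable
    # initialization: copies of the original data mutated with a larger variance
    population = [X_original + rng.normal(0.0, 3 * sd, size=(n, p))
                  for _ in range(pop_size)]
    for _ in range(n_iter):
        # crossover: each child swaps a random subset of columns between two parents
        children = []
        for _ in range(pop_size // 2):
            a, b = rng.choice(pop_size, size=2, replace=False)
            cols = rng.choice(p, size=max(1, p // 2), replace=False)
            child = population[a].copy()
            child[:, cols] = population[b][:, cols]
            children.append(child)
        population += children
        # mutation: Gaussian noise scaled by the variables' standard deviations
        population = [ind + rng.normal(0.0, sd, size=(n, p)) for ind in population]
        # evaluation and selection: keep the individuals with the lowest attack loss
        population.sort(key=loss)
        population = population[:pop_size]
    return population[0]
```

For the robustness check, `loss` could be, e.g., `lambda X: robustness_loss(model.predict, X, X_original, c, grid)` from the previous sketch.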

We also considered crossover through an exchange of rows between individuals, but it might drastically shift the datasets and move them apart. Additionally, a worthy mutation is to add or subtract whole numbers from integer-encoded (categorical) variables. We further discuss the algorithm's details in the Supplementary material. The introduced attack is model-invariant because no derivatives are needed for the optimization, which allows evaluating explanations of black-box models. While we found this method a sufficient generalization of our framework, a more efficient optimization is possible when prior knowledge about the structure of the model and explanation is assumed.

4.3 Gradient-Based Algorithm

Gradient-based methods are state-of-the-art optimization approaches, especially in the domain of deep neural networks [34]. The main idea of this algorithm is to use gradient descent to optimize the attack loss, relying on the differentiability of the model's output with respect to the input data. Such an assumption allows for faster and more accurate convergence to a local minimum using one of the stochastic optimizers; in our case, Adam [31]. Note that the differentiability assumption concerns the input data, not the model's parameters. We derive the gradients \(\nabla _{X_{-c}} \mathcal {L}^{g,\;s}\) for fooling explanations based on their estimators, not the theoretical definitions. This is because the input data is assumed to be a random variable in the theoretical definition of \(\mathcal{P}\mathcal{D}\), making it impossible to calculate a derivative over the input dataset. In practice, the estimator produces the explanation, so we do not derive our method directly from the definition.

Although we specifically consider neural networks because of their strong relation to differentiation, the algorithm's theoretical derivation does not require this type of model. For brevity, we derive the gradients \(\nabla _{X_{-c}} \mathcal {L}^{\mathcal{P}\mathcal{D},\;t}\), \(\nabla _{X_{-c}} \mathcal {L}^{\mathcal{P}\mathcal{D},\;r}\), and \(\nabla _{X_{-c}} \mathcal {L}^{\mathcal {\overline{PD}},\;r}\) in the Supplementary material. Overall, the gradient-based algorithm is similar to the genetic-based algorithm in that we iteratively change the dataset used to calculate the explanation. Nevertheless, its main assumption is that the model provides an interface for the differentiation of the output with respect to the input, which is not the case for black-box models.
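With automatic differentiation, the attack amounts to expressing the PD estimator with differentiable operations and letting the optimizer update the poisoned dataset. A condensed PyTorch sketch of the targeted variant, under the assumption that `model` is a differentiable torch module (names and defaults are illustrative):

```python
import torch

def gradient_fooling(model, X_original, c, grid, target, lr=1e-2, n_iter=500):
    X = torch.tensor(X_original, dtype=torch.float32, requires_grad=True)
    target = torch.tensor(target, dtype=torch.float32)
    optimizer = torch.optim.Adam([X], lr=lr)
    for _ in range(n_iter):
        optimizer.zero_grad()
        # PD estimator expressed with differentiable torch operations
        pd = torch.stack([
            model(torch.cat([X[:, :c],
                             torch.full((X.shape[0], 1), float(z)),
                             X[:, c + 1:]], dim=1)).mean()
            for z in grid
        ])
        loss = torch.mean((pd - target) ** 2)   # attack loss L^{PD,t}
        loss.backward()
        X.grad[:, c] = 0.0                      # keep the explained variable unchanged
        optimizer.step()
    return X.detach().numpy()
```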

5 Experiments

Setup. We conduct experiments on two predictive tasks to evaluate the algorithms and conclude with illustrative scenario examples, which refer to the framework shown in Fig. 1. The first dataset is a synthetic regression problem that refers to Friedman's work [19], where the inputs X are independent variables uniformly distributed on the interval [0, 1], while the target y is created according to the formula \(y(X) = 10 \sin (\pi \boldsymbol{\cdot } X_1 \boldsymbol{\cdot } X_2) + 20 (X_3 - 0.5)^{2} + 10 X_4 + 5 X_5.\) Only 5 variables are actually used to compute y, while the remaining variables are independent of it. We refer to this dataset as friedman and target explanations of the variable \(X_1\). The second dataset is a real classification task from UCI [17], which has 5 continuous variables, 8 categorical variables, and an evenly-distributed binary target. We refer to this dataset as heart and target explanations of the variable age. Additionally, we keep the discrete variables constant during the performed fooling because we mainly rely on incremental changes in the values of continuous variables, and categorical variables are out of the scope of this work.
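For reference, a dataset of this form can be generated as follows (here, n and p are illustrative defaults, not the exact dimensions used in our experiments):

```python
import numpy as np

def make_friedman(n=500, p=10, seed=0):
    """Friedman-style regression task: only the first 5 of p uniform variables drive y."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, p))
    y = (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
         + 20 * (X[:, 2] - 0.5) ** 2
         + 10 * X[:, 3]
         + 5 * X[:, 4])
    return X, y
```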

Fig. 2.

Fooling Partial Dependence of neural network models (rows) fitted to the friedman and heart datasets (columns). We performed multiple randomly initiated gradient-based fooling algorithms on the explanations of variables \(X_1\) and age respectively. The blue line denotes the original explanation, the red lines are the fooled explanations, and in the targeted attack, the grey line denotes the desired target. We observe that the explanations’ vulnerability greatly increases with model complexity. Interestingly, the algorithm seems to converge to two contrary optima when no target is provided. (Color figure online)

Results. Figure 2 presents the main result of the paper, which is that PD can be manipulated. We use the gradient-based algorithm to change the explanations of feedforward neural networks via data poisoning. The targeted attack aims to arbitrarily change the monotonicity of PD, which is evident in both predictive tasks. The robustness check finds the most distant explanation from the original one. We perform the fooling 30 times for each subplot, and the Y-axis denotes the model's architecture: layers\(\times \)neurons. We observe that PD explanations are especially vulnerable in complex models.

Next, we aim to evaluate the PD of various state-of-the-art machine learning models and their complexity levels; we denote: Linear Model (LM), Support Vector Machine (SVM), Random Forest (RF), Gradient Boosting Machine (GBM), Decision Tree (DT), K-Nearest Neighbours (KNN), feedforward Neural Network (NN). The model-agnostic nature of the genetic-based algorithm allows this comparison, as it might be theoretically and/or practically impossible to differentiate the model's output with respect to the input, and thus to differentiate the explanations and the loss.

Table 1. Attack loss values of the robustness checks for Partial Dependence of various machine learning models (top), and complexity levels of tree-ensembles (bottom). Each value corresponds to the scaled distance between the original explanation and the changed one. We perform the fooling 6 times and report the \(\text {mean}{\pm }\text {sd}\). We observe that the explanations’ vulnerability increases with GBM complexity.

Table 1 presents the results of the robustness checks for Partial Dependence of various machine learning models and complexity levels. Each value corresponds to the distance between the original explanation and the changed one; multiplied by \(10^3\) in friedman and \(10^6\) in heart for clarity. We perform the checks 6 times and report the \(\text {mean} \pm \text {standard deviation}\). Note that we cannot compare the values between tasks, as their magnitudes depend on the prediction range. We found the explanations of NN, SVM and deep DT to be the most vulnerable to the fooling methods (top of Table 1). In contrast, RF seems to provide robust explanations; thus, we further investigate the relationship between the tree models' complexity and the explanation stability (bottom of Table 1) to conclude that increasing complexity yields more vulnerable explanations, which is consistent with Fig. 2. We attribute the differences between the results for RF and GBM to the concept of the bias-variance tradeoff. In some cases (heart, RF), explanations of too simple models become vulnerable too, since underfitted models may be as uncertain as overfitted ones.

Fig. 3.

Fooling Partial Dependence of a \(3\times 32\) neural network fitted to the friedman (top row) and heart (bottom row) datasets. We performed multiple randomly initiated gradient-based fooling algorithms on the explanations of variables \(X_1\) and age respectively. We observe that centring PD is beneficial because it stops the manipulated explanation from shifting.

Table 2. Attack loss values of the robustness checks for PD of various ReLU neural networks. We add additional noise variables to the data before model fitting, e.g. friedman+2 denotes the referenced dataset with 2 additional variables sampled from the normal distribution. We perform the fooling 30 times and report the \(\text {mean} \pm \text {sd}\). We observe that the explanations’ vulnerability greatly increases with task complexity.

Ablation Study. We further discuss additional results that may be of interest to gain a broader context of this work. Figure 3 presents the distinction between the robustness check for centred Partial Dependence, which is the default algorithm, and the robustness check for uncentred PD. We use the gradient-based algorithm to change the explanations of a 3 layers \(\times \) 32 neurons ReLU neural network and perform the fooling 30 times for each subplot. We observe that centring the explanation in the attack loss definition is necessary to achieve a change in the explanation's shape; otherwise, the explanation shifts upwards or downwards, essentially changing the mean of the prediction. This observation was consistent across most of the models regardless of their complexity.

Table 2 presents the impact of additional noise variables in data on the performed fooling. We observe that higher data dimensions favor vulnerable explanations (higher loss). The analogous results for targeted attack were consistent; however, showcased almost zero variance (partially observable in Figs. 2 and 3).

Adversarial Scenario. Following the framework shown in Fig. 1, we consider three stakeholders apparent in explainable machine learning: a developer, an auditor, and prediction recipients. Let us assume that the model predicting a heart attack should not take into account a patient's sex, although it might be a valuable predictor. An auditor analyses the model using Partial Dependence; therefore, the developer supplies a poisoned dataset for this task. Figure 5 presents two possible outcomes of the model audit, concealed and suspected, which are unequivocally bound to the explanation result and the dataset. In the first, the model is unchanged while the explanation presents an even dependence on sex (equal to about 0.5), concealing the violation of the stated assumption; thus, the prediction recipients become vulnerable. Additionally, we supply an alternative scenario in which the developer wants to provide evidence of model unfairness to raise suspicion (dependence for class 0 equal to about 0.7).

Supportive Scenario. In this work, we consider an equation of three variables: data, model, and explanation; thus, we poison the data to fool the explanation while the model remains unchanged. Figures 4 and 5 showcase an exemplary data shift occurring in the dataset after the attack, where changing only a few explanatory variables results in bending PD. We present a moderate change in the data distribution to introduce the concept of analysing such relationships for explanatory purposes, e.g. the first result might suggest that resting blood pressure and maximum heart rate contribute to the explanation of age, while the second result suggests how these variables contribute to the explanation of sex. We conclude that the data shift is worth exploring to analyse variable interactions in models.

6 Limitations and Future Work

We find these results both alarming and informative, yet we proceed to discuss the limitations of the study. The first is the assumption that, in an adversarial scenario, the auditor has no access to the original (unknown) data, e.g. in a research or healthcare audit. While the detectability of fooling is worth analyzing, our work does not focus solely on the adversarial manipulation of PD, as we sincerely hope such data poisoning is not occurring in practice. Even more, we aim to underline the crucial context of data distribution in the interpretation of explanations and to introduce a new way of evaluating PD and other black-box explanations by generalizing the methods with genetic-based optimization.

Fig. 4.

Partial Dependence of age in the SVM model prediction of a heart attack (class 0). Left: Two manipulated explanations suggest either an increasing or a decreasing relationship between age and the predicted outcome, depending on the desired target. Right: Distribution of the explained variable age and the two poisoned variables from the data, in which the remaining ten variables contributing to the explanation remain unchanged. The mean of the variables' Jensen-Shannon distance equals only 0.027 in the upward scenario and 0.021 in the downward scenario, which might seem like an insignificant change of the data distribution.

Fig. 5.

Partial Dependence of sex in the SVM model prediction of a heart attack (class 0). Left: Two manipulated explanations present a suspected or concealed variable contribution to the predicted outcome. Right: Distribution of the three poisoned variables from the data, in which sex and the remaining nine variables contributing to the explanation remain unchanged. The mean of the variables' Jensen-Shannon distance equals only 0.023 in the suspected scenario and 0.026 in the concealed scenario.
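Such per-variable Jensen-Shannon distances between the original and poisoned data can be computed, for instance, by binning each variable and comparing the resulting histograms; the sketch below follows this assumption and may differ from the exact procedure behind the reported values:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def mean_js_distance(X_original, X_poisoned, columns, n_bins=20):
    """Average Jensen-Shannon distance between original and poisoned variables."""
    distances = []
    for c in columns:
        lo = min(X_original[:, c].min(), X_poisoned[:, c].min())
        hi = max(X_original[:, c].max(), X_poisoned[:, c].max())
        bins = np.linspace(lo, hi, n_bins + 1)
        p, _ = np.histogram(X_original[:, c], bins=bins)   # counts are normalized
        q, _ = np.histogram(X_poisoned[:, c], bins=bins)   # inside jensenshannon
        distances.append(jensenshannon(p, q))
    return float(np.mean(distances))
```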

Another limitation is the size of the datasets used. We engaged with larger datasets during the experiments but were discouraged by the contradictory view that increasing the dataset size might be considered as exaggerating the results. PD clearly becomes more complex with increasing data dimensions; moreover, a higher-dimensional space should entail more possible ways of manipulation, which is evident in the ablation study. We note that in practice, the explanations might require 100–1000 observations for the estimation (e.g. kernel SHAP, PDP), hence the size of the datasets in this study. Finally, we omit datasets like Adult and COMPAS because they mainly consist of categorical variables.

Future Work. We foresee several directions for future work, e.g. evaluating the successor to PD – Accumulated Local Effects (ALE) [5] – although the practical estimation of ALE presents challenges. Second, the attack loss may be enhanced with regularization, e.g. a penalty for a substantial change in the data or in the mean of the model's prediction, to achieve more meaningful fooling with less evidence. We focus in this work on univariate PD, but targeting bivariate PD can also be examined. Overall, the landscape of global-level, post-hoc model explanations is a broad domain, and the potential of a security breach in other methods, e.g. SHAP, should be further examined. Enhancements to the model-agnostic and explanation-agnostic genetic algorithm are thereby welcome.

Another future direction would be to enhance the stability of PD. Rieger and Hansen [41] present a defence strategy against the attack via data change [16] by aggregating various explanations, which produces robust results without changing the model.

7 Conclusion and Impact

We highlight that Partial Dependence can be maliciously altered, e.g. bent and shifted, with adversarial data perturbations. The introduced genetic-based algorithm allows evaluating the explanations of any black-box model. Experimental results on various models and their sizes showcase the hidden debt of model complexity related to explainable machine learning. Explanations of low-variance models prove to be robust to the manipulation, while very complex models should not be explained with PD, as they become vulnerable to changes in the reference data distribution. Robustness checks lead to varied modifications of the explanations depending on the setting, e.g. they may propose two opposite PD profiles, which is why it is advised to perform the checks multiple times.

This work investigates the vulnerability of global-level, post-hoc model explainability from the adversarial standpoint, which relates to the responsibility and security of artificial intelligence use. The possible manipulation of PD leads to the conclusion that explanations used to explain black-box machine learning may be considered black boxes themselves. These explainability methods are undeniably useful through their implementations in various popular software. However, just as machine learning models cannot be developed without extensive testing and an understanding of their behaviour, their explanations cannot be used without critical thinking. We recommend ensuring the reliability of the explanation results through the introduced methods, which can also be used to study model behaviour under data shift. Code for this work is available at https://github.com/MI2DataLab/fooling-partial-dependence.