Abstract
Many methods have been developed to understand complex predictive models, and high expectations are placed on post-hoc model explainability. It turns out that such explanations are neither robust nor trustworthy, and they can be fooled. This paper presents techniques for attacking Partial Dependence (plots, profiles, PDP), which are among the most popular methods of explaining any predictive model trained on tabular data. We showcase that PD can be manipulated in an adversarial manner, which is alarming, especially in financial or medical applications where auditability has become a must-have trait supporting black-box machine learning. The fooling is performed via poisoning the data to bend and shift explanations in the desired direction using genetic and gradient algorithms. We believe this to be the first work using a genetic algorithm for manipulating explanations, which is transferable as it generalizes both ways: in a model-agnostic and an explanation-agnostic manner.
1 Introduction
Although supervised machine learning has become the state-of-the-art solution to many predictive problems, there is an emerging discussion on the underspecification of such methods, which manifests as differing model behaviour between training and practical settings [14]. This is especially crucial when proper accountability for the systems supporting decisions is required by the domain [35, 38, 42]. Living with black boxes, several explainability methods have been presented to help us understand models’ behaviour [5, 19, 23, 36, 40], many of which are designed specifically for deep neural networks [6, 34, 44, 49]. Explanations are widely used in practice through their (often estimation-based) implementations available to machine learning practitioners in various software contributions [4, 8, 11].
Nowadays, robustness and certainty become crucial when using explanations in data science practice to understand black-box machine learning models, and thus to facilitate rationale explanation, knowledge discovery and responsible decision-making [9, 22]. Notably, several studies evaluate explanations [1, 2, 10, 27, 29, 51], showcasing various flaws from which we perceive an existing robustness gap; in critical domains, one could call it a security breach. Apart from promoting wrong explanations, this phenomenon can be exploited to mount adversarial attacks on model explanations and achieve manipulated results. In regulated areas, these types of attacks may be carried out to deceive an auditor.
Figure 1 illustrates a process in which the developer aims to conceal the undesired behaviour of the model by supplying a poisoned dataset for model audit. Not every explanation is equally good—just as models require proper performance validation, we need similar assessments of explainability methods. In this paper, we evaluate the robustness of Partial Dependence (PD) [19] and, moreover, highlight the possibility of its adversarial manipulation (see Figs. 4 and 5 in the latter part of the paper). We summarize the contributions as follows:

(1) We introduce a novel concept of using a genetic algorithm for manipulating model explanations. This allows for a convenient generalization of the attacks in a model-agnostic and explanation-agnostic manner, which is not the case for most of the related work. Moreover, we use a gradient algorithm to perform fooling via data poisoning efficiently for neural networks.

(2) We explicitly target PD to highlight the potential of their adversarial manipulation, which was not done before. Our method provides a sanity check for the future use of PD by responsible machine learning practitioners. Evaluation of the constructed methods in extensive experiments shows that model complexity significantly affects the magnitude of the possible explanation manipulation.
2 Related Work
In the literature, there is a considerable amount of attacks on model explanations specific to deep neural networks [16, 21, 25, 30, 53]. At their core, they provide various algorithms for fooling neural network interpretability and explainability, mainly of image-based predictions. Such explanations are commonly presented through saliency maps [45], where each model input is given its attribution to the prediction [6, 43, 44, 49]. Although explanations can be used to improve the adversarial robustness of machine learning models [37], we target explanations instead. When considering an explanation as a function of model and data, there is a possibility to change one of these variables to achieve a different result [54]. Heo et al. [25] and Dimanov et al. [15] propose fine-tuning a neural network to undermine its explainability capabilities and conceal model unfairness. The assumption is to alter the model’s parameters without a drop in performance, which can be achieved with an objective function minimizing the distance between explanations and an arbitrarily set target. Note that [15] indirectly change partial dependence while changing the model. Aivodji et al. [3] propose creating a surrogate model aiming to approximate the unfair black-box model and explain its predictions in a fair manner, e.g. with relative variable dependence.
An alternative idea is to fool fairness and explainability via data change, since the (background) data distribution greatly affects interpretation results [28, 30]. Solans et al. [48] and Fukuchi et al. [20] investigate concealing unfairness via data change by using gradient methods. Dombrowski et al. [16] propose an algorithm for saliency explanation manipulation using gradient-based data perturbations. In contrast, we introduce a genetic algorithm and focus on other machine learning predictive models trained on tabular data. Slack et al. [46] contributed adversarial attacks on post-hoc, model-agnostic explainability methods for local-level understanding, namely LIME [40] and SHAP [36]. The proposed framework provides a way to construct a biased classifier with safe explanations of the model’s predictions.
Since we focus on global-level explanations instead, the results modify a view of overall model behaviour, not one specific to a single data point or image. Lakkaraju and Bastani [32] conducted a thought-provoking study on the misleading effects of manipulated Model Understanding through Subspace Explanations (MUSE) [33], which provides arguments for why such research is crucial to achieve responsibility in machine learning use. Further, the robustness of neural networks [13, 50] and counterfactual explanations [47] has become important, as one wants to trust black-box models and extend their use to sensitive tasks. Our experiments further extend to global explanations the indication of Jia et al. [29] that there is a correlation between model complexity and explanation quality.
3 Partial Dependence
In this paper, we target one of the most popular explainability methods for tabular data, which at its core presents the expected value of the model’s predictions as a function of a selected variable. Partial Dependence, originally introduced as plots by Friedman [19], shows the expected prediction over the marginal distribution of the remaining variables. These values can be relatively easily estimated and are widely incorporated into various tools for model explainability [7, 8, 11, 24, 39]. The theoretical explanation has a practical estimator used to compute the results, later visualized as a line plot showing the expected prediction for a given variable; also called profiles [12]. PD for model f and variable c in a random vector \(\mathcal X\) is defined as \(\mathcal{P}\mathcal{D}_c(\mathcal X,z) \,{:}{=}\, E_{\mathcal X_{-c}}\left[ f(\mathcal X^{c=z})\right] \), where \(\mathcal X^{c=z}\) stands for random vector \(\mathcal X\) with the c-th variable replaced by value z. By \(\mathcal X_{-c}\), we denote the distribution of random vector \(\mathcal X\) where the c-th variable is set to a constant. We thus define PD at point z as the expected value of model f given that the c-th variable is set to z. The standard estimator of this value for data X is given by the formula \(\widehat{\mathcal{P}\mathcal{D}_c}(X,z) \,{:}{=}\, \frac{1}{N}\sum _{i=1}^N f\left( X_i^{c=z}\right) ,\) where \(X_i\) is the i-th row of the matrix X and the previously mentioned symbols are used accordingly. To simplify the notation, we use \(\mathcal{P}\mathcal{D}\) and omit z and c where the context is clear.
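The standard estimator above is straightforward to sketch in code. The following is a minimal illustration in plain Python, assuming a model given as a callable `f` and a dataset stored as a list of rows; the toy linear model and all names are illustrative assumptions, not the authors' implementation.

```python
def partial_dependence(f, X, c, z):
    """Estimate PD of model f for variable index c at point z:
    average of f over the dataset with the c-th variable set to z."""
    total = 0.0
    for row in X:
        row_z = list(row)
        row_z[c] = z          # replace the c-th variable by z
        total += f(row_z)
    return total / len(X)

# Toy linear model: f(x) = 2*x0 + 3*x1 (illustrative assumption)
f = lambda x: 2 * x[0] + 3 * x[1]
X = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]

# For this linear model, PD of variable 0 at z is 2*z + 3*mean(x1)
print(partial_dependence(f, X, 0, 1.0))  # -> 8.0
```

Evaluating `partial_dependence` on a grid of z values yields the vector that is plotted as the PD profile.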
4 Fooling Partial Dependence via Data Poisoning
Many explanations treat the dataset X as fixed; however, this is precisely a single point of failure on which we aim to conduct the attack. In what follows, we examine \(\mathcal{P}\mathcal{D}\) behaviour by looking at it as a function whose argument is an entire dataset. For example, if the dataset has N instances and P variables, then \(\mathcal{P}\mathcal{D}\) is treated as a function over \(N\times P\) dimensions. Moreover, because of the complexity of blackbox models, \(\mathcal{P}\mathcal{D}\) becomes an extremely highdimensional space where variable interactions cause unpredictable behaviour. Explanations are computed using their estimators where a significant simplification may occur; thus, a slight shift of the dataset used to calculate \(\mathcal{P}\mathcal{D}\) may lead to unintended results (for example, see [26] and the references given there).
We aim to change the underlying dataset used to produce the explanation so as to achieve the desired change in \(\mathcal{P}\mathcal{D}\). Figure 1 demonstrates the main threat of an adversarial attack on model explanation using data poisoning, which is concealing the suspected behaviour of black-box models. The critical assumption is that an adversary has the possibility to modify the dataset arbitrarily, e.g. in a healthcare or financial audit, or research review. Even if this were not the case, in modern machine learning, where in practice dozens of variables are used to train complex models, such data shifts might be only a minor change that a person looking at a dataset or distribution will not be able to identify.
We approach fooling \(\mathcal{P}\mathcal{D}\) as an optimization problem for given criteria of attack efficiency, later called the attack loss. It originates from [16], where a similar loss function for the manipulation of local-level model explanations in an image-based predictive task was introduced. This work instead introduces an attack loss that aims to change the output of a global-level explanation via data poisoning. The explanation’s weaknesses concerning data distribution and causal inference are exploited using two ways of optimizing the loss:

Genetic-based^{Footnote 1} algorithm that does not make any assumption about the model’s structure – the black-box path from data inputs to the output prediction; thus, it is model-agnostic. Further, we posit that for a vast number of explanations, particularly post-hoc global-level ones, the algorithm does not make assumptions about their structure either; thus, it becomes explanation-agnostic.

Gradient-based algorithm that is specifically designed for models with differentiable outputs, e.g. neural networks [15, 16].
We discuss and evaluate two possible fooling strategies:

Targeted attack changes the dataset to achieve the closest explanation result to the predefined desired function [16, 25].

Robustness check aims to achieve the most distant model explanation from the original one by changing the dataset, which refers to the sanity check [1].
For practical reasons, we define the distance between the two calculated \(\mathcal{P}\mathcal{D}\) vectors as \(\Vert x - y\Vert \,{:}{=}\, \frac{1}{I}\sum _{i=1}^{I} (x_i - y_i)^{2}\), yet other distance measures may be used to extend the method.
4.1 Attack Loss
The intuition behind the attacks is to find a modified dataset that minimizes the attack loss. A changed dataset denoted as \(X\in \mathbb {R}^{N \times P}\) is an argument of that function; hence, an optimal X is a result of the attack. Let \(Z \subset \mathbb {R}\) be the set of points used to calculate the explanation. Let \(T:Z\rightarrow \mathbb {R}\) be the target explanation; we write just T to denote a vector over whole Z. Let \(g_c^Z:\mathbb {R}^{N \times P}\rightarrow \mathbb {R}^{Z}\) be the actual explanation calculated for points in Z; we write \(g_c\) for simplicity. Finally, let \(X'\in \mathbb {R}^{N \times P}\) be the original (constant) dataset. We define the attack loss as \(\mathcal {L}(X) \,{:}{=}\, \mathcal {L}^{g,\;s}(X)\), where g is the explanation to be fooled, and an objective is minimized depending on the strategy of the attack, denoted as s. The aim is to minimize \(\mathcal {L}\) with respect to the dataset X used to calculate the explanation. We never change values of the explained variable c in the dataset.
In the targeted attack, we aim to minimize the distance between the target model behaviour T and the result of the model explanation calculated on the changed dataset. We denote this strategy by t and define \(\mathcal {L}^{g,\;t}(X) = \Vert g_c(X) - T\Vert .\) Since we focus on a specific model-agnostic explanation, we substitute \(\mathcal{P}\mathcal{D}\) in place of g to obtain \(\mathcal {L}^{\mathcal{P}\mathcal{D},\;t}(X) = \Vert \mathcal{P}\mathcal{D}_c(X) - T\Vert .\) This substitution can be generalized for various global-level model explanations that rely on using a part of the dataset for computation.
In the robustness check, we aim to maximize the distance between the result of the model explanation calculated on the original dataset, \(g_c(X')\), and the changed one; thus, a minus sign is required. We denote this strategy by r and define \(\mathcal {L}^{g,\;r}(X) = -\Vert g_c(X) - g_c(X')\Vert .\) Accordingly, we substitute \(\mathcal{P}\mathcal{D}\) in place of g to obtain \(\mathcal {L}^{\mathcal{P}\mathcal{D},\;r}(X) = -\Vert \mathcal{P}\mathcal{D}_c(X) - \mathcal{P}\mathcal{D}_c(X')\Vert .\) Note that \(\mathcal {L}^{g,\;s}\) may vary depending on the explanation used; specifically for \(\mathcal{P}\mathcal{D}\), it is useful to centre the explanation before calculating the distances, which is the default behaviour in our implementation: \(\mathcal {L}^{\mathcal {\overline{PD}},\;r}(X) = -\Vert \mathcal {\overline{PD}}_c(X) - \mathcal {\overline{PD}}_c(X') \Vert ,\) where \(\mathcal {\overline{PD}}_c(X) \,{:}{=}\, \mathcal{P}\mathcal{D}_c(X) - \frac{1}{|Z|}\sum _{z\in Z} \mathcal{P}\mathcal{D}_c(X,z).\) We prefer this second approach of comparing explanations using centred \(\mathcal {\overline{PD}}\), as it forces changes in the shape of the explanation instead of promoting a vertical shift of the profile while the shape changes insignificantly.
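The two attack-loss strategies can be sketched as follows in plain Python, using the paper's squared-distance definition. The compact `partial_dependence` helper, the grid `Z`, and all names are illustrative assumptions made for self-containedness, not the authors' implementation.

```python
def partial_dependence(f, X, c, z):
    """Compact PD estimator: average prediction with variable c set to z."""
    return sum(f(row[:c] + [z] + row[c + 1:]) for row in X) / len(X)

def pd_vector(f, X, c, Z):
    """PD profile evaluated on the grid of points Z."""
    return [partial_dependence(f, X, c, z) for z in Z]

def dist(x, y):
    """||x - y|| := (1/I) * sum_i (x_i - y_i)^2, as defined in the text."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def center(v):
    """Subtract the mean, so only the shape of the profile is compared."""
    m = sum(v) / len(v)
    return [x - m for x in v]

def loss_targeted(f, X, c, Z, T):
    """L^{PD,t}: distance between PD on the poisoned X and target T."""
    return dist(pd_vector(f, X, c, Z), T)

def loss_robustness(f, X, c, Z, X_orig, centred=True):
    """L^{PD,r}: negated distance to the original explanation."""
    pd_x, pd_o = pd_vector(f, X, c, Z), pd_vector(f, X_orig, c, Z)
    if centred:  # default: force shape changes rather than vertical shifts
        pd_x, pd_o = center(pd_x), center(pd_o)
    return -dist(pd_x, pd_o)
```

Note how the `centred` flag makes a purely vertical shift of the profile contribute nothing to the robustness-check loss, matching the motivation for \(\mathcal{\overline{PD}}\).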
4.2 Genetic-Based Algorithm
We introduce a novel strategy for fooling explanations based on the genetic algorithm, as it is a simple yet powerful method for real-parameter optimization [52]. We do not encode genes conventionally but deliberately use this term to distinguish our approach from other types of evolutionary algorithms [18]. The method is invariant to the model’s definition and the considered explanation; thus, it becomes model-agnostic and explanation-agnostic. These traits are crucial when working with black-box machine learning, as versatile solutions are convenient.
Fooling \(\mathcal{P}\mathcal{D}\) in both strategies relies on a similar genetic algorithm. The main idea is to define an individual as an instance of the dataset and iteratively perturb its values to achieve the desired explanation target, or to perform the robustness check to observe the change. The individuals are initialized with the values of the original dataset \(X'\) to form a population. Subsequently, the initialization ends with mutating the individuals using a higher-than-default variance of perturbations. Then, in each iteration, they are randomly crossed, mutated, evaluated with the attack loss, and selected based on the loss values. Crossover swaps columns between individuals to produce new ones, which are then added to the population. The number of swapped columns can be randomized; also, the number of parents can be parameterized. Mutation adds Gaussian noise to the individuals using scaled standard deviations of the variables. It is possible to constrain the data change to the original range of variable values, and also to keep some variables unchanged. Evaluation calculates the loss for each individual, which requires computing the explanation for each dataset. Selection reduces the number of individuals using rank selection, with elitism guaranteeing that several best individuals remain in the next population.
We considered crossover through an exchange of rows between individuals, but it might drastically shift the datasets and move them apart. Additionally, a worthwhile mutation is to add or subtract whole numbers from the integer-encoded (categorical) variables. We further discuss the algorithm’s details in the Supplementary material. The introduced attack is model-invariant because no derivatives are needed for optimization, which allows evaluating explanations of black-box models. While we found this method a sufficient generalization of our framework, there is a possibility to perform more efficient optimization given prior knowledge of the structure of the model and explanation.
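The loop described above (initialize, mutate, cross over columns, evaluate, select with elitism) can be sketched as follows. All hyperparameters (population size, mutation scale, elite count) and the uniform draw standing in for rank selection are illustrative assumptions rather than the authors' exact settings; `loss` is any attack loss over a candidate dataset.

```python
import copy
import random

def mutate(X, c, scale=0.1, rng=random):
    """Add Gaussian noise to every column except the explained variable c."""
    for row in X:
        for j in range(len(row)):
            if j != c:
                row[j] += rng.gauss(0.0, scale)

def crossover(a, b, c, rng=random):
    """Child takes each non-explained column from either parent at random."""
    child = copy.deepcopy(a)
    for j in range(len(a[0])):
        if j != c and rng.random() < 0.5:
            for i in range(len(child)):
                child[i][j] = b[i][j]
    return child

def genetic_attack(loss, X_orig, c, pop_size=20, iters=50, elite=2, seed=0):
    rng = random.Random(seed)
    pop = [copy.deepcopy(X_orig) for _ in range(pop_size)]
    for X in pop:
        mutate(X, c, scale=0.3, rng=rng)   # stronger initial perturbation
    for _ in range(iters):
        children = [crossover(rng.choice(pop), rng.choice(pop), c, rng)
                    for _ in range(pop_size)]
        for X in children:
            mutate(X, c, rng=rng)
        pop += children
        pop.sort(key=loss)                  # evaluate the attack loss
        # elitism plus a random draw from the remaining individuals
        pop = pop[:elite] + rng.sample(pop[elite:], pop_size - elite)
    pop.sort(key=loss)
    return pop[0]                           # best poisoned dataset found
```

Because `loss` is an opaque callable, the same loop works for any model and any explanation, which is the sense in which the approach is model- and explanation-agnostic.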
4.3 Gradient-Based Algorithm
Gradient-based methods are state-of-the-art optimization approaches, especially in the domain of deep neural networks [34]. This algorithm’s main idea is to use gradient descent to optimize the attack loss, relying on the differentiability of the model’s output with respect to the input data. Such an assumption allows for faster and more accurate convergence to a local minimum using one of the stochastic optimizers; in our case, Adam [31]. Note that the differentiability assumption is with respect to the input data, not with respect to the model’s parameters. We derive the gradients \(\nabla _{X_{-c}} \mathcal {L}^{g,\;s}\) for fooling explanations based on their estimators, not the theoretical definitions. This is because the input data is assumed to be a random variable in the theoretical definition of \(\mathcal{P}\mathcal{D}\), making it impossible to calculate a derivative over the input dataset. In practice, we do not derive our method directly from the definition, as the estimator produces the explanation.
Although we specifically consider neural networks because of their strong relation to differentiation, the algorithm’s theoretical derivation does not require this type of model. For brevity, we derive the theoretical definitions of the gradients \(\nabla _{X_{-c}} \mathcal {L}^{\mathcal{P}\mathcal{D},\;t}\), \(\nabla _{X_{-c}} \mathcal {L}^{\mathcal{P}\mathcal{D},\;r}\), and \(\nabla _{X_{-c}} \mathcal {L}^{\mathcal {\overline{PD}},\;r}\) in the Supplementary material. Overall, the gradient-based algorithm is similar to the genetic-based algorithm in that we aim to iteratively change the dataset used to calculate the explanation. Nevertheless, its main assumption is that the model provides an interface for the differentiation of output with respect to the input, which is not the case for black-box models.
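The descent itself can be illustrated with a small stand-in sketch. In place of the automatic differentiation that a neural-network framework would provide, the example below estimates \(\nabla_{X_{-c}} \mathcal{L}\) by central finite differences; the names, step size, and plain gradient descent (rather than Adam) are illustrative assumptions, not the authors' implementation.

```python
def numerical_grad(loss, X, c, eps=1e-5):
    """Central-difference estimate of d loss / d X[i][j], skipping column c."""
    grad = [[0.0] * len(X[0]) for _ in X]
    for i in range(len(X)):
        for j in range(len(X[0])):
            if j == c:
                continue               # never change the explained variable
            old = X[i][j]
            X[i][j] = old + eps
            up = loss(X)
            X[i][j] = old - eps
            down = loss(X)
            X[i][j] = old              # restore the entry
            grad[i][j] = (up - down) / (2 * eps)
    return grad

def gradient_attack(loss, X_orig, c, lr=0.05, iters=100):
    """Plain gradient descent on the attack loss over the dataset entries."""
    X = [list(row) for row in X_orig]
    for _ in range(iters):
        g = numerical_grad(loss, X, c)
        for i in range(len(X)):
            for j in range(len(X[0])):
                X[i][j] -= lr * g[i][j]
    return X
```

With an autodiff framework, `numerical_grad` would be replaced by exact backpropagation through the PD estimator and the model, which is what makes the method both faster and restricted to differentiable models.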
5 Experiments
Setup. We conduct experiments on two predictive tasks to evaluate the algorithms and conclude with illustrative scenario examples, which refer to the framework shown in Fig. 1. The first dataset is a synthetic regression problem that refers to Friedman’s work [19], where the inputs X are independent variables uniformly distributed on the interval [0, 1], while the target y is created according to the formula: \(y(X) = 10 \sin (\pi \boldsymbol{\cdot } X_1 \boldsymbol{\cdot } X_2) + 20 (X_3 - 0.5)^{2} + 10 X_4 + 5 X_5.\) Only 5 variables are actually used to compute y, while the remaining ones are independent of the target. We refer to this dataset as friedman and target explanations of the variable \(X_1\). The second dataset is a real classification task from UCI [17], which has 5 continuous variables, 8 categorical variables, and an evenly-distributed binary target. We refer to this dataset as heart and target explanations of the variable age. Additionally, we keep the discrete variables constant during the performed fooling because we mainly rely on incremental changes in the values of continuous variables, and categorical variables are out of the scope of this work.
Results. Figure 2 presents the main result of the paper, which is that PD can be manipulated. We use the gradient-based algorithm to change the explanations of feed-forward neural networks via data poisoning. The targeted attack aims to arbitrarily change the monotonicity of PD, which is evident in both predictive tasks. The robustness check finds the most distant explanation from the original one. We perform the fooling 30 times for each subplot, and the Y-axis denotes the model’s architecture: layers\(\times \)neurons. We observe that PD explanations are especially vulnerable in complex models.
Next, we aim to evaluate the PD of various state-of-the-art machine learning models at several complexity levels; we denote: Linear Model (LM), Random Forest (RF), Gradient Boosting Machine (GBM), Decision Tree (DT), K-Nearest Neighbours (KNN), Support Vector Machine (SVM), and feed-forward Neural Network (NN). The model-agnostic nature of the genetic-based algorithm allows this comparison, as it might be theoretically and/or practically impossible to differentiate the model’s output with respect to the input; thus, to differentiate the explanations and loss.
Table 1 presents the results of robustness checks for Partial Dependence of various machine learning models and complexity levels. Each value corresponds to the distance between the original explanation and the changed one, multiplied by \(10^3\) in friedman and \(10^6\) in heart for clarity. We perform the checks 6 times and report the \(\text {mean} \pm \text {standard deviation}\). Note that we cannot compare the values between tasks, as their magnitudes depend on the prediction range. We found the explanations of NN, SVM and deep DT the most vulnerable to the fooling methods (top of the Table). In contrast, RF seems to provide robust explanations; thus, we further investigate the relationship between the tree models’ complexity and explanation stability (bottom of the Table) to conclude that increasing complexity yields more vulnerable explanations, which is consistent with Fig. 2. We attribute the differences between the results for RF and GBM to the concept of the bias-variance tradeoff. In some cases (heart, RF), explanations of too simple models become vulnerable too, since underfitted models may be as uncertain as overfitted ones.
Ablation Study. We further discuss additional results that may be of interest to gain a broader context of this work. Figure 3 presents the distinction between the robustness check for centred Partial Dependence, which is the default algorithm, and the robustness check for non-centred PD. We use the gradient-based algorithm to change the explanations of a 3 layers \(\times \) 32 neurons ReLU neural network and perform the fooling 30 times for each subplot. We observe that centring the explanation in the attack loss definition is necessary to achieve a change in the explanation shape. Otherwise, the explanation merely shifts upwards or downwards by essentially changing the mean of the prediction. This observation was consistent across most of the models regardless of their complexity.
Table 2 presents the impact of additional noise variables in the data on the performed fooling. We observe that higher data dimensions favour vulnerable explanations (higher loss). The analogous results for the targeted attack were consistent; however, they showcased almost zero variance (partially observable in Figs. 2 and 3).
Adversarial Scenario. Following the framework shown in Fig. 1, we consider three stakeholders apparent in explainable machine learning: the developer, the auditor, and the prediction recipients. Let us assume that the model predicting a heart attack should not take into account a patient’s sex, although it might be a valuable predictor. An auditor analyses the model using Partial Dependence; therefore, the developer supplies a poisoned dataset for this task. Figure 5 presents two possible outcomes of the model audit, concealed and suspected, which are unequivocally bound to the explanation result and dataset. In the first, the model is unchanged while the stated assumption of even dependence on sex appears to hold (dependence equal to about 0.5); thus, the prediction recipients become vulnerable. Additionally, we supply an alternative scenario where the developer wants to provide evidence of model unfairness to raise suspicion (dependence for class 0 equal to about 0.7).
Supportive Scenario. In this work, we consider an equation of three variables: data, model, and explanation; thus, we poison the data to fool the explanation while the model remains unchanged. Figures 4 and 5 showcase an exemplary data shift occurring in the dataset after the attack, where changing only a few explanatory variables results in bending PD. We present a moderate change in data distribution to introduce the concept of analysing such relationships for explanatory purposes, e.g. the first result might suggest that resting blood pressure and maximum heart rate contribute to the explanation of age; the second result suggests how these variables contribute to the explanation of sex. We conclude that the data shift is worth exploring to analyse variable interactions in models.
6 Limitations and Future Work
We find these results both alarming and informative, yet we proceed to discuss the limitations of the study. The first is the assumption that, in an adversarial scenario, the auditor has no access to the original (unknown) data, e.g. in a research or healthcare audit. While the detectability of fooling is worth analyzing, our work focuses not only on adversarial manipulation of PD, as we sincerely hope such data poisoning is not occurring in practice. Even more, we aim to underline the crucial context of data distribution in the interpretation of explanations and to introduce a new way of evaluating the robustness of PD and other black-box explanations by generalizing the methods with genetic-based optimization.
Another limitation is the size of the datasets used. We experimented with larger datasets but refrained from reporting them due to a contradictory view that increasing the dataset size might be considered as exaggerating the results. PD clearly becomes more complex with increasing data dimensions; moreover, a higher-dimensional space should entail more possible ways of manipulation, which is evident in the ablation study. We note that in practice, the explanations might require 100–1000 observations for the estimation (e.g. kernel SHAP, PDP), hence the size of the datasets in this study. Finally, we omit datasets like Adult and COMPAS because they mainly consist of categorical variables.
Future Work. We foresee several directions for future work, e.g. evaluating the successor to PD, Accumulated Local Effects (ALE) [5], although the practical estimation of ALE presents challenges. Second, the attack loss may be enhanced by regularization, e.g. a penalty for substantial change in the data or in the mean of the model’s prediction, to achieve more meaningful fooling with less evidence. In this work, we focus on univariate PD, but targeting bivariate PD can also be examined. Overall, the landscape of global-level, post-hoc model explanations is a broad domain, and the potential of a security breach in other methods, e.g. SHAP, should be further examined. Enhancements to the model-agnostic and explanation-agnostic genetic algorithm are thereby welcomed.
Another future direction would be to enhance the stability of PD. Rieger and Hansen [41] present a defence strategy against the attack via data change [16] by aggregating various explanations, which produces robust results without changing the model.
7 Conclusion and Impact
We highlight that Partial Dependence can be maliciously altered, e.g. bent and shifted, with adversarial data perturbations. The introduced genetic-based algorithm allows evaluating the explanations of any black-box model. Experimental results on various models and their sizes showcase the hidden debt of model complexity related to explainable machine learning. Explanations of low-variance models prove to be robust to the manipulation, while very complex models should not be explained with PD, as they become vulnerable to changes in the reference data distribution. Robustness checks lead to varied modifications of the explanations depending on the setting, e.g. they may propose two opposite PD profiles, which is why it is advised to perform the checks multiple times.
This work investigates the vulnerability of global-level, post-hoc model explainability from the adversarial standpoint, which relates to the responsibility and security of artificial intelligence use. The possible manipulation of PD leads to the conclusion that explanations used to explain black-box machine learning may be considered black boxes themselves. These explainability methods are undeniably useful through implementations in various popular software. However, just as machine learning models cannot be developed without extensive testing and an understanding of their behaviour, their explanations cannot be used without critical thinking. We recommend ensuring the reliability of the explanation results through the introduced methods, which can also be used to study model behaviour under data shift. Code for this work is available at https://github.com/MI2DataLab/fooling-partial-dependence.
Notes
 1.
For convenience, we shorten the phrase “algorithm based on the genetic algorithm” to “genetic-based algorithm”.
References
Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., Kim, B.: Sanity checks for saliency maps. In: NeurIPS (2018)
Adebayo, J., Muelly, M., Liccardi, I., Kim, B.: Debugging tests for model explanations. In: NeurIPS (2020)
Aivodji, U., Arai, H., Fortineau, O., Gambs, S., Hara, S., Tapp, A.: Fairwashing: the risk of rationalization. In: ICML (2019)
Alber, M., Lapuschkin, S., Seegerer, P., Hägele, M., Schütt, K.T., et al.: iNNvestigate neural networks! J. Mach. Learn. Res. 20(93), 1–8 (2019)
Apley, D.W., Zhu, J.: Visualizing the effects of predictor variables in black box supervised learning models. J. Roy. Stat. Soc. Ser. B (Stat. Methodol.) 82(4), 1059–1086 (2020)
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7), 1–46 (2015)
Baniecki, H., Biecek, P.: modelStudio: interactive studio with explanations for ML predictive models. J. Open Source Softw. 4(43), 1798 (2019)
Baniecki, H., Kretowicz, W., Piatyszek, P., Wisniewski, J., Biecek, P.: dalex: responsible machine learning with interactive explainability and fairness in Python. J. Mach. Learn. Res. 22(214), 1–7 (2021)
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., et al.: Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020)
Bhatt, U., Weller, A., Moura, J.M.F.: Evaluating and aggregating feature-based model explanations. In: IJCAI (2020)
Biecek, P.: DALEX: explainers for complex predictive models in R. J. Mach. Learn. Res. 19(84), 1–5 (2018)
Biecek, P., Burzykowski, T.: Explanatory Model Analysis. Chapman and Hall/CRC (2021)
Boopathy, A., et al.: Proper network interpretability helps adversarial robustness in classification. In: ICML (2020)
D’Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., et al.: Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395 (2020)
Dimanov, B., Bhatt, U., Jamnik, M., Weller, A.: You shouldn’t trust me: learning models which conceal unfairness from multiple explanation methods. In: AAAI SafeAI (2020)
Dombrowski, A.K., Alber, M., Anders, C., Ackermann, M., Müller, K.R., Kessel, P.: Explanations can be manipulated and geometry is to blame. In: NeurIPS (2019)
Dua, D., Graff, C.: UCI Machine Learning Repository (2017). https://www.kaggle.com/ronitf/heart-disease-uci/version/1
Elbeltagi, E., Hegazy, T., Grierson, D.: Comparison among five evolutionary-based optimization algorithms. Adv. Eng. Inform. 19(1), 43–53 (2005)
Friedman, J.H.: Greedy function approximation: a gradient boosting machine. Ann. Stat. 29(5), 1189–1232 (2001)
Fukuchi, K., Hara, S., Maehara, T.: Faking fairness via stealthily biased sampling. In: AAAI (2020)
Ghorbani, A., Abid, A., Zou, J.: Interpretation of neural networks is fragile. In: AAAI (2019)
Gill, N., Hall, P., Montgomery, K., Schmidt, N.: A responsible machine learning workflow with focus on interpretable models, post-hoc explanation, and discrimination testing. Information 11(3), 137 (2020)
Goldstein, A., Kapelner, A., Bleich, J., Pitkin, E.: Peeking inside the black box: visualizing statistical learning with plots of individual conditional expectation. J. Comput. Graph. Stat. 24(1), 44–65 (2015)
Greenwell, B.M.: pdp: an R package for constructing partial dependence plots. R J. 9(1), 421–436 (2017)
Heo, J., Joo, S., Moon, T.: Fooling neural network interpretations via adversarial model manipulation. In: NeurIPS (2019)
Hooker, G.: Generalized functional ANOVA diagnostics for highdimensional functions of dependent variables. J. Comput. Graph. Stat. 16(3), 709–732 (2007)
Hooker, S., Erhan, D., Kindermans, P.J., Kim, B.: A benchmark for interpretability methods in deep neural networks. In: NeurIPS (2019)
Janzing, D., Minorics, L., Blöbaum, P.: Feature relevance quantification in explainable AI: a causal problem. In: AISTATS (2020)
Jia, Y., Frank, E., Pfahringer, B., Bifet, A., Lim, N.: Studying and exploiting the relationship between model accuracy and explanation quality. In: Oliver, N., Pérez-Cruz, F., Kramer, S., Read, J., Lozano, J.A. (eds.) ECML PKDD 2021. LNCS (LNAI), vol. 12976, pp. 699–714. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-86520-7_43
Kindermans, P.J., et al.: The (un)reliability of saliency methods. In: Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.R. (eds.) Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. LNCS (LNAI), vol. 11700, pp. 267–280. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28954-6_14
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: ICLR (2015)
Lakkaraju, H., Bastani, O.: “How do I fool you?”: Manipulating user trust via misleading black box explanations. In: AIES (2020)
Lakkaraju, H., Kamar, E., Caruana, R., Leskovec, J.: Faithful and customizable explanations of black box models. In: AIES (2019)
LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
Lipton, Z.C.: The mythos of model interpretability. Queue 16(3), 31–57 (2018)
Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: NeurIPS (2017)
Mangla, P., Singh, V., Balasubramanian, V.N.: On saliency maps and adversarial robustness. In: Hutter, F., Kersting, K., Lijffijt, J., Valera, I. (eds.) ECML PKDD 2020. LNCS (LNAI), vol. 12458, pp. 272–288. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-67661-2_17
Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)
Molnar, C., Casalicchio, G., Bischl, B.: iml: an R package for interpretable machine learning. J. Open Source Softw. 3(26), 786 (2018)
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: Explaining the predictions of any classifier. In: KDD (2016)
Rieger, L., Hansen, L.K.: A simple defense against adversarial attacks on heatmap explanations. In: ICML WHI (2020)
Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1, 206–215 (2019)
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. Int. J. Comput. Vis. 128(2), 336–359 (2020)
Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. In: ICML (2017)
Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: visualising image classification models and saliency maps. In: ICLR (2014)
Slack, D., Hilgard, S., Jia, E., Singh, S., Lakkaraju, H.: Fooling LIME and SHAP: adversarial attacks on post hoc explanation methods. In: AIES (2020)
Slack, D., Hilgard, S., Lakkaraju, H., Singh, S.: Counterfactual explanations can be manipulated. In: NeurIPS (2021)
Solans, D., Biggio, B., Castillo, C.: Poisoning attacks on algorithmic fairness. In: Hutter, F., Kersting, K., Lijffijt, J., Valera, I. (eds.) ECML PKDD 2020. LNCS (LNAI), vol. 12457, pp. 162–177. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-67658-2_10
Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: ICML (2017)
Wang, Z., Wang, H., Ramkumar, S., Mardziel, P., Fredrikson, M., Datta, A.: Smoothed geometry for robust attribution. In: NeurIPS (2020)
Warnecke, A., Arp, D., Wressnegger, C., Rieck, K.: Evaluating explanation methods for deep learning in security. In: IEEE EuroS&P (2020)
Wright, A.H.: Genetic algorithms for real parameter optimization. Found. Genet. Algorithms 1, 205–218 (1991)
Zhang, X., Wang, N., Shen, H., Ji, S., Luo, X., Wang, T.: Interpretable deep learning under fire. In: USENIX Security (2020)
Zhao, Q., Hastie, T.: Causal interpretations of black-box models. J. Bus. Econ. Stat. 39(1), 272–281 (2019)
Acknowledgements
This work was financially supported by the NCN Opus grant 2017/27/B/ST6/01307 and NCN Sonata Bis-9 grant 2019/34/E/ST6/00052.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2023 The Author(s)
Cite this paper
Baniecki, H., Kretowicz, W., Biecek, P. (2023). Fooling Partial Dependence via Data Poisoning. In: Amini, M.-R., Canu, S., Fischer, A., Guns, T., Kralj Novak, P., Tsoumakas, G. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2022. Lecture Notes in Computer Science, vol. 13715. Springer, Cham. https://doi.org/10.1007/978-3-031-26409-2_8
Print ISBN: 978-3-031-26408-5
Online ISBN: 978-3-031-26409-2