Fooling Partial Dependence via Data Poisoning

Many methods have been developed to understand complex predictive models, and high expectations are placed on post-hoc model explainability. It turns out that such explanations are neither robust nor trustworthy, and they can be fooled. This paper presents techniques for attacking Partial Dependence (plots, profiles, PDP), which is among the most popular methods of explaining any predictive model trained on tabular data. We showcase that PD can be manipulated in an adversarial manner, which is alarming, especially in financial or medical applications where auditability has become a must-have trait supporting black-box machine learning. The fooling is performed via poisoning the data to bend and shift explanations in the desired direction using genetic and gradient algorithms. We believe this to be the first work using a genetic algorithm for manipulating explanations, which is transferable as it generalizes in both a model-agnostic and an explanation-agnostic manner.


Introduction
Although supervised machine learning has become the state-of-the-art solution to many predictive problems, there is an emerging discussion on the underspecification of such methods, which manifests as differing model behaviour between training and practical settings [14]. This is especially crucial when the domain requires proper accountability for systems supporting decisions [35,38,42]. To live with black boxes, several explainability methods have been presented to help us understand model behaviour [5,19,23,36,40]; many are designed specifically for deep neural networks [6,34,44,49]. Explanations are widely used in practice through their (often estimation-based) implementations, available to machine learning practitioners in various software contributions [4,8,11].
Nowadays, robustness and certainty become crucial when using explanations in data science practice to understand black-box machine learning models, and thus to facilitate rationale explanation, knowledge discovery, and responsible decision-making [9,22]. Notably, several studies evaluate explanations [1,2,10,27,29,51], showcasing various flaws from which we perceive an existing robustness gap; in critical domains, one could call it a security breach. Apart from promoting wrong explanations, this phenomenon can be exploited to mount adversarial attacks on model explanations and achieve manipulated results. In regulated areas, these types of attacks may be carried out to deceive an auditor.

Fig. 1. Framework for fooling model explanations via data poisoning. The red color indicates the adversarial route, a potential security breach, which an attacker may use to manipulate the explanation. Researchers could use this method to provide a misleading rationale for a given phenomenon, while auditors may purposely conceal the suspected, e.g. biased or irresponsible, reasoning of a black-box model.

Figure 1 illustrates a process in which the developer aims to conceal the undesired behaviour of the model by supplying a poisoned dataset for model audit. Not every explanation is equally good; just as models require proper performance validation, we need similar assessments of explainability methods. In this paper, we evaluate the robustness of Partial Dependence (PD) [19] and, moreover, highlight the possibility of adversarial manipulation of PD (see Figures 4 & 5 in the latter part of the paper). We summarize the contributions as follows: (1) We introduce a novel concept of using a genetic algorithm for manipulating model explanations. This allows for a convenient generalization of the attacks in a model-agnostic and explanation-agnostic manner, which is not the case for most of the related work.
Moreover, we use a gradient algorithm to perform fooling via data poisoning efficiently for neural networks.
(2) We explicitly target PD to highlight the potential of their adversarial manipulation, which was not done before. Our method provides a sanity check for the future use of PD by responsible machine learning practitioners. Evaluation of the constructed methods in extensive experiments shows that model complexity significantly affects the magnitude of the possible explanation manipulation.

Related Work
In the literature, there is a considerable amount of work on attacks on model explanations specific to deep neural networks [16,21,25,30,53]. At their core, these works provide various algorithms for fooling neural network interpretability and explainability, mainly of image-based predictions. Such explanations are commonly presented through saliency maps [45], where each model input is given its attribution to the prediction [6,43,44,49]. Although explanations can be used to improve the adversarial robustness of machine learning models [37], we target explanations instead. When considering an explanation as a function of model and data, there is a possibility to change one of these variables to achieve a different result [54]. Heo et al. [25] and Dimanov et al. [15] propose fine-tuning a neural network to undermine its explainability capabilities and conceal model unfairness. The assumption is to alter the model's parameters without a drop in performance, which can be achieved with an objective function minimizing the distance between explanations and an arbitrarily set target. Note that [15] indirectly changes partial dependence while changing the model. Aivodji et al. [3] propose creating a surrogate model that aims to approximate the unfair black-box model and explain its predictions in a fair manner, e.g. with relative variable dependence.
An alternative idea is to fool fairness and explainability via data change, since the (background) data distribution greatly affects interpretation results [28,30]. Solans et al. [48] and Fukuchi et al. [20] investigate concealing unfairness via data change by using gradient methods. Dombrowski et al. [16] propose an algorithm for saliency explanation manipulation using gradient-based data perturbations. In contrast, we introduce a genetic algorithm and focus on other machine learning predictive models trained on tabular data. Slack et al. [46] contributed adversarial attacks on post-hoc, model-agnostic explainability methods for local-level understanding, namely LIME [40] and SHAP [36]. The proposed framework provides a way to construct a biased classifier with safe explanations of the model's predictions.
Since we focus on global-level explanations instead, the results will modify a view of overall model behaviour, not one specific to a single data point or image. Lakkaraju and Bastani [32] conducted a thought-provoking study on the misleading effects of manipulated Model Understanding through Subspace Explanations (MUSE) [33], which provides arguments for why such research becomes crucial to achieve responsibility in machine learning use. Further, the robustness of neural networks [13,50] and counterfactual explanations [47] has become important, as one wants to trust black-box models and extend their use to sensitive tasks. Our experiments further extend to global explanations the indication of Jia et al. [29] that there is a correlation between model complexity and explanation quality.

Partial Dependence
In this paper, we target one of the most popular explainability methods for tabular data, which at its core presents the expected value of the model's predictions as a function of a selected variable. Partial Dependence, originally introduced as plots by Friedman [19], shows the expected prediction fixed over the marginal joint distribution of the other variables. These values can be relatively easily estimated and are widely incorporated into various tools for model explainability [7,8,11,24,39]. The theoretical explanation has a practical estimator used to compute the results, later visualized as a line plot showing the expected prediction for a given variable; also called profiles [12]. PD for model $f$ and variable $c$ in a random vector $\mathcal{X}$ is defined as

$$\mathrm{PD}_c(\mathcal{X}, z) := \mathbb{E}_{\mathcal{X}_{-c}}\left[f\left(\mathcal{X}^{c|=z}\right)\right],$$

where $\mathcal{X}^{c|=z}$ stands for the random vector $\mathcal{X}$ with the $c$-th variable replaced by value $z$, and $\mathcal{X}_{-c}$ denotes the distribution of the random vector $\mathcal{X}$ with the $c$-th variable set to a constant. Thus, PD at point $z$ is the expected value of model $f$ given that the $c$-th variable is set to $z$. The standard estimator of this value for a dataset $X$ is given by the formula

$$\mathrm{PD}_c(X, z) := \frac{1}{N} \sum_{i=1}^{N} f\left(X_i^{c|=z}\right),$$

where $X_i$ is the $i$-th row of the matrix $X$ and the previously mentioned symbols are used accordingly. To simplify the notation, we write PD and omit $z$ and $c$ where the context is clear.
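The estimator above can be sketched in a few lines of Python; `model`, `X`, `c`, and `zs` are illustrative names for the predictive function, the dataset, the index of the explained variable, and the grid $Z$, and are not taken from the paper's implementation.

```python
def partial_dependence(model, X, c, zs):
    """Estimate PD_c(X, z) = (1/N) * sum_i f(X_i with variable c set to z)."""
    pd_profile = []
    for z in zs:
        total = 0.0
        for row in X:
            row_z = list(row)
            row_z[c] = z  # replace the c-th variable with the grid value z
            total += model(row_z)
        pd_profile.append(total / len(X))
    return pd_profile

# Toy usage: a linear "model" f(x) = 2*x0 + x1 yields, for variable 0,
# the profile PD(z) = 2*z + mean(x1).
model = lambda x: 2 * x[0] + x[1]
X = [[0.1, 1.0], [0.9, 3.0]]
print(partial_dependence(model, X, c=0, zs=[0.0, 0.5, 1.0]))  # [2.0, 3.0, 4.0]
```

For a linear model the profile is exactly linear in $z$; the interesting, attackable behaviour arises when $f$ contains interactions between the explained variable and the rest of the dataset.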

Fooling Partial Dependence via Data Poisoning
Many explanations treat the dataset X as fixed; however, this is precisely a single point of failure on which we aim to conduct the attack. In what follows, we examine PD behaviour by looking at it as a function whose argument is an entire dataset. For example, if the dataset has N instances and P variables, then PD is treated as a function over N × P dimensions. Moreover, because of the complexity of black-box models, PD becomes an extremely high-dimensional space where variable interactions cause unpredictable behaviour. Explanations are computed using their estimators where a significant simplification may occur; thus, a slight shift of the dataset used to calculate PD may lead to unintended results (for example, see [26] and the references given there).
We aim to change the underlying dataset used to produce the explanation so as to achieve the desired change in PD. Figure 1 demonstrates the main threat of an adversarial attack on a model explanation using data poisoning, which is concealing the suspected behaviour of black-box models. The critical assumption is that an adversary has the possibility to modify the dataset arbitrarily, e.g. in a healthcare or financial audit, or a research review. Even if this were not the case, in modern machine learning, where in practice dozens of variables are used to train complex models, such data shifts might be only a minor change that a person looking at a dataset or its distribution will not be able to identify.
We approach fooling PD as an optimization problem for a given criterion of attack efficiency, later called the attack loss. It originates from [16], where a similar loss function was introduced for manipulating local-level model explanations in an image-based predictive task. This work instead introduces an attack loss that aims to change the output of a global-level explanation via data poisoning. The explanation's weaknesses concerning data distribution and causal inference are exploited using two ways of optimizing the loss:

- Genetic-based algorithm that does not make any assumption about the model's structure, i.e. the black-box path from data inputs to the output prediction; thus, it is model-agnostic. Further, we posit that for a vast number of explanations, clearly post-hoc global-level ones, the algorithm does not make assumptions about their structure either; thus, it becomes explanation-agnostic.
- Gradient-based algorithm that is specifically designed for models with differentiable outputs, e.g. neural networks [15,16].
We discuss and evaluate two possible fooling strategies:

- Targeted attack changes the dataset to achieve the explanation result closest to a predefined, desired function [16,25].
- Robustness check aims to achieve the model explanation most distant from the original one by changing the dataset, which refers to the sanity check [1].
For practical reasons, we define the distance between two calculated PD vectors as the mean squared difference, $\|x - y\| := \frac{1}{|Z|}\sum_{z \in Z}\left(x_z - y_z\right)^2$; other distance measures may be used to extend the method.

Attack Loss
The intuition behind the attacks is to find a modified dataset that minimizes the attack loss. The changed dataset, denoted by $X' \in \mathbb{R}^{N \times P}$, is the argument of that function; hence, an optimal $X'$ is the result of the attack. Let $Z \subset \mathbb{R}$ be the set of points used to calculate the explanation. Let $T : Z \to \mathbb{R}$ be the target explanation; we write just $T$ to denote the vector over the whole $Z$. Let $g_c^Z : \mathbb{R}^{N \times P} \to \mathbb{R}^{|Z|}$ be the actual explanation calculated for points in $Z$; we write $g_c$ for simplicity. Finally, let $X \in \mathbb{R}^{N \times P}$ be the original (constant) dataset. We define the attack loss as $\mathcal{L}(X') := \mathcal{L}_{g,s}(X')$, where $g$ is the explanation to be fooled and $s$ denotes the strategy of the attack, which determines the objective to be minimized. The aim is to minimize $\mathcal{L}$ with respect to the dataset $X'$ used to calculate the explanation. We never change the values of the explained variable $c$ in the dataset.
In the targeted attack, we aim to minimize the distance between the target model behaviour $T$ and the result of the model explanation calculated on the changed dataset. We denote this strategy by $t$ and define $\mathcal{L}_{g,t}(X') = \|g_c(X') - T\|$. Since we focus on a specific model-agnostic explanation, we substitute PD in place of $g$ to obtain $\mathcal{L}_{\mathrm{PD},t}(X') = \|\mathrm{PD}_c(X') - T\|$. This substitution can be generalized to various global-level model explanations that rely on using a part of the dataset for computation.
In the robustness check, we aim to maximize the distance between the result of the model explanation calculated on the original dataset, $g_c(X)$, and on the changed one; thus, a minus sign is required. We denote this strategy by $r$ and define $\mathcal{L}_{g,r}(X') = -\|g_c(X') - g_c(X)\|$. Accordingly, we substitute PD in place of $g$ to obtain $\mathcal{L}_{\mathrm{PD},r}(X') = -\|\mathrm{PD}_c(X') - \mathrm{PD}_c(X)\|$. Note that $\mathcal{L}_{g,s}$ may vary depending on the explanation used; specifically for PD, it is useful to centre the explanation before calculating the distances, which is the default behaviour in our implementation: $\mathcal{L}_{\widetilde{\mathrm{PD}},r}(X') = -\|\widetilde{\mathrm{PD}}_c(X') - \widetilde{\mathrm{PD}}_c(X)\|$, where $\widetilde{\mathrm{PD}}$ denotes PD with its mean over $Z$ subtracted. We consider this second approach of comparing explanations using centred PD, as it forces changes in the shape of the explanation instead of promoting a vertical shift of the profile with an insignificant change in shape.
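As a minimal sketch of these definitions, the two strategies can be written as plain functions over already-computed PD vectors; the mean squared distance and the names below are illustrative assumptions, not the paper's exact implementation.

```python
def dist(x, y):
    # mean squared difference between two explanation vectors over the grid Z
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def center(v):
    m = sum(v) / len(v)
    return [a - m for a in v]

def loss_targeted(pd_changed, target):
    # targeted attack: minimize the distance to the desired profile T
    return dist(pd_changed, target)

def loss_robustness(pd_changed, pd_original, centred=True):
    # robustness check: maximize the distance to the original profile, hence
    # the minus sign; centring forces a change of shape rather than a plain
    # vertical shift of the profile
    if centred:
        pd_changed, pd_original = center(pd_changed), center(pd_original)
    return -dist(pd_changed, pd_original)

# A profile that merely shifts by a constant yields zero centred loss,
# while a genuine change of shape is rewarded with a more negative loss:
print(loss_robustness([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))      # -0.0
print(loss_robustness([1.0, 2.0, 3.0], [3.0, 2.0, 1.0]) < 0)  # True
```

The last two lines illustrate why centring is the default: without it, an optimizer can "succeed" by simply moving the whole profile up or down.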

Genetic-based Algorithm
We introduce a novel strategy for fooling explanations based on the genetic algorithm, as it is a simple yet powerful method for real-parameter optimization [52]. We do not encode genes conventionally but deliberately use this term to distinguish it from other types of evolutionary algorithms [18]. The method is invariant to the model's definition and the considered explanations; thus, it becomes model-agnostic and explanation-agnostic. These traits are crucial when working with black-box machine learning, as versatile solutions are convenient.
Fooling PD under both strategies relies on a similar genetic algorithm. The main idea is to define an individual as an instance of the dataset and to iteratively perturb its values, either to achieve the desired explanation target or to perform the robustness check and observe the change. These individuals are initialized with the values of the original dataset X to form a population. Subsequently, the initialization ends with mutating the individuals using a higher-than-default variance of perturbations. Then, in each iteration, they are randomly crossed, mutated, evaluated with the attack loss, and selected based on the loss values. Crossover swaps columns between individuals to produce new ones, which are then added to the population. The number of swapped columns can be randomized; also, the number of parents can be parameterized. Mutation adds Gaussian noise to the individuals using scaled standard deviations of the variables. It is possible to constrain the data change to the original range of variable values, and also to keep some variables unchanged. Evaluation calculates the loss for each individual, which requires computing an explanation for each dataset. Selection reduces the number of individuals using rank selection, with elitism to guarantee that several best individuals remain in the next population.
We considered performing the crossover through an exchange of rows between individuals, but it might drastically shift the datasets and move them apart. Additionally, a worthy mutation is to add or subtract whole numbers from the integer-encoded (categorical) variables. We further discuss the algorithm's details in the Supplementary material. The introduced attack is model-invariant because no derivatives are needed for optimization, which allows evaluating explanations of black-box models. While we found this method a sufficient generalization of our framework, there is a possibility to perform a more efficient optimization assuming prior knowledge of the structure of the model and the explanation.
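A compact, model-agnostic sketch of the loop described above may look as follows; the hyper-parameter names (pop_count, std_ratio) follow the text, while the concrete crossover and selection details are simplified placeholders (e.g. uniform instead of rank-weighted sampling of survivors), not the paper's exact code.

```python
import copy
import random
import statistics

def genetic_attack(loss, X0, c, pop_count=20, max_iter=50,
                   std_ratio=0.1, elite=2, seed=0):
    """Minimize `loss` over datasets derived from X0, never touching column c."""
    rng = random.Random(seed)
    n_cols = len(X0[0])
    stds = [statistics.pstdev(col) for col in zip(*X0)]

    def mutate(ind, scale=1.0):
        # add Gaussian noise scaled by each variable's standard deviation
        for row in ind:
            for j in range(n_cols):
                if j != c:  # never change the explained variable
                    row[j] += rng.gauss(0.0, stds[j] * std_ratio * scale)

    # initialization: copies of X0, mutated with a higher-than-default variance
    pop = [copy.deepcopy(X0) for _ in range(pop_count)]
    for ind in pop:
        mutate(ind, scale=3.0)

    for _ in range(max_iter):
        # crossover: a child takes one random column from a second parent
        p1, p2 = rng.sample(pop, 2)
        child = copy.deepcopy(p1)
        j = rng.choice([k for k in range(n_cols) if k != c])
        for r in range(len(child)):
            child[r][j] = p2[r][j]
        pop.append(child)
        # mutation: perturb everyone except the current elites
        for ind in pop[elite:]:
            mutate(ind)
        # evaluation + selection: keep elites, plus a random subset of the rest
        pop.sort(key=loss)
        pop = pop[:elite] + rng.sample(pop[elite:], pop_count - elite)
    return min(pop, key=loss)
```

Because `loss` is treated as an opaque callable, the same loop serves the targeted attack and the robustness check, and indeed any explanation whose value can be computed from a dataset.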

Gradient-based Algorithm
Gradient-based methods are state-of-the-art optimization approaches, especially in the domain of deep neural networks [34]. The algorithm's main idea is to use gradient descent to optimize the attack loss, given the differentiability of the model's output with respect to the input data. Such an assumption allows for faster and more accurate convergence to a local minimum using one of the stochastic optimizers; in our case, Adam [31]. Note that the differentiability assumption is with respect to the input data, not the model's parameters. We derive the gradients $\nabla_{X_{-c}} \mathcal{L}_{g,s}$ for fooling explanations based on their estimators, not the theoretical definitions. This is because in the theoretical definition of PD the input data is assumed to be a random variable, making it impossible to calculate a derivative over the input dataset. In practice, the estimator produces the explanation, so we do not derive the method directly from the definition.
Although we specifically consider the usage of neural networks because of their strong relation to differentiation, the algorithm's theoretical derivation does not require this type of model. For brevity, we derive the gradients $\nabla_{X_{-c}} \mathcal{L}_{\mathrm{PD},t}$, $\nabla_{X_{-c}} \mathcal{L}_{\mathrm{PD},r}$, and $\nabla_{X_{-c}} \mathcal{L}_{\widetilde{\mathrm{PD}},r}$ in the Supplementary material. Overall, the gradient-based algorithm is similar to the genetic-based algorithm in that we aim to iteratively change the dataset used to calculate the explanation. Nevertheless, its main assumption is that the model provides an interface for differentiating the output with respect to the input, which is not the case for black-box models.
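To make the gradient step concrete without automatic differentiation, consider a linear model $f(x) = w \cdot x$, for which the derivative of the PD estimator has a closed form: for $j \neq c$, $\partial \mathrm{PD}_c(X', z) / \partial X'_{i,j} = w_j / N$, independent of $z$. The sketch below performs plain gradient descent on the targeted loss under the mean squared distance; it illustrates the principle, whereas the paper's algorithm relies on autodiff (e.g. PyTorch or TensorFlow) and the Adam optimizer.

```python
def fool_linear_pd(w, X, c, zs, target, lr=0.05, steps=200):
    """Targeted attack on the PD of a linear model via hand-derived gradients."""
    N, P = len(X), len(X[0])
    Xp = [list(row) for row in X]  # the poisoned dataset X', initialized at X
    for _ in range(steps):
        # For a linear model: PD_c(X', z) = w_c * z + sum_{j != c} w_j * mean(X'_j)
        offset = sum(w[j] * sum(r[j] for r in Xp) / N
                     for j in range(P) if j != c)
        residuals = [(w[c] * z + offset) - t for z, t in zip(zs, target)]
        coeff = 2.0 * sum(residuals) / len(zs)  # d(loss)/d(offset), chain rule
        for i in range(N):
            for j in range(P):
                if j != c:  # the explained column is never modified
                    Xp[i][j] -= lr * coeff * w[j] / N
    return Xp

# Two grid points; the attacker wants PD(z) = z + 1 instead of the original z.
w = [1.0, 1.0]
X = [[0.0, 0.0], [0.0, 0.0]]
Xp = fool_linear_pd(w, X, c=0, zs=[0.0, 1.0], target=[1.0, 2.0])
print(sum(r[1] for r in Xp) / len(Xp))  # approx. 1.0: the poisoned mean shifts PD
```

For a linear model only the profile's vertical offset can be bent this way; with nonlinear models and interactions, the same descent can also reshape the profile, which is exactly what the experiments exploit.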

Experiments
Setup. We conduct experiments on two predictive tasks to evaluate the algorithms and conclude with illustrative scenario examples, which refer to the framework shown in Figure 1. The first dataset is a synthetic regression problem that refers to Friedman's work [19], where the inputs X are independent variables uniformly distributed on the interval [0, 1], while the target y is created according to the formula $y(X) = 10\sin(\pi X_1 X_2) + 20(X_3 - 0.5)^2 + 10X_4 + 5X_5$. Only 5 variables are actually used to compute y, while the remaining ones are independent of the target. We refer to this dataset as friedman and target explanations of the variable $X_1$. The second dataset is a real classification task from UCI [17], which has 5 continuous variables, 8 categorical variables, and an evenly-distributed binary target. We refer to this dataset as heart and target explanations of the variable age. Additionally, we keep the discrete variables constant during the performed fooling because we mainly rely on incremental changes in the values of continuous variables, and categorical variables are out of the scope of this work.
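Generating the friedman data can be sketched as follows; the function name and defaults are illustrative, not the paper's code, and extra columns beyond the first five are pure noise with respect to the target.

```python
import math
import random

def make_friedman(n_samples, n_features=5, seed=0):
    """Inputs uniform on [0, 1]; only the first five variables drive y."""
    rng = random.Random(seed)
    X = [[rng.uniform(0.0, 1.0) for _ in range(n_features)]
         for _ in range(n_samples)]
    y = [10 * math.sin(math.pi * r[0] * r[1]) + 20 * (r[2] - 0.5) ** 2
         + 10 * r[3] + 5 * r[4] for r in X]
    return X, y

# e.g. a friedman variant with 3 extra uniform noise columns (the ablation
# study instead adds normally distributed noise variables)
X, y = make_friedman(1000, n_features=8)
```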
Results. Figure 2 presents the main result of the paper: PD can be manipulated. We use the gradient-based algorithm to change the explanations of feedforward neural networks via data poisoning. The targeted attack aims to arbitrarily change the monotonicity of PD, which is evident in both predictive tasks. The robustness check finds the most distant explanation from the original one. We perform the fooling 30 times for each subplot, and the Y-axis denotes the model's architecture: layers×neurons. We observe that PD explanations are especially vulnerable in complex models.
Next, we aim to evaluate the PD of various state-of-the-art machine learning models at several complexity levels; we denote: Linear Model (LM), Decision Tree (DT), Random Forest (RF), Gradient Boosting Machine (GBM), Support Vector Machine (SVM), and Neural Network (NN).

Fig. 2. Fooling Partial Dependence of neural network models (rows) fitted to the friedman and heart datasets (columns). We performed multiple randomly initiated gradient-based fooling algorithms on the explanations of variables X1 and age, respectively. The blue line denotes the original explanation, the red lines are the fooled explanations, and in the targeted attack, the grey line denotes the desired target. We observe that the explanations' vulnerability greatly increases with model complexity. Interestingly, the algorithm seems to converge to two contrary optima when no target is provided.

Table 1 presents the results of robustness checks for Partial Dependence of various machine learning models and complexity levels. Each value corresponds to the distance between the original explanation and the changed one, multiplied by 10^3 for friedman and 10^6 for heart for clarity. We perform the checks 6 times and report the mean ± standard deviation. Note that we cannot compare the values between tasks, as their magnitudes depend on the prediction range. We found the explanations of NN, SVM, and deep DT the most vulnerable to the fooling methods (top of the Table). In contrast, RF seems to provide robust explanations; thus, we further investigate the relationship between tree-based models' complexity and explanation stability (bottom of the Table) to conclude that increasing complexity yields more vulnerable explanations, which is consistent with Figure 2. We attribute the differences between the results for RF and GBM to the concept of the bias-variance tradeoff. In some cases (heart, RF), explanations of too simple models become vulnerable too, since underfitted models may be as uncertain as overfitted ones.
Ablation Study. We further discuss additional results that may be of interest to give broader context to this work. Figure 3 presents the distinction between the robustness check for centred Partial Dependence, which is the default algorithm, and the robustness check for uncentred PD. We use the gradient-based algorithm to change the explanations of a 3 layers × 32 neurons ReLU neural network and perform the fooling 30 times for each subplot. We observe that centring the explanation in the attack loss definition is necessary to achieve a change in explanation shape; otherwise, the explanation shifts upwards or downwards by essentially changing the mean of the prediction. This observation was consistent across most of the models regardless of their complexity.

Fig. 3. Fooling Partial Dependence of a 3×32 neural network fitted to the friedman (top row) and heart (bottom row) datasets. We performed multiple randomly initiated gradient-based fooling algorithms on the explanations of variables X1 and age, respectively. We observe that centring PD is beneficial because it stops the manipulated explanation from shifting.

Table 2. Attack loss values of the robustness checks for PD of various ReLU neural networks. We add additional noise variables to the data before model fitting, e.g. friedman+2 denotes the referenced dataset with 2 additional variables sampled from the normal distribution. We perform the fooling 30 times and report the mean ± sd. We observe that the explanations' vulnerability greatly increases with task complexity.
Adversarial Scenario. Following the framework shown in Figure 1, we consider three stakeholders apparent in explainable machine learning: developer, auditor, and prediction recipients. Let us assume that the model predicting a heart attack should not take into account a patient's sex, although it might be a valuable predictor. An auditor analyses the model using Partial Dependence; therefore, the developer supplies a poisoned dataset for this task. Figure 5 presents two possible outcomes of the model audit: concealed and suspected, which are unequivocally bound to the explanation result and dataset. In the first, the model is unchanged while the stated assumption of an even dependence on sex appears to hold (equal to about 0.5), concealing the true behaviour; thus, the prediction recipients become vulnerable. Additionally, we supply an alternative scenario where the developer wants to provide evidence of model unfairness to raise suspicion (dependence for class 0 equal to about 0.7).
Supportive Scenario. In this work, we consider an equation of three variables: data, model, and explanation; thus, we poison the data to fool the explanation while the model remains unchanged. Figures 4 and 5 showcase an exemplary data shift occurring in the dataset after the attack, where changing only a few explanatory variables results in bending PD. We present a moderate change in data distribution to introduce the concept of analysing such relationships for explanatory purposes; e.g. the first result might suggest that resting blood pressure and maximum heart rate contribute to the explanation of age, while the second result suggests how these variables contribute to the explanation of sex. We conclude that the data shift is worth exploring to analyse variable interactions in models.

Limitations and Future Work
We find these results both alarming and informative, yet we proceed to discuss the limitations of the study. The first is the assumption that, in an adversarial scenario, the auditor has no access to the original (unknown) data, e.g. in a research or healthcare audit. While the detectability of fooling is worth analysing, our work does not focus solely on adversarial manipulation of PD, as we sincerely hope such data poisoning is not occurring in practice. Above all, we aim to underline the crucial context of data distribution in the interpretation of explanations, and to introduce a new way of evaluating PD and black-box explanations by generalizing the methods with genetic-based optimization.
Another limitation is the size of the used datasets. We engaged with larger datasets during experiments but held back due to the contrary view that increasing the dataset size might be considered as exaggerating the results. PD clearly becomes more complex with increasing data dimensions; moreover, a higher-dimensional space should entail more possible ways of manipulation, which is evident in the ablation study. We note that in practice, the explanations might require 100-1000 observations for the estimation (e.g. kernel SHAP, PDP), hence the size of the datasets in this study. Finally, we omit datasets like Adult and COMPAS because they mainly consist of categorical variables.

Future Work. We foresee several directions for future work, e.g. evaluating the successor to PD, Accumulated Local Effects (ALE) [5], although the practical estimation of ALE presents challenges. Second, the attack loss may be enhanced by regularization, e.g. a penalty for substantial change in the data or in the mean of the model's prediction, to achieve more meaningful fooling with less evidence. We focus in this work on univariate PD, but targeting bivariate PD can also be examined. Overall, the landscape of global-level, post-hoc model explanations is a broad domain, and the potential of a security breach in other methods, e.g. SHAP, should be further examined. Enhancements to the model-agnostic and explanation-agnostic genetic algorithm are thereby welcomed.
Another future direction would be to enhance the stability of PD. Rieger and Hansen [41] present a defence strategy against the attack via data change [16] by aggregating various explanations, which produces robust results without changing the model.

Conclusion and Impact
We highlight that Partial Dependence can be maliciously altered, e.g. bent and shifted, with adversarial data perturbations. The introduced genetic-based algorithm allows for evaluating explanations of any black-box model. Experimental results on various models and their sizes showcase the hidden debt of model complexity related to explainable machine learning. Explanations of low-variance models prove to be robust to the manipulation, while very complex models should not be explained with PD, as they become vulnerable to changes in the reference data distribution. Robustness checks lead to varied modifications of the explanations depending on the setting, e.g. they may produce two opposite PD profiles, which is why it is advised to perform the checks multiple times.
This work investigates the vulnerability of global-level, post-hoc model explainability from an adversarial standpoint, which relates to the responsibility and security of artificial intelligence use. The possible manipulation of PD leads to the conclusion that explanations used to explain black-box machine learning may be considered black boxes themselves. These explainability methods are undeniably useful through implementations in various popular software. However, just as machine learning models cannot be developed without extensive testing and understanding of their behaviour, their explanations cannot be used without critical thinking. We recommend ensuring the reliability of explanation results through the introduced methods, which can also be used to study model behaviour under data shift. Code for this work is available at https://github.com/MI2DataLab/fooling-partial-dependence.

A Genetic-based Algorithm
Attacks on PD in both strategies include a similar Algorithm 1. The main idea is to define an individual as an instance of the dataset and to iteratively perturb its values, either to achieve the desired explanation target or to perform the robustness check and observe the change. These individuals are initialized with the values of the original dataset X to form a population P. Subsequently, the initialization ends with mutating P using a higher-than-default variance of perturbations. Then, in each iteration, they are randomly crossed, mutated, evaluated with the loss function, and selected based on the evaluation. The algorithm stops after a defined number of repetitions, and the best individual, with its corresponding explanation, is the result. The initialized population moves to the crossover phase. The crossover presented in Algorithm 2 swaps columns between parent individuals to produce new ones. The proportion of the population that becomes parents is parameterized by crossover_ratio, and the parent pairs are randomly sampled without replacement from this subset of P. For each pair, a set of variable columns (of the full dataset) to swap is randomly selected, which yields a newly created individual q. The children Q constructed in this way are added to the population. The enlarged population moves to the mutation phase.
The mutation presented in Algorithm 3 adds Gaussian noise to the individuals (datasets) using the variables' standard deviations std(X), which results in a changed population P. These standard deviations are scaled by the std_ratio parameter to lower the variance of the noise. There is a possibility to constrain the changes in the datasets to the original range of variable values. In that case, any out-of-range values that might occur are substituted with new ones drawn from the uniform distribution on the range between the original dataset value and the boundary. It is also practicable to treat chosen elements of the dataset as constant. The mutated population moves to the evaluation phase.
The evaluation presented in Algorithm 4 uses the attack loss function; thus, it depends on the strategy s. For the robustness check r, we use the original dataset X to calculate the loss $\mathcal{L}_{g,r}$, while for the targeted attack t, we require T in $\mathcal{L}_{g,t}$. Genetic algorithms usually maximize a fitness function, but we decided to minimize the loss function so that both considered algorithms are similar. Algorithm 4 returns loss values l for each individual, which are passed to the selection phase. The selection presented in Algorithm 5 uses the rank selection algorithm to reduce the number of individuals to the starting number pop_count and ensure attack convergence. Rank selection uses a probability of survival for each individual that depends on their ranking based on the corresponding loss values l. We added fundamental elitism to the selection algorithm, meaning that in each iteration, we guarantee that several best individuals remain in the next population. This addition ensures that the quality of the genetic-based attack's solution will not decrease from one iteration to the next. The cycle continues until max_iter iterations are reached, and the best individual is selected.
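The selection phase with rank selection and elitism can be sketched as follows; parameter names follow the text, while the rank-to-probability weighting is an illustrative choice.

```python
import random

def select(population, losses, pop_count, elite=2, rng=None):
    """Reduce the population back to pop_count via rank selection with elitism."""
    rng = rng or random.Random(0)
    # rank individuals by loss, best (lowest loss) first
    ranked = [ind for _, ind in
              sorted(zip(losses, population), key=lambda p: p[0])]
    survivors = ranked[:elite]  # elitism: the best individuals always survive
    rest = ranked[elite:]
    # rank-based weights: the better the rank, the larger the survival chance
    weights = [len(rest) - k for k in range(len(rest))]
    while len(survivors) < pop_count:
        survivors.append(rng.choices(rest, weights=weights, k=1)[0])
    return survivors
```

Sampling with replacement may duplicate non-elite individuals; subsequent mutation then diversifies such copies again, which is a common simplification in rank-selection implementations.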

B Gradient-based Algorithm
The gradient-based algorithm uses the gradient of the attack loss, which can be further enhanced by optimizers such as Adam. The implementation relies on automatic differentiation, which is easily accessible in modern tools like PyTorch or TensorFlow. Next, we provide Equations 1 and 2 of the attack loss derivatives used in the algorithm. When calculating the derivative of $\mathrm{PD}_c$, we assume that the explained variable $c$ is constant; thus, we denote the gradient as $\nabla_{X_{-c}}$.
Lemma 1 (Derivative of $\mathcal{L}_{\mathrm{PD},t}$ and $\mathcal{L}_{\mathrm{PD},r}$). Let $f: \mathbb{R}^{N \times P} \to \mathbb{R}^{N}$ represent the differentiable function that is explained by PD. Let $Z$ be the set of points used to calculate PD. Let $T: Z \to \mathbb{R}$. Finally, let $X \in \mathbb{R}^{N \times P}$ be the original dataset. Then, consistently with the mean squared distance used throughout,

$$\nabla_{X_{-c}} \mathcal{L}_{\mathrm{PD},t}(X') = \frac{2}{|Z| \, N} \sum_{z \in Z} \left( \mathrm{PD}_c(X', z) - T(z) \right) \sum_{i=1}^{N} \nabla_{X_{-c}} f\!\left(X'^{\,c|=z}_i\right). \tag{1}$$

Proof. We derive the formula for the targeted attack; the formula for the robustness check is derived by analogy. We calculate the derivative with respect to a particular value $X'_{i,j}$ in the dataset; note that we want to leave column $c$ intact, so $j \neq c$. Applying the chain rule to the squared differences yields the residual factor $\mathrm{PD}_c(X', z) - T(z)$ multiplied by the derivative of the estimator itself; $T(z)$ is independent of $X'_{i,j}$, so it is dropped during the differentiation.

Lemma 2 (Derivative of $\mathcal{L}_{\widetilde{\mathrm{PD}},r}$). Let $f: \mathbb{R}^{N \times P} \to \mathbb{R}^{N}$ represent the differentiable function that is explained by PD. Let $Z$ be the set of points used to calculate PD. Let $\widetilde{\mathrm{PD}}$ denote the centred PD, which is obtained by subtracting the mean in each point. Finally, let $X \in \mathbb{R}^{N \times P}$ be the original dataset. Then

$$\nabla_{X_{-c}} \mathcal{L}_{\widetilde{\mathrm{PD}},r}(X') = -\frac{2}{|Z| \, N} \sum_{z \in Z} \left( \widetilde{\mathrm{PD}}_c(X', z) - \widetilde{\mathrm{PD}}_c(X, z) \right) \sum_{i=1}^{N} \nabla_{X_{-c}} \left( f\!\left(X'^{\,c|=z}_i\right) - \frac{1}{|Z|} \sum_{z' \in Z} f\!\left(X'^{\,c|=z'}_i\right) \right). \tag{2}$$