Marginal Effects for Non-Linear Prediction Functions

Beta coefficients for linear regression models represent the ideal form of an interpretable feature effect. However, for non-linear models and especially generalized linear models, the estimated coefficients cannot be interpreted as a direct feature effect on the predicted outcome. Hence, marginal effects are typically used as approximations for feature effects, either in the shape of derivatives of the prediction function or forward differences in prediction due to a change in a feature value. While marginal effects are commonly used in many scientific fields, they have not yet been adopted as a model-agnostic interpretation method for machine learning models. This may stem from their inflexibility as a univariate feature effect and their inability to deal with the non-linearities found in black box models. We introduce a new class of marginal effects termed forward marginal effects. We argue for abandoning derivatives in favor of the more interpretable forward differences. Furthermore, we generalize marginal effects based on forward differences to multivariate changes in feature values. To account for the non-linearity of prediction functions, we introduce a non-linearity measure for marginal effects. We argue against summarizing feature effects of a non-linear prediction function in a single metric such as the average marginal effect. Instead, we propose to partition the feature space to compute conditional average marginal effects on feature subspaces, which serve as conditional feature effect estimates.


Introduction
The lack of interpretability of most machine learning (ML) models has been considered one of their major drawbacks (Breiman 2001b). As a consequence, researchers have developed a variety of model-agnostic techniques to explain the behavior of ML models. These techniques are commonly referred to by the umbrella terms of interpretable machine learning (IML) or explainable artificial intelligence. In this paper, we focus on feature effects that indicate the direction and magnitude of a change in a predicted outcome due to a change in feature values. We distinguish between local explanations on the observational level and global ones for the entire feature space. In many applications, we are concerned with feature effects; for example, in medical research, we might want to assess the increase in risk of contracting a disease due to a change in a patient's health characteristics such as age or body weight.
Consider the interpretation of a linear regression model (LM) without interaction terms, where β_j denotes the coefficient of the j-th feature. Increasing a feature value x_j by one unit causes a change in predicted outcome of β_j. LMs are therefore often interpreted by merely inspecting the estimated coefficients. When the terms are non-linear, interactions are present, or the expected target is transformed such as in generalized linear models (GLMs), interpretations are both inconvenient and unintuitive. For instance, in logistic regression, the expectation of the target variable is logit-transformed, and the predictor term cannot be interpreted as a direct feature effect on the predicted risk. It follows that even linear terms have a non-linear effect on the predicted target that varies across the feature space, which makes interpretations through the model parameters difficult or even impossible. A more convenient and intuitive interpretation corresponds to the derivative of the prediction function w.r.t.
the feature, or to inspecting the change in prediction due to an intervention in the data. These two approaches are commonly referred to as marginal effects (MEs) in the statistical literature (Bartus 2005). MEs are often aggregated to an average marginal effect (AME), which represents an estimate of the expected ME. Furthermore, marginal effects at means (MEM) and marginal effects at representative values (MER) correspond to MEs where all features are set to the sample mean or where some feature values are set to manually chosen values (Williams 2012). These can be used to answer common research questions, e.g., what the average effect of age or body weight is on the risk of contracting the disease (AME), what the effect is for a patient with average age and body weight (MEM), and what the effect is for a patient with pre-specified age and body weight values (MER). An increasing number of scientific disciplines now rely on the predictive power of black box ML models instead of using intrinsically interpretable models such as GLMs, e.g., econometrics (Athey 2017) or psychology (Stachl et al. 2017). This creates an incentive to review and refine the theory of MEs for the application to non-linear models.

Motivation
In their current form, MEs are not an ideal tool to interpret many statistical models such as GLMs. Furthermore, the shortcomings of MEs are exacerbated when applying them to black box models such as the ones created by many ML algorithms. The ME is an unfit tool to determine multivariate feature effects, which is a requirement for model interpretability. Moreover, for non-linear prediction functions, MEs based on derivatives provide misleading interpretations, as the derivative is evaluated at a different location in the feature space where it may substantially deviate from the prediction function. The alternative and often overlooked definition based on forward differences suffers from a loss of information about the shape of the prediction function (see Section 4). Furthermore, predictive models typically do not behave reliably in areas with a low density of training observations. MEs are often based on such model extrapolations in order to answer specific research questions (Hainmueller et al. 2019). For instance, one may evaluate feature effects on the disease risk for an artificially created patient observation that was not included in the training data. This problem is aggravated when switching from derivatives to forward differences (see Section 4). Lastly, for linear models, the ME is identical across the entire feature space. For non-linear models, one typically estimates the global feature effect by computing the AME (Bartus 2005; Onukwugha et al. 2015). However, a global average does not accurately represent the nuances of a non-linear predictive model. A more informative summary of the prediction function corresponds to the conditional feature effect on a feature subspace; e.g., patients within a certain range of health characteristics might be associated with homogeneous feature effects. Instead of global interpretations on the entire feature space, one should aim for semi-aggregated (semi-global) interpretations. More specifically, one should work towards computing multiple, semi-global conditional AMEs (cAMEs) instead of a single, global AME.

Contributions
The purpose of this research paper is twofold. First, we aim to provide a review of the theory of MEs to compute feature effects. Second, we refine MEs for the application to non-linear models. To avoid misinterpretations resulting from applying derivative MEs (dMEs) to non-linear prediction functions, we argue for abandoning them in favor of forward differences, which we term forward marginal effects (fMEs). Furthermore, we extend the univariate definition of fMEs to multivariate feature changes in order to determine multivariate feature effects. We demonstrate the superiority of fMEs over dMEs for interpreting tree-based prediction functions. Although the resulting fMEs provide a precise feature effect measure on the predicted outcome, we are at risk of misjudging the shape of the prediction function as a linear function. To counteract this loss of information, we propose a non-linearity measure (NLM) for fMEs based on the similarity between the prediction function and the intersecting linear secant. Furthermore, for a more nuanced interpretation, we introduce conditional MEs (cMEs) on feature subspaces as a semi-global feature effect measure that more accurately describes feature effects across the feature space. Moreover, we propose to estimate cMEs with conditional AMEs (cAMEs), which can be computed by recursively partitioning the feature space with a regression tree on fMEs. Lastly, we provide proofs of additive recovery for the univariate and multivariate fME, and a proof of the relation between the individual conditional expectation (ICE) / partial dependence (PD) and the fME / forward AME.

Structure of the Paper
The paper is structured as follows: In Section 2, we introduce the notation and general background on predictive modeling and finite differences. In Section 3, we review the existing methodology of MEs. We begin with a survey of related work in theoretical and applied statistics, IML, and sensitivity analysis. We review existing definitions of MEs, including categorical MEs, (numeric) derivatives and univariate forward differences, as well as variants and aggregations of MEs, i.e., the AME, MEM, and MER. In Section 4, we provide recommendations for current practice and discuss shortcomings of existing methodology. We argue for abandoning dMEs in favor of fMEs and discuss the selection of step sizes, including potential model extrapolations. In Section 5, we introduce a new class of fMEs, i.e., an observation-wise definition of the categorical ME, the multivariate fME, the NLM, the cME, and a framework to estimate the cME via cAMEs. In Section 6, we compare fMEs - including our introduced novelties - to the competing state-of-the-art method LIME. In Section 7, we run multiple simulations with fMEs and the NLM. In Section 8, we present our suggested application workflow and an applied example with the Boston housing data. The Appendix contains background information on additive decompositions of prediction functions and on model extrapolations, as well as several mathematical proofs.

Notation and General Background
This section introduces the notation and general background information on predictive modeling and finite differences. See Appendix A for further background information on additive decompositions of prediction functions and on model extrapolations.

Data and Predictive Model
We consider a p-dimensional feature space X = X_1 × ... × X_p and a target space Y. The random variables on the feature space are denoted by X = (X_1, ..., X_p). The random variable on the target space is denoted by Y. An undefined subspace of all features is denoted by X_[ ] ⊆ X. Correspondingly, X with a restricted sample space is denoted by X_[ ]. A realization of X and Y is denoted by x = (x_1, ..., x_p) and y. The probability distribution P is defined on the sample space X × Y. A learning algorithm trains a predictive model f : R^p → R on data drawn from P, where f(x) denotes the model prediction based on the p-dimensional feature vector x. To simplify our notation, we only consider one-dimensional predictions. However, the results on MEs can be generalized to multi-dimensional predictions, e.g., for multi-class classification. We denote the value of the j-th feature in x by x_j. A set of features is denoted by S ⊆ {1, ..., p}. The values of the feature set are denoted by x_S. All complementary features are indexed by -j or -S, so that x_-j = x_{1, ..., p} \ {j}, or x_-S = x_{1, ..., p} \ S. An instance x can be partitioned so that x = (x_j, x_-j), or x = (x_S, x_-S). With slight abuse of notation, we may denote the vector x_S by (x_1, ..., x_s) regardless of the elements of S, or the vector (x_j, x_-j) by (x_1, ..., x_j, ..., x_p) although j ∈ {1, ..., p}. The i-th observed feature vector is denoted by x^(i) and corresponds to the target value y^(i). We evaluate the prediction function with a set of training or test data D = (x^(i))_{i=1}^n.

Finite Differences and Derivatives
A finite difference (FD) of the prediction function f(x) w.r.t. x_j is defined as:

FD_{a,b} f(x) = f(x_j + a, x_-j) - f(x_j + b, x_-j)

The FD can be considered a movement on the prediction function (see Fig. 1). There are three common variants of FDs: forward (a = h, b = 0), backward (a = 0, b = -h), and central differences (a = h, b = -h). In the following, we only consider forward differences with b = 0, where the FD is denoted without b. Dividing the FD by (a - b) corresponds to the difference quotient:

(f(x_j + a, x_-j) - f(x_j + b, x_-j)) / (a - b)

[Fig. 1: The surface represents an exemplary prediction function dependent on two features. The FD can be considered a movement on the prediction function, e.g., traveling from point (0, -7) to point (7, 0).]
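The FD variants and the difference quotient can be sketched in a few lines of model-agnostic code; the two-feature prediction function below is purely hypothetical:

```python
import numpy as np

def finite_difference(f, x, j, a, b=0.0):
    """FD of f w.r.t. the j-th feature: f(x_j + a, x_-j) - f(x_j + b, x_-j).

    a = h, b = 0  -> forward difference
    a = 0, b = -h -> backward difference
    a = h, b = -h -> central difference
    """
    x_a, x_b = np.array(x, dtype=float), np.array(x, dtype=float)
    x_a[j] += a
    x_b[j] += b
    return f(x_a) - f(x_b)

def difference_quotient(f, x, j, a, b=0.0):
    """FD divided by the interval width (a - b)."""
    return finite_difference(f, x, j, a, b) / (a - b)

# Hypothetical prediction function of two features
f = lambda x: x[0] ** 2 + 3 * x[1]
print(finite_difference(f, [1.0, 2.0], j=0, a=1.0))  # forward FD with h = 1 -> 3.0
```

A central difference quotient with a small h, `difference_quotient(f, x, j, 1e-5, -1e-5)`, numerically approximates the partial derivative.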
The derivative is defined as the limit of the forward difference quotient as a = h approaches zero:

∂f(x) / ∂x_j = lim_{h → 0} (f(x_j + h, x_-j) - f(x)) / h

We can numerically approximate the derivative with small values of h, using forward, backward, or symmetric (central) FD quotients, which have varying error characteristics. As an example, consider the central FD quotient, which is often used for dMEs (Leeper 2018):

(f(x_j + h, x_-j) - f(x_j - h, x_-j)) / (2h)

Existing Methodology of Marginal Effects

This section provides a review of existing methodology. We survey related work and review existing definitions of MEs, including categorical MEs, (numeric) derivatives and univariate forward differences, and variants and aggregations of MEs.

Statistics and Applied Fields
MEs have been discussed extensively in the literature on statistics and statistical software, e.g., by Ai and Norton (2003), Greene (2012), Norton et al. (2019), or Mullahy (2017). The margins command is a part of Stata (StataCorp 2019) and was originally implemented by Bartus (2005). A brief description of the margins command is given by Williams (2012). Leeper (2018) provides an overview of dMEs and their variations, as well as a port of Stata's functionality to R. Ramsey and Bergtold (2021) compute an ME for a single-hidden-layer forward back-propagation artificial neural network by demonstrating its equivalent interpretation with a logistic regression model with a flexible index function. Zhao et al. (2020) apply model-agnostic dMEs to ML models in the context of analyzing travel behavior. Furthermore, they mention the unsuitability of derivatives for tree-based prediction functions such as random forests. Mize et al. (2019) provide a test framework for cross-model differences of MEs. They refer to a marginal effect with a forward difference as a discrete change and to the corresponding AMEs as average discrete changes. Gelman and Pardoe (2007) propose the predictive effect as a local feature effect measure. The predictive effect is a univariate forward difference, divided by the change in feature values (i.e., the step size). This differentiates it from the fME, which we also define for multivariate feature changes and which is not divided by the step size, i.e., it provides a change in prediction as opposed to a rate of change. Furthermore, the authors propose an average predictive effect that corresponds to the average of multiple predictive effects measured at distinct feature values and model parameters. As such, it is a generalization of the AME that may be estimated with artificially created data points (as opposed to the sample at hand) and incorporates model comparisons (measured with different model parameters).

Interpretable Machine Learning
The most commonly used techniques to determine feature effects include the ICE (Goldstein et al. 2015), the PD (Friedman 2001), accumulated local effects (ALE) (Apley and Zhu 2020), Shapley values (Štrumbelj and Kononenko 2014), Shapley additive explanations (SHAP) (Lundberg and Lee 2017), local interpretable model-agnostic explanations (LIME) (Ribeiro et al. 2016), and counterfactual explanations (Wachter et al. 2018). Counterfactual explanations indicate the smallest necessary change in feature values to receive the desired prediction and represent the counterpart to MEs. Goldstein et al. (2015) propose derivative ICE (d-ICE) plots to detect interactions. The d-ICE is a univariate ICE where the numeric derivative w.r.t. the feature of interest is computed pointwise after a smoothing procedure. Symbolic derivatives are commonly used to determine the importance of features for neural networks (Ancona et al. 2017). While MEs provide interpretations in terms of prediction changes, most methods provide an interpretation in terms of prediction levels. LIME is an alternative option that returns interpretable parameters (i.e., rates of change in prediction) of a local surrogate model. LIME, and to a lesser extent SHAP, have been demonstrated to provide unreliable interpretations in some cases. For instance, LIME is strongly influenced by the chosen kernel width parameter (Slack et al. 2019). In Section 6, we compare our new class of fMEs to LIME. Furthermore, many techniques in IML are interpreted visually (e.g., ICEs, the PD, ALE plots) and are therefore limited to feature value changes in at most two dimensions. MEs are not limited by the dimension of the intervention in feature values, as any change in feature values - regardless of its dimensionality - always results in a single ME.

Sensitivity Analysis
The goal of sensitivity analysis (SA) is to determine how uncertainty in the model output can be attributed to uncertainty in the model input, i.e., determining the importance of input variables (Saltelli et al. 2008). Techniques based on FDs are common in SA (Razavi et al. 2021). The numeric derivative of the function to be evaluated w.r.t. an input variable serves as the natural definition of local importance in SA. The elementary effect (EE) was first introduced as part of the Morris method (Morris 1991) as a screening tool for important inputs. The EE corresponds to a univariate forward difference quotient with variable step sizes, i.e., it is a generalization of the derivative. Variogram-based methods analyze forward differences computed at numerous pairs of points across the feature space (Razavi and Gupta 2016). Derivative-based global sensitivity measures (DGSM) provide a global feature importance metric by averaging derivatives at points obtained via random or quasi-random sampling.

Categorical Features
MEs for categorical features are often computed as the change in prediction when the feature value changes from a reference category to another category (Williams 2012). In other words, for each observation, the observed categorical feature value is set to the reference category, and we record the change in prediction when changing it to every other category. Given k categories, this results in k - 1 MEs for each observation. Consider a categorical feature x_j with categories C = {c_1, ..., c_k}. We denote the reference category by c_r. The categorical ME for an observation x and a single category c_l corresponds to Eq. (2):

ME(x, c_l) = f(c_l, x_-j) - f(c_r, x_-j)    (2)

Derivative Marginal Effects

The most commonly used definition of MEs for continuous features corresponds to the derivative of the prediction function w.r.t. a feature. We will refer to this definition as the derivative ME (dME). In case of a linear prediction function, the interpretation of dMEs is simple: if the feature value increases by one unit, the prediction increases by the dME estimate.
Note that even the prediction function of a linear regression model can be non-linear if exponents of order ≥ 2 are included in the model formula. Similarly, in GLMs, the expected target is transformed via a link function (e.g., the log link in Poisson regression or the logit link in logistic regression). Interpreting the dME is problematic for non-linear prediction functions, which we further explore in the following section.

Forward Differences with Univariate Feature Value Changes
A distinct and often overlooked definition of MEs corresponds to the change in prediction with adjusted feature values. Instead of computing the tangent at the point of interest and inspecting it at another feature value, this definition corresponds to putting the secant through the point of interest and the prediction at another feature value (see Fig. 3). This definition of MEs is based on a forward difference instead of a symmetric difference. We refer to this variant as the forward ME (fME):

fME(x, h_j) = f(x_j + h_j, x_-j) - f(x)
Unlike the dME, this variant does not require dividing the FD by the interval width. Typically, one uses a one-unit change (h_j = 1) in feature values (Mize et al. 2019).
The fME is illustrated in Fig. 3. It corresponds to the change in prediction along the secant (green) through the point of interest (prediction at x = 0.5) and the prediction at the feature value we receive after the feature change (x = 1.5). Table 1 gives an exemplary intervention in the Boston housing data to compute an fME. The goal is to determine the fME of increasing the average number of rooms per dwelling by one room on the median value of owner-occupied homes per Boston census tract.
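A univariate fME can be computed with a few lines of model-agnostic code. The model below is a hypothetical stand-in (not the actual Boston housing model), with feature 0 playing the role of the number of rooms:

```python
import numpy as np

def fme(f, X, j, h):
    """Univariate forward ME per row of X: f(x_j + h, x_-j) - f(x)."""
    X = np.asarray(X, dtype=float)
    X_step = X.copy()
    X_step[:, j] += h
    return f(X_step) - f(X)

# Hypothetical model: predicted value rises by 5 per additional room (feature 0)
model = lambda X: 10.0 + 5.0 * X[:, 0] - 0.5 * X[:, 1]
X = np.array([[6.0, 3.0], [5.0, 7.0]])
print(fme(model, X, j=0, h=1.0))  # -> [5. 5.] (linear model: constant fME)
```

For a non-linear model, the same call would return a different fME per observation.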

Variants and Aggregations of Marginal Effects
There are three established variants or aggregations of univariate MEs: the AME, MEM, and MER (Williams 2012), which can be computed for both dMEs and fMEs. In the following, we will use the notation of fMEs. Although we technically refer to the fAME, fMEM, and fMER, we omit the "forward" prefix in this case for reasons of simplicity.
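These three aggregations can be sketched as follows; the quadratic model is hypothetical and chosen so that the fME varies across observations:

```python
import numpy as np

def fme(f, X, j, h):
    """Univariate forward ME per row: f(x_j + h, x_-j) - f(x)."""
    X = np.asarray(X, dtype=float)
    X_step = X.copy()
    X_step[:, j] += h
    return f(X_step) - f(X)

model = lambda X: X[:, 0] ** 2 + X[:, 1]   # hypothetical prediction function
X = np.array([[0.0, 1.0], [2.0, 1.0], [4.0, 1.0]])
h = 1.0

ame = fme(model, X, j=0, h=h).mean()                          # (i) AME
mem = fme(model, X.mean(axis=0, keepdims=True), j=0, h=h)[0]  # (ii) MEM
X_rep = X.copy()
X_rep[:, 1] = 5.0                                             # (iii) MER: x_1 fixed
mer = fme(model, X_rep, j=0, h=h)

print(ame, mem, mer)  # -> 5.0 5.0 [1. 5. 9.]
```

Here the AME and MEM coincide only by construction; in general, averaging fMEs (AME) and evaluating the fME at averaged features (MEM) give different results.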
(i) Average marginal effect (AME): The AME represents an estimate of the expected ME w.r.t. the distribution of X. We estimate it via Monte Carlo integration, i.e., we average the fMEs computed for each (randomly sampled) observation:

AME = (1/n) Σ_{i=1}^{n} [f(x_j^(i) + h_j, x_-j^(i)) - f(x^(i))]

(ii) Marginal effect at means (MEM): The MEM can be considered the reverse of the AME, i.e., it is the ME evaluated at the expectation of X. We estimate the MEM by substituting all feature values with their sample means:

MEM = f(x̄_j + h_j, x̄_-j) - f(x̄)

Note that averaging values is only sensible for continuous features. Williams (2012) defines a categorical MEM where all remaining features x_-j are set to their sample means (conditional on being continuous) and x_j changes from each category to a reference category.

(iii) Marginal effects at representative values (MER): Furthermore, we can substitute specific feature values for all observations by manually specified values x*. It follows that the MEM is a special case of the MER where the specified values correspond to the sample means. MERs can be considered conditional MEs, i.e., we compute MEs while conditioning on certain feature values. The MER for a single observation with modified feature values x* corresponds to:

MER = f(x*_j + h_j, x*_-j) - f(x*)

Recommendations for Current Practice and Shortcomings

This section gives recommendations for current practice and discusses shortcomings of existing methods that form the basis for the modifications we present in the subsequent section.

Forward Difference versus Derivative
The most common way of using MEs in practice is through dMEs. In case of non-linear prediction functions, dMEs can lead to substantial misinterpretations (see Fig. 3). The slope of the tangent (grey) at the point of interest (prediction at x = 0.5) corresponds to the dME. The default way to interpret the dME is to evaluate the tangent at the feature value we receive after a unit change (x = 1.5). This leads to substantial misinterpretations for non-linear prediction functions. In this case, there is an error (red) almost as large as the actual change in prediction. Although the computation of the dME does not require a step size, its interpretation does and is therefore error-prone. We recommend using fMEs instead of the more popular dMEs. An fME always indicates an exact change in prediction for any prediction function and is therefore much more interpretable. For linear prediction functions, the interpretation of both variants is equivalent. MEs are often discarded in favor of differences in adjusted predictions (APs), i.e., predictions computed with different feature values (Williams 2012). In the following section, we formulate a multivariate fME, which bridges the gap between fMEs and differences in APs.
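The misinterpretation can be reproduced numerically. Assuming the hypothetical prediction function f(x) = x² (in the spirit of Fig. 3), the tangent-based interpretation of the dME at x = 0.5 misses the actual change in prediction by an error half as large as the true change:

```python
f = lambda x: x ** 2                          # hypothetical prediction function
x0, h = 0.5, 1.0

dme = (f(x0 + 1e-5) - f(x0 - 1e-5)) / 2e-5    # dME via central FD quotient
fme = f(x0 + h) - f(x0)                       # fME: exact change in prediction

tangent_at_step = f(x0) + dme * h             # interpreting the dME at x = 1.5
error = abs(tangent_at_step - f(x0 + h))      # error of the tangent interpretation
print(round(dme, 4), fme, round(error, 4))    # -> 1.0 2.0 1.0
```

The fME of 2.0 is the exact change in prediction, whereas the dME-based interpretation of 1.0 misses it by 1.0.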

Step Size and Extrapolation Risk
The step size is determined both by the research question being addressed and by the scale of the feature at training time. Consider a feature weight that is scaled in kilograms. If we want to evaluate the change in prediction if each observation's body weight increases by a kilogram, the appropriate step size is 1. We could also assess the change in prediction if each observation's weight decreases by a kilogram by specifying a step size of -1. However, if weight is scaled in grams, the appropriate step sizes are 1000 and -1000, respectively. Furthermore, one could think of dispersion-based measures to select step sizes. Mize et al. (2019) suggest one standard deviation in observed feature values as an informative step size. Other options include, e.g., a percentage of the interquartile range or the mean / median absolute deviation.
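The scale dependence and the dispersion-based choices can be illustrated with a hypothetical weight feature:

```python
import numpy as np

weight_kg = np.array([60.0, 72.5, 81.0, 95.0])  # hypothetical weight feature (kg)

h_unit = 1.0                       # a one-kilogram step
h_unit_g = 1000.0                  # the same step if weight were scaled in grams
h_sd = weight_kg.std(ddof=1)       # one standard deviation (Mize et al. 2019)
q1, q3 = np.percentile(weight_kg, [25, 75])
h_iqr = 0.5 * (q3 - q1)            # e.g., half the interquartile range
print(round(h_sd, 2), h_iqr)       # -> 14.71 7.5625
```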
The step size cannot vary indefinitely without risking model extrapolations. This issue becomes especially prevalent when switching from dMEs (with instantaneous changes in feature values) to fMEs (with variable step sizes of arbitrary magnitude). Furthermore, when using non-training data (or training data in low-density regions) to compute fMEs, we are at risk of the model extrapolating without actually traversing the feature space. If the points x or (x_j + h_j, x_-j) are classified as extrapolation points (EPs), the fME should be interpreted with caution. If the point x is located inside an extrapolation area (e.g., if it is located outside of the multivariate envelope of the training data), we suggest excluding x from the data used to compute fMEs. Given that the point x is not an EP but (x_j + h_j, x_-j) is, the step size can be modified until (x_j + h_j, x_-j) is located inside a non-extrapolation area. It is debatable how to classify points as EPs. In Appendix A.2, we describe several options for extrapolation detection. In Section 5, we extend the univariate definition of fMEs to multivariate steps; the same reasoning applies there, i.e., one can modify the marginal step widths independently until the model no longer extrapolates.
Fig. 2 demonstrates the pitfall of model extrapolations when using fMEs. We draw points of a single feature x from a uniform distribution on the interval [-5, 5] with y = x² + ε, where ε ∼ N(0, 1). A random forest is trained to predict y. All points x ∉ [-5, 5] fall outside the range of the training data and are classified as EPs. We compute fMEs with a step size of 1. By implication, all fMEs with x > 4 are based on model extrapolations. The resulting fMEs in the extrapolation area behave considerably differently from the fMEs in the non-extrapolation area and should not be used for interpretation purposes, as they convey an incorrect impression of the feature effect of x on y.
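A minimal sketch of envelope-based extrapolation detection, assuming the one-dimensional setting of Fig. 2 (with a deterministic grid instead of random draws):

```python
import numpy as np

def in_envelope(X_train, x):
    """Crude EP check: is x inside the per-feature range of the training data?"""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    return bool(np.all((x >= lo) & (x <= hi)))

X_train = np.linspace(-5.0, 5.0, 100).reshape(-1, 1)  # training data on [-5, 5]
x, h = np.array([4.5]), 1.0

print(in_envelope(X_train, x))      # -> True: the point itself is no EP
print(in_envelope(X_train, x + h))  # -> False: x + h = 5.5 is an EP
```

A per-feature range check only detects univariate extrapolation; the multivariate options discussed in Appendix A.2 are stricter.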

Shortcomings of Existing Methodology
First, in many cases, one wants to interpret the model based on changes in multiple features. The effect of multiple features may consist of univariate effects as well as of interactions between features of various orders (see Appendix A.1 for an introduction to decompositions of prediction functions). However, MEs are only capable of computing univariate feature effects. This inability to assess the combined effects of multiple features creates a barrier for their adoption. Second, the existing definition of categorical MEs in Eq. (2) is counter-intuitive and especially prone to extrapolations, as it replaces observed values on both sides of the forward difference with artificially constructed ones. Third, although the fME results in an exact change in prediction, we lose information about the shape of the prediction function along the forward difference (see Fig. 3). One therefore risks misinterpreting the fME as a linear effect. Fourth, the common practice of aggregating MEs to an AME, which acts as an estimate of the global feature effect, is misleading for non-linear prediction functions. Averaging may even result in canceling out individual effects with opposite signs, falsely suggesting that the feature has no influence on the target. We address these shortcomings in the following section.

A New Class of Forward Marginal Effects
This section introduces a new class of fMEs. First, we address the inadequate conception of categorical MEs by introducing a new, intuitive definition. Moreover, we extend univariate fMEs to multivariate changes in feature values. Next, we introduce the NLM for fMEs. The concept of estimating the expected ME through the AME is extended to a semi-global cME, which is conditional on a feature subspace. Lastly, we present ways to estimate cMEs through partitioning the feature space.

Observation-Wise Categorical Marginal Effect
Recall that the common definition of categorical MEs is based on first changing all observations' value of x_j to each category and then computing the difference in predictions when changing it to the reference category. However, one is often interested in prediction changes if aspects of an actual observation change. We therefore propose an observation-wise categorical ME. We first select a single reference category c_h. For each observation whose feature value x_j ≠ c_h, we predict once with the observed value x_j and once where x_j has been replaced by c_h:

fME(x, c_h) = f(c_h, x_-j) - f(x_j, x_-j)

This definition of categorical MEs is in line with the definition of fMEs, as we receive a single ME for a single observation with the observed feature value as the reference point. For k categories, we receive k - 1 sets of observation-wise categorical MEs. Keep in mind that changing categorical values also creates the risk of model extrapolations. However, observation-wise categorical MEs reduce the potential for extrapolations, as only one prediction of the FD is based on manipulated feature values.
As an example, consider the Boston housing data (see Table 3). We could be interested in the effect of changing the categorical feature chas (indicating whether a district borders the Charles River in Boston) on a district's median housing value in US dollars. For a district that does not border the river (chas = 0), the observation-wise categorical ME would investigate the effect of it bordering the river, and vice versa.
Observation-wise categorical MEs are also suited for computing a categorical AME. Note that we have a smaller number of categorical MEs, depending on the marginal distribution of x_j, which may affect the variance of the mean. However, observation-wise categorical MEs are not suited for MEMs, as we lose track of the observed values when averaging x_-S. The same partially holds for MERs, i.e., their computation is possible, but the MER obfuscates the interpretation we wish to have for observation-wise categorical MEs.
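A sketch of the observation-wise categorical ME, assuming a hypothetical model in the spirit of the chas example above (feature 0 stands in for the river indicator):

```python
def obs_wise_categorical_me(f, rows, j, c_h):
    """Observation-wise categorical ME: effect of switching x_j to category c_h."""
    effects = []
    for row in rows:
        if row[j] == c_h:
            continue                      # skip observations already at c_h
        row_switched = list(row)
        row_switched[j] = c_h
        effects.append(f(row_switched) - f(row))
    return effects

# Hypothetical model: bordering the river (chas == 1) adds 3 to the prediction
f = lambda row: 20.0 + (3.0 if row[0] == 1 else 0.0)
rows = [[0], [0], [1]]                    # feature 0 plays the role of chas
print(obs_wise_categorical_me(f, rows, j=0, c_h=1))  # -> [3.0, 3.0]
```

Only one side of each FD uses a manipulated feature value, which is exactly what reduces the extrapolation risk compared to Eq. (2).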

Forward Differences with Multivariate Feature Value Changes
We can extend the univariate fME to an arbitrary set of features, which allows us to explore the surface of the prediction function in various directions simultaneously. Consider a multivariate change in feature values of a feature subset S = {1, ..., s} with S ⊆ {1, ..., p}. Denoting the multivariate change (x_1 + h_1, ..., x_s + h_s) by (x_S + h_S), the fME corresponds to:

fME(x, h_S) = f(x_S + h_S, x_-S) - f(x)

It is straightforward to extend the univariate definitions of the AME, MEM, and MER to multivariate feature value changes.
One can demonstrate that the multivariate fME is equivalent to a difference in APs, which is used as an alternative to MEs. Consider two observations x and x* = (x*_1, ..., x*_p), and choose the step sizes h_S = x*_S - x_S for the set S of features in which the two observations differ. It follows that:

fME(x, h_S) = f(x_S + h_S, x_-S) - f(x) = f(x*) - f(x)

The multivariate fME thus bridges the gap between MEs and differences in APs, making a distinction obsolete.
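The equivalence can be checked numerically; the three-feature prediction function below is hypothetical:

```python
import numpy as np

def multivariate_fme(f, x, h):
    """Multivariate forward ME: f(x_S + h_S, x_-S) - f(x); h is zero outside S."""
    x, h = np.asarray(x, float), np.asarray(h, float)
    return f(x + h) - f(x)

f = lambda x: x[0] * x[1] + x[2]           # hypothetical prediction function
x = np.array([1.0, 2.0, 3.0])
x_star = np.array([2.0, 4.0, 3.0])         # adjusted observation (S = {1, 2})

h = x_star - x                             # choose the step as x* - x
print(multivariate_fme(f, x, h))           # -> 6.0
print(f(x_star) - f(x))                    # difference in APs, also -> 6.0
```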
A technique with the additive recovery property only recovers terms of the prediction function that depend on the feature(s) of interest x_S or consist of interactions between the feature(s) of interest and other features, i.e., the method recovers no terms that exclusively depend on the remaining features x_-S (Apley and Zhu 2020). In Appendix B.1, we derive the additive recovery property for fMEs.

[Fig. 5: Three ICE curves are colored black. The PD is the average of all ICE curves and is colored yellow. For each ICE curve, we have a single observation, visualized by the corresponding green dot. We compute the fME at each observation with a step size of 1, which results in the corresponding red dot. The fMEs are equivalent to the vertical difference between two points on the ICE curves. If the prediction function is linear in the feature of interest, the average of all fMEs is equivalent to the vertical difference between two points on the PD.]

Relation between Forward Marginal Effects, the Individual Conditional Expectation, and Partial Dependence
Given a data point x, the ICE of a feature set S corresponds to the prediction as a function of replaced values x*_S, where x_{-S} is kept constant:

ICE_S(x*_S) = f(x*_S, x_{-S})

The PD on a feature set S corresponds to the expectation of f(X) w.r.t. the marginal distribution of X_{-S}. It is estimated via Monte Carlo integration, where the draws x_{-S} correspond to the sample values:

PD_S(x_S) = E_{X_{-S}}[f(x_S, X_{-S})] ≈ (1/n) Σ_{i=1}^{n} f(x_S, x^{(i)}_{-S})

We can visually demonstrate that in the univariate case, the fME is equivalent to the vertical difference between two points on an ICE curve. However, the AME is only equivalent to the vertical difference between two points on the PD curve for linear prediction functions (see Fig. 5). We generalize this result to the multivariate fME and ICE, as well as the multivariate forward AME and PD (see Theorem 3 and Theorem 4 in Appendix B.2). Visually assessing changes in prediction due to a change in feature values is difficult or impossible in more than two dimensions. High-dimensional feature value changes therefore pose a natural advantage for fMEs as opposed to techniques such as the ICE, PD, or ALEs, which are mainly interpreted visually.
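The relation between fMEs, ICE curves, and the PD can be sketched numerically. The toy data and prediction function below are illustrative assumptions; the function contains an interaction but is linear in the feature of interest, so the AME should match the vertical difference between two points on the PD curve:

```python
def f(x1, x2):
    return x1 * x2 + x2          # interaction: ICE curves differ in slope

data = [(1.0, 1.0), (2.0, 3.0), (0.5, -1.0)]

def ice(i, z):
    # ICE curve of observation i: replace x1 by z, keep x2 (i.e., x_-S) fixed.
    _, x2 = data[i]
    return f(z, x2)

def pd(z):
    # PD curve: Monte Carlo average of the ICE curves over the sample.
    return sum(f(z, x2) for _, x2 in data) / len(data)

h = 2.0
# Each fME is the vertical difference between two points on an ICE curve.
fmes = [ice(i, data[i][0] + h) - ice(i, data[i][0]) for i in range(len(data))]
ame = sum(fmes) / len(fmes)
print(ame, pd(data[0][0] + h) - pd(data[0][0]))
```

For this function the two printed values coincide; for a prediction function that is non-linear in the feature of interest, they generally do not.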

Non-Linearity Measure
Although an fME represents the exact change in prediction and always accurately describes the movement on the prediction function, we lose information about the function's shape along the forward difference. It follows that when interpreting fMEs, we are at risk of misjudging the prediction function as piecewise linear. However, prediction functions created by ML algorithms are not only non-linear but also differ considerably in shape across the feature space. We therefore suggest augmenting the change in prediction with a non-linearity measure (NLM) that quantifies the deviation between the prediction function and a linear reference function. First, the fME tells us the change in prediction for pre-specified changes in feature values.
Then, the NLM tells us how accurately a linear effect resembles the change in prediction. Given only univariate changes in feature values, we may visually assess the non-linearity of the feature effect with an ICE curve. However, the NLM quantifies the non-linearity in a single metric, which can be utilized in an informative summary output of the prediction function. In Section 5.5, we estimate feature effects conditional on specific feature subspaces. The individual NLM values can be used to describe the feature subspace more accurately. Given changes in more than two features, visual interpretation techniques such as the ICE and PD are not applicable. In contrast, the NLM is defined in arbitrary dimensions.

Linear Reference Function
A natural choice for the linear reference function is the secant intersecting both points of the forward difference (see Fig. 6). Using the multivariate secant as the linear reference rests on the assumption that a multivariate change in feature values results from equally proportional changes in all features. Although this assumption is somewhat restrictive, it greatly simplifies the non-linearity assessment, as the secant is unique and easy to evaluate. The secant for a multivariate fME corresponds to Eq. (6):

g_{x, h_S}(t) = f(x) + t · fME_{x, h_S},  t ∈ [0, 1]   (6)

Fig. 7 visualizes the discrepancy between the prediction function and the secant along a two-dimensional fME. If the NLM indicates linearity, we can infer that if all individual feature changes are multiplied by a scalar t ∈ [0, 1], the fME changes proportionally by t as well.

Definition and Interpretation
Comparing the prediction function against the linear reference function along the fME is a general concept that can be implemented in various ways. One requires a normalized metric that indicates the degree of similarity between functions or sets of points. The Hausdorff (Belogay et al. 1997) and Fréchet (Alt and Godau 1995) distances are established metrics that originate in geometry. Another option is to integrate the absolute or squared deviation between both functions. These approaches have the common disadvantage of not being normalized, i.e., the degree of non-linearity is scale-dependent.

Fig. 7: A non-linear prediction function, the path along its surface, and the corresponding secant along a two-dimensional fME from point (−5, −5) to point (5, 5).
Molnar et al. (2020) compare non-linear function segments against linear models via the coefficient of determination R². In this case, R² indicates how well the linear reference function is able to explain the non-linear prediction function compared to the most uninformative baseline model, i.e., one that always predicts the mean value of the prediction function. As we do not have observed data points along the forward difference, points would need to be obtained through (quasi-)Monte Carlo sampling, whose error rates heavily depend on the number of sampled points. As both the fME and the linear reference function are evaluated along the same single path across the feature space, their deviation can be formulated as a line integral. Hence, we are able to extend the concept of R² to continuous integrals, comparing the integral of the squared deviation between the prediction function and the secant with the integral of the squared deviation between the prediction function and its mean value. The line integral is univariate and can be numerically approximated with various techniques such as Gaussian quadrature.
The parametrization of the path through the feature space is given by γ : [0, 1] → X, where γ(0) = x and γ(1) = (x_S + h_S, x_{-S}). The line integral of the squared deviation between the prediction function and the secant along the forward difference corresponds to:

∫₀¹ (f(γ(t)) − g_{x, h_S}(t))² dt

The integral of the squared deviation between the prediction function and the mean prediction is used as a baseline. The mean prediction is given by the integral of the prediction function along the forward difference, divided by the length of the path (as γ is linear, the constant path-length factor cancels in the ratio below):

f̄_{x, h_S} = ∫₀¹ f(γ(t)) dt

The NLM_{x, h_S} is defined as:

NLM_{x, h_S} = 1 − [∫₀¹ (f(γ(t)) − g_{x, h_S}(t))² dt] / [∫₀¹ (f(γ(t)) − f̄_{x, h_S})² dt]

The values of the NLM have an upper limit of 1 and indicate how well the secant can explain the prediction function, compared to the baseline model of using the mean prediction. For a value of 1, the prediction function is equivalent to the secant. A lower value indicates an increasing non-linearity of the prediction function. For negative values, the mean prediction predicts values on the prediction function better than the secant, i.e., the prediction function is highly non-linear. We suggest using 0 as a hard bound to indicate non-linearity and values on the interval ]0, 1[ as an optional soft bound.
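A numerical sketch of the NLM under these definitions: the path γ, the secant, and the mean prediction are evaluated with composite Simpson integration (a stand-in for the quadrature rules mentioned in the text), and the toy prediction function, points, and step sizes are illustrative assumptions:

```python
def simpson(fun, n=200):
    # Composite Simpson's rule on [0, 1] with an even number of intervals n.
    step = 1.0 / n
    total = fun(0.0) + fun(1.0)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * fun(k * step)
    return total * step / 3.0

def nlm(f, x, h):
    # Linear path gamma(t) = x + t * h from x to x + h.
    gamma = lambda t: [xj + t * hj for xj, hj in zip(x, h)]
    f0, f1 = f(gamma(0.0)), f(gamma(1.0))
    secant = lambda t: f0 + t * (f1 - f0)       # linear reference function
    mean_pred = simpson(lambda t: f(gamma(t)))  # path-length factors cancel
    sse_secant = simpson(lambda t: (f(gamma(t)) - secant(t)) ** 2)
    sse_mean = simpson(lambda t: (f(gamma(t)) - mean_pred) ** 2)
    return 1.0 - sse_secant / sse_mean

f = lambda x: x[0] ** 2 + x[1]
print(nlm(f, [0.0, 1.0], [1.0, 0.0]))   # quadratic segment: NLM < 1
print(nlm(f, [0.0, 1.0], [0.0, 1.0]))   # linear segment: NLM = 1
```

As expected, a step along the quadratic feature yields an NLM below 1, while a step along the linear feature yields an NLM of exactly 1.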
Instead of conditioning on pre-specified feature changes, we may also reverse the procedure and optimize the NLM by adjusting the step sizes.For instance, given a patient with certain health characteristics, we could search for the maximum step sizes in age and weight that correspond to an approximately linear fME on the disease risk, e.g., with an NLM ≥ 0.9.

Conditional Forward Marginal Effects on Feature Subspaces
It is desirable to summarize the feature effect in a single metric, similar to the parameter-focused interpretation of linear models. For instance, one is often interested in the expected ME, which can be estimated via the AME. However, averaging heterogeneous MEs to the AME is not globally representative of non-linear prediction functions such as the ones created by ML algorithms. A heterogeneous distribution of MEs requires a more local evaluation. As opposed to conditioning on feature values in the case of MERs (local), we further suggest conditioning on specific feature subspaces (semi-global). We can partition the feature space into mutually exclusive subspaces (see Fig. 8) where MEs are more homogeneous. In such a feature subspace, we can average the fMEs to a conditional AME (cAME), which is an estimate of the conditional ME (cME). The feature space partitioning, including the cAMEs, serves as a compact semi-global summary of the prediction function or as a descriptor of population subgroups. Subgroup analysis is of interest in a multitude of application contexts (Atzmueller 2015). Both the cME and the cAME are conditional on the corresponding feature subspace.

Partitioning the Feature Space
Decision tree learning is an ideal scheme to partition the entire feature space into mutually exclusive subspaces. Growing a tree by global optimization poses considerable computational difficulties and corresponds to an NP-complete problem (Norouzi et al. 2015). Recent developments in computer science and engineering can be leveraged to revisit global decision tree optimization from a different perspective; e.g., Bertsimas and Dunn (2017) explore mixed-integer optimization (which has seen speedups in the billion-factor range over the last decades) to find globally optimal decision trees. To reduce computational complexity, the established way (which is also commonly available through many software implementations) is recursive partitioning (RP), which greedily optimizes an objective function for each tree node.
Fig. 8: An exemplary three-dimensional feature space is partitioned into four hyperrectangles of different sizes.

Our goal is to partition the entire feature space into hyperrectangles such that the variance of MEs on each subspace is minimized.
Over the last decades, a large variety of RP methods has been proposed (Loh 2014), with no gold standard having emerged to date. In principle, any RP method that is able to process continuous targets can be used to find cAMEs, e.g., classification and regression trees (CART) (Breiman et al. 1984; Hastie et al. 2001), one of the most popular approaches. However, trees have been demonstrated to be notoriously unstable w.r.t. perturbations in the input data (Zhou et al. 2018; Last et al. 2002). Tree ensembles, such as random forests (Breiman 2001a), reduce variance but lose the interpretability of a single tree structure. Exchanging splits along a single path results in structurally different but logically equivalent trees (Turney 1995). It follows that two structurally very distinct trees can create the same or similar subspaces. We are therefore not interested in the variance of the tree itself, but in the subspaces it creates.
Subspace instabilities affect the interpretation, e.g., when trying to find population subgroups. One should therefore strive to stabilize the found subspaces, e.g., by stabilizing the splits. A branch of RP methods incorporates statistical theory into the split procedure. Variants include conditional inference trees (CTREE) (Hothorn et al. 2006), which use a permutation test to find statistically significant splits; model-based recursive partitioning (MOB) (Zeileis et al. 2008), which fits node models and tests the instability of the model parameters w.r.t. partitioning the data; and approximation trees (Zhou et al. 2018), which generate artificial samples for significance testing of tree splits. Seibold et al. (2016) use MOB to find patient subgroups with similar treatment effects in a medical context. The variance and instability of decision trees partly stem from binary splits, as a decision higher up cascades through the entire tree and results in different splits lower down the tree (Hastie et al. 2001). Using multiway trees, which also partition the entire feature space, would therefore improve stability. However, multiway splits are associated with a considerable increase in computational complexity and are therefore often discarded in favor of binary splitting (Zeileis et al. 2008). For the remainder of the paper, we use CTREE to compute cAMEs.
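The core of any such RP scheme on fMEs is the search for a split that makes the fMEs within each child subspace as homogeneous as possible. A bare-bones sketch of a single greedy split step, minimizing the pooled SSE of the fMEs (a stand-in for the CART/CTREE machinery; the feature values and fMEs are illustrative):

```python
def sse(values):
    # Sum of squared deviations of the fMEs from their mean.
    if not values:
        return 0.0
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values)

def best_split(feature_values, fmes):
    """Find the split point of one feature minimizing the pooled SSE of the
    fMEs in the two child subspaces; report each child's cAME."""
    pairs = sorted(zip(feature_values, fmes))
    best = None
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                      # no split between tied values
        left = [p[1] for p in pairs[:i]]
        right = [p[1] for p in pairs[i:]]
        score = sse(left) + sse(right)
        if best is None or score < best[0]:
            split = (pairs[i - 1][0] + pairs[i][0]) / 2.0
            best = (score, split,
                    sum(left) / len(left),    # cAME of left subspace
                    sum(right) / len(right))  # cAME of right subspace
    return best

x = [0.0, 1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
fmes = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9]
score, split, came_left, came_right = best_split(x, fmes)
print(split, came_left, came_right)
```

Applying this step recursively to each child node yields a CART-style tree; CTREE additionally gates each split by a permutation test, which this sketch omits.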

Interpretation and Confidence Intervals
As the SSE, and therefore also the variance and standard deviation (SD), of the fMEs is highly scale-dependent, the regression tree can be interpreted via the coefficient of variation (CoV). The CoV, also referred to as the relative SD, corresponds to the SD divided by the absolute mean and is scale-invariant. In our case, it is the SD of the fMEs divided by the absolute cAME inside each subspace. We strive for the lowest value possible, which corresponds to the highest homogeneity of fMEs. Given a cME estimate, it is desirable to estimate the conditional NLM (cNLM) on the corresponding subspace. It is trivial to include a conditional average NLM (cANLM) in the feature subspace summary, i.e., we simply average all NLMs on the found subspaces. The cANLM gives us an estimate of the expected non-linearity of the prediction function for the given movements along the feature space, inside the specific feature subspace.
Fig. 9 visualizes an exemplary feature space partitioning for the Boston housing data on the fMEs resulting from increasing the average number of rooms per dwelling (feature rm) by 1. The global AME is 5.43 with a CoV of 0.68. The leftmost leaf node contains 113 observations with a cAME of 6.79 (CoV = 0.31), i.e., the homogeneity of the fMEs is considerably higher than on the entire feature space. The average linearity of the fMEs on this subspace is high (cANLM = 0.93 with CoV = 0.36).

Fig. 9: We use CTREE to recursively partition the feature space so that the homogeneity of fMEs is maximized on each subspace. The average of the fMEs on each subspace is the corresponding cAME. For example, the leftmost leaf node contains 113 observations with a cAME of 6.79 and a cANLM of 0.93, i.e., the average linearity of the fMEs on this subspace is high. The respective CoV values indicate a high homogeneity of the fME and NLM values. The leftmost leaf node is summarized in Table 2.

Table 2: One of 4 leaf nodes (the leftmost) of a decision tree (visualized in Fig. 9) that partitions the feature space into mutually exclusive subspaces where MEs are more homogeneous compared to the entire feature space. The features whose range is restricted to span the subspace are emphasized.

Given the global ranges of all features, every decision rule can be transformed into a compact summary of the feature space, such as in Table 2. The goal of finding feature subspaces where the feature effect is homogeneous essentially corresponds to finding population subgroups. Given the subspace, we may reverse the analysis as follows. If we sampled an observation on this feature subspace (which may be quite large in hypervolume and comprise many different possible data points), what is the expected change in predicted outcome given the specified changes in feature values, what is the expected non-linearity of this change, and how confident are we in our estimates? The first question is addressed by the cAME, the second by the cANLM, and the third by the SD and the number of observations: lower SD values and a larger number of observations located in the subspace increase confidence in the estimates. Although we do not specify a distribution of the underlying fMEs or NLMs, constructing a confidence interval (CI) is possible via the central limit theorem. As the cAME and cANLM are sample averages of all fME and NLM values on each subspace, they both approximately follow a t-distribution (as the SD is estimated) for large sample sizes. Given a subspace X_[g] that contains n_[g] observations, the mean values cAME_[g] and cANLM_[g], the SD values SD_fME,[g] and SD_NLM,[g], the confidence level α, and the value of the t-statistic with n_[g] − 1 degrees of freedom at the 1 − α/2 percentile (t_{1−α/2, n_[g]−1}), the CIs correspond to:

CI_cAME = [cAME_[g] − t_{1−α/2, n_[g]−1} · SD_fME,[g] / √n_[g],  cAME_[g] + t_{1−α/2, n_[g]−1} · SD_fME,[g] / √n_[g]]

CI_cANLM = [cANLM_[g] − t_{1−α/2, n_[g]−1} · SD_NLM,[g] / √n_[g],  cANLM_[g] + t_{1−α/2, n_[g]−1} · SD_NLM,[g] / √n_[g]]

One option to ensure that the lower sample size threshold for CIs is met is to specify a minimum node size for the computation of cAMEs, i.e., not growing the tree too large. When estimating the conditional feature effect for a hypothetical observation, it is important to avoid model extrapolations of the tree. We may create a hypothetical observation, use the tree to predict the cME estimate, and express our confidence in the estimate with a CI. However, the feature values of the observation may be located outside of the distribution of observations that were used to train the decision tree and therefore result in unreliable predictions.
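The subspace summary statistics above can be sketched as follows. The fME values are illustrative, and `NormalDist` supplies a large-sample stand-in for the t-quantile (for small node sizes, the z critical value slightly narrows the interval relative to the t-based CI in the text):

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def subspace_summary(fmes, alpha=0.05):
    """cAME, coefficient of variation, and an approximate CI for the fMEs
    inside one feature subspace."""
    n = len(fmes)
    came = mean(fmes)
    sd = stdev(fmes)                 # sample SD of the fMEs
    cov = sd / abs(came)             # scale-invariant homogeneity measure
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * sd / sqrt(n)          # replace z by t_{1-alpha/2, n-1} for small n
    return came, cov, (came - half, came + half)

# Illustrative fMEs of the observations falling into one subspace.
fmes = [5.8, 6.4, 7.1, 6.9, 7.4, 6.2, 6.8, 7.0]
came, cov, ci = subspace_summary(fmes)
print(came, cov, ci)
```

The analogous call on the NLM values of the same subspace yields the cANLM and its CI.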
Comparison to LIME

LIME (Ribeiro et al. 2016), one of the state-of-the-art model-agnostic feature effect methods (see Section 3), most closely resembles the interpretation given by an fME. It also serves as a local technique, explaining the model for a single observation. LIME samples instances, predicts, and weighs the predictions by the instances' proximity to the instance of interest using a kernel function. Afterwards, an interpretable surrogate model is trained on the weighted predictions. The authors choose a sparse linear model, whose beta coefficients provide an interpretation similar to the fME.
There is, however, a fundamental difference between both approaches. The fME works directly on the prediction function, while LIME trains a local surrogate model. The latter is therefore affected by an additional layer of complexity and uncertainty. The authors suggest using LASSO regression, which requires choosing a regularization constant. Furthermore, one must select a similarity kernel defined on a distance function with a width parameter. The model interpretation is therefore fundamentally determined by multiple parameters. Moreover, certain surrogate models are incapable of explaining certain model behaviors and may mislead the practitioner into trusting an unfaithful interpretation (Ribeiro et al. 2016). A linear surrogate model may not be able to describe extreme non-linearities of the prediction function, even within a single locality of the feature space. In contrast, the only parameters for the fME are the features and the step sizes. Without question, the choice of parameters for fMEs also significantly affects the interpretation. However, we argue that their impact is much clearer than in LIME; e.g., a change in a feature such as age is much more meaningful than a different width parameter in LIME. In fact, we argue that the motivation behind both approaches is fundamentally different. For fMEs, we start with a meaningful interpretation concept in mind, e.g., we may be interested in the combined effects of increasing age and weight on the disease risk. For LIME, we start with a single observation, trying to distill the black box model's behavior within this specific locality into a surrogate model.
In addition to the sensitivity of results regarding parameter choices (Slack et al. 2019), LIME is notoriously unstable even with fixed parameters. Zhou et al. (2021) note that repeated runs of the same explanation algorithm on the same model for the same observation result in different model explanations, and they suggest significance testing as a remedy. In contrast, fMEs with fixed parameters are deterministic.
Furthermore, the authors of LIME note that the faithfulness of the local surrogate may be diminished by extreme non-linearities of the model, even within the locality of the instance of interest. The exact same critique holds for the fME (see Section 5.4). Hence, we introduce the NLM, which essentially corresponds to a measure of faithfulness of the fME and whose concept can potentially be used for other methods as well. One could also use the coefficient of determination R² to measure the goodness-of-fit of the linear surrogate to the pseudo sample in LIME. However, we argue that the goodness-of-fit to a highly uncertain pseudo sample is a questionable way of measuring an explanation's faithfulness.
The authors of LIME note that insights into the global workings of the model may be gained by evaluating multiple local explanations. As time constraints usually prevent evaluating all instances, an algorithm suggests a subset of representative instances. Although this approach avoids the issue of misrepresenting global effects by averaging local explanations, it also misses the opportunity to provide meaningful semi-global explanations. This is where the cAME comes into play. It is motivated by the goal of aggregating local interpretations while staying faithful to the underlying predictive model.

Simulations
Here, we present three simulation scenarios to highlight the interplay between fMEs, the NLM, and the cAME. In all following sections, we use Simpson's 3/8 rule for the computation of the NLM and CTREE to compute cAMEs.

Univariate Data without Noise
We start with a univariate scenario without random noise and work directly with the data generating process (DGP). This way, we can evaluate how introducing noise affects the information gained from fMEs in the subsequent simulation. We simulate a single feature x, uniformly distributed on [−5, 5], and define f as: The data is visualized in Fig. 10. An fME with step size h = 2 is computed for each observation. We use CTREE on the fMEs to find cAMEs. Subsequently, all observations' NLM values are averaged to cANLM values on the subspaces of the cAMEs. Our computations are visualized in Fig. 11. In the univariate case, we see a direct relationship between the shape of the DGP and the fMEs and NLM values. The NLM has ramifications for the interpretation of the fMEs. For instance, for x = −3, the fME of increasing x by 2 units increases the predicted target value by 2 units, and we can conclude that the same holds proportionally for feature value changes of smaller magnitudes, e.g., a change of 1 unit results in an fME of 1, etc. On the contrary, given an observation x = 1, the NLM indicates considerable non-linearity. As a result, we cannot draw conclusions about this region of the feature space regarding fMEs with step sizes smaller than 2 units.

Univariate Data with Noise
We proceed to add random noise ε ∼ N(0, 1) on top of the data and tune the regularization and sigma parameters of a support vector machine (SVM) with a radial basis function kernel (see Fig. 10). As we now employ a predictive model, we must avoid potential model extrapolations. The forward location of all points with x > 3 falls outside the range of the training data. After removing all extrapolation points, we evaluate the fMEs and NLMs of all observations with x ∈ [−5, 3] (see Fig. 12). In this case, we can visually assess that the predictions of the SVM resemble the DGP but also factor in noise (see Fig. 10); e.g., the SVM prediction function is non-linear in linear regions of the DGP, which affects the fMEs and NLMs. This demonstrates that fMEs can only be used to explain the DGP if the model describes it accurately.

We next augment the univariate data with one additional feature in order to empirically evaluate the additive recovery property of the fME (see Appendix B.1). Due to potential model extrapolations, we only make use of the DGP. In the first example, the DGP corresponds to a supplementary additively linked feature x_2: In the second example, the DGP corresponds to a supplementary multiplicatively linked feature x_2, i.e., we have a pure interaction: The fMEs and NLM values for both DGPs are given in Fig. 13. For the additive DGP, given the value of x_1, moving in the x_2 direction does not influence the fMEs due to the additive recovery property. As a result, we receive the same fMEs with an additively linked feature x_2 as without it (as long as the feature change does not occur in x_2). For the multiplicative DGP, the fMEs now vary for a given x_1 value, even though the feature change only occurs in x_1. The NLM values are affected by the presence of both an additively linked and a multiplicatively linked feature x_2, even though the feature change only occurs in x_1. As opposed to the additive DGP, the cAME tree makes use of x_2 as a split variable for the multiplicative DGP.
Bivariate Data with Bivariate Feature Change

In the last simulation example, we demonstrate bivariate fMEs and the corresponding NLM. We use the same DGPs as for the univariate feature change. The fMEs and NLM values are given in Fig. 14. As opposed to the univariate feature change for additively linked data, the fME values now also vary in the x_2 direction for a given x_1 value due to the simultaneous change in x_2. The NLM indicates linearity for a multitude of observations, given both the additive and the multiplicative DGP. For these observations, we can infer that multiplying both step sizes by a value on the interval [0, 1] results in a proportionally reduced fME.

Application Workflow and Applied Example
We first present a structured application workflow (see Fig. 15) that incorporates all the theory presented in the preceding sections and demonstrate it with the Boston housing data (see Table 3 for a description). We start by tuning the regularization and sigma parameters of an SVM with a radial basis function kernel and proceed to evaluate feature effects on the predicted median housing value per census tract in Boston. The housing prices reflect the market demand, which in turn represents the attractiveness of living in the city or a certain district. For instance, the local authorities might be interested in the effects of lowering the crime rate by 5% (e.g., due to a larger police force) on the housing market. In order to avoid model extrapolations, we exclude all points that are located outside of the multivariate envelope of the training data. We start with the first-order effect (see Fig. 16a). For the majority of observations, decreasing the crime rate by 5% considerably increases the predicted median housing value by several thousand US dollars (USD). For a fraction of observations, the predicted change in median housing value is close to zero or below zero. The effect is approximately linear for all observations. Next, we evaluate potential interactions. In Fig. 16b, we visualize the interaction between the features crim and nox with the step sizes h_crim = −5 and h_nox = −0.1, i.e., a decrease in the crime rate by 5% and a simultaneous decrease in the nitrogen oxides concentration by 0.1 parts per 10 million (range between 0.385 and 0.871). For the majority of observations, there is a considerable increase in the median housing value, which is larger than for the isolated decrease in the crime rate. We proceed to three-way interactions. In Fig. 16c, we visualize the kernel density estimate of the distribution of fMEs and NLM values for the step sizes h_crim = −5, h_nox = −0.1, and h_ptratio = −1. For a simultaneous threefold decrease of the given magnitudes, there is a large increase in predicted median housing value for the majority of observations. The threefold interaction demonstrates an advantage of fMEs over competing techniques such as ICEs or the PD, which are difficult to interpret in more than two dimensions. Most three-way fMEs have a high degree of linearity with an NLM close to 1.
We proceed to evaluate cAMEs with the step size h_crim = −5. In Fig. 17, we recursively partition the feature space with CTREE to find feature subspaces with a high fME homogeneity. In this example, the fMEs on the feature space are already rather homogeneous to begin with (CoV = 0.32), as can be seen from the large cluster of fMEs in Fig. 16a. The minimum node size is set to 30 so we can compute CIs. We find two subspaces with a lower fME CoV value than the root node and one region with a higher CoV value. This demonstrates that, although the aggregate impurity of the fMEs can be reduced via recursive partitioning, we may still find subspaces with a larger impurity than the root node. Lowering the crime rate by 5% has the smallest effect in districts with high crime rates. For lower crime rates, the feature effect is slightly larger in areas with a large nitrogen oxides concentration.

Fig. 15: Application workflow for forward marginal effects.
1. Train and tune a predictive model.
2. Based on the application context, choose evaluation points D, the features of interest S, and the step sizes h_S.
3. Check whether any x^(i) or (x^(i)_S + h_S, x^(i)_{-S}) are subject to model extrapolations. See Appendix A.2 for possible options.
4. Either modify the step sizes so no points fall inside areas of extrapolation, or remove the ones that do.
5. Compute fMEs for the selected observations and the chosen step sizes.
6. Compute the NLM for every computed fME.
In a final step, we demonstrate the construction of CIs. For an observation that falls into the rightmost leaf node, the estimated cME is 0.86 (given by the cAME), and the estimated cNLM is 0.99 (given by the cANLM). The 95% CIs correspond to CI_cAME,0.95 = [0.65, 1.07] and CI_cANLM,0.95 = [0.98, 1].

Conclusion
This research paper intends both to provide a comprehensive review of the theory of MEs and to refine it for the application to non-linear prediction functions, e.g., in the context of ML applications. We argue to abandon dMEs in favor of more accurate fMEs. To estimate multivariate feature effects, we extend the existing definition of univariate changes in feature values to multiple dimensions. Furthermore, we introduce an NLM for fMEs, which indicates the similarity between the prediction function and the intersecting linear secant. Due to the complexity and non-linearity of ML models, we suggest focusing on semi-global instead of global feature effects. We propose a means of estimating cMEs via cAMEs through partitioning the feature space. The resulting subspaces are augmented with cANLM values and CIs in order to obtain a compact summary of the prediction function across feature subspaces. Furthermore, we propose an observation-wise categorical ME. In the Appendix, we provide proofs of the additive recovery property of fMEs and their relation to the ICE and PD.
Given arbitrary predictive models, this new class of fMEs can be used to address questions about a model's behavior such as the following: Given pre-specified changes in one or multiple feature values, what is the expected change in predicted outcome? What is the change in prediction for an average observation? What is the change in prediction for a pre-specified observation? What are population subgroups with homogeneous expected effects? What is the degree of non-linearity in these effects? What is our confidence in these estimates? What is the expected change in prediction when switching observed categorical feature values to a reference category?
However, there are certain limitations as well. Molnar et al. (2021) discuss various general pitfalls of model-agnostic interpretation methods, e.g., model extrapolations, estimation uncertainty, or unjustified causal interpretations. Model-agnostic methods in general are favorable tools to explain model behavior but often fail to explain the underlying DGP, as the quality of the explanations relies on the closeness between model and reality.
Throughout the manuscript, we noted various directions that may be explored in future work. These include quantifying extrapolation risk for the selection of step sizes, stabilizing the found subspaces for cAMEs, and quantifying the subspaces' uncertainty. The motivation behind an interpretation fundamentally determines whether a method is used as an effect or an importance method. An effect can also be considered an importance, i.e., features with larger effects on the predicted outcome are considered more important to the model. We formulated fMEs as a method to determine feature effects. However, finite differences are already widely used for importance computations in SA (see Section 2), e.g., the elementary effects / Morris method or derivative-based global sensitivity measures. One could devise ways to utilize fMEs for importance computations as well. For instance, although we formulated the cME conditional on a feature subspace as a semi-global effect, it can also be interpreted as a semi-global feature importance.
Many disciplines that have been relying on traditional statistical models are starting to utilize the predictive power of modern ML models. These disciplines are used to interpretations in terms of MEs, the AME, MEM, or MER. With this research paper, we wish to bridge the gap between the established but restricted practice of using MEs with traditional statistical models and the more flexible and capable approach of interpreting modern ML models with this new class of fMEs.

A.1 Decomposition of the Prediction Function
The prediction function to be analyzed may be very complex or even a black box. However, there are multiple ways to decompose the prediction function into a sum of components of increasing order. Although the goal of MEs is not to decompose the prediction function, it is convenient to either regard the prediction function as an additive decomposition or to keep in mind that it may be decomposed into one. An additive decomposition of the prediction function has the following general form:

f(\mathbf{x}) = g_0 + \sum_{j=1}^{p} g_j(x_j) + \sum_{j < k} g_{jk}(x_j, x_k) + \dots + g_{1, \dots, p}(x_1, \dots, x_p)    (9)

In SA, the additive decomposition is typically referred to as a high-dimensional model representation (HDMR) or ANOVA-HDMR (Saltelli et al. 2008). In IML, it is often called the functional ANOVA decomposition (Hooker 2004b). Various approaches exist to estimate Eq. (9) or a truncated variant, e.g., the PD (Friedman 2001), random sampling HDMR (Li et al. 2006), or accumulated local effects (Apley and Zhu 2020). Further assumptions are needed to make the decomposition unique, e.g., feature independence (Chastaing et al. 2011).
For instance, we may recursively estimate Eq. (9) as follows:

g_S(\mathbf{x}_S) = \mathbb{E}_{\mathbf{X}_{-S}}\left[f(\mathbf{X}) \mid \mathbf{X}_S = \mathbf{x}_S\right] - \sum_{K \subset S} g_K(\mathbf{x}_K)

where \mathbb{E}_{\mathbf{X}_{-S}}[f(\mathbf{X}) \mid \mathbf{X}_S] corresponds to the minimum L2 loss approximation of f(\mathbf{X}) given only the features \mathbf{X}_S (Saltelli et al. 2008) and is typically referred to as the PD in ML terms.
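As an illustrative sketch of this recursion (not code from the paper), the first-order components can be estimated as the empirical PD minus the mean prediction; the prediction function below is a hypothetical linear stand-in:

```python
import numpy as np

def partial_dependence(predict, X, j, grid):
    """Empirical PD_j(v) = (1/n) * sum_i f(v, x_{-j}^(i)),
    the sample analogue of E_{X_{-j}}[f(X) | X_j = v]."""
    pd = np.empty(len(grid))
    for k, v in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, j] = v  # fix feature j at the grid value for all rows
        pd[k] = predict(X_mod).mean()
    return pd

def first_order_component(predict, X, j, grid):
    """g_j = PD_j - g_0: the first step of the recursion
    (unique under a feature-independence assumption)."""
    g0 = predict(X).mean()
    return partial_dependence(predict, X, j, grid) - g0

# Hypothetical additive model: f(x) = 2*x1 + 3*x2
predict = lambda X: 2 * X[:, 0] + 3 * X[:, 1]
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
grid = np.linspace(-2, 2, 5)
g1 = first_order_component(predict, X, 0, grid)  # ≈ 2*grid - 2*mean(x1)
```

Higher-order components follow by subtracting all lower-order terms from the PD of the corresponding feature subset.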
A.2 Model Extrapolation

King and Zeng (2006) define extrapolation as predicting outside of the convex hull of the training data. The authors demonstrate that the task of determining whether a point is located inside the convex hull can be solved efficiently using linear programming. However, the convex hull may comprise many empty areas without training observations, especially in the case of correlated and high-dimensional data. Therefore, it seems plausible to define model extrapolation differently, e.g., as predictions in areas of the feature space with a low density of training points. Hooker (2004a) summarizes two main predicaments of model extrapolations. First, the model creates predictions which do not accurately reflect the target distribution given the features. Second, the predictions are subject to a high variance. Many model-agnostic techniques are subject to model extrapolation risks (Molnar et al. 2021). Hooker (2007) warns against model extrapolations when applying the functional ANOVA decomposition. Hooker et al. (2021) urge not to permute feature values to compute feature importance values. It is important to note that this issue highly depends on the behavior of the chosen model. The issue of determining whether the model extrapolates essentially boils down to quantifying the prediction uncertainty. Some models might diverge considerably from a scenario where they would have been supplied with enough training data (high prediction uncertainty), while other models might be relatively robust against such issues (low prediction uncertainty). Although MEs based on model extrapolations are still correct in terms of the model output, they might not represent the underlying DGP in an accurate way. Therefore, it is important to take into account (and preferably avoid) potential model extrapolations when selecting feature values to compute fMEs. For some models, built-in measures exist to quantify the prediction uncertainty, e.g., the proximity measure for tree ensembles, which counts how often a pair of points is located in the same leaf node across all trees of the ensemble (Hastie et al.
2001). The same can be done for the pairwise proximity between points in the training and the test set. For instance, given n training observations and a test observation x, we can create an (n × 1) vector of proximities which can be used to detect model extrapolations. However, it is desirable to detect model extrapolations via auxiliary extrapolation risk metrics (AERM) which are independent of the trained model. Detecting an EP is similar in concept to the detection of outliers. Although a unified definition of outliers does not exist, they are generally considered to differ so much from other observations as to suspect they were generated by a different mechanism (Hawkins 1980). We can therefore consider an outlier to be drawn from a different distribution than the training data (and one that does not overlap with it), which suits our definition of EPs. In clustering, outliers are often found using local density-based outlier scores such as local outlier probabilities (LOP) (Kriegel et al. 2009). Based on the nearest data points, LOP provides an interpretable score on the scale [0, 1], indicating the probability of a point being an outlier. However, clustering techniques such as LOP are often based on the assumption that the data exhibits a cluster structure or on assumptions about the clusters' distributions. In theory, one could use various other outlier detection (also referred to as anomaly detection) mechanisms for extrapolation detection, e.g., isolation forests (Liu et al. 2012).

Hooker (2004a) proposes a statistical test to classify a point as an EP or non-EP. It tests whether a point was more likely to be drawn from the data distribution (non-EP) or the uniform distribution (EP). The uniform distribution is used as an uninformative baseline distribution. The AERM R(x) corresponds to:

R(\mathbf{x}) = \frac{U(\mathbf{x})}{U(\mathbf{x}) + P(\mathbf{x})}    (10)

with U(x) being the density function of the uniform distribution and P(x) the density function of the data distribution. R(x) has a range of [0, 1], with 0 indicating the lowest and 1 the highest extrapolation risk. R(x) > 0.5 indicates extrapolation. As the support of U(x), we may either choose the recommendations of an application domain expert or the observed feature ranges. Eq. (10) cannot be computed directly, as the density of the training data is unknown. If x falls outside the multivariate envelope of the training data, it is plausible to set R(x) to 1. We may estimate Eq. (10) by creating a binary classification problem on a dataset augmented with uniform Monte-Carlo samples (Hooker 2004a). The training data is labeled as the foreground class. Next, artificial data points are sampled from a uniform distribution and labeled as the background class. A predictive model is trained on the augmented dataset and predicts for a given point whether it is more probable that it was drawn from the data distribution or the uniform distribution. We term this approach Monte-Carlo extrapolation classification (MCEC). Fig.
18 visualizes the concept behind MCEC for a classification tree. Consider two independent standard normally distributed features. We augment the training data with a uniform Monte-Carlo sample with support [min(x_1), max(x_1)] × [min(x_2), max(x_2)] and use CART to partition the feature space into extrapolation and non-extrapolation areas. Some training points are located outside of the center rectangles, in a low-density region of the bivariate normal distribution. As such, it is correct to be cautious when evaluating predictions in this area, even if a point was drawn from the training data. Hooker (2004a) argues that in high-dimensional settings, the Monte-Carlo sample will leave many areas of the feature space unoccupied, which results in poor classification performance. Classification performance may be boosted by directly utilizing distributional information about the uniform distribution instead of a Monte-Carlo sample. This technique, termed confidence and extrapolation representation trees (CERT), exploits this distributional information. For the tree growing and pruning strategy, CERT uses a mixture of both CART (e.g., splitting based on the Gini index) and C4.5 (Quinlan 1993) (e.g., missing values and surrogate splits). Apart from letting us directly supply the classification tree with distributional information instead of data, its interpretability is advantageous. The tree partitions the entire feature space at once into hyperrectangles that indicate extrapolation or non-extrapolation areas. Hooker (2004a) argues that CERT provides a markedly lower misclassification rate compared to using MCEC with a classification tree. However, it is unclear whether this advantage holds for other classification algorithms.
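The MCEC procedure can be sketched as follows; this is a hypothetical minimal implementation (data and parameter choices are illustrative assumptions), using a CART-style tree as the probabilistic classifier:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X_train = rng.normal(size=(500, 2))  # foreground class: data distribution

# Background class: uniform Monte-Carlo sample on the observed feature ranges
lo, hi = X_train.min(axis=0), X_train.max(axis=0)
X_unif = rng.uniform(lo, hi, size=(500, 2))

X_aug = np.vstack([X_train, X_unif])
y_aug = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = background

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_aug, y_aug)

def extrapolation_risk(x):
    """Estimated R(x): probability that x stems from the uniform
    background rather than the data distribution; > 0.5 flags an EP."""
    return tree.predict_proba(np.asarray(x, dtype=float).reshape(1, -1))[0, 1]
```

With balanced class sizes, the classifier's predicted background probability is a direct estimate of Eq. (10); the tree's leaves form the hyperrectangular extrapolation and non-extrapolation areas shown in Fig. 18.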

B.1 Additive Recovery
We provide several proofs on additive recovery based on a prediction function in additive form. Any prediction function can be decomposed into a sum of effect terms of various orders (see Appendix A.1). The sum of effect terms of a feature set K is denoted by Θ_K(x_K). For notational simplicity, the union {j} ∪ K of the j-th feature index and the index set K is denoted by {j, K}. The corresponding sum of effect terms is denoted by Θ_{j,K}(x_j, x_K).
Theorem 1 (Additive Recovery of Finite Difference) An FD w.r.t. x_j only recovers terms that depend on x_j and no terms that exclusively depend on x_{-j}.
Proof. Consider a prediction function f that consists of a sum, including the main effect of x_j, denoted by g_{j}(x_j), a sum of higher-order terms (interactions) between x_j and other features x_K, denoted by Θ_{j,K}(x_j, x_K), and terms that depend on the remaining features x_{-{j,K}}, denoted by Θ_{-{j,K}}(x_{-{j,K}}):

f(x) = g_{j}(x_j) + Θ_{j,K}(x_j, x_K) + Θ_{-{j,K}}(x_{-{j,K}})

It follows that the FD of predictions corresponds to a function that only contains terms depending on x_j, i.e., it locally recovers the relevant terms on the interval [x_j + a, x_j + b]:

FD = f(x_j + a, x_{-j}) − f(x_j + b, x_{-j}) = g_{j}(x_j + a) − g_{j}(x_j + b) + Θ_{j,K}(x_j + a, x_K) − Θ_{j,K}(x_j + b, x_K)

The terms Θ_{-{j,K}}(x_{-{j,K}}) cancel.
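A quick numeric check of Theorem 1's additive-recovery property, using an assumed example function (not from the paper) with a main effect of x_1, an x_1:x_2 interaction, and a term depending only on x_3:

```python
import numpy as np

# Assumed example: g_1(x1) = x1^2, interaction x1*x2, and exp(x3),
# which depends only on the remaining feature and must cancel in the FD
def f(x1, x2, x3):
    return x1 ** 2 + x1 * x2 + np.exp(x3)

def forward_difference(x1, x2, x3, a, b):
    return f(x1 + a, x2, x3) - f(x1 + b, x2, x3)

# Identical for any value of x3: the exp(x3) term drops out
fd_low = forward_difference(1.0, 2.0, x3=0.0, a=1.0, b=0.0)
fd_high = forward_difference(1.0, 2.0, x3=5.0, a=1.0, b=0.0)
# fd = (4 + 4) - (1 + 2) = 5 in both cases
```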
The univariate fME w.r.t. x_j likewise only recovers terms that depend on x_j.

Proof. Consider a prediction function f as in Theorem 1: f(x) = g_{j}(x_j) + Θ_{j,K}(x_j, x_K) + Θ_{-{j,K}}(x_{-{j,K}}). The FD w.r.t. x_j is equivalent to the fME w.r.t. x_j with a = h_j and b = 0. Using Theorem 1, it follows that:

fME_{x, h_j} = g_{j}(x_j + h_j) − g_{j}(x_j) + Θ_{j,K}(x_j + h_j, x_K) − Θ_{j,K}(x_j, x_K)

Theorem 2 (Additive Recovery of Multivariate Forward Marginal Effect) The multivariate fME w.r.t. x_S only recovers terms that depend on x_S and no terms that exclusively depend on x_{-S}.
Proof. Consider a feature set S. The power set of S excluding the empty set is denoted by P* = P(S) \ ∅. The prediction function f consists of a sum, including the sum of effects of all subsets of features K ∈ P*, denoted by Σ_{K∈P*} g_K(x_K), and a sum of terms that depend on the remaining features, denoted by Θ_{-S}(x_{-S}):

f(x) = Σ_{K∈P*} g_K(x_K) + Θ_{-S}(x_{-S})

The terms Θ_{-S}(x_{-S}) cancel in the forward difference, so that only terms depending on x_S remain.

B.2 Relation between Marginal Effects, the Individual Conditional Expectation, and Partial Dependence

Theorem 3 (Equivalence between Forward Marginal Effect and Forward Difference of Individual Conditional Expectation) The fME with step size h_S is equivalent to the forward difference with step size h_S between two locations on the ICE.

Proof.

fME_{x, h_S} = f(x_S + h_S, x_{-S}) − f(x) = ICE_{x,S}(x_S + h_S) − ICE_{x,S}(x_S)

Theorem 4 (Equivalence between Average Marginal Effect and Forward Difference of Partial Dependence for Linear Prediction Functions) The AME with step size h_S is equivalent to the forward difference with step size h_S between two locations on the PD for prediction functions that are linear in x_S.
Proof. If f is linear in x_S, the forward difference w.r.t. x_S does not depend on the location at which it is evaluated:

f(x_S^{(i)} + h_S, x_{-S}) − f(x_S^{(i)}, x_{-S}) = f(x_S + h_S, x_{-S}) − f(x_S, x_{-S})   ∀ i ∈ {1, ..., n}, x_S, h_S ∈ ×_{j∈S} X_j

Averaging this forward difference over the observed values x_{-S}^{(i)} therefore yields the same result for the AME (where each observation is shifted from its own x_S^{(i)}) and for the forward difference of the PD (where all observations are shifted between the same two locations x_S and x_S + h_S).
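Both equivalences are easy to verify numerically for an assumed prediction function that is linear in x_S (a sketch under that assumption, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
predict = lambda X: 2.0 * X[:, 0] + 0.5 * X[:, 1]  # linear in x_S = {x_1}
h = 1.5

# AME: average of fMEs, each observation shifted from its own x_1^(i)
X_shift = X.copy()
X_shift[:, 0] += h
ame = (predict(X_shift) - predict(X)).mean()

# Forward difference of the PD of x_1 between two fixed locations
def pd(v):
    X_mod = X.copy()
    X_mod[:, 0] = v
    return predict(X_mod).mean()

pd_fd = pd(0.0 + h) - pd(0.0)
# Both equal 2.0 * h = 3.0 for this linear function
```

For a non-linear function the two quantities differ, since the fME then depends on each observation's own base value of x_1.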

Fig. 2: Left: A random forest is trained on a single feature x with a quadratic effect on the target. The training space corresponds to the interval [−5, 5]. Right: We compute an fME with a step size of 1 for each observation. After moving 1 unit in the x direction, points with x > 4 are considered EPs (red). The random forest extrapolates and predicts unreliably in this area of the feature space. The resulting fMEs are irregular and should not be used for interpretation purposes.

Fig. 3: The prediction function is colored yellow. The dME is given by the slope of the tangent (grey) at the point of interest (x = 0.5). The interpretation of the dME corresponds to the evaluation of the tangent value at x = 1.5, which is subject to an error (red) almost as large as the actual change in prediction. The fME equals the change in prediction along the secant (green) through the predictions at x = 0.5 and x = 1.5.

Fig. 4: Consider a quadratic relationship between the target y and a single feature x. A decision tree fits a piecewise constant prediction function (black line) to the training data (blue points). The dME at the point x = −2.5 (green dot) is zero, while the fME with h = 1 traverses the jump discontinuity and reaches the point x = −1.5 (red dot).

Fig. 6: We wish to determine the area between the prediction function and the secant.

Fig. 10: The target is determined by a single feature x. On the interval [−5, 0[ there is a linear feature effect. On the interval [0, 5] the functional relationship consists of a transformed sine wave. We first use the DGP directly, then add random noise to the data and train an SVM.

Fig. 11: Univariate data without noise. For each point, moving in the x direction by the length of the arrow results in the fME / NLM indicated on the vertical axis. Left: fMEs with step size h = 2. A regression tree partitions the feature space into subspaces (in this case intervals) where the fMEs are most homogeneous. The horizontal lines correspond to the cAMEs. Right: NLM values and cANLMs for each subspace.

Fig. 12: Univariate data with noise. For each point, moving in the x direction by the length of the arrow results in the fME / NLM indicated on the vertical axis. Left: fMEs with step size h = 2 and cAMEs. Right: NLM values and cANLMs.
(a) DGP with additive link. (b) DGP with multiplicative link.

Fig. 13: Bivariate data and univariate feature change h_1 = 2. For each point, moving in the x_1 direction by the length of the arrow results in the fME / NLM indicated by the color. fMEs (left) and NLM (right). Negative NLM values are colored red.
(a) DGP with additive link. (b) DGP with multiplicative link.

Fig. 14: Bivariate data and bivariate feature change h_1 = 2 and h_2 = 3. For each point, moving in the x_1 and x_2 directions by the lengths of the respective arrows results in the fME / NLM indicated by the color. fMEs (left) and NLM (right). Negative NLM values are colored red.

Fig. 17: cAME and cANLM values for the step size h_crim = −5 on feature subspaces created via CTREE.

Fig. 18: We augment the training data (blue) with uniform points (red). A classification tree partitions the feature space into non-extrapolation areas (predominantly occupied by training observations) and extrapolation areas (predominantly occupied by uniform Monte-Carlo samples).
fME_{x, h_S} = Σ_{K∈P*} g_K(x_K + h_K) + Θ_{-S}(x_{-S}) − [Σ_{K∈P*} g_K(x_K) + Θ_{-S}(x_{-S})] = Σ_{K∈P*} [g_K(x_K + h_K) − g_K(x_K)]

Table 1: Observation used to compute the fME of the feature rm with a step size of 1, before (top) and after (bottom) the intervention in the Boston housing data.

Table 3: Description of the Boston housing data.