Explainable Empirical Risk Minimization

The successful application of machine learning (ML) methods becomes increasingly dependent on their interpretability or explainability. Designing explainable ML systems is instrumental to ensuring transparency of automated decision-making that targets humans. The explainability of ML methods is also an essential ingredient for trustworthy artificial intelligence. A key challenge in ensuring explainability is its dependence on the specific human user ("explainee"). The users of machine learning methods might have vastly different background knowledge about machine learning principles. One user might have a university degree in machine learning or related fields, while another user might have never received formal training in high-school mathematics. This paper applies information-theoretic concepts to develop a novel measure for the subjective explainability of the predictions delivered by a ML method. We construct this measure via the conditional entropy of predictions, given user feedback. The user feedback might be obtained from user surveys or biophysical measurements. Our main contribution is the explainable empirical risk minimization (EERM) principle of learning a hypothesis that optimally balances subjective explainability and risk. The EERM principle is flexible and can be combined with arbitrary machine learning models. We present several practical implementations of EERM for linear models and decision trees. Numerical experiments demonstrate the application of EERM to detecting the use of inappropriate language on social media.


I. INTRODUCTION
We consider machine learning (ML) methods that learn a hypothesis map that reads in features of a data point and outputs a prediction for some quantity of interest (label). Explainable ML (XML) aims at supporting human end users in understanding (at least partially) how a ML method arrives at its final predictions [1], [2], [3], [4], [5], [6], [7]. One key challenge of XML is the variation in background knowledge of human end users [8], [9]. ML methods that are explainable for a domain expert might be opaque ("black-box") for a lay user.
As a case in point, consider a ML method that uses a deep net to diagnose skin cancer from images [10].
The predictions obtained from a deep net fed with an image might be explained by quantifying the influence of individual pixels on the resulting prediction [11]. We can conveniently visualize such influence measures as a saliency map [12]. While the resulting saliency map is a useful explanation for a dermatologist, it might not offer a sufficient explanation to a lay person without university-level training in dermatology.
There seems to be no widely accepted definition for the (level of) explainability of a ML method [13], [14]. Basic requirements for explainable ML include stability (obtaining similar predictions in similar cases) and interpretability (relating predictions to known concepts) [15], [7]. We try to capture important aspects of explainable ML by identifying explainability with a notion of predictability or the lack of uncertainty [7]. In particular, we use a subjective (or personalized) notion of explainability that reflects the discrepancy between ML predictions and the intuition of a specific user. We encode this intuition by a user-specific notion of similarity between data points. Subjective explainability of a ML method requires similar predictions for data points considered similar by a user. This paper proposes explainable empirical risk minimization (EERM) as a novel XML method. The main idea behind EERM is to learn a hypothesis whose predictions do not deviate too much across data points that are considered similar by the human consumer of these predictions ("end user"). To this end, we require the user to provide a feedback signal for data points in a training set. We allow for a wide range of possible feedback signals. The user feedback might be answers to a survey, bio-physical measurements or visual observations of facial expressions [16], [17], [18]. The goal is to learn a hypothesis that conforms with the user intuition in the sense of delivering similar predictions for data points with similar user feedback signals. Our approach is somewhat related to prototype or example-based explanations, with candidate prototypes (or examples) being data points having similar user feedback signals [19].
It is important to note that our approach allows for arbitrary user feedback signals. We might call our method "user-agnostic" as we only require the user feedback signal for data points in a training set.
Besides the training set we do not require any input from or information about the user. Depending on the quality of the user feedback signal, enforcing subjective explainability of a learnt hypothesis might be beneficial or detrimental to the resulting prediction accuracy (see Section III-A). If the user feedback is strongly correlated with the true label of a data point, our requirement for explainability helps to steer or regularize the learning task. Indeed, we might interpret the user feedback signal as a manifestation of domain expertise and, in turn, the explainability requirement as a means to incorporate domain expertise into a ML method.

A. State of the Art
The increasing need for explainable ML has sparked a considerable amount of research on different methods for explainable ML [13]. Methods for explainable ML can be categorized based on different aspects [13], [14]. One important aspect is the restrictions placed on the underlying ML models. In this regard, there are two orthogonal approaches to explainable ML: either use an intrinsically explainable ("simple") model, or use model-agnostic methods that explicitly generate explanations for a given ("black-box") ML method [20]. Model-agnostic methods provide post-hoc (after model training) explanations [1], [9]. These methods do not require the details of a ML method but only its predictions for some training examples.
Examples of intrinsically explainable models include linear models using few features and shallow decision trees [20]. The interpretation of a linear model is typically obtained from an inspection of the learned weights for the individual features. A large (in magnitude) weight is then read as an indicator for a high relevance of the corresponding feature. The explanation delivered by a decision tree might be in the form of the path from the root node to the decision node (essentially a sequence of elementary tests on the features of the data point). In general, however, there is no widely accepted definition of which models are considered intrinsically explainable. Moreover, there is no consensus about how to measure the explainability of a "simple" model [14].
A main contribution of this paper is a method for constructing an explainable model by regularizing a given (high-dimensional) ML model [21], [12], [3]. As the regularization term, we use a novel measure for the subjective explainability of the predictions obtained from a hypothesis map. What sets this work apart from most existing work on explainable ML is that this measure is implemented using the concept of user feedback. Broadly speaking, the user feedback is some user-specific attribute that is assigned to or associated with a data point. Formally, we can think of the user feedback signal as an additional (user-specific) feature of a data point. This additional feature is measured or determined via the user and revealed to our method for each data point.
Similar to [6], we use information-theoretic concepts to measure subjective explainability. However, while [6] uses the mutual information between an explanation and the prediction, we measure the subjective explainability of a hypothesis using the conditional entropy of its predictions given a user feedback signal. This conditional entropy is then used as a regularizer for empirical risk minimization (ERM), resulting in explainable empirical risk minimization (EERM).
The EERM principle requires a training set consisting of data points for which, besides their features, also the label and user signal values are known. The user signal values for the data points in the training set are used to estimate the subjective explainability of a hypothesis. We obtain different instances of EERM from different hypothesis spaces (models). Two specific instances are explainable linear regression (see Section III-A) and explainable decision tree classification (see Section III-B).
We illustrate the usefulness of EERM using the task of detecting hate speech in social media. Hate speech is a main obstacle towards embracing the Internet's potential for deliberation and freedom of speech [22]. Moreover, the detrimental effect of hate speech seems to have been amplified during the current Covid-19 pandemic [23]. Detecting hate speech requires multi-disciplinary expertise from both social science and computer science [24], [25]. Providing subjective explainability for ML users with different backgrounds is crucial for the diagnosis and improvement of hate speech detection systems [22], [23], [26].

B. Contributions
Our main contributions can be summarized as follows:
• We introduce a novel measure for the subjective explainability of the predictions delivered by a ML method to a specific user. This measure is constructed from the conditional entropy of the predictions given some user signal (see Section II-B).
• Our main methodological contribution is EERM, which uses subjective explainability as a regularizer. We present two equivalent (primal and dual) formulations of EERM as optimization problems (see Section III).
• We detail practical implementations of the EERM principle for linear regression and decision tree classification (see Sections III-A and III-B).
• We illustrate the usefulness of the EERM principle with illustrative numerical experiments. These experiments revolve around explainable weather forecasting and explainable hate-speech detection in social media (see Section IV). We use EERM to learn an explainable decision tree classifier for a user that associates hate speech with the presence of specific keywords.

II. PROBLEM SETUP
We consider a ML application that involves data points, each characterized by a label (quantity of interest) y and some features (attributes) x = (x_1, . . ., x_n)^T ∈ R^n [27], [28]. ML methods aim at learning a hypothesis map h that allows one to predict the label of a data point based solely on its features.
In contrast to standard ML approaches, we explicitly take the specific user of the ML method into account. Each data point is also assigned a user signal u that characterizes it from the perspective of a specific human user. The concept of a user signal u is similar to the features x of a data point. Like features, a user signal is a quantity or property of a data point that can be measured easily in an automated fashion. However, while features typically represent objective measurements of physical quantities, the user signal u is a subjective measurement provided (actively or passively) by the human user of the ML method.
Let us illustrate the rather abstract notion of a user signal with some examples. One important example of a user signal is a manually constructed feature of the data point. Section IV considers hate speech detection in social media, where data points represent short messages ("tweets"). Here, the user signal u for a specific data point could be defined via the presence of a certain word that is considered a strong indicator for hate speech.
The user signal u might also be collected in a more indirect fashion. Consider an application where data points are images that have to be classified into different categories. Here, the user signal u might be derived from EEG measurements taken when a data point (image) is revealed to the user [16].
The goal of supervised ML is to learn a hypothesis that is used to compute the predicted label ŷ = h(x) from the features x = (x_1, . . ., x_n)^T ∈ R^n of a data point. Any ML method that has only finite computational resources can use only a subset of (computationally) feasible maps. We refer to this subset as the hypothesis space (model) H of the ML method. Examples of such hypothesis spaces are linear maps, decision trees and artificial neural networks [29], [30].
For a given data point with features x and label y, we measure the quality of a hypothesis h using some loss function L((x, y), h). The number L((x, y), h) measures the error incurred by predicting the label y of a data point using the prediction ŷ = h(x). Popular examples of loss functions are the squared error loss L((x, y), h) = (h(x) − y)^2 (for numeric labels y ∈ R) and the logistic loss L((x, y), h) = log(1 + exp(−y h(x))) (for binary labels y ∈ {−1, 1}).
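To make these loss functions concrete, the following minimal Python sketch (not part of the original paper) evaluates the squared error loss and the logistic loss for a single data point; the function names are chosen for illustration only.

```python
import numpy as np

def squared_error_loss(y, y_hat):
    """Squared error loss for a numeric label y and prediction y_hat = h(x)."""
    return (y_hat - y) ** 2

def logistic_loss(y, y_hat):
    """Logistic loss for a binary label y in {-1, 1} and real-valued prediction y_hat = h(x)."""
    return np.log(1.0 + np.exp(-y * y_hat))

# a data point with numeric label y = 1.4 and prediction h(x) = 1.0
print(squared_error_loss(1.4, 1.0))   # 0.16
# a data point with binary label y = -1 and prediction h(x) = 0.5
print(logistic_loss(-1, 0.5))         # log(1 + exp(0.5)) ≈ 0.974
```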
Roughly speaking, we would like to learn a hypothesis h that incurs a small loss on any data point. To make this informal goal precise, we can use the notion of expected loss or risk
L(h) := E{L((x, y), h)}.   (2)
Ideally, we would like to learn a hypothesis ĥ with minimum risk,
ĥ ∈ arg min_{h∈H} L(h).   (3)
It seems natural to learn a hypothesis by solving the risk minimization problem (3).
There are two caveats to consider when using the risk minimization principle (3). First, we typically do not know the underlying probability distribution p(x, y) required for evaluating the risk (2). We will see in Section II-A how empirical risk minimization (ERM) is obtained by approximating the risk using an average loss over some training set.
The second caveat to a direct implementation of risk minimization (3) is its ignorance of the explainability of the learned hypothesis ĥ. In particular, we are concerned with the subjective explainability of the predictions ĥ(x) for a user that is characterized via a user signal u for each data point. We construct a measure for this subjective explainability in Section II-B and use it as a regularizer to obtain explainable ERM (EERM) (see Section III).

A. Empirical Risk Minimization
The idea of ERM is to approximate the risk (2) using the average loss (or empirical risk)
L(h|D) := (1/m) Σ_{i=1}^{m} L((x^(i), y^(i)), h).   (4)
The average loss L(h|D) of the hypothesis h is measured on a set of labelled data points (the training set)
D = {(x^(1), y^(1), u^(1)), . . ., (x^(m), y^(m), u^(m))}.   (5)
The training set D contains data points for which we know the true label value y^(i) and the corresponding user signal u^(i).
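As an illustration (not taken from the paper), the empirical risk (4) can be computed as a plain average of per-data-point losses; the helper below assumes the hypothesis is given as a Python callable and all names are illustrative.

```python
import numpy as np

def empirical_risk(h, X, y, loss):
    """Average loss of hypothesis h over a training set of m data points.

    X : array of shape (m, n) holding the feature vectors x^(i)
    y : array of shape (m,) holding the labels y^(i)
    """
    return np.mean([loss(y_i, h(x_i)) for x_i, y_i in zip(X, y)])

# example: a linear hypothesis h(x) = w^T x with squared error loss
w = np.array([0.5, -1.0])
h = lambda x: w @ x
X = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
y = np.array([0.4, -2.1, -0.6])
print(empirical_risk(h, X, y, lambda yi, yh: (yh - yi) ** 2))  # 0.01
```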
Section IV applies our methods to the problem of hate speech detection.In this application, a data point is a short text message ("tweet") and the training set (5) consists of tweets for which we know if they are hate speech or not.As the user signal we will use the presence of a small number of keywords that are considered a strong indicator for hate speech.
Many practical ML methods are based on solving the ERM problem
ĥ ∈ arg min_{h∈H} L(h|D).   (6)
However, a direct implementation of ERM (6) is prone to overfitting if the hypothesis space H is too large (e.g., linear maps using many features, or very deep decision trees) compared to the size m of the training set. To avoid overfitting in this high-dimensional regime [31], [32], we add a regularization term λR(h) to the empirical risk in (6),
ĥ^(λ) ∈ arg min_{h∈H} L(h|D) + λR(h).   (7)
The choice of the regularization parameter λ ≥ 0 in (7) can be guided by a probabilistic model for the data or by using validation techniques [27].
A dual form of regularized ERM (7) is obtained by replacing the regularization term with a constraint,
ĥ^(η) ∈ arg min_{h∈H} L(h|D)  subject to  R(h) ≤ η.   (8)
The solutions of (8) coincide with those of (7) for an appropriate choice of η [33]. Solving the primal formulation (7) might be computationally more convenient as it is an unconstrained optimization problem, in contrast to the dual formulation (8) [34]. However, the dual form (8) allows us to explicitly specify an upper bound η on the value R(h^(η)) of the learned hypothesis h^(η).
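As a hedged illustration of the relation between the primal form (7) and the dual form (8), one can sweep λ in the primal problem and pick the smallest λ whose solution satisfies the constraint R(h) ≤ η. The sketch below does this for ridge-regularized linear regression, where R(h) = ‖w‖² merely stands in for a generic regularizer; all names and the data are illustrative.

```python
import numpy as np

def ridge_solution(X, y, lam):
    """Primal form: minimize (1/m)||X w - y||^2 + lam * ||w||^2 (closed form)."""
    m, n = X.shape
    return np.linalg.solve(X.T @ X / m + lam * np.eye(n), X.T @ y / m)

def dual_via_lambda_sweep(X, y, eta, lambdas):
    """Approximate the constrained form  min L(h|D)  s.t.  ||w||^2 <= eta
    by picking the smallest lambda whose primal solution satisfies the constraint."""
    for lam in sorted(lambdas):
        w = ridge_solution(X, y, lam)
        if w @ w <= eta:
            return w, lam
    return ridge_solution(X, y, max(lambdas)), max(lambdas)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=50)
w_eta, lam = dual_via_lambda_sweep(X, y, eta=1.0, lambdas=np.logspace(-3, 2, 30))
print(w_eta @ w_eta <= 1.0, lam)
```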
Regularization techniques are typically used to improve the statistical performance (risk) of the learned hypothesis. Instead, we use regularization as a vehicle for ensuring explainability. In particular, we do not use the regularization term as an estimate for the generalization error L(h) − L(h|D). Rather, we use a regularization term that measures the subjective explainability of the predictions ŷ = h(x).
The regularization parameter λ in (7) (or η in the dual formulation (8)) adjusts the level of subjective explainability of the learned hypothesis ĥ.

B. Subjective Explainability
There seems to be no widely accepted formal definition for the explainability (interpretability) of a learned hypothesis ĥ. Some authors simply define specific ML methods to deliver an explainable hypothesis [20]. Examples of such intrinsically explainable ML methods include linear regression and (shallow) decision trees.
While linear regression is sometimes considered interpretable, the predictions obtained by applying a linear hypothesis to a huge number of features might be difficult to grasp. Moreover, the interpretability of linear models also depends on the background (formal training) of the specific user of a ML method.
Similar to [6], we use information-theoretic concepts to make the notion of explainability precise. This approach interprets data points as realizations of i.i.d. random variables. In particular, the features x, label y and user signal u associated with a data point are realizations drawn from a joint probability density function (pdf) p(x, y, u). In general, the joint pdf p(x, y, u) is unknown and needs to be estimated from data using, e.g., maximum likelihood methods [28], [29].
Note that, since we model the features of a data point as the realization of a random variable, the prediction ŷ = h(x) also becomes the realization of a random variable. Figure 1 summarizes the overall probabilistic model for data points, the user signal and the predictions delivered by (the hypothesis learned with) a ML method.
We measure the subjective explainability of the predictions ŷ delivered by a hypothesis h for a data point (x, y, u) as
E(h|u) := C − H(h|u).   (9)
Here, we use the conditional (differential) entropy H(h|u) := H(ŷ|u) of the prediction ŷ = h(x) given the user signal u (see Ch. 2 and Ch. 8 of [35]). We introduce the ("calibration") constant C in (9) for notational convenience. The actual value of C is irrelevant for our approach (see Section III) and serves only the convention that the subjective explainability E(h|u) is non-negative.
For regression problems, the predicted label ŷ might be modelled as a continuous random variable. In this case, the quantity H(ŷ|u) is a conditional differential entropy. With a slight abuse of notation, we refer to H(ŷ|u) as a conditional entropy and do not explicitly distinguish the continuous case from the case where ŷ is discrete, such as in the classification problems studied in Section III-B and Section IV.
The conditional entropy H(h|u) in (9) quantifies the uncertainty (of a user that assigns the value u to a data point) about the prediction ŷ = h(x) delivered by the hypothesis h. Smaller values of H(h|u) correspond to smaller levels of subjective uncertainty about the prediction ŷ = h(x) for a data point with known user signal u. This, in turn, corresponds to a larger value E(h|u) of subjective explainability.
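For discrete predictions ŷ and a discrete user signal u (the situation in the classification experiments of Section IV), the conditional entropy H(ŷ|u) can be estimated by plugging empirical frequencies into the defining formula H(ŷ|u) = Σ_u p(u) H(ŷ|u = u). The following plug-in estimator is a minimal sketch of this idea (not the estimator prescribed later in the paper); all names are illustrative.

```python
import numpy as np
from collections import Counter

def conditional_entropy_bits(y_hat, u):
    """Plug-in estimate of H(y_hat | u) in bits for discrete predictions y_hat
    and discrete user signals u (both given as 1-d arrays of equal length)."""
    m = len(u)
    H = 0.0
    for u_val, count_u in Counter(u).items():
        p_u = count_u / m
        preds_given_u = [y_hat[i] for i in range(m) if u[i] == u_val]
        # empirical distribution of the prediction for this user-signal value
        probs = np.array(list(Counter(preds_given_u).values())) / count_u
        H += p_u * (-np.sum(probs * np.log2(probs)))
    return H

# toy example: predictions almost determined by u -> small conditional entropy
u     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_hat = np.array([0, 0, 0, 1, 1, 1, 1, 1])
print(conditional_entropy_bits(y_hat, u))  # ≈ 0.41 bits
```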
Section IV discusses explainable methods for detecting hate speech or the use of offensive language.
A data point represents a short text message (a tweet). Here, the user signal u could be the presence of specific keywords that are considered a strong indicator for hate speech or offensive language. These keywords might be provided by the user by answering a survey, or they might be determined by computing word histograms on public datasets that have been manually labeled [36].

III. EXPLAINABLE EMPIRICAL RISK MINIMIZATION
Section II has introduced all the components of EERM as a novel principle for explainable ML. EERM learns a hypothesis h by using an estimate Ĥ(h|u) for the conditional entropy in (9) as the regularization term R(h) in (7),
ĥ^(λ) ∈ arg min_{h∈H} L(h|D) + λ Ĥ(h|u).   (11)
A dual form of (11) is obtained by specializing (8),
ĥ^(η) ∈ arg min_{h∈H} L(h|D)  subject to  Ĥ(h|u) ≤ η.   (12)
The empirical risk L(h|D) and the regularizer Ĥ(h|u) are computed solely from the available training set (5). We will discuss specific choices for the estimator Ĥ(ŷ|u) in Sections III-A and III-B.
The idea of EERM is that the solution of (11) (or (12)) is a hypothesis that balances the requirement of a small loss (accuracy) with a sufficient level of subjective explainability E(h|u) = C − H(h|u). This balance is steered by the parameter λ in (11) and η in (12), respectively. Choosing a large value for λ in (11) (a small value for η in (12)) penalizes any hypothesis resulting in a large estimate Ĥ(h|u) for the conditional entropy H(h|u). Assuming Ĥ(h|u) ≈ H(h|u), using a large λ in (11) (a small η in (12)) enforces a high subjective explainability (9) of the learned hypothesis ĥ^(λ).
For the specific choice λ = 0, EERM (11) reduces to plain ERM, which delivers a hypothesis ĥ^(λ=0) with minimum empirical risk L_min. This special case of EERM is obtained from the dual form (8) by using a sufficiently large η.

A. Explainable Linear Regression
We now specialize EERM in its primal form (11) to linear regression [28], [29]. Linear regression methods learn the parameters w of a linear hypothesis h^(w)(x) = w^T x such that the squared error loss of the resulting predictions is minimized. The features x and user signal u of a data point are modelled as realizations of jointly Gaussian random variables with zero mean and covariance matrix C,
(x^T, u)^T ∼ N(0, C).   (13)
Note that (13) only specifies a marginal of the joint pdf p(x, y, u) (see Figure 1). Using the probabilistic model (13), we obtain (see [35])
H(h^(w)|u) = (1/2) log(2πe σ²_{ŷ|u}).   (14)
Here, σ²_{ŷ|u} denotes the conditional variance of the predicted label ŷ = h^(w)(x) given the user signal u of a data point.
To develop an estimator Ĥ(h|u) for (14), we use the identity [37, Sec. 4.6]
σ²_{ŷ|u} = min_{α∈R} E{(ŷ − αu)²}.   (15)
The identity (15) relates the conditional variance σ²_{ŷ|u} to the minimum mean squared error that can be achieved by estimating ŷ using a linear estimator αu with some α ∈ R. We obtain an estimator for the conditional variance σ²_{ŷ|u} by replacing the expectation in (15) with a sample average over the training set D (5),
σ̂²_{ŷ|u} := min_{α∈R} (1/m) Σ_{i=1}^{m} (h^(w)(x^(i)) − αu^(i))².   (16)
It seems natural to estimate the conditional entropy H(h^(w)|u) by plugging the estimated conditional variance (16) into (14), yielding the plug-in estimator (1/2) log(2πe σ̂²_{ŷ|u}). However, in view of the duality between (8) and (12), replacing a given entropy estimator with any monotonically increasing function of it essentially amounts to a reparametrization λ → λ′ and η → η′. Since such a reparametrization is irrelevant as we choose λ in a data-driven fashion, we use the estimated conditional variance (16) itself as an estimator,
Ĥ(h^(w)|u) := σ̂²_{ŷ|u}.   (17)
Note that we neither require the estimator (17) to be consistent nor to be unbiased [38]. Our main requirement is that, with high probability, the estimator (17) varies monotonically with the conditional entropy H(h^(w)|u).
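The minimization over α in (16) has a closed-form solution (the least-squares coefficient of ŷ on u), so the estimator (17) can be computed in a few lines. The following numpy sketch is one possible implementation under this assumption; it is not the authors' code and the variable names are illustrative.

```python
import numpy as np

def estimated_conditional_variance(y_hat, u):
    """Estimate of the conditional variance of the prediction y_hat given the
    user signal u, obtained by replacing the expectation in (15) with a sample
    average over the training set (see (16)-(17)).

    y_hat : array of shape (m,) with predictions h(x^(i))
    u     : array of shape (m,) with user signals u^(i)
    """
    # the minimizing alpha in (16) is the least-squares coefficient of y_hat on u
    alpha = (u @ y_hat) / (u @ u)
    residual = y_hat - alpha * u
    return np.mean(residual ** 2)

# toy example: predictions strongly aligned with the user signal -> small estimate
u = np.array([1.0, -2.0, 0.5, 3.0])
y_hat = 2.0 * u + np.array([0.1, -0.1, 0.05, 0.0])
print(estimated_conditional_variance(y_hat, u))
```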
Inserting the estimator (17) into EERM (11) yields Algorithm 1 as an instance of EERM for linear regression. Algorithm 1 requires as input a choice for the regularization parameter λ > 0 and a training set D = {(x^(1), y^(1), u^(1)), . . ., (x^(m), y^(m), u^(m))}. As its output, Algorithm 1 delivers a hypothesis ĥ^(λ) that compromises between a small risk L(h) and subjective explainability E(h|u). This compromise is controlled by the value of λ.

Algorithm 1 Explainable Linear Regression
Input: explainability parameter λ, training set D (see (5))
1: solve
ŵ^(λ) ∈ arg min_w (1/m) Σ_{i=1}^{m} (y^(i) − w^T x^(i))² + λ σ̂²_{ŷ|u}   (18)
(the first term is the empirical risk, the second term measures the subjective explainability)
Output: explainable linear hypothesis ĥ^(λ)(x) := (ŵ^(λ))^T x

Fundamental Trade-Off Between Subjective Explainability and Risk. Let us now study the fundamental trade-off between the subjective explainability E(h|u) and the risk of a linear hypothesis for data points characterized by a single feature x. We consider data points (x, u, y)^T, characterized by a single feature x ∈ R, numeric label y ∈ R and user feedback u ∈ R, as i.i.d. realizations of a Gaussian random vector (19). Our goal is to learn a linear hypothesis h(x) = x^T w which is parametrized by a weight vector w. Let us require a minimum prescribed subjective explainability E(h|u) ≥ C − η, which is equivalent to the constraint (20) (see (9) and (14)). We can further develop the constraint (20) using (19) and basic calculus for Gaussian processes [39], which yields (21). The constraint (20) is then enforced by requiring (22). The goal is to find a linear hypothesis h(x) = x^T w, whose weight vector w satisfies (22) to ensure sufficient subjective explainability, that incurs the minimum risk (23), where µ_{x²} = σ²_x + µ²_x and µ_{xy} = σ_{x,y} + µ_x µ_y. We minimize the risk (23) under the constraint (22), which is equivalent to enforcing a subjective explainability of at least C − η, resulting in the constrained problem (25). Any optimal weight w solving (25) is characterized by the Karush-Kuhn-Tucker conditions [34].
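A possible implementation of Algorithm 1 is sketched below; it is not the authors' code. It exploits the fact that, for a fixed α, the EERM objective obtained from (18) is a quadratic function of w, and vice versa, so alternating exact minimization over w and α is a simple way to solve it; all names and the toy data are illustrative.

```python
import numpy as np

def explainable_linear_regression(X, y, u, lam, num_iters=100):
    """Sketch of Algorithm 1 (EERM for linear regression).

    Minimizes (1/m)*||X w - y||^2 + lam * min_alpha (1/m)*||X w - alpha*u||^2
    by alternating between the weight vector w and the auxiliary coefficient alpha.

    X : (m, n) feature matrix, y : (m,) labels, u : (m,) user signals.
    """
    m, n = X.shape
    alpha = 0.0
    w = np.zeros(n)
    for _ in range(num_iters):
        # w-update: normal equations of the quadratic objective for fixed alpha
        A = (1.0 + lam) * (X.T @ X) / m
        b = (X.T @ y + lam * alpha * (X.T @ u)) / m
        w = np.linalg.solve(A, b)
        # alpha-update: least-squares coefficient of the predictions on u
        y_hat = X @ w
        alpha = (u @ y_hat) / (u @ u)
    return w

# toy data: the user signal is a noisy copy of the label
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
u = y + 0.2 * rng.normal(size=200)
w_hat = explainable_linear_regression(X, y, u, lam=1.0)
print(w_hat)
```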
B. Explainable Decision Tree Classification

Consider data points characterized by features x and a binary label y ∈ {0, 1}. Moreover, each data point is characterized by a binary user signal u ∈ {0, 1}. The restriction to binary labels and user signals is for ease of exposition. Our approach can be generalized easily to more than two label values (multi-class classification) and non-binary user signals.
The model H in (12) is constituted by all decision trees whose root node tests the user signal u and whose depth does not exceed a prescribed maximum depth d_max [27]. The depth d of a specific decision tree h is the maximum number of test nodes that are encountered along any possible path from the root node to a leaf node [27].
Figure 4 illustrates a hypothesis h obtained from a decision tree with depth d = 2. We consider only decision trees whose nodes implement a binary test, such as whether a specific feature x_j exceeds some threshold. Each such binary test can contribute at most one bit to the entropy of the resulting prediction (at some leaf node).
Thus, for a given user signal u, the conditional entropy of the prediction ŷ = h(x) is upper bounded by d − 1 bits. Indeed, since the root node is reserved for testing the user signal u, the number of binary tests carried out for computing the prediction is upper bounded by d − 1. We then obtain Algorithm 2, which enforces the explainability constraint in (12) by learning, for each value of the user signal, a separate decision tree of bounded depth.

Algorithm 2 Explainable Decision Tree Classification
Input: explainability bound η (in bits), training set D (see (5))
1: choose the maximum depth d_max such that the constraint in (12) is satisfied
2: split D into D^(u=0) and D^(u=1) according to the value of the user signal u
3: learn decision tree classifier h^(u=0) with maximum depth d_max using training set D^(u=0)
4: learn decision tree classifier h^(u=1) with maximum depth d_max using training set D^(u=1)
Output: hypothesis ĥ^(η) whose root node tests the user signal u and whose subtrees are h^(u=0) and h^(u=1)
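Assuming the split-by-user-signal structure described above, Algorithm 2 can be implemented with off-the-shelf decision tree learners. The following scikit-learn sketch is one way to do so; it is not the authors' code and the toy data and names are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def explainable_decision_tree(X, y, u, d_max):
    """Sketch of Algorithm 2: learn one decision tree per user-signal value.

    The root node of the combined hypothesis tests the binary user signal u;
    the two subtrees are ordinary decision trees of maximum depth d_max, so the
    prediction given u is determined by at most d_max binary tests.
    """
    trees = {}
    for u_val in (0, 1):
        mask = (u == u_val)
        tree = DecisionTreeClassifier(max_depth=d_max)
        tree.fit(X[mask], y[mask])
        trees[u_val] = tree

    def h(X_new, u_new):
        y_hat = np.empty(len(u_new), dtype=int)
        for u_val, tree in trees.items():
            mask = (u_new == u_val)
            if mask.any():
                y_hat[mask] = tree.predict(X_new[mask])
        return y_hat

    return h

# toy usage with random binary data
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
u = rng.integers(0, 2, size=100)
y = (X[:, 0] > 0).astype(int)
h = explainable_decision_tree(X, y, u, d_max=2)
print(h(X[:5], u[:5]))
```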

IV. NUMERICAL EXPERIMENTS

A. Explainable Linear Regression
We first study the usefulness of EERM with numerical experiments revolving around the problem of predicting the daily maximum temperature. We use a public dataset, covering the period 01.01.2020 - 13.12.2021, downloaded from the Finnish Meteorological Institute (FMI) (https://en.ilmatieteenlaitos.fi/download-observations).
Specifically, we consider data points that represent the daily weather recordings, along with a time stamp, at Nuuksio in Finland. The feature vector x is constructed from the minimum temperature of a day, while the maximum temperature of the same day serves as the label y. Each data point is also characterized by a user signal u ∈ R. The user signal is defined as a perturbed version of the maximum temperature.
We employ the formulas (28)-(30) to learn an explainable linear regression model with the conditional entropy of its predictions upper bounded by a given value of η. The parameters of the Gaussian random vector (19) are estimated from the data points in the training set.
The results in Figures 5 and 6 show that, as the upper bound η on the conditional entropy increases, i.e., as the required subjective explainability E(h|u) decreases, the empirical risk L(h) decreases until the optimal weight w is attained.

B. Explainable Decision Tree
We next study the usefulness of EERM with numerical experiments revolving around the problem of detecting hate speech and offensive language in social media [40]. Hate speech is a contested term whose meaning ranges from concrete threats to individuals to venting anger against authority [41]. Hate speech is characterized by devaluing individuals based on group-defining characteristics such as their race, ethnicity, religion or sexual orientation [42].
Our experiments use a public dataset that contains curated short messages (tweets) from a social network [36]. Each tweet has been manually rated by a varying number of users as either "hate speech", "offensive language" or "neither". For each tweet, we define its binary label as y = 1 ("inappropriate tweet") if the majority of users rated the tweet either as "hate speech" or "offensive language". If the majority of users rated the tweet as "neither", we define its label value as y = 0 ("appropriate tweet").
The feature vector x of a tweet is constructed using the normalized frequencies ("tf-idf") of individual words [43]. Each tweet is also characterized by a binary user signal u ∈ {0, 1}. The user signal is defined to be u = 1 if the tweet contains at least one of the 5 most frequent words appearing in tweets with y = 1.
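The construction of the tf-idf features and the keyword-based user signal might look as follows with scikit-learn; the keyword-selection rule and all names are illustrative assumptions rather than the exact preprocessing used for the reported results.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

def build_features_and_user_signal(tweets, labels, num_keywords=5):
    """tweets : list of raw text messages, labels : binary array (1 = inappropriate).

    Returns the tf-idf feature matrix X, a binary user signal u that is 1 if a
    tweet contains at least one of the most frequent words in inappropriate
    tweets, and the selected keywords themselves.
    """
    # tf-idf features for all tweets
    tfidf = TfidfVectorizer()
    X = tfidf.fit_transform(tweets)

    # find the most frequent words among tweets labelled y = 1
    counts = CountVectorizer(stop_words="english")
    bad = [t for t, y in zip(tweets, labels) if y == 1]
    C = counts.fit_transform(bad)
    vocab = np.array(counts.get_feature_names_out())
    keywords = vocab[np.asarray(C.sum(axis=0)).ravel().argsort()[::-1][:num_keywords]]

    # user signal: does the tweet contain at least one of the keywords?
    u = np.array([int(any(k in t.lower() for k in keywords)) for t in tweets])
    return X, u, keywords

# toy usage
tweets = ["you are awful", "have a nice day", "awful awful person"]
labels = np.array([1, 0, 1])
X, u, kw = build_features_and_user_signal(tweets, labels, num_keywords=2)
print(kw, u)
```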
We use Algorithm 2 to learn an explainable decision tree classifier with the conditional entropy of its predictions upper bounded by η = 2 bits. The training set D used for Algorithm 2 is obtained by randomly selecting around 90% of the entire dataset. The remaining 10% of the tweets are used as a test set.
To learn the decision tree classifiers in steps 3 and 4 of Algorithm 2, we used the implementations provided by the current version of the Python package scikit-learn [44]. The resulting explainable decision tree classifier ĥ^(η=2)(x) (with the root node testing the user signal) achieved an accuracy of 0.929 on the test set.

V. CONCLUSION
The explainability of the predictions provided by ML methods becomes increasingly relevant for their use in automated decision-making [45], [46]. Given the different levels of expertise and background knowledge of lay and expert users, providing subjective (tailored) explainability is instrumental for achieving trustworthy AI [47], [45]. Our main contribution is EERM as a new design principle for subjectively explainable ML. EERM is obtained by using the conditional entropy of predictions, given a user signal, as a regularizer. The hypothesis learned by EERM balances a small risk against a sufficient explainability for a specific user (explainee).
Fig. 1. The features x, label y and user signal u of a data point are realizations drawn from a pdf p(x, y, u). Our goal is to learn a hypothesis h such that its predictions ŷ have a small conditional entropy given the user signal u.

Figure 2 illustrates the parametrized solutions of (11) in the plane spanned by risk and subjective explainability. The different curves in Figure 2 are parametrized solutions of (11) obtained by using different training sets (assumed to consist of i.i.d. data points) and different estimators Ĥ of the conditional entropy H(h|u) in (9).

Fig. 2. The solutions of EERM, either in the primal (11) or dual (12) form, trace out a curve in the plane spanned by the risk L(h) and the subjective explainability E(h|u).

Fig. 4. EERM implementation for learning an explainable decision tree classifier. EERM amounts to learning a separate decision tree for all data points sharing a common user signal u. The constraint in (12) can be enforced naturally by fixing a maximum tree depth d.

Fig. 5. The subjective explainability E(h|u), controlled via a given value of η, influences the empirical risk L(h).

Fig. 6. The subjective explainability E(h|u), controlled via a given value of η, influences the optimal weights w.