Remember to Correct the Bias When Using Deep Learning for Regression!

When training deep learning models for least-squares regression, we cannot expect that the training error residuals of the final model, selected after a fixed training time or based on performance on a hold-out data set, sum to zero. This can introduce a systematic error that accumulates if we are interested in the total aggregated performance over many data points (e.g., the sum of the residuals on previously unseen data). We suggest adjusting the bias of the machine learning model after training as a default post-processing step, which efficiently solves the problem. The severity of the error accumulation and the effectiveness of the bias correction are demonstrated in exemplary experiments.


1 Problem statement
We consider regression models f_θ : X → R^d of the form

f_θ(x) = aᵀh_w(x) + b    (1)

with parameters θ = (w, a, b) and x ∈ X. Here X is some arbitrary input space, and w.l.o.g. we assume d = 1. The function h_w : X → R^F is parameterized by w and maps the input to an F-dimensional real-valued feature representation, a ∈ R^F, and b is a scalar. If X is a Euclidean space and h_w the identity, this reduces to standard linear regression. However, we are more interested in the case where h_w is more complex. In particular,
• f_θ can be a deep neural network, where a and b are the parameters of the final output layer and h_w represents all other layers (e.g., a convolutional or point cloud architecture);
• h_w : X → R can be any regression model (e.g., a random forest or deep neural network) and f_θ denotes h_w with an additional wrapper, where a = 1 and initially b = 0.
In the following, we call b the distinct bias parameter of our model (although w may comprise many parameters typically referred to as bias parameters if h_w is a neural network). Given some training data D = {(x_1, y_1), . . . , (x_N, y_N)} drawn from a distribution p_data over X × R, we assume that the model parameters θ are determined by minimizing the mean-squared-error (MSE)

R_D(f_θ) = (1/N) Σ_{i=1}^N (y_i − f_θ(x_i))²    (2)

potentially combined with some form of regularization. Typically, the goal is to achieve a low risk R(f_θ) = E_{(x,y)∼p_data}[(y − f_θ(x))²] = E[R_D(f_θ)], where the second expectation is over all test data sets drawn i.i.d. based on p_data. However, here we are mainly concerned with applications where the (expected) absolute total error, defined as the absolute value of the sum of residuals,

∆_D(f_θ) = |Σ_{(x,y)∈D} (y − f_θ(x))|    (3)

is of high importance. The relative total error is then given by

δ_D(f_θ) = ∆_D(f_θ) / |Σ_{(x,y)∈D} y|    (4)

which is closely related to the bias or relative systematic error of the model. Our study is motivated by applications in large-scale ecosystem monitoring such as convolutional neural network-based systems estimating tree canopy area from satellite imagery [2] applied for assessing the total tree canopy cover of a country and learning systems trained on small patches of 3D point clouds to predict the biomass (and thus stored carbon) of large forests [5, 8]. However, there are many other application areas, such as estimating the overall performance of a portfolio based on estimates of the performance of the individual assets, or overall demand forecasting based on forecasts for individual consumers. At first glance, it seems that optimizing E[R_D(f_θ)] guarantees low E[∆_D(f_θ)], where the expectations are again with respect to data sets drawn i.i.d. based on p_data. Obviously, R_D(f_θ) = 0 implies ∆_D(f_θ) = 0. More generally, the optimal parameters θ* minimizing R_D(f_θ) result in ∆_D(f_θ*) = 0.
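As a concrete illustration of these quantities, the following pure-Python sketch (with made-up toy numbers, not data from our experiments) computes the MSE, the absolute total error ∆_D, and the relative total error δ_D for a small set of predictions:

```python
# Toy illustration of the error measures defined above (hypothetical values).
y_true = [2.0, 3.5, 1.0, 4.0]
y_pred = [2.2, 3.3, 1.3, 4.1]

# Residual of each data point: r_i = y_i - f(x_i).
residuals = [yt - yp for yt, yp in zip(y_true, y_pred)]

mse = sum(r ** 2 for r in residuals) / len(residuals)  # R_D(f), per-point error
abs_total_error = abs(sum(residuals))                  # Delta_D(f), summed error
rel_total_error = abs_total_error / abs(sum(y_true))   # delta_D(f)

print(mse, abs_total_error, rel_total_error)
```

Note that the residuals partly cancel in the sum, so a model can have a moderate MSE but a very small (or, with a shifted bias, a very large) total error.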
Actually, for all parameters θ_{b*} = (w, a, b*), where b* is the optimal bias parameter minimizing R_D(f_θ) given w and a, it is well known that the error residuals sum to zero and thus ∆_D(f_θ_{b*}) = 0. This can directly be seen from equation 6 below. However, when b is not optimal in the sense above, a low R_D(f_θ) may not imply a low ∆_D(f_θ). In fact, if we are ultimately interested in the total aggregated performance over many data points, a wrongly adjusted parameter b may lead to significant systematic errors. Assume that f* is the Bayes optimal model for a given task and that f_δ is the model where the optimal bias parameter b* is replaced by b* − δ_b. Then for a test set D_test of cardinality N_test we have

Σ_{(x,y)∈D_test} (y − f_δ(x)) = Σ_{(x,y)∈D_test} (y − f*(x)) + N_test · δ_b .    (5)

That is, the errors δ_b accumulate. While one typically hopes that errors partly cancel out when applying a model to a lot of data points, the aggregated error due to a badly chosen bias parameter increases with the number of data points. This can be a severe problem when using deep learning for regression, because in the canonical training process of a neural network for regression minimizing the (regularized) MSE, the parameter b of the final model cannot be expected to be optimal:
• Large deep learning systems are typically not trained until the parameters are close to their optimal values, because this is not necessary to achieve the desired performance in terms of MSE and/or training would take too long.
• The final weight configuration is often picked based on the performance on a validation data set, not depending on how close the parameters are to their optimal values [e.g., 12].
• Mini-batch learning introduces a random effect in the parameter updates, and therefore in the bias parameter value in the finally chosen network.
Thus, despite low MSE, the performance of a (deep) learning system in terms of the total error as defined in equation 3 can get arbitrarily bad. For example, in the canopy estimation task described above, you may get a decently accurate estimate for each individual image, but the prediction over a large area (i.e., the quantity you are actually interested in) could be very wrong.
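This accumulation is easy to see in a small simulation (a hypothetical sketch, not one of our experiments): consider a model whose only flaw is a constant bias offset δ_b. Its zero-mean noise residuals partly cancel, but the offsets add up, so the total error grows roughly linearly with the test set size.

```python
import random

random.seed(0)
delta_b = 0.1  # constant bias offset of the otherwise optimal model

totals = []
for n_test in (100, 1000, 10000):
    # Residuals of the optimal model: zero-mean noise, grows only ~ sqrt(n).
    noise = [random.gauss(0.0, 1.0) for _ in range(n_test)]
    # The biased model shifts every residual by delta_b, adding n * delta_b.
    totals.append(abs(sum(r + delta_b for r in noise)))

print(totals)  # total error grows roughly like n_test * delta_b
```

Even though the per-point MSE of both models is almost identical, the summed residuals of the biased model diverge with n_test.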
Therefore, we propose to adjust the bias parameter after training a machine learning model for least-squares regression as a default postprocessing step. In the next section, we show how to simply compute this correction and discuss the consequences. Section 3 presents experiments demonstrating the problem and the effectiveness of the proposed solution.
2 Solution: Adjusting the bias

Given fixed parameters w and a, for an optimally adjusted bias parameter b* on D = {(x_1, y_1), . . . , (x_N, y_N)} the first derivative of the MSE w.r.t. b vanishes:

∂R_D(f_θ)/∂b = −(2/N) Σ_{i=1}^N (y_i − aᵀh_w(x_i) − b*) = 0    (6)

Thus, for fixed w and a we can simply solve for the optimal bias parameter:

b* = (1/N) Σ_{i=1}^N (y_i − aᵀh_w(x_i)) = b + (1/N) Σ_{i=1}^N (y_i − f_θ(x_i)) =: b + δ_b    (7)

The trivial consequences of this adjustment are:
1. The MSE on the training data set is reduced.
2. The residuals on the training set cancel each other.
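This adjustment is cheap to compute. A minimal pure-Python sketch (function and variable names are our own illustration): δ_b is simply the mean residual of the trained model on the adjustment data, and adding it to the predictions makes the residuals sum to (numerically) zero.

```python
def bias_correction(y_true, y_pred):
    """Return delta_b, the mean residual on the adjustment data."""
    return sum(yt - yp for yt, yp in zip(y_true, y_pred)) / len(y_true)

# Hypothetical example: the model systematically over-predicts by 0.4.
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.4, 2.4, 3.4, 4.4]

delta_b = bias_correction(y_true, y_pred)        # about -0.4
corrected = [yp + delta_b for yp in y_pred]      # shift every prediction

# After the correction the residuals sum to (numerically) zero.
print(sum(yt - yp for yt, yp in zip(y_true, corrected)))
```

In a neural network, the same effect is obtained by adding δ_b to the bias parameter of the final linear layer once, so inference cost is unchanged.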
But what happens on unseen data? Adjusting the single scalar parameter b based on a lot of data is very unlikely to lead to overfitting. On the contrary, in practice we typically observe a reduced MSE on external test data after adjusting the bias. However, this effect is typically minor. The weights of the neural network, and in particular the bias parameter in the final linear layer, are learned sufficiently well that the MSE is not significantly degraded by the suboptimally adjusted bias parameter; that is why one typically does not worry about it, although the effect on the absolute total error may be drastic.
Which data should be used to adjust the bias? While one could use an additional hold-out set for the final optimization of b, this is not necessary. Data already used in the model design process can be reused, because, assuming a sufficient amount of data, selecting a single parameter is unlikely to lead to overfitting. If there is a validation data set (e.g., for early-stopping), then these data could be used. However, we recommend simply using all data available for model building (e.g., the union of training and validation set). If data augmentation is used, the augmented data could be considered as well.
How to deal with regularization? So far, we have only considered empirical risk minimization. However, the bias parameter can be adjusted regardless of how the model was obtained. This includes the use of early-stopping [12] or regularized risk minimization with an objective of the form

(1/N) Σ_{i=1}^N (y_i − f_θ(x_i))² + Ω(θ) .

Here, Ω denotes some regularization term depending on the parameters. This includes weight-decay; however, typically this type of regularization would not consider the bias parameter b of a regression model anyway [e.g., 1, p. 342].

3 Examples
In this section, we present experiments that illustrate the problem of a large total error despite a low MSE and show that adjusting the bias as proposed above is a viable solution. We start with simple regression tasks based on a UCI benchmark data set [3], which are easy to reproduce. Then we move closer to real-world applications and consider convolutional neural networks for ecosystem monitoring.

Gas turbine emission prediction
First, we look at an artificial example based on real-world data from the UCI benchmark repository [3], which is easy to reproduce. We consider the Gas Turbine CO and NOx Emission Data Set [6], where each data point corresponds to CO and NOx (NO and NO₂) emissions and 11 aggregated sensor measurements from a gas turbine summarized over one hour. The typical tasks are to predict the hourly emissions given the sensor measurements. Here we consider the fictitious task of predicting the total amount of CO emissions for a set of measurements.
Experimental setup. There are 36 733 data points in total. We assumed that we know the emissions for N_train = 21 733 randomly selected data points, which we used to build our models.
We trained a neural network with two hidden layers with sigmoid activation functions having 16 and 8 neurons, respectively, feeding into a linear output layer. There were shortcut connections from the inputs to the output layer. We randomly split the training data into 16 733 examples for gradient computation and 5000 examples for validation. The network was trained for 1000 epochs using Adam [7] with a learning rate of 1 · 10⁻² and mini-batches of size 64. The network with the lowest error on the validation data was selected. For adjusting the bias parameter, we computed δ_b using equation 7 and all N_train data points available for model development. As a baseline, we fitted a linear regression model using all N_train data points.
We used Scikit-learn [10] and PyTorch [9] in our experiments. The input data were converted to 32-bit floating point precision. We repeated the experiments 10 times with 10 random data splits, network initializations, and mini-batch shufflings.
Results. The results are shown in Table 1 and Figure 1. The neural networks without bias correction achieved a higher R² (coefficient of determination) than the linear regression on the training and test data, see Table 1. On the test data, the R² averaged over the ten trials increased from 0.56 to 0.78 when using the neural network. However, ∆_D and ∆_Dtest were much lower for linear regression. This shows that a better MSE does not directly translate to a better total error (sum of residuals).
Correcting the bias of the neural network did not change the networks' R²; however, the total errors went down to the same level as for linear regression and even below. Thus, correcting the bias gave the best of both worlds: a low MSE for individual data points and a low accumulated error.
Figure 1 demonstrates how the total error developed as a function of the test data set size. As predicted, with a badly adjusted bias parameter the total error increased with the number of test data points, while for the linear models and the neural network with adjusted bias this negative effect was less pronounced. The linear models performed worse than the neural networks with adjusted bias parameters, which can be explained by the worse accuracy of the individual predictions.

Table 1: Results for the total CO emissions prediction tasks for the different models, where "linear" refers to linear regression, "not corrected" to a neural network without bias correction, and "corrected" to the same neural network with corrected bias parameter. The results are based on 10 trials. The mean and standard error (SE) are given; values are rounded to two decimals. R², ∆, and δ denote the coefficient of determination, the absolute total error, and the relative total error; δ is given in percent; D and D_test are all data available for model development and testing, respectively.

Forest Coverage
Deep learning holds great promise for large-scale ecosystem monitoring [11, 15], for example for estimating tree canopy cover and forest biomass from remote sensing data [2, 8]. Here we consider a simplified task where the goal is to predict the number of pixels in an image that belong to forests given a satellite image. We generated the input data from Sentinel 2 measurements (RGB values) and the accumulated pixels from a land-cover map as targets, see Figure 2 for examples. Both input and target are at the same 10 m spatial resolution.

Experimental setup. From the 127 643 data points in total, 70% (89 350) were used for training, 10% (12 764) for validation, and 20% (25 529) for testing. For each of the 10 trials a different random split of the data was considered.
We employed EfficientNet-B0 [13], a deep convolutional network that uses mobile inverted bottleneck MBConv [14] and squeeze-and-excitation [4] blocks. It was trained for 300 epochs with Adam and a batch size of 256. For 100 epochs the learning rate was set to 3 · 10⁻⁴ and thereafter reduced to 1 · 10⁻⁵. The validation set was used to select the best model w.r.t. R². When correcting the bias, the training and validation set were combined. We considered the constant model predicting the mean of the training targets as a baseline.
Results. The results are summarized in Figure 3 and Table 2. The bias correction did not yield a better R², with 0.992 on the training set and 0.977 on the test set. However, ∆_Dtest decreased by a factor of 2.6 from 152 666 to 59 819. The R² for the mean prediction is by definition 0 on the training set and was close to 0 on the test set, yet its ∆_Dtest was 169 666, meaning that a shift in the distribution center occurred, rendering the mean prediction unreliable.
In Figure 3, we show ∆_Dtest and δ_Dtest while increasing the test set size. As expected, the total absolute error of the uncorrected neural networks increased with an increasing number of test data points. Simply predicting the mean gave similar results in terms of the accumulated errors compared to the uncorrected model, which shows how misleading the R² can be as an indicator of how well regression models perform in terms of the accumulated total error. When the bias was corrected, this effect drastically decreased and the performance was better than that of the mean prediction.

Conclusions
Adjusting the bias such that the residuals sum to zero should be the default postprocessing step when doing least-squares regression using deep learning. It comes at the cost of at most a single forward propagation of the training and/or validation data set, but removes a systematic error that accumulates if individual predictions are summed. In practice, we can either replace b in our trained model by b* or add δ_b to all model predictions. The costs of computing b* and δ_b are the same as computing the error on the data set used for adjusting the bias.
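The two application options are equivalent, as the following toy sketch illustrates (made-up numbers and illustrative names; `features_term` stands for the feature part aᵀh_w(x) of the model):

```python
# Two equivalent ways to apply the correction:
# (a) fold delta_b into the model's bias parameter once, or
# (b) add delta_b to every prediction at inference time.
delta_b = 0.25        # hypothetical mean residual on the adjustment data
b = 1.0               # trained (suboptimal) bias parameter

b_star = b + delta_b  # option (a): replace b by b*

def predict_a(features_term):
    return features_term + b_star

def predict_b(features_term):
    return (features_term + b) + delta_b

print(predict_a(3.0) == predict_b(3.0))  # both give identical outputs
```

Option (a) is preferable for deployed models, since the correction is then baked into the parameters and cannot be forgotten at inference time.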

Figure 1: Absolute errors (absolute value of the summed residuals) are shown on the left and the relative errors on the right for the CO emission prediction task. Both are presented in relation to the test set size. Here "linear" refers to linear regression, "not corrected" to a neural network without bias correction, and "corrected" to the same neural network with corrected bias parameter. The results are averaged over 10 trials; the error bars show the standard error (SE).

Figure 3: The absolute errors (absolute value of the summed residuals) are shown on the left and the relative errors on the right for the forest coverage prediction task. Both are presented in relation to the test set size. Results were averaged over 10 trials and show the different models, where "mean" refers to predicting the constant mean of the training set, "not corrected" to EfficientNet-B0 without bias correction, and "corrected" to the same neural network with corrected bias parameter. The error bars show the standard error (SE).
That is, we are interested in how well Σ_{(x,y)∈D_test} f_θ(x) approximates Σ_{(x,y)∈D_test} y for a test set D_test. For |D_test| → ∞, a constant model predicting ŷ = E_{(x,y)∼p_data}[y] would minimize ∆_{D_test}(f_θ)/|D_test|. However, in practice (1/|D|) Σ_{(x,y)∈D} y and (1/|D_test|) Σ_{(x,y)∈D_test} y can be considerably different from each other and from ŷ because of finite sample effects and violations of the i.i.d. assumption (e.g., due to covariate shift or sample selection bias), so optimization of individual predictions (e.g., minimizing equation 2) is preferred.

Table 2: Results of forest coverage prediction. R² and ∆ denote the coefficient of determination and the absolute total error; D and D_test are all data available for model development and testing, respectively. The relative total error δ is given in percent. Average and standard error (SE) of these metrics are given over 10 trials for the different models, where "mean" refers to predicting the constant mean of the training set, "not corrected" to EfficientNet-B0 without bias correction, and "corrected" to the same neural network with corrected bias parameter. Values are rounded to three decimals.