Interpretable Linear Dimensionality Reduction based on Bias-Variance Analysis

One of the central issues of several machine learning applications on real data is the choice of the input features. Ideally, the designer should select only the relevant, non-redundant features to preserve the complete information contained in the original dataset, with little collinearity among features and a smaller dimension. This procedure helps mitigate problems like overfitting and the curse of dimensionality, which arise when dealing with high-dimensional problems. On the other hand, it is not desirable to simply discard some features, since they may still contain information that can be exploited to improve results. Instead, dimensionality reduction techniques are designed to limit the number of features in a dataset by projecting them into a lower-dimensional space, possibly considering all the original features. However, the projected features resulting from the application of dimensionality reduction techniques are usually difficult to interpret. In this paper, we seek to design a principled dimensionality reduction approach that maintains the interpretability of the resulting features. Specifically, we propose a bias-variance analysis for linear models and we leverage these theoretical results to design an algorithm, Linear Correlated Features Aggregation (LinCFA), which aggregates groups of continuous features with their average if their correlation is "sufficiently large". In this way, all features are considered, the dimensionality is reduced and the interpretability is preserved. Finally, we provide numerical validations of the proposed algorithm both on synthetic datasets to confirm the theoretical results and on real datasets to show some promising applications.


Introduction
Dimensionality reduction plays a crucial role in applying Machine Learning (ML) techniques to real-world datasets (Sorzano et al, 2014). Indeed, in a large variety of scenarios, data are high-dimensional, with a large number of correlated features. For instance, financial datasets are characterized by time series representing the trend of stocks in the financial market, and climatological datasets include several highly correlated features that, for example, represent the temperature at different points on the Earth. On the other hand, only a small subset of features is usually significant for learning a specific task, and it should be identified to train a well-performing ML algorithm. In particular, considering many redundant features boosts the model complexity, which increases its variance and the risk of overfitting (Hastie et al, 2009). Furthermore, when the number of features is high and comparable with the number of samples, the available data become sparse, leading to poor performance (curse of dimensionality (Bishop, 2006)). For this reason, dimensionality reduction and feature selection techniques are usually applied. Feature selection (Chandrashekar and Sahin, 2014) focuses on choosing a subset of features important for learning the target according to a specific criterion (e.g., the features most correlated with the target, or those that produce the highest validation score), discarding the others. On the other hand, dimensionality reduction methods (Sorzano et al, 2014) keep all the features, projecting them onto a (much) lower-dimensional space and producing new features that are linear or non-linear combinations of the original ones. Compared to feature selection, this latter approach has the advantage of reducing the dimensionality without discarding any feature, exploiting the contribution of each of them in the projections. Moreover, recalling that the variance of an average of random variables is smaller than or equal to the largest of the individual variances, the features computed by averaging in linear dimensionality reduction have smaller variance. However, the reduced features might be less interpretable, since they are linear combinations of the original ones with different coefficients.
In this paper, we propose a novel dimensionality reduction method that exploits the information of each feature, without discarding any of them, while preserving the interpretability of the resulting feature set. To this end, we aggregate features through their average, and we propose a criterion that aggregates two features when it is beneficial in terms of the bias-variance tradeoff. Specifically, we focus on linear regression, assuming a linear relationship between the features and the target. In this context, the main idea of this work is to identify a group of aggregable features and substitute them with their average. Intuitively, in linear settings, two features should be aggregated if their correlation is large enough. We identify a theoretical threshold on the minimum correlation for which it is profitable to unify the two features. This threshold is the minimum correlation value between two features for which, comparing the two linear regression models before and after the aggregation, the variance decrease is larger than the increase of bias.
The choice of the average to aggregate the features preserves interpretability (the resulting reduced feature is just the average of k original features). Another advantage is that the variance of the average is smaller than the variance of the original features if they are not perfectly correlated. Indeed, assuming that we unify k standardized features, the variance of their average becomes var(x̄) = 1/k + ((k−1)/k) ρ̄, with ρ̄ being the average correlation between distinct features (Jacod and Protter, 2004). The main restriction of choosing the average to aggregate is that we will only consider continuous features, since the mean is not well-defined for categorical features. Moreover, it would be improper to evaluate the mean of heterogeneous features: interpretability is preserved only if the aggregation is meaningful. Indeed, the practical motivation behind this work relates to spatially distributed climatic datasets, where thousands of measurements of the same variable (e.g., temperature) are available at different points on the Earth. In this context, feeding a supervised model directly with all the variables would be impractical, both in terms of the curse of dimensionality and of collinearity, since measurements that are close in space have a large correlation. Aggregating these variables with their mean leads to a reduced dataset with less collinearity and preserved interpretability, since each variable represents the mean of the measurements of the considered quantity over an area identified in a data-driven way, with theoretical guarantees. A final issue may arise when considering features with different units of measurement or scales; for this reason, we will consider standardized variables.
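The formula above can be checked numerically. The following sketch (plain NumPy; the values of k and ρ are illustrative choices, not taken from the paper) simulates k standardized, equicorrelated features and compares the empirical variance of their average with 1/k + ((k−1)/k) ρ.

```python
import numpy as np

# Empirical check: the variance of the average of k standardized, equicorrelated
# features equals 1/k + ((k-1)/k) * rho. Constants here are illustrative.
k, rho = 5, 0.6
cov = np.full((k, k), rho) + (1 - rho) * np.eye(k)  # unit diagonal, rho off-diagonal

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(k), cov, size=200_000)

empirical = X.mean(axis=1).var()
theoretical = 1 / k + (k - 1) / k * rho
print(empirical, theoretical)  # the two values should nearly coincide
```

For k = 5 and ρ = 0.6 the theoretical value is 0.68, well below the unit variance of each standardized feature, in line with the variance-reduction argument above.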
Outline: The paper is structured as follows. In Section 2, we formally define the problem, and we provide a brief overview of the main dimensionality reduction methods. Section 3 introduces the methodology that will be followed throughout the paper. In Section 4, the main theoretical result is presented in the bivariate setting, which is then generalized to D dimensions in Section 5. Finally, in Section 6, the proposed algorithm, Linear Correlated Features Aggregation (LinCFA), is applied to synthetic and real-world datasets to experimentally confirm the results and lead to the conclusions of Section 7. The paper is accompanied by supplementary material. Specifically, Appendix A contains the proofs and technical results of the bivariate case that are not reported in the main paper, Appendix B shows an additional finite-samples bivariate analysis, Appendix C elaborates on the bivariate results so that they are composed only of theoretical or only of empirical quantities, Appendix D contains the proofs and technical results of the three-dimensional setting, and Appendix E presents the performed experiments in more detail.

Preliminaries
In this section, we introduce the notation and assumptions employed in the paper (Section 2.1) and we survey the main related works (Section 2.2).

Notation and Assumptions
Let (X, Y) be random variables with joint probability distribution P_{X,Y}, where X ∈ R^D is the D-dimensional vector of features and Y ∈ R is the scalar target of a supervised learning regression problem. Given N data sampled from the distribution P_{X,Y}, we denote the corresponding feature matrix as X ∈ R^{N×D} and the target vector as Y ∈ R^N. Each element of the random vector X is denoted with x_i and it is called a feature of the ML problem. We denote as y the scalar target random variable and with σ²_y and σ̂²_y its variance and sample variance. For each pair of random variables a, b, we denote with σ²_a, cov(a, b) and ρ_{a,b} respectively the variance of the random variable a and its covariance and correlation with the random variable b. Their estimators are σ̂²_a, ĉov(a, b) and ρ̂_{a,b}. Finally, the expected value and the variance operators applied to a function f(a) of a random variable a w.r.t. its distribution are denoted with E_a[f(a)] and var_a(f(a)).
A dimensionality reduction method can be seen as a function φ : R^{N×D} → R^{N×d}, mapping the original feature matrix X with dimensionality D into a reduced dataset U = φ(X) ∈ R^{N×d} with d < D. The goal of this projection is to reduce the (possibly huge) dimensionality of the original dataset while keeping as much information as possible in the reduced dataset. This is usually done by preserving a distance (e.g., Euclidean, geodesic) or the probability that a point keeps the same neighbours after the projection (Zaki and Jr., 2014).
In this paper, we assume a linear dependence between the features X and the target Y, i.e., Y = wᵀX + ε, where ε is a zero-mean noise, independent of X, and w ∈ R^D is the weight vector. Without loss of generality, the expected value of each feature is assumed to be zero, i.e., E[x_i] = E[Y] = 0 ∀i ∈ {1, . . . , D}. Finally, we consider linear regression as the ML method: the i-th estimated coefficient is denoted with ŵ_i, the estimated noise with ε̂, and the predicted (scalar) target with ŷ.

Unsupervised Dimensionality Reduction
Classical dimensionality reduction methods can be considered unsupervised learning techniques: in general, they do not take the target into account, but focus on projecting the dataset X through the minimization of a given loss.
The most popular unsupervised linear dimensionality reduction technique is Principal Component Analysis (PCA) (Pearson, 1901; Hotelling, 1933), a linear method that embeds the data into a linear subspace of dimension d describing as much as possible of the variance in the original dataset. One of the main difficulties of applying PCA in real problems is that it performs linear combinations of possibly all the D features, usually with different coefficients, losing the interpretability of each principal component and suffering from the curse of dimensionality. To overcome this issue, there exist variants like svPCA (Ulfarsson and Solo, 2011), which forces most of the weights of the projection to be zero. This contrasts with the approach proposed in this paper, which aims to preserve interpretability while exploiting the information yielded by each feature. Several further variants exist to overcome other issues of PCA (e.g., out-of-sample generalization, linearity, sensitivity to outliers), along with methods that approach the problem from a different perspective (e.g., the generative approach of Factor Analysis, the independence-based approach of Independent Component Analysis, matrix factorization with SVD); an extensive overview can be found in (Sorzano et al, 2014), and a broader overview of linear dimensionality reduction techniques in (Cunningham and Ghahramani, 2015). Specifically, SVD (Golub and Reinsch, 1970) leads to the same result as PCA from an algebraic perspective through matrix decomposition. Factor Analysis (Thurstone, 1931) assumes that the features are generated from a smaller set of latent variables, called factors, and tries to identify them by looking at the covariance matrix. Both PCA and Factor Analysis can reduce, through rotations, the number of features that are combined in each reduced component to improve interpretability, but the coefficients can still be different and hard to interpret. Finally, Independent Component Analysis (Hyvarinen, 1999) is an information-theoretic approach that looks for independent components (not only uncorrelated, as in PCA) that are not constrained to be orthogonal. This method is more focused on separating different signals mixed among the features than on reducing their dimensionality; the latter can be done as a subsequent step with feature selection, simplified by the fact that the new features are independent.
Differently from the linear nature of PCA, many non-linear approaches exist, following the idea that the data can be projected onto non-linear manifolds. Some of them optimize a convex objective function (usually solvable through a generalized eigenproblem), trying to preserve either the global similarity of the data (e.g., Isomap (Tenenbaum et al, 2000), Kernel PCA (Shawe-Taylor and Cristianini, 2004), Kernel Entropy Component Analysis (Jenssen, 2010), MVU (Weinberger et al, 2004), Diffusion Maps (Lafon and Lee, 2006)) or its local similarity (LLE (Roweis and Saul, 2000), Laplacian Eigenmaps (Belkin and Niyogi, 2001), LTSA (Zhang and Zha, 2004)). Other methods optimize a non-convex objective function with the purpose of rescaling the Euclidean distance (Sammon Mapping (Sammon, 1969)), introducing more complex structures like neural networks (Multilayer Autoencoders (Hinton and Salakhutdinov, 2006)), or aligning mixtures of models (LLC (Teh and Roweis, 2002)). Since in this paper we assume linearity, and non-linear dimensionality reduction techniques cannot outperform traditional linear techniques such as PCA and its variants in linear contexts (Van Der Maaten et al, 2009) or with real data (Espadoto et al, 2021), they will not be compared to the proposed method. On the contrary, classical PCA is one of the most applied linear unsupervised dimensionality reduction techniques in ML applications; therefore, it will be compared with the proposed algorithm LinCFA in the experimental section.

Supervised Dimensionality Reduction
Supervised dimensionality reduction is a less-known but powerful approach when the main goal is to perform classification or regression rather than learning a data projection into a lower-dimensional space. The methods of this subfield are usually based on classical unsupervised dimensionality reduction, adding the regression or classification loss in the optimization phase. In this way, the reduced dataset U is the specific projection that maximizes the performance on the considered supervised problem. This is usually done in classification settings, minimizing the distance within the same class and maximizing the distance between different classes, in the same fashion as Linear Discriminant Analysis (Fisher, 1936). The other possible approach is to directly integrate the loss function for classification or regression. Following the taxonomy presented in (Chao et al, 2019), these supervised approaches can be divided into PCA-based, NMF-based (mostly linear), and manifold-based (mostly non-linear).
A well-known PCA-based algorithm is Supervised PCA. The most straightforward approach of this kind has been proposed in (Bair et al, 2006): a heuristic that applies classical PCA only to the subset of features most related to the target. A more advanced approach can be found in (Barshan et al, 2011), where the original dataset is orthogonally projected onto a space where the features are uncorrelated, simultaneously maximizing the dependency between the reduced dataset and the target by exploiting the Hilbert-Schmidt independence criterion. The goal of Supervised PCA is similar to that of the algorithm proposed in this paper. The main difference is that we are not looking for an orthogonal projection, but we aggregate features by computing their means (thus, two projected features can be correlated) to preserve interpretability. Many variants of Supervised PCA exist, e.g., to make the projection non-linear or to handle missing values (Yu et al, 2006). Since it is defined in the same context (linear) and has the same final purpose (minimizing the mean squared regression error), Supervised PCA will be compared with the approach proposed in this paper in the experimental section. NMF-based algorithms (Jing et al, 2012; Lu et al, 2017) have better interpretability than PCA-based ones, but they rely on the non-negativity of the features, which is not a general property of linear problems. Manifold-based methods (Ribeiro et al, 2008; Zhang et al, 2018; Zhang, 2009; Raducanu and Dornaika, 2012), on the other hand, perform non-linear projections with higher computational costs. Therefore, both families of techniques will not be considered in this linear context.

Proposed Methodology
In this section, we introduce the proposed dimensionality reduction algorithm, named Linear Correlated Features Aggregation (LinCFA), from a general perspective. The approach is based on the following simple idea. Starting from the features x_i of the D-dimensional vector X, we build the aggregated features u_k of the d-dimensional vector U. The dimensionality reduction function φ is fully determined by a partition P = {P₁, . . . , P_d} of the set of features {x₁, . . . , x_D}. In particular, each feature x_i is assigned to a set P_k ∈ P, and each feature u_k is computed as the average of the features in the k-th set of P:

u_k = (1/|P_k|) Σ_{x_i ∈ P_k} x_i.   (1)

In the following sections, we will focus on finding theoretical guarantees to determine how to build the partition P. Intuitively, two features will belong to the same element of the partition P if their correlation is larger than a threshold. This threshold is formalized as the minimum correlation for which the Mean Squared Error (MSE) of the regression with a single aggregated feature (i.e., the average) is not worse than the MSE with the two separate features.¹ In particular, it is possible to decompose the MSE as follows (bias-variance decomposition (Hastie et al, 2009)):

MSE = E_{x,y}[E_T[(y − h_T(x))²]] = E_x[(h̄(x) − ȳ)²] + E_x[var_T(h_T(x))] + E_{x,y}[(y − ȳ)²],   (2)

where x, y are the features and the target of a test sample, T is the training set, h_T(·) is the ML model trained on dataset T, h̄(·) is its expected value w.r.t. the training set T, and ȳ is the expected value of the test target y w.r.t. the input features x. Decreasing the model complexity leads to a decrease in variance and an increase in bias. Therefore, in the analysis, we will compare these two variations and identify a threshold as the minimum value of correlation for which, after the aggregation, the decrease of variance is greater than or equal to the increase of bias, so that the MSE after the aggregation does not exceed the original one.
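The aggregation map φ induced by a partition P amounts to column-wise averaging. A minimal sketch follows, where the partition is fixed by hand for illustration rather than derived from the correlation threshold studied in the rest of the paper:

```python
import numpy as np

# Sketch of the aggregation step: each reduced feature is the average of the
# original features in one set of the partition P (here chosen by hand).
def aggregate(X: np.ndarray, partition: list[list[int]]) -> np.ndarray:
    """Map an (N, D) feature matrix to an (N, d) matrix of per-group averages."""
    return np.column_stack([X[:, group].mean(axis=1) for group in partition])

X = np.arange(12.0).reshape(4, 3)   # N=4 samples, D=3 features
U = aggregate(X, [[0, 1], [2]])     # aggregate features 0 and 1, keep 2 alone
print(U.shape)                      # (4, 2)
```

Each column of U is directly readable as "the average of a named group of original features", which is the interpretability property the method is designed around.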

Two-dimensional Analysis
This section introduces the theoretical analysis, performed in the bivariate setting, that identifies the minimum value of the correlation between the two features for which it is convenient to aggregate them with their mean. In particular, Subsection 4.1 introduces the assumptions under which the analysis is performed. Subsection 4.2 computes the amount of variance decreased when performing the aggregation. Then, Subsection 4.3 evaluates the amount of bias increased due to the aggregation. Finally, Subsection 4.4 combines the two results, identifying the minimum amount of correlation for which it is profitable to aggregate the two features. In addition, Appendix A contains the proofs and technical results that are not reported in the main paper, Appendix B includes an additional finite-sample analysis, and Appendix C computes confidence intervals that allow stating the results with only theoretical or empirical quantities.

¹ For this reason, the approach can be considered supervised.

Setting
In the two-dimensional case (D = 2), we consider the relationship between the two features x₁, x₂ and the target y to be linear and affected by Gaussian noise: y = w₁x₁ + w₂x₂ + ε, with ε ∼ N(0, σ²_ε). As usually done in linear regression (Johnson and Wichern, 2007), we assume the training dataset X to be known. Moreover, we recall the zero-mean assumption (E[x₁] = E[x₂] = 0). We compare the performance (in terms of bias and variance) of the two-dimensional linear regression ŷ = ŵ₁x₁ + ŵ₂x₂ with the one-dimensional linear regression, which takes as input the average of the two features: ŷ = ŵ(x₁ + x₂)/2 = ŵx̄. As a result of this analysis, we will define conditions under which aggregating the features x₁ and x₂ into the feature x̄ is convenient.
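The trade-off studied in this section can be probed with a small Monte Carlo sketch (all constants below are illustrative choices, not values prescribed by the paper): with strongly correlated features, few samples, and large noise, fitting a single coefficient on the average can achieve a lower test error than the full bivariate regression.

```python
import numpy as np

# Monte Carlo sketch of the bivariate setting y = w1*x1 + w2*x2 + eps:
# compare the test error (bias + variance, noiseless test target) of the
# bivariate fit against the univariate fit on the feature average.
rng = np.random.default_rng(0)
w1, w2, rho, sigma, n = 1.0, 1.2, 0.95, 2.0, 30   # illustrative constants

def mse(n_test=10_000, n_rep=500):
    cov = [[1, rho], [rho, 1]]
    Xte = rng.multivariate_normal([0, 0], cov, n_test)
    yte = Xte @ [w1, w2]                       # noiseless test target
    err2 = err1 = 0.0
    for _ in range(n_rep):
        X = rng.multivariate_normal([0, 0], cov, n)
        y = X @ [w1, w2] + rng.normal(0, sigma, n)
        w_hat = np.linalg.lstsq(X, y, rcond=None)[0]    # bivariate fit
        xbar = X.mean(axis=1)
        w_bar = (xbar @ y) / (xbar @ xbar)              # univariate fit on the average
        err2 += np.mean((Xte @ w_hat - yte) ** 2)
        err1 += np.mean((Xte.mean(axis=1) * w_bar - yte) ** 2)
    return err2 / n_rep, err1 / n_rep

m2, m1 = mse()
print(m2, m1)   # with rho close to 1 and large noise, m1 is smaller
```

This is exactly the comparison the following subsections make analytically: the bivariate model pays in variance what the univariate model pays in bias, and for large correlation the variance term dominates.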

Variance Analysis
In this subsection, we compare the variance of the two models with both an asymptotic and a finite-samples analysis. Since the two-dimensional model estimates two coefficients, it is expected to have a larger variance. Aggregating the two features, instead, reduces the variance of the model.

Variance of the estimators
A quantity necessary to compute the variance of the models compared throughout this subsection is the covariance matrix, w.r.t. the training set, of the vector ŵ of the estimated regression coefficients. Given the training features X, a known result in a general linear problem with n samples and D features (Johnson and Wichern, 2007) (see Appendix A for the computations) is:

var_T(ŵ) = σ²_ε (XᵀX)⁻¹.   (3)

The following lemma shows the variance of the weights for the two specific models that we are comparing.
Lemma 1 Let the real model be linear with respect to the features x₁ and x₂ (y = w₁x₁ + w₂x₂ + ε). In the one-dimensional case ŷ = ŵx̄, we have: In the two-dimensional case ŷ = ŵ₁x₁ + ŵ₂x₂, we have: (5)
Proof The proof of the two results follows from Equation (3); see Appendix A for the computations.
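Equation (3) can be checked numerically. The sketch below (illustrative constants; the design X is held fixed, as in the paper's conditioning on the training features) compares the empirical covariance of the OLS estimates over many noise draws with σ²_ε (XᵀX)⁻¹.

```python
import numpy as np

# Numerical check of Equation (3): conditionally on a fixed design X, the
# covariance of the OLS weight estimates equals sigma^2 (X^T X)^{-1}.
rng = np.random.default_rng(1)
n, sigma = 200, 0.5                      # illustrative constants
X = rng.normal(size=(n, 2))
w = np.array([1.0, -2.0])
P = np.linalg.pinv(X)                    # (X^T X)^{-1} X^T, the OLS operator
W = np.array([P @ (X @ w + rng.normal(0, sigma, n)) for _ in range(20_000)])
emp = np.cov(W.T)                        # empirical covariance of the estimates
theo = sigma**2 * np.linalg.inv(X.T @ X) # Equation (3)
print(np.max(np.abs(emp - theo)))        # should be close to zero
```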

Variance of the model
Recalling the general definition of the variance of the model from Equation (2), in the specific case of linear regression, where h_T(x) = ŵᵀx, it becomes:

E_x[var_T(h_T(x))] = E_x[xᵀ var_T(ŵ) x].   (6)

The following result shows the variance of the two specific models (univariate and bivariate) considered in this section.
Theorem 1 Let the real model be linear with respect to the two features x₁ and x₂ (y = w₁x₁ + w₂x₂ + ε). Then, in the one-dimensional case ŷ = ŵ(x₁+x₂)/2 = ŵx̄, we have: In the two-dimensional case ŷ = ŵ₁x₁ + ŵ₂x₂, we have:
Proof The proof combines the results of Lemma 1 with the definition of variance for a linear model given in Equation (6). The detailed proof can be found in Appendix A.

Comparisons
In this subsection, we show the difference between the variance of the linear regression with the two features x₁ and x₂ and the variance of the linear regression with the single feature x̄ = (x₁+x₂)/2. We will prove that, as expected, this difference is positive, and that it represents the reduction of variance obtained when substituting a two-dimensional random vector with the average of its components.
First, the asymptotic analysis is performed, obtaining a result that can be applied with good approximation when a large number of samples n is available. Then, the analysis is repeated in the finite-samples setting, with an additional assumption on the variance and sample variance of the features x₁ and x₂ that simplifies the computations.²

Case I: asymptotic analysis In the limit of the number of samples n of the training dataset T approaching infinity, the estimators that we are considering are consistent, i.e., they converge in probability to the real values of the parameters (e.g., plim_{n→∞} σ̂²_{x₁} = σ²_{x₁}). Therefore, the following result can be proved.
Theorem 2 Let ∆^{n→∞}_var be the difference between the variance of the two-dimensional and the one-dimensional linear models as the number of samples n tends to infinity. It is equal to:

∆^{n→∞}_var = σ²_ε / (n − 1),   (9)

which is a positive quantity and tends to zero when the number of samples tends to infinity.
Proof The result follows from the difference between Equations (8) and (7), exploiting the consistency of the estimators.
Case II: finite-samples analysis with equal variance and sample variance For the finite-samples analysis, we add the following assumption to simplify the computations:
Theorem 3 If the conditions of Equation (10) hold, let ∆_var be the difference between the variance of the two-dimensional and the one-dimensional linear models; it is always non-negative and it is equal to:
Proof The proof starts again from the variances of the two models found in Theorem 1 and performs algebraic computations exploiting the assumption stated in Equation (10). All the steps can be found in Appendix A.
Remark 1 When the number of samples n tends to infinity, the result of Equation (11) reduces to the asymptotic case, as in Equation (9).
Remark 2 The quantities found in Theorems 2 and 3 are always non-negative, meaning that the variance in the two-dimensional case is always greater than or equal to that of the corresponding one-dimensional version, as expected.
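Remark 2 can also be verified by simulation. The sketch below (illustrative constants, fixed design, as in the paper's conditioning on X) estimates the prediction variance of each model at a single test point over repeated noise draws and checks that the bivariate model's variance is larger.

```python
import numpy as np

# Monte Carlo check of Remark 2: the prediction variance of the bivariate fit
# exceeds that of the univariate fit on the average. Constants are illustrative.
rng = np.random.default_rng(0)
n, rho, sigma = 50, 0.8, 1.0
X = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n)  # fixed design
xbar = X.mean(axis=1)
x_test = np.array([0.7, 0.3])
preds2, preds1 = [], []
for _ in range(5000):
    y = X @ [1.0, 1.2] + rng.normal(0, sigma, n)
    preds2.append(x_test @ np.linalg.lstsq(X, y, rcond=None)[0])   # bivariate
    preds1.append(x_test.mean() * (xbar @ y) / (xbar @ xbar))      # univariate
print(np.var(preds2) - np.var(preds1))   # positive: aggregation reduces variance
```

For a fixed design, this ordering is in fact deterministic (it follows from the Cauchy-Schwarz inequality applied to XᵀX); the simulation only adds Monte Carlo noise on top of it.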

Bias Analysis
In this subsection, we compare the (squared) bias of the two models under examination with both an asymptotic and a finite-samples analysis, as done in the previous subsection for the variance. Since the two-dimensional model corresponds to a larger hypothesis space, it is expected to have a lower bias than the one-dimensional one.
The procedure to derive the difference between the biases is similar to the one followed for the variance. The first step is to compute the expected value w.r.t. the training set T of the vector ŵ of the regression coefficient estimates, given the training features X. This is used to compute the bias of the models. In particular, in Equation (2), we defined the (squared) bias as E_x[(h̄(x) − ȳ)²]. Starting from this definition, the bias of the one-dimensional case ŷ = ŵx̄ is computed. Moreover, for the two-dimensional case ŷ = ŵ₁x₁ + ŵ₂x₂, the model is clearly unbiased. Detailed computations can be found in Appendix A.
After the derivation of the bias of the models, the same asymptotic and finite-samples analysis performed on the variance is repeated in this section for the (squared) bias. Since the two-dimensional model is unbiased, we can conclude that the increase of the bias component of the loss, when the two features are substituted with their mean, is equal to the bias of the one-dimensional model.
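The two claims above (bivariate model unbiased, univariate model biased when w₁ ≠ w₂) can be illustrated numerically. In the sketch below (illustrative constants), the test point has zero average, so the univariate model necessarily predicts 0 there, while the true regression function does not vanish.

```python
import numpy as np

# Numerical illustration: conditionally on X, the bivariate OLS fit is unbiased,
# while the univariate fit on the average is biased when w1 != w2.
rng = np.random.default_rng(0)
n, rho = 200, 0.5
X = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n)
xbar = X.mean(axis=1)
w = np.array([1.0, 3.0])                 # deliberately different coefficients
x_test = np.array([1.0, -1.0])           # zero average: univariate model predicts 0
E2 = np.mean([x_test @ np.linalg.lstsq(X, X @ w + rng.normal(0, 1, n), rcond=None)[0]
              for _ in range(4000)])
E1 = np.mean([x_test.mean() * (xbar @ (X @ w + rng.normal(0, 1, n))) / (xbar @ xbar)
              for _ in range(4000)])
print(E2, x_test @ w, E1)   # E2 recovers the true value; E1 misses it
```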

Case I: asymptotic analysis
When the number of samples n of the training dataset T approaches infinity, recalling that the considered estimators converge in probability to the real values of the parameters, the following result holds.
Theorem 4 Let ∆^{n→∞}_bias be the difference between the bias of the one-dimensional and the two-dimensional models as the number of samples n tends to infinity. It is equal to: where the second equality holds if σ_{x₁} = σ_{x₂} = 1.
Proof The proof starts from the bias of the two models computed in Appendix A and exploits the fact that in the limit n → ∞, it is possible to substitute every sample estimator with the real quantity of the parameters because they are consistent estimators.Details can be found in Appendix A.
Case II: finite-samples analysis with equal variance and sample variance In the finite-samples case, we repeat the analysis performed for the variance, i.e., under the assumptions of Equation (10).
Theorem 5 If the conditions of Equation (10) hold, let ∆_bias be the difference between the (squared) bias of the one-dimensional and the two-dimensional linear models; it is equal to:
Proof The proof starts from the biases of the two models and performs algebraic computations exploiting the assumptions of Equation (10). All the steps can be found in Appendix A.
Remark 3 When the number of samples n tends to infinity, the result in Equation (15) reduces to the asymptotic case, as in Theorem 4.
Remark 4 Some observations are in order:
• As expected, the quantities found in Theorems 4 and 5 are always non-negative, since the hypothesis space of the univariate model is a subset of that of the bivariate model.
• We observe that ∆_bias = 0 if ρ_{x₁,x₂} = 1. Indeed, when the two variables are perfectly (positively) correlated, their coefficients in the linear regression are equal; therefore, there is no loss of information in their aggregation.
• Finally, when the two regression coefficients are equal (w₁ = w₂), there is no increase of bias due to the aggregation, since it is enough to learn a single coefficient w to obtain the same performance as the bivariate model.

Correlation Threshold
This subsection concludes the analysis with two features by comparing the reduction of variance with the increase of bias when aggregating the two features x₁ and x₂ with their average x̄ = (x₁+x₂)/2. In conclusion, the result shows when it is convenient, in terms of mean squared error, to aggregate the two features with their mean.
Considering the asymptotic case, the following theorem compares bias and variance of the models.
Theorem 6 When the number of samples n tends to infinity and the relationship between the features and the target is linear with Gaussian noise, the decrease of variance is greater than the increase of (squared) bias when the two features x₁ and x₂ are aggregated with their average if and only if: which, for σ_{x₁} = σ_{x₂} = 1, becomes:
Proof The result follows by computing the difference between Equations (9) and (13).
In the finite-samples setting, with the additional assumptions of Equation (10), the following theorem shows the result of the comparison between the bias and variance of the two models.
Theorem 7 Let the variance and sample variance of the features x₁ and x₂ be equal (Equation (10)) and the relationship between the features and the target be linear with Gaussian noise. The decrease of variance is greater than the increase of (squared) bias when the two features x₁ and x₂ are aggregated with their average if and only if: which, for σ_x = 1, becomes:
Proof The result follows by computing the difference between Equations (11) and (15).
Remark 5 The results of Theorems 6 and 7 comply with the intuition that, in a linear setting with two features, they should be aggregated if their correlation is large enough.
Remark 6 With unitary sample variances, Theorems 6 and 7 produce the same threshold in both the finite-samples and the asymptotic settings.
In conclusion, the thresholds found in Theorems 6 and 7 show that it is profitable, in terms of MSE, to aggregate two variables in a bivariate linear setting with Gaussian noise if:
• the variance of the noise σ²_ε is large, which means that the process is noisy and the variance should be reduced;
• the number of samples n is small: in this case there is little knowledge about the actual model, so it is better to learn one parameter rather than two;
• the difference between the two coefficients w₁ − w₂ is small, which implies that they are similar, and learning a single coefficient introduces little loss of information.
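To make the intended use concrete, here is a hedged sketch of a LinCFA-style greedy pass over the features. The data-driven threshold of Theorems 6 and 7 is not reproduced here; a user-supplied constant `tau` stands in for it, and all names and constants are illustrative.

```python
import numpy as np

# Illustrative sketch of a greedy aggregation pass: a feature joins an existing
# group when its correlation with the group's current average exceeds tau
# (a stand-in for the theoretical threshold of Theorems 6-7).
def lincfa_partition(X: np.ndarray, tau: float) -> list[list[int]]:
    groups: list[list[int]] = []
    for j in range(X.shape[1]):
        for g in groups:
            rep = X[:, g].mean(axis=1)                  # current group average
            if np.corrcoef(rep, X[:, j])[0, 1] >= tau:
                g.append(j)
                break
        else:                                           # no group accepted feature j
            groups.append([j])
    return groups

rng = np.random.default_rng(0)
z = rng.normal(size=1000)
X = np.column_stack([z + 0.1 * rng.normal(size=1000) for _ in range(3)]
                    + [rng.normal(size=1000)])          # 3 near-duplicates + 1 independent
print(lincfa_partition(X, tau=0.9))                     # expect [[0, 1, 2], [3]]
```

The three noisy copies of the same signal are merged into one group (and would be replaced by their average, as in Equation (1)), while the independent feature stays alone.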
Generalization: Three-dimensional and D-dimensional Analysis

In the previous section, we focused on aggregating two features in a bivariate setting. In this section, we extend that approach to three features. Starting from the related results, we will straightforwardly extend them to a general problem with D features. Because of the complexity of the computations, we focus on the asymptotic analysis only. After the analysis, we conclude this section with the main algorithm proposed in this paper: Linear Correlated Features Aggregation (LinCFA).

Three-dimensional case
In the three-dimensional case (D = 3), we consider the relationship between the three features and the target to be linear with Gaussian noise: y = w₁x₁ + w₂x₂ + w₃x₃ + ε, with ε ∼ N(0, σ²_ε). In accordance with the previous analysis, we assume the training dataset X = [x₁ x₂ x₃] to be known, and we recall the zero-mean assumption (E[x₁] = E[x₂] = E[x₃] = 0). In this setting, and for the general D-dimensional setting of the next subsection, which will be a direct application of this one, we compare the performance of the bivariate linear regression ŷ = ŵᵢxᵢ + ŵⱼxⱼ of each pair of features xᵢ, xⱼ with the univariate linear regression that considers their average ŷ = ŵ(xᵢ+xⱼ)/2 = ŵx̄, to decide whether it is convenient to aggregate them in terms of MSE. Indeed, extending the dimension from D = 2 to a general dimension D and comparing all the possible models where groups of variables are aggregated would be combinatorial in the number of features, and therefore impractical. Comparing the full D-dimensional regression model with the (D−1)-dimensional model where two variables are aggregated is also impractical: when the number of features is huge, in addition to a polynomial computational cost, both models suffer from issues like the curse of dimensionality and the risk of overfitting.
To simplify the exposition, in the theoretical analysis we will consider x_i = x_1 and x_j = x_2, without loss of generality.

Variance
The following theorem shows the asymptotic difference of variance between the two considered linear regression models.
Theorem 8 states that, for the one-dimensional linear regression ŷ = ŵ (x_1 + x_2)/2 and for the two-dimensional linear regression ŷ = ŵ_1 x_1 + ŵ_2 x_2, the variances of the estimators coincide with those of the bivariate case (Lemma 1).
Proof The results follow from the general expression of the variance of the estimators in Equation (3), substituting the design matrices of the two considered models respectively.
Remark 7 Since the linear regression models are the same as in the bivariate case, starting from the result of Theorem 8, the variance of the estimators in the two cases remains the same as in Lemma 1, and the asymptotic difference of variances remains that of Theorem 2 (∆_var^{n→∞} = σ²/(n−1)).

Bias
This subsection introduces the asymptotic difference of bias between the two considered linear regression models in the three-dimensional setting.
As in the bivariate setting, the first step is to calculate the bias of each of the two considered models. In the asymptotic case, assuming unitary variances of the features (σ_{x_1} = σ_{x_2} = σ_{x_3} = 1), for the one-dimensional regression ŷ = ŵ (x_1 + x_2)/2 and for the two-dimensional regression ŷ = ŵ_1 x_1 + ŵ_2 x_2, the (squared) bias E_x[(h̄(x) − ȳ)²] can be expressed with two functions, denoted respectively by F(·) and G(·). These functions depend on the three features, their coefficients and their correlations. The exact expressions of the two biases and the related proofs can be found in Appendix D.
For the extension to the D-dimensional case, it will be necessary to let the feature x_3 have a general variance σ²_{x_3}. With small changes in the algebraic computations of the proof, the bias of the two models can be easily extended (see Appendix D for the details).
From the results obtained, it is possible to compute the increase of bias due to the aggregation of the two variables x_1, x_2 with their average x̄ = (x_1 + x_2)/2.
Theorem 9 In the asymptotic setting, let the relationship between the features and the target be linear with Gaussian noise. Assuming unitary variances of the features (σ_{x_1} = σ_{x_2} = σ_{x_3} = 1), the increase of bias due to the aggregation of the features x_1 and x_2 with their average is given by:
Proof The result follows from the difference of the biases of the two models, after algebraic computations.
Remark 8 Letting the feature x_3 have a general variance σ²_{x_3}, with small changes in the algebraic computations of the proof, the difference of biases is given by Equation (23).

Correlation threshold
The following theorem extends the result of Theorem 6 to the three-dimensional setting.
Theorem 10 In the asymptotic setting, let the relationship between the features and the target be linear with Gaussian noise. Assuming unitary variances of the features (σ_{x_1} = σ_{x_2} = σ_{x_3} = 1), the decrease of variance is greater than the increase of (squared) bias due to the aggregation of the features x_1 and x_2 with their average if and only if the condition of Equation (24) holds.
Proof Recalling the asymptotic difference of variances from Remark 7 and the difference of biases from Theorem 9, the result follows after algebraic computations on the difference ∆_var^{n→∞} − ∆_Bias^{n→∞} ≥ 0.
Remark 9 Equation (24) also holds in the case of a generic variance σ²_{x_3} of the feature x_3, with the only difference that the term b changes accordingly.
Remark 10 The result obtained in this section with three features is more difficult to interpret than the bivariate one. However, if the two features x_1 and x_2 are uncorrelated with the third feature x_3, or if they have the same correlation with it (ρ_{x_1,x_3} = ρ_{x_2,x_3}), then Equation (24) reduces to the threshold found in the bivariate asymptotic analysis (Equation (17)).
Remark 11 Since the analysis is asymptotic, the theoretical quantities in Equation (24) can be substituted with their consistent estimators when the number of samples n is large.

D-dimensional case
This last subsection of the analysis shows the generalization from three to D dimensions. In particular, we assume the relationship between the D features x_1, ..., x_D and the target to be linear with Gaussian noise, y = w_1 x_1 + ... + w_D x_D + ε, with ε ∼ N(0, σ²). As done throughout the paper, we assume the training dataset X = [x_1 ... x_D] to be known and, from the zero-mean assumption, E[y] = 0 and σ²_y = σ². As discussed for the three-dimensional case, we compare the performance (in terms of bias and variance) of the two-dimensional linear regression ŷ = ŵ_i x_i + ŵ_j x_j with the one-dimensional linear regression ŷ = ŵ (x_i + x_j)/2 = ŵ x̄ and, in the computations, we consider x_i = x_1, x_j = x_2 without loss of generality.
It is possible to directly extend the three-dimensional analysis of the previous subsection to this general case by considering the model y = w_1 x_1 + w_2 x_2 + w̃ x̃ + ε, with w̃ = 1 and x̃ = w_3 x_3 + ... + w_D x_D. Recalling that in this case the third feature x̃ has a general variance σ²_{x̃}, the following lemma holds.
Then, performing linear regression in the asymptotic setting, the decrease of variance is greater than the increase of bias when aggregating the two features x_1 and x_2 with their average if and only if the condition on the correlation of Equation (24) holds.
Proof The lemma follows by applying the three-dimensional analysis with a general variance of the third feature (Theorem 10 and Remark 9).

D-dimensional algorithm
For the general D-dimensional case, as explained in the previous subsection, the three-dimensional results can be extended by considering as third feature the linear combination of the D − 2 features not currently considered for the aggregation. A drawback of applying this result in practice is that it requires the knowledge of all the coefficients w_1, ..., w_D, which is unrealistic, or their approximation through estimates obtained by performing linear regression on the complete D-dimensional dataset. In this case, the computational cost is O(n · D² + D³), which becomes O(n · D² + D^2.37) using the Coppersmith-Winograd algorithm (Coppersmith and Winograd, 1990), and it is impractical with a huge number of features. Therefore, since the equation of the three-dimensional asymptotic analysis reduces to the bivariate one when the two features have the same correlation with the third (Remark 10), it is reasonable, when they are highly correlated, to assume this condition to hold and to apply the asymptotic bivariate result of Equation (17) to decide whether the two features should be aggregated or not. In this way, we iteratively try all pairs of features, with complexity O(n · D²) in the worst case, in order to choose the groups of features that it is convenient to aggregate with their mean.

Algorithm
Algorithm 1 reports the pseudo-code of the proposed algorithm, Linear Correlated Features Aggregation (LinCFA). The proposed dimensionality reduction algorithm creates a d-dimensional partition of the indices of the features {1, ..., D} by iteratively comparing couples of features and adding them to the same subset if their correlation (correlation(x_i, x_j)) is greater than the threshold (threshold(x_i, x_j, y)) obtained from Equation (17). Then, it aggregates the features in each set k of the partition P with their average, producing each output x̄_k.
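The aggregation step can be sketched as follows. This is a simplified reading of Algorithm 1, not the authors' implementation: the threshold of Equation (17) is left as a user-supplied callable `threshold(x_i, x_j, y)`, and a union-find structure stands in for the partition P built by the pairwise comparisons.

```python
import numpy as np

def lincfa(X, y, threshold):
    """Sketch of the LinCFA aggregation step.

    X: (n, D) feature matrix; y: (n,) target.
    threshold: callable (xi, xj, y) -> float, playing the role of the
               correlation threshold of Equation (17).
    Returns the reduced (n, d) matrix and the partition of feature indices.
    """
    n, D = X.shape
    parent = list(range(D))              # union-find over feature indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    # compare every couple of features; highly correlated ones share a subset
    for i in range(D):
        for j in range(i + 1, D):
            rho = np.corrcoef(X[:, i], X[:, j])[0, 1]
            if rho > threshold(X[:, i], X[:, j], y):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(D):
        groups.setdefault(find(i), []).append(i)
    partition = list(groups.values())
    # each reduced feature is the average of the features in its subset
    X_red = np.column_stack([X[:, g].mean(axis=1) for g in partition])
    return X_red, partition
```

As a toy check, a constant threshold merges near-duplicate columns: with `threshold = lambda a, b, y: 0.9`, features whose correlation exceeds 0.9 end up in the same subset and are replaced by their average.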

Numerical Validation
In this section, the theoretical results obtained in Sections 4 and 5 are exploited to perform dimensionality reduction on synthetic datasets of two, three and D dimensions. Furthermore, the proposed dimensionality reduction approach LinCFA is applied to four real datasets and compared with PCA and Supervised-PCA. To evaluate the computed linear regressions, the results are reported in terms of Mean Squared Error (MSE) and R-squared (R²). In the bivariate setting, according to Equations (17) and (19), it is convenient to aggregate the two features with a small number of samples n, with a small absolute value of the difference between the coefficients w_1, w_2 of the linear model, or with a large variance of the noise σ². The synthetic experiments (full description in Appendix E) confirm the theoretical result with data. In particular, they are performed with a fixed number of samples n = 500, a fixed correlation between the features ρ_{x_1,x_2} ≈ 0.9, comparing two combinations of weights (at small and large distance) and three different variances of the noise (small, normal, large).

Two-dimensional application
Table 1 shows the results of the experiments (more detailed results can be found in Tables E1 and E2 of Appendix E). In line with the theory, when the weights in the linear model are considerably distant, the threshold is far from 1 only with a huge variance of the noise, and only in that case are the two features aggregated, while for a reasonably small amount of noise variance they are kept separate. On the other hand, when the weights in the linear model are similar, the threshold of Equation (17) is small and the conclusion is to aggregate the two features even with a small amount of noise variance. The confidence intervals on the R² and on the MSE confirm that, when the correlation is above the threshold, the performance of the linear model in which the two features are aggregated with their average is statistically not worse than that of the bivariate model in which they are kept separate. It is finally important to notice that knowing the coefficients of the regression leads to the same decision, aggregate or not, in all the 500 repetitions of the experiment (row "# aggregations (theo)"). On the contrary, estimating the coefficients from data leads to the same action in most repetitions but not in all of them (row "# aggregations (emp)"), since the limited amount of data introduces noise into the estimates.

Equation (24) expresses the interval for which it is convenient to aggregate the two features x_1 and x_2 in the three-dimensional setting. As in the bivariate case, it is related to the number of samples, the difference between weights, and the variance of the noise. In addition, it also depends on the difference between the correlations of each of the two features with the third one x_3 and on the weight w_3. The experiment performed in this setting is based on synthetic data, computed with the following realistic setting: the weights w_1 = 0.4, w_2 = 0.6 are closer to each other than to w_3 = 0.2. Moreover, the two features are significantly correlated: ρ_{x_1,x_2} ≈ 0.88 (more details can be found in Appendix E).

Three-dimensional application
In this setting, as shown in Table 2, it is convenient to aggregate the two features x_1, x_2 with their average both in terms of MSE and R², since the aggregation does not worsen the performance. In particular, the aggregation is already convenient with a small standard deviation of the noise (σ = 0.5).

The next subsection presents the D-dimensional synthetic experiment, performed 500 times with n = 500 samples and D = 100 features, reduced with the proposed algorithm LinCFA (more details can be found in Appendix E).

D-dimensional application
The test results shown in Table 3 underline that knowing the real values of the coefficients of the linear model would lead to a reduced dataset of d = 4 features and a significant increase of performance (R² aggregate (theo), MSE aggregate (theo)), while using the empirical coefficients the dimension is reduced to d = 15, still with a significant increase of performance both in terms of MSE and R² (R² aggregate (emp), MSE aggregate (emp)). This is a satisfactory result, confirmed by the real-dataset application described below.

To better understand the behavior of the algorithm, Figure 1 reports the number of selected features and the regression scores. Figure 1a shows the number of reduced features for different numbers of samples, while Figures 1b and 1c show the regression performance in terms of R² and MSE: blue lines refer to the linear regression with all the original features, while red and green lines respectively refer to the linear regression on the features reduced by the proposed algorithm, considering theoretical and empirical quantities. From Figure 1a it is clear that, with a small number of samples, both with theoretical and with empirical quantities, the number of reduced features d becomes smaller to prevent overfitting. Moreover, considering the empirical quantities, which are the only ones available in practice, leads to a larger number of reduced features (but still significantly smaller than the original dimension D).
Figures 1b and 1c show the performance of the linear regression on the reduced features compared with the full dataset. When the number of samples is significantly larger than the number of features, the performance on the reduced datasets is only slightly better; but when the number of samples is of the same order of magnitude as the number of features, the reduced datasets (with both empirical and theoretical quantities) significantly outperform the regression over the full dataset. Moreover, the regression performed on the reduced datasets is much more robust, since its score is stable across different numbers of samples.
The main practical result introduced in this paper (the LinCFA algorithm) has also been tested on four real datasets. In particular, the results of the proposed dimensionality reduction method are discussed in comparison with the chosen baselines: PCA and Supervised-PCA. Specifically, the algorithm has been applied to a dataset of 18 features predicting life expectancy and to three large-dimensional datasets related to finance and climate (the datasets are described in more detail in Appendix E). Table 4 shows the MSE and R² coefficients obtained with linear regression applied on the full dataset, on the dataset reduced by LinCFA, and on the datasets reduced by PCA and Supervised-PCA. The number of components selected for PCA is set to explain 95% of the variance, while for Supervised-PCA the best result is reported (evaluating from d = 1 to d = 50 principal components).
It is possible to notice that in the first case, when the number of features is low, the results of the full regression and of the regressions on the datasets reduced by Supervised-PCA, PCA and LinCFA are similar. On the other hand, when the algorithms are applied to the large-dimensional data, the algorithm that we propose obtains slightly better performance than the other two methods. Therefore, the main result of this paper is able, in linear settings, to reduce the dimensionality of the input features while improving (or not worsening) the performance of the linear model and, most importantly, preserving the interpretability of the reduced features.

Conclusion and Future Work
This paper presents a dimensionality reduction algorithm for linear settings with the theoretical guarantee that the reduced dataset does not perform worse than the full dataset in terms of MSE, the decrease of variance being larger than the increase of bias due to the aggregation of features. The main strength of the proposed approach is that it aggregates features through their mean, which reduces the dimension while preserving the interpretability of each feature, which is not common in traditional dimensionality reduction approaches like PCA. Moreover, the complexity of the proposed algorithm is lower than that of performing a linear regression on the full original dataset. The main weaknesses of the proposed method are that all the computations assume the features to be continuous and the relationship between the target and the features to be linear, which is a strong assumption in real-world applications. However, the empirical results show an increase in performance and a significant reduction of dimensionality when applying the proposed algorithm to real-world datasets on which linear regression performs well.
In future work, it may be interesting to relax the linearity assumption, considering the target as a general function of the input features and applying a general machine learning method to the data. Another possible way to enrich the results obtained in this paper is to consider structured data, where prior knowledge of the relationships can help identify the most suitable features for aggregation (e.g., in climatological data, features registered by two adjacent sensors are more likely to be aggregated).

Appendix A Two-dimensional analysis: additional proofs and results
This section shows proofs and additional technical results that are not reported in Section 4 to keep the exposition clear.

A.1 Variance
This subsection contains some additional proofs related to the bivariate analysis of variance presented in the main paper.
Proof of Equation (3) Given the training set of features X and target y, in a linear regression model the estimated weights are computed as ŵ = (XᵀX)⁻¹Xᵀy. Therefore, var_T(ŵ|X) = (XᵀX)⁻¹Xᵀ var_T(y|X) X(XᵀX)⁻¹. Since (XᵀX)⁻¹XᵀX = I and var_T(y|X) = σ² by hypothesis, the result follows.
Proof of Lemma 1 To prove this result it is enough to start from Equation (3) and substitute the values of X.
For the one-dimensional setting, recalling that the expected value of the random variables x_1 and x_2 is zero by hypothesis, we have Σ_{i=1}^{n} (x̄^{(i)})² = (n − 1) σ̂²_{x̄}.
For the two-dimensional setting, the result follows by recalling again that the expected values of the random variables are zero. Therefore, for the one-dimensional regression, conditioning on the features training set X, and for the two-dimensional regression, conditioning again on the features training set, the stated expressions follow.
Proof of Theorem 3 From Theorem 1, the difference of variances between the two-dimensional and the one-dimensional cases, rewritten under the assumptions of Equation (10) and recalling that the same applies for the sample variance, reduces, after applying the common denominator, to the stated result.

A.2 Bias
This subsection contains some technical results and proofs used to compute the difference of biases between the two considered models in the bivariate linear setting of the main paper.

A.2.1 Expected value of the estimators
The expected value, with respect to the training set T, of the vector ŵ of the regression coefficient estimates is necessary for the computation of the bias of the models. Given the training features X, its known expression in a general problem y = f(X) + ε is given by (Johnson and Wichern, 2007): E_T[ŵ|X] = (XᵀX)⁻¹Xᵀf(X). (A1)
Proof Given the training set of features X and target y, in a linear regression model the estimated weights are computed as ŵ = (XᵀX)⁻¹Xᵀy. Therefore, E_T[ŵ|X] = (XᵀX)⁻¹Xᵀ E_T[y|X] = (XᵀX)⁻¹Xᵀ(f(X) + E_T[ε]) = (XᵀX)⁻¹Xᵀf(X), where the last equality holds since the expected value of the noise term is null by hypothesis.
The following lemma shows the expected value of the weights for the two models that we are considering.
Lemma 3 Let the real model be linear with respect to the features x_1 and x_2 (y = w_1 x_1 + w_2 x_2 + ε). Then, in the one-dimensional case ŷ = ŵ x̄, the estimator ŵ is in general biased, while in the two-dimensional case ŷ = ŵ_1 x_1 + ŵ_2 x_2 the estimators are unbiased: E_T[ŵ_1|X] = w_1 and E_T[ŵ_2|X] = w_2.
Proof To prove this result it is enough to apply Equation (A1) in the two settings.
In the one-dimensional case, with x̄ = (x_1 + x_2)/2, Equation (A1) specializes accordingly. Assuming the real model to be linear (y = w_1 x_1 + w_2 x_2 + ε) and remembering that the expected values of x_1, x_2 are equal to zero, it is possible to substitute the summations with sample variances and covariances, obtaining the result.
In the two-dimensional setting, the result follows from the general expression by substituting f(X) = Xw.

A.2.2 Bias of the model
In Equation (2), we defined the (squared) bias as E_x[(h̄(x) − ȳ)²]. The following result shows the bias of the two specific models considered in this section.
Theorem 11 Let the real model be linear with respect to the two features x_1, x_2 (y = w_1 x_1 + w_2 x_2 + ε). Then, in the one-dimensional case ŷ = ŵ x̄, the model has a non-trivial (squared) bias, while in the two-dimensional case ŷ = ŵ_1 x_1 + ŵ_2 x_2 the model is unbiased.
Proof The proof combines the results of Lemma 3 with the definition of bias given in Equation (A4).
Let us consider a training dataset T and a test sample (x, y). Given the definition of (squared) bias, in the one-dimensional case, conditioning on the features training set and exploiting the independence between training and test set, and then substituting the expression found in Lemma 3 for E_T[ŵ|X] in the one-dimensional setting, the result follows.
In the two-dimensional regression, the bias is computed by conditioning on the features training set, exploiting the independence between training and test set, and recalling from Lemma 3 that E_T[ŵ|X] = w, which shows that the model is unbiased.

A.2.3 Comparisons
The asymptotic and finite-sample results for the comparison of biases can be found in the main paper; this subsection reports the related proofs.
Proof of Theorem 4 Considering the bias of the two models computed in Theorem 11 and exploiting the consistency of the estimators, the difference between the one-dimensional and the two-dimensional model can be computed; the result then follows by the definition of covariance.
Proof of Theorem 5 From Theorem 11, the difference between the one-dimensional and the two-dimensional bias is equal to the one-dimensional bias, since the two-dimensional model is unbiased. Substituting the assumptions from Equation (10), after basic algebraic computations, the result follows.
Appendix B Two-dimensional analysis: additional setting

In the main paper, the finite-sample analysis in the two-dimensional case is performed under the assumption that the two features x_1, x_2 have, respectively, the same variance and the same sample variance. This section reports the results obtained after relaxing this hypothesis.
In particular, the only assumption for this finite-sample analysis is unitary variance (σ_{x_1} = σ_{x_2} = 1), implying by definition of covariance that cov(x_1, x_2) = ρ_{x_1,x_2}. This is not a significant restriction, since it is always possible to scale a random variable to unitary variance by dividing it by its standard deviation.
The following theorem shows the difference of variance between the two-dimensional and the one-dimensional linear regression models.
Theorem 12 In the finite case with unitary variances, the difference between the variance of the linear model with two features and that of the model with a single feature (their mean) is given by Equation (B8).
Proof Starting from the difference of variances between the two-dimensional and the one-dimensional model (Theorem 1), exploiting the unitary variance assumption, applying the common denominator and, finally, adding and subtracting in the numerator the term 2σ̂²_{x_1}σ̂²_{x_2} and grouping the terms, the result follows.
Remark 12 When the number of samples n tends to infinity, the result becomes the same as in the asymptotic analysis. Moreover, when the sample variances of the two features are equal, the result coincides with that of the finite-case analysis with equal sample and real variances.
Lemma 4 The quantity found as the difference of variances between the two-dimensional and the one-dimensional case in this general setting is always non-negative.
Proof Recalling the result of Equation (B8), the first factor σ²/(n−1) and the denominator of the second factor are always non-negative, so the difference of variances is non-negative if and only if the numerator of the second factor is non-negative. Bounding this term from below by its minimum value and substituting it back into the original inequality yields a quantity that is always non-negative, which proves the lemma.
The following theorem shows the difference of (squared) bias between the one-dimensional and the two-dimensional models.
Theorem 13 In the finite case, assuming unitary variances σ_{x_1} = σ_{x_2} = 1, the increase of bias due to the aggregation of the two features with their average is equal to:
Proof Recalling the expression of the difference of bias between the one-dimensional and the two-dimensional linear regression models (Theorem 11) and exploiting the unitary variance assumption, the result follows after basic algebraic computations.
Remark 13 When the number of samples n tends to infinity, the result becomes the same as in the asymptotic analysis. Moreover, when the sample variances of the two features are equal, the result coincides with that of the finite-case analysis with equal sample and real variances.
Theorem 14 A necessary and sufficient condition for the positivity of the difference between the reduction of variance and the increase of bias, when aggregating two features with their average in the unitary-variance finite-sample setting, is:
Proof The result is obtained by subtracting the results of the two previous theorems, after algebraic computations.

Appendix C Two-dimensional analysis: Theoretical and Practical quantities
This section elaborates on the inequalities found in Theorems 6 and 7 of the main paper, considering either only theoretical quantities or only quantities that can all be computed from data. In this way, in the bivariate case, we have both a theoretical conclusion of the analysis and an empirical one that can be used in practice.
For the asymptotic analysis it is straightforward to obtain both a theoretical and an empirical expression: in the limit, the estimators converge in probability to the theoretical quantities.
Theorem 15 In the asymptotic setting of Theorem 6, considering only theoretical quantities, the inequality of Equation (C11) holds. On the other hand, considering only quantities that can be derived from data, Equation (C12) holds, where s² = ε̂ᵀε̂/(n−3) is the unbiased estimator of the variance σ² of the residual of the linear regression (an estimate ε̂ of the residual can be computed by subtracting the predicted value from the real value of the target).
Proof Equation (C11) is the same result as Theorem 6. To derive Equation (C12) it is sufficient to substitute the theoretical quantities with their consistent estimators.
For the finite-sample analysis it is necessary to introduce confidence intervals to substitute theoretical with empirical quantities and vice versa.
Theorem 16 In the finite-case setting of Theorem 7, considering only empirical quantities, the following inequality holds with probability at least 1 − δ, where χ²_{n−3}(·) represents a Chi-squared distribution with n − 3 degrees of freedom and F_{3,n−3}(·) a Fisher distribution with 3 and n − 3 degrees of freedom.
Proof The unilateral confidence interval for the variance σ² of the residual of the linear regression model y = w_1 x_1 + w_2 x_2 + ε, assuming ε ∼ N(0, σ²), holds with probability 1 − α (Johnson and Wichern, 2007). The simultaneous confidence interval for the weights w_1, w_2 of the linear regression model y = w_1 x_1 + w_2 x_2 + ε holds with probability 1 − γ. Combining these confidence intervals with the inequality of Theorem 7, with probability 1 − δ the bound on (w_1 − w_2)² follows. This means that the inequality holds, which concludes the proof.
Remark 14 At the limit the quantity χ 2 n−3 tends to 1, so the result of Theorem 16 is coherent with the asymptotic result.
In order to obtain the result with only theoretical quantities in the finite-sample case, it is necessary to introduce two bounds on the difference between the covariance and the sample covariance.
Proposition 17 The following inequalities hold.
• with probability 1 − δ, Equation (C14) holds;
• with probability 1 − δ, Equation (C15) holds.
Proof The proof exploits Hoeffding's inequality, applying it to the random variable Z = x_1 x_2. We derive the results of the proposition for two general random variables X, Y and n data points x^i, y^i sampled from their distribution, denoting by X̄, Ȳ the means of the considered samples. From Hoeffding's inequality (Hoeffding, 1963) applied to the variable Z = XY, with probability 1 − δ, we obtain a bound on the deviation of the sample mean of Z from E[XY]. Then, with probability 1 − 2δ, applying Hoeffding's inequality three times, Equation (C14) is proved.
On the other hand, an analogous chain of inequalities holds with probability 1 − 2δ, where the first inequality is again due to the application of Hoeffding's inequality three times. From this result, Equation (C15) follows.
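For reference, the concentration inequality invoked in this proof is the standard Hoeffding bound: if Z_1, ..., Z_n are i.i.d. and bounded in [a, b] (here Z = XY, which is bounded whenever the features are), then for every t > 0

```latex
\mathbb{P}\Big( \big| \bar Z - \mathbb{E}[Z] \big| \ge t \Big)
\;\le\; 2\exp\!\left( -\frac{2 n t^{2}}{(b-a)^{2}} \right),
\qquad \bar Z = \frac{1}{n}\sum_{i=1}^{n} Z_i .
```

Equivalently, with probability at least 1 − δ, |Z̄ − E[Z]| ≤ (b − a)√(log(2/δ)/(2n)), which is the form applied repeatedly above.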
It is now possible to derive the expression of Equation (18) with only theoretical quantities.
Theorem 18 In the finite-case setting of Theorem 7, considering only theoretical quantities, the following inequality holds with probability at least 1 − δ.
Proof To prove the theorem it is enough to apply the upper bound for the sample variance from (Maurer and Pontil, 2009) and the lower bound for the sample covariance from the inequalities of Proposition 17.
Regarding the sample variance, a bound holds with probability 1 − α (Maurer and Pontil, 2009). For the sample covariance, Proposition 17 provides a bound that holds with probability 1 − γ. Starting from the inequality of Theorem 7 and combining these bounds, with probability 1 − δ, after basic algebraic computations, the result follows.
Remark 15 The empirical result of Theorem 16 depends on the distribution of the residual, which is assumed to be Gaussian. On the other hand, the theoretical expression of Theorem 18 does not need any assumption on the distribution.

Appendix D Three-dimensional algorithm
This section contains detailed results and proofs related to Section 5 of the main paper.
Bias of the two models: expressions and derivations
In the asymptotic setting, let the relationship between the features and the target be linear with Gaussian noise, and assume unitary variances of the features. For the one-dimensional regression, the computation starts from the definition of bias, where the last equivalence is due to the fact that training and test data are independent; then, conditioning on the training feature set X and substituting the value of E_T[ŵ|X] from Equation (A1), the expression follows. Considering the asymptotic case, substituting the sample estimators with the real statistical measures and the variances with 1, the result follows. For the two-dimensional regression ŷ = ŵ_1 x_1 + ŵ_2 x_2, again exploiting the independence between training and test data, conditioning on the training set X and substituting the value of E_T[ŵ|X] from Equation (A1), the analogous expression is obtained; in the asymptotic case, substituting the sample estimators with the real statistical measures and the variances with 1, the result follows.
Extension of the bias of the models to a general variance of the third variable
Considering a general variance σ²_{x_3} for the third variable x_3, the biases of the models computed in the previous paragraph change accordingly (respectively for the one- and two-dimensional estimates).

Appendix E Experiments
This section provides more details and results on the experiments performed in the two-dimensional, three-dimensional and D-dimensional settings.

E.1 Bivariate synthetic data
As introduced in Section 6 of the main paper, the experiments performed in the bivariate setting are synthetic. In particular, for each of the six experiments, the data are generated as follows. The samples of the first independent variable x_1 are drawn from a uniform distribution on the interval [0, 1]. The second feature x_2 is a linear combination of the feature x_1 and a random sample drawn from a uniform distribution on the interval [0, 1] (specifically, x_2 = 0.8 x_1 + 0.2 u, u ∼ U([0, 1])). Finally, the target variable y is a linear combination of the two features x_1, x_2 with weights w_1, w_2, plus Gaussian noise with variance σ². Tables E1 and E2 provide more details about the bivariate results introduced in Table 1 of the main paper.
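For concreteness, the generative process just described can be written as a short script (a sketch: the function name and the seed handling are our own choices, not part of the paper):

```python
import numpy as np

def make_bivariate_dataset(n, w1, w2, sigma, seed=0):
    """Synthetic bivariate data: x1 ~ U([0,1]), x2 = 0.8*x1 + 0.2*u, linear target."""
    rng = np.random.default_rng(seed)
    x1 = rng.uniform(0.0, 1.0, n)
    u = rng.uniform(0.0, 1.0, n)
    x2 = 0.8 * x1 + 0.2 * u
    y = w1 * x1 + w2 * x2 + rng.normal(0.0, sigma, n)  # Gaussian noise, variance sigma^2
    return x1, x2, y

# one train set of the "large weight distance" configuration of Table E1
x1, x2, y = make_bivariate_dataset(n=500, w1=0.2, w2=0.8, sigma=0.5)
rho = np.corrcoef(x1, x2)[0, 1]  # the two features are strongly correlated by construction
```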
Table E1 reports the extended results associated with a large difference between the weights (w_1 = 0.2, w_2 = 0.8) and three different values of the standard deviation of the noise, σ ∈ {0.5, 1, 10}, repeating each experiment s = 500 times and considering n = 500 data points for training and n = 500 for testing. The quantity ρ represents the minimum value of correlation for which it is convenient to aggregate the two features, computed by exploiting the asymptotic result of Equation (17). From its theoretical values it is clear that, for a reasonable amount of noise variance, since the weights of the linear model are significantly different, it is better to keep the features separate. This is confirmed by the confidence intervals of the R² and the MSE, which are both better in the bivariate case (full) than in the univariate case (aggr). On the other hand, when the variance of the noise is large, the process becomes much noisier and it is convenient to aggregate the two features.
Considering the empirical ρ rather than the theoretical one, which is unknown with real datasets, the only situation where aggregation is useful in the majority of the cases is the one with large variance. It is also important to notice that the confidence intervals are much larger in the noisy setting, meaning that there is much more uncertainty and it is therefore useful to aggregate the two features in order to reduce it.
Table E2 reports the same results for a linear relationship with a small difference between the weights, w_1 = 0.47, w_2 = 0.52. In this setting, since the weights are closer, it is convenient to aggregate the two features even with a small amount of noise variance, provided they are sufficiently correlated. Also with the empirical evaluation of the threshold, the two features would be aggregated in the majority of the repetitions of the experiment, leading to no worsening of the MSE and of the R² coefficient in the aggregated case, as shown by the confidence intervals.
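The comparison between the full bivariate regression and the aggregated univariate one can be reproduced with ordinary least squares. The sketch below uses the close-weights setting (w_1 = 0.47, w_2 = 0.52, σ = 0.5); it is an illustration of the full-vs-aggregated comparison, not the authors' experiment script, and the helper name is an assumption.

```python
import numpy as np

def fit_mse(X_train, y_train, X_test, y_test):
    """OLS fit with intercept; return the test MSE."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    A_test = np.column_stack([np.ones(len(X_test)), X_test])
    return np.mean((y_test - A_test @ coef) ** 2)

rng = np.random.default_rng(0)
n, w1, w2, sigma = 500, 0.47, 0.52, 0.5      # close weights: aggregation helps
x1 = rng.uniform(size=2 * n)
x2 = 0.8 * x1 + 0.2 * rng.uniform(size=2 * n)
y = w1 * x1 + w2 * x2 + rng.normal(0, sigma, size=2 * n)
X = np.column_stack([x1, x2])
tr, te = slice(0, n), slice(n, 2 * n)        # train/test split

mse_full = fit_mse(X[tr], y[tr], X[te], y[te])                  # two features
mse_aggr = fit_mse(X[tr].mean(axis=1, keepdims=True), y[tr],
                   X[te].mean(axis=1, keepdims=True), y[te])    # mean feature
```

Since the weights are almost equal, the mean feature loses almost no signal, and both test MSEs stay close to the irreducible noise level σ² = 0.25, consistently with the confidence intervals in Table E2.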

E.2 Three-dimensional synthetic data
This subsection provides more details about the synthetic experiments performed in the three-dimensional setting and introduced in Section 6 of the main paper. They show the usefulness of the extension to the three-dimensional linear regression model. In particular, the samples of the first independent variable x_1 are drawn from a uniform distribution on the interval [0, 1]. The second feature x_2 is a linear combination of the feature x_1 and a random sample drawn from a uniform distribution on the interval [0, 1] (specifically x_2 = 0.65x_1 + 0.35u, u ∼ U([0, 1])). The third feature x_3 is a linear combination of the features x_1, x_2 and a random sample drawn from a uniform distribution on the interval [0, 1] (x_3 = 0.5x_1 + 0.5x_2 + 0.5u, u ∼ U([0, 1])). Finally, the target variable y is a linear combination of the three features x_1, x_2, x_3 with weights w_1 = 0.4, w_2 = 0.6, which are closer to each other than to the third weight w_3 = 0.2, plus Gaussian noise with variance σ² = 0.25. The experiment has been repeated s = 500 times with n = 500 samples both for the train and the test set. The results are reported in Table E3, which extends Table 2 of the main paper, comparing the correlation thresholds computed from the theoretical asymptotic result of Equation (24) with the empirical ones.

E.3 Real datasets
The first dataset involves several factors that can be categorized into immunization-related factors, mortality factors, economic factors and social factors. The dataset is available on Kaggle and it is also provided in the repository of this work. It is made of D = 18 continuous input variables and a scalar output.
The second dataset is a financial dataset made of D = 75 continuous features and a scalar output. The model predicts the cash ratio depending on other metrics from which it is possible to derive many fundamental indicators. The dataset is taken from Kaggle and it is provided in the repository of this work. Finally, the algorithm is tested on two climatological datasets composed of D = 136 and D = 1991 continuous climatological features and a scalar target, which represents the state of vegetation of a basin of the Po river. These datasets have been composed by the authors by merging different sources for the vegetation index, temperature and precipitation over different basins (see Didan, 2015; Cornes et al, 2018; Zellner, 2022), and they are available in the repository of this work.
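The reduction applied to such D-dimensional datasets, aggregating groups of sufficiently correlated continuous features with their mean, can be sketched as follows. This is a simplified greedy variant, not the LinCFA implementation: the fixed `threshold` parameter stands in for the theoretical and empirical pairwise thresholds derived in the paper, and the function name is an assumption.

```python
import numpy as np

def aggregate_correlated(X, threshold=0.9):
    """Greedily group features whose correlation with a group's seed
    feature exceeds `threshold`, then replace each group with its mean.
    Returns the reduced matrix and the groups (lists of column indices)."""
    corr = np.corrcoef(X, rowvar=False)
    remaining = list(range(X.shape[1]))
    groups = []
    while remaining:
        seed = remaining.pop(0)                       # start a new group
        group = [seed] + [j for j in remaining if corr[seed, j] >= threshold]
        remaining = [j for j in remaining if j not in group]
        groups.append(group)
    # each aggregated feature is the mean of its group (interpretable)
    X_red = np.column_stack([X[:, g].mean(axis=1) for g in groups])
    return X_red, groups
```

Because every reduced feature is the plain average of a named subset of the original features, the mapping from original to reduced space remains directly interpretable, which is the property the main paper emphasizes.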
Fig. 1: Figure 1a shows the number of reduced features for different numbers of samples. Figures 1b and 1c show the regression performance in terms of R² and MSE for different numbers of samples. Blue lines refer to the linear regression with all the original features, while red and green lines respectively refer to linear regression on the features reduced by applying the proposed algorithm considering theoretical and empirical quantities.

Table 1 :
Experiment on synthetic bivariate data for two combinations of weights and three different values of variance of the noise.

Table 2 :
Synthetic experiment in the three-dimensional setting comparing the full model with three variables against the bivariate model where x_1, x_2 are aggregated with their mean.

Table 3 :
Synthetic experiment in the D-dimensional setting. The experiment has been repeated twice: considering the theoretical threshold with the exact coefficients (theo) and the threshold with coefficients estimated from data (emp).

Table 4 :
Experiments on real datasets.The total number of samples n has been divided into train (66% of data) and test (33% of data) sets.
Proof of Theorem 1. Let us consider a training dataset T and a univariate test sample (x, y). Then the variance is: