1 Introduction

1.1 Background and Importance

“High dimensional” refers to situations in which the number of covariates or predictors is much larger than the number of data points (i.e., \(p\gg n\)). Such situations arise in many domains where technological advances now make it possible to collect a large number of variables to better understand a given phenomenon of interest. Examples occur in genomics, fMRI data analysis, large-scale healthcare analytics, text/image analysis and astronomy, to name but a few.

In the last two decades, regularisation approaches such as the lasso, elastic net and ridge regression have become the methods of choice for analysing such high dimensional data. Much work has been done on regularisation for high dimensional linear regression since these methods were introduced. Regularisation methods, especially the lasso and ridge regression [10, 31, 40], have been applied in many disciplines [1, 15, 23, 26]. The theory behind regularisation methods typically relies on sparsity assumptions to achieve performance guarantees in high dimensional settings. Although the performance of regularisation methods has been studied by many researchers, conditions other than sparsity, such as the effects of correlated variables, covariate location and effect size, have not been well understood. We investigate these conditions in high dimensional linear regression models under sparse and non-sparse situations.

In this paper, we consider the high dimensional linear regression model

$$\begin{aligned} y = X\beta + \varepsilon , \,\,\, p\gg n, \end{aligned}$$
(1)

where \(y=(y_1,\ldots ,y_n)\in {\mathbb {R}}^{n}\) is the response vector, \(X\in {\mathbb {R}}^{n\times p}\) is the design matrix whose rows \(x_1,\ldots ,x_n\in {\mathbb {R}}^{p}\) contain the covariate values for each observation, the vector \(\beta \in {\mathbb {R}}^{p}\) contains the unknown regression coefficients, and \(\varepsilon \in {\mathbb {R}}^{n}\) is the random noise vector. We assume, without loss of generality, that the model has no intercept term, which can be achieved by mean-centring the response and the covariates.

We assume no prior knowledge on \(\beta \). It is well known that the ordinary least squares (OLS) estimator of \(\beta \) is \(\hat{\beta }^{\text {OLS}} = (X^{T}X)^{-1}X^{T}y\) [10]. However, when \(p > n\), X no longer has full column rank, so \(X^{T}X\) is singular and the least squares problem has infinitely many solutions, leading to over-fitting in the high dimensional case [14]. This kind of ill-posed problem arises in many of the applications discussed above. Regularisation methods, which impose a penalty on the unknown parameters \(\beta \), are therefore a general and popular way to overcome such ill-posed problems.
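To make the ill-posedness concrete, the following minimal R sketch (illustrative, not taken from the paper; the sizes n and p are arbitrary) shows that X cannot have full column rank when \(p > n\), so the normal equations cannot be solved uniquely and only a minimum-norm solution can be singled out, e.g. via the Moore–Penrose pseudo-inverse.

```r
## Minimal illustration: the normal equations X'X b = X'y have no unique
## solution when p > n, because X'X (p x p) has rank at most n.
set.seed(1)
n <- 20; p <- 50                      # illustrative sizes with p > n
X <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)

qr(X)$rank                            # rank is at most n = 20, so X'X is singular
## solve(crossprod(X), crossprod(X, y))  # would fail: the system is singular
b_minnorm <- MASS::ginv(X) %*% y      # one of infinitely many least squares solutions
```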

Issues due to the curse of dimensionality become apparent when \(p \gg n\). A particular example occurs in fMRI image analysis, where selection from a large number of brain regions can lead to insensitive models on top of over-fitting [30]. Also, numerical results from applying regularisation methods to high dimensional data are unsatisfactory in the sense that no single method performs best most of the time [7]. To tackle these issues, the sparsity assumption invokes the idea of “less is more” [14], referring to the phenomenon that an underlying data structure can mostly be explained by a few out of many features. Such an assumption allows some regularisation methods to achieve at least consistent variable selection even when \(p \gg n\) [18].

Data structures are not necessarily sparse in real-world applications of high dimensional data, and sparsity assumptions can be difficult to justify in practice [3]. Nevertheless, regularisation methods are often applied in such applications despite their limitations. In this paper, we mainly aim to study the following problems, which have not been well understood or have not been studied previously:

  i. The effects of data correlation on the performance of common regularisation methods.

  ii. The effects of covariate location on the performance of common regularisation methods.

  iii. The impact of effect size on the performance of common regularisation methods.

  iv. The performance of the recently developed de-biased lasso [33, 37] in comparison to the common regularisation methods.

In our investigations we evaluate the performance of the regularisation methods by focusing on their variable selection, parameter estimation and prediction performances under the above situations. We also use simulations to explain the curse of dimensionality with a fixed-effect Gaussian design.

1.2 Related Work

Lasso and ridge regression, which use \(L_{1}\) and \(L_{2}\) penalties respectively (see Sect. 2.1), are the two most common regularisation methods, and many novel methods have been built upon them. For example, Zou and Hastie [40] developed the elastic net, which uses a combination of these two penalties. The elastic net is particularly effective in tackling multicollinearity, and it can generally outperform both the lasso and ridge regression in that situation. The original study of the elastic net, however, considered relatively low dimensional settings with the sample size larger than the number of covariates [40]. Moreover, the number of covariates with truly non-zero coefficients was smaller than the sample size. Similar settings were used when developing other novel methods [25, 36, 39], and further approaches based on variations of the standard techniques have also been investigated [13, 22, 26]. More recently, bias reduction for estimators such as the lasso estimator has been used to tackle the issues of zero standard errors and biased estimates [2, 37]. Earlier, the bias-corrected ridge regression used the idea of bias reduction by projecting each feature column onto the space of its complement columns, achieving, under Gaussian noise, asymptotic normality for a projected ridge regression estimator with a fixed design [4]. Regularisation methods have also been evaluated in classification settings [1, 12, 13, 23, 24, 26, 35].

Statistical inference, such as hypothesis testing, with regularisation methods was difficult for a long time due to mathematical limitations and the highly biased estimators in high dimensional models. Obenchain [27] argues that inference with biased estimators can be misleading when they are far away from their least squares region. Asymptotic theory has also shown that the lasso estimates can be exactly 0 when the true values are indeed 0 [21], which explains why the bootstrap with lasso estimators can produce a standard error of 0 [31]. Park and Casella [28] developed a Bayesian approach to construct confidence intervals for the lasso estimates as well as their hyperparameters. They considered a Laplace prior for the unknown parameters \(\beta \) in the regression model (1), conditional on the unknown variance of independent and identically distributed Gaussian noise, leading to conditional normality for y and \(\beta \). However, they did not account for the bias in the parameter estimators produced by regularisation methods. The recent de-biased lasso (see Sect. 2.2) instead reduces the bias of the lasso and makes it possible to carry out statistical inference about a low dimensional parameter in the regression model (1). It is unknown whether or not the de-biased lasso can outperform the lasso and other regularisation methods as the data dimension increases in sparse and non-sparse situations.

More recently, a theoretical study of the prediction performance of the lasso [6] found that incorporating a correlation measure into the tuning parameter can lead to nearly optimal prediction performance. Also, the spike-and-slab lasso procedure was proposed in [29] for variable selection and parameter estimation in linear regression.

2 Regularisation Methods in High Dimensional Regression

2.1 Regularisation with a More General Penalty

Given the high dimensional linear regression (1), the regularisation with \(L_q\) penalty minimises

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^{n}(y_{i} - x_{i}^{T}\beta )^{2} + \lambda \Big (\sum _{j = 1}^{p}|\beta _{j}|^q\Big )^{1/q}, \end{aligned}$$
(2)

where the first term is the least squares loss, and the second term is a general \(L_q\) penalty on the regression coefficients, with \(\lambda \ge 0\) a tuning parameter controlling the amount of shrinkage.

Two special cases of (2) are the lasso with \(L_1\) penalty (i.e., \(q = 1\)) and ridge regression with \(L_2\) penalty (i.e., \(q = 2\)). Best subset selection emerges as \(q \rightarrow 0\), and the lasso uses the smallest value of q (i.e., the one closest to subset selection) that yields a convex problem. Convexity is very beneficial for computational purposes [32].
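As a brief illustration of these special cases, the sketch below fits the lasso, ridge regression and the elastic net with the glmnet package, whose mixing parameter alpha interpolates between the \(L_1\) (alpha = 1) and \(L_2\) (alpha = 0) penalties; note that glmnet's internal parameterisation of the penalty differs slightly from (2), and the data-generating settings below are purely illustrative.

```r
library(glmnet)

set.seed(42)
n <- 100; p <- 500
X <- matrix(rnorm(n * p), n, p)
beta0 <- c(rnorm(10), rep(0, p - 10))            # sparse truth (illustrative)
y <- X %*% beta0 + rnorm(n)

# alpha = 1 gives the lasso (L1 penalty), alpha = 0 gives ridge regression (L2 penalty),
# and intermediate values give the elastic net.
fit_lasso <- cv.glmnet(X, y, alpha = 1,   nfolds = 10)
fit_ridge <- cv.glmnet(X, y, alpha = 0,   nfolds = 10)
fit_enet  <- cv.glmnet(X, y, alpha = 0.5, nfolds = 10)

# Number of non-zero coefficients at the cross-validated lambda
sum(coef(fit_lasso, s = "lambda.min")[-1] != 0)  # sparse solution
sum(coef(fit_ridge, s = "lambda.min")[-1] != 0)  # ridge keeps all p coefficients
```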

Since the lasso provides a sparse solution (i.e., the number of non-zero parameter estimates is at most the sample size n) [31, 32], lasso regression requires the sparsity assumption, that is, many of the covariates are assumed to be unrelated to the response variable. It is appealing to be able to identify, out of a large number of predictors, the handful that are the main contributors to the desired predictions, particularly in genome-wide association studies (GWAS) [2, 5, 35]. This leads to parsimonious models in which the selected variables can be examined further, and it greatly reduces the computational cost of subsequent predictions.

A limitation of the lasso is that when there are contributing variables within a correlated group, the lasso tends to select only a few of them, essentially at random. Yuan and Lin [36] proposed the group lasso to overcome this issue by performing variable selection on groups of variables, given prior knowledge of the underlying data structure. Also, the lasso may not satisfy the oracle properties for any single choice of \(\lambda \), which can lead to inconsistent selection results in high dimensions [9, 39]. Zou [39] developed the adaptive lasso, which places a weighted \(L_{1}\) penalty on the individual coefficients, and showed that the resulting estimator satisfies the oracle properties (a sketch of one common implementation is given below). Further issues arise with high dimensional data, which can be summarised as the curse of dimensionality: roughly speaking, a phenomenon whereby ordinary approaches (here, regularisation approaches) to a statistical problem become unreliable when the dimension is drastically high. We particularly investigate this in Sect. 3.
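One common way to implement the adaptive lasso in practice is through glmnet's penalty.factor argument, with weights derived from an initial pilot fit; the sketch below is one such implementation under illustrative settings, not the exact procedure of [39].

```r
library(glmnet)

set.seed(11)
n <- 100; p <- 200
X <- matrix(rnorm(n * p), n, p)
beta0 <- c(rep(3, 5), rep(0, p - 5))             # illustrative sparse truth
y <- X %*% beta0 + rnorm(n)

# Step 1: pilot estimates from ridge regression
ridge_fit <- cv.glmnet(X, y, alpha = 0)
beta_init <- as.numeric(coef(ridge_fit, s = "lambda.min"))[-1]

# Step 2: adaptive weights (heavier penalty where the pilot estimate is small)
w <- 1 / (abs(beta_init) + 1e-6)                 # small constant avoids division by zero

# Step 3: weighted L1 penalty via penalty.factor
alasso_fit <- cv.glmnet(X, y, alpha = 1, penalty.factor = w)
which(coef(alasso_fit, s = "lambda.min")[-1] != 0)
```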

2.2 The De-Biased Lasso

The de-biased lasso [33, 37] is a lasso-based method that aims to reduce the bias of the lasso estimator. Unlike the original lasso, the de-biased lasso also enables us to make statistical inferences, for example to conduct component-wise hypothesis testing in high dimensional models [33]. It is known that the lasso estimator \(\hat{\beta }^{\text {lasso}}\) fulfils the Karush–Kuhn–Tucker (KKT) conditions [33, 37]:

$$\begin{aligned} -\frac{1}{n}X^{T}(y - X\hat{\beta }^{\text {lasso}}) + \lambda (\partial ||\hat{\beta }^{\text {lasso}}||_{1}) = \underline{0}_{p}, \end{aligned}$$
(3)

where \(\underline{0}_{p}\in {\mathbb {R}}^{p}\) is the zero vector and \(\partial ||\beta ||_{1}\) denotes the sub-differential of the \(l_{1}\)-norm of \(\beta \) with

$$\begin{aligned}&(\partial ||\beta ||_{1})_{j} = \text {sign}(\beta _{j}),\;\; \beta _{j} \ne 0,\\&(\partial ||\beta ||_{1})_{j}\in [-1,1],\;\; \beta _{j}=0. \end{aligned}$$

The sub-differential at \(\beta _{j} = 0\) for any \(j = 1,2,\ldots ,p\) is a convex set of all possible sub-gradients since the \(l_{1}\) norm is not differentiable at that point.

Substituting \(y = X\beta + \varepsilon \) and \(G = \frac{1}{n}X^{T}X\) into Eq. (3) and using the strong sparsity assumption, we get the following estimator [33]

$$\begin{aligned} \hat{\beta }^{\text {debiased}} = \hat{\beta }^{\text {lasso}} + \frac{1}{n}\overset{\sim }{G}X^{T}(y - X\hat{\beta }^{\text {lasso}}) + R, \end{aligned}$$
(4)

where \(\overset{\sim }{G}\) is an approximate inverse of G, and R is a residual term arising from the use of an approximate inverse (see [33] for details). One can view the second term on the right-hand side of (4) as a bias correction to the lasso estimator [37]. Under a linear Gaussian setting, and with some restrictions on the choice of an optimal \(\lambda \), \(\hat{\beta }^{\text {debiased}}\) yields a statistic that asymptotically follows a multivariate Gaussian distribution [33], matching the result of Zhang and Zhang [37].

The de-biased lasso is feasible when a good approximate inverse \(\overset{\sim }{G}\) is available. To construct one, the lasso for node-wise regression has been suggested. Details of the node-wise regression and the theoretical results on asymptotic normality of the de-biased lasso can be found in [33]. Note that the residual term R is asymptotically negligible under some additional sparsity conditions when approximating \(\overset{\sim }{G}\).

It has been shown that the de-biased lasso is very effective for statistical inference about a low dimensional parameter when the sparsity assumption holds [33, 37]. In Sect. 5, we investigate whether or not the de-biased lasso can outperform the lasso and other regularisation methods as the data dimension increases in sparse situations.
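The sketch below shows how the de-biased lasso can be applied in R with the hdi package used later in Sect. 5; lasso.proj implements the de-sparsified lasso with the node-wise regression described above. The simulated data and the output component names (bhat, pval) reflect our understanding of the package and should be checked against its documentation.

```r
library(hdi)

set.seed(7)
n <- 100; p <- 300
X <- matrix(rnorm(n * p), n, p)
beta0 <- c(rep(2, 5), rep(0, p - 5))   # illustrative sparse truth
y <- X %*% beta0 + rnorm(n)

# De-sparsified (de-biased) lasso; the approximate inverse of G is
# constructed internally via node-wise lasso regressions.
fit <- lasso.proj(x = X, y = y)

head(fit$bhat)                          # de-biased coefficient estimates
head(fit$pval)                          # component-wise p-values
confint(fit, parm = 1:5, level = 0.95)  # confidence intervals for the first 5 coefficients
```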

3 Curse of Dimensionality with a Fixed-Effect Gaussian Design

In this section, we demonstrate via simulations how the curse of dimensionality can yield undesirable and inconsistent feature selection in high dimensional regression. We generated 100 datasets from the high dimensional linear regression model (1) with standard Gaussian noise. We considered a range of values from 100 to 5000 for the dimension p, and for each value of p we generated the design matrix \(X\in {\mathbb {R}}^{150\times p}\) from the standard Gaussian distribution. For the true vector of coefficients \(\beta _{0}\in {\mathbb {R}}^{p}\), we generated its first 100 entries from the standard Gaussian distribution and set all the remaining entries to 0. We fitted the lasso with \(L_1\) penalty to each generated dataset using 10-fold cross-validation with the glmnet package in R [11].
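A minimal version of this simulation is sketched below; the exact seeds and number of repetitions used for Figs. 1 and 2 may differ, and the identification rate and cross-validated MSE are computed as described in the text.

```r
library(glmnet)

set.seed(2023)
n <- 150
p_grid <- c(100, 500, 1000, 5000)

one_trial <- function(p) {
  X <- matrix(rnorm(n * p), n, p)                   # standard Gaussian design
  beta0 <- c(rnorm(100), rep(0, p - 100))           # first 100 coefficients non-zero
  y <- X %*% beta0 + rnorm(n)                       # standard Gaussian noise
  cvfit <- cv.glmnet(X, y, alpha = 1, nfolds = 10)  # lasso with 10-fold CV
  selected <- which(coef(cvfit, s = "lambda.min")[-1] != 0)
  c(n_selected = length(selected),
    id_rate    = mean(selected <= 100),             # proportion of selected variables that are truly non-zero
    cv_mse     = min(cvfit$cvm))                    # cross-validated MSE at lambda.min
}

results <- sapply(p_grid, one_trial)                # one trial per value of p (repeat for averages)
colnames(results) <- paste0("p=", p_grid)
results
```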

Fig. 1 Lasso becomes less effective in feature selection when p increases

Fig. 2 The lasso regression prediction error increases with p

Table 1 The number of selected variables by lasso for different values of p in the first 10 trials

The results, presented in Fig. 1, show that the average identification rate tends to decrease as the dimension p increases; in other words, the proportion of selected variables that are correctly identified decreases as p gets larger. Since statistical inference is not feasible with lasso regression [21], as already discussed, one may rely on the fitted model associated with the smallest prediction mean squared error (MSE). From Fig. 1, the proportion of non-zero variables correctly identified by lasso regression, out of those selected, decreases from 100% to 18.2% when moving from \(p = 100\) to \(p = 5000\). This is also reflected by the increasing MSE shown in Fig. 2. Table 1 provides more details for the first 10 trials, where the number of variables selected by lasso regression appears consistent from \(p = 100\) to \(p = 500\) but deteriorates as p becomes much larger. All of this suggests that feature selection by lasso regression is inconsistent in high dimensional situations with very large p.

The choice of \(\lambda \) during cross-validation is crucial, since \(\lambda \) governs the threshold below which the estimated coefficients are forced to be 0. In our simulations, we used the default cross-validation settings in the glmnet package. Figure 3 shows the MSE for different values of \(\lambda \) from a 10-fold cross-validation in a single trial with \(p = 5000\). Here the grid used to determine an optimal choice of \(\lambda \) is \(\{0.05, 0.1, 0.15, \ldots , 0.95, 1, 1.05, \ldots , 1.95, 2, \ldots , 10\}\) instead of the default one. In this case, the value associated with the minimum MSE is 1.65. Compared to Fig. 4, which uses the default values in glmnet, Fig. 3 reveals another aspect of the curse of dimensionality: the error bars are so large for every cross-validated value of \(\lambda \) that even 1.65 may not be a good choice for the optimal \(\lambda \) after all, mainly because of the lack of sufficient observations relative to the number of covariates or features. One may choose a wider grid; however, substantial standard errors would remain unavoidable.
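A user-supplied \(\lambda \) grid can be passed to cv.glmnet as sketched below (glmnet expects a decreasing sequence); the data generation mirrors the setting above, and the plots correspond to the kind of cross-validation curves shown in Figs. 3 and 4.

```r
library(glmnet)

set.seed(2023)
n <- 150; p <- 5000
X <- matrix(rnorm(n * p), n, p)
beta0 <- c(rnorm(100), rep(0, p - 100))
y <- X %*% beta0 + rnorm(n)

# Custom grid {0.05, 0.10, ..., 10}, supplied in decreasing order
lambda_grid <- rev(seq(0.05, 10, by = 0.05))

cvfit_grid    <- cv.glmnet(X, y, alpha = 1, nfolds = 10, lambda = lambda_grid)
cvfit_default <- cv.glmnet(X, y, alpha = 1, nfolds = 10)   # default grid for comparison

cvfit_grid$lambda.min   # candidate with the smallest cross-validated MSE
cvfit_grid$lambda.1se   # largest candidate within one standard error of the minimum
plot(cvfit_grid)        # cross-validation curve with error bars (cf. Fig. 3)
plot(cvfit_default)     # default-grid counterpart (cf. Fig. 4)
```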

Fig. 3 Cross-validation plot for optimising \(\lambda \) in the lasso regression, with the associated number of selected variables shown along the top for each candidate \(\lambda \). The left vertical dashed line marks the candidate associated with the minimum MSE, and the right vertical dashed line marks the largest candidate within one standard deviation of the minimum MSE

Fig. 4 A similar cross-validation plot to Fig. 3 with the default grid for comparison

The cross-validation result presented in Fig. 3 may not, on its own, reveal all aspects of the situation. Figure 5 presents a more general picture of the cross-validation from the first 20 trials with the same simulation setting and \(p = 5000\). It can be seen that there are different cross-validation patterns, all with substantial standard errors. Recall that in our example the sample size is 150 (in line with common applications of high dimensional data), with 100 truly non-zero coefficients. With a vast number of covariates, the lasso may still lead to inconsistent variable selection when the number of truly non-zero coefficients is large [38].

Fig. 5 Cross-validation plots of the first 20 trials with \(p = 5000\)

4 Performance of Regularisation Methods in Sparse and Non-sparse High Dimensional Data

In this section, we investigate the performance of three common regularisation methods (lasso, ridge regression and elastic net) in estimation, prediction and variable selection for high dimensional data under different sparse and non-sparse situations. In particular, we study the performance of these regularisation methods when data correlation, covariate location and effect size are taken into account in the high dimensional linear model (1), as explained in the sequel.

We assume the true underlying model is

$$\begin{aligned} y_{i} = \underline{x}_{i}^{T}\beta ^{0} + \varepsilon _i,\quad \varepsilon _i \sim N(0,\sigma ^2=1), \end{aligned}$$
(5)

where \(\beta ^{0}\in {\mathbb {R}}^{p}\) is the vector of true regression coefficients.

Also, we define the following notations:

  • \(\hat{\beta }\in {\mathbb {R}}^{p}\): estimator of coefficients in the fitted model,

  • \(\hat{y} = X\hat{\beta }\in {\mathbb {R}}^{n}\): vector of predicted values using the fitted model,

  • \(S_{0}\): active set of variables of the true underlying model,

  • \(S_{\text {final}}\): active set of the fitted model,

  • \(S_{\text {small}}\subset S_{0}\): active subset of variables with small contributions in the true underlying model,

where the active set refers to the index set representing the covariates in the regression model. We use the following performance measures to assess the accuracy of prediction, parameter estimation and identification rates for each method:

$$\begin{aligned} \begin{aligned} \text {Mean Square Error (MSE)}&= \frac{1}{n}\sum _{i=1}^{n}(\hat{y}_{i} - y_{i})^{2}\\ \text {Mean Absolute Bias (MAB)}&= \frac{1}{|S_{\text {final}}\cap S_{0}|}\sum _{j\in S_{\text {final}}\cap S_{0}}|\hat{\beta }_{j} - \beta ^{0}_{j}|\\ \text {Power } (P)&= \frac{|S_{\text {final}}\cap S_{0}|}{|S_{0}|}\\ \text {Small Power } (P_{\text {small}})&= \frac{|S_{\text {final}}\cap S_{\text {small}}|}{|S_{\text {small}}|}. \end{aligned} \end{aligned}$$
(6)
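For concreteness, a small R helper implementing the measures in (6) is sketched below; the argument names are illustrative, and the set \(S_{\text {small}}\) must be supplied by the simulation design.

```r
# Performance measures in (6).
# y, y_hat:   observed and predicted responses (length n)
# beta_hat:   estimated coefficients (length p)
# beta_true:  true coefficients beta^0 (length p)
# S_small:    index set of true covariates with small contributions
perf_measures <- function(y, y_hat, beta_hat, beta_true, S_small = integer(0)) {
  S0      <- which(beta_true != 0)                  # active set of the true model
  S_final <- which(beta_hat  != 0)                  # active set of the fitted model
  common  <- intersect(S_final, S0)

  c(MSE = mean((y_hat - y)^2),
    MAB = if (length(common) > 0) mean(abs(beta_hat[common] - beta_true[common])) else NA,
    P   = length(common) / length(S0),
    P_small = if (length(S_small) > 0)
                length(intersect(S_final, S_small)) / length(S_small) else NA)
}
```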

In the simulations, we generate X and \(\beta ^{0}\) from a zero-mean multivariate Gaussian distribution with covariance matrices \(\varSigma _{X}\) and \(\varSigma _{\beta ^{0}}\) chosen under different scenarios. \(\varSigma _{\beta ^{0}}\) is chosen as the identity matrix when generating \(\beta ^{0}\); however, we change the diagonal entries from 1 to 0.1 when coefficients of small effects are considered in the simulations. We use the identity matrix for \(\varSigma _{X}\) when no data correlation is present, and we consider \(\varSigma _{X}=V_{3}\) when inducing correlated data in X, where \(V_{i}\) is defined, for any positive integer \(i\in {\mathbb {Z}}^{+}\), as the block-diagonal matrix

$$\begin{aligned} V_{i} = \begin{pmatrix} A & & & \\ & \ddots & & \\ & & A & \\ & & & I \end{pmatrix}, \end{aligned}$$

in which the top-left block of \(V_{i}\) contains i copies of a correlation block A aligned along the diagonal, and the bottom-right block is an identity matrix that pads \(V_{i}\) to the required dimension p.
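A sketch of generating correlated covariates with a block-diagonal \(\varSigma _{X}\) of this form is given below; the size of the correlation block A and its within-block correlation are illustrative assumptions, and a smaller p is used for speed.

```r
library(MASS)   # for mvrnorm

# Block-diagonal covariance of the form V_i: i copies of a correlation block A
# on the top-left, padded with an identity block to reach dimension p.
make_V <- function(i, p, block_size = 200, rho = 0.9) {   # block_size and rho are assumptions
  A <- matrix(rho, block_size, block_size)
  diag(A) <- 1
  V <- diag(p)
  for (k in seq_len(i)) {
    idx <- ((k - 1) * block_size + 1):(k * block_size)
    V[idx, idx] <- A
  }
  V
}

n <- 150; p <- 2000                               # smaller p than in the paper, for speed
Sigma_X <- make_V(i = 3, p = p)                   # V_3, as used in the sparse setting
X <- mvrnorm(n, mu = rep(0, p), Sigma = Sigma_X)  # correlated Gaussian design
```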

In a sparse situation, where the underlying data structure is truly sparse, we choose \(n = 150\), \(p = 10{,}000\), and the number of important covariates \(p^{*} = 200\). Note that \(p^{*}>n\) means it is impossible to identify all of the important features without over-fitting. Given that p can be much larger for the same n in practice [2], such identification may not be possible even in a sparse situation; thus \(\beta ^{0}_{j}\) is set to 0 for any \(j = n+1,n+2,\ldots ,p\) unless stated otherwise. In a non-sparse situation, we change \(p^{*}\) to 1000. In this case, we use either \(\varSigma _{X} = V_{5}\) or \(\varSigma _{X} = V_{20}\) to account for data correlation.

Each simulation is repeated 100 times and the average values are calculated for each performance measure in (6). We use the R-package glmnet for implementing the regularisation methods considered here.

We also include principal component regression (PCR) [10, 19] in our comparisons. PCR has become a popular technique in different fields, especially in bioinformatics [22]. By performing principal component analysis (PCA) on X, whose columns are mean-centred, one obtains an orthonormal basis of k principal components (the loading matrix \(B\in {\mathbb {R}}^{p\times k}\)), which is used to transform X into a new design matrix \(X^{'} = XB\), \(X^{'}\in {\mathbb {R}}^{n\times k}\). One can then perform OLS on \(X^{'}\) by considering

$$\begin{aligned} y = X^{'}\gamma + \varepsilon , \end{aligned}$$

where y is mean-centred and \(\gamma \in {\mathbb {R}}^{k\times 1}\) is the vector of unknown transformed parameters to be estimated. With the estimator \(\hat{\gamma } = (X^{'T}X^{'})^{-1}X^{'T}y\), one can find \(\hat{\beta }\) by reverting the transformation as follows

$$\begin{aligned} \hat{\beta } = B\hat{\gamma }. \end{aligned}$$

In high dimensional situations with \(p > n\), there can be at most \(n-1\) principal components, which means that OLS with \(X^{'}\) does not face the ill-posed problem. PCR has existed for a long time [19, 20], but it attracted relatively little attention, partly because it can be seen as a hard-thresholding version of ridge regression from the perspective of the singular value decomposition [10]. The magnitude of the eigenvalue associated with each principal component reflects the amount of non-redundant information it captures from X. To find the optimal number of principal components, we perform 10-fold cross-validation over the candidates by successively including principal components in decreasing order of their associated eigenvalues. We use the R-package pls [34] for the implementation of PCR in our simulations.
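The PCR steps described above can be written directly with base R, as in the following sketch (the pls package used in our simulations automates these steps and the cross-validation; here the number of components k is fixed for illustration).

```r
set.seed(3)
n <- 100; p <- 400; k <- 20                     # k = number of principal components (illustrative)
X <- matrix(rnorm(n * p), n, p)
y <- X %*% c(rep(2, 10), rep(0, p - 10)) + rnorm(n)

Xc <- scale(X, center = TRUE, scale = FALSE)    # mean-centre the columns of X
yc <- y - mean(y)                               # mean-centre the response

pca <- prcomp(Xc, center = FALSE)
B   <- pca$rotation[, 1:k]                      # p x k loading matrix
Xp  <- Xc %*% B                                 # transformed design X' = XB (n x k)

gamma_hat <- solve(crossprod(Xp), crossprod(Xp, yc))  # OLS on the k components
beta_hat  <- B %*% gamma_hat                          # map back: beta_hat = B gamma_hat

y_hat <- mean(y) + Xc %*% beta_hat              # fitted values on the original scale
```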

4.1 Data Correlation

The complete simulation results for all four methods under the sparse and non-sparse situations are reported in Tables 2 and 3 below. Regarding the prediction performance, the part of the simulation results presented in Fig. 6 shows that, when the important covariates are associated with correlated data, the prediction performance of the lasso and elastic net is better than that of both ridge regression and PCR under the sparse situation. The part of the simulation results shown in Fig. 7 suggests that, under the non-sparse situation, the prediction performance of the lasso and elastic net is very similar to that of ridge regression; however, PCR outperforms all of these methods.

Table 2 Complete results of simulation I
Table 3 Complete results of simulation III

Regarding the parameter estimation accuracy, the simulation results in Figs. 6 and 7 indicate that parameter estimation by ridge regression and PCR is largely unaffected by correlated data, with both having smaller average MAB than the lasso and elastic net, mainly because of their dense solutions.

Regarding the variable selection performance, Fig. 7 shows that the elastic net performs better than the lasso in the case of correlated data, which can be explained by the grouping effect. Note that data correlation associated with nuisance and less important variables seems to have little effect on our results compared to data correlation associated with important variables.

Without data correlation, the lasso and elastic net have prediction performance similar to ridge regression and PCR under the sparse situation (see the results for case 1 in Table 2). This is probably because the identification of important covariates is limited by the sample size and high dimensionality, making it difficult for the lasso and elastic net to outperform ridge regression and PCR. Under the non-sparse situation, the lasso and elastic net performed even worse in prediction than ridge regression and PCR (see the results for case 1 in Table 3).

Overall, when the important covariates are associated with correlated data, our results show that the prediction performance improves across all four methods under both sparse and non-sparse situations, and that the prediction performance flips to favour the lasso and elastic net over ridge regression and PCR.

4.2 Covariate Location

Regarding the effects of covariate location, we find from the simulation results that important variables being more scattered among groups of correlated data tends to result in better prediction performance. This observation becomes more apparent under the non-sparse situation. With the same data correlation setting, all the methods performed better with 2 clusters of size 500 (Fig. 8) than with 5 clusters of important variables of size 200 (Fig. 7). Since the lasso tends to select covariates in a group of correlated data at random, we expect the lasso to be less likely to select nuisance covariates when most of the covariates in such a group are important, thus improving prediction and variable selection performance. In terms of estimation accuracy, ridge regression and PCR were largely unaffected, as expected, while the lasso and elastic net had varying results. Therefore, a covariate location that helps prediction performance does not necessarily translate into better parameter estimation.

Fig. 6 The average MSE and MAB with the data correlation associated with important variables under a sparse situation

Fig. 7 The average MSE, MAB and power with the data correlation associated with important variables under a non-sparse situation

Fig. 8 The average MSE, MAB and power with important variables being more concentrated across the groups of correlated data under a non-sparse situation

Fig. 9 The average MSE, MAB, power and small power for the case with the majority of important covariates having small contributions among groups of correlated data under a non-sparse situation

4.3 Effect Size

Given the same number of important covariates, our simulation results (see cases 1 and 4 in Table 2 and cases 1 and 5 in Table 3) suggest that having a smaller overall effect size helps the prediction and parameter estimation performance of all the methods. This is reasonable, since the magnitude of the errors is smaller, at the cost of the covariates with small contributions to the predictions being harder to detect.

With data correlation, our results also reveal that the overall effect size can alter our perception of the underlying data structure in the non-sparse situation. Figure 9 shows the performance bar-plots for all four methods when there were 1000 important covariates, 950 of which had small effect sizes. Compared to Figs. 7 and 8, both of which had 1000 important covariates of similar effect sizes, Fig. 9 indicates that the lasso and elastic net tend to perform better than ridge regression and PCR in terms of prediction accuracy in this situation. This is probably because selecting some of the 50 important features with large effects is sufficient to explain the majority of the response, masking the features with small effects. Besides the sparsity level of the important covariates, the overall effect size therefore also seems to change the impression, formed from prediction performance, of whether an underlying data structure is sparse, especially in a non-sparse situation.

5 Performance of the De-Biased Lasso

Similar to the lasso, sparsity assumptions play a major role in justifying the use of the de-biased lasso. In this section, we evaluate the performance of the de-biased lasso in prediction, parameter estimation and variable selection, and compare the results with those of the other methods considered in the previous section. We are particularly interested in understanding how this recently developed method performs as the data dimension p increases, to provide a rough idea of its practicality for emerging challenges in big data analysis.

In the simulations, we again focus on high dimensional situations and consider the effects of data correlation, covariate location and effect size. We use a sample size of \(n = 40\) and let the maximum dimension p be 600. As in the previous simulations, we repeat each simulation case 100 times and calculate the average value of each performance measure in (6). We should mention that P and \(P_{small}\) are computed from the intrinsic variable selection of the lasso and elastic net, whereas for the de-biased lasso they are based on component-wise statistical inference [33, 37]. This implies that the performance of the de-biased lasso on P and \(P_{small}\) can suffer from the multiple hypothesis testing issue on top of the curse of dimensionality. To account for this, we corrected the associated p-values by controlling the family-wise error rate of the tests via the Bonferroni–Holm approach [16]. To induce correlated data, we consider \(\varSigma _{X} = V_{2}\). We use the R-package hdi [7] to apply the de-biased lasso to each generated dataset.
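A sketch of this inference step is given below, with the family-wise error rate controlled by Holm's method through base R's p.adjust; the simulated truth and the output component pval are illustrative assumptions about the hdi interface.

```r
library(hdi)

set.seed(5)
n <- 40; p <- 600
X <- matrix(rnorm(n * p), n, p)            # uncorrelated case (use a V_2-type covariance for correlated data)
beta0 <- c(rnorm(10), rep(0, p - 10))      # illustrative sparse truth
y <- X %*% beta0 + rnorm(n)

fit <- lasso.proj(x = X, y = y)            # de-biased lasso (hdi package)

# Bonferroni-Holm correction across the p component-wise tests
p_holm  <- p.adjust(fit$pval, method = "holm")
S_final <- which(p_holm < 0.05)            # covariates declared significant

S0 <- which(beta0 != 0)
length(intersect(S_final, S0)) / length(S0)   # power P as defined in (6)
```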

Table 4 Complete results of simulation IV
Fig. 10 The average MSE, MAB and power in sparse high dimensional data with dimension \(p = 50\)

The complete simulation results are given in Table 4 below. The part of the results presented in Fig. 10 shows that, in a low dimensional sparse situation with uncorrelated data, the de-biased lasso outperforms all the other methods in terms of prediction and parameter estimation. However, the variable selection performance of the de-biased lasso is very similar to that of the lasso and elastic net. The results shown in Fig. 11 suggest that the prediction by the de-biased lasso can still be as good as the lasso and elastic net in a sparse situation when the dimension p increases; however, the de-biased lasso no longer identifies any important covariates, so its performance in parameter estimation cannot be assessed when the dimension p is large. The results in Fig. 12 show that inducing correlated data seems to help the de-biased lasso identify important covariates, but its performance in prediction and parameter estimation is then no longer comparable to the other methods. The unsatisfactory variable selection performance of the de-biased lasso is probably due to the large number of hypothesis tests mentioned above, and its poor performance in prediction and parameter estimation with correlated data could be due to multicollinearity causing spurious test results.

Fig. 11 The average MSE, MAB and power in sparse high dimensional data with dimension \(p = 600\)

Fig. 12 The average MSE, MAB and power in sparse high dimensional data with dimension \(p = 600\) and with the presence of correlated data

6 Real Data Example

In this section, we use a real data example to compare the performance of the different regularisation methods in a real-world application. We consider the riboflavin data obtained from a high-throughput genomic study of the riboflavin (vitamin \(B_2\)) production rate. This dataset was made publicly available by Bühlmann et al. [3] and contains \(n = 71\) samples and \(p = 4088\) covariates corresponding to 4088 genes. For each sample, the real-valued response variable is the logarithm of the riboflavin production rate, and the covariates are the logarithms of the expression levels of the 4088 genes. Further details regarding the dataset and its availability can be found in [7, 17].

Table 5 The prediction results from applying all the methods to the riboflavin data

We applied the de-biased lasso and each of the other methods to the riboflavin data 100 times, using different random partitions into training and testing sets, and compared their prediction performance using the average MSE as in (6). The prediction results are shown in Table 5. They indicate that while PCR performed as well as, if not better than, ridge regression, the lasso and elastic net had smaller average MSE than both ridge regression and PCR. The MSE of the de-biased lasso is similar to that of the elastic net and the lasso. The potential correlation between genes helps the elastic net and lasso perform better in prediction, which is consistent with our simulation results in the previous sections. Also, according to our simulation findings in Sects. 4 and 5, the underlying structure of the riboflavin dataset appears to be sparse in the sense that, among all the covariates contributing to the production rate of vitamin \(B_{2}\), only a few have relatively large effects. We emphasise that this does not necessarily indicate sparsity in the number of important covariates relative to the data dimension. The riboflavin dataset has recently been analysed for statistical inference purposes, such as constructing confidence intervals and hypothesis tests, by several researchers including [3, 7, 17].
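The riboflavin data are available in the hdi package; a minimal sketch of the repeated random train/test split is shown below for the lasso only, with the split proportion and seed being our own assumptions.

```r
library(hdi)
library(glmnet)

data(riboflavin)                        # n = 71 samples, p = 4088 gene expression covariates
X <- as.matrix(riboflavin$x)
y <- riboflavin$y

set.seed(123)
n <- length(y)
mse <- replicate(100, {
  train <- sample(n, size = 50)         # random training set (split proportion is an assumption)
  fit   <- cv.glmnet(X[train, ], y[train], alpha = 1, nfolds = 10)
  pred  <- predict(fit, newx = X[-train, ], s = "lambda.min")
  mean((pred - y[-train])^2)            # test MSE for this split
})
mean(mse)                               # average prediction MSE over the 100 splits
```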

7 Conclusions and Discussion

We investigated the effects of data correlation, covariate location and effect size on the performance of regularisation methods such as lasso, elastic net and ridge regression when analysing high dimensional data. We particularly evaluated how prediction, parameter estimation and variable selection by these methods are affected under those conditions. We also studied the performance of the recently developed de-biased lasso under such conditions, and furthermore included the PCR in our simulations for comparison purposes. We considered different sparse and non-sparse situations in our simulation studies. The main findings of the simulation results and real data analysis are summarised below:

  • When important covariates are associated with correlated variables, the simulation results showed that the prediction performance improves across all the methods considered in the simulations, for both sparse and non-sparse high dimensional data. The prediction performance flipped to favour the lasso and elastic net over the ridge regression and PCR.

  • When the correlated variables are associated with nuisance and less important variables, we observed that the prediction performance is generally unaffected across all the methods compared to the situation when the data correlation is associated with important variables.

  • In the presence of correlated variables, the parameter estimation performance of the ridge regression, elastic net and PCR was not affected, but the lasso showed a poorer parameter estimation when moving from sparse data to non-sparse data.

  • The variable selection performance of the elastic net was better than the lasso in the presence of correlated data.

  • Regarding the effects of covariate location, we found that important variables being more scattered among groups of correlated data tends to result in better prediction performance. This behaviour was more apparent for non-sparse data. The lasso tends to select covariates in a group of correlated data at random, so it is less likely to select nuisance covariates when most of the covariates in such a group are important, thus improving prediction and variable selection performance.

  • Unlike in prediction and variable selection, the impact of covariate location was very small on the parameter estimation performance across all the methods.

  • Given the same number of important covariates, the simulation results showed that having a smaller overall effect size helps the prediction and parameter estimation performance of all the methods. The simulation results indicated that the lasso and elastic net tend to perform better than ridge regression and PCR in terms of prediction accuracy in such situations. In the presence of data correlation, the overall effect size can change the impression, formed from prediction performance, of whether an underlying data structure is sparse, especially in non-sparse situations.

  • For the de-biased lasso, the simulation results showed that the de-biased lasso outperforms all the other methods in terms of prediction and parameter estimation in low dimensional sparse situations with uncorrelated data. When the data dimension p increases, the prediction by de-biased lasso is as good as the lasso and elastic net, however the de-biased lasso no longer identifies any important covariates when the dimension p is very large. The results also showed that inducing correlated data seems to help de-biased lasso identify important covariates when p is very large, however its performance in prediction and parameter estimation is no longer comparable to the other methods in the presence of correlated data.

It should be pointed out that we also included the adaptive lasso [39] in our simulation comparisons; however, because the results were very similar to those of the lasso, we did not report them in the simulation section.

We also observed that the curse of dimensionality can yield inconsistent and undesirable feature selection in high dimensional regression. The choice of shrinkage parameter \(\lambda \) during the cross-validation process was found to be crucial. For high dimensional data, the error bars were too large for every cross-validated value of \(\lambda \) and it was mainly due to the lack of sufficient observations compared to the number of covariates (\(p \gg n\)).

Finally, the de-biased lasso can be used in a similar fashion to the OLS, but for making inference about low dimensional parameters in ill-posed problems. It therefore suffers from multicollinearity as well as the issue of too many hypothesis tests in high dimensional data [3, 7, 33]. With many procedures available to tackle the issues arising from multiple hypothesis testing, a more accurate estimation procedure would be helpful when applying the de-biased lasso to high dimensional data. It would be very useful to conduct research on how the de-biased lasso combined with the bootstrap [8] performs in high dimensional data under the above three conditions.