1 Why penalise coefficients of variables of interest?

Suppose that for some, presumably small, set \(G \subseteq \{1,\ldots ,p\}\), we want a confidence set for \(\beta _G^0\). Much of the recent literature, including the paper under discussion, proceeds by constructing an initial estimator, such as the Lasso estimator \(\hat{\beta }\), and then attempting to de-bias it. Our starting point is the following provocative question: since we know in advance the set of variables we are interested in, why would we want to penalise these coefficients in the first place? Of course, it is standard practice not to penalise the intercept term in high-dimensional linear models, to preserve location equivariance, but we now consider taking this one stage further. More precisely, consider the linear model

$$\begin{aligned} Y = \mathbf {X}\beta ^0 + \epsilon , \end{aligned}$$

where the columns of \(\mathbf {X}\) have Euclidean length \(n^{1/2}\), where \(\mathbf {X}_G^T\mathbf {X}_G\) is positive definite, and where, for simplicity, we assume that \(\epsilon \sim N_n(0,\sigma ^2I)\). We further assume that the set \(S := \{j : \beta _j^0 \ne 0\}\) of signal variables has cardinality s, and let \(N := \{1,\ldots ,p\} \setminus S\). For \(\lambda > 0\), let

$$\begin{aligned} (\hat{\beta }_G,\hat{\beta }_{-G}) := \mathop {\hbox {argmin}}\limits _{(\beta _G,\beta _{-G}) \in \mathbb {R}^{|G|} \times \mathbb {R}^{p-|G|}} \frac{1}{n}\Vert Y - \mathbf {X}_G\beta _G - \mathbf {X}_{-G}\beta _{-G}\Vert _2^2 + \lambda \Vert \beta _{-G}\Vert _1, \end{aligned}$$

where we emphasise that \(\beta _G\) is left unpenalised. For fixed \(\beta _{-G} \in \mathbb {R}^{p-|G|}\), the solution in the first argument is given by ordinary least squares:

$$\begin{aligned} \hat{\beta }_G(\beta _{-G}) := (\mathbf {X}_G^T\mathbf {X}_G)^{-1}\mathbf {X}_G^T(Y - \mathbf {X}_{-G}\beta _{-G}). \end{aligned}$$

We therefore find that

$$\begin{aligned} \hat{\beta }_{-G} = \mathop {\hbox {argmin}}\limits _{\beta _{-G} \in \mathbb {R}^{p-|G|}} \frac{1}{n}\Vert (I-P_G)(Y - \mathbf {X}_{-G}\beta _{-G})\Vert _2^2 + \lambda \Vert \beta _{-G}\Vert _1, \end{aligned}$$
(1)

where \(P_G := \mathbf {X}_G(\mathbf {X}_G^T\mathbf {X}_G)^{-1}\mathbf {X}_G^T\) denotes the matrix representing an orthogonal projection onto the column space of \(\mathbf {X}_G\). In other words, \(\hat{\beta }_{-G}\) is simply the Lasso solution with response and design matrix pre-multiplied by \((I-P_G)\). Moreover,

$$\begin{aligned} \hat{\beta }_G = \hat{\beta }_G(\hat{\beta }_{-G}) = (\mathbf {X}_G^T\mathbf {X}_G)^{-1}\mathbf {X}_G^T(Y - \mathbf {X}_{-G}\hat{\beta }_{-G}). \end{aligned}$$
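
Computationally, the estimator therefore amounts to one Lasso fit on the projected data followed by one least squares fit. The Python sketch below is one illustrative implementation (the function name and the use of scikit-learn are our own choices, not part of the paper under discussion); the penalty is rescaled because scikit-learn's Lasso minimises \(\frac{1}{2n}\Vert \cdot \Vert _2^2 + \alpha \Vert \cdot \Vert _1\) rather than the \(\frac{1}{n}\Vert \cdot \Vert _2^2 + \lambda \Vert \cdot \Vert _1\) objective above.

```python
import numpy as np
from sklearn.linear_model import Lasso

def partially_penalised_lasso(X, y, G, lam):
    """Illustrative sketch: Lasso with the coefficients indexed by G left unpenalised.

    Minimises (1/n)||y - X_G b_G - X_{-G} b_{-G}||_2^2 + lam * ||b_{-G}||_1
    by (i) running the Lasso on the data pre-multiplied by (I - P_G), as in (1),
    and (ii) recovering b_G by ordinary least squares on the partial residual.
    """
    n, p = X.shape
    G = np.asarray(G)
    notG = np.setdiff1d(np.arange(p), G)
    XG, XnotG = X[:, G], X[:, notG]

    # Orthogonal projection onto the column space of X_G
    PG = XG @ np.linalg.solve(XG.T @ XG, XG.T)
    y_proj = y - PG @ y
    X_proj = XnotG - PG @ XnotG

    # scikit-learn's Lasso minimises (1/(2n))||.||_2^2 + alpha ||.||_1,
    # so alpha = lam / 2 matches the objective above.
    lasso = Lasso(alpha=lam / 2, fit_intercept=False, max_iter=100000)
    lasso.fit(X_proj, y_proj)
    beta_notG = lasso.coef_

    # Unpenalised block: least squares on the partial residual
    beta_G = np.linalg.solve(XG.T @ XG, XG.T @ (y - XnotG @ beta_notG))
    return beta_G, beta_notG, notG
```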

For our theoretical analysis of \(\hat{\beta }_G\), we will require the following compatibility condition:

(A1): There exists \(\phi _0 > 0\) such that for all \(b \in \mathbb {R}^{p-|G|}\) with \(\Vert b_N\Vert _1 \le 3\Vert b_S\Vert _1\), we have

$$\begin{aligned} \Vert b_S\Vert _1^2 \le \frac{s\Vert (I-P_G)\mathbf {X}_{-G}b\Vert _2^2}{n\phi _0^2}. \end{aligned}$$

The theorem below is only a small modification of existing results in the literature (e.g. Bickel et al. 2009), but for completeness we provide a proof in the Appendix.

Theorem 1

Assume (A1), and let \(\lambda := A\sigma \sqrt{\frac{\log p}{n}}\). Then with probability at least \(1 - p^{-(A^2/8-1)}\),

$$\begin{aligned} \frac{1}{n}\Vert (I-P_G)\mathbf {X}_{-G}(\hat{\beta }_{-G} - \beta _{-G}^0)\Vert _2^2 + \frac{\lambda }{2}\Vert \hat{\beta }_{-G} - \beta _{-G}^0\Vert _1 \le \frac{3A^2}{\phi _0^2}\frac{\sigma ^2s\log p}{n}. \end{aligned}$$

Theorem 1 allows us to show that if, in addition to (A1), the columns of \(\mathbf {X}_G\) and those of \(\mathbf {X}_{-G}\) satisfy a strong lack-of-correlation condition, then \(\hat{\beta }_G\) can be used for asymptotically valid inference for \(\beta _G^0\). To formalise this latter condition, it is convenient to let \(\mathbf {\Theta }\) denote the \(|G| \times (p-|G|)\) matrix \((\mathbf {X}_G^T\mathbf {X}_G)^{-1}\mathbf {X}_G^T \mathbf {X}_{-G}\).

Corollary 2

Consider an asymptotic framework in which \(s=s_n \ge 1\) and \(p=p_n \rightarrow \infty \) as \(n \rightarrow \infty \), but \(\sigma ^2 > 0\) and G are constant. Assume (A1) holds for sufficiently large n (with \(\phi _0\) not depending on n), and also that \(\Vert \mathbf {\Theta }\Vert _\infty = o(s^{-1} \log ^{-1/2} p)\). If we choose \(\lambda := A\sigma \sqrt{\frac{\log p}{n}}\) in the above procedure with constant \(A > 2\sqrt{2}\), then

$$\begin{aligned} n^{1/2}(\hat{\beta }_G - \beta _G^0) \mathop {\rightarrow }\limits ^{d} N_{|G|}\bigl (0,\sigma ^2(n^{-1}\mathbf {X}_G^T\mathbf {X}_G)^{-1}\bigr ). \end{aligned}$$

Proof

We can write

$$\begin{aligned} n^{1/2}(\hat{\beta }_G - \beta _G^0) = n^{1/2}(\mathbf {X}_G^T\mathbf {X}_G)^{-1}\mathbf {X}_G^T\epsilon - \varDelta , \end{aligned}$$

where \(\varDelta := n^{1/2}(\mathbf {X}_G^T\mathbf {X}_G)^{-1}\mathbf {X}_G^T \mathbf {X}_{-G}(\hat{\beta }_{-G} - \beta _{-G}^0)\). Now

$$\begin{aligned} n^{1/2}(\mathbf {X}_G^T\mathbf {X}_G)^{-1}\mathbf {X}_G^T\epsilon \sim N_{|G|}\bigl (0,\sigma ^2(n^{-1}\mathbf {X}_G^T\mathbf {X}_G)^{-1}\bigr ). \end{aligned}$$

Also, from the proof of Theorem 1, on \(\varOmega _0 := \bigl \{\Vert \mathbf {X}_{-G}^T(I-P_G)\epsilon \Vert _\infty /n \le \lambda /2\bigr \}\),

$$\begin{aligned} \Vert \varDelta \Vert _\infty \le \Vert \mathbf {\Theta }\Vert _\infty n^{1/2}\Vert \hat{\beta }_{-G} - \beta _{-G}^0\Vert _1 \le \frac{6A\sigma }{\phi _0^2} \Vert \mathbf {\Theta }\Vert _\infty s\log ^{1/2} p \rightarrow 0. \end{aligned}$$

Since \(\mathbb {P}(\varOmega _0) \rightarrow 1\), the conclusion follows from Slutsky's theorem.

We remark that for \(j \in G^c\), the jth column \(\mathbf {\Theta }_j\) of \(\mathbf {\Theta }\) is the vector of coefficients from the ordinary least squares regression of \(\mathbf {X}_j\) on \(\mathbf {X}_G\). Even though the condition on \(\Vert \mathbf {\Theta }\Vert _\infty \) is strong, it may well be reasonable to suppose that, having pre-specified the index set G of variables that we are interested in, we should avoid including in our model other variables that have substantial correlation with \(\mathbf {X}_G\).
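
To illustrate Corollary 2, one can run a small simulation. The sketch below reuses `partially_penalised_lasso` from the earlier sketch; the sample size, dimension, sparsity level, signal values, design distribution and the choice \(A = 3\) are all hypothetical. It compares the empirical covariance of \(n^{1/2}(\hat{\beta }_G - \beta _G^0)\) with \(\sigma ^2(n^{-1}\mathbf {X}_G^T\mathbf {X}_G)^{-1}\). With independent Gaussian columns the correlations between \(\mathbf {X}_G\) and \(\mathbf {X}_{-G}\) are only approximately zero, so this is an informal check rather than a verification of the condition on \(\Vert \mathbf {\Theta }\Vert _\infty \).

```python
import numpy as np

# Reuses partially_penalised_lasso from the earlier sketch; all constants are
# hypothetical illustrative choices.
rng = np.random.default_rng(0)
n, p, s, sigma, A = 200, 500, 3, 1.0, 3.0
G = [0, 1]

# Fixed design with columns rescaled to Euclidean length n^{1/2}
X = rng.standard_normal((n, p))
X *= np.sqrt(n) / np.linalg.norm(X, axis=0)

beta0 = np.zeros(p)
beta0[G] = [1.0, -1.0]            # beta_G^0
beta0[10:10 + s] = 0.5            # s signal variables outside G
lam = A * sigma * np.sqrt(np.log(p) / n)

reps, errors = 500, []
for _ in range(reps):
    y = X @ beta0 + sigma * rng.standard_normal(n)
    beta_G_hat, _, _ = partially_penalised_lasso(X, y, G, lam)
    errors.append(np.sqrt(n) * (beta_G_hat - beta0[G]))

errors = np.asarray(errors)
target = sigma**2 * np.linalg.inv(X[:, G].T @ X[:, G] / n)
print("empirical covariance:\n", np.cov(errors.T))
print("sigma^2 (n^{-1} X_G^T X_G)^{-1}:\n", target)
```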

2 More complicated settings

Without this strong orthogonality condition, we might instead consider adjusting \(\hat{\beta }_G\) by debiasing or de-sparsifying \(\hat{\beta }_{-G}\). Following van de Geer et al. (2014), we suggest replacing \(\hat{\beta }_{-G}\) by

$$\begin{aligned} \hat{b}_{-G} = \hat{\beta }_{-G} + \frac{1}{n} M \mathbf {X}_{-G}^T(I-P_G)(Y - \mathbf {X}_{-G}\hat{\beta }_{-G}) \end{aligned}$$

for some matrix \(M \in \mathbb {R}^{(p-|G|)\times (p-|G|)}\). This yields the de-biased estimator

$$\begin{aligned} \hat{b}_G&= (\mathbf {X}_G^T\mathbf {X}_G)^{-1}\mathbf {X}_G^T(Y - \mathbf {X}_{-G}\hat{b}_{-G}) \\&= \beta _{G}^0 + (\mathbf {X}_G^T\mathbf {X}_G)^{-1}\mathbf {X}_G^T \epsilon - \frac{1}{n} \mathbf {\Theta }M \mathbf {X}_{-G}^T(I-P_G)\epsilon - R(\hat{\beta }_{-G}-\beta _{-G}^0), \end{aligned}$$

where R is the \(|G| \times (p-|G|)\) matrix given by

$$\begin{aligned} R := \mathbf {\Theta } - \frac{1}{n} \mathbf {\Theta }M \mathbf {X}_{-G}^T(I-P_G)\mathbf {X}_{-G}. \end{aligned}$$

Under our Gaussian errors assumption, \((\mathbf {X}_G^T\mathbf {X}_G)^{-1}\mathbf {X}_G^T \epsilon \) and \(n^{-1}\mathbf {\Theta } M \mathbf {X}_{-G}^T(I-P_G)\epsilon \) are independent centred Gaussian random vectors; thus if the remainder term \(R(\hat{\beta }_{-G}-\beta _{-G}^0)\) is of smaller order, we see that \(\hat{b}_G - \beta _G^0\) is approximately a centred Gaussian random vector. The techniques of van de Geer et al. (2014) or Javanmard and Montanari (2014) might then be used to give asymptotic justifications for Gaussian confidence sets and hypothesis tests concerning \(\beta _G^0\). But another very interesting direction would be to adapt the bootstrap approaches proposed in the current paper to the estimate \(\hat{b}_G\).
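
A sketch of this construction in code is as follows (illustrative only: the matrix M is taken as a given input, and `beta_notG_hat` denotes the output \(\hat{\beta }_{-G}\) of the earlier sketch, ordered as the columns of \(\mathbf {X}_{-G}\)).

```python
import numpy as np

def debiased_estimate(X, y, G, beta_notG_hat, M):
    """Illustrative sketch of the de-biased estimator b_G for a supplied matrix M."""
    n, p = X.shape
    G = np.asarray(G)
    notG = np.setdiff1d(np.arange(p), G)
    XG, XnotG = X[:, G], X[:, notG]

    PG = XG @ np.linalg.solve(XG.T @ XG, XG.T)
    resid = y - XnotG @ beta_notG_hat
    resid_proj = resid - PG @ resid                 # (I - P_G)(y - X_{-G} beta_hat_{-G})

    # De-sparsified estimate of beta_{-G}
    b_notG = beta_notG_hat + M @ (XnotG.T @ resid_proj) / n

    # Plug back in to obtain the de-biased estimate of beta_G
    b_G = np.linalg.solve(XG.T @ XG, XG.T @ (y - XnotG @ b_notG))

    # Bias matrix R = Theta - (1/n) Theta M X_{-G}^T (I - P_G) X_{-G}
    Theta = np.linalg.solve(XG.T @ XG, XG.T @ XnotG)
    schur = XnotG.T @ (XnotG - PG @ XnotG) / n
    R = Theta - Theta @ M @ schur
    return b_G, b_notG, R
```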

As in van de Geer et al. (2014), we should choose M depending on \(\mathbf {X}\) to control

$$\begin{aligned} \delta :=\Vert R(\hat{\beta }_{-G}-\beta _{-G}^0)\Vert _\infty \le \Vert R\Vert _\infty \Vert \hat{\beta }_{-G}-\beta _{-G}^0\Vert _1. \end{aligned}$$

Note that we may write the matrix R in terms of the sample covariance matrix of the covariates \({\hat{\varSigma }} :=\mathbf {X}^T\mathbf {X}/n\) (using obvious notation for the partitioning) as

$$\begin{aligned} R = {\hat{\varSigma }}_{G,G}^{-1}{\hat{\varSigma }}_{G,-G} \bigl (I-M({\hat{\varSigma }}_{-G,-G} -{\hat{\varSigma }}_{-G,G}{\hat{\varSigma }}_{G,G}^{-1}{\hat{\varSigma }}_{G,-G})\bigr ). \end{aligned}$$

Of course, if \(\hat{\varSigma }\) is invertible, then

$$\begin{aligned} ({\hat{\varSigma }}_{-G,-G} -{\hat{\varSigma }}_{-G,G}{\hat{\varSigma }}_{G,G}^{-1}{\hat{\varSigma }}_{G,-G})^{-1} = (\hat{\varSigma }^{-1})_{-G,-G}, \end{aligned}$$

so M can be thought of as an approximation to \((\hat{\varSigma }^{-1})_{-G,-G}\) (even though \(\hat{\varSigma }\) is not invertible when \(p > n\)). In general, we might use concentration inequalities for entries in \({\hat{\varSigma }}\) to control \(\Vert R\Vert _\infty \); if we think of |G| as small, then we only have O(p) entries to control, rather than \(O(p^2)\) as is more typical in these de-biasing problems. We hope to pursue these ideas elsewhere.
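
As a final illustration, and only as a sketch, the code below constructs one simple choice of M, a ridge-regularised inverse of the Schur complement with hypothetical tuning parameter gamma, and reports the entrywise norm \(\Vert R\Vert _\infty \); nodewise Lasso regressions on \((I-P_G)\mathbf {X}_{-G}\), in the spirit of van de Geer et al. (2014), would be a natural alternative construction.

```python
import numpy as np

def ridge_M_and_bias(X, G, gamma=0.05):
    """Illustrative sketch: a simple ridge-regularised choice of M and the resulting
    entrywise norm ||R||_inf; gamma is a hypothetical tuning parameter.  Inverting a
    (p-|G|) x (p-|G|) matrix is only sensible here for moderate p."""
    n, p = X.shape
    G = np.asarray(G)
    notG = np.setdiff1d(np.arange(p), G)
    XG, XnotG = X[:, G], X[:, notG]

    PG = XG @ np.linalg.solve(XG.T @ XG, XG.T)
    # Schur complement Sigma_hat_{-G,-G} - Sigma_hat_{-G,G} Sigma_hat_{G,G}^{-1} Sigma_hat_{G,-G}
    S = XnotG.T @ (XnotG - PG @ XnotG) / n
    M = np.linalg.inv(S + gamma * np.eye(p - len(G)))

    Theta = np.linalg.solve(XG.T @ XG, XG.T @ XnotG)
    R = Theta - Theta @ M @ S
    return M, np.max(np.abs(R))      # ||R||_inf as the maximum absolute entry
```

When \(p > n\) the Schur complement is singular, so this simple choice cannot make \(\Vert R\Vert _\infty \) arbitrarily small; it is intended only to make the roles of M and R concrete.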