1 Introduction

Logistic regression models the odds of success versus failure for binary variables. These models are convenient and useful in many situations. To name one example, in genome-wide association scans (GWAS), a combination of alleles on single nucleotide polymorphisms (SNPs) is either present or not (Cantor et al. 2010). Recently, it became clear that logistic regression can also be used to obtain estimates of connections in a binary network (e.g. Ravikumar et al. 2010; Bühlmann and van de Geer 2011). A particular version of a binary network is the Ising model, in which the probability of a node being 'active' is determined only by its direct neighbours (pairwise interactions only). The Ising model originated in statistical physics, where it was used to model magnetisation of solids (Kindermann et al. 1980; Cipra 1987; Baxter 2007), and was investigated extensively in a statistical modelling context by Besag (1974) and Cressie (1993) and more recently by Marsman et al. (2017), amongst others. Recently, the Ising model has also been applied to modelling networks of mental disorders (Borsboom et al. 2011; van Borkulo et al. 2014). The objective in models of psychopathology is to both explain and predict certain observations, such as the co-occurrence of disorders (comorbidity).

Here we focus on violations of the assumptions of the lasso in logistic regression with high-dimensional data (more parameters than observations, \(p>n\)). In particular, we consider the consequences for both prediction and estimation when violating the assumptions of sparsity and restricted eigenvalues (multicollinearity). For sparse models and \(p>n\), it has been shown that statistical guarantees about the underlying network and its coefficients can be obtained under certain assumptions for Gaussian data (Meinshausen and Bühlmann 2006; Bickel et al. 2009; Hastie et al. 2015), for discrete data (Loh and Wainwright 2012), and for exponential family distributions (Bühlmann and van de Geer 2011). Specifically, Ravikumar et al. (2010) show that, under strong regularity conditions, the correct structure (topology) of a network can be obtained in the high-dimensional setting using a series of regressions for the conditional probability in the Ising model (logistic regression).

In many practical settings, it is uncertain whether the assumptions of the lasso for accurate network estimation hold. Specifically, the assumptions of sparsity and restricted eigenvalues (Bickel et al. 2009) are in many situations untestable. We therefore investigate here how estimation and prediction in Ising networks are affected by violating the sparsity and restricted eigenvalue assumptions. The setting of logistic regression and nodewise estimation of the Ising model parameters allows us to clearly determine how and why prediction and estimation are affected. We use the idea of connected nodes in a graph that are identical in the observations (and call them connected copies) to show why prediction is better for graph structures that violate the restricted eigenvalue or sparsity assumption. Connected copies represent the idea of extreme multicollinearity; one way to view them is as edge weights that lead to a network with perfect correlations between nodes (variables). We therefore compare, in terms of prediction and estimation, different situations in which we violate the restricted eigenvalue or sparsity assumption based on different data generating processes. An example of a setting where near connected copies are found in networks is high-resolution functional magnetic resonance imaging, where the time series of contiguous voxels that are connected (also physically, see e.g. Johansen-Berg et al. 2004) are near exact copies of one another. The concept of connected copies allows us to determine the consequences for prediction loss, using the fact that subsets of connected copies change neither the risk nor the \(\ell _{1}\) norm. We also show that prediction loss is a lower bound for estimation error (in \(\ell _{1}\)), and so, by consequence, if prediction loss increases, so does estimation error.

We first provide some background in Sect.  2 on the Ising model and its relation to logistic regression. To show the consequences of violating the assumptions of multicollinearity and sparsity, we discuss these assumptions at length in Sect. 3. We also show how they provide the statistical guarantees for the lasso (e.g. Negahban et al. 2012; Bühlmann and van de Geer 2011; Ravikumar et al. 2010). Then armed with these intuitions, we give in Sect. 4 some insight into the consequences for prediction and estimation when the sparsity or restricted eigenvalue assumption is violated. We also provide some simulations to confirm our results.

2 Logistic regression and the Ising model

The Ising model is part of the exponential family of distributions (see, e.g. Brown 1986; Young and Smith 2005; Wainwright and Jordan 2008). Let \(X=(X_{1},X_{2},\ldots ,X_{p})\) be a random variable with values in \(\{0,1\}^{p}\). The Ising model can then be defined as follows. Let G be a graph consisting of nodes in \(V=\{1,2,\ldots ,p\}\) and edges \((s,t)\) in \(E\subseteq V\times V\). To each node \(s\in V\) a random variable \(X_{s}\) with values in \(\{0,1\}\) is associated. The probability of each configuration x depends on a main effect (external field) and pairwise interactions. The model is sometimes referred to as the auto-logistic function (Besag 1974), or as a pairwise Markov random field to emphasise that the parameter and sufficient statistic space are limited to pairwise interactions (Wainwright and Jordan 2008). Conditional on all remaining variables (nodes) \(X_{\backslash s}\), each \(x_{s}\in \{0,1\}\) has probability of success \(\pi _{s}:=\mathbb {P}(X_{s}=1\mid x_{\backslash s})\), where \(x_{\backslash s}\) contains all nodes except s. Let \(\xi =(m,A)\) contain all parameters, where the \(p\times p\) matrix A contains the pairwise interaction strengths and the p-vector m contains the main effects (external field). The distribution of configuration x of the Ising model is then

$$\begin{aligned} \mathbb {P}(x) = \frac{1}{Z(\xi )}\exp \left( \sum _{s\in V}m_{s}x_{s}+\sum _{(s,t)\in E}A_{st}x_{s}x_{t}\right) . \end{aligned}$$
(1)

In general, the normalisation \(Z(\xi )\) is intractable, because the sum consists of \(2^{p}\) possible configurations for \(x\in \{0,1\}^{p}\); for example, for \(p=30\), we obtain over a billion (\(2^{30}\approx 1.07\times 10^{9}\)) configurations to evaluate in the sum in \(Z(\xi )\) (see, e.g. Wainwright and Jordan (2008) for lattice (Bethe) approximations).
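As a concrete illustration of this blow-up, the following minimal R sketch (not from the original paper; the edge weights and external field below are arbitrary choices) evaluates \(Z(\xi )\) by brute force for a small p; doubling p from 10 to 30 turns roughly a thousand terms into more than a billion.

```r
## Brute-force evaluation of Z(xi) in (1) for small p (illustrative sketch only).
ising_Z <- function(m, A) {
  p <- length(m)
  configs <- as.matrix(expand.grid(rep(list(0:1), p)))  # all 2^p configurations
  energies <- apply(configs, 1, function(x)
    sum(m * x) + sum(A[upper.tri(A)] * (x %o% x)[upper.tri(A)]))
  sum(exp(energies))
}

set.seed(1)
p <- 10                                    # 2^10 = 1024 terms; p = 30 would need > 10^9
A <- matrix(0, p, p)
A[upper.tri(A)] <- rbinom(p * (p - 1) / 2, 1, 0.1) * 0.5   # arbitrary sparse positive weights
A <- A + t(A)
m <- rep(-1, p)                            # arbitrary external field
ising_Z(m, A)
```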

Alternatively, the conditional distribution does not contain the normalisation constant \(Z(\xi )\) and so is more amenable to analysis. The conditional distribution is again an Ising model (Besag 1974; Kolaczyk 2009)

$$\begin{aligned} \pi _{s}=\mathbb {P}(X_{s}=1\mid x_{\backslash s}) = \frac{\exp \left( m_{s}+\sum _{t:(s,t)\in E}A_{st}x_{t} \right) }{1+\exp \left( m_{s}+\sum _{t:(s,t)\in E}A_{st}x_{t} \right) }. \end{aligned}$$
(2)

It immediately follows that the log odds (Besag 1974) is

$$\begin{aligned} \mu _{s}(x_{\backslash s})=\log \left( \frac{\pi _{s}}{1-\pi _{s}}\right) =m_{s} +\sum _{t:(s,t)\in E}A_{st}x_{t}. \end{aligned}$$
(3)

For each node \(s\in V\), we collect the p parameters \(m_{s}\) and \((A_{st}, t\in V\backslash \{s\})\) in the vector \(\theta\). Note that the log odds \(\theta \mapsto \mu _{\theta }\) is a linear function, and so if \(x=(1,x_{\backslash s})\) then \(\mu _{\theta }=x^\mathsf{T}\theta\). The theory of generalised linear models (GLM) can therefore immediately be applied to yield consistent and efficient estimators of \(\theta\) when sufficient observations are available, i.e. \(p<n\) (Nelder and Wedderburn 1972; Demidenko 2004). To obtain an estimate of \(\theta\) when \(p>n\), we require regularisation or another method (Hastie et al. 2015; Bühlmann et al. 2013).
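For example, with \(n>p\) the conditional model (3) for a single node is an ordinary logistic regression, so base R suffices; the sketch below (our illustration, not code from the paper, and run here on arbitrary binary data rather than a proper Ising sample) fits \(m_{s}\) and \((A_{st})\) for one node with glm.

```r
## Nodewise GLM fit for one node s when n > p (illustrative sketch).
fit_node <- function(x, s) {
  dat <- data.frame(y = x[, s], x[, -s, drop = FALSE])
  glm(y ~ ., data = dat, family = binomial())   # coefficients: m_s and A_st, t != s
}

set.seed(2)
x <- matrix(rbinom(200 * 10, 1, 0.5), nrow = 200, ncol = 10)  # arbitrary 0/1 data
coef(fit_node(x, s = 1))
```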

2.1 Nodewise logistic regression

Meinshausen and Bühlmann (2006) showed that for sparse models the true neighbourhood of a graph can be obtained with high probability by performing a series of conditional regressions with Gaussian random variables. For each node \(s\in V\), the set of nodes with nonzero \(A_{st}\) are determined, culminating in a neighbourhood for each node. Combining these results leads to the complete graph, even when the number of nodes p is much larger than the number of observations n. This is called neighbourhood selection, or nodewise regression. This idea was extended to Bernoulli (Ising) graphs by Ravikumar et al. (2010), but see also van de Geer (2011, chapters 6 and 13). Nodewise regression allows us to use standard logistic regression to determine the neighbourhood for each node. This framework, of course, comes at a cost, and two strong assumptions are required. We discuss these assumptions in Sect. 3.

To estimate the coefficients, Meinshausen and Bühlmann (2006) used a sequential regression procedure for Gaussian data in which each node in turn is treated as the dependent variable and the remaining ones as independent variables. By repeating this analysis for all nodes in V, a neighbourhood estimate of the nonzero parameters among the \(p-1\) candidate neighbours is obtained for each node \(s\in V\). Since each edge is considered twice, the estimates are often combined by either an and-rule, where an edge is obtained if \(\hat{A}_{st}\ne 0\) and \(\hat{A}_{ts}\ne 0\), or an or-rule, where either parameter estimate can be nonzero (Meinshausen and Bühlmann 2006).
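A small sketch of these combination rules (our illustration; A_hat is assumed to be the \(p\times p\) matrix of nodewise estimates, with row s holding the coefficients from the regression on node s):

```r
## Combine the two directed nodewise estimates per pair (s, t) with the and/or rule.
combine_rule <- function(A_hat, rule = c("and", "or")) {
  rule <- match.arg(rule)
  nz   <- A_hat != 0
  keep <- if (rule == "and") nz & t(nz) else nz | t(nz)
  ((A_hat + t(A_hat)) / 2) * keep            # average the two estimates, keep selected edges
}
```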

Ravikumar et al. (2010) translated this procedure to binary variables using pseudo-likelihoods. Recall that \(\theta \mapsto \mu _{\theta }\) is the linear function \(\mu _{\theta _{s}}(x_{\backslash s})= m_{s}+\sum _{t\in V\backslash s}A_{st}x_{t}\) of the conditional Ising model obtained from the log odds (3). The parameters in the p-dimensional vector \(\theta\) are \(m_{s}\) for the intercept (external field) and \((A_{st},t\in V\backslash \{s\})\), representing the connectivity parameters for node s based on all remaining nodes \(V\backslash \{s\}\). Let the \(n\times p\) matrix \(X_{\backslash s}\) consist of the vector of 1s, denoted \(1_{n}\), followed by the columns of all variables except \(X_{s}\). We write \(y_{i}\) for the observation \(x_{i,s}\) of node s, \(x_{i}=(1,x_{i,\backslash s})\) and \(\mu _{i}:=\mu _{i,\theta _{s}}(x_{i,\backslash s})\), dropping the subscript s that indexes the node and reintroducing it only when circumstances demand it. Let the loss function be the negative log of the conditional probability \(\pi\) in (2), known as a pseudo log-likelihood (Besag 1974)

$$\begin{aligned} \psi (y_{i},\mu _{i}) :=-\log \mathbb {P}(y_{i}\mid x_{i}) = -y_{i}\mu _{i} + \log \left( 1+ \exp (\mu _{i})\right) . \end{aligned}$$
(4)

For logistic loss \(\psi\), the theoretical risk is defined as

$$\begin{aligned} R_{\psi }(\mu ) = \frac{1}{n}\sum _{i=1}^{n}\mathbb {E}\psi (y_{i},\mu _{i}). \end{aligned}$$
(5)

The value that optimises the theoretical risk is \(\theta ^{*}=\arg \inf _{\theta \in \mathbb {R}^{p}} R_{\psi }(\mu )\); given the choice of logistic loss, we can do no better than \(\theta ^{*}\) at the population level. Of course, we do not have access to the theoretical risk, and so we use its empirical version

$$\begin{aligned} R_{n,\psi }(\mu )=\frac{1}{n}\sum _{i=1}^{n}\psi (y_{i},\mu _{i}). \end{aligned}$$
(6)
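In code, the loss (4) and the empirical risk (6) for one node are one-liners; this is a sketch of our own, with mu the vector of linear predictors \(x_{i}^\mathsf{T}\theta\):

```r
## Logistic loss (4) and empirical risk (6) for one node.
logistic_loss  <- function(y, mu) -y * mu + log(1 + exp(mu))   # psi(y_i, mu_i)
empirical_risk <- function(y, mu) mean(logistic_loss(y, mu))   # R_{n,psi}(mu)
```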

Define \(\mu ^{*}:=\mu _{\theta ^{*}}(x)\), which uses the optimal value \(\theta ^{*}\) under theoretical risk. For sparse estimation, the \(\ell _{1}\) (lasso) minimisation is given by

$$\begin{aligned} \hat{\theta }=\arg \min _{\theta \in \mathbb {R}^{p}} \left\{ \frac{1}{n}\sum _{i=1}^{n}\psi (y_{i},\mu _{i})+\lambda ||\theta ||_{1} \right\} , \end{aligned}$$
(7)

where \(||\theta ||_{1}=\sum _{t\in V\backslash \{s\}}|\theta _{t}|\) is the \(\ell _{1}\) norm and \(\lambda\) is a fixed penalty parameter. Since both \(\psi\) and \(||\theta ||_{1}\) are convex, the objective function \(R_{n,\psi }+\lambda ||\theta ||_{1}\) in (7) is convex, which allows us to apply convex optimisation. We discuss how to obtain the parameters with the coordinate descent algorithm in Sect. 4.
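A hedged sketch of the nodewise \(\ell _{1}\) minimisation (7) using the glmnet package (our illustration, not the implementation used in the paper; x is an \(n\times p\) binary matrix and lambda is assumed to be chosen externally, e.g. of order \(\sqrt{\log p/n}\)):

```r
## Nodewise l1-penalised logistic regression as in (7), one regression per node.
library(glmnet)

nodewise_lasso <- function(x, lambda) {
  p <- ncol(x)
  A_hat <- matrix(0, p, p)
  m_hat <- numeric(p)
  for (s in seq_len(p)) {
    fit  <- glmnet(x[, -s], x[, s], family = "binomial", lambda = lambda)
    beta <- as.matrix(coef(fit))[, 1]        # intercept first, then the p - 1 slopes
    m_hat[s]     <- beta[1]
    A_hat[s, -s] <- beta[-1]
  }
  list(m = m_hat, A = A_hat)                 # combine A with the and/or rule afterwards
}
```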

Once the parameters are obtained, inference on the network parameters is in general difficult with \(\ell _{1}\) regularisation (Pötscher and Leeb 2009). One solution is to desparsify the estimate by adding a projection of the residuals (van de Geer et al. 2013; Javanmard and Montanari 2014; Zhang and Zhang 2014; Waldorp 2015), which is sometimes referred to as the desparsified lasso. Another type of inference is one where clusters of nodes obtained from the lasso are interpreted instead of individual nodes (Lockhart et al. 2014).

To illustrate the result of an implementation of logistic regression for the Ising model, consider Fig. 1. We generated a random Erdös–Renyi graph (left panel) with \(p=100\) nodes and probability of an edge 0.05, resulting in 258 edges. The igraph package in R was used with erdos.renyi.game (Csardi and Nepusz 2006). To generate data (\(n=50\) observations of the \(p=100\) nodes) from the Ising model, the package IsingSampler was used, and to obtain estimates of the parameters the package IsingFit was used (by Epskamp, see van Borkulo et al. 2014) in combination with the and rule.

Fig. 1
figure 1

Ising networks with \(p=100\) nodes. Left panel: true network used to generate data. Right panel: estimated Ising model with nodewise logistic regression from \(n=50\) generated observations

The recall (true positive rate) for this example was 0.69 and the precision (positive predictive value) was 0.42. So we see that about 30% of the true edges are missing and about 60% of the estimated edges are incorrect. This is not surprising, given that we have 4950 possible edges to determine and only 50 observations. (More details on the simulation are in Sect. 4.2.)
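A hedged sketch of this pipeline is given below; the calls to IsingSampler() and IsingFit() and the weiadj field reflect our reading of those packages and may need adjusting, and the edge weight 0.5 and external field \(-1\) are illustrative choices not stated in the text.

```r
## Sketch: generate an Erdos-Renyi Ising network, sample data, refit, and score recovery.
library(igraph); library(IsingSampler); library(IsingFit)

set.seed(3)
p <- 100; n <- 50
g      <- erdos.renyi.game(p, 0.05)                    # true random graph
A_true <- as.matrix(as_adjacency_matrix(g)) * 0.5      # assumed equal positive edge weights
m_true <- rep(-1, p)                                   # assumed external field

x   <- IsingSampler(n, graph = A_true, thresholds = m_true)   # n x p matrix of 0/1 data
fit <- IsingFit(x, AND = TRUE, plot = FALSE)                  # nodewise lasso with EBIC

true_edge <- A_true[upper.tri(A_true)] != 0
est_edge  <- fit$weiadj[upper.tri(fit$weiadj)] != 0
c(recall    = sum(true_edge & est_edge) / sum(true_edge),
  precision = sum(true_edge & est_edge) / sum(est_edge))
```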

3 Assumptions for prediction and estimation

To determine the consequences of violating the assumptions of the lasso in logistic regression, we first discuss the assumptions required for accurate prediction and estimation. Both prediction and estimation require that the solution is sparse; informally, that the number of non-zero edges in the graph is relatively small (see Assumption 1 below). For accurate estimation, we also require an assumption on the covariance between the nodes in the graph. Several types of assumptions have been proposed (see van de Geer et al. 2009, for an excellent overview and additional results on obtaining the lasso solution), but here we focus on the restricted eigenvalue assumption because of its direct connection to multicollinearity.

3.1 Sparsity

Central to lasso estimation is the assumption that the underlying problem is low dimensional (Bühlmann and van de Geer 2011; Giraud 2014). This is the assumption of sparsity. It is essential because whenever \(p>n\) there is no unique minimiser of the empirical risk \(R_{n,\psi }(\mu )\) defined in (6) (Wainwright 2009). Sparsity can be defined in different ways. The most common is a restriction on the number of nonzero edges, sometimes referred to as coordinate sparsity (Giraud 2014). Let \(S_{0}\) denote the support containing the indices of the nonzero coefficients, i.e., \(S_{0}:=\{j: \theta _{j}\ne 0\}\), and let \(s_{0}=|S_{0}|\) denote its size.

Assumption 1

(Coordinate sparsity) The size \(s_{0}\) of the set of nonzero coefficients \(S_{0}\) in \(\theta ^{*}\) is of order \(o(\sqrt{n/\log p})\).

There are other forms of sparsity, such as fused sparsity, where the support is defined as \(\{j:\theta _{j}-\theta _{j-1}\ne 0\}\). This ensures that there are relatively few jumps in, for instance, a piecewise continuous function (see Giraud 2014, for more details). Another form of sparsity restricts the \(\ell _{1}\) size of the parameter vector \(\theta\). We use this form to show that prediction (classification) in logistic regression is accurate.

Assumption 2

(\(\ell _{1}\)-sparsity) The \(\ell _{1}\) norm of the coefficients \(\theta ^{*}\) is of order \(o(\sqrt{n/\log p})\), i.e. \(||\theta ^{*}||_{1}=o(\sqrt{n/\log p})\).

In logistic regression, there is a natural classifier that predicts whether \(y_{i}\) is 1 or 0. We simply check whether the probability of a 1 is greater than 1/2, that is, whether \(\pi _{i} > 1/2\). Because \(\mu _{i}>0\) if and only if \(\pi _{i}>1/2\), we obtain the natural classifier

$$\begin{aligned} \mathcal {C}(y_{i}) = \mathbb {1}\{ \mu _{i}>0 \}, \end{aligned}$$
(8)

where \(\mathbb {1}\) is the indicator function. The associated classification error is called 0–1 loss, or sometimes Bayes loss (Hastie et al. 2015). Instead of 0–1 loss, we use the logistic loss (4) to determine how well we predict to which class, 0 or 1, the individual observations \(y_{i}\) belong. Define the prediction loss (sometimes called excess risk) with logistic loss \(\psi\) as

$$\begin{aligned} \mathcal {L}_{\psi }(\mu ) = \frac{1}{n}\sum _{i=1}^{n}\mathbb {E}\left( \psi (y_{i},\mu _{i})-\psi (y_{i},\mu _{i}^{*})\right) . \end{aligned}$$
(9)

Note that by definition of \(\theta ^{*}\), \(\mathcal {L}_{\psi }(\mu )\ge 0\) for any \(\theta \mapsto \mu _{\theta }\). A similar definition is possible for 0–1 loss using \(\mathcal {C}\), which we denote by \(\mathcal {L}_{\mathcal {C}}(\mu )\). Bartlett et al. (2003, Theorem 3.3) show that \(\mathcal {L}_{\psi }\rightarrow 0\) implies \(\mathcal {L}_{\mathcal {C}}\rightarrow 0\) as \(n\rightarrow \infty\). In other words, using logistic loss eventually yields the optimal 0–1 (Bayes) loss, and so nothing is lost in using logistic loss as a proxy for 0–1 loss.
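For completeness, the classifier (8) and its 0–1 (Bayes) loss in R (a sketch of our own, complementing the logistic loss defined above):

```r
## Classifier (8) and the corresponding 0-1 (Bayes) loss.
classify   <- function(mu) as.numeric(mu > 0)          # C(y_i) = 1{mu_i > 0}
bayes_loss <- function(y, mu) mean(y != classify(mu))  # empirical 0-1 loss
```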

Prediction has been shown to be accurate under Assumption 2. If the regularisation parameter \(\lambda\) is of order \(O(\sqrt{\log p/n})\), then the prediction loss is bounded above by (Bühlmann and van de Geer 2011)

$$\begin{aligned} \mathcal {L}_{\psi }(\hat{\mu }) + \lambda ||\hat{\theta }||_{1} \le 2\lambda ||\theta ^{*}||_{1}. \end{aligned}$$
(10)

If in addition Assumption 2 holds, where \(||\theta ^{*}||_{1}\) is of order \(o(\sqrt{n/\log p})\), then \(\mathcal {L}_{\psi }(\hat{\mu })=o(1)\). This result is in the Appendix as Lemma 3 and corresponds to that in Ravikumar et al. (2010); see also Bühlmann and van de Geer (2011, Sect. 14.8) for stronger results. The requirement that the regularisation parameter is of order \(O(\sqrt{\log p/n})\) arises because the stochastic part of the prediction loss has to be negligible (see Lemma 2 in the Appendix for details). If we choose \(\lambda\) sufficiently large, we are guaranteed with probability at least \(1-2\exp (-nt^{2})\) for some \(t>0\) that the prediction loss is bounded by the \(\ell _{1}\) norm of the parameter of interest \(\theta ^{*}\) as in (10).

It follows directly from (10) that the lasso estimation error is larger than prediction loss, and so prediction is easier than estimation (see also Hastie et al. 2001). From (10), we get an upper bound on prediction loss

$$\begin{aligned} \mathcal {L}_{\psi }(\hat{\mu }) \le 2\lambda \left( ||\hat{\theta }-\theta ^{*}||_{1}\right) , \end{aligned}$$
(11)

where we used the reverse triangle inequality (see Lemma 4 in the Appendix for details). This again shows that the lasso estimation error dominates the prediction error.

3.2 Restricted eigenvalues

Next to sparsity, the second assumption for the lasso is related to the problem that when \(p>n\) the empirical risk \(R_{n,\psi }\) is not strongly convex and hence no unique solution is available. It turns out that we need to consider a subset of lasso estimation errors \(\delta =\hat{\theta }-\theta ^{*}\) such that strong convexity holds for that subset (Negahban et al. 2012).

Because we have \(p>n\), we cannot obtain strong convexity in general, and so the assumption needs to be relaxed; this is how we arrive at the restricted eigenvalue assumption. Let \(\nabla _{j}\psi (y_{i},x_{i}^\mathsf{T}\theta )\) be the first derivative with respect to \(\theta _{j}\) and \(\nabla ^{2}_{jj}\psi (y_{i},x_{i}^\mathsf{T}\theta )\) the second derivative with respect to \(\theta _{j}\). Demanding strong convexity of the full \(p\times p\) matrix, i.e. \(\nabla ^{2}\psi _{n}(\theta )\ge \gamma I\) with I the identity matrix (we write \(\psi (\theta )\) instead of \(\psi (y,\mu )\) to emphasise the dependence on \(\theta\), with \(\mu =x^\mathsf{T}\theta\)), is something we can never get when \(p>n\) (see the Appendix for more details on strong convexity); what we need is that at least the \(s_{0}\times s_{0}\) submatrix \(\nabla ^{2}_{S_{0}}\psi _{n}(\theta )\) satisfies \(\nabla ^{2}_{S_{0}}\psi _{n}(\theta )\ge \gamma I\). From (10) it follows that the directions of the lasso error \(\delta =\hat{\theta }-\theta ^{*}\) lie in a cone-shaped region with \(||\delta _{S_{0}^{c}}||_{1}\le \alpha ||\delta _{S_{0}}||_{1}\) (see Theorem 1 in the Appendix), and within these directions strong convexity can hold. We refer to this set as \(\mathbb {C}_{\alpha }=\{\delta \in \mathbb {R}^{p}:||\delta _{S_{0}^{c}}||_{1}\le \alpha ||\delta _{S_{0}}||_{1}\}\). In the directions \(\delta \in \mathbb {C}_{\alpha }\) the loss function is strictly larger than 0, except at \(\delta =0\), whereas it is flat and can be 0 for \(\delta \notin \mathbb {C}_{\alpha }\) (see Negahban et al. (2012) or Hastie et al. (2015) for an excellent discussion). This assumption is called the restricted eigenvalue assumption.

The second derivative or Fisher information matrix is

$$\begin{aligned} \nabla ^{2}\psi (\theta )=\frac{1}{n}\sum _{i=1}^{n}\mathbb {E}\pi (\mu _{i})\pi (-\mu _{i})x_{i}x_{i}^\mathsf{T}. \end{aligned}$$
(12)

We assume that this population level matrix is positive definite. Then by strong convexity we have for \(\gamma >0\) that \(\nabla ^{2}\psi \ge \gamma I\), and so

$$\begin{aligned} \mathcal {L}_{\psi }(\hat{\mu })\ge \frac{1}{2}\delta ^\mathsf{T}\nabla ^{2}\psi (\hat{\theta })\delta \ge \frac{\gamma }{2}||\delta ||_{2}^{2}, \end{aligned}$$

which allows us to relate the lasso estimation error to prediction loss such that we can conclude consistency because of the bound on prediction error in (10) (see Lemma  3 in the Appendix). The problem is that we work with the empirical \(p\times p\) matrix \(\nabla ^{2}\psi _{n}(\theta )\) which is necessarily singular since \(p>n\). The empirical Fisher information is

$$\begin{aligned} \nabla ^{2}\psi _{n}(\theta )=\frac{1}{n}\sum _{i=1}^{n}\pi (\mu _{i})\pi (-\mu _{i})x_{i}x_{i}^\mathsf{T}, \end{aligned}$$
(13)

which necessarily has zero eigenvalues whenever \(p>n\), since its rank is at most n. Bickel et al. (2009) suggested the restricted eigenvalue assumption, which is sufficient to guarantee that \(\nabla ^{2}\psi _{n}(\theta )\) behaves like a positive definite matrix along lasso errors \(\delta \in \mathbb {C}_{\alpha }\). Here, in the setting of nodewise logistic regression, we require that a single lower bound \(\gamma _{G}>0\) suffices for all nodes s simultaneously, and we take \(\alpha =1\). We emphasise the nodewise estimation of all edges in E by writing \(\psi _{s}\) and \(\delta _{s}\).
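The empirical Fisher information (13) and the smallest eigenvalue of its \(S_{0}\times S_{0}\) submatrix are easy to inspect numerically; the sketch below (our illustration) makes explicit that duplicated columns in x, the connected copies of Sect. 4, drive this eigenvalue to 0 and hence break the RE assumption.

```r
## Empirical Fisher information (13) for one node and the smallest eigenvalue on S0.
fisher_info <- function(x, theta) {
  mu <- as.numeric(x %*% theta)            # x includes the column of 1s
  w  <- plogis(mu) * plogis(-mu)           # pi(mu_i) * pi(-mu_i)
  crossprod(x, w * x) / nrow(x)            # (1/n) sum_i w_i x_i x_i^T
}

min_eig_S0 <- function(x, theta, S0) {
  min(eigen(fisher_info(x, theta)[S0, S0], symmetric = TRUE)$values)
}
```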

Assumption 3

(Restricted eigenvalue) The population Fisher information matrix \(\nabla ^{2}\psi _{s}\) of dimensions \(p\times p\) is nonsingular and \(\max _{j}\nabla ^{2}_{jj}\psi _{s}(\theta )<K\), for some \(K>0\) and for all \(s\in V\). The empirical matrix \(\nabla ^{2}\psi _{n,s}(\theta )\) satisfies the restricted eigenvalue (RE) assumption if for some \(\gamma _{G}>0\) it holds that

$$\begin{aligned} \min _{s\in V}\frac{\delta _{s}^\mathsf{T}\nabla ^{2}\psi _{n,s}(\theta )\delta _{s}}{||\delta _{s}||_{2}^{2}} \ge \gamma _{G}\qquad \text{for all} \quad 0\ne \delta _{s}\in \mathbb {C}_{1}. \end{aligned}$$
(14)

The restricted eigenvalue assumption has been investigated in the context of Gaussian data (Bickel et al. 2009; Wainwright 2009; Raskutti et al. 2010; Hastie et al. 2015, chapter 11), in the setting of the Ising model (Ravikumar et al. 2010, Lemma 3), and in generalised linear models (Van de Geer 2008; Bühlmann and van de Geer 2011, chapter 6). The original restricted eigenvalue assumption as presented in Bickel et al. (2009) is slightly stronger than the compatibility assumption of van de Geer et al. (2009). See van de Geer et al. (2009) for more details on the compatibility and other assumptions used to bound the estimation error of the lasso. Here we use the RE assumption because of its direct connection to multicollinearity, discussed in Sect. 4.

Let \(\theta _{S}\) be the vector with entries \(\theta _{t}\mathbb {1}\{t\in S\}\) for each \(t\in V\). It follows that \(\theta =\theta _{S}+\theta _{S^{c}}\), where \(S^{c}\) is the complement of S. The RE assumption implies that the \(s_{0}\times s_{0}\) submatrix \(\nabla ^{2}_{S_{0}}\psi _{n}(\theta )\) indexed by \(S_{0}\) has smallest eigenvalue \(>0\). This can be seen as follows. Any \(\delta\) with \(\delta _{S_{0}}\ne 0\) and \(\delta _{S_{0}^{c}}=0\) satisfies \(||\delta _{S_{0}^{c}}||_{1}\le ||\delta _{S_{0}}||_{1}\), so that \(\delta \in \mathbb {C}_{1}\), and the RE assumption then gives \(\delta ^\mathsf{T}(\nabla ^{2}\psi _{n})\delta >0\). This implies that for some \(\gamma _{G}>0\)

$$\begin{aligned} \nabla ^{2}_{S_{0}}\psi _{n}(\theta )\ge \gamma _{G} I \end{aligned}$$

and so we have restricted strong convexity for all such \(\delta \in \mathbb {C}_{1}\). The two assumptions, Assumption 1 on coordinate sparsity and Assumption 3 on restricted eigenvalues, make it possible to derive the \(\ell _{1}\) estimation error bound (see Theorem 1 in the Appendix for details)

$$\begin{aligned} \max _{s\in V}||\delta _{s}||_{1}=\max _{s\in V}||\hat{\theta }_{s}-\theta _{s}^{*}||_{1} \le \frac{16}{\gamma _{G}}s_{0}\lambda . \end{aligned}$$
(15)

The bound corresponds to the one given in Negahban et al. (2012, Corollary 2, discussed in Sect. 4.4), and the one in Bühlmann and van de Geer (2011, Lemma 6.8). Because we require the smallest \(\gamma\) such that the RE assumption holds, we have that this bound holds simultaneously for all nodes in the Ising graph.

The bounds on prediction and estimation are important because they make precise the circumstances under which the statistical guarantees hold. However, in many practical situations we cannot be certain that the assumptions of sparsity and restricted eigenvalues are satisfied, and these assumptions cannot be checked. It therefore becomes relevant to know what the consequences for prediction and estimation are when the assumptions do not hold. This is what we investigate next.

4 Violation of sparsity and restricted eigenvalues

If we violate either the sparsity or the restricted eigenvalue assumption, then we would expect lasso estimation error to become worse, and indeed this happens. However, this is not so clear for prediction. In fact, it turns out that prediction becomes better for non-sparse models that violate the restricted eigenvalue (RE) assumption. Our main result is that violating the RE or sparsity assumption leads to a decrease in empirical risk, and hence in prediction loss. The RE assumption is violated by an extreme case of multicollinearity, namely where some nodes are copies of other nodes. When such copies are connected, we call them connected copies. In connected copies, the coefficients are proportional to the original ones, such that we do not arbitrarily change the data generating process. One way to view connected copies is to find multiplicative constants for the edge weights that lead to a network with perfect correlations between nodes. We therefore compare, in terms of prediction and estimation, different situations in which we violate the RE or sparsity assumption based on different data generating processes. Proposition 1 shows that the number of connected copies co-determines the decrease in empirical risk, and hence that violating the RE assumption leads to a decrease in risk. Next, in Corollary 1, we show that violating the sparsity assumption leads to either a decrease or an increase of empirical risk, depending on whether the sum of the coefficients in the additional subset of nodes is positive or negative, respectively. We illustrate the theoretical results with some simulations in Sect. 4.2.

4.1 Connected copies

Suppose that for some nodes \(s,t\in V\) we have that the observations are identical, that is, \(x_{i,s}=x_{i,t}\) for all i. Then the coefficients obtained with the lasso using the quadratic approximation to the logistic loss in coordinate descent will be identical, i.e. \(\hat{\theta }_{s}=\hat{\theta }_{t}\) (Hastie et al. 2015, see also the Appendix for a discussion of the coordinate descent algorithm). This can be seen from the following considerations. By (13) we have that element (ss) of the second derivative matrix is

$$\begin{aligned} \nabla ^{2}_{ss}\psi _{n}=\frac{1}{n}\sum _{i=1}^{n}\pi (\mu _{i})\pi (-\mu _{i})x_{i,s}^{2}, \end{aligned}$$

and this is the same for element (tt) since \(x_{s}=x_{t}\). Similarly, for the sth element \(\nabla _{s}\psi _{n}\), we obtain

$$\begin{aligned} \nabla _{s}\psi _{n} = \frac{1}{n}\sum _{i=1}^{n}(-y_{i}+\pi (\mu _{i}))x_{i,s}, \end{aligned}$$

which equals \(\nabla _{t}\psi _{n}\) because \(x_{s}=x_{t}\). In the coordinate descent algorithm, the updating scheme using the quadratic approximation (see the Appendix for details) is at time \(q+1\)

$$\begin{aligned} \theta ^{q+1}_{j} = \theta ^{q}_{j} - {\left\{ \begin{array}{ll} (\nabla ^{2}_{jj}\psi ^{q})^{-1}\nabla _{j}\psi ^{q} -\lambda &{}\quad \text {if } (\nabla ^{2}_{jj}\psi ^{q})^{-1}\nabla _{j}\psi ^{q} >\lambda \\ (\nabla ^{2}_{jj}\psi ^{q})^{-1}\nabla _{j}\psi ^{q} +\lambda &{}\quad \text {if } (\nabla ^{2}_{jj}\psi ^{q})^{-1}\nabla _{j}\psi ^{q} <-\lambda \\ 0 &{}\quad \text {if } |(\nabla ^{2}_{jj}\psi ^{q})^{-1}\nabla _{j}\psi ^{q}| \le \lambda , \end{array}\right. } \end{aligned}$$
(16)

where \((\nabla ^{2}_{jj}\psi ^{q})^{-1}\) is the reciprocal of element (j, j) of the second-order derivative matrix \(\nabla ^{2}\psi ^{q}\) at step q of the coordinate descent algorithm. At each step q, the coordinate descent algorithm therefore computes the same quantity \((\nabla ^{2}_{ss}\psi ^{q}_{n})^{-1}\nabla _{s}\psi ^{q}_{n}\) for both nodes s and t, implying that the coefficients are the same. So for each node in the nodewise regressions, we obtain a Fisher matrix in which column s is the same as column t. Now if both s and t are in \(S_{0}\), then the smallest eigenvalue of \(\nabla ^{2}\psi _{n,S_{0}}\) is 0, and hence the RE assumption is violated. We will use this idea of identical nodes to explain why prediction improves when we violate the RE assumption.
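The quantities entering update (16) are simple averages, as the following sketch (our own) shows; because both depend on column j of x only through \(x_{i,j}\), identical columns s and t give identical gradients and curvatures and hence identical updates.

```r
## Gradient and diagonal curvature used in coordinate update (16) for coordinate j.
grad_j <- function(x, y, theta, j) {
  mu <- as.numeric(x %*% theta)
  mean((-y + plogis(mu)) * x[, j])                  # nabla_j psi_n
}
hess_jj <- function(x, theta, j) {
  mu <- as.numeric(x %*% theta)
  mean(plogis(mu) * plogis(-mu) * x[, j]^2)         # nabla^2_jj psi_n
}
```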

We call a node t in the subset \(L\subset V\) a connected copy of \(s\in K=V\backslash L\) if \((s,t)\in E\) and \(x_{t}=x_{s}\). This says that two directly connected nodes are identical to each other in all n observations. Note that the coefficient between a connected copy and its original must be positive; if the coefficient were negative, then the connected copy would also have to be the reverse of its original, which cannot be true because the variables are defined to be identical. We know from estimation theory that if a node is a connected copy, then the lasso solution is no longer unique (Hastie et al. 2015). In fact, if t is a connected copy of s, then all solutions with \(\alpha \hat{\theta }_{s}\) and \((1-\alpha )\hat{\theta }_{t}\), where \(0\le \alpha \le 1\) and \(\hat{\theta }_{s}\), \(\hat{\theta }_{t}\) are the estimates of the parameters of nodes s and t, respectively, result in the same empirical risk \(R_{n,\psi }\) as when those connected copies have been deleted. Similarly, we will have the same \(\ell _{1}\) norm as when the connected copies have been deleted. As a consequence, we cannot distinguish between the situation with or without the connected copy in \(\ell _{1}\) optimisation. We denote by \(L_{s}\) the set of all connected copies \(t\in L_{s}\) of \(s\in K\), which partitions L in the sense that \(L_{s}\cap L_{s'}=\varnothing\) for \(s\ne s'\) and \(\cup _{s\in K} L_{s}=L\). We denote the parameter vector where the connected copies in L have been deleted by \(\theta _{\backslash L}\) and correspondingly \(\mu _{\backslash L}=x^\mathsf{T}_{\backslash L}\theta _{\backslash L}\).

Lemma 1

In the Ising graph \(G=(V,E)\), suppose nodes in \(L\subset V\) are connected copies of nodes in \(K=V\backslash L\). Furthermore, the nodewise lasso solutions \(\hat{\theta }\) are obtained with (7), where for each node \(s\in K\) and each of its connected copies \(t\in L_{s}\), with weight \(\alpha _{t}\hat{\theta }_{t}\), we have that \(\sum _{t\in L_{s}}\alpha _{t}=1\). Then the empirical risk \(R_{n,\psi }(\hat{\mu })\) and the \(\ell _{1}\) norm of \(\hat{\theta }\) are the same as when the connected copies in L are deleted, i.e. \(R_{n,\psi }(\hat{\mu })=R_{n,\psi }(\hat{\mu }_{\backslash L})\) and \(||\hat{\theta }||_{1}=||\hat{\theta }_{\backslash L}||_{1}\).

So the non-uniqueness of the lasso in the case of a connected copy results in exactly the same value for the empirical risk, whether we delete the copy or take any one of the weighted versions in which the coefficients sum to 1. Note that we do not change the underlying process in any arbitrary way; the nodes are connected and the coefficients remain proportional to the original ones. We immediately obtain that the size |L| of the set of connected copies co-determines the prediction loss. We obtain this result because the coefficients of the connected copies with respect to their originals are positive.
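A small numerical check of Lemma 1 (our sketch, with arbitrary coefficient values): splitting the coefficient of node s over s and an identical copy t leaves the linear predictor, and hence the empirical risk and the \(\ell _{1}\) norm, unchanged.

```r
## Splitting a coefficient over a connected copy does not change risk or l1 norm.
risk <- function(y, mu) mean(-y * mu + log(1 + exp(mu)))   # R_{n,psi}, cf. (4) and (6)

set.seed(4)
n   <- 50
x_s <- rbinom(n, 1, 0.5)
x   <- cbind(1, x_s, rbinom(n, 1, 0.5))       # intercept, node s, one other node
y   <- rbinom(n, 1, plogis(-0.5 + x_s))       # arbitrary generating model
theta <- c(-0.4, 0.8, 0.2)                    # some estimate without the copy

alpha   <- 0.3                                # any split with weights summing to 1
x_copy  <- cbind(x, x_s)                      # add the connected copy t
theta_c <- c(theta[1], alpha * theta[2], theta[3], (1 - alpha) * theta[2])

all.equal(risk(y, as.numeric(x %*% theta)), risk(y, as.numeric(x_copy %*% theta_c)))  # TRUE
all.equal(sum(abs(theta[-1])), sum(abs(theta_c[-1])))                                 # TRUE
```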

Proposition 1

For the Ising graph, let \(L_{1}\) and \(L_{2}\) be subsets of connected copies of nodes in \(V\backslash (L_{1}\cup L_{2})\) such that \(L_{1}\subset L_{2}\) and hence \(|L_{1}|< |L_{2}|\). Then the sum of the coefficients in \(L_{1}^{c}\cap L_{2}\) is \(>0\), and for the empirical risk we have \(R_{n,\psi }(\hat{\mu }_{\backslash L_{1}})\ge R_{n,\psi }(\hat{\mu }_{\backslash L_{2}})\).

This follows directly from Lemma 1, since there we saw that the empirical risk including connected copies is equal to the empirical risk when those connected copies are deleted. This idea explains why the empirical risk decreases as a function of an increasing number of connected copies.

The same idea can be used to determine why prediction becomes better for non-sparse sets. Proposition 1 can be altered such that a similar result holds for sparsity, where we do not need the connected copies. The only requirement is that we know the sign of the sum of the coefficients in the larger set, because the nodes need not be connected in this case. Let \(S_{a}\) be a set of nodes with a possibly non-sparse set of nonzero edges, in the sense that \(|S_{a}|\) is not of order \(o(\sqrt{n/\log p})\). Suppose that \(S_{0}\subset S_{a}\) so that \(|S_{0}|<|S_{a}|\).

Corollary 1

In the Ising graph \(G=(V,E)\) suppose that we have a particular, not necessarily sparse, node set with nonzero edges in \(S_{a}\), and define the subset \(S_{0}\subset S_{a}\). Then we obtain for the empirical risk \(R_{n,\psi }\) that

  1. if the sum of coefficients in \(S_{0}^{c}\cap S_{a}\) is \(>0\), then \(R_{n,\psi }(\hat{\mu }_{\backslash S_{0}})\ge R_{n,\psi }(\hat{\mu }_{\backslash S_{a}})\);

  2. if the sum of coefficients in \(S_{0}^{c}\cap S_{a}\) is \(<0\), then \(R_{n,\psi }(\hat{\mu }_{\backslash S_{0}})\le R_{n,\psi }(\hat{\mu }_{\backslash S_{a}})\).

We see that, by dropping the requirement of connectedness, prediction loss still decreases, provided the sum of the additional nonzero coefficients (those in \(S_{0}^{c}\cap S_{a}\)) is positive.

We focus here on prediction loss because by (11) we have that the \(\ell _{1}\) estimation error is larger than prediction loss (given that the penalty parameter \(\lambda\) is of the right order), and hence if we find that prediction loss becomes higher, it follows that \(\ell _{1}\) estimation error becomes larger.

The ideas presented above on violating the sparsity or restricted eigenvalue assumption are confirmed by the numerical illustrations below.

4.2 Numerical illustration

To show the effects of non-sparse underlying representations and of violating the restricted eigenvalue assumption (multicollinearity), we performed some simulation studies. Here 0–1 data were generated with a Metropolis–Hastings algorithm, implemented in the R package IsingSampler (van Borkulo et al. 2014), according to a random graph (Erdös–Renyi) with \(p=100\) nodes and \(n=50\) observations. All edge coefficients were positive, so that we expect the prediction error to improve with increasing collinearity. Sparsity of the graph was varied through the probability of an edge, from \(p_{e}=0.025\), which complies with the sparsity assumption, to \(p_{e}=0.2\), which does not. For interpretation, we defined sparsity in these simulations as \(1-p_{e}\), so that high sparsity means few non-zero edges. Multicollinearity was induced by equating two columns of the data X for a fraction \(\alpha\) of the edges in the true graph, with \(\alpha\) ranging from 0 to 0.6. This ensured that the smallest \(\alpha s_{0}\) eigenvalues of the submatrix \(\nabla ^{2}\psi _{n,S_{0}}\) are 0, thereby violating the RE assumption.

The parameters m for the nodes and A for the interactions were estimated by nodewise logistic regressions, implemented in IsingFit (by Epskamp, see van Borkulo et al. 2014). Here the extended Bayesian information criterion (EBIC) is used to determine the optimal \(\lambda\) for each logistic regression separately (Foygel and Drton 2013). This procedure was run 100 times, and the averages across these runs (and nodes) are presented. We evaluated estimation accuracy by recall (\(|\hat{S}\cap S_{0}|/ |S_{0}|\)) and precision (\(|\hat{S}\cap S_{0}|/ |\hat{S}|\)). We also used a scaled \(\ell _{1}\) norm for the estimation error, \(||\delta ||_{1}/u\), where \(\delta =\hat{\theta }-\theta ^{*}\) and u is the maximal value obtained. Prediction was evaluated by logistic loss \(\psi\) and Bayes loss \(\mathcal {C}\). We determined the loss on data \(z_{i}\) independent of the data \(y_{i}\) on which the estimate \(\hat{\theta }\) is based (predictive risk).
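A condensed, hedged sketch of one simulation run is given below; the IsingSampler()/IsingFit() calls and the weiadj and thresholds fields reflect our reading of those packages and may need adjusting, and the edge weight 0.5 and external field \(-1\) are illustrative choices.

```r
## One simulation run: generate data, induce collinearity on a fraction alpha of the
## true edges by equating columns, refit, and return recovery and predictive loss.
library(igraph); library(IsingSampler); library(IsingFit)

one_run <- function(p = 100, n = 50, p_e = 0.05, alpha = 0.3, weight = 0.5) {
  A <- as.matrix(as_adjacency_matrix(erdos.renyi.game(p, p_e))) * weight
  m <- rep(-1, p)
  x <- IsingSampler(n, graph = A, thresholds = m)     # training data
  z <- IsingSampler(n, graph = A, thresholds = m)     # independent test data

  edges  <- which(upper.tri(A) & A != 0, arr.ind = TRUE)
  picked <- edges[sample(nrow(edges), ceiling(alpha * nrow(edges))), , drop = FALSE]
  for (k in seq_len(nrow(picked))) x[, picked[k, 2]] <- x[, picked[k, 1]]

  fit <- IsingFit(x, AND = TRUE, gamma = 0.25, plot = FALSE)
  est <- fit$weiadj

  tru <- A[upper.tri(A)] != 0; hat <- est[upper.tri(est)] != 0
  loss <- sapply(seq_len(p), function(s) {
    mu <- fit$thresholds[s] + as.numeric(z[, -s] %*% est[s, -s])
    c(logistic = mean(-z[, s] * mu + log(1 + exp(mu))),     # psi on test data
      bayes    = mean(z[, s] != as.numeric(mu > 0)))        # 0-1 loss on test data
  })
  c(recall = sum(tru & hat) / sum(tru), precision = sum(tru & hat) / sum(hat), rowMeans(loss))
}
```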

Fig. 2
figure 2

Performance measures of constructing networks with the lasso as a function of sparsity, where sparsity is defined as \(1-p_{e}\), the reverse of the edge probability. In a, Bayes loss (red circles) and logistic loss (blue triangles); in b, recovery in terms of recall (red circles) and precision (blue triangles) and the scaled \(\ell _{1}\) norm of the error (black diamonds)

Figure 2b shows that recovery of parameters is accurate when sparsity is high (few non-zero edges), but recovery becomes poor when sparsity does not hold, from sparsity 0.95 and lower. This is seen in all three measures: recall, precision and the scaled \(\ell _{1}\) norm. In contrast, the 0–1 loss from (8) and the logistic loss in (4) actually become better (the loss decreases) when the data generating process is no longer sparse, as can be seen in Fig. 2a. This corresponds to Corollary 1, which shows that sparsity is not necessary for accurate prediction. We do require that the penalty parameter \(\lambda\) is of the appropriate order (i.e. \(\lambda =O(\sqrt{\log p/n})\)); here \(\lambda\) was selected by the EBIC (Foygel and Drton 2013), which yielded a penalty of this order. The EBIC has an additional hyperparameter \(\gamma\) to control the impact of the size of the search domain; we set \(\gamma\) to 0.25, in line with the reasonable performance obtained in Foygel and Drton (2013). Prediction loss is high at high sparsity because in that case each node has only about 2–3 edges in the simulation, which means that predicting a node from the other nodes is extremely difficult.

In Fig. 3, the results can be seen when multicollinearity is varied. As expected, Fig. 3b shows that increasing multicollinearity reduced recovery; both recall and precision decreased to around 10%. Prediction loss, on the other hand, becomes smaller, as shown in Fig. 3a, indicating better prediction for multicollinear data. This is in line with Proposition 1. We can also think of it in the following way. With increasing multicollinearity \(\alpha\), more identical columns of X occur for connected nodes. This leads to more similar behaviour of connected nodes in the Ising network and hence to better prediction.

Fig. 3
figure 3

Performance measures of constructing networks with the lasso as a function of collinearity (\(\alpha\)); collinearity is defined as the probability of identical observations for two nodes whenever these nodes are connected. In a, Bayes loss (red circles) and logistic loss (blue triangles); in b, recovery in terms of recall (red circles) and precision (blue triangles) and the scaled \(\ell _{1}\) norm of the error (black diamonds)

These results demonstrate that when either the sparsity assumption or the multicollinearity (RE) assumption is violated, the prediction loss decreases, making prediction better, while the estimation error increases. Hence, the estimated network that predicts well will not be similar to the true underlying network. On the other hand, if the assumptions of sparsity and RE hold, then many of the edges in the Ising model are estimated correctly, but because of the high-dimensional setting many true edges are also missed. And since in sparse settings fewer edges are present that determine the prediction, prediction is poorer.

5 Discussion

Logistic regression is an appropriate tool for prediction and estimation of the parameters of the Ising model. Statistical guarantees have been given for prediction and estimation of the parameters of the Ising model using a sequence of logistic regressions whenever at least the assumptions of sparsity and restricted eigenvalues hold. Here we focused on violations of these assumptions and showed why prediction becomes better whenever sparsity or restricted eigenvalues do not hold. Intuitively, for prediction the underlying structure of the graph is irrelevant, and when nodes behave similarly, prediction becomes easier. To confirm these intuitions we showed, using connected copies, that prediction loss can decrease as a function of multicollinearity and sparsity: when multicollinearity increases or sparsity decreases, prediction loss decreases. Because prediction loss can be considered a lower bound for estimation error (albeit not a tight bound), estimation error is seen to become worse (increase) as multicollinearity increases or sparsity decreases. Our simulations support these findings and additionally show that recovery in terms of precision and recall becomes worse when the assumptions of sparsity and restricted eigenvalues (multicollinearity) are violated.

The concept of connected copies used here is of course an idealisation of reality. Connected copies can be seen as a way to compare prediction and estimation for different structures (topologies) of graphs, where a connected copy is an extreme case in which the correlation between two variables is 1. We required this idealisation to obtain the analytical results. In practice, we will not encounter \(x_{s}=x_{t}\) but rather \(x_{s}\approx x_{t}\), a case that is much more difficult to treat analytically. When \(x_{s}\approx x_{t}\), the parameter estimates will not be equal, and the result would depend on the exact differences between the estimates. But if we suppose that the signs of all the coefficients are positive, say, then we would expect similar behaviour of the empirical risk, based on the results of Proposition 1 and Corollary 1.

We showed here the consequences of violating the restricted eigenvalue and sparsity assumptions in the Ising model using logistic regression. The next step is to generalise these results to exponential family distributions. This will require additional restrictions, such as the margin condition, which bridges the gap between estimation error and prediction loss. Because for logistic regression we have the linear functional \(\mu =\theta ^\mathsf{T}x\), we obtain a quadratic margin. For logistic regression, the margin condition then implies that \(||\hat{\mu }-\mu ^{*}||_{2}^{2} \ge \gamma ||\delta ||_{2}^{2}\), where \(\delta =\hat{\theta }-\theta ^{*}\), using strong convexity of \(\frac{1}{n}\sum _{i=1}^{n}x_{i}x_{i}^\mathsf{T}\). But the margin condition does not hold in general, and so additional assumptions are required (see Bühlmann and van de Geer 2011) to apply the current analysis of the consequences of violating RE and sparsity to estimation and prediction.