Abstract
We introduce a class of \(M \times M\) sample covariance matrices \({\mathcal {Q}}\) which subsumes and generalizes several previous models. The associated population covariance matrix \(\Sigma = \mathbb {E}{\mathcal {Q}}\) is assumed to differ from the identity by a matrix of bounded rank. All quantities except the rank of \(\Sigma - I_M\) may depend on \(M\) in an arbitrary fashion. We investigate the principal components, i.e. the top eigenvalues and eigenvectors, of \({\mathcal {Q}}\). We derive precise large deviation estimates on the generalized components \(\langle {\mathbf{{w}}} , {\varvec{\xi }_i}\rangle \) of the outlier and non-outlier eigenvectors \(\varvec{\xi }_i\). Our results also hold near the so-called BBP transition, where outliers are created or annihilated, and for degenerate or near-degenerate outliers. We believe the obtained rates of convergence to be optimal. In addition, we derive the asymptotic distribution of the generalized components of the non-outlier eigenvectors. A novel observation arising from our results is that, unlike the eigenvalues, the eigenvectors of the principal components contain information about the subcritical spikes of \(\Sigma \). The proofs use several results on the eigenvalues and eigenvectors of the uncorrelated matrix \({\mathcal {Q}}\), satisfying \(\mathbb {E}{\mathcal {Q}} = I_M\), as input: the isotropic local Marchenko–Pastur law established in Bloemendal et al. (Electron J Probab 19:1–53, 2014), level repulsion, and quantum unique ergodicity of the eigenvectors. The latter is a special case of a new universality result for the joint eigenvalue–eigenvector distribution.
1 Introduction
In this paper we investigate \(M \times M\) sample covariance matrices of the form
$$\begin{aligned} {\mathcal {Q}} \mathrel {\mathop :}=\frac{1}{N} A A^* , \end{aligned}$$(1.1)
where the sample matrix \(A = (A_{i \mu })\) is a real \(M \times N\) random matrix. The main motivation to study such models stems from multivariate statistics. Suppose we are interested in the statistics of \(M\) mean-zero variables \(\mathbf{{a}} = (a_1,\ldots , a_M)^*\) which are thought to possess a certain degree of interdependence. Such problems of multivariate statistics commonly arise in population genetics, economics, wireless communication, the physics of mixtures, and statistical learning [3, 26, 33]. The goal is to unravel the interdependencies among the variables \(\mathbf{{a}}\) by finding the population covariance matrix
$$\begin{aligned} \Sigma = \mathbb {E}\mathbf{{a}} \mathbf{{a}}^* . \end{aligned}$$(1.2)
To this end, one performs a large number, \(N\), of repeated, independent measurements, called “samples”, of the variables \(\mathbf{{a}}\). Let \(A_{i\mu }\) denote the value of \(a_i\) in the \(\mu \)-th sample. Then the sample covariance matrix (1.1) is the empirical mean approximating the population covariance matrix \(\Sigma \).
In general, the mean of the variables \(\mathbf{{a}}\) is nonzero and unknown. In that case, the population covariance matrix (1.2) has to be replaced with the general form
$$\begin{aligned} \Sigma = \mathbb {E}\mathbf{{a}} \mathbf{{a}}^* - (\mathbb {E}\mathbf{{a}}) (\mathbb {E}\mathbf{{a}})^* . \end{aligned}$$
Correspondingly, one has to subtract from \(A_{i \mu }\) the empirical mean of the \(i\)-th row of \(A\), which we denote by \([A]_i \mathrel {\mathop :}=\frac{1}{N} \sum _{\mu = 1}^N A_{i \mu }\). Hence, we replace (1.1) with
$$\begin{aligned} \dot{{\mathcal {Q}}} \mathrel {\mathop :}=\frac{1}{N - 1} A (I_N - \mathbf{{e}} \mathbf{{e}}^*) A^* , \end{aligned}$$(1.3)
where we introduced the vector
$$\begin{aligned} \mathbf{{e}} \mathrel {\mathop :}=N^{-1/2} (1, 1, \ldots , 1)^* \in \mathbb {R}^N . \end{aligned}$$(1.4)
Since \(\dot{{\mathcal {Q}}}\) is invariant under the shift \(A_{i\mu } \mapsto A_{i \mu } + m_i\) for any deterministic vector \((m_i)_{i = 1}^M\), we may assume without loss of generality that \(\mathbb {E}A_{i \mu } = 0\). We shall always make this assumption from now on.
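This invariance may be verified directly from (1.3): writing the shift as \(A \mapsto A + \sqrt{N} \, \mathbf{{m}} \mathbf{{e}}^*\) with \(\mathbf{{m}} = (m_i)_{i = 1}^M\), and using \(\mathbf{{e}}^* (I_N - \mathbf{{e}} \mathbf{{e}}^*) = 0\), we find
$$\begin{aligned} \bigl (A + \sqrt{N} \, \mathbf{{m}} \mathbf{{e}}^* \bigr ) (I_N - \mathbf{{e}} \mathbf{{e}}^*) = A (I_N - \mathbf{{e}} \mathbf{{e}}^*) , \end{aligned}$$
so that \(\dot{{\mathcal {Q}}}\) is indeed unchanged.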
It is easy to check that \(\mathbb {E}{\mathcal {Q}} = \mathbb {E}\dot{{\mathcal {Q}}} = \Sigma \). Moreover, we shall see that the principal components of \({\mathcal {Q}}\) and \(\dot{{\mathcal {Q}}}\) have identical asymptotic behaviour. For simplicity of presentation, in the following we focus mainly on \({\mathcal {Q}}\), bearing in mind that every statement we make on \({\mathcal {Q}}\) also holds verbatim for \(\dot{{\mathcal {Q}}}\) (see Theorem 2.23 below).
By the law of large numbers, if \(M\) is fixed and \(N\) taken to infinity, the sample covariance matrix \({\mathcal {Q}}\) converges almost surely to the population covariance matrix \(\Sigma \). In many modern applications, however, the population size \(M\) is very large and obtaining samples is costly. Thus, one is typically interested in the regime where \(M\) is of the same order as \(N\), or even larger. In this case, as it turns out, the behaviour of \({\mathcal {Q}}\) changes dramatically and the problem becomes much more difficult. In principal component analysis, one seeks to understand the correlations by considering the principal components, i.e. the top eigenvalues and associated eigenvectors, of \({\mathcal {Q}}\). These provide an effective low-dimensional projection of the high-dimensional data set \(A\), in which the significant trends and correlations are revealed by discarding superfluous data.
The fundamental question, then, is how the principal components of \(\Sigma = \mathbb {E}{\mathcal {Q}}\) are related to those of \({\mathcal {Q}}\).
1.1 The uncorrelated case
In the “null” case, the variables \(\mathbf{{a}}\) are uncorrelated and \(\Sigma = I_M\) is the identity matrix. The global distribution of the eigenvalues is governed by the Marchenko–Pastur law [30]. More precisely, defining the dimensional ratio
$$\begin{aligned} \phi \equiv \phi _N \mathrel {\mathop :}=\frac{M}{N} , \end{aligned}$$(1.5)
the empirical eigenvalue density of the rescaled matrix \(Q = \phi ^{-1/2} {\mathcal {Q}}\) has the same asymptotics for large \(M\) and \(N\) as
$$\begin{aligned} \frac{\phi ^{-1/2}}{2 \pi } \frac{\sqrt{[(x - \gamma _-)(\gamma _+ - x)]_+}}{x} \, \mathrm {d}x + (1 - \phi ^{-1})_+ \, \delta (\mathrm {d}x) , \end{aligned}$$(1.6)
where we defined
$$\begin{aligned} \gamma _\pm \mathrel {\mathop :}=\phi ^{1/2} + \phi ^{-1/2} \pm 2 \end{aligned}$$(1.7)
to be the edges of the limiting spectrum. Hence, the unique nontrivial eigenvalue \(1\) of \(\Sigma \) spreads out into a bulk spectrum of \({\mathcal {Q}}\) with diameter \(4 \phi ^{1/2}\). Moreover, the local spectral statistics are universal; for instance, the top eigenvalue of \({\mathcal {Q}}\) is distributed according to the Tracy–Widom-1 distribution [24, 25, 42, 43]. Finally, the eigenvectors of \({\mathcal {Q}}\) are uniformly distributed on the unit sphere of \(\mathbb {R}^M\); following [15], we call this property the quantum unique ergodicity of the eigenvectors of \({\mathcal {Q}}\), a term borrowed from quantum chaos. We refer to Theorem 8.3 and Remark 8.4 below for precise statements.
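For orientation, the normalization behind this last statement (and behind the delocalization bounds of Sect. 1.6 below) is elementary: if \(\varvec{\xi }\) is uniformly distributed on the unit sphere of \(\mathbb {R}^M\), then for any deterministic unit vector \(\mathbf{{w}}\), rotation invariance gives
$$\begin{aligned} \mathbb {E}\langle {\mathbf{{w}}} , {\varvec{\xi }}\rangle ^2 = \frac{1}{M} \sum _{i = 1}^M \mathbb {E}\langle {\mathbf{{e}}_i} , {\varvec{\xi }}\rangle ^2 = \frac{\mathbb {E}|\varvec{\xi } |^2}{M} = \frac{1}{M} . \end{aligned}$$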
1.2 Examples and outline of the model
The problem becomes much more interesting if the variables \(\mathbf{{a}}\) are correlated. Several models for correlated data have been proposed in the literature, starting with the Gaussian spiked model from the seminal paper of Johnstone [25]. Here we propose a general model which includes many previous models as special cases. We motivate it using two examples.
-
1.
Let \(\mathbf{{a}} = T \mathbf{{b}}\), where the entries of \(\mathbf{{b}}\) are independent with zero mean and unit variance, and \(T\) is a deterministic \(M \times M\) matrix. This may be interpreted as an observer studying a complicated system whose randomness is governed by many independent internal variables \(\mathbf{{b}}\). The observer only has access to the external variables \(\mathbf{{a}}\), which may depend on the internal variables \(\mathbf{{b}}\) in some complicated and unknown fashion. Assuming that this dependence is linear, we obtain \(\mathbf{{a}} = T \mathbf{{b}}\). The sample matrix for this model is therefore \(A = T B\), where \(B\) is an \(M \times N\) matrix with independent entries of unit variance. The population covariance matrix is \(\Sigma = T T^*\).
-
2.
Let \(r \in \mathbb {N}\) and set
$$\begin{aligned} \mathbf{{a}} = \mathbf{{z}} + \sum _{l = 1}^r y_l \mathbf{{u}}_l. \end{aligned}$$Here \(\mathbf{{z}} \in \mathbb {R}^M\) is a vector of “noise”, whose entries are independent with zero mean and unit variance. The “signal” is given by the contribution of \(r\) terms of the form \(y_l \mathbf{{u}}_l\), whereby \(y_1,\ldots , y_r\) are independent, with zero mean and unit variance, and \(\mathbf{{u}}_1,\ldots , \mathbf{{u}}_r \in \mathbb {R}^M\) are arbitrary deterministic vectors. The sample matrix is
$$\begin{aligned} A = Z + \sum _{l = 1}^r \mathbf{{u}}_l \mathbf{{y}}_l^*, \end{aligned}$$where, writing \(Y \mathrel {\mathop :}=[\mathbf{{y}}_1,\ldots , \mathbf{{y}}_r] \in \mathbb {R}^{N \times r}\!,\) the \((M + r) \times N\) matrix \(B \mathrel {\mathop :}=\left( {\begin{array}{c}Z\\ Y^*\end{array}}\right) \) has independent entries with zero mean and unit variance. Writing \(U \mathrel {\mathop :}=[\mathbf{{u}}_1,\ldots , \mathbf{{u}}_r] \in \mathbb {R}^{M \times r}\), we therefore have
$$\begin{aligned} A = T B , \qquad T \mathrel {\mathop :}=(I_M, U). \end{aligned}$$The population covariance matrix is \(\Sigma = T T^* = I_M + U U^*\).
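The last identity may be checked directly: since \(\mathbf{{z}}\) and \(y_1, \ldots , y_r\) are independent with zero means and unit variances, all cross terms vanish and
$$\begin{aligned} \Sigma = \mathbb {E}\mathbf{{a}} \mathbf{{a}}^* = \mathbb {E}\mathbf{{z}} \mathbf{{z}}^* + \sum _{l, l' = 1}^r \mathbb {E}[y_l y_{l'}] \, \mathbf{{u}}_l \mathbf{{u}}_{l'}^* = I_M + \sum _{l = 1}^r \mathbf{{u}}_l \mathbf{{u}}_l^* = I_M + U U^* . \end{aligned}$$
The computation for Example (1) is analogous: \(\Sigma = T \, \mathbb {E}[\mathbf{{b}} \mathbf{{b}}^*] \, T^* = T T^*\).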
Below we shall refer to these examples as Examples (1) and (2) respectively. Motivated by them, we now outline our model. Let \(B\) be an \((M + r) \times N\) matrix whose entries are independent with zero mean and unit variance. We choose a deterministic \(M \times (M + r)\) matrix \(T\), and set \({\mathcal {Q}} = \frac{1}{N} TBB^* T^*\). We stress that we do not assume that the underlying randomness is Gaussian. Our key assumptions are (i) \(r\) is bounded; (ii) \(\Sigma - I_M\) has bounded rank; (iii) \(\log N\) is comparable to \(\log M\); (iv) the entries of \(B\) are independent, with zero mean and unit variance, and have a sufficient number of bounded moments. The precise assumptions are given in Sect. 1.3 below. We emphasize that everything apart from \(r\) and the rank of \(\Sigma - I_M\) is allowed to depend on \(N\) in an arbitrary fashion.
As explained around (1.3), in addition to \({\mathcal {Q}}\) we also consider the matrix \(\dot{{\mathcal {Q}}} = \frac{1}{N - 1} TB(I_N - \mathbf{{e}} \mathbf{{e}}^*)B^* T^*\), whose principal components turn out to have the same asymptotic behaviour as those of \({\mathcal {Q}}\).
1.3 Definition of model
In this section we give the precise definition of our model and introduce some basic notations. For convenience, we always work with the rescaled sample covariance matrix
$$\begin{aligned} Q \mathrel {\mathop :}=\phi ^{-1/2} {\mathcal {Q}} = \frac{1}{\sqrt{N M}} A A^* . \end{aligned}$$(1.8)
The motivation behind this rescaling is that, as observed in (1.6), it ensures that the bulk spectrum of \(Q\) has asymptotically a fixed diameter, \(4\), for arbitrary \(N\) and \(M\).
We always regard \(N\) as the fundamental large parameter, and write \(M \equiv M_N\). Here, and throughout the following, in order to unclutter notation we omit the argument \(N\) in quantities, such as \(M\), that depend on it. In other words, every symbol that is not explicitly a constant is in fact a sequence indexed by \(N\). We assume that \(M\) and \(N\) satisfy the bounds
$$\begin{aligned} N^{1/C} \leqslant M \leqslant N^{C} \end{aligned}$$(1.9)
for some positive constant \(C\).
Fix a constant \(r = 0,1,2,3,\ldots \). Let \(X\) be an \((M + r) \times N\) random matrix and \(T\) an \(M \times (M + r)\) deterministic matrix. For definiteness, and bearing the motivation of sample covariance matrices in mind, we assume that the entries of \(X\) and \(T\) are real. However, our method also trivially applies to complex-valued \(X\) and \(T\), with merely cosmetic changes to the proofs. We consider the \(M \times M\) matrix
$$\begin{aligned} Q \mathrel {\mathop :}=T X X^* T^* . \end{aligned}$$(1.10)
Since \(TX\) is an \(M \times N\) matrix, we find that \(Q\) has
$$\begin{aligned} K \mathrel {\mathop :}=M \wedge N \end{aligned}$$
nontrivial (i.e. nonzero) eigenvalues.
We define the population covariance matrix
$$\begin{aligned} \Sigma \mathrel {\mathop :}=T T^* = \sum _{i = 1}^M \sigma _i \mathbf{{v}}_i \mathbf{{v}}_i^* , \end{aligned}$$(1.11)
where \(\{\mathbf{{v}}_i\}_{i = 1}^M\) is a real orthonormal basis of \(\mathbb {R}^M\) and \(\{\sigma _i\}_{i = 1}^M\) are the eigenvalues of \(\Sigma \). Here we introduce the representation
$$\begin{aligned} \sigma _i = 1 + \phi ^{1/2} d_i \end{aligned}$$(1.12)
for the eigenvalues \(\sigma _i\). We always order the values \(d_i\) such that
$$\begin{aligned} d_1 \geqslant d_2 \geqslant \cdots \geqslant d_M . \end{aligned}$$(1.13)
We suppose that \(\Sigma \) is positive definite, so that each \(d_i\) lies in the interval
$$\begin{aligned} {\mathcal {D}} \mathrel {\mathop :}=(-\phi ^{-1/2}, \infty ) . \end{aligned}$$(1.14)
Moreover, we suppose that \(\Sigma - I_M\) has bounded rank, i.e.
$$\begin{aligned} {\mathcal {R}} \mathrel {\mathop :}=\{i \mathrel {\mathop :}d_i \ne 0\} \end{aligned}$$
has bounded cardinality, \(|{\mathcal {R}} | = O(1)\). We call the couples \(((d_i, \mathbf{{v}}_i))_{i \in {\mathcal {R}}}\) the spikes of \(\Sigma \).
We assume that the entries \(X_{i \mu }\) of \(X\) are independent (but not necessarily identically distributed) random variables satisfying
$$\begin{aligned} \mathbb {E}X_{i \mu } = 0 , \qquad \mathbb {E}X_{i \mu }^2 = \frac{1}{\sqrt{N M}} . \end{aligned}$$(1.15)
In addition, we assume that, for all \(p \in \mathbb {N}\), the random variables \((N M)^{1/4} X_{i\mu }\) have a uniformly bounded \(p\)-th moment. In other words, we assume that there is a constant \(C_p\) such that
$$\begin{aligned} \mathbb {E}\bigl |(N M)^{1/4} X_{i \mu } \bigr |^p \leqslant C_p . \end{aligned}$$(1.16)
The assumption that (1.16) hold for all \(p \in \mathbb {N}\) may be easily relaxed. For instance, it is easy to check that our results and their proofs remain valid, after minor adjustments, if we only require that (1.16) holds for all \(p \leqslant C\) for some large enough constant \(C\). We do not pursue such generalizations further.
Our results concern the eigenvalues of \(Q\), denoted by
$$\begin{aligned} \mu _1 \geqslant \mu _2 \geqslant \cdots \geqslant \mu _M , \end{aligned}$$
and the associated unit eigenvectors of \(Q\), denoted by
$$\begin{aligned} \varvec{\xi }_1, \varvec{\xi }_2, \ldots , \varvec{\xi }_M \in \mathbb {R}^M . \end{aligned}$$
1.4 Sketch of behaviour of the principal components of \(Q\)
To guide the reader, we now give a heuristic description of the behaviour of the principal components of \(Q\). The spectrum of \(Q\) consists of a bulk spectrum and of outliers—eigenvalues separated from the bulk. The bulk contains of order \(K\) eigenvalues, which are distributed on large scales according to the Marchenko–Pastur law (1.6). In addition, if \(\phi > 1\) there are \(M - K\) trivial eigenvalues at zero. Each \(d_i\) satisfying \(|d_i | > 1\) gives rise to an outlier located near its classical location
$$\begin{aligned} \theta (d_i) , \qquad \theta (d) \mathrel {\mathop :}=\phi ^{1/2} + \phi ^{-1/2} + d + d^{-1} . \end{aligned}$$(1.17)
Any \(d_i\) satisfying \(|d_i | < 1\) does not result in an outlier. We summarize this picture in Fig. 1. The creation or annihilation of an outlier as a \(d_i\) crosses \(\pm 1\) is known as the BBP phase transition [3]. It takes place on the scale \(\bigl ||d_i | - 1 \bigr | \asymp K^{-1/3}\). This scale has a simple heuristic explanation (we focus on the right edge of the spectrum). Suppose that \(d_1 \in (0,1)\) and all other \(d_i\)’s are zero. Then the top eigenvalue \(\mu _1\) exhibits universality, and fluctuates on the scale \(K^{-2/3}\) around \(\gamma _+\) (see Theorem 8.3 and Remark 8.7 below). Increasing \(d_1\) beyond the critical value \(1\), we therefore expect \(\mu _1\) to become an outlier when its classical location \(\theta (d_1)\) is located at a distance greater than \(K^{-2/3}\) from \(\gamma _+\). By a simple Taylor expansion of \(\theta \), the condition \(\theta (d_1) - \gamma _+ \gg K^{-2/3}\) becomes \(d_1 - 1 \gg K^{-1/3}\).
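Given the expression (1.17) for \(\theta \), this last step is in fact exact rather than approximate:
$$\begin{aligned} \theta (d) - \gamma _+ = d + d^{-1} - 2 = \frac{(d - 1)^2}{d} , \end{aligned}$$
so that for \(d_1 \asymp 1\) the condition \(\theta (d_1) - \gamma _+ \gg K^{-2/3}\) is indeed equivalent to \(d_1 - 1 \gg K^{-1/3}\).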
Fig. 1 A typical configuration of \(\{d_i\}\) (above) and the resulting spectrum of \(Q\) (below). An order \(M\) of the \(d_i\)’s are zero, which is symbolized by the thicker dot at \(0\). Any \(d_i\) inside the grey interval \([-1,1]\) does not give rise to an outlier, while any \(d_i\) outside the grey interval gives rise to an outlier located near its classical location \(\theta (d_i)\) and separated from the bulk \([\gamma _-, \gamma _+]\)
Next, we outline the distribution of the outlier eigenvectors. Let \(\mu _i\) be an outlier with associated eigenvector \(\varvec{\xi }_i\). Then \(\varvec{\xi }_i\) is concentrated on a cone [8, 31, 33] with axis parallel to \(\mathbf{{v}}_i\), the corresponding eigenvector of the population covariance matrix \(\Sigma \). More precisely, assuming that the eigenvalue \(\sigma _i = 1 + \phi ^{1/2} d_i\) of \(\Sigma \) is simple, we have
$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_i}\rangle ^2 \approx u(d_i) , \end{aligned}$$(1.18)
where we defined
$$\begin{aligned} u(d) \mathrel {\mathop :}=\frac{1 - d^{-2}}{1 + \phi ^{1/2} d^{-1}} \end{aligned}$$(1.19)
for \(d_i > 1\). The function \(u\) determines the aperture \(2 \arccos \sqrt{u(d_i)}\) of the cone. Note that \(u(d_i) \in (0,1)\) and \(u(d_i)\) converges to \(1\) as \(d_i \rightarrow \infty \). See Fig. 2.
Fig. 2 The eigenvector \(\varvec{\xi }_i\) associated with an outlier \(\mu _i\) is concentrated on a cone with axis parallel to \(\mathbf{{v}}_i\). The aperture of the cone is determined by \(u(d_i)\) defined in (1.19)
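As a concrete illustration of the formula (1.19): for \(\phi = 1\) and \(d_i = 2\) we get
$$\begin{aligned} u(2) = \frac{1 - 2^{-2}}{1 + 2^{-1}} = \frac{3/4}{3/2} = \frac{1}{2} , \end{aligned}$$
so that the cone has aperture \(2 \arccos \sqrt{1/2} = \pi / 2\); as \(d_i\) increases, the cone narrows around \(\mathbf{{v}}_i\).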
1.5 Summary of previous related results
There is an extensive literature on spiked covariance matrices. So far most of the results have focused on the outlier eigenvalues of Example (1), with the nonzero \(d_i\) independent of \(N\) and \(\phi \) fixed. Eigenvectors and the non-outlier eigenvalues have received far less attention.
For the uncorrelated case \(\Sigma = I_M\) and Gaussian \(X\) in (1.10) with fixed \(\phi \), it was proved in [24] for the complex case and in [25] for the real case that the top eigenvalue, rescaled as \(K^{2/3}(\mu _1 - \gamma _+)\), is asymptotically distributed according to the Tracy–Widom law of the appropriate symmetry class [42, 43]. Subsequently, these results were shown to be universal, i.e. independent of the distribution of the entries of \(X\), in [36, 40]. The assumption that \(\phi \) be fixed was relaxed in [17, 35].
The study of covariance matrices with nontrivial population covariance matrix \(\Sigma \ne I_M\) goes back to the seminal paper of Johnstone [25], where the Gaussian spiked model was introduced. The BBP phase transition was established by Baik et al. [3] for complex Gaussian \(X\), fixed rank of \(\Sigma - I_M\), and fixed \(\phi \). Subsequently, the results of [3] were extended to the other Gaussian symmetry classes, such as real covariance matrices, in [11, 12]. The proofs of [3, 34] use an asymptotic analysis of Fredholm determinants, while those of [11, 12] use an explicit tridiagonal representation of \(X X^*\); both of these approaches rely heavily on the Gaussian nature of \(X\). See also [13] for a generalization of the BBP phase transition.
For the model from Example (1) with fixed nonzero \(\{d_i\}\) and \(\phi \), the almost sure convergence of the outliers was established in [4]. It was also shown in [4] that if \(|d_i | < 1\) for all \(i\), the top eigenvalue \(\mu _1\) converges to \(\gamma _+\). For this model, a central limit theorem of the outliers was proved in [2]. In [1], the almost sure convergence of the outliers was proved for a generalized spiked model whose population covariance matrix is of the block diagonal form \(\Sigma = {{\mathrm{diag}}}(A,T)\), where \(A\) is a fixed \(r \times r\) matrix and \(T\) is chosen so that the associated sample covariance matrix has no outliers.
In [8], the almost sure convergence of the projection of the outlier eigenvectors onto the finite-dimensional spike subspace was established, under the assumption that \(\phi \) and the nonzero \(d_i\) are fixed, and that \(B\) and \(T\) are both random and one of them is orthogonally invariant. In particular, the cone concentration from (1.18) was established in [8]. In [33], under the assumption that \(X\) is Gaussian and \(\phi \) and the nonzero \(d_i\) are fixed, a central limit theorem for a certain observable, the so-called sample vector, of the outlier eigenvectors was established. The result of [33] was extended to non-Gaussian entries for a special class of \(\Sigma \) in [39].
Moreover, in [9, 32] results analogous to those of [8] were obtained for the model from Example (2). Finally, a related class of models, so-called deformed Wigner matrices, has been the subject of much attention in recent years; we refer to [27, 28, 37, 38] for more details; in particular, the joint distribution of all outliers was derived in [28].
1.6 Overview of results
In this subsection we give an informal overview of our results.
We establish results on the eigenvalues \(\mu _i\) and the eigenvectors \(\varvec{\xi }_i\) of \(Q\). Our results consist of large deviation bounds and asymptotic laws. We believe that all of our large deviation bounds from Theorems 2.3, 2.7, 2.11, 2.16, and 2.17 are optimal (up to the technical conditions in the definition of \(\prec \) given in Definition 2.1). We do not prove this. However, we expect that, combining our method with the techniques of [28], one may also derive the asymptotic laws of all quantities on which we establish large deviation bounds, in particular proving the optimality of our large deviation bounds.
Our results on the eigenvalues of \(Q\) consist of two parts. First, we derive large deviation bounds on the locations of the outliers (Theorem 2.3). Second, we prove eigenvalue sticking for the non-outliers (Theorem 2.7), whereby each non-outlier “sticks” with high probability and very accurately to the eigenvalues of a related covariance matrix satisfying \(\Sigma = I_M\) and whose top eigenvalues exhibit universality. As a corollary (Remark 8.7), we prove that the top non-outlier eigenvalue of \(Q\) has asymptotically the Tracy–Widom-1 distribution. This sticking is very accurate if all \(d_i\)’s are separated from the critical point \(1\), and becomes less accurate if a \(d_i\) is in the vicinity of \(1\). Eventually, it breaks down precisely on the BBP transition scale \(|d_i - 1 | \asymp K^{-1/3}\), at which the Tracy–Widom-1 distribution is known not to hold for the top non-outlier eigenvalue. These results generalize those from [27, Theorem 2.7].
Next, we outline our results for the eigenvectors \(\varvec{\xi }_i\) of \(Q\). We consider the generalized components \(\langle {\mathbf{{w}}} , {\varvec{\xi }_i}\rangle \) of \(\varvec{\xi }_i\), where \(\mathbf{{w}} \in \mathbb {R}^M\) is an arbitrary deterministic vector. In our first result on the eigenvectors (Theorems 2.11 and 2.16), we establish large deviation bounds on the generalized components of outlier eigenvectors [and, more generally, of the outlier spectral projections defined in (2.11) below]. This result gives a quantitative version of the cone concentration from (1.18), which in particular allows us to track the strength of the concentration in the vicinity of the BBP transition and for overlapping outliers. Our results also establish the complete delocalization of an outlier eigenvector \(\varvec{\xi }_i\) in any direction orthogonal to the spike direction \(\mathbf{{v}}_i\), provided the outlier \(\mu _i\) is well separated from the bulk spectrum and other outliers. We say that the vector \(\varvec{\xi }_i\) is completely delocalized, or unbiased, in the direction \(\mathbf{{w}}\) if \(\langle {\mathbf{{w}}} , {\varvec{\xi }_i}\rangle ^2 \prec M^{-1}\), where “\(\prec \)” denotes a high probability bound up to powers of \(M^\varepsilon \) (see Definition 2.1).
If the outlier \(\mu _i\) approaches the bulk spectrum or another outlier, the cone concentration becomes less accurate. For the case of two nearby outlier eigenvalues, for instance, the cone concentration (1.18) of the eigenvectors breaks down when the distributions of the outlier eigenvalues have a nontrivial overlap. In order to understand this behaviour in more detail, we introduce the deterministic projection
$$\begin{aligned} \Pi _A \mathrel {\mathop :}=\sum _{i \in A} \mathbf{{v}}_i \mathbf{{v}}_i^* , \end{aligned}$$(1.20)
where \(A \subset \{1,\ldots , M\}\). Then the cone concentration from (1.18) may be written as \(|\Pi _{\{i\}} \varvec{\xi }_i |^2 \approx u(d_i) |\varvec{\xi }_i |^2\). In contrast, in the degenerate case \(d_1 = d_2 > 1\) and all other \(d_i\)’s being zero, (1.18) is replaced with
$$\begin{aligned} \bigl \langle {\Pi _{\{1,2\}} \varvec{\xi }_i} , {\Pi _{\{1,2\}} \varvec{\xi }_j} \bigr \rangle \approx u(d_1) \, \delta _{ij} , \end{aligned}$$(1.21)
where \(i,j \in \{1,2\}\). We deduce that each \(\varvec{\xi }_i\) lies on the cone
$$\begin{aligned} \bigl \{\varvec{\xi } \in \mathbb {R}^M \mathrel {\mathop :}|\Pi _{\{1,2\}} \varvec{\xi } |^2 = u(d_1) |\varvec{\xi } |^2 \bigr \} , \end{aligned}$$(1.22)
and that \(\Pi _{\{1,2\}} \varvec{\xi }_1 \perp \Pi _{\{1,2\}} \varvec{\xi }_2\). Moreover, we prove that \(\varvec{\xi }_i\) is completely delocalized in any direction orthogonal to \(\mathbf{{v}}_1\) and \(\mathbf{{v}}_2\). The interpretation is that \(\varvec{\xi }_1\) and \(\varvec{\xi }_2\) both lie on the cone (1.22), that they are orthogonal on both the range and null space of \(\Pi _{\{1,2\}}\), and that beyond these constraints their distribution is unbiased (i.e. isotropic). Finally, we note that the preceding discussion remains unchanged if one interchanges \(\varvec{\xi }_i\) and \(\mathbf{{v}}_i\). We refer to Example 2.15 below for more details.
In our second result on the eigenvectors (Theorem 2.17), we establish delocalization bounds for the generalized components of non-outlier eigenvectors \(\xi _i\). In particular, we prove complete delocalization of non-outlier eigenvectors in directions orthogonal to any spike \(\mathbf{{v}}_j\) whose value \(d_j\) is near the critical point 1. In addition, we prove that non-outlier eigenvectors away from the edge are completely delocalized in all directions. The complete delocalization in the direction \(\mathbf{{v}}_j\) breaks down if \(|d_j - 1 | \ll 1\). The interpretation of this result is that any spike \(d_j\) near the BBP transition point \(1\) causes all non-outlier eigenvectors \(\varvec{\xi }_i\) near the upper edge of the bulk spectrum to have a bias in the direction \(\mathbf{{v}}_j\), in contrast to the completely delocalized case where \(\varvec{\xi }_i\) is uniformly distributed on the unit sphere.
In our final result on the eigenvectors (Theorem 2.20), we give the asymptotic law of the generalized component \(\langle {\mathbf{{w}}} , {\varvec{\xi }_i}\rangle \) of a non-outlier eigenvector \(\varvec{\xi }_i\). In particular, we prove that this generalized component is asymptotically Gaussian and has a variance predicted by the delocalization bounds from Theorem 2.17. For instance, we prove that if \(|d_j - 1 | \gg K^{-1/3}\) then
for all non-outlier indices \(i\) that are not too large (see Theorem 2.20 for a precise statement). Here \(\Theta \) is a random variable that converges in distribution to a chi-squared variable. If \(\varvec{\xi }_i\) were completely delocalized in the direction \(\mathbf{{v}}_j\), the right-hand side would be of order \(M^{-1}\). Suppose for simplicity that \(\phi \) is of order one. The bias of \(\varvec{\xi }_i\) in the direction \(\mathbf{{v}}_j\) emerges as soon as \(|d_j - 1 | \ll 1\), and reaches a magnitude of order \(M^{-1/3}\) for \(d_j\) near the BBP transition. This is much larger than the unbiased \(M^{-1}\). Note that this phenomenon applies simultaneously to all non-outlier eigenvectors near the right edge: the right-hand side of (1.23) does not depend on \(i\). Note also that the right-hand side of (1.23) is insensitive to the sign of \(d_j - 1\). In particular, the bias is also present for subcritical spikes. We conclude that even subcritical spikes are observable in the principal components. In contrast, if one only considers the eigenvalues of the principal components, the subcritical spikes cannot be detected; this follows from the eigenvalue sticking result in Theorem 2.7.
Finally, the proofs of universality of the non-outlier eigenvalues and eigenvectors require the universality of \(Q\) for the uncorrelated case \(\Sigma = I_M\) as input. This universality result is given in Theorem 8.3, which is also of some independent interest. It establishes the joint, fixed-index, universality of the eigenvalues and eigenvectors of \(Q\) (and hence, as a special case, the quantum unique ergodicity of the eigenvectors of \(Q\) mentioned in Sect. 1.1). It works for all eigenvalue indices \(i\) satisfying \(i \leqslant K^{1 - \tau }\) for any fixed \(\tau > 0\).
We conclude this subsection by outlining the key novelties of our work.
-
(i)
We introduce the general models \(Q\) from (1.10) and \(\dot{Q}\) from (2.23) below, which subsume and generalize several models considered previously in the literature. We allow the entries of \(X\) to be arbitrary random variables (up to a technical assumption on their tails). All quantities except \(r\) and the rank of \(\Sigma - I_M\) may depend on \(N\). We make no assumption on \(T\) beyond the bounded-rank condition on \(T T^* - I_M\). The dimensions \(M\) and \(N\) may be wildly different, and are only subject to the technical condition (1.9).
-
(ii)
We study the behaviour of the principal components of \(Q\) near the BBP transition and when outliers collide. Our results hold for generalized components \(\langle {\mathbf{{w}}} , {\varvec{\xi }_i}\rangle \) of the eigenvectors in arbitrary directions \(\mathbf{{w}}\).
-
(iii)
We obtain quantitative bounds (i.e. rates of convergence) on the outlier eigenvalues and the generalized components of the eigenvectors. We believe these bounds to be optimal.
-
(iv)
We obtain precise information about the non-outlier principal components. A novel observation is that, provided there is a \(d_i\) satisfying \(|d_i - 1 | \ll 1\) (i.e. \(Q\) is near the BBP transition), all non-outlier eigenvectors near the edge will be biased in the direction of \(\mathbf{{v}}_i\). In particular, non-outlier eigenvectors, unlike non-outlier eigenvalues, retain some information about the subcritical spikes of \(\Sigma \).
-
(v)
We establish the joint, fixed-index, universality of the eigenvalues and eigenvectors for the case \(\Sigma = I_M\). This result holds for any eigenvalue indices \(i\) satisfying \(i \leqslant K^{1 - \tau }\) for an arbitrary \(\tau > 0\). Note that previous works [29, 41] (established in the context of Wigner matrices) required either the much stronger condition \(i \leqslant (\log K)^{C \log \log K}\) or a four-moment matching condition.
We remark that the large deviation bounds derived in this paper also allow one to derive the joint distribution of the generalized components of the outlier eigenvectors; this will be the subject of future work.
1.7 Conventions
The fundamental large parameter is \(N\). All quantities that are not explicitly constant may depend on \(N\); we almost always omit the argument \(N\) from our notation.
We use \(C\) to denote a generic large positive constant, which may depend on some fixed parameters and whose value may change from one expression to the next. Similarly, we use \(c\) to denote a generic small positive constant. For two positive quantities \(A_N\) and \(B_N\) depending on \(N\) we use the notation \(A_N \asymp B_N\) to mean \(C^{-1} A_N \leqslant B_N \leqslant C A_N\) for some positive constant \(C\). For \(a < b\) we set \([\![{a,b}]\!] \mathrel {\mathop :}=[a,b] \cap \mathbb {Z}\). We use the notation \(\mathbf{{v}} = (v(i))_{i = 1}^M\) for vectors in \(\mathbb {R}^M\), and denote by \(|\cdot |= \Vert \cdot \Vert _2\) the Euclidean norm of vectors and by \(\Vert \cdot \Vert \) the corresponding operator norm of matrices. We use \(I_M\) to denote the \(M \times M\) identity matrix, which we also sometimes write simply as \(1\) when there is no risk of confusion.
We use \(\tau > 0\) in various assumptions to denote a positive constant that may be chosen arbitrarily small. A smaller value of \(\tau \) corresponds to a weaker assumption. All of our estimates depend on \(\tau \), and we neither indicate nor track this dependence.
2 Results
In this section we state our main results. The following notion of a high-probability bound was introduced in [18], and has been subsequently used in a number of works on random matrix theory. It provides a simple way of systematizing and making precise statements of the form “\(A\) is bounded with high probability by \(B\) up to small powers of \(N\)”.
Definition 2.1
(Stochastic domination) Let
$$\begin{aligned} A = \bigl (A^{(N)}(u) \mathrel {\mathop :}N \in \mathbb {N}, \, u \in U^{(N)} \bigr ) , \qquad B = \bigl (B^{(N)}(u) \mathrel {\mathop :}N \in \mathbb {N}, \, u \in U^{(N)} \bigr ) \end{aligned}$$
be two families of nonnegative random variables, where \(U^{(N)}\) is a possibly \(N\)-dependent parameter set. We say that \(A\) is stochastically dominated by \(B\), uniformly in \(u\), if for all (small) \(\varepsilon > 0\) and (large) \(D > 0\) we have
$$\begin{aligned} \sup _{u \in U^{(N)}} \mathbb {P}\Bigl [A^{(N)}(u) > N^{\varepsilon } B^{(N)}(u) \Bigr ] \leqslant N^{-D} \end{aligned}$$(2.1)
for large enough \(N\geqslant N_0(\varepsilon , D)\). Throughout this paper the stochastic domination will always be uniform in all parameters (such as matrix indices) that are not explicitly fixed. Note that \(N_0(\varepsilon , D)\) may depend on the constants from (1.9) and (1.16) as well as any constants fixed in the assumptions of our main results. If \(A\) is stochastically dominated by \(B\), uniformly in \(u\), we use the notation \(A \prec B\). Moreover, if for some complex family \(A\) we have \(|A | \prec B\) we also write \(A = O_\prec (B)\).
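As an illustration, the moment bounds (1.16) yield \(X_{i \mu } = O_\prec \bigl ((NM)^{-1/4} \bigr )\), uniformly in \(i\) and \(\mu \): for any \(\varepsilon > 0\) and \(D > 0\), Markov's inequality applied with \(p > D / \varepsilon \) gives
$$\begin{aligned} \mathbb {P}\bigl [|X_{i \mu } | > N^{\varepsilon } (NM)^{-1/4} \bigr ] \leqslant N^{-\varepsilon p} \, \mathbb {E}\bigl |(NM)^{1/4} X_{i \mu } \bigr |^p \leqslant C_p N^{-\varepsilon p} \leqslant N^{-D} \end{aligned}$$
for \(N\) large enough.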
Remark 2.2
Because of (1.9), all (or some) factors of \(N\) in Definition 2.1 could be replaced with \(M\) without changing the definition of stochastic domination.
2.1 Eigenvalue locations
We begin with results on the locations of the eigenvalues of \(Q\). These results will also serve as a fundamental input for the proofs of the results on eigenvectors presented in Sects. 2.2 and 2.3.
Recall that \(Q\) has \(M - K\) zero eigenvalues. We shall therefore focus on the \(K\) nontrivial eigenvalues \(\mu _1 \geqslant \cdots \geqslant \mu _K\) of \(Q\). On the global scale, the eigenvalues of \(Q\) are distributed according to the Marchenko–Pastur law (1.6). This may be easily inferred from the fact that (1.6) gives the global density of the eigenvalues for the uncorrelated case \(\Sigma = I_M\), combined with eigenvalue interlacing (see Lemma 4.1 below). In this section we focus on local eigenvalue information.
We introduce the set of outlier indices
$$\begin{aligned} {\mathcal {O}} \mathrel {\mathop :}=\bigl \{i \mathrel {\mathop :}|d_i | \geqslant 1 + K^{-1/3} \bigr \} . \end{aligned}$$(2.2)
As explained in Sect. 1.4, each \(i \in {\mathcal {O}}\) gives rise to an outlier of \(Q\) near the classical location \(\theta (d_i)\) defined in (1.17). In the definition (2.2), the lower bound \(1 + K^{-1/3}\) is chosen for definiteness; it could be replaced with \(1 + a K^{-1/3}\) for any fixed \(a > 0\). We denote by
$$\begin{aligned} s_+ \mathrel {\mathop :}=\bigl |\{i \in {\mathcal {O}} \mathrel {\mathop :}d_i > 0\} \bigr | , \qquad s_- \mathrel {\mathop :}=\bigl |\{i \in {\mathcal {O}} \mathrel {\mathop :}d_i < 0\} \bigr | \end{aligned}$$(2.3)
the number of outliers to the left (\(s_-\)) and right (\(s_+\)) of the bulk spectrum.
For \(d \in {\mathcal {D}} \setminus [-1,1]\) we define
The function \(\Delta (d)\) will be used to give an upper bound on the magnitude of the fluctuations of an outlier associated with \(d\). We give such a precise expression for \(\Delta \) in order to obtain sharp large deviation bounds for all \(d \in {\mathcal {D}}\!\setminus \![-1,1]\). (Note that the discontinuity of \(\Delta \) at \(d = 2\) is immaterial since \(\Delta \) is used as an upper bound with respect to \(\prec \). The ratio of the right- and left-sided limits at \(2\) of \(\Delta \) lies in \([1,3]\).)
Our result on the outlier eigenvalues is the following.
Theorem 2.3
(Outlier locations) Fix \(\tau > 0\). Then for \(i \in {\mathcal {O}}\) we have the estimate
$$\begin{aligned} |\mu _i - \theta (d_i) | \prec \Delta (d_i) , \end{aligned}$$(2.5)
provided that \(d_i > 0\) or \(|\phi - 1 | \geqslant \tau \).
Furthermore\(,\) the extremal non-outliers \(\mu _{s_+ + 1}\) and \(\mu _{K - s_-}\) satisfy
$$\begin{aligned} |\mu _{s_+ + 1} - \gamma _+ | \prec K^{-2/3} \end{aligned}$$(2.6)
and\(,\) assuming in addition that \(|\phi -1 | \geqslant \tau ,\)
$$\begin{aligned} |\mu _{K - s_-} - \gamma _- | \prec K^{-2/3} . \end{aligned}$$
Remark 2.4
Theorem 2.3 gives large deviation bounds for the locations of the outliers to the right of the bulk. Since \(\tau > 0\) may be arbitrarily small, Theorem 2.3 also gives the full information about the outliers to the left of the bulk except in the case \(1 > \phi = 1 + o(1)\). Although our methods may be extended to this case as well, we exclude it here to avoid extraneous complications.
Remark 2.5
By definition of \(s_-\) and \({\mathcal {D}}\), if \(\phi >1\) then \(s_-=0\). Hence, by (2.6), if \(\phi > 1\) there are no outliers on the left of the bulk spectrum.
Remark 2.6
Previously, the model from Example (1) in Sect. 1.2 with fixed nonzero \(\{d_i\}\) and \(\phi \) was investigated in [2, 4]. In [4], it was proved that each outlier eigenvalue \(\mu _i\) with \(i \in {\mathcal {O}}\) converges almost surely to \(\theta (d_i)\). Moreover, a central limit theorem for \(\mu _i\) was established in [2].
The locations of the non-outlier eigenvalues \(\mu _i\), \(i \notin {\mathcal {O}}\), are governed by eigenvalue sticking, whereby the eigenvalues of \(Q\) “stick” with high probability to eigenvalues of a reference matrix which has a trivial population covariance matrix. The reference matrix is \(Q\) from (1.10) with uncorrelated entries. More precisely, we set
$$\begin{aligned} H \mathrel {\mathop :}=Y Y^* , \qquad Y \mathrel {\mathop :}=\begin{pmatrix} I_M&0 \end{pmatrix} O X , \end{aligned}$$(2.7)
where \(O \equiv O(T) \in \mathrm O(M + r)\) is a deterministic orthogonal matrix. It is easy to check that \(\mathbb {E}H = \phi ^{-1/2} I_M\), so that \(H\) corresponds to an uncorrelated population. The matrix \(O(T)\) is explicitly given in (8.1) below. In fact, in Theorem 8.3 below we prove the universality of the joint distribution of non-bulk eigenvalues and eigenvectors of \(H\). Here, by definition, we say that an index \(i \in [\![{1,K}]\!]\) is non-bulk if \(i \notin [\![{K^{1 - \tau }, K - K^{1 - \tau }}]\!]\) for some fixed \(\tau > 0\). In particular, the asymptotic distribution of the non-bulk eigenvalues and eigenvectors of \(H\) does not depend on the choice of \(O\). Note that for the special case \(r = 0\) the eigenvalues of \(H\) coincide with those of \(X X^*\). We denote by
$$\begin{aligned} \lambda _1 \geqslant \lambda _2 \geqslant \cdots \geqslant \lambda _M \end{aligned}$$
the eigenvalues of \(H\).
Theorem 2.7
(Eigenvalue sticking) Define
$$\begin{aligned} \alpha _\pm \mathrel {\mathop :}=\min _{i \in [\![{1, M}]\!]} |d_i \mp 1 | . \end{aligned}$$(2.8)
Fix \(\tau > 0\). Then we have for all \(i \in [\![{1, (1 - \tau ) K}]\!]\)
$$\begin{aligned} |\mu _{s_+ + i} - \lambda _i | \prec \frac{1}{\alpha _+ K} . \end{aligned}$$(2.9)
Similarly\(,\) if \(|\phi - 1 | \geqslant \tau \) then we have for all \(i \in [\![{\tau K, K}]\!]\)
$$\begin{aligned} |\mu _{i - s_-} - \lambda _i | \prec \frac{1}{\alpha _- K} . \end{aligned}$$(2.10)
Remark 2.8
As outlined above, in Theorem 8.3 below we prove that the asymptotic joint distribution of the non-bulk eigenvalues of \(H\) is universal, i.e. it coincides with that of the Wishart matrix \(H_{\mathrm{{Wish}}} = X X^*\) with \(r = 0\) and \(X\) Gaussian. As an immediate corollary of Theorems 2.7 and 8.3, we obtain the universality of the non-outlier eigenvalues of \(Q\) with index \(i \leqslant K^{1 - \tau } \alpha _+^3\). This condition states simply that the right-hand side of (2.9) is much smaller than the scale on which the eigenvalue \(\lambda _i\) fluctuates, which is \(K^{-2/3} i^{-1/3}\). See Remark 8.7 below for a precise statement.
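Indeed, with the sticking estimate in the form (2.9), the comparison with the fluctuation scale reads
$$\begin{aligned} \frac{1}{\alpha _+ K} \ll K^{-2/3} i^{-1/3} \quad \Longleftrightarrow \quad i \ll \alpha _+^3 K , \end{aligned}$$
which is the stated condition up to the factor \(K^{-\tau }\).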
Remark 2.9
Theorem 2.7 is analogous to Theorem 2.7 of [27], where sticking was first established for Wigner matrices. Previously, eigenvalue sticking was established for a certain class of random perturbations of Wigner matrices in [6, 7]. We refer to [27, Remark 2.8] for a more detailed discussion.
Aside from holding for general covariance matrices of the form (1.10), Theorem 2.7 is stronger than its counterpart from [27] because it holds much further into the bulk: in [27, Theorem 2.7], sticking was established under the assumption that \(i \leqslant (\log K)^{C \log \log K}\).
Remark 2.10
The edge universality following from Theorem 2.7 (as explained in Remark 2.8) generalizes the recent result [5]. There, for the model from Example (1) in Sect. 1.2 with fixed nonzero \(\{d_i\}\) and \(\phi \), it was proved that if \(d_i < 1\) for all \(i\) and \(\Sigma \) is diagonal, then \(\mu _1\) converges (after a suitable affine transformation) in distribution to the Tracy–Widom-1 distribution.
2.2 Outlier eigenvectors
We now state our main results for the outlier eigenvectors. Stating results on eigenvectors requires some care, since there is some arbitrariness in the definition of the eigenvector \(\varvec{\xi }_i\) of \(Q\). In order to get rid of the arbitrariness in the sign (or, in the complex case, the phase) of \(\varvec{\xi }_i\) we consider products of generalized components,
$$\begin{aligned} \langle {\mathbf{{v}}} , {\varvec{\xi }_i}\rangle \langle {\varvec{\xi }_i} , {\mathbf{{w}}}\rangle . \end{aligned}$$
It is easy to check that these products characterize the eigenvector \(\varvec{\xi }_i\) completely, up to the ambiguity of a global sign (or phase). More generally, one may consider the generalized components \(\langle {\mathbf{{v}}} , {(\cdot ) \, \mathbf{{w}}}\rangle \) of the (random) spectral projection
$$\begin{aligned} P_A \mathrel {\mathop :}=\sum _{i \in A} \varvec{\xi }_i \varvec{\xi }_i^* , \end{aligned}$$(2.11)
where \(A \subset {\mathcal {O}}\).
In the simplest case \(A = \{i\}\) the generalized components of \(P_A\) characterize the generalized components of \(\varvec{\xi }_i\). The need to consider higher-dimensional projections arises if one considers degenerate or almost degenerate outliers. Suppose for example that \(d_1 \approx d_2\) and all other \(d_i\)’s are zero. Then the cone concentration (1.18) fails, to be replaced with (1.21). The failure of the cone concentration is also visible in our results as a blowup of the error bounds. This behaviour is not surprising, since for degenerate outliers \(d_1 = d_2\) it makes no sense to distinguish the associated spike eigenvectors \(\mathbf{{v}}_1\) and \(\mathbf{{v}}_2\); only the eigenspace matters. Correspondingly, we have to consider the orthogonal projection onto the eigenspace of the outliers in \(A\). See Example 2.15 below for a more detailed discussion.
For \(i \in [\![{1,M}]\!]\) we define \(\nu _i \geqslant 0\) through
$$\begin{aligned} \nu _i \equiv \nu _i(A) \mathrel {\mathop :}={\left\{ \begin{array}{ll} \min _{j \notin A} |d_i - d_j | &{} \text {if } i \in A , \\ \min _{j \in A} |d_i - d_j | &{} \text {if } i \notin A . \end{array}\right. } \end{aligned}$$
In other words, \(\nu _i(A)\) is the distance from \(d_i\) to either \(\{d_j\}_{j \in A}\) or \(\{d_j\}_{j \notin A}\), whichever it does not belong to. For a vector \(\mathbf{{w}} \in \mathbb {R}^M\) we also introduce the shorthand
$$\begin{aligned} w_i \mathrel {\mathop :}=\langle {\mathbf{{v}}_i} , {\mathbf{{w}}}\rangle \end{aligned}$$
to denote the components of \(\mathbf{{w}}\) in the eigenbasis of \(\Sigma \).
For definiteness, we only state our results for the outliers on the right-hand side of the bulk spectrum. Analogous results hold for the outliers on the left-hand side. Since the behaviour of the fluctuating error term is different in the regimes \(\mu _i - \gamma _+ \ll 1\) (near the bulk) and \(\mu _i - \gamma _+ \gg 1\) (far from the bulk), we split these two cases into separate theorems.
Theorem 2.11
(Outlier eigenvectors near bulk) Fix \(\tau > 0\). Suppose that \(A \subset {\mathcal {O}}\) satisfies \(1 + K^{-1/3} \leqslant d_i \leqslant \tau ^{-1}\) for all \(i \in A\). Define the deterministic positive quadratic form
where we recall the definition (1.19) of \(u(d_i)\). Then for any deterministic \(\mathbf{{w}} \in \mathbb {R}^M\) we have
Note that the last error term is zero if \(\mathbf{{w}}\) is in the subspace \({{\mathrm{Span}}}\{\mathbf{{v}}_i\}_{i \in A}\) or orthogonal to it.
Remark 2.12
Theorem 2.11 may easily also be stated for more general quantities of the form \(\langle {\mathbf{{v}}} , {P_A \mathbf{{w}}}\rangle \). We omit the precise statement; it is a trivial corollary of (5.2) below, which holds under the assumptions of Theorem 2.11.
We emphasize that the set \(A\) in Theorem 2.11 may be chosen at will. If all outliers are well-separated, then the choice \(A = \{i\}\) gives the most precise information. However, as explained at the beginning of this subsection, the indices of outliers that are close to each other should be included in the same set \(A\). Thus, the freedom to choose \(|A | \geqslant 2\) is meant for degenerate or almost degenerate outliers. (In fact, as explained after (2.14) below, the correct notion of closeness of outliers is that of overlapping.)
We consider a few examples.
Example 2.13
Let \(A = \{i\}\) and \(\mathbf{{w}} = \mathbf{{v}}_i\). Then we get from (2.12)
This gives a precise version of the cone concentration from (1.18). Note that the cone concentration holds provided the error is much smaller than the main term \(u(d_i)\), which leads to the conditions
here we used that \(d_i \asymp 1\) and \(M \asymp (1 + \phi ) K\).
We claim that both conditions in (2.14) are natural and necessary. The first condition of (2.14) simply means that \(\mu _i\) is an outlier. The second condition of (2.14) is a non-overlapping condition. To understand it, recall from (2.4) that \(\mu _i\) fluctuates on the scale \((d_i - 1)^{1/2} K^{-1/2}\). Then \(\mu _i\) is a non-overlapping outlier if all other outliers are located with high probability at a distance greater than this scale from \(\mu _i\). Recalling the definition of the classical location \(\theta (d_i)\) of \(\mu _i\), the non-overlapping condition becomes
After a simple estimate using the definition of \(\theta \), we find that this is precisely the second condition of (2.14). The degeneracy or almost degeneracy of outliers discussed at the beginning of this subsection is hence to be interpreted more precisely in terms of overlapping of outliers.
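The estimate in question is elementary given the form (1.17) of \(\theta \): for any \(a, b > 1\),
$$\begin{aligned} \theta (a) - \theta (b) = (a - b) \Bigl (1 - \frac{1}{a b} \Bigr ) , \end{aligned}$$
and for \(a \asymp b \asymp 1\) with \(a - 1 \asymp b - 1\) the factor \(1 - (ab)^{-1}\) is comparable to \(a - 1\). Hence, for \(d_i \asymp 1\), the separation of the classical locations is of order \(|d_i - d_j | (d_i - 1)\).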
Provided \(\mu _i\) is well-separated from both the bulk spectrum and the other outliers, we find that the error in (2.13) is of order \(M^{-1/2}\).
Example 2.14
Take \(A = \{i\}\) and \(\mathbf{{w}} = \mathbf{{v}}_j\) with \(j \ne i\). Then we get from (2.12)
Suppose for simplicity that \(\phi \asymp 1\). Then, under the condition that \(|d_i - d_j | \asymp 1\), we find that \(\varvec{\xi }_i\) is completely delocalized in the direction \(\mathbf{{v}}_j\). In particular, if \(\nu _i \asymp 1\) then \(\varvec{\xi }_i\) is completely delocalized in any direction orthogonal to \(\mathbf{{v}}_i\).
As \(d_j\) approaches \(d_i\) the delocalization bound from (2.16) deteriorates, and eventually when \(\mu _i\) and \(\mu _j\) start overlapping, i.e. the second condition of (2.14) is violated, the right-hand side of (2.16) has the same size as the leading term of (2.13). This is again a manifestation of the fact that the individual eigenspaces of overlapping outliers cannot be distinguished.
Example 2.15
Suppose that we have an \(|A |\)-fold degenerate outlier, i.e. \(d_i = d_j\) for all \(i,j \in A\). Then from Theorem 2.11 and Remark 2.12 [see the estimate (5.2)] we get, for all \(i,j \in A\),
$$\begin{aligned} \sum _{k \in A} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_k}\rangle \langle {\mathbf{{v}}_j} , {\varvec{\xi }_k}\rangle \approx u(d_i) \, \delta _{ij} . \end{aligned}$$
Defining the \(|A | \times |A |\) random matrix \(M = (M_{ij})_{i,j \in A}\) through \(M_{ij} \mathrel {\mathop :}=\langle {\mathbf{{v}}_i} , {\varvec{\xi }_j}\rangle \), we may write the left-hand side as \((M M^*)_{ij}\). We conclude that \(u(d_i)^{-1/2} M\) is approximately orthogonal, from which we deduce that \(u(d_i)^{-1/2} M^*\) is also approximately orthogonal. In other words, we may interchange the families \(\{\mathbf{{v}}_i\}_{i \in A}\) and \(\{\varvec{\xi }_i\}_{i \in A}\). More precisely, we get
This is the correct generalization of (2.13) from Example 2.13 to the degenerate case. The error term is the same as in (2.13), and its size and relation to the main term are exactly the same as in Example 2.13. Hence the discussion following (2.13) may be taken over verbatim to this case.
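The interchange step used above is elementary: for a square matrix \(M\), the symmetric matrices \(M M^*\) and \(M^* M\) have the same eigenvalues, namely the squared singular values \(s_k^2\) of \(M\), whence
$$\begin{aligned} \Vert M^* M - u I \Vert = \max _k |s_k^2 - u | = \Vert M M^* - u I \Vert . \end{aligned}$$
Thus \(u(d_i)^{-1/2} M\) is approximately orthogonal if and only if \(u(d_i)^{-1/2} M^*\) is.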
In addition, analogously to Example 2.14, for \(i \in A\) and \(j \notin A\) we find that (2.16) remains true. This establishes the delocalization of \(\varvec{\xi }_i\) in any direction within the null space of \(\Pi _A\).
These estimates establish the general cone concentration, with optimal rate of convergence, for degenerate outliers outlined around (1.22). The eigenvectors \(\{\varvec{\xi }_i\}_{i \in A}\) are all concentrated on the cone defined by \(|\Pi _A \varvec{\xi } |^2 = u(d_i) |\varvec{\xi } |^2\) (for some immaterial \(i \in A\)). Moreover, the eigenvectors \(\{\varvec{\xi }_i\}_{i \in A}\) are orthogonal on both the range and null space of \(\Pi _A\). Provided that the group \(\{d_i\}_{i \in A}\) is well-separated from \(1\) and all other \(d_i\)’s, the eigenvectors \(\{\varvec{\xi }_i\}_{i \in A}\) are completely delocalized on the null space of \(\Pi _A\).
We conclude this example by remarking that a similar discussion also holds for a group of outliers that is not degenerate, but nearly degenerate, i.e. \(|d_i - d_j | \ll |d_i - d_k |\) for all \(i,j \in A\) and \(k \notin A\). We omit the details.
The next result is the analogue of Theorem 2.11 for outliers far from the bulk.
Theorem 2.16
(Outlier eigenvectors far from bulk) Fix \(\tau > 0\). Suppose that \(A \subset {\mathcal {O}}\) satisfies \(d_i \geqslant 1 + \tau \) for all \(i \in A,\) and that there exists a positive \(d_A\) such that \(\tau d_A \leqslant d_i \leqslant \tau ^{-1} d_A\) for all \(i \in A\). Then for any deterministic \(\mathbf{{w}} \in \mathbb {R}^M\) we have
We leave the discussion on the interpretation of the error in (2.17) to the reader; it is similar to that of Examples 2.13, 2.14, and 2.15.
2.3 Non-outlier eigenvectors
In this subsection we state our results on the non-outlier eigenvectors, i.e. on \(\varvec{\xi }_a\) for \(a \notin {\mathcal {O}}\). Our first result is a delocalization bound. In order to state it, we define for \(a \in [\![{1,K}]\!]\) the typical distance from \(\mu _a\) to the spectral edges \(\gamma _\pm \) through
$$\begin{aligned} \kappa _a \mathrel {\mathop :}=\biggl (\frac{a \wedge (K + 1 - a)}{K} \biggr )^{2/3} . \end{aligned}$$(2.18)
This quantity should be interpreted as a deterministic version of \(|\mu _a - \gamma _- |\wedge |\mu _a - \gamma _+ |\) for \(a \notin {\mathcal {O}}\); see Theorem 3.5 below.
Theorem 2.17
(Delocalization bound for non-outliers) Fix \(\tau > 0\). For \(a \in [\![{1, (1 - \tau )K}]\!] \setminus {\mathcal {O}}\) and deterministic \(\mathbf{{w}} \in \mathbb {R}^M\) we have
Similarly\(,\) if \(|\phi - 1 | \geqslant \tau \) then for \(a \in [\![{\tau K, K}]\!]\!\setminus \!{\mathcal {O}}\) and deterministic \(\mathbf{{w}} \in \mathbb {R}^M\) we have
For the following examples, we take \(\mathbf{{w}} = \mathbf{{v}}_i\) and \(a \in [\![{1, (1 - \tau )K}]\!]\!\setminus \!{\mathcal {O}}\). Under these assumptions (2.19) yields
Example 2.18
Fix \(\tau > 0\). If \(|d_i - 1 | \geqslant \tau \) (\(d_i\) is separated from the transition point) or \(a \geqslant \tau K\) (\(\mu _a\) is in the bulk), then the right-hand side of (2.21) reads \((1 + \sigma _i) / M\). In particular, if the eigenvalue \(\sigma _i\) of \(\Sigma \) is bounded, \(\varvec{\xi }_a\) is completely delocalized in the direction \(\mathbf{{v}}_i\).
Example 2.19
Suppose that \(a \leqslant C\) (\(\mu _a\) is near the edge), which implies that \(\kappa _a \asymp K^{-2/3}\). Suppose moreover that \(d_i\) is near the transition point \(1\). Then we get
Therefore the delocalization bound for \(\varvec{\xi }_a\) in the direction of \(\mathbf{{v}}_i\) becomes worse as \(d_i\) approaches the critical point (from either side), from \((1 + \phi )^{1/2} M^{-1}\) for \(d_i\) separated from \(1\), to \((1 + \phi )^{-1/6} M^{-1/3}\) for \(d_i\) at the transition point \(1\).
Next, we derive the law of the generalized component \(\langle {\mathbf{{w}}} , {\varvec{\xi }_a}\rangle \) for non-outlier \(a\). In particular, this provides a lower bound complementing the upper bound from Theorem 2.17. Recall the definition (2.8) of \(\alpha _+\).
Theorem 2.20
(Law of non-outliers) Fix \(\tau > 0\). Then\(,\) for any deterministic \(a \in [\![{1, K^{1 - \tau } \alpha _+^3}]\!]{\setminus } {\mathcal {O}}\) and \(\mathbf{{w}} \in \mathbb {R}^M,\) there exists a random variable \(\Theta (a, \mathbf{{w}}) \equiv \Theta _N(a, \mathbf{{w}})\) satisfying
and
in distribution as \(N \rightarrow \infty ,\) uniformly in \(a\) and \(\mathbf{{w}}\). Here \(\chi _1^2\) is a Chi-squared random variable \((\)i.e. the square of a standard normal\().\)
An analogous statement holds near the left spectral edge provided \(|\phi - 1 | \geqslant \tau ;\) we omit the details.
Remark 2.21
More generally, our method also yields the asymptotic joint distribution of the family
(after a suitable affine rescaling of the variables, as in Theorem 8.3 below), where \(a_1,\ldots , a_k, b_1,\ldots , b_k \in [\![{1, K^{1 - \tau } \alpha _+^3}]\!]{\setminus }{\mathcal {O}}\). We omit the precise statement, which is a universality result: it says essentially that the asymptotic distribution of (2.22) coincides with that under the standard Wishart ensemble (i.e. an uncorrelated Gaussian sample covariance matrix). The proof is a simple corollary of Theorem 2.7, Proposition 6.2, Proposition 6.3, and Theorem 8.3.
Remark 2.22
The restriction \(a \leqslant K^{1 - \tau } \alpha _+^3\) is the same as in Remarks 2.8 and 8.7. There, it is required for the eigenvalue sticking to be effective in the sense that the right-hand side of (2.9) is much smaller than the scale on which the eigenvalue \(\lambda _a\) fluctuates. Here, it ensures that the distribution of the eigenvector \(\varvec{\xi }_a\) is determined by the distribution of a single eigenvector of \(H\) (see Proposition 6.2).
Finally, instead of \(Q\) defined in (1.10), we may also consider
$$\begin{aligned} \dot{Q} \mathrel {\mathop :}=\frac{N}{N - 1} \, T X (I_N - \mathbf{{e}} \mathbf{{e}}^*) X^* T^* , \end{aligned}$$(2.23)
where the vector \(\mathbf{{e}}\) was defined in (1.4). All of our results stated for \(Q\) also hold for \(\dot{Q}\).
Theorem 2.23
Theorems 2.3, 2.7, 2.11, 2.16, 2.17, and 2.20 hold with \(\mu _i\) and \(\varvec{\xi }_i\) denoting the eigenvalues and eigenvectors of \(\dot{Q}\) instead of \(Q\). For Theorem 2.7, \(\lambda _i\) denotes the eigenvalues of \(\frac{N}{N - 1} Y (I_N - \mathbf{{e}} \mathbf{{e}}^*) Y^*\) instead of \(Y Y^*\) from (2.7).
3 Preliminaries
The rest of this paper is devoted to the proofs of the results from Sects. 2.1–2.3. To clarify the presentation of the main ideas of the proofs, we shall first assume that
$$\begin{aligned} \phi \geqslant 1 . \end{aligned}$$(3.1)
We make the assumption (3.1) throughout Sects. 3–7. The additional arguments required to relax the assumption (3.1) are presented in Sect. 8. Under the assumption (3.1) we have
$$\begin{aligned} K = M \wedge N = N . \end{aligned}$$
Moreover, the extension of our results from \(Q\) to \(\dot{Q}\), and hence the proof of Theorem 2.23, is given in Sect. 9.
For an \(M \times M\) matrix \(A\) and \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\) we abbreviate
$$\begin{aligned} A_{\mathbf{{v}} \mathbf{{w}}} \mathrel {\mathop :}=\langle {\mathbf{{v}}} , {A \mathbf{{w}}}\rangle . \end{aligned}$$
We also write
$$\begin{aligned} A_{\mathbf{{v}} i} \mathrel {\mathop :}=\langle {\mathbf{{v}}} , {A \mathbf{{e}}_i}\rangle , \qquad A_{i \mathbf{{v}}} \mathrel {\mathop :}=\langle {\mathbf{{e}}_i} , {A \mathbf{{v}}}\rangle , \end{aligned}$$
where \(\mathbf{{e}}_i \in \mathbb {R}^M\) denotes the \(i\)-th standard basis vector.
3.1 The isotropic local Marchenko–Pastur law
In this section we collect the key tool of our analysis: the isotropic Marchenko–Pastur law from [10].
It is well known that the empirical distribution of the eigenvalues of the \(N\times N\) matrix \(X^*X\) has the same asymptotics as the Marchenko–Pastur law
$$\begin{aligned} \varrho _\phi (\mathrm {d}x) \mathrel {\mathop :}=\frac{\phi ^{1/2}}{2 \pi } \frac{\sqrt{[(x - \gamma _-)(\gamma _+ - x)]_+}}{x} \, \mathrm {d}x + (1 - \phi )_+ \, \delta (\mathrm {d}x) , \end{aligned}$$(3.3)
where we recall the edges \(\gamma _\pm \) of the limiting spectrum defined in (1.7). Similarly, as noted in (1.6), the empirical distribution of the eigenvalues of the \(M \times M\) matrix \(X X^*\) has the same asymptotics as \(\varrho _{\phi ^{-1}}\).
Note that (3.3) is normalized so that its integral is equal to one. The Stieltjes transform of the Marchenko–Pastur law (3.3) is
where the square root is chosen so that \(m_\phi \) is holomorphic in the upper half-plane and satisfies \(m_\phi (z) \rightarrow 0\) as \(z \rightarrow \infty \). The function \(m_\phi = m_\phi (z)\) is also characterized as the unique solution of the equation
satisfying \(\hbox {Im }m (z) > 0\) for \(\hbox {Im }z >0\). The formulas (3.3)–(3.5) were originally derived for the case when \(\phi =M/N\) is independent of \(N\) (or, more precisely, when \(\phi \) has a limit in \((0,\infty )\) as \(N \rightarrow \infty \)). Our results allow \(\phi \) to depend on \(N\) under the constraint (1.9), so that \(m_\phi \) and \(\varrho _\phi \) may also depend on \(N\) through \(\phi \).
Throughout the following we use a spectral parameter
$$\begin{aligned} z = E + \mathrm {i}\eta \end{aligned}$$
with \(\eta > 0\), as the argument of Stieltjes transforms and resolvents. Define the resolvent
$$\begin{aligned} G(z) \mathrel {\mathop :}=(X X^* - z)^{-1} . \end{aligned}$$
For \(z\in \mathbb {C}\), define \(\kappa (z)\) to be the distance from \(E=\hbox {Re }z\) to the spectral edges \(\gamma _\pm \), i.e.
$$\begin{aligned} \kappa (z) \mathrel {\mathop :}=|E - \gamma _- | \wedge |E - \gamma _+ | . \end{aligned}$$
Throughout the following we regard the quantities \(E(z)\), \(\eta (z)\), and \(\kappa (z)\) as functions of \(z\) and usually omit the argument unless it is needed to avoid confusion.
Sometimes we shall need the following notion of high probability.
Definition 3.1
An \(N\)-dependent event \(\Xi \equiv \Xi _N\) holds with high probability if \(1 - \mathbf{{1}} (\Xi ) \prec 0\); unravelling Definition 2.1, this means that for every \(D > 0\) we have \(\mathbb {P}[\Xi ^c] \leqslant N^{-D}\) for large enough \(N\).
Fix a (small) \(\omega \in (0,1)\) and define the domain
Beyond the support of the limiting spectrum, one has stronger control all the way down to the real axis. For fixed (small) \(\omega > 0\) define the region
of spectral parameters separated from the asymptotic spectrum by \(K^{-2/3 + \omega }\), which may have an arbitrarily small positive imaginary part \(\eta \). Throughout the following we regard \(\omega \) as fixed once and for all, and do not track the dependence of constants on \(\omega \).
Theorem 3.2
(Isotropic local Marchenko–Pastur law [10]) Suppose that (1.15)\(,\) (1.9)\(,\) and (1.16) hold. Then
uniformly in \(z \in \mathbf{{S}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\). Moreover\(,\)
uniformly in \(z \in \widetilde{\mathbf{{S}}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).
Remark 3.3
The probabilistic estimates (3.9) and (3.10) of Theorem 3.2 may be strengthened to hold simultaneously for all \(z \in \mathbf{{S}}\) and for all \(z\in \widetilde{\mathbf{{S}}}\), respectively. For instance, (3.10) may be strengthened to
for all \(\varepsilon > 0\), \(D > 0\), and \(N \geqslant N_0(\varepsilon , D)\). See [10, Remark 2.6].
The next results are on the nontrivial (i.e. nonzero) eigenvalues of \(H \mathrel {\mathop :}=XX^*\) as well as the corresponding eigenvectors. The matrix \(H\) has \(K\) nontrivial eigenvalues, which we order according to
$$\begin{aligned} \lambda _1 \geqslant \lambda _2 \geqslant \cdots \geqslant \lambda _K . \end{aligned}$$
(The remaining \(M - K\) eigenvalues of \(H\) are zero.) Moreover, we denote by
$$\begin{aligned} \varvec{\zeta }_1, \varvec{\zeta }_2, \ldots , \varvec{\zeta }_K \end{aligned}$$
the unit eigenvectors of \(H\) associated with the nontrivial eigenvalues \(\lambda _1 \geqslant \lambda _2 \geqslant \cdots \geqslant \lambda _{K}\).
Theorem 3.4
(Isotropic delocalization [10]) Fix \(\tau > 0,\) and suppose that (1.15)\(,\) (1.9)\(,\) and (1.16) hold. Then for \(i \in [\![{1,K}]\!]\) we have
$$\begin{aligned} |\langle {\mathbf{{v}}} , {\varvec{\zeta }_i}\rangle |^2 \prec \frac{1}{M} \end{aligned}$$
uniformly for all deterministic unit vectors \(\mathbf{{v}} \in \mathbb {R}^M,\)
if either \(i \leqslant (1 - \tau ) K\) or \(|\phi - 1 | \geqslant \tau \).
The following result is on the rigidity of the nontrivial eigenvalues of \(H\). Let \(\gamma _1 \geqslant \gamma _2 \geqslant \cdots \geqslant \gamma _K\) be the classical eigenvalue locations according to \(\varrho _{\phi }\) [see (3.3)], defined through
$$\begin{aligned} \int _{\gamma _i}^{+\infty } \varrho _{\phi }(\mathrm {d}x) = \frac{1}{K} \Bigl (i - \frac{1}{2} \Bigr ) . \end{aligned}$$
Theorem 3.5
(Eigenvalue rigidity [10]) Fix \(\tau > 0,\) and suppose that (1.15)\(,\) (1.9)\(,\) and (1.16) hold. Then for \(i \in [\![{1,M}]\!]\) we have
$$\begin{aligned} |\lambda _i - \gamma _i | \prec \bigl (i \wedge (K + 1 - i) \bigr )^{-1/3} K^{-2/3} \end{aligned}$$
if \(i \leqslant (1 - \tau ) K\) or \(|\phi - 1 | \geqslant \tau \).
3.2 Link to the semicircle law
It will often be convenient to replace the Stieltjes transform \(m_\phi (z)\) of \(\varrho _{\phi }(\mathrm {d}x)\) with the Stieltjes transform \(w_\phi (z)\) of the measure
$$\begin{aligned} \varrho (\mathrm {d}x) \mathrel {\mathop :}=\frac{1}{2 \pi } \sqrt{[(x - \gamma _-)(\gamma _+ - x)]_+} \, \mathrm {d}x . \end{aligned}$$(3.16)
Note that this is nothing but Wigner’s semicircle law centred at \(\phi ^{1/2} + \phi ^{-1/2}\). Thus,
where in the last step we used (3.4). Note that
Using \(w_\phi \) we can write (3.5) as
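For later orientation we record the closed form of \(w_\phi \) that follows from (3.16) by the standard semicircle computation:
$$\begin{aligned} w_\phi (z) = \frac{\phi ^{1/2} + \phi ^{-1/2} - z + \sqrt{(z - \gamma _-)(z - \gamma _+)}}{2} , \end{aligned}$$
with the branch of the square root chosen so that \(w_\phi (z) \rightarrow 0\) as \(z \rightarrow \infty \); equivalently, \(w_\phi \) solves \(w_\phi (z) + w_\phi (z)^{-1} = \phi ^{1/2} + \phi ^{-1/2} - z\).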
Lemma 3.6
For \(z \in \mathbf{{S}}\) and \(\phi \geqslant 1\) we have
as well as
Similarly\(,\)
where \(I(z) \mathrel {\mathop :}=-1\) for \(E \geqslant \phi ^{1/2} + \phi ^{-1/2}\) and \(I(z) \mathrel {\mathop :}=+1\) for \(E < \phi ^{1/2} + \phi ^{-1/2}\). Finally\(,\) for \(z \in \mathbf{{S}}\) we have
\((\)All implicit constants depend on \(\omega \) in the definition (3.7) of \(\mathbf{{S}}.)\)
Proof
The estimates (3.19) and (3.20) follow from the explicit expressions in (3.4) and (3.17). In fact, these estimates have already appeared in previous works. Indeed, for \(m_\phi \) the estimates (3.19) and (3.20) were proved in [10, Lemma 3.3]. In order to prove them for \(w_\phi \), we observe that the estimates (3.19) and (3.20) follow from the corresponding ones for the semicircle law, which were proved in [20, Lemma 4.3]. The estimates (3.21) follow from (3.20) and the elementary identity
which can be derived from (3.18); the estimates for \(m_\phi \) are derived similarly. Finally, (3.22) follows easily from
which may itself be derived from (3.5). \(\square \)
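For the reader's orientation we record the explicit formula underlying these estimates. The measure above is Wigner's semicircle law centred at \(E_0 \mathrel {\mathop :}=\phi ^{1/2} + \phi ^{-1/2}\); assuming its standard radius \(2\), so that its support is \([\gamma _-, \gamma _+] = [E_0 - 2, E_0 + 2]\), we have
$$\begin{aligned} w_\phi (z) = \frac{-(z - E_0) + \sqrt{(z - E_0)^2 - 4}}{2} , \qquad w_\phi (z) + \frac{1}{w_\phi (z)} = -(z - E_0) , \end{aligned}$$
where the branch of the square root is chosen so that \(\sqrt{\zeta ^2 - 4} \sim \zeta \) as \(\zeta \rightarrow \infty \), i.e. so that \(w_\phi (z) \sim -z^{-1}\) as \(z \rightarrow \infty \), as required of a Stieltjes transform. All the estimates of Lemma 3.6 may be checked directly on this expression.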
In analogy to \(w_\phi \) [see (3.17)], we define the matrix-valued function
Theorem 3.2 has the following analogue, which compares \(F\) with \(w_\phi \).
Lemma 3.7
Suppose that (1.15)\(,\) (1.9)\(,\) and (1.16) hold. Then
uniformly in \(z \in \mathbf{{S}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\). Moreover\(,\)
uniformly in \(z \in \widetilde{\mathbf{{S}}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).
Proof
The proof is an easy consequence of Theorem 3.2 and Lemma 3.6, combined with the fact that for \(z \in \mathbf{{S}}\) or \(z \in \widetilde{\mathbf{{S}}}\) we have \(|z | \asymp \phi ^{1/2}\) for \(\phi \geqslant 1\) and \(|z | \asymp \phi ^{-1/2}\) for \(\phi \leqslant 1\). \(\square \)
3.3 Extension of the spectral domain
In this section we extend the spectral domain on which Theorem 3.2 and Lemma 3.7 hold. The argument relies on the Helffer–Sjöstrand functional calculus [16]. Define the domains
Proposition 3.8
Fix \(\omega , \tau \in (0,1)\).
-
(i)
If \(\phi < 1 - \tau \) then
$$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {G(z) \mathbf{{w}}}\rangle - m_{\phi ^{-1}}(z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \frac{1}{(\kappa + \eta )^2 + (\kappa + \eta )^{1/4}} K^{-1/2} \end{aligned}$$(3.27)uniformly for \(z \in \widehat{\mathbf{{S}}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).
-
(ii)
If \(|\phi - 1 | \leqslant \tau \) then (3.27) holds uniformly for \(z \in \widehat{\mathbf{{S}}} \setminus \mathbf{{B}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).
-
(iii)
If \(\phi > 1 + \tau \) then
$$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {G(z) \mathbf{{w}}}\rangle - m_{\phi ^{-1}}(z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \frac{1}{\phi ^{1/2} |z | ((\kappa + \eta ) + (\kappa + \eta )^{1/4})} K^{-1/2}\qquad \end{aligned}$$(3.28)uniformly for \(z \in \widehat{\mathbf{{S}}} \setminus \{0\}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).
Proof
By polarization and linearity, we may assume that \(\mathbf{{w}} = \mathbf{{v}}\). Define the signed measure
so that
The basic idea of the proof is to apply the Helffer–Sjöstrand formula to the function
where \(x_0\) is chosen below. To that end, we need a smooth compactly supported cutoff function \(\chi \) on the complex plane satisfying \(\chi (w) \in [0,1]\) and \(|\partial _{\bar{w}} \chi (w) | \leqslant C(\omega , \tau )\). We distinguish the three cases \(\phi < 1 - \tau \), \(|\phi - 1 | \leqslant \tau \), and \(\phi > 1 + \tau \).
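Before turning to the three cases, we record the formula in the form we use it (this is the standard Cauchy–Pompeiu representation underlying the Helffer–Sjöstrand calculus; see [16]): for any smooth, compactly supported \(F : \mathbb {C} \rightarrow \mathbb {C}\),
$$\begin{aligned} F(x) = \frac{1}{\pi } \int _{\mathbb {C}} \frac{\partial _{\bar{w}} F(w)}{x - w} \, \mathrm {d}w , \end{aligned}$$
where \(\mathrm {d}w\) denotes the two-dimensional Lebesgue measure. We shall apply it to \(F = f_z \chi \); since the singularity of \(f_z\) at \(z\) lies outside the support of \(\chi \), the function \(f_z \chi \) is smooth, and \(\partial _{\bar{w}}(f_z \chi ) = f_z \, \partial _{\bar{w}} \chi \) by the holomorphy of \(f_z\) on \(\{\chi \ne 0\}\). For \(x\) in the region \(\{\chi = 1\}\) this recovers \(f_z(x)\), which is the type of representation invoked in (3.30) and (3.31) below.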
Let us first focus on the case \(\phi < 1 - \tau \). Set \(x_0 \mathrel {\mathop :}=\phi ^{1/2} + \phi ^{-1/2}\) and choose a constant \(\omega ' = \omega '(\omega , \tau ) \in (0, \omega )\) small enough that \(\gamma _- \geqslant 4 \omega '\). We require that \(\chi \) be equal to \(1\) in the \(\omega '\)-neighbourhood of \([\gamma _-, \gamma _+]\) and \(0\) outside of the \(2\omega '\)-neighbourhood of \([\gamma _-,\gamma _+]\). By Theorem 3.5 we have \(\hbox {supp }\rho ^\Delta \subset \{\chi = 1\}\) with high probability. Now choose \(z\) satisfying \({{\mathrm{dist}}}(z, [\gamma _-, \gamma _+]) \geqslant 3 \omega '\). Then the Helffer–Sjöstrand formula [16] yields, for \(x \in \hbox {supp }\rho ^\Delta \),
with high probability, where \(\mathrm {d}w\) denotes the two-dimensional Lebesgue measure in the complex plane. Noting that \(\int \mathrm {d}\rho ^{\Delta } = 0\), we may therefore write
with high probability, where in the second step we used (3.30) and the fact that \(f_z\) is holomorphic away from \(z\). The integral is supported on the set \(\{\partial _{\bar{w}} \chi \ne 0\} \subset \{{w \mathrel {\mathop :}{{\mathrm{dist}}}(w,[\gamma _-, \gamma _+]) \in [\omega ', 2 \omega ']}\}\), on which we have the estimates \(|f_z(w) | \leqslant C (\kappa (z) + \eta (z))^{-2}\) and \(|m^\Delta (w) | \prec K^{-1/2}\), as follows from Theorem 3.2 applied to \(\mathbf{{S}}(\omega ', K)\) and (3.22). Recalling Remark 3.3, we may plug these estimates into the integral to get
which holds for \({{\mathrm{dist}}}(z, [\gamma _-, \gamma _+]) \geqslant 3 \omega '\). (Recall that \(|\partial _{\bar{w}} \chi (w) | \leqslant C\).) Combining this estimate with (3.10), the claim (3.27) follows for \(z \in \widehat{\mathbf{{S}}}\).
Next, we deal with the case \(|\phi - 1 | \leqslant \tau \). The argument is similar. We again choose \(x_0 \mathrel {\mathop :}=\phi ^{1/2} + \phi ^{-1/2}\). We require that \(\chi \) be equal to \(1\) in the \(\omega \)-neighbourhood of \([0, \gamma _+]\) and \(0\) outside of the \(2\omega \)-neighbourhood of \([0,\gamma _+]\). We may now repeat the above argument almost verbatim. For \({{\mathrm{dist}}}(z, [0, \gamma _+]) \geqslant 3 \omega \) and \(w \in \{\partial _{\bar{w}} \chi \ne 0\}\) we find that \(|f_z(w) | \leqslant C (\kappa (z) + \eta (z))^{-2}\) and \(|m^\Delta (w) | \prec K^{-1/2}\). Hence, recalling (3.10), we get (3.27) for \(z \in \widehat{\mathbf{{S}}}\!\setminus \!\mathbf{{B}}\).
Finally, suppose that \(\phi > 1 + \tau \). Now we set \(x_0 \mathrel {\mathop :}=0\). We choose the same \(\omega '\) and cutoff function \(\chi \) as in the case \(\phi < 1 - \tau \) above. Suppose that \({{\mathrm{dist}}}(z, [\gamma _-, \gamma _+]) \geqslant 3 \omega '\) and \(z \ne 0\). Thus, (3.30) holds with high probability for \(x \in \hbox {supp }\rho ^\Delta \!\setminus \! \{0\}\). Since \(f_z(0) = 0\), we therefore find that (3.31) holds. As above, we find that for \(w \in \{\partial _{\bar{w}} \chi \ne 0\}\) we have
and \(|m^\Delta (w) | \prec \phi ^{-1} K^{-1/2}\). Recalling (3.10), we find that (3.28) follows easily. \(\square \)
Proposition 3.8 yields the following result for \(F\) defined in (3.24).
Corollary 3.9
Fix \(\omega , \tau \in (0,1)\).
-
(i)
If \(\phi < 1 - \tau \) then
$$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {F(z) \mathbf{{w}}}\rangle - w_{\phi }(z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \frac{\phi ^{1/2} |z |}{(\kappa + \eta )^2 + (\kappa + \eta )^{1/4}} K^{-1/2} \end{aligned}$$(3.32)uniformly for \(z \in \widehat{\mathbf{{S}}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).
-
(ii)
If \(|\phi - 1 | \leqslant \tau \) then (3.32) holds uniformly for \(z \in \widehat{\mathbf{{S}}} \!\setminus \! \mathbf{{B}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).
-
(iii)
If \(\phi > 1 + \tau \) then
$$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {F(z) \mathbf{{w}}}\rangle - w_{\phi }(z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \frac{1}{(\kappa + \eta ) + (\kappa + \eta )^{1/4}} K^{-1/2} \end{aligned}$$(3.33)uniformly for \(z \in \widehat{\mathbf{{S}}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).
3.4 Identities for the resolvent and eigenvalues
In this section we derive the identities on which our analysis of the eigenvalues and eigenvectors relies. Recall the definition of the set \({\mathcal {R}}\) from (1.14). We write the population covariance matrix \(\Sigma \) from (1.12) as
where \(D = {{\mathrm{diag}}}(d_i)_{i \in {\mathcal {R}}}\) is an invertible diagonal \(|{\mathcal {R}} | \times |{\mathcal {R}} |\) matrix and \(V = [\mathbf{{v}}_i]_{i \in {\mathcal {R}}}\) is the matrix of eigenvectors \(\mathbf{{v}}_i\) of \(\Sigma \) indexed by the set \({\mathcal {R}}\). Note that \(V\) is an \(M \times |{\mathcal {R}} |\) isometry, i.e. \(V\) satisfies \(V^* V = I_{|{\mathcal {R}} |}\).
We use the definitions
where \(H = X X^*\) and \(Q = \Sigma ^{1/2} H \Sigma ^{1/2}\). We introduce the \(|{\mathcal {R}} | \times |{\mathcal {R}} |\) matrix
We also denote by \(\sigma (A)\) the spectrum of a square matrix \(A\).
The following lemma collects the basic identities for analysing \(\sigma (Q)\) and \(\widetilde{G}\). We remark that versions of its part (i) have already appeared in several previous works on finite-rank deformations of random matrix ensembles [2, 7, 27, 37].
Lemma 3.10
-
(i)
Suppose that \(\mu \notin \sigma (H)\). Then \(\mu \in \sigma (Q)\) if and only if
$$\begin{aligned} \det \left( {D^{-1} + W(\mu )}\right) = 0. \end{aligned}$$(3.34) -
(ii)
We have
$$\begin{aligned} \Sigma ^{1/2} \widetilde{G}(z) \Sigma ^{1/2} = G(z) - G(z) V \frac{\phi ^{1/2} z}{D^{-1} + W(z)} V^* G(z). \end{aligned}$$(3.35)
Proof
To prove (i), we write the condition \(\mu \in \sigma (Q)\) as
where we used that \(\mu \notin \sigma (H)\). Using
the matrix identity \(\det (1 + XY) = \det (1 + YX)\), and \(\det (\Sigma ) \ne 0\), we find
and the claim follows.
To prove (ii), we write
The claim now follows from the identity
with \(A = H - z\), \(B = D (\phi ^{-1/2} + D)^{-1}\), \(S = V\), and \(T = zV^*\). \(\square \)
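The identity invoked here is presumably the Sherman–Morrison–Woodbury formula, which we record for the reader's convenience: whenever all the inverses exist,
$$\begin{aligned} (A + S B T)^{-1} = A^{-1} - A^{-1} S \left( {B^{-1} + T A^{-1} S}\right) ^{-1} T A^{-1} . \end{aligned}$$
With the substitutions listed above we have \(A^{-1} = G(z)\), and a short computation turns the middle inverse into the factor \(\frac{\phi ^{1/2} z}{D^{-1} + W(z)}\) appearing in (3.35).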
The result (3.35), when restricted to the range of \(V\), has an alternative form (3.37) which is often easier to work with, since it collects all of the randomness in the single quantity \(W(z)\) on its right-hand side.
Lemma 3.11
We have
Proof
From (3.35) we get
Applying the identity
to the right-hand side yields
from which the claim follows. \(\square \)
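The identity invoked here is plausibly the following general fact: for square matrices \(B\) and \(C\) of the same size with \(B + C\) invertible,
$$\begin{aligned} C (B + C)^{-1} B = B (B + C)^{-1} C = B - B (B + C)^{-1} B = C - C (B + C)^{-1} C , \end{aligned}$$
as one checks by writing \(C = (B + C) - B\) or \(B = (B + C) - C\). Applied with \(B + C = D^{-1} + W(z)\), it replaces the random factors surrounding the inverse by deterministic ones, so that all of the randomness on the right-hand side of (3.37) indeed enters through the single quantity \(W(z)\).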
4 Eigenvalue locations
In this section we prove Theorems 2.3 and 2.7. The arguments are similar to those of [27, Section 6], and we therefore only sketch the proofs. The proof in [27, Section 6] relies on three main steps: (i) establishing a forbidden region which with high probability contains no eigenvalues of \(Q\); (ii) a counting estimate for the special case where \(D\) does not depend on \(N\), which ensures that each connected component of the allowed region (the complement of the forbidden region) contains exactly the right number of eigenvalues of \(Q\); and (iii) a continuity argument in which the counting result of (ii) is extended to arbitrary \(N\)-dependent \(D\) using the gaps established in (i) and the continuity of the eigenvalues as functions of the matrix entries. The steps (ii) and (iii) are exactly the same as in [27], and will not be repeated here. The step (i) differs slightly from that of [27], and in the proofs below we explain these differences.
We need the following eigenvalue interlacing result, which is purely deterministic. It holds for any nonnegative definite \(M \times M\) matrix \(H\) and any rank-one deformation of the form \(Q = (1 + \tilde{d} \mathbf{{v}} \mathbf{{v}}^*)^{1/2} H (1 + \tilde{d} \mathbf{{v}} \mathbf{{v}}^*)^{1/2}\) with \(\tilde{d} \geqslant -1\) and \(\mathbf{{v}} \in \mathbb {R}^M\).
Lemma 4.1
(Eigenvalue interlacing) Let \(|{\mathcal {R}} | = 1\) and \(D = d \in {\mathcal {D}}\). For \(d > 0\) we have
and for \(d < 0\) we have
Proof
By a simple perturbation argument (the eigenvalues depend continuously on the matrix entries), we may assume without loss of generality that \(\lambda _1,\ldots , \lambda _M\) are all positive and distinct. Writing \(\Sigma = 1 + \phi ^{1/2} d \mathbf{{v}} \mathbf{{v}}^*\), we get from (3.35) that
Note that \(a > 0\). Thus we get
Writing this in spectral decomposition yields
As above, a simple perturbation argument implies that we may without loss of generality assume that all scalar products in (4.1) are nonzero. Now take \(z \in (0, \infty )\). Note that \(b(z)\) and \(d\) have the same sign.
To conclude the proof, we observe that the left-hand side of (4.1) defines a function of \(z \in (0,\infty )\) with \(M - 1\) singularities and \(M\) zeros, which is smooth and decreasing away from the singularities. Moreover, its zeros are the eigenvalues \(\lambda _1,\ldots , \lambda _M\). The interlacing property now follows from the fact that \(z\) is an eigenvalue of \(Q\) if and only if the left-hand side of (4.1) is equal to \(-b(z)\). \(\square \)
Corollary 4.2
For the rank-\(|{\mathcal {R}} |\) model (1.10) we have
with the convention that \(\lambda _{i} = 0\) for \(i > K\) and \(\lambda _i = \infty \) for \(i < 1\).
We now move on to the proof of Theorem 2.3. Note that the function \(\theta \) defined in (1.17) may be extended to a biholomorphic function from \(\{\zeta \in \mathbb {C}\mathrel {\mathop :}|\zeta | > 1\}\) to \(\{z \in \mathbb {C}\mathrel {\mathop :}z - (\phi ^{1/2} + \phi ^{-1/2}) \notin [-2,2]\}\). Moreover, using (3.18) it is easy to check that for \(|\zeta | > 1\) we have
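For the reader's convenience we sketch the verification of (4.2). We assume, consistently with the mapping property just stated and with the relation \(-d_i^{-1} = w_\phi (\theta (d_i))\) used in the proof below, that \(\theta \) from (1.17) has the form \(\theta (\zeta ) = \phi ^{1/2} + \phi ^{-1/2} + \zeta + \zeta ^{-1}\), and that (3.18) is the self-consistent equation \(w_\phi (z) + w_\phi (z)^{-1} = -(z - \phi ^{1/2} - \phi ^{-1/2})\). Then
$$\begin{aligned} w_\phi (\theta (\zeta )) + \frac{1}{w_\phi (\theta (\zeta ))} = -\left( {\zeta + \zeta ^{-1}}\right) , \end{aligned}$$
whose two solutions are \(-\zeta \) and \(-\zeta ^{-1}\); for \(|\zeta | > 1\) only \(w_\phi (\theta (\zeta )) = -\zeta ^{-1}\) is compatible with the decay \(w_\phi (z) \sim -z^{-1}\) as \(z \rightarrow \infty \). Differentiating \(\theta \) gives \(\theta '(\zeta ) = 1 - \zeta ^{-2}\), which vanishes linearly at \(\zeta = 1\); this degeneracy is the mechanism behind the square-root behaviour of the outlier location near the BBP transition.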
Throughout the following we shall make use of the subsets of outliers
for \(\tau \geqslant 0\). Note that \({\mathcal {O}} = {\mathcal {O}}_0^+ \cup {\mathcal {O}}_0^-\).
Proof of Theorem 2.3
The proof of Theorem 2.3 is similar to that of [27, Equation (2.20)]. We focus first on the outliers to the right of the bulk spectrum. Let \(\varepsilon > 0\). We shall prove that there exists an event \(\Xi \) of high probability (see Definition 3.1) such that for all \(i \in {\mathcal {O}}_{4 \varepsilon }^+\) we have
and for \(i \in [\![{|{\mathcal {O}}_{4 \varepsilon }^+ | + 1, |{\mathcal {O}}_{4 \varepsilon }^+ | + r}]\!]\) we have
Before proving (4.3) and (4.4), we show how they imply (2.4) for \(d_i > 0\) and (2.6). From (4.4) we get for \(i\) satisfying \(K^{-1/3} \leqslant d_i - 1 \leqslant K^{-1/3 + 4 \varepsilon }\)
Since \(\varepsilon > 0\) was arbitrary, (2.4) for \(d_i > 0\) and (2.6) follow from (4.3) and (4.5).
What remains is the proof of (4.3) and (4.4). As in [27, Proposition 6.5], the first step is to prove that with high probability there are no eigenvalues outside a neighbourhood of the classical outlier locations \(\theta (d_i)\). To that end, we define for each \(i \in {\mathcal {O}}_\varepsilon ^+\) the interval
Moreover, we set \(I_0 \mathrel {\mathop :}=[0, \theta (1 + K^{-1/3 + 2 \varepsilon })]\).
We now claim that with high probability the complement of the set \(I(D) \mathrel {\mathop :}=I_0 \cup \bigcup _{i \in {\mathcal {O}}_{\varepsilon }^+} I_i(D)\) contains no eigenvalues of \(Q\). Indeed, from Theorem 3.5 and Corollary 3.9 combined with Remark 3.3 (with small enough \(\omega \equiv \omega (\varepsilon )\)), we find that there exists an event \(\Xi \) of high probability such that \(|\lambda _i - \gamma _+ | \leqslant K^{-2/3 + \varepsilon }\) for \(i \in [\![{1,2r}]\!]\) and
for all \(x \notin I_0\), where we defined
In particular, we have \(\mathbf{{1}} (\Xi ) \lambda _1 \leqslant \theta (1 + K^{-1/3 + \varepsilon })\). Hence we find from (3.34) that on the event \(\Xi \) the value \(x \notin I_0\) is an eigenvalue of \(Q\) if and only if the matrix
is singular. Since \(-d_i^{-1} = w_\phi (\theta (d_i))\) for \(i \in {\mathcal {O}}_\varepsilon ^+\), we conclude from the definition of \(I(D)\) that it suffices to show that if \(x \notin I(D)\) then
We prove (4.6) using the two following observations. First, \(w_\phi \) is monotone increasing on \((\gamma _+, \infty )\) and
as follows from (4.2). Second,
We omit further details, which may be found e.g. in [27, Section 6]. Thus we conclude that on the event \(\Xi \) the complement of \(I(D)\) contains no eigenvalues of \(Q\).
The next step of the proof consists in making sure that the allowed neighbourhoods \(I_i(D)\) contain exactly the right number of outliers; the counting argument (sketched in the steps (ii) and (iii) at the beginning of this section) follows that of [27, Section 6]. First we consider the case \(D = D(0)\) where for all \(i \ne j \in {\mathcal {O}}_{\varepsilon }^+\) we have \(d_i(0),d_j(0) \geqslant 2\) and \(|d_i(0) - d_j(0) | \geqslant 1\), and show that each interval of the family \(\{I_i(D(0)) \mathrel {\mathop :}i \in {\mathcal {O}}_{\varepsilon }^+\}\) contains exactly one eigenvalue of \(Q\) (see [27, Proposition 6.6]). We then deduce the general case by a continuity argument, by choosing an appropriate continuous path \((D(t))_{t \in [0,1]}\) joining the initial configuration \(D(0)\) to the desired final configuration \(D = D(1)\). The continuity argument requires the existence of a gap in the set \(I(D)\) to the left of \(\bigcup _{i \in {\mathcal {O}}_{4 \varepsilon }^+} I_i(D)\). The existence of such a gap follows easily from the definition of \(I(D)\) and the fact that \(|{\mathcal {R}} |\) is bounded. The details are the same as in [27, Section 6.5]. Hence (4.3) follows. Moreover, (4.4) follows from the same argument combined with Corollary 4.2 for a lower bound on \(\mu _i\). This concludes the analysis of the outliers to the right of the bulk spectrum.
The case of outliers to the left of the bulk spectrum is analogous. Here we assume that \(\phi < 1 - \tau \). The argument is exactly the same as for \(d_i > 0\), except that we use the bound (3.32) to the left of the bulk spectrum as well as \(|\lambda _i - \gamma _- | \leqslant K^{-2/3 + \varepsilon }\) for \(i \in [\![{K-2r, K}]\!]\) with high probability. \(\square \)
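The following numerical sketch illustrates the content of Theorem 2.3. It is not part of the proof; the normalization of \(H\) and the explicit form \(\theta (d) = \phi ^{1/2} + \phi ^{-1/2} + d + d^{-1}\) used below are our reconstruction of (1.10) and (1.17), chosen so that the bulk edges sit at \(\gamma _\pm = \phi ^{1/2} + \phi ^{-1/2} \pm 2\) and \(\Sigma = 1 + \phi ^{1/2} d \mathbf{{v}} \mathbf{{v}}^*\), as in the proof of Lemma 4.1 above.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 300, 600
phi = M / N                    # assumed convention: phi = M/N, so K = M here
d = 2.0                        # supercritical spike: d > 1 creates one outlier
v = np.zeros(M); v[0] = 1.0

# Sigma = 1 + phi^{1/2} d v v^*; since v v^* is a rank-one projector,
# Sigma^{1/2} = 1 + (sqrt(1 + phi^{1/2} d) - 1) v v^*.
c = np.sqrt(phi) * d
sqrt_Sigma = np.eye(M) + (np.sqrt(1 + c) - 1) * np.outer(v, v)

X = rng.standard_normal((M, N))
H = X @ X.T / np.sqrt(M * N)   # uncorrelated matrix: bulk spectrum ~ [gamma_-, gamma_+]
Q = sqrt_Sigma @ H @ sqrt_Sigma

theta = np.sqrt(phi) + 1 / np.sqrt(phi) + d + 1 / d   # predicted outlier location
gamma_plus = np.sqrt(phi) + 1 / np.sqrt(phi) + 2      # right edge of the bulk
mu = np.linalg.eigvalsh(Q)
print(mu[-1], theta)           # the top eigenvalue is close to theta(d)
print(mu[-2], gamma_plus)      # the next eigenvalue sticks to the edge gamma_+
```

Running the same script with a subcritical spike \(d < 1\) produces no outlier: \(\mu _1\) sticks to \(\gamma _+\), in accordance with the BBP transition.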
Proof of Theorem 2.7
We only give the proof of (2.9); the proof of (2.10) is analogous. Fix \(\varepsilon > 0\). By Theorem 2.3, Theorem 3.5, Theorem 3.2, Lemma 3.7, and Remark 3.3, there exists a high-probability event \(\Xi \equiv \Xi _N(\varepsilon )\) satisfying the following conditions.
-
(i)
We have
$$\begin{aligned}&\mathbf{{1}} (\Xi ) |\mu _{s_+ + 1} - \gamma _+ | \leqslant K^{-2/3 + \varepsilon },\nonumber \\&\mathbf{{1}} (\Xi ) |\lambda _i-\gamma _i | \leqslant i^{-1/3}K^{-2/3 + \varepsilon }\quad (i \leqslant (1 - \tau )K). \end{aligned}$$(4.7) -
(ii)
For \(z \in \mathbf{{S}}(\varepsilon , K)\) we have
$$\begin{aligned} \mathbf{{1}} (\Xi ) \bigl \Vert W(z) - w_{\phi }(z) \bigr \Vert \leqslant K^\varepsilon \left( {\sqrt{\frac{\hbox {Im }w_{\phi }(z)}{K \eta }} + \frac{1}{K \eta }}\right) \end{aligned}$$(4.8)and
$$\begin{aligned} \max _{i,j} \bigl |\langle {\mathbf{{v}}_i} , {G(z) \mathbf{{v}}_j}\rangle - m_{\phi ^{-1}}(z) \delta _{ij} \bigr | \leqslant K^\varepsilon \left( {\sqrt{\frac{\hbox {Im }m_{\phi ^{-1}}(z)}{M \eta }} + \frac{1}{M \eta }}\right) . \end{aligned}$$(4.9)
For the following we fix a realization \(H \in \Xi \). We suppose first that
and define \(\eta \mathrel {\mathop :}=K^{-1 + 2 \varepsilon } \alpha _+^{-1}\). Now suppose that \(x\) satisfies
We shall show, using (3.34), that any \(x\) satisfying (4.11) cannot be an eigenvalue of \(Q\). First we deduce from (4.8) that
The estimate (4.12) follows by spectral decomposition of \(F(\cdot )\) together with the estimate \(2 |\lambda _i - x | \geqslant \sqrt{(\lambda _i - x)^2 + \eta ^2}\) for all \(i\). We get from (4.12) and Lemma 3.6 that
where we use the notation \(A = B + O(t)\) to mean \(\Vert A - B \Vert \leqslant C t\). Recalling (3.34), we conclude that on the event \(\Xi \) the value \(x\) is not an eigenvalue of \(Q\) provided
It is easy to check that this condition is satisfied if
which holds provided that
where we used (4.10). Recalling (4.7), we therefore conclude that for \(i \leqslant K^{1 - 2 \varepsilon } \alpha _+^3\) the set
contains no eigenvalue of \(Q\).
The next step of the proof is a counting argument (sketched in the steps (ii) and (iii) at the beginning of this section), which uses the eigenvalue interlacing from Lemma 4.1. The details are the same as in [27, Section 6], and are hence omitted here. The counting argument implies that for \(i \leqslant K^{1 - 2 \varepsilon } \alpha _+^3\) and assuming (4.10) we have
What remains is to check (4.13) for the cases \(\alpha _+ < K^{-1/3 + \varepsilon }\) and \(i > K^{1 - 2 \varepsilon } \alpha _+^3\).
Suppose first that \(\alpha _+ < K^{-1/3 + \varepsilon }\). Then using the rigidity from (4.7) and interlacing from Corollary 4.2 we find
where we used the trivial bound \(i \geqslant 1\). Similarly, if \(i > K^{1 - 2 \varepsilon } \alpha _+^3\) satisfies \(i \leqslant (1 - \tau )K\), we may repeat the same estimate.
We conclude that (4.13) holds under the sole assumption that \(i \leqslant (1 - \tau )K\). Since \(\varepsilon > 0\) was arbitrary, (2.9) follows. \(\square \)
5 Outlier eigenvectors
In this section we focus on the outlier eigenvectors \(\varvec{\xi }_a\), \(a \in {\mathcal {O}}\). Here we in fact prove Theorem 2.11 under the stronger assumption
instead of \(1 + K^{-1/3} \leqslant d_i \leqslant \tau ^{-1}\). Improving the lower bound from \(1 + K^{-1/3 + \tau }\) to the claimed \(1 + K^{-1/3}\) requires a completely different approach, relying on eigenvector delocalization bounds; it is presented in Sect. 6 in conjunction with results for the non-outlier eigenvectors \(\varvec{\xi }_a\), \(a \notin {\mathcal {O}}\).
The proof of Theorem 2.16 is similar to that of Theorem 2.11; one has to adapt the proof to cover the range \(d_i \in [1 + \tau , \infty )\) instead of \(d_i \in [1 + K^{-1/3}, \tau ^{-1}]\). The key input is the extension of the spectral domain from Corollary 3.9. For the sake of brevity we omit the details of the proof of Theorem 2.16, and focus solely on Theorem 2.11.
The following proposition is the main result of this section.
Proposition 5.1
Fix \(\tau > 0\). Suppose that \(A\) satisfies (5.1). Then for all \(i,j = 1,\ldots , M\) we have
where the symbol \((i \leftrightarrow j)\) denotes the preceding terms with \(i\) and \(j\) interchanged.
Note that, under the assumption (5.1), Theorem 2.11 is an easy consequence of Proposition 5.1. As explained above, the proof of Theorem 2.11 in full generality is given in Sect. 6, where we give the additional argument required to relax (5.1).
The rest of this section is devoted to the proof of Proposition 5.1.
5.1 Non-overlapping outliers
We first prove a slightly stronger version of (5.2) under the additional non-overlapping condition
for all \(i \in A\), where \(\delta > 0\) is a positive constant. This is a precise version of the second condition of (2.14), whose interpretation was given below (2.14): an outlier indexed by \(A\) cannot overlap with an outlier indexed by \(A^c\). Note, however, that there is no restriction on the outliers indexed by \(A\) overlapping among themselves. The assumption (5.3) will be removed in Sect. 5.2. The main estimate for non-overlapping outliers is the following.
Proposition 5.2
Fix \(\tau > 0\) and \(\delta > 0\). Suppose that \(A\) satisfies (5.1) and (5.3) for all \(i \in A\). Then for all \(i,j = 1,\ldots , M\) we have
Remark 5.3
The only difference between (5.2) and (5.4) is the term proportional to \(\mathbf{{1}} (j \notin A)\) on the last line. In order to prove (5.2) without the overlapping condition (5.3), it is necessary to start from the stronger bound (5.4); see Sect. 5.2 below.
The rest of this subsection is devoted to the proof of Proposition 5.2. We begin by defining \(\omega \mathrel {\mathop :}=\tau / 2\) and letting \(\varepsilon < \min \{{\tau /3, \delta }\}\) be a positive constant to be determined later. We choose a high-probability event \(\Xi \equiv \Xi _N(\varepsilon , \tau )\) (see Definition 3.1) satisfying the following conditions.
-
(i)
We have
$$\begin{aligned} \mathbf{{1}} (\Xi ) \bigl |W_{ij}(z) - w_\phi (z) \delta _{ij} \bigr | \leqslant |z - \gamma _+ |^{-1/4} K^{-1/2 + \varepsilon } \end{aligned}$$(5.5)for \(i,j \in {\mathcal {R}}\), large enough \(K\), and all \(z\) in the set
$$\begin{aligned} \bigl \{{z \in \mathbb {C}\mathrel {\mathop :}\hbox {Re }z \geqslant \gamma _+ + K^{-2/3 + \omega } ,\, |z | \leqslant \omega ^{-1}}\bigr \}. \end{aligned}$$(5.6) -
(ii)
For all \(i\) satisfying \(1 + K^{-1/3} \leqslant d_i \leqslant \omega ^{-1}\) we have
$$\begin{aligned} \mathbf{{1}} (\Xi ) |\mu _{i} - \theta (d_i) | \leqslant (d_i - 1)^{1/2} \, K^{-1/2 + \varepsilon }. \end{aligned}$$(5.7) -
(iii)
We have
$$\begin{aligned} \mathbf{{1}} (\Xi ) |\mu _{s_++1} - \gamma _+ | \leqslant K^{-2/3+\varepsilon }. \end{aligned}$$(5.8)
Note that such an event \(\Xi \) exists. Indeed, (5.7) and (5.8) may be satisfied using Theorem 2.3, and (5.5) using Lemma 3.7 combined with Remark 3.3.
For the sequel we fix a realization \(H \in \Xi \) satisfying the conditions (i)–(iii) above. Hence, the rest of the proof of Proposition 5.2 is entirely deterministic, and the randomness only enters in ensuring that \(\Xi \) has high probability. Our starting point is a contour integral representation of the projection \(P_A\). In order to construct the contour, we define for each \(i \in A\) the radius
We define the contour \(\Gamma \mathrel {\mathop :}=\partial \Upsilon \) as the boundary of the union of discs \(\Upsilon \mathrel {\mathop :}=\bigcup _{i \in A} B_{\rho _i}(d_i)\), where \(B_\rho (d)\) is the open disc of radius \(\rho \) around \(d\). We shall sometimes need the decomposition \(\Gamma = \bigcup _{i \in A} \Gamma _i\), where \(\Gamma _i \mathrel {\mathop :}=\Gamma \cap \partial B_{\rho _i}(d_i)\). See Fig. 3 for an illustration of \(\Gamma \).
Fig. 3 The integration contour \(\Gamma = \bigcup _{i \in A} \Gamma _i\). In this example \(\Gamma \) consists of two components, and we have \(|{\mathcal {R}} | = 6\) with \(A = \{2,3,4,5\}\). We draw the locations of \(d_i\) with \(i \in A\) using black dots and the other \(d_i\) using white dots. The contour is constructed by drawing circles of radius \(\rho _i\) around each \(d_i\) for \(i \in A\) (depicted with dotted lines). The piece \(\Gamma _i\) consists of the points on the circle centred at \(d_i\) that lie outside all other circles
We shall have to use the estimate (5.5) on the set \(\overline{\theta (\Upsilon )}\). Its applicability is an immediate consequence of the following lemma.
Lemma 5.4
The set \(\overline{\theta (\Upsilon )}\) lies in (5.6).
Proof
It is easy to check that \(\theta (\zeta ) \leqslant \omega ^{-1}\) for all \(\zeta \in \Upsilon \). In order to check the lower bound on \(\hbox {Re }\theta (\zeta )\), we note that for any \(\alpha \in (0,1)\) there exists a constant \(c \equiv c(\alpha , \tau )\) such that
for \(\hbox {Re }\zeta \geqslant 1\), \(|\hbox {Im }\zeta | \leqslant \alpha (\hbox {Re }\zeta - 1)\), and \(|\zeta | \leqslant \tau ^{-1}\). Now the claim follows easily from \(\hbox {Re }\zeta \geqslant 1 + K^{-1/3 + \tau }/2\) for all \(\zeta \in \Upsilon \), by choosing \(\alpha = 1/\sqrt{3}\). \(\square \)
Lemma 5.5
Each outlier \(\mu _i\), \(i \in A\), lies in \(\theta (\Upsilon )\), and all other eigenvalues of \(Q\) lie in the complement of \(\overline{\theta (\Upsilon )}\).
Proof
It suffices to prove that (a) for each \(i \in A\) we have \(\mu _{i} \in \theta (B_{\rho _i}(d_i))\) and (b) all the other eigenvalues \(\mu _j\) satisfy \(\mu _j \notin \theta (B_{\rho _i}(d_i))\) for all \(i \in A\).
In order to prove (a), we note that
for \(i \in A\), as follows from (5.3) and (5.1). Using
it is then not hard to get (a) from (5.10) and (5.7).
In order to prove (b), we consider the two cases (i) \(1 + K^{-1/3} \leqslant d_j \leqslant \omega ^{-1}\) with \(j \notin A\), and (ii) \(j \geqslant s_+ + 1\). In the case (i), the claim (b) follows using (5.7), (5.11), and (5.3). In the case (ii), the claim (b) follows from (5.8) and the estimate
This concludes the proof. \(\square \)
Using the spectral decomposition of \(\widetilde{G}(z)\), Lemma 5.5, and the residue theorem, we may write the projection \(P_A\) as
Hence we get from (3.37) that
This is the desired integral representation of \(P_A\).
We first use (5.13) to compute \(\langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle \) in the case \(i,j \in {\mathcal {R}}\), where \(\mathbf{{v}}_i\) and \(\mathbf{{v}}_j\) lie in the range of \(V\). In that case we get from (5.13) that
We now perform a resolvent expansion on the denominator
Thus we get
where we defined
We begin by computing
where we used Cauchy’s theorem, (4.2), and the fact that \(d_i\) lies in \(\Upsilon \) if and only if \(i \in A\).
Next, we estimate
using the fact that \(f_{ij}\) is holomorphic inside \(\Gamma \) and satisfies the bounds
The first bound of (5.21) follows from (5.5), (5.11), and (5.12). The second bound of (5.21) follows by plugging the first one into
where the contour \({\mathcal {C}}\) is the circle of radius \(|\zeta - 1 |/2\) centred at \(\zeta \). (By the assumptions on \(\varepsilon \) and \(\omega \), the function \(f_{ij}\) is holomorphic in a neighbourhood of the closed interior of \({\mathcal {C}}\).)
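The formula just invoked is presumably Cauchy's integral representation of the derivative,
$$\begin{aligned} f_{ij}'(\zeta ) = \frac{1}{2 \pi \mathrm {i}} \oint _{{\mathcal {C}}} \frac{f_{ij}(w)}{(w - \zeta )^2} \, \mathrm {d}w , \end{aligned}$$
which with the radius \(|\zeta - 1 |/2\) yields \(|f_{ij}'(\zeta ) | \leqslant 2 |\zeta - 1 |^{-1} \sup _{w \in {\mathcal {C}}} |f_{ij}(w) |\); since \(|w - 1 | \asymp |\zeta - 1 |\) on \({\mathcal {C}}\), the first bound of (5.21) applies to the supremum.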
In order to estimate (5.20), we consider the three cases (i) \(i,j \in A\), (ii) \(i \in A\), \(j \notin A\), (iii) \(i \notin A\), \(j \in A\). Note that (5.20) vanishes if \(i,j \notin A\). We start with the case (i). Suppose first that \(i \ne j\) and \(d_i \ne d_j\). Then we find
A simple limiting argument shows that this bound remains valid when \(d_i = d_j\) or \(i = j\). Next, in the case (ii) we get from (5.21)
A similar estimate holds for the case (iii). Putting all three cases together, we find
What remains is the estimate of \(S_{ij}^{(2)}\). Here residue calculations are unavailable, and the precise choice of the contour \(\Gamma \) is crucial. We use the following basic estimate to control the integral.
Lemma 5.6
For \(k \in A,\) \(l \in {\mathcal {R}},\) and \(\zeta \in \Gamma _k\) we have
Proof
The upper bound \(|\zeta - d_l | \leqslant \rho _k + |d_k - d_l |\) is trivial, so that we only focus on the lower bound. Suppose first that \(l \notin A\). Then we get \(|\zeta - d_l | \geqslant |d_k - d_l | - \rho _k\), from which the claim follows since \(|d_k - d_l | \geqslant 2 \rho _k\) by (5.9).
For the remainder of the proof we may therefore suppose that \(l \in A\). Define \(\delta \mathrel {\mathop :}=|d_k - d_l | - \rho _k - \rho _l\), the distance between the discs \(B_{\rho _k}(d_k)\) and \(B_{\rho _l}(d_l)\) (see Fig. 3). We consider the two cases \(4 \delta \leqslant |d_k - d_l |\) and \(4 \delta > |d_k - d_l |\) separately.
Suppose first that \(4 \delta \leqslant |d_k - d_l |\). Then by definition of \(\delta \) we have \(|d_k - d_l | \leqslant \frac{4}{3} (\rho _k + \rho _l)\). Now a simple estimate using the definition of \(\rho _i\) yields \(\rho _k / 5 \leqslant \rho _l \leqslant 5 \rho _k\), from which we conclude \(|d_k - d_l | \leqslant 8 \rho _k\). The claim now follows from the bound \(|\zeta - d_l | \geqslant \rho _l\).
Suppose now that \(4 \delta > |d_k - d_l |\). Hence \(\rho _k + \rho _l \leqslant \frac{3}{4} |d_k - d_l |\), so that in particular \(\rho _k \leqslant |d_k - d_l |\). Thus we get
This concludes the proof. \(\square \)
From (5.18), (5.5), (5.11), and (5.12) we get
where we also used the estimate \(|\theta (\zeta ) | \asymp \phi ^{-1/2} (1 + \phi )\) for \(\zeta \in \Gamma \).
In order to estimate the matrix norm, we observe that for \(\zeta \in \Gamma _k\) we have on the one hand
from (5.5) and on the other hand
for any \(l \in {\mathcal {R}}\), where in the last step we used (5.10). Since \(\varepsilon < \delta \), these estimates combined with a resolvent expansion give the bound
for \(\zeta \in \Gamma _k\). Decomposing the integration contour in (5.23) as \(\Gamma = \bigcup _{k \in A} \Gamma _k\), and recalling that \(\Gamma _k\) has length bounded by \(2 \pi \rho _k\), we get from Lemma 5.6
We estimate the right-hand side using Cauchy–Schwarz. For \(i \notin A\) we find, using (5.9),
For \(i \in A\) we use (5.9) and the estimate \(\rho _k + |d_i - d_k | \geqslant \rho _i\) for all \(k \in A\) to get
From (5.24), (5.25), and (5.26), we get
Recall that \(M \asymp (1 + \phi ) K\). Hence, plugging (5.19), (5.22), and (5.27) into (5.15), we find
We have proved (5.28) under the assumption that \(i, j \in {\mathcal {R}}\). The general case is an easy corollary. For general \(i,j \in [\![{1,M}]\!]\), we define \(\widehat{{\mathcal {R}}} \mathrel {\mathop :}={\mathcal {R}} \cup \{i,j\}\) and consider
where \(\widehat{d}_k \mathrel {\mathop :}=d_k\) for \(k \in {\mathcal {R}}\) and \(\widehat{d}_k \in (0,1/2)\) for \(k \in \widehat{{\mathcal {R}}} \!\setminus \! {\mathcal {R}}\). Since \(|\widehat{{\mathcal {R}}} | \leqslant r + 2\) and \(\widehat{D}\) is invertible, we may apply the result (5.28) to this modified model. Taking the limit \(\widehat{d}_k \rightarrow 0\) for \(k \in \widehat{{\mathcal {R}}} \!\setminus \! {\mathcal {R}}\) in (5.28) then yields the claim in the general case. Since \(\varepsilon \) may be chosen arbitrarily small, this concludes the proof of Proposition 5.2.
5.2 Removing the non-overlapping assumption
In this subsection we complete the proof of Proposition 5.1 by extending Proposition 5.2 to the case where (5.3) does not hold.
Proof of Proposition 5.1
Let \(\delta < \tau /4\). We say that \(i,j \in {\mathcal {O}}_{\tau /2}^+\) overlap if \(|d_i - d_j | \leqslant (d_i - 1)^{-1/2} K^{-1/2 + \delta }\) or \(|d_i - d_j | \leqslant (d_j - 1)^{-1/2} K^{-1/2 + \delta }\). For \(A \subset {\mathcal {O}}_{\tau }^+\) we introduce sets \(S(A), L(A) \subset {\mathcal {O}}_{\tau /2}^+\) satisfying \(S(A) \subset A \subset L(A)\). Informally, \(S(A) \subset A\) is the largest subset of indices of \(A\) that does not overlap with its complement. It is constructed by successively choosing \(k \in A\) that overlaps with an index of \(A^c\) and removing \(k\) from \(A\); this process is repeated until no such \(k\) exists. One can check that the result is independent of the choice of \(k\) at each step. Note that \(S(A)\) may be empty.
Informally, \(L(A) \supset A\) is the smallest subset of indices in \({\mathcal {O}}_{\tau /2}^+\) that contains \(A\) and does not overlap with its complement. It is constructed by successively choosing \(k \in {\mathcal {O}}_{\tau /2}^+{\setminus } A\) that overlaps with an index of \(A\) and adding \(k\) to \(A\); this process is repeated until no such \(k\) exists. Again, the result is independent of the choice of \(k\) at each step. See Fig. 4 for an illustration of \(S(A)\) and \(L(A)\), and the schematic sketch below for an algorithmic summary. Throughout the following we shall repeatedly make use of the fact that, for any \(A \subset {\mathcal {O}}_{\tau }^+\), Proposition 5.2 is applicable with \((\tau , A)\) replaced by \((\tau /2, S(A))\) or \((\tau /2, L(A))\).
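The iterative constructions of \(S(A)\) and \(L(A)\) may be summarized by the following schematic sketch (an illustration only, not part of the proof; the function names and data layout are ours). Here `d` maps each index of \({\mathcal {O}}_{\tau /2}^+\) to the corresponding \(d_i\), and `O` is the index set \({\mathcal {O}}_{\tau /2}^+\).

```python
def overlap(d, i, j, K, delta):
    # i and j overlap if |d_i - d_j| is below either of the two scales
    # (d_i - 1)^{-1/2} K^{-1/2 + delta} and (d_j - 1)^{-1/2} K^{-1/2 + delta}.
    gap = abs(d[i] - d[j])
    return (gap <= (d[i] - 1) ** -0.5 * K ** (-0.5 + delta)
            or gap <= (d[j] - 1) ** -0.5 * K ** (-0.5 + delta))

def S(A, O, d, K, delta):
    # Largest subset of A not overlapping with its complement in O:
    # successively remove an index of A that overlaps with O \ A.
    A = set(A)
    while True:
        bad = [k for k in A if any(overlap(d, k, l, K, delta) for l in O - A)]
        if not bad:
            return A
        A.remove(bad[0])   # the final result is independent of this choice

def L(A, O, d, K, delta):
    # Smallest superset of A inside O not overlapping with its complement:
    # successively add an outside index that overlaps with A.
    A = set(A)
    while True:
        new = [k for k in O - A if any(overlap(d, k, l, K, delta) for l in A)]
        if not new:
            return A
        A.add(new[0])      # again independent of the choice
```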
After these preparations, we move on to the proof of (5.2). We divide the argument into four steps.
(a) \(i = j \notin A\). We consider two cases, \(i \notin L(A)\) and \(i \in L(A)\). Suppose first that \(i \notin L(A)\). Using that \(|{\mathcal {R}} |\) is bounded, it is not hard to see that \(\nu _i(A) \asymp \nu _i(L(A))\). We now invoke Proposition 5.2 and get
In the complementary case, \(i \in L(A)\), a simple argument yields
as well as \(\sigma _i \asymp 1 + \phi ^{1/2}\). From Proposition 5.2 we therefore get
where we used that \(M \asymp (1 + \phi ) K\). Recalling (5.29), we conclude
(b) \(i = j \in A\). We consider the two cases \(i \in S(A)\) and \(i \notin S(A)\). Suppose first that \(i \in S(A)\). We write
We compute the first term of (5.32) using Proposition 5.2 and the observation that \(\nu _i(A) \asymp \nu _i(S(A))\):
In order to estimate the second term of (5.32), we note that \(\nu _i(A) \asymp \nu _i(A {\setminus } S(A))\). We therefore apply (5.31) with \(A\) replaced by \(A \setminus S(A)\) to get
Going back to (5.32), we have therefore proved that
for \(i \in S(A)\).
Next, we consider the case \(i \notin S(A)\). Now we have (5.30), so that Proposition 5.2 yields
By (5.30) and \(M \asymp (1 + \phi ) K\), we have
from which we deduce (5.33) also in the case \(i \notin S(A)\).
(c) \(i \ne j\) and \(i \notin A\) or \(j \notin A\). From cases (a) and (b) [i.e. (5.31) and (5.33)], combined with the estimate
we find, assuming \(i \notin A\) or \(j \notin A\), that (5.2) holds with an additional factor \(K^{2 \delta }\) multiplying the right-hand side.
(d) \(i \ne j\) and \(i,j \in A\). We now deal with the last remaining case by using the splitting
The goal is to show that
Note that here \(\sigma _i \asymp \sigma _j \asymp 1 + \phi ^{1/2}\). We consider the four cases (i) \(i,j \in S(A)\), (ii) \(i \in S(A)\) and \(j \notin S(A)\), (iii) \(i \notin S(A)\) and \(j \in S(A)\), and (iv) \(i,j \notin S(A)\).
Consider first the case (i). The first term of (5.34) is bounded using Proposition 5.2 combined with \(\nu _i(A) \asymp \nu _i(S(A))\) and \(\nu _j(A) \asymp \nu _j(S(A))\). The second term of (5.34) is bounded using (5.2) from case (c) combined with \(\nu _i(A) \leqslant C \nu _i(A {\setminus } S(A))\) and \(\nu _j(A) \leqslant C \nu _j(A {\setminus } S(A))\). This yields (5.35) for \(\langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle \) in the case (i).
Next, consider the case (ii). For the first term of (5.34) we use the estimates
Thus we get from (5.4)
In order to estimate the last term, we first assume that \(d_j \leqslant d_i\) and \(d_i - 1 \leqslant 2 |d_i - d_j |\). Then we find
Conversely, if \(d_i \leqslant d_j\) or \(d_i - 1 \geqslant 2 |d_i - d_j |\), we have \(d_i - 1 \leqslant 2 (d_j - 1)\). Therefore, using (5.36) and the estimate \(M \asymp (1 + \phi ) K\), we get
Putting (5.37), (5.38), and (5.39) together, we may estimate the first term of (5.34) in the case (ii) as
For the second term of (5.34) in the case (ii) we use the estimates
Thus we get from case (c) that
where the last term is bounded by \(K^\delta \frac{1 + \phi ^{1/2}}{M \nu _i(A) \nu _j(A)}\). Recalling (5.40), we find (5.35) in the case (ii). The case (iii) is dealt with in the same way.
What remains therefore is case (iv). For the first term of (5.34) we use the estimates
Thus we get from (5.4) that
For the second term of (5.34) we use the estimates
Therefore we get from case (c) that
and a similar estimate holds for \(\langle {\mathbf{{v}}_j} , {P_{A \setminus S(A)} \mathbf{{v}}_j}\rangle \). Thus we conclude that
which is (5.35). This concludes the analysis of case (iv), and hence of case (d).
Conclusion of the proof Putting the cases (a)–(d) together, we have proved that (5.2) holds for arbitrary \(i,j\) with an additional factor \(K^{2 \delta }\) multiplying the error term on the right-hand side. Since \(\delta > 0\) can be chosen arbitrarily small, (5.2) follows. This concludes the proof of Proposition 5.1. \(\square \)
Fig. 4 The construction of the sets \(S(A)\) and \(L(A)\). The black and white dots are the outlier indices \(\{{d_i \mathrel {\mathop :}i \in {\mathcal {O}}_{\tau /2}^+}\}\), contained in the interval \([1, \infty )\). Around each outlier index \(d_i\) we draw a grey circle of radius \((d_i - 1)^{-1/2} K^{-1/2 + \delta }\). By definition, two dots overlap if one is contained in the grey circle of the other. The three pictures depict (from top to bottom) the sets \(A\), \(S(A)\), and \(L(A)\), respectively. In each case, the given set is drawn using black dots and its complement using white dots
6 Non-outlier eigenvectors
In this section we focus on the non-outlier eigenvectors \(\varvec{\xi }_a\), \(a \notin {\mathcal {O}}\), as well as outlier eigenvectors close to the bulk spectrum. We derive isotropic delocalization bounds for \(\varvec{\xi }_a\) and establish the asymptotic law of the generalized components of \(\varvec{\xi }_a\). We also use the former result to complete the proof of Theorem 2.11 on the outlier eigenvectors.
In Sect. 6.1 we derive isotropic delocalization bounds on \(\varvec{\xi }_a\) for \({{\mathrm{dist}}}(d_a, [-1,1]) \leqslant K^{-1/3 + \tau }\). In Sect. 6.2 we use these bounds to prove Theorem 2.17 and to complete the proof of Theorem 2.11 started in Sect. 5. Next, in Sect. 6.3 we derive the law of the generalized components of \(\varvec{\xi }_a\) for \(a \notin {\mathcal {O}}\). This argument requires two tools as input: level repulsion (Proposition 6.3) and quantum unique ergodicity (see Sect. 1.1) of the eigenvectors \(\varvec{\zeta }_b\) of \(H\) (Proposition 6.6). Both are explained in detail and proved below.
6.1 Bound on the spectral projections in the neighbourhood of the bulk spectrum
We first consider eigenvectors near the right edge of the bulk spectrum. Recall the typical distance from \(\mu _a\) to the spectral edges, denoted by \(\kappa _a\) and defined in (2.18).
Proposition 6.1
(Eigenvectors near the right edge) Fix \(\tau \in (0,1/3)\). For \(a \in [\![{s_+ + 1, (1 - \tau )K}]\!]\) we have
Moreover\(,\) if \(a \in [\![{1, s_+}]\!]\) satisfies \(d_a \leqslant 1 + K^{-1/3 + \tau }\) then
Proposition 6.1 has a close analogue for the left edge of the bulk spectrum, which holds under the additional condition \(|\phi - 1 | \geqslant \tau \); we omit its detailed statement.
Proof of Proposition 6.1
Suppose first that \(i \in {\mathcal {R}}\). Let \(\varepsilon > 0\) and set \(\omega \mathrel {\mathop :}=\varepsilon / 2\). Using (3.25), Remark 3.3, Theorem 2.3, and (3.15), we choose a high-probability event \(\Xi \) satisfying (4.7), (4.8), and
For the following we fix a realization \(H \in \Xi \). We choose the spectral parameter \(z = \mu _a + \mathrm {i} \eta \), where \(\eta > 0\) is the smallest (in fact unique) solution of the equation \(\hbox {Im }w_\phi (\mu _a + \mathrm {i}\eta ) = K^{-1 + 6 \varepsilon } \eta ^{-1}\). Hence (4.8) reads
Abbreviating \(\kappa \equiv \kappa (\mu _a)\), we find from (3.20) that
and
Armed with these definitions, we may begin the estimate of \(\langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2\). The starting point is the bound
which follows easily by spectral decomposition. Since \(i \in {\mathcal {R}}\), we get from (3.37), omitting the arguments \(z\) for brevity,
where the last step follows from a resolvent expansion as in (5.14). We estimate the error terms using
where we used the definition of \(\eta \) and (6.4). Hence a resolvent expansion yields
We therefore get from (6.8) that
Next, we claim that for any fixed \(\delta \in [0, 1/3-\varepsilon )\) we have the lower bound
whenever \(\mu _a \in [\theta (0), \theta (1 + K^{-1/3 + \delta + \varepsilon })]\). To prove (6.10), suppose first that \(|d - 1 | \geqslant 1/2\). By (3.21), there exists a constant \(c_0 > 0\) such that for \(\kappa \leqslant c_0\) we have \(|\hbox {Re }w_\phi + 1 | \leqslant 1/4\). Thus we get, for \(\kappa \leqslant c_0\),
where we used that \(\hbox {Im }w_\phi \leqslant C\) by (3.19). Moreover, if \(\kappa \geqslant c_0\) we find from (3.20) that \(\hbox {Im }w_\phi \geqslant c\), from which we get
where in the second step we used \(|\hbox {Re }w_\phi | \leqslant C\) as follows from (3.19). This concludes the proof of (6.10) for the case \(|d - 1 | \geqslant 1/2\).
Suppose now that \(|d - 1 | \leqslant 1/2\). Then we get
We shall estimate this using the elementary bound
For \(\mu _a \in [\theta (0), \theta (1)]\) we get from (6.11) with \(M = C\), recalling (3.20) and (3.21), that \(|1 + d w_\phi | \geqslant c(|d - 1 | + \hbox {Im }w_\phi )\). By a similar argument, for \(\mu _a \in [\theta (1), \theta (1 + K^{-1/3 + \delta + \varepsilon })]\) we set \(M = K^{2 \delta }\) and get (6.10) using (6.5) and (6.6). This concludes the proof of (6.10).
Going back to (6.7), we find using (6.9)
Using \(|z | \asymp \mu _a \asymp \phi ^{-1/2} + \phi ^{1/2}\) and (6.10), we estimate the first term on the right-hand side of (6.12) as
where in the last step we used that \(\eta \leqslant K^{-2/3 + 4 \varepsilon + \delta }\), as follows from (6.5) and (6.6).
Next, we estimate the second term of (6.12) as
We estimate the last term of (6.12) as
Putting all three estimates together, we conclude that
In order to estimate the denominator of (6.13) from below using (6.10), we need a suitable lower bound on \(\hbox {Im }w_\phi (\mu _a + \mathrm {i}\eta )\). First, if \(a \geqslant s_+ + 1\) then we get from (4.7), Corollary 4.2, (6.5), and (3.20) that
in which case we get by choosing \(\delta = 0\) in (6.10) that
Next, if \(a \leqslant s_+\) satisfies \(d_a \leqslant 1 + K^{-1/3 + \tau }\) we get from (6.5), (6.6), and (3.20) that
In this case we have \(\mu _a \leqslant \theta (1 + K^{-1/3 + \tau + \varepsilon })\) by (6.3), so that setting \(\delta = \tau \) in (6.10) yields
Since \(\varepsilon > 0\) was arbitrary, (6.1) and (6.2) follow from (6.14) and (6.15) respectively. This concludes the proof of Proposition 6.1 in the case \(i \in {\mathcal {R}}\).
Finally, the case \(i \notin {\mathcal {R}}\) is handled by replacing \({\mathcal {R}}\) with \(\widehat{{\mathcal {R}}} \mathrel {\mathop :}={\mathcal {R}} \cup \{i\}\) and using a limiting argument, exactly as after (5.28). \(\square \)
6.2 Proof of Theorems 2.11 and 2.17
We now have all the ingredients needed to prove Theorems 2.11 and 2.17.
Proof of Theorem 2.17
The estimate (2.19) is an immediate corollary of (6.1) from Proposition 6.1. The estimate (2.20) is proved similarly (see also the remark following Proposition 6.1). \(\square \)
Proof of Theorem 2.11
We prove Theorem 2.11 using Propositions 5.1, 5.2, and 6.1. First we remark that it suffices to prove that (5.2) holds for \(A \subset {\mathcal {O}}\) satisfying \(1 + K^{-1/3} \leqslant d_k \leqslant \tau ^{-1}\) for all \(k \in A\). Indeed, supposing this is done, we get the estimate
from which Theorem 2.11 follows by noting that the second error term may be absorbed into the first, recalling that \(\sigma _i \asymp 1 + \phi ^{1/2}\) for \(i \in A\), that \(M \asymp (1 + \phi ) K\), and that \(d_i - 1 \geqslant K^{-1/3}\).
Fix \(\varepsilon > 0\). Note that there exists some \(s \in [1, |{\mathcal {R}} |]\) satisfying the following gap condition: for all \(k\) such that \(d_k > 1 + s K^{-1/3 + \varepsilon }\) we have \(d_k \geqslant 1 + (s+1) K^{-1/3 + \varepsilon }\). The idea of the proof is to split \(A = A_0 \sqcup A_1\), such that \(d_k \leqslant 1 + s K^{-1/3 + \varepsilon }\) for \(k \in A_0\) and \(d_k \geqslant 1 + (s + 1) K^{-1/3 + \varepsilon }\) for \(k \in A_1\). Note that such a splitting exists by the above gap property. Without loss of generality, we assume that \(A_0 \ne \emptyset \) (for otherwise the claim follows from Proposition 5.1).
It suffices to consider the six cases (a) \(i,j \in A_0\), (b) \(i \in A_0\) and \(j \in A_1\), (c) \(i \in A_0\) and \(j \notin A\), (d) \(i,j \in A_1\), (e) \(i \in A_1\) and \(j \notin A\), (f) \(i,j \notin A\).
(a) \(i,j \in A_0\)
We split
We apply Cauchy–Schwarz and Proposition 6.1 to the first term, and Proposition 5.1 to the second term. Using the above gap condition, we find
where the last step follows from \(d_i - 1 \leqslant C K^{-1/3 + \varepsilon }\).
(b) \(i \in A_0\) and \(j \in A_1\)
For this case it is crucial to use the stronger bound (5.4) and not (5.2). Hence, we need the non-overlapping condition (5.3). To that end, we assume first that (5.3) holds with \(\delta \mathrel {\mathop :}=\varepsilon \). Thus, by the above gap assumption (5.3) also holds for \(A_1\). In this case we get from (6.16) and Propositions 5.2 and 6.1 that
Clearly, the first two terms are bounded by the right-hand side of (5.2) times \(K^{3 \varepsilon }\). The last term is estimated as
where we used that \(d_i - 1 \leqslant d_j - 1 \leqslant C |d_i - d_j |\) by the above gap condition. This concludes the proof in the case where the non-overlapping condition (5.3) holds.
If (5.3) does not hold, we replace \(A_1\) with the smaller set \(S(A_1)\) defined in Sect. 5.2. Then we proceed as above, except that we have to deal in addition with the term \(\langle {\mathbf{{v}}_i} , {P_{A_1 \setminus S(A_1)} \mathbf{{v}}_j}\rangle \). The details are analogous to those of Sect. 5.2, and we omit them here.
(c), (e), (f) \(j \notin A\)
We use the splitting (6.16) and apply Cauchy–Schwarz and Proposition 6.1 to the first term, and Proposition 5.1 to the second term. Since \(\nu _j(A_1) \leqslant |d_j - 1 |\) in all cases, it is easy to prove that (6.16) is bounded by \(K^{3 \varepsilon }\) times the right-hand side of (5.2).
(d) \(i,j \in A_1\)
From (6.16) and Propositions 6.1 and 5.1 we get
from which we get (5.2) with the error term multiplied by \(K^{3 \varepsilon }\).
Conclusion of the proof We have proved that, for all \(i,j \in [\![{1,M}]\!]\) and \(A\) satisfying the assumptions of Theorem 2.11, the estimate (5.2) holds with an additional factor \(K^{3 \varepsilon }\) multiplying the error term. Since \(\varepsilon \) was arbitrary, we get (5.2). This concludes the proof. \(\square \)
6.3 The law of the non-outlier eigenvectors
For \(a \leqslant K/2\) define
the typical distance between \(\lambda _{a+1}\) and \(\lambda _a\). More precisely, the classical locations \(\gamma _a\) defined in (3.14) satisfy \(\gamma _a - \gamma _{a+1} \asymp \Delta _a\) for \(a \leqslant K/2\).
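The scaling of \(\Delta _a\) can be read off from the square-root decay of \(\varrho _\phi \) at the edge: assuming the standard behaviour \(\gamma _+ - \gamma _a \asymp (a / K)^{2/3}\) for \(a \leqslant K/2\), differencing in \(a\) gives
$$\begin{aligned} \Delta _a \asymp \gamma _a - \gamma _{a+1} \asymp a^{-1/3} K^{-2/3} . \end{aligned}$$
In particular \(\Delta _1 \asymp K^{-2/3}\), the scale of the eigenvalue fluctuations at the edge.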
We may now state the main result behind the proof of Theorem 2.20. Recall the definitions (2.8) of \(\alpha _+\) and (2.3) of \(s_+\), the number of outliers to the right of the bulk spectrum. Recall also from (3.11) and (3.12) that \(\{\lambda _a\}\) and \(\{\varvec{\zeta }_a\}\) denote the eigenvalues and eigenvectors of \(H = X X^*\).
Proposition 6.2
Let \(s_+ + 1 \leqslant a \leqslant K^{1 - \tau } \alpha _+^3\) and define \(b \mathrel {\mathop :}=a - s_+\). Define the event
Then
Informally, Proposition 6.2 expresses generalized components of the eigenvectors of \(Q\) in terms of generalized components of eigenvectors of \(H\), under the assumption that \(\Omega \) has high probability. We first show how Proposition 6.2 implies Theorem 2.20. This argument requires two key tools. The first one is level repulsion, which, together with the eigenvalue sticking from Theorem 2.7, will imply that \(\Omega \) indeed has high probability. The second tool is quantum unique ergodicity (see Sect. 1.1) of the eigenvectors of \(H\), which establishes the law of their generalized components.
The precise statement of level repulsion sufficient for our needs is as follows.
Proposition 6.3
(Level repulsion) Fix \(\tau \in (0, 1)\). For any \(\varepsilon > 0\) there exists a \(\delta > 0\) such that for all \(a \leqslant K^{1 - \tau }\) we have
The proof of Proposition 6.3 consists of two steps: (i) establishing (6.19) for the case of Gaussian \(X\) and (ii) a comparison argument showing that if \(X^{(1)}\) and \(X^{(2)}\) are two matrix ensembles satisfying (1.15) and (1.16), and if (6.19) holds for \(X^{(1)}\), then (6.19) also holds for \(X^{(2)}\). Both steps have already appeared, in a somewhat different form, in the literature. Step (i) is performed in Lemma 6.4 below, and step (ii) in Lemma 6.5 below. Together, Lemmas 6.4 and 6.5 immediately yield Proposition 6.3.
Lemma 6.4
(Level repulsion for the Gaussian case) Proposition 6.3 holds if \(X\) is Gaussian.
Proof
We mimic the proof of Theorem 3.2 in [14]. Indeed, the proof from [14, Appendix D] carries over almost verbatim. The key input is the eigenvalue rigidity from Theorem 3.5, which for the model of [14] was established by a different method. As in [14], we condition on the eigenvalues \(\{\lambda _i \mathrel {\mathop :}i > K^{1 - \tau }\}\). On the conditioned measure, level repulsion follows as in [14]. Finally, thanks to Theorem 3.5 we know that the frozen eigenvalues \(\{\lambda _i \mathrel {\mathop :}i > K^{1 - \tau }\}\) are with high probability near their classical locations. Note that for \(\phi \approx 1\), the rigidity estimate (3.15) only holds for indices \(i \leqslant (1 - \tau ) K\); however, this is enough for the argument of [14, Appendix D], which is insensitive to the locations of eigenvalues at a distance of order one from the right edge \(\gamma _+\). We omit the full details. \(\square \)
Lemma 6.5
(Stability of level repulsion) Let \(X^{(1)}\) and \(X^{(2)}\) be two matrix ensembles satisfying (1.15) and (1.16). Suppose that Proposition 6.3 holds for \(X^{(1)}\). Then Proposition 6.3 also holds for \(X^{(2)}\).
The proof of Lemma 6.5 relies on Green function comparison, and is given in Sect. 7.4.
The second tool behind the proof of Theorem 2.20 is the quantum unique ergodicity of the eigenvectors \(\varvec{\zeta }_a\) of the matrix \(H = X X^*\), stated in Proposition 6.6 below. As noted in Sect. 1.1, quantum unique ergodicity is a term borrowed from quantum chaos that describes the complete “flatness” of the eigenvectors of \(H\). Here “flatness” means that the eigenvectors are asymptotically uniformly distributed on the unit sphere of \(\mathbb {R}^M\). The first result on quantum unique ergodicity of Wigner matrices is [29], where it was established for eigenvectors near the spectral edge; under an additional four-moment matching condition, the result was extended to the bulk. Subsequently, this second result was derived using a different method in [41]. Recently, a new approach to quantum unique ergodicity was developed in [15], where it is established for all eigenvectors of generalized Wigner matrices. In this paper, we adopt the approach of [29], based on Green function comparison. As compared to the method of [15], its first advantage is that it is completely local in the spectrum; in particular, when applied near the right edge of the spectrum it is insensitive to the presence of a hard edge at the origin. The second advantage is that it is very robust and may be used to establish the asymptotic joint distribution of an arbitrary family of generalized components of eigenvectors, as in Remark 6.7 below; we remark that such joint laws cannot currently be analysed using the method of [15]. On the other hand, our results only hold for eigenvector indices \(a\) satisfying \(a \leqslant K^{1 - \tau }\) for some \(\tau > 0\), while those of [15] admit \(\tau = 0\).
Our proof of quantum unique ergodicity generalizes that of [29] in three directions. First, we extend the method of [29] to sample covariance matrices (in fact to general sample covariance matrices of the form (1.10) with \(\Sigma = T T^* = I_M\); see Sect. 8). Second, we consider generalized components \(\langle {\mathbf{{w}}} , {\varvec{\zeta }_a}\rangle \) of the eigenvectors instead of the Cartesian components \(\varvec{\zeta }_a(i)\). The third and deepest generalization is that we establish quantum unique ergodicity much further into the bulk, requiring only that \(a \leqslant K^{1 - \tau }\) instead of the assumption \(a \leqslant (\log K)^{C \log \log K}\) from [29].
Proposition 6.6
(Quantum unique ergodicity) Fix \(\tau \in (0,1)\). Then for any \(a \leqslant K^{1 - \tau }\) and deterministic unit vector \(\mathbf{{w}} \in \mathbb {R}^M\) we have
in the sense of moments\(,\) uniformly in \(a\) and \(\mathbf{{w}}\).
Remark 6.7
For simplicity, and bearing the application to Theorem 2.20 in mind, in Proposition 6.6 we establish the convergence of a single generalized component of a single eigenvector. However, our method may be easily extended to yield
for any deterministic unit vectors \(\mathbf{{v}}_1,\ldots , \mathbf{{v}}_l, \mathbf{{w}}_1,\ldots , \mathbf{{w}}_k \in \mathbb {R}^M\) and \(a_1 < \cdots < a_k \leqslant K^{1 - \tau }\), where we use the notation \(A_N \overset{d}{\sim }B_N\) to mean that \(A_N\) and \(B_N\) are tight, and \(\lim _{N \rightarrow \infty }\mathbb {E}(f(A_N) - f(B_N)) = 0\) for all polynomially bounded and continuous \(f\). Here \((Z_1,\ldots , Z_k)\) is a family of independent random variables defined by \(Z_i = A_i B_i\), where \(A_i\) and \(B_i\) are jointly Gaussian with covariance matrix
The proof of this generalization of Proposition 6.6 follows that of Proposition 6.6 presented in Sect. 7, requiring only heavier notation. In fact, our method may also be used to prove the universality of the joint eigenvalue–eigenvector distribution for any matrix \(Q\) of the form (1.10) with \(\Sigma = T T^* = I_M\); see Theorem 8.3 below for a precise statement.
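As a quick numerical sanity check of the flatness described above (again an illustration with our own toy normalization of \(H\); for the eigenvectors the overall scaling of \(H\) is irrelevant), one may compare the first two moments of \(M \langle {\mathbf{{w}}} , {\varvec{\zeta }_a}\rangle ^2\) with those of \(\chi _1^2\), namely mean \(1\) and variance \(2\).

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, trials, a = 200, 400, 2000, 1   # a = 1: eigenvector of the largest eigenvalue
w = np.zeros(M); w[0] = 1.0           # deterministic unit vector

vals = np.empty(trials)
for t in range(trials):
    X = rng.standard_normal((M, N))
    H = X @ X.T / np.sqrt(M * N)      # the scaling does not affect the eigenvectors
    zeta = np.linalg.eigh(H)[1][:, -a]
    vals[t] = M * (w @ zeta) ** 2

print(vals.mean(), vals.var())        # both approach the chi^2_1 values 1 and 2
```

For Gaussian \(X\) this check is essentially exact by orthogonal invariance; the content of Proposition 6.6 is precisely that the same limit persists for general entry distributions satisfying (1.15) and (1.16).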
The proof of Proposition 6.6 is postponed to Sect. 7.
Granted Proposition 6.2, together with Propositions 6.3 and 6.6, we may now complete the proof of Theorem 2.20.
Proof of Theorem 2.20
Abbreviating \(b \mathrel {\mathop :}=a - s_+\) and
we define
Then, by assumption on \(a\), we may rewrite (6.18) as
Moreover, by Theorem 2.7 and Proposition 6.3, we have \(\mathbb {P}(\Omega ) \geqslant 1 - K^{-c}\) for some constant \(c > 0\). Finally, by Proposition 6.6 we have \(\widehat{\Theta }(a, \mathbf{{w}}) \rightarrow \chi _1^2\) in distribution (even in the sense of moments). The claim now follows easily. \(\square \)
The remainder of this section is devoted to the proof of Proposition 6.2.
Proof of Proposition 6.2
We define the contour \(\Gamma _a\) as the positively oriented circle of radius \(K^{-\tau /5} \Delta _a\) with centre \(\lambda _b\). Let \(\varepsilon > 0\) and \(\tau \mathrel {\mathop :}=1/2\), and choose a high-probability event \(\Xi \) such that (4.7), (4.8), and (4.9) hold. For the following we fix a realization \(H \in \Omega \cap \Xi \). Define
By the residue theorem and the definition of \(\Omega \), we find
To simplify notation, suppose now that \(i \in {\mathcal {R}}\) and consider \(\mathbf{{w}} = \mathbf{{v}}_i\). From (3.37) we find that
In order to compute (6.22), we need precise estimates for \(W\) on \(\Gamma _a\). Because the contour \(\Gamma _a\) crosses the branch cut of \(w_\phi \), we should not compare \(W(z)\) to \(w_\phi (z)\) for \(z \in \Gamma _a\). Instead, we compare \(W(z)\) to \(w_\phi (z_0)\), where
We claim that
for all \(z \in \Gamma _a\). To see this, we split
We estimate the first term of (6.24) by spectral decomposition, using that \({{\mathrm{dist}}}(z, \sigma (H)) \geqslant c \eta \), similarly to (4.12). The result is
where we used (4.7), (4.9), and Lemma 3.6. Moreover, we estimate the second term of (6.24) using (4.8) as
This concludes the proof of (6.23).
Next, we claim that
The proof of (6.25) is analogous to that of (6.10), using (4.7) and the assumption on \(a\); we omit the details.
Armed with (6.23) and (6.25), we may analyse (6.22). A resolvent expansion in the matrix \(w_\phi (z_0) - W(z)\) yields
We estimate the third term using the bound
To prove (6.27), we note first that by (6.25) we have
By (6.23) and assumption on \(a\), it is easy to check that
from which (6.27) follows.
We may now return to (6.26). The first term vanishes, the second is computed by spectral decomposition of \(W\), and the third is estimated using (6.27). This gives
where we also used (6.25).
Recalling (6.21) and (4.7), we therefore get
where we used \(\phi ^{1/2}\mu _a \asymp 1 + \phi \).
In order to simplify the leading term, we use
as follows from
where we used Lemma 3.6. Moreover, we use that
Using that \(\Xi \) has high probability for all \(\varepsilon > 0\) and recalling the isotropic delocalization bound (3.13), we therefore get for any random \(H\) that
We proved (6.28) under the assumption that \(i \in {\mathcal {R}}\), but a continuity argument analogous to that given after (5.28) implies that (6.28) holds for all \(i \in [\![{1,M}]\!]\). The above argument may be repeated verbatim to yield
Since we may always choose the basis \(\{\mathbf{{v}}_i\}_{i = 1}^M\) so that at most \(|{\mathcal {R}} | + 1\) components of \((w_1,\ldots , w_M)\) are nonzero, the claim now follows easily. \(\square \)
7 Quantum unique ergodicity near the soft edge of \(H\)
This section is devoted to the proof of Proposition 6.6.
Lemma 7.1
Fix \(\tau \in (0,1)\). Let \(h\) be a smooth function satisfying
for some positive constant \(C\). Let \(a \leqslant K^{1 - \tau }\) and suppose that \(\lambda _a\) satisfies (6.19) with some constants \(\varepsilon \) and \(\delta \). Then for small enough \(\delta _1 = \delta _1(\varepsilon , \delta )\) and \(\delta _2 = \delta _2(\varepsilon , \delta , \delta _1)\) the following holds. Defining
we have
where we defined \(\chi (E) \mathrel {\mathop :}=\mathbf{{1}} (\lambda _{a+1} \leqslant E^- \leqslant \lambda _a)\).
Proof
By the assumption (7.1) on \(h\), rigidity (3.15), and delocalization (3.13), we can write
provided that
where we defined \(\lambda _a^\pm \mathrel {\mathop :}=\lambda _a \pm K^{\delta _1}\eta \). For the following we choose
Now from (6.19) we get \(\mathbb {P}(\lambda _{a+1}^+\geqslant \lambda _a^-)\leqslant K^{-\delta }\) for \(\delta _1 < \varepsilon \). For \(\delta _1 < \delta \wedge \varepsilon \), we therefore get
In order to obtain (7.3), we have to rewrite the integrand on the right-hand side of (7.5) in terms of
Hence (7.5) and (7.6) combined with the mean value theorem imply that the left-hand side of (7.3) is bounded by
for any fixed \(\delta _2 \in (0,\delta _1)\). When applying the mean value theorem, we estimated the value of \(\theta '(\cdot )\) using (7.1), the fact that all terms on the right-hand side of (7.6) are nonnegative, and the estimate
The proof of (7.8) follows by using the spectral decomposition from (7.6) with the delocalization bound (3.13); for \(|b - a | \geqslant K^{\delta _2}\) we use the rigidity bound (3.15), and for \(|b - a | \leqslant K^{\delta _2}\) we estimate the integral using \(\int \frac{\eta }{e^2 + \eta ^2} \, \mathrm {d}e = \pi \). We omit the full details.
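The integral identity used in the regime \(|b - a | \leqslant K^{\delta _2}\) is elementary: since \(\frac{\mathrm {d}}{\mathrm {d}e} \arctan (e / \eta ) = \frac{\eta }{e^2 + \eta ^2}\), we have
$$\int _{\mathbb {R}} \frac{\eta }{e^2 + \eta ^2} \, \mathrm {d}e = \arctan (e/\eta ) \Big |_{e = -\infty }^{e = \infty } = \pi .$$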
Next, using the eigenvalue rigidity from (3.15), it is not hard to see that there exists a constant \(C_1\) such that the contribution of \(|b - a| \geqslant K^{C_1\delta _2}\) to (7.7) is bounded by \(K^{-\delta _2}\). In order to prove (7.3), therefore, it suffices to prove
For \(b > a\), we get using (3.13) that
which is the right-hand side of (7.9) provided \(\delta _2\) is chosen small enough. Here in the first step we replaced \(\lambda _b\) with \(\lambda _{a+1}\) using the estimates \(\lambda _b \leqslant \lambda _{a+1} \leqslant E - K^{\delta _1} \eta \) valid for \(b > a\) and \(E\) in the support of \(\chi \).
For \(b < a\), we partition \(I = I_1 \cup I_2\) with \(I_1 \cap I_2 = \emptyset \) and
As above, we find
Let us therefore consider the integral over \(I_{1}\). One readily finds, for \(\lambda _a \leqslant \lambda _{a -1} \leqslant \lambda _b\), that
Using delocalization (3.13) we therefore find that
The expectation \(\mathbb {E}\, \frac{\eta ^2}{(\lambda _{a - 1}-\lambda _a)^2+\eta ^2}\) in (7.10) is bounded by \(\mathbb {P}(|\lambda _{a - 1}-\lambda _a|\leqslant \Delta _a K^{-\varepsilon }) +O(K^{-\varepsilon })\). Using (6.19), we therefore obtain
This concludes the proof. \(\square \)
In the next step, stated in Lemma 7.2 below, we replace the sharp cutoff function \(\chi \) in (7.3) with a smooth function of \(H\). Note first that from Lemma 7.1 and the rigidity (3.15), we get
where \(\tilde{E} \mathrel {\mathop :}=\gamma _+ + 1\) and \({\mathcal {N}}( E^-,\tilde{E}) \mathrel {\mathop :}=|\{i \mathrel {\mathop :}E^- < \lambda _i < \tilde{E}\} |\) is an eigenvalue counting function.
Next, for any \(E_1, E_2 \in [\gamma _- - 1, \gamma _+ + 1]\) and \(\tilde{\eta }>0\) we define \(f(\lambda ) \equiv f_{E_1,E_2,\tilde{\eta }}(\lambda )\) to be the characteristic function of \([E_1, E_2]\) smoothed on scale \(\tilde{\eta }\): \(f = 1\) on \([E_1, E_2]\), \(f = 0\) on \(\mathbb {R}\setminus [E_1-\tilde{\eta }, E_2+\tilde{\eta }]\) and \(|f' |\leqslant C\,\tilde{\eta }^{-1}\), \(|f''|\leqslant C\,\tilde{\eta }^{-2}\). Moreover, let \(q \equiv q_a:\mathbb {R}\rightarrow \mathbb {R}_+\) be a smooth cutoff function concentrated around \(a\), satisfying
The following result is the appropriate smoothed version of (7.11). It is a simple extension of Lemma 3.2 and Equation (5.8) from [29], and its proof is omitted.
Lemma 7.2
Let \(\tilde{E} \mathrel {\mathop :}=\gamma _+ + 1\) and
and abbreviate \(q \equiv q_a\) and \(f_E \equiv f_{ E^-, \tilde{E},\tilde{\eta }}\). Then under the assumptions of Lemma 7.1 we have
We may now conclude the proof of Proposition 6.6.
Proof of Proposition 6.6
The basic strategy of the proof is to compare the distribution of \(\langle {\mathbf{{w}}} , {\varvec{\zeta }_a}\rangle ^2\) under a general \(X\) to that under a Gaussian \(X\). In the latter case, by orthogonal invariance of \(H = X X^*\), we know that \(\varvec{\zeta }_a\) is uniformly distributed on the unit sphere of \(\mathbb {R}^M\), so that \(M\langle {\mathbf{{w}}} , {\varvec{\zeta }_a}\rangle ^2 \rightarrow \chi _1^2\) in distribution.
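The Gaussian baseline is easy to probe numerically. The following minimal sketch (illustrative only: the sizes, the index \(a = 1\), and the vector \(\mathbf{{w}} = \mathbf{{e}}_1\) are our own choices) samples \(M \langle {\mathbf{{w}}} , {\varvec{\zeta }_a}\rangle ^2\) for Gaussian \(X\) and compares its first two moments with those of \(\chi _1^2\).

```python
# Minimal numerical sketch (not part of the proof): for Gaussian X, the
# orthogonal invariance of H = X X^* makes each unit eigenvector zeta_a
# uniform on the sphere, so M * <w, zeta_a>^2 is approximately chi_1^2.
import numpy as np

rng = np.random.default_rng(0)
M, N, trials = 200, 400, 500
w = np.zeros(M); w[0] = 1.0            # deterministic unit vector (our choice)

samples = []
for _ in range(trials):
    X = rng.standard_normal((M, N)) / np.sqrt(N)
    _, vecs = np.linalg.eigh(X @ X.T)  # columns are orthonormal eigenvectors
    zeta = vecs[:, -1]                 # top eigenvector, i.e. a = 1
    samples.append(M * np.dot(w, zeta) ** 2)

samples = np.array(samples)
print("sample mean:", samples.mean())  # chi_1^2 has mean 1
print("sample var :", samples.var())   # chi_1^2 has variance 2
```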
For the comparison argument, we use the Green function comparison method applied to the Helffer–Sjöstrand representation of \(f(H)\). Using Lemma 7.2 it suffices to estimate
where \(\mathbb {E}^{\mathrm{{Gauss}}}\) denotes the expectation with respect to Gaussian \(X\). Now we express \(f (H)\) in terms of Green functions using Helffer–Sjöstrand functional calculus. Recall the definition of \(\kappa _a\) from (2.18). Let \(g(y)\) be a smooth cutoff function with support in \([-\kappa _a, \kappa _a]\), with \(g(y)=1\) for \(|y| \leqslant \kappa _a/2\) and \(\Vert g^{(n)}\Vert _\infty \leqslant C\kappa _a^{-n}\), where \(g^{(n)}\) denotes the \(n\)-th derivative of \(g\). Then, similarly to (3.30), we have (see e.g. Equation (B.12) of [21])
Thus we get the functional calculus representation, with \(G(z)=(H-z)^{-1}\),
As in [29, Lemma 5.1], one can easily extend (3.9) to all \(\eta > 0\) instead of \(\eta \geqslant K^{-1 + \omega }\) as required in (3.7). Thus we have, for \(e \in [\gamma _+ - 1, \gamma _+ + 1]\) and \(\sigma \in (0, 1)\),
Therefore, by the trivial symmetry \(\sigma \mapsto -\sigma \) combined with complex conjugation, the third term on the right-hand side of (7.15) is bounded by
where we used that \(\int |f''_E(e) | \, \mathrm {d}e = O(\tilde{\eta }^{-1})\). Next, we note that (3.9) and Lemma 3.6 imply
Recalling (7.1) and using the mean value theorem, we find from (7.13), (7.17), and (7.18) that for large enough \(d\), in order to estimate (7.14), and hence prove (6.20), it suffices to prove the following lemma. Note that in it we choose \(X^{(1)}\) to be the original ensemble and \(X^{(2)}\) to be the Gaussian ensemble. \(\square \)
Lemma 7.3
Suppose that the two \(M \times N\) matrix ensembles \(X^{(1)}\) and \(X^{(2)}\) satisfy (1.15) and (1.16). Suppose that the assumptions of Lemma 7.1 hold\(,\) and recall the notations
as well as
where \(f_{E^-, \tilde{E}, \tilde{\eta }}\) was defined above (7.12). Recall \(q \equiv q_a\) from (7.12). Finally\(,\) suppose that \(\varepsilon >4\delta _1\) and \(\delta _1>4 \delta _2\).
Then for any \(d > 1\) and for small enough \(\varepsilon \equiv \varepsilon (\tau ,d) > 0\) and \(\delta _2 \equiv \delta _2(\varepsilon , \delta , \delta _1)\) we have
where we defined
and
The rest of this section is devoted to the proof of Lemma 7.3.
7.1 Proof of Lemma 7.3 I: preparations
We shall use the Green function comparison method [22, 23, 29] to prove Lemma 7.3. For definiteness, we assume throughout the remainder of Sect. 7 that \(\phi \geqslant 1\). The case \(\phi < 1\) is dealt with similarly, and we omit the details.
We first collect some basic identities and estimates that serve as a starting point for the Green function comparison argument. We work on the product probability space of the ensembles \(X^{(1)}\) and \(X^{(2)}\). We fix a bijective ordering map \(\Phi \) on the index set of the matrix entries,
and define the interpolating matrix \(X_\gamma \), \(\gamma \in [\![{0, MN}]\!]\), through
In particular, \(X_0 = X^{(1)}\) and \( X_{MN} = X^{(2)}\). Hence we have the telescopic sum
(in self-explanatory notation).
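The identity (7.24) is the classical Lindeberg-type replacement scheme: the entries are swapped one at a time along the ordering \(\Phi \), and the change of any statistic is the sum of \(MN\) single-entry increments. A minimal numerical sketch (illustrative sizes and statistic; the actual argument estimates each increment rather than computing it exactly):

```python
# Sketch of the telescopic sum (7.24): swapping the entries of X^(1) for
# those of X^(2) one at a time decomposes f(X^(2)) - f(X^(1)) exactly into
# MN single-entry increments.
import numpy as np

rng = np.random.default_rng(1)
M, N = 30, 60
X1 = rng.standard_normal((M, N)) / np.sqrt(N)   # "original" ensemble
X2 = rng.standard_normal((M, N)) / np.sqrt(N)   # "Gaussian" ensemble

def f(X):
    # any spectral statistic; here the largest eigenvalue of X X^T
    return np.linalg.eigvalsh(X @ X.T)[-1]

X = X1.copy()
total = 0.0
for b in range(M):                 # gamma runs over a fixed ordering Phi
    for beta in range(N):
        before = f(X)
        X[b, beta] = X2[b, beta]   # swap a single entry
        total += f(X) - before

assert np.isclose(total, f(X2) - f(X1))   # the sum telescopes exactly
print(total, f(X2) - f(X1))
```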
Let us now fix a \(\gamma \) and let \((b,\beta )\) be determined by \(\Phi (b, \beta ) = \gamma \). Throughout the following we consider \(b, \beta \) to be arbitrary but fixed and often omit dependence on them from the notation. Our strategy is to compare \(X_{\gamma -1}\) with \(X_\gamma \) for each \(\gamma \). In the end we shall sum up the differences in the telescopic sum (7.24).
Note that \(X_{\gamma - 1}\) and \(X_\gamma \) differ only in the matrix entry indexed by \((b,\beta )\). Thus we may write
Here \(\bar{X}\) is the matrix obtained from \(X_{\gamma }\) (or, equivalently, from \(X_{\gamma - 1}\)) by setting the entry indexed by \((b, \beta )\) to zero. Next, we define the resolvents
We shall show that the difference between the expectations \(\mathbb {E}^{X_{\gamma }}\) and \(\mathbb {E}^{\bar{X}}\) depends only on the first two moments of \(X^{(2)}_{b \beta }\), up to an error term that is negligible even after summation over \(\gamma \). Together with the same argument applied to \(\mathbb {E}^{X_{\gamma -1}}\), and the fact that the second moments of \(X^{(1)}_{b \beta }\) and \(X^{(2)}_{b \beta }\) coincide, this will prove Lemma 7.3.
We define \(x_T(E)\) and \(y_T(E)\) as in (7.22) and (7.23) with \(G\) replaced by \(T\), and similarly \(x_S(E)\) and \(y_S(E)\) with \(G\) replaced by \(S\). Throughout the following we use the notation \(\mathbf{{w}} = (w(i))_{i = 1}^M\) for the components of \(\mathbf{{w}}\). In order to prove (7.21) using (7.24), it is enough to prove that for some constant \(c > 0\) we have
where \({\mathcal {A}}\) is a polynomial of degree two in \(U_{b \beta }\) whose coefficients are \(\bar{X}\)-measurable.
The rest of this section is therefore devoted to the proof of (7.27). Recall that we assume throughout that \(\phi \geqslant 1\) for definiteness; in particular, \(K = N\).
We begin by collecting some basic identities from linear algebra. In addition to \(G(z) \mathrel {\mathop :}=(X X^* - z)^{-1}\) we introduce the auxiliary resolvent \(R(z) \mathrel {\mathop :}=(X^* X - z)^{-1}\). Moreover, for \(\mu \in [\![{1,N}]\!]\) we split
We also define the resolvent \(G^{[\mu ]} \mathrel {\mathop :}=(X^{[\mu ]} (X^{[\mu ]})^* - z)^{-1}\). A simple Neumann series yields the identity
Moreover, from [10, Equation (3.11)], we find
From (7.28) and (7.29) we easily get
Throughout the following we shall make use of the fundamental error parameter
which is analogous to the right-hand side of (3.9) and will play a similar role. We record the following estimate, which is analogous to Theorem 3.2.
Lemma 7.4
Under the assumptions of Theorem 3.2 we have\(,\) for \(z \in \mathbf{{S}},\)
and
Proof
This result is a generalization of (5.22) in [36]. The key identity is (7.30). Since \(G^{[\mu ]}\) is independent of \((X_{i \mu })_{i = 1}^M\), we may apply the large deviation estimate [10, Lemma 3.1] to \(G^{[\mu ]}X_{[\mu ]}\). Moreover, \(|R_{\mu \mu } | \prec 1\), as follows from Theorem 3.2 applied to \(X^*\), and Lemma 3.6. Thus we get
where the second step follows by spectral decomposition, the third step from Theorem 3.2 applied to \(X^{[\mu ]}\) as well as (3.22), and the last step by definition of \(\Psi \). This concludes the proof of (7.32).
Finally, (7.33) follows easily from Theorem 3.2 applied to the identity \(X^*G X = 1 + zR\). \(\square \)
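For the reader's convenience, the identity \(X^* G X = 1 + zR\) used in the last step is immediate from the intertwining relation \(X^* (X X^* - z)^{-1} = (X^* X - z)^{-1} X^*\):
$$X^* G X = (X^* X - z)^{-1} X^* X = (X^* X - z)^{-1} \bigl ( (X^* X - z) + z \bigr ) = 1 + z R(z).$$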
After these preparations, we continue the proof of (7.27). We first expand the difference between \(S\) and \(T\) in terms of \(V\) [see (7.25)]. We use the resolvent expansion: for any \(m \in \mathbb {N}\) we have
and
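For orientation, we record the generic form of such expansions (a standard identity, of which the displayed formulas are the specializations to \(S\) and \(T\)): for invertible matrices \(S\) and \(T\) of the same size, iterating \(S = T - S (S^{-1} - T^{-1}) T\) yields, for any \(m \in \mathbb {N}\),
$$S = \sum _{l = 0}^{m} T \bigl ( -(S^{-1} - T^{-1}) T \bigr )^{l} + S \bigl ( -(S^{-1} - T^{-1}) T \bigr )^{m+1}.$$
In our setting \(S^{-1} - T^{-1} = X_\gamma X_\gamma ^* - \bar{X} \bar{X}^*\) has rank at most two by (7.25), which is what makes the expansion effective.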
Note that Theorem 3.2 and Lemma 7.4 immediately yield for \(z \in \mathbf{{S}}\)
Using (7.35), we may extend these estimates to analogous ones on \(\bar{X}\) and \(T\) instead of \(X_\gamma \) and \(S\). Indeed, using the facts \(\Vert R \Vert \leqslant \eta ^{-1}\), \(\Psi \geqslant N^{-1/2}\), and \(|U_{b \beta } | \prec \phi ^{-1/4}N^{-1/2}\) (which are easily derived from the definitions of the objects on the left-hand sides) combined with (7.35), we get the following result.
Lemma 7.5
For \(A\in \{S,T\}\) and \(B\in \{X_\gamma , \bar{X}\}\) we have
The final tool that we shall need is the following lemma, which collects basic algebraic properties of stochastic domination \(\prec \). We shall use them tacitly throughout the following. Their proof is an elementary exercise using union bounds and Cauchy–Schwarz. See [10, Lemma 3.2] for a more general statement.
Lemma 7.6
(i) Suppose that \(A(v) \prec B(v)\) uniformly in \(v \in V\). If \(|V | \leqslant N^C\) for some constant \(C\) then \(\sum _{v \in V} A(v) \prec \sum _{v \in V} B(v)\).
(ii) Suppose that \(A_{1} \prec B_{1}\) and \(A_{2} \prec B_{2}\). Then \(A_{1} A_{2} \prec B_{1} B_{2}\).
(iii) Suppose that \(\Psi \geqslant N^{-C}\) is deterministic and \(A\) is a nonnegative random variable satisfying \(\mathbb {E}A^2 \leqslant N^{C}\). Then \(A \prec \Psi \) implies that \(\mathbb {E}A \prec \Psi \).
If the above random variables depend on an additional parameter \(u\) and all hypotheses are uniform in \(u\) then so are the conclusions.
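For instance, part (ii) is a one-line union bound: for nonnegative random variables, if \(\mathbb {P}(A_i > N^{\varepsilon } B_i) \leqslant N^{-D}\) for \(i = 1, 2\), then
$$\mathbb {P}\bigl ( A_1 A_2 > N^{2 \varepsilon } B_1 B_2 \bigr ) \leqslant \mathbb {P}\bigl ( A_1 > N^{\varepsilon } B_1 \bigr ) + \mathbb {P}\bigl ( A_2 > N^{\varepsilon } B_2 \bigr ) \leqslant 2 N^{-D},$$
and renaming \(2 \varepsilon \) as \(\varepsilon \) yields \(A_1 A_2 \prec B_1 B_2\).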
7.2 Proof of Lemma 7.3 II: the main expansion
Lemma 7.5 contains the a-priori estimates needed to control the resolvent expansion (7.34). The precise form that we shall need is contained in the following lemma, which is our main expansion. Define the control parameter
where we recall the notation \(\mathbf{{w}} = (w(i))_{i = 1}^M\) for the components of \(\mathbf{{w}}\).
Lemma 7.7
(Resolvent expansion of \(x(E)\) and \(y(E)\)) The following results hold for \(E \in I\). \((\)Recall the definition (7.20). For brevity\(,\) we omit \(E\) from our notation\(.)\)
(i) We have the expansion
$$\begin{aligned} x_S - x_T = \sum _{l=1}^3 x_l \, U_{b\beta }^l + O_\prec \left( {\phi ^{-1} N^{-1} (\phi ^{1/4} |w(b) | + \Psi )^2}\right) , \end{aligned}$$
(7.36)
where \(x_l\) is a polynomial\(,\) with a constant number of terms\(,\) in the variables
$$\begin{aligned} \bigl \{{T_{bb}, T_{\mathbf{{w}} b}, T_{b \mathbf{{w}}}, (T\bar{X})_{\mathbf{{w}} \beta }, (\bar{X}^*T)_{\beta \mathbf{{w}}}, (\bar{X}^*T\bar{X})_{\beta \beta }}\bigr \}. \end{aligned}$$
In each term of \(x_l\), the index \(\mathbf{{w}}\) appears exactly twice\(,\) while the indices \(b\) and \(\beta \) each appear exactly \(l\) times.
Moreover\(,\) we have the estimates
$$\begin{aligned} |x_1 | + |x_3 | \prec \phi ^{-1/4}N \Psi \Psi _b, \qquad |x_2 | \prec \phi ^{-1/2}N \Psi _b^2+N\Psi ^2, \end{aligned}$$
(7.37)
where the spectral parameter on the right-hand side is \(z = E + \mathrm {i}\eta \).
(ii) We have the expansion
$$\begin{aligned} \hbox {Tr }S - \hbox {Tr }T = \sum _{l=1}^3 J_l U_{b\beta }^l + O_\prec \left( {\phi ^{-1} N^{-1} \Psi ^2}\right) , \end{aligned}$$
(7.38)
where \(J_l\) is a polynomial\(,\) with a constant number of terms\(,\) in the variables
$$\begin{aligned} \bigl \{{T_{bb}, (T^2)_{bb}, (T^2 \bar{X})_{b\beta }, (\bar{X}^*T^2)_{\beta b}, (\bar{X}^*T\bar{X})_{\beta \beta }, (\bar{X}^*T^2\bar{X})_{\beta \beta }}\bigr \}. \end{aligned}$$
In each term of \(J_l\)\(,\) \(T^2\) appears exactly once\(,\) while the indices \(b\) and \(\beta \) each appear exactly \(l\) times.
Moreover\(,\) for \(z \in \mathbf{{S}}\) we have the estimates
$$\begin{aligned} |J_1 | + |J_3 | \prec \phi ^{-1/4}N\Psi ^2, \qquad |J_2 | \prec N \Psi ^2. \end{aligned}$$
(7.39)
(iii) Defining
$$\begin{aligned}&y_l \mathrel {\mathop :}=\frac{1}{2\pi }\int _{\mathbb {R}^2} J_{l} \, \Big ( \mathrm {i}\sigma f_E''(e) g(\sigma ) \, \mathbf{{1}} \left( {|\sigma | > \tilde{\eta }N^{-d \varepsilon }}\right) \\&\quad \qquad +\mathrm {i}f_E(e) g'(\sigma )- \sigma f_E'(e)g'(\sigma )\Big )\,\mathrm {d}e \, \mathrm {d}\sigma , \end{aligned}$$
we have the expansion
$$\begin{aligned} y_S - y_T = \sum _{l=1}^3 y_{l} U_{b\beta }^l + O_\prec \left( {N^{C \varepsilon } \phi ^{-1} N^{-2} \kappa _a^{1/2}}\right) \end{aligned}$$
(7.40)
together with the bounds
$$\begin{aligned} |y_1 | + |y_3 | \prec \phi ^{-1/4}N^{C\varepsilon }\kappa _a^{1/2}, \qquad |y_2 | \prec N^{C\varepsilon }\kappa _a^{1/2}. \end{aligned}$$
(7.41)
Here all constants \(C\) depend on the fixed parameter \(d\).
Proof
The proof is an application of the resolvent expansion (7.34) with \(m = 3\) to the definitions of \(x\) and \(y\).
We begin with part (i). The expansion (7.36) is obtained by expanding the resolvent \(S_{\mathbf{{w}} \mathbf{{w}}}\) in the definition of \(x_S\) using (7.34) with \(m = 3\). The terms are regrouped according to the power, \(l\), of \(U_{b \beta }\). The error term of (7.36) contains all terms with \(l \geqslant 4\). It is a simple matter to check that the polynomials \(x_l\), for \(l = 1,2,3\), have the claimed algebraic properties. In order to establish the claimed bounds on the terms of the expansion, we use Lemma 7.5 to derive the estimates
and the same estimates hold if \(T\) is replaced by \(S\). Note that in (7.42) we used the bound \(|m_{\phi ^{-1}} | \asymp \phi ^{-1/2}\), which follows from the identity
and Lemma 3.6. Using (7.42), it is not hard to conclude the proof of part (i).
Part (ii) is proved in the same way as part (i), simply by setting \(\mathbf{{w}} = \mathbf{{e}}_i\) and summing over \(i = 1,\ldots , M\).
What remains is to prove the bounds in part (iii). To that end, we integrate by parts, first in \(e\) and then in \(\sigma \), in the term containing \( f_E''(e)\), and obtain
where we abbreviated \(\tilde{\eta }_d \mathrel {\mathop :}=\tilde{\eta }N^{-d \varepsilon }\). Thus we get the bound
Using (7.43), the conclusion of the proof of part (iii) follows by a careful estimate of each term on the right-hand side, using part (ii) as input. The ingredients are the definitions of \(\tilde{\eta }_d\) and \(\kappa _a\), as well as the estimate
The same argument yields the error bound in (7.40). This concludes the proof. \(\square \)
Armed with the expansion from Lemma 7.7, we may do a Taylor expansion of \(q\). To that end, we record the estimate \( \int _I |x_T(E) | \, \mathrm {d}E \prec N^{C \varepsilon }\), as follows from Lemma 7.5. Hence using Lemma 7.7 and expanding \(q(y_S(E))\) around \(q(y_T(E))\) with a fourth-order remainder term, we get
where we defined
as well as the polynomial
where we abbreviated \(m \equiv m(\mathbf{{l}})\). Here we use the convention that \(x_0 \mathrel {\mathop :}=x_T\). Note that \({\mathcal {L}}\) is a finite set (it has 14 elements), and for each \(\mathbf{{l}} \in {\mathcal {L}}\) the polynomial \(A_{\mathbf{{l}}}\) is independent of \(U_{b \beta }\). In the estimate of the error term on the right-hand side of (7.44) we also used the fact that for \(E\in I\) we have \(\Psi (E+\mathrm {i}\eta ) \leqslant N^{C\varepsilon }\kappa _a^{1/2}\).
Next, using Lemma 7.7 and \(N \Delta _a \asymp \kappa _a^{-1/2}\), we find
Using (7.44) and (7.46), we may do a Taylor expansion of \(h\) on the left-hand side of (7.27). This yields
where we abbreviated \(A_{\mathbf{{0}}} \mathrel {\mathop :}=\int _I x_0 \, \mathrm {d}E\). Since \(a \leqslant N^{1 - \tau }\), it is easy to see that, by choosing \(\varepsilon \) small enough depending on \(\tau \), the error term in (7.47) is bounded by \(N^{-c} (\phi ^{-1} N^{-2} + N^{-1} |w(b) |^2)\) for some positive constant \(c\). Taking the expectation and recalling that \(|U_{b \beta } | \prec \phi ^{-1/4} N^{-1/2}\), we therefore get
where \(\mathbb {E}{\mathcal {A}}\) is as described after (7.27), i.e. it depends on the random variable \(U_{b \beta }\) only through its first two moments.
At this point we note that if we make the stronger assumption that the first three moments of \(X^{(1)}\) and \(X^{(2)}\) match (which, in the ultimate application to the proof of Proposition 6.6, means that \(\mathbb {E}X_{i \mu }^3 = 0\)), the proof is now complete. Indeed, in that case we may allow \({\mathcal {A}}\) to be a polynomial of degree three in \(U_{b \beta }\) with \(\bar{X}\)-measurable coefficients, and we may absorb the last line of (7.48) into \(\mathbb {E}{\mathcal {A}}\). This completes the proof of (7.27), and hence of Lemma 7.3, for the special case that the third moments of \(X^{(1)}\) and \(X^{(2)}\) match.
For the general case, we still have to estimate the last line of (7.48). The terms that we need to analyse are
These terms are dealt with in the following lemma.
Lemma 7.8
Let \(Y\) denote any term of (7.49). Then there is a constant \(c > 0\) such that
Plugging the estimate of Lemma 7.8 into (7.48), and recalling that \(\mathbb {E}U_{b \beta }^3 \leqslant C \phi ^{-3/4} N^{-3/2}\), it is easy to complete the proof of (7.27), and hence of Lemma 7.3. Lemma 7.8 is proved in the next subsection.
7.3 Proof of Lemma 7.3 III: the terms of order three and proof of Lemma 7.8
Recall that we assume \(\phi \geqslant 1\), i.e. \(K = N\); the case \(\phi < 1\) is dealt with analogously, and we omit the details.
We first remark that using the bounds (7.46) we find
Comparing this to (7.50), we see that we need to gain an additional factor \(N^{-1/2}\). How to do so is the content of this subsection.
The basic idea behind the additional factor \(N^{-1/2}\) is that the expectation \(\mathbb {E}Y\) is smaller than the typical size \(\sqrt{\mathbb {E}|Y |^2}\) of \(Y\) by a factor \(N^{-1/2}\). This is a rather general property of random variables which can be written, up to a negligible error term, as a polynomial of odd degree in the entries \(\{\bar{X}_{i \beta }\}_{i = 1}^M\). A systematic representation of a large family of random variables in terms of polynomials was first given in [18], and was combined with a parity argument in [10]. Subsequently, an analogous parity argument for more singular functions was developed in [44]. Following [44], we refer to the process of representing a random variable \(Y\) as a polynomial in \(\{\bar{X}_{i \beta }\}_{i = 1}^M\) up to a negligible error term as the polynomialization of \(Y\).
We shall develop a new approach to the polynomialization of the variables (7.49). The main reason is that these variables have a complicated algebraic structure, which needs to be combined with the Helffer–Sjöstrand representation (7.23). These difficulties lead us to define a family of graded polynomials (given in Definitions 7.10–7.12), which is general enough to cover the polynomialization of all terms from (7.49) and imposes conditions on the coefficients that ensure the gain of \(N^{-1/2}\). The basic structure behind these polynomials is a classification based on the \(\ell ^2\)- and \(\ell ^3\)-norms of their coefficients.
Let us outline the rough idea of the parity argument. We use the notations \(\bar{X} = \bar{X}_{[\beta ]} + \bar{X}^{[\beta ]}\) and \(T^{[\beta ]}(z) \mathrel {\mathop :}=(\bar{X}^{[\beta ]} (\bar{X}^{[\beta ]})^* - z)^{-1}\), in analogy to those introduced before (7.28). A simple example of a polynomial is
This is a polynomial of degree two. Note that the coefficients \(T^{[\beta ]}_{ij}\) are \(\bar{X}^{[\beta ]}\)-measurable, i.e. independent of \(\bar{X}_{[\beta ]}\). It is not hard to see that \(\mathbb {E}{\mathcal {P}}_2\) is of the same order as \(\sqrt{\mathbb {E}|{\mathcal {P}}_2 |^2}\), so that taking the expectation of \({\mathcal {P}}_2\) does not yield better bounds. The situation changes drastically if the polynomial is odd degree. Consider for instance the polynomial
Now we have \(|\mathbb {E}{\mathcal {P}}_3 | \lesssim N^{-1/2} \sqrt{\mathbb {E}|{\mathcal {P}}_3 |^2}\). The reason for this gain of a factor \(N^{-1/2}\) is clear: taking the expectation forces all three summation indices \(i,j,k\) to coincide.
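The gain may be quantified in a toy computation (a sketch under simplified moment assumptions, not the precise normalization of the entries \(\bar{X}_{i \beta }\)): let \(x_1, \ldots , x_N\) be independent with \(\mathbb {E}x_i = 0\), \(\mathbb {E}x_i^2 = N^{-1}\), and \(\mathbb {E}|x_i |^3 \leqslant C N^{-3/2}\), and let \(c_{ijk}\) be deterministic with \(|c_{ijk} | \leqslant 1\). Then
$$\biggl | \mathbb {E}\sum _{i,j,k} c_{ijk} \, x_i x_j x_k \biggr | = \biggl | \sum _{i} c_{iii} \, \mathbb {E}x_i^3 \biggr | \leqslant C N^{-1/2},$$
since independence and \(\mathbb {E}x_i = 0\) annihilate every term in which some index appears exactly once, and for three indices this forces \(i = j = k\). In contrast, the second moment receives roughly \(N^3\) nonvanishing paired terms of size \(N^{-3}\), so that \(\sqrt{\mathbb {E}|{\mathcal {P}}_3 |^2}\) is in general of order one.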
In the following we define a large family of \(\mathbb {Z}_2\)-graded polynomials that is sufficiently general to cover the polynomializations of the terms (7.49). We shall introduce a notation \(O_{\prec , *}(A)\), which generalizes the notation \(O_{\prec }(A)\) from Definition 2.1; here \(* \in \{{\mathrm{{even}}, \mathrm{{odd}}}\}\) denotes the parity of the polynomial, and \(A\) its size. We always have the trivial bound \(O_{\prec , *}(A) = O_{\prec }(A)\). In addition, we roughly have the estimates
The need to gain an additional factor \(N^{-1/2}\) from odd polynomials imposes nontrivial constraints on the polynomial coefficients, which are carefully stated in Definitions 7.10–7.12; they have been tailored to the class of polynomials generated by the terms (7.49).
We now move on to the proof of Lemma 7.8. We recall that we assume throughout that \(\phi \geqslant 1\). We first introduce a family of graded polynomials suitable for our purposes. It depends on a constant \(C_0\), which we shall fix during the proof to be some large but fixed number.
Definition 7.9
(Admissible weights) Let \(\varrho = (\varrho _i \mathrel {\mathop :}i \in [\![{1, \phi N}]\!])\) be a family of deterministic nonnegative weights. We say that \(\varrho \) is an admissible weight if
Definition 7.10
(\(O_{\prec , d}(\cdot )\)) For a given degree \(d \in \mathbb {N}\) let
be a polynomial in \(\bar{X}\). Analogously to the notation \(O_\prec (\cdot )\) introduced in Definition 2.1, we write \({\mathcal {P}} = O_{\prec , d}(A)\) if the following conditions are satisfied.
(i) \(A\) is deterministic and \(V_{i_1 \cdots i_d}\) is \(\bar{X}^{[\beta ]}\)-measurable.
(ii) There exist admissible weights \(\varrho ^{(1)},\ldots , \varrho ^{(d)}\) such that
$$\begin{aligned} |V_{i_1 \cdots i_d} | \prec A \, \varrho ^{(1)}_{i_1} \cdots \varrho ^{(d)}_{i_d}. \end{aligned}$$
(7.54)
(iii) We have the deterministic bound \(|V_{i_1 \cdots i_d} | \leqslant N^{C_0}\).
The above definition extends trivially to the case \(d = 0\), where \({\mathcal {P}} = V\) is \(\bar{X}^{[\beta ]}\)-measurable.
Definition 7.11
(\(O_{\prec , \diamond }(\cdot )\)) Let \({\mathcal {P}}\) be a polynomial of the form
We write \({\mathcal {P}} = O_{\prec , \diamond }(A)\) if \(V_i\) is \(\bar{X}^{[\beta ]}\)-measurable, \(|V_i | \leqslant N^{C_0}\), and \(|V_i | \prec A\) for some deterministic \(A\).
Definition 7.12
(Graded polynomials) We write \({\mathcal {P}} = O_{\prec , \mathrm{{even}}}(A)\) if \({\mathcal {P}}\) is a sum of at most \(C_0\) terms of the form
where \(n,m \leqslant C_0\) and \(A\) is deterministic.
Moreover, we write \({\mathcal {P}} = O_{\prec , \mathrm{{odd}}}(A)\) if \({\mathcal {P}} = \widehat{{\mathcal {P}}}\, {\mathcal {P}}_{\mathrm{{even}}}\), where \(\widehat{{\mathcal {P}}} = O_{\prec , 1}(1)\) and \({\mathcal {P}}_{\mathrm{{even}}} = O_{\prec , \mathrm{{even}}}(A)\).
Definitions 7.10–7.12 refine Definition 2.1 in the sense that
Indeed, let \({\mathcal {P}} = O_{\prec , d}(A)\) be of the form (7.53). Then a simple large deviation estimate (e.g. a trivial extension of [19, Theorem B.1(iii)]) yields
where the last step follows from the definition of admissible weights. Similarly, if \({\mathcal {P}} = O_{\prec , \diamond }(A)\) is of the form (7.55), a large deviation estimate (e.g. [19, Theorem B.1(i)]) yields
Note that terms of the form \(O_{\prec , \mathbf{{\cdot }}}(A)\) satisfy simple algebraic rules. For instance, we have
and
after possibly increasing \(C_0\). (As with the standard big O notation, such expressions are to be read from left to right.) We stress that such operations may be performed an arbitrary, but bounded, number of times. It is a triviality that all of the following arguments will involve at most \(C_0\) such algebraic operations on graded polynomials, for large enough \(C_0\).
The point of the graded polynomials is that bounds of the form (7.56) are improved if \(d\) is odd and we take the expectation. The precise statement is the following.
Lemma 7.13
Let \({\mathcal {P}} = O_{\prec , \mathrm{{odd}}}(A)\) for some deterministic \(A \leqslant N^C\). Then for any fixed \(D > 0\) we have
Proof
It suffices to set \(A = 1\) and consider \({\mathcal {P}} = \widehat{{\mathcal {P}}} {\mathcal {P}}_0 \prod _{s = 1}^{m}{\mathcal {P}}_s\), where \(\widehat{{\mathcal {P}}}\), \({\mathcal {P}}_0\), and \({\mathcal {P}}_s\) are as in Definition 7.12. By linearity, it suffices to consider
where \(d = 2n\) is even. We suppose that \(|W_{i_0} | \prec \varrho ^{(0)}_{i_0}\), \(|V_{i_1 \cdots i_d} | \prec \varrho ^{(1)}_{i_1} \cdots \varrho ^{(d)}_{i_d}\), and \(|V^{(k)}_{i_k} | \prec 1\) for \(k = d+1,\ldots , d+m\). Here \(\varrho ^{(k)}_{i_k}\) denotes an admissible weight (see Definition 7.9). Thus we have
where the term \(N^{-D}\) comes from the trivial deterministic bound \(|V_{i_1 \cdots i_d} | \leqslant N^{C_0}\) on the low-probability event of \(\prec \) [i.e. the event inside \(\mathbb {P}[\,\cdot \,]\) in (2.1)] in (7.54), and analogous bounds for the other \(\bar{X}^{[\beta ]}\)-measurable coefficients.
The expectation imposes that each summation index \(i_0,\ldots , i_{d+m}\) coincide with at least one other summation index. Thus we get
where the indicator function \(I(\cdot )\) imposes the condition that each summation index must coincide with at least one other index, and we introduced the weight \(\tilde{\varrho }_i^{(k)} \mathrel {\mathop :}=N^{-1/2} \phi ^{-1/4} \varrho ^{(k)}_i\). Note that
Here for \(q > 3\) we used the inequality \(\Vert \tilde{\varrho }^{(k)} \Vert _{\ell ^q} \leqslant \Vert \tilde{\varrho }^{(k)} \Vert _{\ell ^p}\) for \(q \geqslant p\). The indicator function \(I\) on the right-hand side of (7.57) imposes a reduction in the number of independent summation indices. We may write \(I = \sum _{P} I_P\) as a sum over all partitions \(P\) of the set \([\![{0, d+m}]\!]\) with blocks of size at least two, whereby
Hence the summation over \(i_1,\ldots , i_{d+m}\) factors into a product over the blocks of \(P\). We shall show that the contribution of each block is at most one, and that there is a block whose contribution is at most \(N^{-1/2}\).
Fix \(p \in P\) and denote by \(S_p\) the contribution of the block \(p\) to the summation in the main term of (7.57). Define \(s \mathrel {\mathop :}=|p \cap [\![{0,d}]\!] |\) and \(t \mathrel {\mathop :}=|p \cap [\![{d+1, d+m}]\!] |\). By definition of \(P\), we have \(s + t \geqslant 2\). By the inequality of arithmetic and geometric means, we have
Using (7.58) it is easy to conclude that
Moreover, since \(d\) is even, at least one block of \(P\) satisfies \((s,t) \ne (2,0)\).
Thus we find that
Since \(d+m \leqslant 2 C_0\), the proof is complete. \(\square \)
In order to apply Lemma 7.13 to the terms \(Y\) from (7.49), we need to expand \(Y\) in terms of graded polynomials. This expansion is summarized in the following result, which gives the polynomializations of the coefficients of the terms from (7.49). For an arbitrary unit vector \(\mathbf{{v}} \in \mathbb {R}^M\) we define the control parameter
Lemma 7.14
Fix \(D > 0\). Then there exists \(C_0 = C_0(D)\) such that for any unit vector \(\mathbf{{v}} \in \mathbb {R}^M\) we have
uniformly for \(z \in \mathbf{{S}}\).
Proof
We begin by noting that (3.9) applied to \(X^{[\mu ]}\) and (3.22) combined with a large deviation estimate (see [19, Theorem B.1]) yields
Using (7.35) and Lemma 7.5, it is not hard to deduce that
Thus for any fixed \(n\) we may expand
where in the second step we used (3.5) and (3.23). Now we split
where in the second step we used the estimates \(|T_{ij}^{[\beta ]} - \delta _{ij} m_{\phi ^{-1}} | \prec \phi ^{-1} \Psi \) and \(|m_{\phi ^{-1}} | \leqslant C \phi ^{-1/2}\). Since \(|z m_\phi | \leqslant C \phi ^{1/2}\), we therefore conclude that
From (7.31) and the definition of \(\eta \), we readily find that \(\Psi \leqslant N^{-c \tau }\) for some constant \(c\). Therefore choosing \(n \equiv n(\tau ,D)\) large enough yields
Having established (7.64), the remainder of the proof is relatively straightforward. From (7.29) and (7.30) we get
Moreover, using \(\Psi \geqslant c N^{-1/2}\) and \(T_{\mathbf{{v}} i}^{[\beta ]} = v(i) m_{\phi ^{-1}} + O_\prec (\phi ^{-1} \Psi ) = O_\prec (\phi ^{-1/2} |v(i) | + \phi ^{-1} \Psi )\), we find
We conclude that
Now (7.62) follows easily from (7.65) and (7.64).
Moreover, (7.59) and (7.61) follow from (7.28) combined with (7.65) and (7.64). For (7.61) we estimate the second term in (7.28) by
where in the last step we used that \(\Psi \geqslant c N^{-1/2}\). Moreover, (7.60) is a trivial consequence of (7.59). Finally, (7.63) follows from (7.28) and (7.64) combined with
This concludes the proof. \(\square \)
Note that the upper bounds in Lemma 7.14 are the same as those of (7.42), except that \(\Psi \) is replaced with the larger quantity \(\Psi ^{\mathbf{{v}}}\). In order to get back to \(\Psi \) from \(\Psi ^{\mathbf{{v}}}\), we use the following trivial result.
Lemma 7.15
We have
if
Proof
The claim follows immediately from the lower bound \(\Psi \geqslant N^{-1/2}\), valid for all \(z \in \mathbf{{S}}\). \(\square \)
In each application of Lemma 7.14, we shall verify one of the conditions of (7.66). The first condition is verified for \(\eta \leqslant N^{-2/3}\), which always holds for the coefficients of \(x_1\), \(x_2\), and \(x_3\) (recall (7.2)).
The second condition of (7.66) will be verified when computing the coefficients of \(y_1\), \(y_2\), and \(y_3\). To that end, we make use of the freedom of the choice of basis when computing the trace in the definition of \(J_1\), \(J_2\), and \(J_3\). We shall choose a basis that is completely delocalized. The following simple result guarantees the existence of such a basis.
Lemma 7.16
There exists an orthonormal basis \(\mathbf{{w}}_1,\ldots , \mathbf{{w}}_M\) of \(\mathbb {R}^M\) satisfying
uniformly in \(i\) and \(j\).
Proof
Let the matrix \([\mathbf{{w}}_1 \cdots \mathbf{{w}}_M]\) of orthonormal basis vectors be uniformly distributed on the orthogonal group \(\mathrm O(M)\). Then each \(\mathbf{{w}}_i\) is uniformly distributed on the unit sphere, and by standard Gaussian concentration arguments one finds that \(|w_i(j) | \prec M^{-1/2}\). In particular, there exists an orthonormal basis \(\mathbf{{w}}_1,\ldots , \mathbf{{w}}_M\) satisfying (7.67). In fact, a slightly more careful analysis shows that one can choose \(|w_i(j) | \leqslant (2 + \varepsilon ) (\log M)^{1/2} M^{-1/2}\) for any fixed \(\varepsilon > 0\) and large enough \(M\). \(\square \)
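Lemma 7.16 is also easy to check numerically. The following sketch (sizes illustrative) samples a Haar-distributed orthogonal matrix via the QR decomposition of a Gaussian matrix, with the standard sign convention that makes the law exactly Haar, and compares the largest entry to the scale suggested by the proof.

```python
# Numerical sanity check of Lemma 7.16 (illustrative): a Haar-distributed
# orthonormal basis of R^M is completely delocalized, i.e. all entries are
# of size about M^{-1/2} up to a logarithmic factor.
import numpy as np

rng = np.random.default_rng(2)
M = 2000
Z = rng.standard_normal((M, M))
Q, R = np.linalg.qr(Z)
Q = Q * np.sign(np.diag(R))   # fix column signs so that Q is exactly Haar

print("largest entry     :", np.abs(Q).max())
print("(2 log M / M)^1/2 :", np.sqrt(2 * np.log(M) / M))
```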
We may now derive estimates on the matrix \(T^2\) by writing \((T^2)_{jk} = \sum _{i}T_{j \mathbf{{w}}_i} T_{\mathbf{{w}}_i k}\), where \(\{\mathbf{{w}}_i\}\) is a basis satisfying (7.67). From Lemmas 7.14 and 7.15 we get the following result.
Lemma 7.17
Fix \(D > 0\). Then there exists \(C_0 = C_0(D)\) such that
uniformly for \(z \in \mathbf{{S}}\).
Proof
We prove (7.70); the other estimates are proved similarly. We choose a basis \(\mathbf{{w}}_1,\ldots , \mathbf{{w}}_M\) as in Lemma 7.16, and write
where we used (7.61) with \(\mathbf{{w}}\) replaced by \(\mathbf{{w}}_i\), (7.62), and Lemma 7.15. Summing over \(i\), and recalling that \(\Psi \geqslant N^{-1/2}\), it is easy to conclude (7.70). \(\square \)
In particular, as in (7.39) we find
where the parity of \(J_i\) follows easily from its definition.
The estimates from Lemma 7.14 are compatible with integration in the following sense. Suppose that \({\mathcal {P}}(s)\) depends on a parameter \(s \in S\), where \(S \subset \mathbb {R}^k\) has bounded volume, and that \({\mathcal {P}}(s) = O_{\prec , *}(A(s)) + O_\prec (N^{-D})\) uniformly in \(s \in S\), where \(A(s)\) is a deterministic function of \(s\) and \(* \in \{\mathrm{{even}}, \mathrm{{odd}}\}\) denotes the parity of \({\mathcal {P}}\). Suppose in addition that \({\mathcal {P}}(s)\) is Lipschitz continuous with Lipschitz constant \(N^C\). Then, analogously to Remark 3.3, we have
Lemmas 7.14 and 7.17 are the key estimates of the coefficients appearing in (7.49). We claim that all estimates of Lemma 7.7, along with (7.42), remain valid, in the sense that an estimate of the form \(|u | \prec v\) is to be replaced with
where \(* \in \{\mathrm{{even}}, \mathrm{{odd}}\}\) denotes the parity of the polynomialization of \(u\). Indeed, for the estimates (7.37) on \(x_i\), we always have \(\hbox {Im }z = \eta \leqslant N^{-2/3}\), so that by Lemma 7.15 we have \(\Psi ^{\mathbf{{v}}} \prec \Psi \). Thus we get from Lemma 7.14 that
where the parity of \(x_i\) may be easily deduced from their definitions. Moreover, for the estimates (7.41) we use (7.72) to get
Note that, thanks to Lemmas 7.14 and 7.17, we have obtained exactly the same upper bounds on the coefficients \(x_i\) and \(y_i\) as the ones obtained in Lemma 7.7, but we have in addition expressed them, up to a negligible error, as graded polynomials, to which Lemma 7.13 is applicable.
In addition to the coefficients \(x_i\) and \(y_i\), we have to control the coefficient \(q^{(m)}(y_T)\) in the definition (7.45) of \(A_{\mathbf{{l}}}\). We in fact claim that
This follows from the estimate
which may be derived from (7.68), combined with a Taylor expansion of \(q^{(m)}\). Similarly, we find that
We may now put everything together. Noting that the degree of the polynomializations of the expressions (7.49) is always odd, we obtain, in analogy to (7.51), that
for \(Y\) being any term of (7.49). Hence Lemma 7.8 follows from Lemma 7.13 and Young’s inequality.
7.4 Stability of level repulsion: proof of Lemma 6.5
This is a Green function comparison argument, using the machinery introduced in Sect. 7.1. A similar comparison argument was given in Propositions 2.4 and 2.5 of [29]. The details in the sample covariance case and for indices \(a\) satisfying \(a \leqslant K^{1 - \tau }\) follow an argument very similar to (in fact simpler than) the one from Sects. 7.1–7.3. As in the proofs of Propositions 2.4 and 2.5 of [29], one writes the level repulsion condition in terms of resolvents. In our case, one uses the representation (7.15) as the starting point. Then the machinery of Sects. 7.1–7.3 may be applied with minor modifications. We omit the details.
8 Extension to general \(T\) and universality for the uncorrelated case
In this section we relax the assumption (3.1), and hence extend all arguments of Sects. 3–7 to cover general \(T\). We also prove the fixed-index joint eigenvalue–eigenvector universality of the matrix \(H\) defined in (2.7), for indices bounded by \(K^{1 - \tau }\) for some \(\tau > 0\).
Bearing the applications in the current paper in mind, we state the results of this section for the matrix \(H\) from (2.7), but it is a triviality that all results and their proofs carry over to the case of arbitrary \(Q\) from (1.10) provided that \(\Sigma = T T^* = I_M\).
8.1 The isotropic Marchenko–Pastur law for \(Y Y^*\)
We start with the singular value decomposition of \(T\), which we write as
where \(O' \in \mathrm O(M)\) and \(O'' \in \mathrm O(M+r)\) are orthogonal matrices, \(0\) is the \(M \times r\) zero matrix, and \(\Lambda \) is an \(M \times M\) diagonal matrix containing the singular values of \(T\). Setting
we have
We conclude that
where \(H \mathrel {\mathop :}=Y Y^*\) and \(Y \mathrel {\mathop :}=(I_M, 0) O X\) were defined in (2.7). Comparing this to (3.2), we find that to relax the assumption (3.1) we have to generalize the arguments of Sects. 3–7 by replacing \(X X^*\) with \(H = Y Y^*\).
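The reduction (8.1) is straightforward to check numerically. In the following sketch (our own construction, with \(r = 1\) and illustrative sizes), \(T\) is an \(M \times (M + r)\) matrix with orthonormal rows, so that \(\Sigma = T T^* = I_M\); its singular value decomposition produces an orthogonal matrix for which \(T X X^* T^*\) and \(Y Y^*\) with \(Y = (I_M, 0)\, O X\) have identical spectra.

```python
# Sketch of the reduction (8.1) for r = 1: T X X^* T^* and Y Y^* with
# Y = (I_M, 0) O X are orthogonally equivalent, hence isospectral.
import numpy as np

rng = np.random.default_rng(3)
M, r, N = 5, 1, 40

# a random M x (M + r) matrix T with T T^* = I_M: rows of an orthogonal matrix
Qmat, _ = np.linalg.qr(rng.standard_normal((M + r, M + r)))
T = Qmat[:M, :]
assert np.allclose(T @ T.T, np.eye(M))

# SVD T = O' (Lambda, 0) O''; here Lambda = I_M since T T^* = I_M
U, s, Vt = np.linalg.svd(T)            # s consists of ones
O = Vt                                 # plays the role of O in (2.7)

X = rng.standard_normal((M + r, N)) / np.sqrt(N)
Y = (O @ X)[:M, :]                     # Y = (I_M, 0) O X

# T X X^* T^* = O' (Y Y^*) O'^*, so the spectra agree
lhs = np.linalg.eigvalsh(T @ X @ X.T @ T.T)
rhs = np.linalg.eigvalsh(Y @ Y.T)
print(np.max(np.abs(lhs - rhs)))       # numerically zero
```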
The generalization of \(G = (X X^* - z)^{-1}\) is the resolvent of \(Y Y^*\),
We also abbreviate
Throughout the following we identify \(\mathbf{{w}} \in \mathbb {R}^M\) with its natural embedding \(\left( {\begin{array}{c}\mathbf{{w}}\\ 0\end{array}}\right) \in \mathbb {R}^{M+r}\). Thus, for example, for \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\) we may write \(G'_{\mathbf{{v}} \mathbf{{w}}}\).
Theorem 8.1
(Local laws for \(Y Y^*)\) Theorem 3.2 remains valid with \(G\) replaced by \(\widehat{G}\). Moreover\(,\) Theorems 3.4 and 3.5 remain valid for \(\varvec{\zeta }_i\) and \(\lambda _i\) denoting the eigenvectors and eigenvalues of \(Y Y^*\).
Proof
It suffices to prove the first sentence, since all claims in the second sentence follow from the isotropic law (see [10] for more details). We only prove (3.9) for \(\widehat{G}\); the other bound, (3.10) for \(\widehat{G}\), is proved similarly. To simplify the presentation, we suppose that \(r = 1\); the case \(r \geqslant 2\) is a trivial extension. Abbreviate \(\bar{M} \mathrel {\mathop :}=M + 1\). Noting that \(Y_{i \mu } = \mathbf{{1}} (i \ne \bar{M}) (OX)_{i \mu }\), we find from [10, Definition 3.5 and Equation (3.7)] that
Since \(G' = O G O^*\), we have \(G'_{\mathbf{{v}} \mathbf{{w}}} = G_{O^* \mathbf{{v}} \, O^* \mathbf{{w}}}\). Hence, using (3.9) and (8.2), the proof will be complete provided we can show that
where we recall the definition (7.31) of \(\Psi \). In fact, from Lemma 3.6 and (3.23) we find that \(\Phi / |m_{\phi ^{-1}} | \leqslant N^{-c}\) for some positive constant \(c\) depending on \(\tau \). Hence (3.9) yields
This concludes the proof. \(\square \)
Having established Theorem 8.1, all arguments from Sects. 3–6 that use it as input may be taken over verbatim, after replacing \(G\) by \(\widehat{G}\). More precisely, all results from Sects. 3–6 remain valid for a general \(Q\), with the exception of Proposition 6.3, Lemmas 6.4 and 6.5, and Proposition 6.6. Therefore we have completed the proofs of all of our main results except Theorem 2.20.
In order to prove Theorem 2.20, we still have to prove Lemmas 6.4 and 6.5 and Proposition 6.6 for \(Y Y^*\) instead of \(X X^*\). Lemma 6.4 is easy: for Gaussian \(X\) we have \(Y \overset{d}{=}(I_M, 0) X = \widetilde{X}\), where \(\widetilde{X}\) is the \(M \times N\) matrix obtained from \(X\) by deleting its bottom \(r\) rows.
The proofs of Lemma 6.5 and Proposition 6.6 rely on Green function comparison. What remains, therefore, is to extend the argument of Sect. 7 from \(H = X X^*\) to \(H = Y Y^*\).
8.2 Quantum unique ergodicity for \(YY^*\)
In this subsection we prove Proposition 6.6 for the eigenvectors \(\varvec{\zeta }_a\) of \(H = Y Y^*\). As explained in Sect. 7.4, the proof of Lemma 6.5 is analogous and therefore omitted. We proceed exactly as in Sect. 7, replacing \(G\) with \(\widehat{G}\). It suffices to prove the following result.
Lemma 8.2
Lemma 7.3 remains valid if \(x(E)\) and \(y(E)\) are replaced with \(\widehat{x}(E)\) and \(\widehat{y}(E),\) obtained from the definitions (7.22) and (7.23) by replacing \(G\) with \(\widehat{G}\).
Proof
We take over the notation from the proof of Theorem 8.1, and to simplify notation again assume that \(r = 1\). As in Sect. 7, we suppose for definiteness that \(\phi \geqslant 1\). Defining \(\mathbf{{u}} \mathrel {\mathop :}=O \mathbf{{w}}\) and \(\mathbf{{r}} \mathrel {\mathop :}=O \mathbf{{e}}_{M+1}\), we have \(\langle {\mathbf{{u}}} , {\mathbf{{r}}}\rangle = 0\) and, using (8.2),
We conclude that
Recalling (3.9) and (7.31), we find that the second term is stochastically dominated by
where in the second step we used that \(\Psi \leqslant C (N \eta )^{-1}\), as follows from Lemma 3.6 and the definition of \(\eta \) in (7.2). Recalling the definitions from (7.2), we therefore conclude that for small enough \(\varepsilon \equiv \varepsilon (\tau )\) we have
for some positive constant \(c\) depending on \(\tau \).
Similarly, we have for any \(z \in \mathbf{{S}}\)
for some positive constant \(c\) depending on \(\tau \). Plugging (8.5) into the definition of \(\widehat{y}(E)\) and estimating the error term using integration by parts, as in (7.43), we get
Using the mean value theorem and the bound \(|y(E) | \prec 1\), we therefore get
Combined with (7.21), this concludes the proof. \(\square \)
This concludes the proof of Theorem 2.20 for the case of general \(T\).
8.3 The joint eigenvalue–eigenvector universality of \(YY^*\) near the spectral edges
In this section we observe that the technology developed in Sect. 7 allows us to establish the universality of the joint eigenvalue–eigenvector distribution of \(Q\) provided that \(\Sigma = I_M\). Without loss of generality, we consider the case where \(Q\) is given by \(H = Y Y^*\) defined in (2.7). This result applies to arbitrary eigenvalue and eigenvector indices bounded by \(K^{1 - \tau }\), and in particular does not need to invoke eigenvalue correlation functions.
This result generalizes the quantum unique ergodicity from Proposition 6.6 and its extension from Remark 6.7 by also including the distribution of the eigenvalues. The universality of both the eigenvalues and the eigenvectors is formulated in the sense of fixed indices. A result in a similar spirit was given in [29, Theorem 1.6], except that the upper bound \((\log K)^{C \log \log K}\) on the eigenvalue and eigenvector indices from [29] is improved all the way to \(K^{1 - \tau }\), for any \(\tau > 0\). A result covering all eigenvalue and eigenvector indices, i.e. with an index upper bound \(K\), was given in [29, Theorem 1.10] and [41, Theorem 8], but under a four-moment matching assumption. Theorem 8.3 is a true universality result in that it does not require any moment matching, but it does require an index upper bound of \(K^{1 - \tau }\) instead of \(K\) on the eigenvalue and eigenvector indices.
In addition, Theorem 8.3 extends the previous results from [29] and [41] by considering arbitrary generalized components \(\langle {\varvec{\zeta }_a} , {\mathbf{{v}}}\rangle \) of the eigenvectors. Finally, Theorem 8.3 holds for the general class of covariance matrices defined in (2.7).
Theorem 8.3
(Universality for the uncorrelated case) Fix \(\tau > 0,\) \(k = 1,2,3, \ldots ,\) and \(r = 0,1,2, \ldots \). Choose an observable \(h \in C^4(\mathbb {R}^{2k})\) satisfying
for some constant \(C > 0\) and for all \(\alpha \in \mathbb {N}^{2k}\) satisfying \(|\alpha | \leqslant 4\). Let \(X\) be an \((M + r) \times N\) matrix\(,\) and define \(H\) through (2.7) for some orthogonal \(O \in \mathrm O(M + r)\). Denote by \(\lambda _1 \geqslant \cdots \geqslant \lambda _M\) the eigenvalues of \(H\) and by \(\varvec{\zeta }_1,\ldots , \varvec{\zeta }_M\) the associated unit eigenvectors. Let \(\mathbb {E}^{(1)}\) and \(\mathbb {E}^{(2)}\) denote the expectations with respect to two laws on \(X,\) both of which satisfy (1.15) and (1.16). Recall the definition (6.17) of \(\Delta _a,\) the typical distance between \(\lambda _a\) and \(\lambda _{a+1},\) and (3.14) of the classical location \(\gamma _a\).
Then for any indices \(a_1,\ldots , a_k, b_1,\ldots , b_k \in [\![{1, K^{1 - \tau }}]\!]\) and deterministic unit vectors \(\mathbf{{u}}_1, \mathbf{{w}}_1,\ldots , \mathbf{{u}}_k, \mathbf{{w}}_k \in \mathbb {R}^M\) we have
for some constant \(c \equiv c(\tau , k, r, h) > 0\).
Proof
The proof is a Green function comparison argument, a minor modification of that developed in Sect. 7. We write the distribution of \(\lambda _a - \gamma _a\) in terms of the resolvent \(\widehat{G}\), starting from the Helffer–Sjöstrand representation (7.15), exactly as in [29, Sections 4 and 5]. We omit further details. \(\square \)
Remark 8.4
In particular, Theorem 8.3 establishes the fixed-index universality of eigenvalues with indices bounded by \(K^{1 - \tau }\). Indeed, we may choose \(\mathbb {E}^{(2)}\) to be the expectation with respect to a Gaussian law, in which case \(H \overset{d}{=}\widetilde{X} \widetilde{X}^*\), where \(\widetilde{X}\) is \(M \times N\) and Gaussian. (For example, the top eigenvalue of \(H\) is distributed according to the Tracy–Widom-1 distribution, etc.)
We note that even this fixed-index universality of eigenvalues is a new result, having previously only been established under the four-moment matching condition [29, 41] (in the context of Wigner matrices).
Remark 8.5
We formulated Theorem 8.3 for the real symmetric covariance matrices of the form (2.7), but it and its proof remain valid for complex Hermitian covariance matrices, as well as Wigner matrices (both real symmetric and complex Hermitian).
Remark 8.6
Assuming \(|\phi - 1 | > \tau \), the condition \(a \leqslant K^{1 - \tau }\) on the indices in Theorem 8.3 may be replaced with \(a \notin [\![{K^{1 - \tau }, K - K^{1 - \tau }}]\!]\).
Remark 8.7
Combining Theorems 8.3 and 2.7, we get the following universality result for \(Q\). Fix \(\tau > 0\), \(k = 1,2,3, \ldots \), and \(r = 0,1,2, \ldots \). For any continuous and bounded function \(h\) on \(\mathbb {R}^k\) we have
for any indices \(a_1,\ldots , a_k \leqslant K^{1 - \tau } \alpha _+^3\). Here \(\mathbb {E}^{\mathrm{{Wish}}}\) denotes expectation with respect to the Wishart case, where \(r = 0\), \(T = I_M\), and \(X\) is Gaussian. A similar result holds near the left edge provided that \(|\phi - 1 | \geqslant \tau \).
9 Extension to \(\dot{Q}\) and proof of Theorem 2.23
In this section we explain how to extend our analysis from \(Q\) defined in (1.10) to \(\dot{Q}\) defined in (2.23), hence proving Theorem 2.23. We define the resolvent
which will replace \(G(z) = (X X^* - z)^{-1}\) when analysing \(\dot{Q}\) instead of \(Q\). We begin by noting that the isotropic local laws hold also for \(\dot{G}\).
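Before stating the local laws, we record a quick numerical check (illustrative; we take \(\mathbf{{e}}\) to be the normalized constant vector \(N^{-1/2}(1, \ldots , 1)^*\), and we ignore the \(1/N\) normalization of the text) of the algebraic identity behind \(\dot{Q}\): the projection \(1 - \mathbf{{e}} \mathbf{{e}}^*\) implements the subtraction of the empirical row means.

```python
# Sanity check: X (1 - e e^*) X^* equals Xc Xc^* where Xc is X with the
# empirical mean of each row subtracted (e is the normalized constant vector).
import numpy as np

rng = np.random.default_rng(4)
M, N = 6, 50
X = rng.standard_normal((M, N))

e = np.ones(N) / np.sqrt(N)
P = np.eye(N) - np.outer(e, e)           # projection orthogonal to e

Xc = X - X.mean(axis=1, keepdims=True)   # subtract empirical row means
assert np.allclose(X @ P @ X.T, Xc @ Xc.T)
print("centering identity verified")
```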
Theorem 9.1
(Local laws for \(X (1 - \mathbf{{e}} \mathbf{{e}}^*) X^*)\) Theorem 3.2 remains valid with \(G\) replaced by \(\dot{G}\). Moreover\(,\) Theorems 3.4 and 3.5 remain valid for \(\varvec{\zeta }_i\) and \(\lambda _i\) denoting the eigenvectors and eigenvalues of \(X (1 - \mathbf{{e}} \mathbf{{e}}^*) X^*\).
Proof
As in the proof of Theorem 8.1, we only prove (3.9) for \(\dot{G}\). Using the identity (3.36) we get
Using (3.9), the proof will be complete provided we can show that
for unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\), where \(\Phi \) was defined in (8.3). Recall the definition \(R(z) \mathrel {\mathop :}=(X^* X - z)^{-1}\). From the elementary identity \(X^*G X = 1 + zR\) and Theorem 3.2 applied to \(X^*\) instead of \(X\), we get
where in the last step we used (3.19) and (3.23). Using Lemma 9.3 below with \(\mathbf{{x}} = \mathbf{{e}}\), (9.2) therefore follows provided we can prove that
This is an immediate consequence of the estimate \((1 + \phi ) \Phi \leqslant C\), which itself easily follows from the definition (3.7) of \(\mathbf{{S}}\) and (3.22). \(\square \)
Next, we deal with the quantum unique ergodicity of \(X (1 - \mathbf{{e}} \mathbf{{e}}^*) X^*\). As explained in Sect. 8.2, it suffices to prove the following result.
Lemma 9.2
Lemma 7.3 remains valid if \(x(E)\) and \(y(E)\) are replaced with \(\dot{x}(E)\) and \(\dot{y}(E),\) obtained from the definitions (7.22) and (7.23) by replacing \(G\) with \(\dot{G}\).
Proof
The proof mirrors closely that of Lemma 8.2, using the identity (9.1) instead of (8.2) as input. We omit the details. \(\square \)
Using Theorem 9.1 and Lemma 9.2, combined with the results of Sect. 8.2, we conclude the proof of Theorem 2.23. To be precise, the arguments of Sects. 8 and 9 have to be successively combined so as to obtain the isotropic local laws and quantum unique ergodicity of the matrix \(Y (1 - \mathbf{{e}} \mathbf{{e}}^*) Y^*\). This has to be done in the following order. First, using Theorem 9.1 and Lemma 9.2, one establishes the local laws and quantum unique ergodicity for \(X (1 - \mathbf{{e}} \mathbf{{e}}^*) X^*\). Second, using these results as input, one repeats the arguments of Sect. 8, except that \(X X^*\) is replaced with \(X (1 - \mathbf{{e}} \mathbf{{e}}^*) X^*\); this is a trivial modification of the arguments presented in Sect. 8. Thus we get the local laws and quantum unique ergodicity for the matrix
Moreover, we find that Theorem 8.3 also holds if \(H = Y Y^*\) from (2.7) is replaced with \(Y (1 - \mathbf{{e}} \mathbf{{e}}^*) Y^*\).
All that remains is the proof of the following estimate, which generalizes (7.32).
Lemma 9.3
For \(z \in \mathbf{{S}}\) and deterministic unit vectors \(\mathbf{{v}} \in \mathbb {R}^M\) and \(\mathbf{{x}} \in \mathbb {R}^N,\) we have
where \(\Phi \) was defined in (8.3).
Proof
In the case where \(\mathbf{{x}} = \mathbf{{e}}_\mu \) is a standard unit vector of \(\mathbb {R}^N\), (9.3) is a trivial extension of (7.32) (which was proved under the assumption that \(\phi \geqslant 1\)). For general \(\mathbf{{x}}\), the proof requires more work. Indeed, writing \((GX )_{\mathbf{{v}} \mathbf{{x}}} = \sum _\mu (GX )_{\mathbf{{v}} \mu } \, x(\mu )\) and estimating \(|(GX )_{\mathbf{{v}} \mu } |\) by \(O_\prec (\phi ^{1/4} (1 + \phi ^{1/2}) \Phi )\) leads to a bound proportional to the \(\ell ^1\)-norm of \(\mathbf{{x}}\) instead of its \(\ell ^2\)-norm. In order to obtain the sharp bound, which is proportional to the \(\ell ^2\)-norm, we need to exploit cancellations among the summands. This phenomenon is related to the fluctuation averaging from [18], and was previously exploited in [10] to obtain the isotropic laws from Theorem 3.2. It is best made use of by estimating the \(p\)-th moment for an even integer \(p\),
here we used the first identity of (7.30). A similar argument was given in [10, Section 5]. The basic idea is to make all resolvents on the right-hand side of (9.4) independent of the columns of \(X\) indexed by \(\{\mu _1,\ldots , \mu _p\}\) (see [10, Definition 3.7]). As in [10, Section 5], we do this using the identities from [10, Lemma 3.8] for the entries of \(R\). In addition, for the entries of \(G\) we use the identity (in the notation of [10, Definition 3.7])
which follows from (7.28) and (7.29). As in [10, Section 5], the resulting expansion may be conveniently organized using graphs, and brutally truncated after a number of steps that depends only on \(p\) and \(\omega \) (here \(\omega \) is the constant from \(\mathbf{{S}}\) in (3.7)). The key observation is that, once the expansion is performed, taking the expectation induces a pairing among the variables \(\{X_{k \mu _i} \mathrel {\mathop :}k \in [\![{1,M}]\!], i \in [\![{1,p}]\!]\}\); we find that each independent summation index \(\mu _i\) comes with a weight bounded by \(x(\mu _i)^2 + N^{-1}\), which sums to \(O(1)\). We refer to [10] for the full details of the method, and leave the modifications outlined above to the reader. \(\square \)
Notes
We use the symbol \(\asymp \) to denote quantities of comparable size; see “Conventions” at the end of this section for a precise definition.
References
Bai, Z., Yao, J.: On sample eigenvalues in a generalized spiked population model. J. Multivar. Anal. 106, 167–177 (2012)
Bai, Z.D., Yao, J.F.: Central limit theorems for eigenvalues in a spiked population model. Ann. Inst. H. Poincaré (B) 44, 447–474 (2008)
Baik, J., Ben Arous, G., Péché, S.: Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices. Ann. Probab. 33, 1643–1697 (2005)
Baik, J., Silverstein, J.W.: Eigenvalues of large sample covariance matrices of spiked population models. J. Multivar. Anal. 97, 1382–1408 (2006)
Bao, Z., Pan, G., Zhou, W.: Universality for the largest eigenvalue of a class of sample covariance matrices (preprint). arXiv:1304.5690v5
Benaych-Georges, F., Guionnet, A., Maïda, M.: Large deviations of the extreme eigenvalues of random deformations of matrices. Probab. Theory Relat. Fields 154, 703–751 (2012)
Benaych-Georges, F., Guionnet, A., Maïda, M.: Fluctuations of the extreme eigenvalues of finite rank deformations of random matrices. Electron. J. Probab. 16, 1621–1662 (2011)
Benaych-Georges, F., Nadakuditi, R.R.: The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices. Adv. Math. 227, 494–521 (2011)
Benaych-Georges, F., Nadakuditi, R.R.: The singular values and vectors of low rank perturbations of large rectangular random matrices. J. Multivar. Anal. 111, 120–135 (2012)
Bloemendal, A., Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: Isotropic local laws for sample covariance and generalized Wigner matrices. Electron. J. Probab. 19, 1–53 (2014)
Bloemendal, A., Virág, B.: Limits of spiked random matrices II (preprint). arXiv:1109.3704
Bloemendal, A., Virág, B.: Limits of spiked random matrices I. Probab. Theory Relat. Fields 156, 795–825 (2013)
Borodin, A., Péché, S.: Airy kernel with two sets of parameters in directed percolation and random matrix theory. J. Stat. Phys. 132, 275–290 (2008)
Bourgade, P., Erdős, L., Yau, H.-T.: Edge universality of beta ensembles (preprint). arXiv:1306.5728
Bourgade, P., Yau, H.-T.: The eigenvector moment flow and local quantum unique ergodicity (preprint). arXiv:1312.1301
Davies, E.B.: The functional calculus. J. Lond. Math. Soc. 52, 166–176 (1995)
El Karoui, N.: Tracy-Widom limit for the largest eigenvalue of a large class of complex sample covariance matrices. Ann. Probab. 35, 663–714 (2007)
Erdős, L., Knowles, A., Yau, H.-T.: Averaging fluctuations in resolvents of random band matrices. Ann. H. Poincaré 14, 1837–1926 (2013)
Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: Delocalization and diffusion profile for random band matrices. Commun. Math. Phys. 323, 367–416 (2013)
Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: The local semicircle law for a general class of random matrices. Electron. J. Probab. 18, 1–58 (2013)
Erdős, L., Ramirez, J., Schlein, B., Yau, H.-T.: Universality of sine-kernel for Wigner matrices with a small Gaussian perturbation. Electron. J. Probab. 15, 526–604 (2010)
Erdős, L., Yau, H.-T., Yin, J.: Bulk universality for generalized Wigner matrices. Probab. Theory Relat. Fields 154, 341–407 (2012)
Erdős, L., Yau, H.-T., Yin, J.: Rigidity of eigenvalues of generalized Wigner matrices. Adv. Math. 229, 1435–1515 (2012)
Johansson, K.: Shape fluctuations and random matrices. Commun. Math. Phys. 209, 437–476 (2000)
Johnstone, I.M.: On the distribution of the largest eigenvalue in principal components analysis. Ann. Stat. 29, 295–327 (2001)
Johnstone, I.M.: High dimensional statistical inference and random matrices. In: Proceedings of International Congress of Mathematicians, pp. 1–28 (2006)
Knowles, A., Yin, J.: The isotropic semicircle law and deformation of Wigner matrices. Commun. Pure Appl. Math. 66, 1663–1749 (2013)
Knowles, A., Yin, J.: The outliers of a deformed Wigner matrix. Ann. Probab. (to appear). arXiv:1207.5619
Knowles, A., Yin, J.: Eigenvector distribution of Wigner matrices. Probab. Theory Relat. Fields 155, 543–582 (2013)
Marchenko, V.A., Pastur, L.A.: Distribution of eigenvalues for some sets of random matrices. Mat. Sbornik 72, 457–483 (1967)
Mestre, X.: Improved estimation of eigenvalues and eigenvectors of covariance matrices using their sample estimates. IEEE Trans. Inf. Theory 54, 5113–5129 (2008)
Nadler, B.: Finite sample approximation results for principal component analysis: a matrix perturbation approach. Ann. Stat. 36, 2791–2817 (2008)
Paul, D.: Asymptotics of sample eigenstructure for a large dimensional spiked covariance model. Stat. Sinica 17, 1617–1642 (2007)
Péché, S.: The largest eigenvalue of small rank perturbations of Hermitian random matrices. Probab. Theory Relat. Fields 134, 127–173 (2006)
Péché, S.: Universality results for the largest eigenvalues of some sample covariance matrix ensembles. Probab. Theory Relat. Fields 143, 481–516 (2009)
Pillai, N.S., Yin, J.: Universality of covariance matrices (preprint). arXiv:1110.2501
Pizzo, A., Renfrew, D., Soshnikov, A.: On finite rank deformations of Wigner matrices. Ann. Inst. Henri Poincaré (B) 49, 64–94 (2013)
Renfrew, D., Soshnikov, A.: On finite rank deformations of Wigner matrices II: delocalized perturbations (preprint). arXiv:1203.5130
Shi, D.: Asymptotic joint distribution of extreme sample eigenvalues and eigenvectors in the spiked population model (preprint)
Soshnikov, A.: A note on universality of the distribution of the largest eigenvalues in certain sample covariance matrices. J. Stat. Phys. 108, 1033–1056 (2002)
Tao, T., Vu, V.: Random matrices: universal properties of eigenvectors. Rand. Matrices Theory Appl. 1, 1150001 (2012)
Tracy, C., Widom, H.: Level-spacing distributions and the Airy kernel. Commun. Math. Phys. 159, 151–174 (1994)
Tracy, C., Widom, H.: On orthogonal and symplectic matrix ensembles. Commun. Math. Phys. 177, 727–754 (1996)
Yin, J.: The local circular law III: general case (preprint). arXiv:1212.6599
Additional information
A. Knowles was partially supported by Swiss National Science Foundation Grant 144662.
H.-T. Yau was partially supported by NSF Grant DMS-1307444 and Simons investigator fellowship.
J. Yin was partially supported by NSF Grant DMS-1207961.
Appendix: A few remarks on applications to statistics
In this appendix we give a few remarks on what our results imply for applications to statistics. We assume throughout that the population covariance matrix satisfies
\[
\Sigma _{kk} \;\leqslant \; C \quad (1 \leqslant k \leqslant M) \qquad\qquad (10.1)
\]
for some constant \(C\).
We consider the following simple model problem. Suppose there is some (unknown) set \(S \subset \{1,\ldots , M\}\) whose associated variables \((a_k)_{k \in S}\) are strongly correlated. For simplicity, let us assume that the correlations are given by a single spike in \(\Sigma \), i.e.
\[
\Sigma \;=\; I_M + (\sigma - 1)\, \mathbf{{v}} \mathbf{{v}}^* \,, \qquad \sigma - 1 \;=\; \phi ^{1/2} d \,, \qquad\qquad (10.2)
\]
where the spike direction \(\mathbf{{v}} = (v(k))_{k = 1}^M\) is given by
\[
v(k) \;=\; |S |^{-1/2}\, \mathbf {1}(k \in S)\,.
\]
(We choose this precise form for \(\mathbf{{v}}\) so as to simplify the presentation as much as possible. The following discussion also holds if \(\mathbf{{v}}\) is essentially supported on \(S\), but the magnitude of its entries is not necessarily constant.) Moreover, for simplicity we assume that \(T = \Sigma ^{1/2}\). In components, we have
\[
\Sigma _{kl} \;=\; \delta _{kl} + \frac{\sigma - 1}{|S |}\, \mathbf {1}(k \in S)\, \mathbf {1}(l \in S)\,.
\]
The goal is to recover the set \(S\) from an observed realization of the sample covariance matrix \({\mathcal {Q}}\). (In the more general case where \(\mathbf{{v}}\) is not constant on \(S\), one may easily recover its entries from the submatrix \(({\mathcal {Q}}_{kl})_{k,l \in S}\) once \(S\) has been determined.)
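To fix ideas, here is a minimal numerical sketch of this model problem. It assumes the normalization \({\mathcal {Q}} = N^{-1} A A^*\) and Gaussian samples; the function name and interface are ours, chosen for illustration only.

```python
import numpy as np

def sample_spiked_Q(M, N, S, sigma, rng):
    """Draw Q = A A^T / N with population covariance
    Sigma = I + (sigma - 1) v v^T, where v is constant on S."""
    v = np.zeros(M)
    v[S] = 1.0 / np.sqrt(len(S))
    # Sigma^{1/2} = I + (sqrt(sigma) - 1) v v^T, since v is a unit vector
    X = rng.standard_normal((M, N))
    A = X + (np.sqrt(sigma) - 1.0) * np.outer(v, v @ X)
    return A @ A.T / N, v

rng = np.random.default_rng(0)
S = np.arange(60)                    # the (unknown) correlated set
Q, v = sample_spiked_Q(M=400, N=1600, S=S, sigma=3.0, rng=rng)
```

Here \(\phi = M/N = 1/4\), so \(\sigma - 1 = 2 \gg \phi ^{1/2}\) and the spike is supercritical.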
The most naive way to proceed is to compare the entries of \({\mathcal {Q}}\) with those of \(\Sigma \). Using (10.1) it is not hard to conclude that
\[
{\mathcal {Q}}_{kl} \;=\; \Sigma _{kl} + O_\prec (N^{-1/2})\,.
\]
We look at the off-diagonal terms of \({\mathcal {Q}}\) and infer that \(k\) belongs to \(S\) if there exists an index \(l\) such that \({\mathcal {Q}}_{kl}\) is much larger than \(N^{-1/2}\). For this approach to work, we require that \(|\Sigma _{kl} | \gg N^{-1/2}\), which reads \((\sigma - 1) v(k) v(l) \gg N^{-1/2}\). We conclude that this naive entrywise approach works provided that
\[
\sigma - 1 \;\gg \; \frac{|S |}{\sqrt{N}}\,. \qquad\qquad (10.3)
\]
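In code, the naive detector might look as follows (continuing the sketch above; the threshold constant is an arbitrary stand-in for "much larger than"):

```python
import numpy as np

def detect_entrywise(Q, N, factor=5.0):
    """Flag index k if some off-diagonal entry |Q_{kl}| greatly
    exceeds the noise level N^{-1/2}."""
    off = np.abs(Q)
    np.fill_diagonal(off, 0.0)
    return np.flatnonzero(off.max(axis=1) > factor / np.sqrt(N))
```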
In contrast, according to Theorems 2.3 and 2.11, looking at the principal components of \({\mathcal {Q}}\) allows us to determine \(S\) from the top eigenvector \(\varvec{\xi }_1\) provided the spike \(\sigma \) is supercritical, i.e. gives rise to an outlier separated by a distance of order one from the bulk spectrum. This gives the condition
\[
\sigma - 1 \;\gg \; \phi ^{1/2}\,. \qquad\qquad (10.4)
\]
The principal component analysis works and the naive entrywise approach does not if (10.4) holds and (10.3) does not. These conditions may be written as
\[
\phi ^{1/2} \;\ll \; \sigma - 1 \;\lesssim \; \frac{|S |}{\sqrt{N}}\,.
\]
Hence the principal component analysis for this example is very effective when the family of correlated variables is quite large, \(|S | \gg \phi ^{1/2} \sqrt{M}\).
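As an illustration, the corresponding spectral detector is equally short (again a sketch only; the function name and the threshold factor are ours):

```python
import numpy as np

def detect_pca(Q, factor=5.0):
    """Flag index k if the top-eigenvector weight xi_1(k)^2 greatly
    exceeds the delocalized background 1/M."""
    M = Q.shape[0]
    _, vecs = np.linalg.eigh(Q)      # eigenvalues in ascending order
    xi1 = vecs[:, -1]                # eigenvector of the largest eigenvalue
    return np.flatnonzero(xi1 ** 2 > factor / M)
```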
More generally, the principal component approach may work in the regime
\[
|S | \;\gg \; \phi ^{1/2} \qquad\qquad (10.5)
\]
and cannot work for smaller \(|S |\). Indeed, the assumption (10.1) is satisfied for \(\sigma \leqslant |S |\), so that in the case of the strongest possible correlations, \(\sigma \asymp |S |\), the condition (10.4) reduces to (10.5). On the other hand, if \(|S | \ll \phi ^{1/2}\), then from (10.2) and the assumption (10.1) we find \(\sigma - 1 \leqslant |S | \ll \phi ^{1/2}\), in contradiction to (10.4).
Clearly, by (10.4), for the purposes of statistical inference it is desirable to make \(\phi \) as small as possible, i.e. to have as many samples per variable as possible. It is therefore natural to attempt to make \(M\) smaller so as to reduce \(\phi \). Obviously, if we know a priori that \(S\) is contained in some subset \(Y\) of \(\{1,\ldots , M\}\) of size \(M/2\), then we simply discard all variables indexed by \(Y^c\) and consider the correlations restricted to \((a_i)_{i \in Y}\); we have halved \(\phi \) in the process.
However, if we have no such a priori knowledge about \(S\), discarding half of the variables \(a_i\) is a bad idea. In this case, the best one can do is to choose \(Y\) at random. Thus, suppose that \(S\) is uniformly distributed among the subsets of \(\{1,\ldots , M\}\) of size \(|S |\). We cut the sample space \(\{1,\ldots , M\}\) in half by keeping only the \(M/2\) first elements. We therefore obtain a new family of variables with dimensional parameters
\[
\widetilde{M} \;=\; \frac{M}{2}\,, \qquad \widetilde{N} \;=\; N\,, \qquad \widetilde{\phi } \;=\; \frac{\widetilde{M}}{\widetilde{N}} \;=\; \frac{\phi }{2}\,.
\]
Let \(\widetilde{\Sigma }\) be the \(M/2 \times M/2\) matrix obtained from \(\Sigma = 1 + \phi ^{1/2} d \mathbf{{v}} \mathbf{{v}}^*\) by restricting it to the first \(M/2\) elements, i.e. \(\widetilde{\Sigma }\mathrel {\mathop :}=(\Sigma _{kl})_{k,l = 1}^{M/2}\). Note that \(\widetilde{\Sigma }\) is again a rank-one perturbation of the identity. We write it in the form
\[
\widetilde{\Sigma } \;=\; 1 + \widetilde{\phi }^{1/2}\, \widetilde{d}\; \widetilde{\mathbf{{v}}} \widetilde{\mathbf{{v}}}^*\,,
\]
where \(\widetilde{\mathbf{{v}}}\) is a unit vector. Since
\[
\big |S \cap \{1, \ldots , M/2\}\big | \;=\; \frac{|S |}{2}\, (1 + o(1))
\]
with high probability, we find that \(\widetilde{v}(k) \approx \sqrt{2}\, v(k)\) for \(k \in S\). Picking an entry \(\Sigma _{kl} = \widetilde{\Sigma }_{kl}\) for \(k,l \in S \cap \{1,\ldots , M/2\}\), we obtain
\[
\phi ^{1/2} d\; v(k)\, v(l) \;=\; \widetilde{\phi }^{1/2}\, \widetilde{d}\; \widetilde{v}(k)\, \widetilde{v}(l) \;\approx \; \sqrt{2}\; \phi ^{1/2}\, \widetilde{d}\; v(k)\, v(l)\,.
\]
Therefore,
\[
\widetilde{d} \;\approx \; \frac{d}{\sqrt{2}}\,.
\]
We conclude that detecting spikes in the new problem is more difficult than in the original problem, and the halving of the sample space is therefore counterproductive unless one has some good a priori information about \(\Sigma \).
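For instance, a spike with \(d = 1.3\) is supercritical and produces an outlier; after halving it becomes \(\widetilde{d} \approx 1.3 / \sqrt{2} \approx 0.92 < 1\), i.e. subcritical, and the outlier is destroyed, even though the new aspect ratio \(\widetilde{\phi } = \phi /2\) is more favourable.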
We conclude this appendix with a remark about the use of the non-outlier principal components for statistical inference. Theorem 2.20 implies that the non-outlier eigenvectors near the edge are all biased in the direction of \(\mathbf{{v}}_i\) provided that \(d_i\) is near the BBP transition point \(1\). Suppose for simplicity that \(\phi \) is bounded. Then Theorem 2.20 implies that for \(d_i\) near \(1\) we have \(\langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2 \asymp (d_i - 1)^{-2} M^{-1}\) with high probability. We can therefore detect the spike \(d_i\) through the resulting bias in the direction \(\mathbf{{v}}_i\) as long as \(|d_i - 1 | \ll 1\). (Recall that the unbiased, or completely delocalized, case corresponds to \(\langle {\mathbf{{w}}} , {\varvec{\xi }_a}\rangle ^2 \asymp M^{-1}\) for any fixed unit vector \(\mathbf{{w}}\).) Hence, the non-outlier eigenvectors \(\varvec{\xi }_a\) retain information even about subcritical spikes. This is in stark contrast to the eigenvalues \(\mu _a\), which, by Theorem 2.7, retain no information about the subcritical spikes.
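A rough Monte Carlo check of this bias, in the same illustrative setup as above (all parameters arbitrary), could look as follows:

```python
import numpy as np

def edge_overlaps(M=500, phi=0.5, d=0.9, size_S=50, n_edge=5, seed=1):
    """Return M * <v, xi_a>^2 for the top n_edge non-outlier eigenvectors;
    values of order (d - 1)^{-2} indicate the bias, order 1 means unbiased."""
    rng = np.random.default_rng(seed)
    N = int(M / phi)
    sigma = 1.0 + np.sqrt(phi) * d     # subcritical spike for d < 1
    v = np.zeros(M)
    v[:size_S] = 1.0 / np.sqrt(size_S)
    X = rng.standard_normal((M, N))
    A = X + (np.sqrt(sigma) - 1.0) * np.outer(v, v @ X)
    _, vecs = np.linalg.eigh(A @ A.T / N)
    return M * (v @ vecs[:, -n_edge:]) ** 2
```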
If \(d_i \leqslant 1 - \tau \) for some fixed \(\tau \), then the principal components of \({\mathcal {Q}}\) contain no information about the spike \(d_i\). We illustrate this using the simple model from (10.2). Suppose we try to determine the set \(S\) by choosing the components of \(\varvec{\xi }_a\) that are much larger than the unbiased background value \(M^{-1/2}\). This works provided that for \(k \in S\) we have
\[
(1 + \phi )^{1/2}\, v(k)^2 \;\gg \; 1\,.
\]
Using \(v(k)^2 = |S |^{-1}\) for \(k \in S\), we therefore get the condition \((1 + \phi )^{1/2} \gg |S |\). This however cannot hold, since we have by assumption (10.1) on \(\Sigma \) that
\[
(1 + \phi )^{1/2} \;\leqslant \; 1 + \phi ^{1/2} \;=\; 1 + \frac{\sigma - 1}{d_i} \;\leqslant \; 1 + \frac{C\, |S |}{d_i}\,,
\]
which is of order \(|S |\) for fixed \(d_i\).
Mathematics Subject Classification
- 15B52
- 60B20
- 82B44