1 Introduction

In this paper we investigate \(M \times M\) sample covariance matrices of the form

$$\begin{aligned} {\mathcal {Q}} = \frac{1}{N} A A^* = \left( {\frac{1}{N} \sum _{\mu = 1}^N A_{i \mu } A_{j \mu }}\right) _{i,j = 1}^M, \end{aligned}$$
(1.1)

where the sample matrix \(A = (A_{i \mu })\) is a real \(M \times N\) random matrix. The main motivation to study such models stems from multivariate statistics. Suppose we are interested in the statistics of \(M\) mean-zero variables \(\mathbf{{a}} = (a_1,\ldots , a_M)^*\) which are thought to possess a certain degree of interdependence. Such problems of multivariate statistics commonly arise in population genetics, economics, wireless communication, the physics of mixtures, and statistical learning [3, 26, 33]. The goal is to unravel the interdependencies among the variables \(\mathbf{{a}}\) by finding the population covariance matrix

$$\begin{aligned} \Sigma = \mathbb {E}\mathbf{{a}} \mathbf{{a}}^* = (\mathbb {E}a_i a_j)_{i,j = 1}^M. \end{aligned}$$
(1.2)

To this end, one performs a large number, \(N\), of repeated, independent measurements, called “samples”, of the variables \(\mathbf{{a}}\). Let \(A_{i\mu }\) denote the value of \(a_i\) in the \(\mu \)-th sample. Then the sample covariance matrix (1.1) is the empirical mean approximating the population covariance matrix \(\Sigma \).
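As a minimal numerical illustration of this approximation (a numpy sketch with an arbitrarily chosen population covariance and hypothetical sizes \(M, N\)), the following forms the empirical mean (1.1) and compares it with \(\Sigma \):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 100_000          # population size M fixed, many samples N

# A hypothetical population covariance Sigma (any positive definite matrix works).
G = rng.standard_normal((M, M))
Sigma = G @ G.T / M + np.eye(M)

# Draw N independent mean-zero samples with covariance Sigma via a = Sigma^{1/2} b.
L = np.linalg.cholesky(Sigma)
A = L @ rng.standard_normal((M, N))   # column mu is the mu-th sample

# Sample covariance matrix (1.1): empirical mean of A_{i mu} A_{j mu} over mu.
Q = A @ A.T / N

# For fixed M, Q approximates Sigma as N grows; the entrywise error is O(N^{-1/2}).
err = np.abs(Q - Sigma).max()
```

For fixed \(M\) the error decays like \(N^{-1/2}\); the regime \(M \asymp N\) discussed below behaves very differently.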

In general, the mean of the variables \(\mathbf{{a}}\) is nonzero and unknown. In that case, the population covariance matrix (1.2) has to be replaced with the general form

$$\begin{aligned} \Sigma = \mathbb {E}\bigl [{(\mathbf{{a}} - \mathbb {E}\mathbf{{a}}) (\mathbf{{a}} - \mathbb {E}\mathbf{{a}})^*}\bigr ]. \end{aligned}$$

Correspondingly, one has to subtract from \(A_{i \mu }\) the empirical mean of the \(i\)-th row of \(A\), which we denote by \([A]_i \mathrel {\mathop :}=\frac{1}{N} \sum _{\mu = 1}^N A_{i \mu }\). Hence, we replace (1.1) with

$$\begin{aligned} \dot{{\mathcal {Q}}} \mathrel {\mathop :}=\frac{1}{N - 1} A (I_N - \mathbf{{e}} \mathbf{{e}}^*) A^* = \left( {\frac{1}{N - 1} \sum _{\mu = 1}^N (A_{i \mu } - [A]_i) (A_{j \mu } - [A]_j)}\right) _{i,j = 1}^M, \end{aligned}$$
(1.3)

where we introduced the vector

$$\begin{aligned} \mathbf{{e}} \mathrel {\mathop :}=N^{-1/2} (1,1,\ldots , 1)^* \in \mathbb {R}^N. \end{aligned}$$
(1.4)

Since \(\dot{{\mathcal {Q}}}\) is invariant under the shift \(A_{i\mu } \mapsto A_{i \mu } + m_i\) for any deterministic vector \((m_i)_{i = 1}^M\), we may assume without loss of generality that \(\mathbb {E}A_{i \mu } = 0\). We shall always make this assumption from now on.
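Both the shift invariance of \(\dot{{\mathcal {Q}}}\) and the equality of the two expressions in (1.3) can be checked directly; the following numpy sketch (with hypothetical dimensions) does both:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 50

A = rng.standard_normal((M, N))

def Q_dot(A):
    """Centred sample covariance (1.3), in the projection form A (I_N - e e^*) A^*."""
    N = A.shape[1]
    e = np.full(N, N ** -0.5)          # the vector e from (1.4)
    P = np.eye(N) - np.outer(e, e)     # I_N - e e^*, projects out the row means
    return A @ P @ A.T / (N - 1)

# Invariance under A_{i mu} -> A_{i mu} + m_i for a deterministic vector m.
m = rng.standard_normal(M)
shift_err = np.abs(Q_dot(A) - Q_dot(A + m[:, None])).max()

# Agreement with the explicit row-centred sum on the right-hand side of (1.3).
row_means = A.mean(axis=1, keepdims=True)   # [A]_i for each row i
Q_explicit = (A - row_means) @ (A - row_means).T / (N - 1)
explicit_err = np.abs(Q_dot(A) - Q_explicit).max()
```

Both errors vanish up to floating-point roundoff, since \(A \mathbf{{e}} = N^{1/2} ([A]_i)_{i=1}^M\).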

It is easy to check that \(\mathbb {E}{\mathcal {Q}} = \mathbb {E}\dot{{\mathcal {Q}}} = \Sigma \). Moreover, we shall see that the principal components of \({\mathcal {Q}}\) and \(\dot{{\mathcal {Q}}}\) have identical asymptotic behaviour. For simplicity of presentation, in the following we focus mainly on \({\mathcal {Q}}\), bearing in mind that every statement we make on \({\mathcal {Q}}\) also holds verbatim for \(\dot{{\mathcal {Q}}}\) (see Theorem 2.23 below).

By the law of large numbers, if \(M\) is fixed and \(N\) taken to infinity, the sample covariance matrix \({\mathcal {Q}}\) converges almost surely to the population covariance matrix \(\Sigma \). In many modern applications, however, the population size \(M\) is very large and obtaining samples is costly. Thus, one is typically interested in the regime where \(M\) is of the same order as \(N\), or even larger. In this case, as it turns out, the behaviour of \({\mathcal {Q}}\) changes dramatically and the problem becomes much more difficult. In principal component analysis, one seeks to understand the correlations by considering the principal components, i.e. the top eigenvalues and associated eigenvectors, of \({\mathcal {Q}}\). These provide an effective low-dimensional projection of the high-dimensional data set \(A\), in which the significant trends and correlations are revealed by discarding superfluous data.

The fundamental question, then, is how the principal components of \(\Sigma = \mathbb {E}{\mathcal {Q}}\) are related to those of \({\mathcal {Q}}\).

1.1 The uncorrelated case

In the “null” case, the variables \(\mathbf{{a}}\) are uncorrelated and \(\Sigma = I_M\) is the identity matrix. The global distribution of the eigenvalues is governed by the Marchenko–Pastur law [30]. More precisely, defining the dimensional ratio

$$\begin{aligned} \phi \equiv \phi _N \mathrel {\mathop :}=\frac{M}{N}, \end{aligned}$$
(1.5)

the empirical eigenvalue density of the rescaled matrix \(Q = \phi ^{-1/2} {\mathcal {Q}}\) has the same asymptotics for large \(M\) and \(N\) as

$$\begin{aligned} \frac{\sqrt{[{(x-\gamma _-)(\gamma _+-x)}]_+}}{2 \pi \sqrt{\phi } \, x}\, \mathrm {d}x + (1-\phi ^{-1})_+ \, \delta (\mathrm {d}x), \end{aligned}$$
(1.6)

where we defined

$$\begin{aligned} \gamma _\pm \mathrel {\mathop :}=\phi ^{1/2}+ \phi ^{-1/2} \pm 2 \end{aligned}$$
(1.7)

to be the edges of the limiting spectrum. Hence, the unique nontrivial eigenvalue \(1\) of \(\Sigma \) spreads out into a bulk spectrum of \({\mathcal {Q}}\) with diameter \(4 \phi ^{1/2}\). Moreover, the local spectral statistics are universal; for instance, the top eigenvalue of \({\mathcal {Q}}\) is distributed according to the Tracy–Widom-1 distribution [24, 25, 42, 43]. Finally, the eigenvectors of \({\mathcal {Q}}\) are uniformly distributed on the unit sphere of \(\mathbb {R}^M\); following [15], we call this property the quantum unique ergodicity of the eigenvectors of \({\mathcal {Q}}\), a term borrowed from quantum chaos. We refer to Theorem 8.3 and Remark 8.4 below for precise statements.
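The Marchenko–Pastur picture is easy to reproduce in simulation; the following numpy sketch (sizes chosen arbitrarily) checks that in the null case the extreme eigenvalues of the rescaled matrix \(Q = \phi ^{-1/2} {\mathcal {Q}}\) lie near the edges \(\gamma _\pm \) of (1.7):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 400, 800
phi = M / N                                 # dimensional ratio (1.5)

# Null case: Sigma = I_M, i.e. A has independent standard entries.
A = rng.standard_normal((M, N))
Q = A @ A.T / N / phi ** 0.5                # rescaled matrix Q = phi^{-1/2} Qcal
evals = np.linalg.eigvalsh(Q)               # ascending order

gamma_minus = phi ** 0.5 + phi ** -0.5 - 2  # spectral edges (1.7)
gamma_plus = phi ** 0.5 + phi ** -0.5 + 2

# The extreme eigenvalues approach the edges, with fluctuations on the scale
# K^{-2/3}, where K = min(M, N).
top_gap = abs(evals[-1] - gamma_plus)
bottom_gap = abs(evals[0] - gamma_minus)
```

With \(\phi < 1\) there is no atom at zero and the whole spectrum fills \([\gamma _-, \gamma _+]\).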

1.2 Examples and outline of the model

The problem becomes much more interesting if the variables \(\mathbf{{a}}\) are correlated. Several models for correlated data have been proposed in the literature, starting with the Gaussian spiked model from the seminal paper of Johnstone [25]. Here we propose a general model which includes many previous models as special cases. We motivate it using two examples.

1.

    Let \(\mathbf{{a}} = T \mathbf{{b}}\), where the entries of \(\mathbf{{b}}\) are independent with zero mean and unit variance, and \(T\) is a deterministic \(M \times M\) matrix. This may be interpreted as an observer studying a complicated system whose randomness is governed by many independent internal variables \(\mathbf{{b}}\). The observer only has access to the external variables \(\mathbf{{a}}\), which may depend on the internal variables \(\mathbf{{b}}\) in some complicated and unknown fashion. Assuming that this dependence is linear, we obtain \(\mathbf{{a}} = T \mathbf{{b}}\). The sample matrix for this model is therefore \(A = T B\), where \(B\) is an \(M \times N\) matrix with independent entries of unit variance. The population covariance matrix is \(\Sigma = T T^*\).

2.

    Let \(r \in \mathbb {N}\) and set

    $$\begin{aligned} \mathbf{{a}} = \mathbf{{z}} + \sum _{l = 1}^r y_l \mathbf{{u}}_l. \end{aligned}$$

    Here \(\mathbf{{z}} \in \mathbb {R}^M\) is a vector of “noise”, whose entries are independent with zero mean and unit variance. The “signal” is given by the contribution of \(r\) terms of the form \(y_l \mathbf{{u}}_l\), whereby \(y_1,\ldots , y_r\) are independent, with zero mean and unit variance, and \(\mathbf{{u}}_1,\ldots , \mathbf{{u}}_r \in \mathbb {R}^M\) are arbitrary deterministic vectors. The sample matrix is

    $$\begin{aligned} A = Z + \sum _{l = 1}^r \mathbf{{u}}_l \mathbf{{y}}_l^*, \end{aligned}$$

    where, writing \(Y \mathrel {\mathop :}=[\mathbf{{y}}_1,\ldots , \mathbf{{y}}_r] \in \mathbb {R}^{N \times r}\!,\) the \((M + r) \times N\) matrix \(B \mathrel {\mathop :}=\left( {\begin{array}{c}Z\\ Y^*\end{array}}\right) \) has independent entries with zero mean and unit variance. Writing \(U \mathrel {\mathop :}=[\mathbf{{u}}_1,\ldots , \mathbf{{u}}_r] \in \mathbb {R}^{M \times r}\), we therefore have

    $$\begin{aligned} A = T B , \qquad T \mathrel {\mathop :}=(I_M, U). \end{aligned}$$

    The population covariance matrix is \(\Sigma = T T^* = I_M + U U^*\).
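The block-matrix algebra of Example (2) is easy to verify numerically; the following sketch (all dimensions hypothetical) checks that \(A = TB\) and that \(\Sigma = TT^* = I_M + UU^*\):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, r = 5, 7, 2

Z = rng.standard_normal((M, N))        # noise samples
Y = rng.standard_normal((N, r))        # Y = [y_1, ..., y_r]
U = rng.standard_normal((M, r))        # U = [u_1, ..., u_r]

# Signal-plus-noise sample matrix: A = Z + sum_l u_l y_l^*.
A = Z + U @ Y.T

# Block form: B stacks Z on top of Y^* and T = (I_M, U), so that A = T B.
B = np.vstack([Z, Y.T])                # (M + r) x N
T = np.hstack([np.eye(M), U])          # M x (M + r)

factorization_err = np.abs(A - T @ B).max()
Sigma_err = np.abs(T @ T.T - (np.eye(M) + U @ U.T)).max()
```

Both identities hold exactly; the only error is floating-point roundoff.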

Below we shall refer to these examples as Examples (1) and (2) respectively. Motivated by them, we now outline our model. Let \(B\) be an \((M + r) \times N\) matrix whose entries are independent with zero mean and unit variance. We choose a deterministic \(M \times (M + r)\) matrix \(T\), and set \({\mathcal {Q}} = \frac{1}{N} TBB^* T^*\). We stress that we do not assume that the underlying randomness is Gaussian. Our key assumptions are (i) \(r\) is bounded; (ii) \(\Sigma - I_M\) has bounded rank; (iii) \(\log N\) is comparable to \(\log M\); (iv) the entries of \(B\) are independent, with zero mean and unit variance, and have a sufficient number of bounded moments. The precise assumptions are given in Sect. 1.3 below. We emphasize that everything apart from \(r\) and the rank of \(\Sigma - I_M\) is allowed to depend on \(N\) in an arbitrary fashion.

As explained around (1.3), in addition to \({\mathcal {Q}}\) we also consider the matrix \(\dot{{\mathcal {Q}}} = \frac{1}{N - 1} TB(I_N - \mathbf{{e}} \mathbf{{e}}^*)B^* T^*\), whose principal components turn out to have the same asymptotic behaviour as those of \({\mathcal {Q}}\).

1.3 Definition of model

In this section we give the precise definition of our model and introduce some basic notations. For convenience, we always work with the rescaled sample covariance matrix

$$\begin{aligned} Q = \phi ^{-1/2} {\mathcal {Q}}. \end{aligned}$$
(1.8)

The motivation behind this rescaling is that, as observed in (1.6), it ensures that the bulk spectrum of \(Q\) has asymptotically a fixed diameter, \(4\), for arbitrary \(N\) and \(M\).

We always regard \(N\) as the fundamental large parameter, and write \(M \equiv M_N\). Here, and throughout the following, in order to unclutter notation we omit the argument \(N\) in quantities, such as \(M\), that depend on it. In other words, every symbol that is not explicitly a constant is in fact a sequence indexed by \(N\). We assume that \(M\) and \(N\) satisfy the bounds

$$\begin{aligned} N^{1/C} \leqslant M \leqslant N^C \end{aligned}$$
(1.9)

for some positive constant \(C\).

Fix a constant \(r = 0,1,2,3,\ldots \). Let \(X\) be an \((M + r) \times N\) random matrix and \(T\) an \(M \times (M + r)\) deterministic matrix. For definiteness, and bearing the motivation of sample covariance matrices in mind, we assume that the entries of \(X\) and \(T\) are real. However, our method also trivially applies to complex-valued \(X\) and \(T\), with merely cosmetic changes to the proofs. We consider the \(M \times M\) matrix

$$\begin{aligned} Q \mathrel {\mathop :}=T X X^* T^*. \end{aligned}$$
(1.10)

Since \(TX\) is an \(M \times N\) matrix, we find that \(Q\) has

$$\begin{aligned} K \mathrel {\mathop :}=M \wedge N \end{aligned}$$
(1.11)

nontrivial (i.e. nonzero) eigenvalues.
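As a quick sanity check of this eigenvalue count, the following numpy sketch builds a small instance of the model (1.10), with the entry normalization (1.15) introduced below and hypothetical sizes, and counts the nonzero eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, r = 6, 4, 2                               # here phi = M/N > 1, so K = N

# Entries of X are independent with variance (N M)^{-1/2}, as in (1.15).
X = rng.standard_normal((M + r, N)) / (N * M) ** 0.25
T = np.hstack([np.eye(M), rng.standard_normal((M, r))])  # deterministic M x (M + r)

Q = T @ X @ X.T @ T.T                           # the model (1.10)
evals = np.linalg.eigvalsh(Q)

# T X is an M x N matrix, so Q = (T X)(T X)^* has rank at most K = min(M, N);
# generically exactly K eigenvalues are nonzero and the remaining M - K vanish.
K = min(M, N)
n_nonzero = int(np.sum(evals > 1e-12))
```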

We define the population covariance matrix

$$\begin{aligned} \Sigma \equiv \Sigma (T) \mathrel {\mathop :}=T T^* = \sum _{i = 1}^M \sigma _i \mathbf{{v}}_i \mathbf{{v}}_i^* = I_M + \phi ^{1/2} \sum _{i = 1}^M d_i \mathbf{{v}}_i \mathbf{{v}}_i^*, \end{aligned}$$
(1.12)

where \(\{\mathbf{{v}}_i\}_{i = 1}^M\) is a real orthonormal basis of \(\mathbb {R}^M\) and \(\{\sigma _i\}_{i = 1}^M\) are the eigenvalues of \(\Sigma \). Here we introduce the representation

$$\begin{aligned} \sigma _i = 1 + \phi ^{1/2} d_i \end{aligned}$$

for the eigenvalues \(\sigma _i\). We always order the values \(d_i\) such that

$$\begin{aligned} d_1 \geqslant d_2 \geqslant \cdots \geqslant d_M. \end{aligned}$$

We suppose that \(\Sigma \) is positive definite, so that each \(d_i\) lies in the interval

$$\begin{aligned} {\mathcal {D}} \mathrel {\mathop :}=(-\phi ^{-1/2}, \infty ). \end{aligned}$$
(1.13)

Moreover, we suppose that \(\Sigma - I_M\) has bounded rank, i.e.

$$\begin{aligned} {\mathcal {R}} \mathrel {\mathop :}=\{{i \mathrel {\mathop :}d_i \ne 0}\} \end{aligned}$$
(1.14)

has bounded cardinality, \(|{\mathcal {R}} | = O(1)\). We call the couples \(((d_i, \mathbf{{v}}_i))_{i \in {\mathcal {R}}}\) the spikes of \(\Sigma \).

We assume that the entries \(X_{i \mu }\) of \(X\) are independent (but not necessarily identically distributed) random variables satisfying

$$\begin{aligned} \mathbb {E}X_{i \mu }=0,\qquad \mathbb {E}X_{i \mu }^2=\frac{1}{\sqrt{N M}}. \end{aligned}$$
(1.15)

In addition, we assume that, for all \(p \in \mathbb {N}\), the random variables \((N M)^{1/4} X_{i\mu }\) have a uniformly bounded \(p\)-th moment. In other words, we assume that there is a constant \(C_p\) such that

$$\begin{aligned} \mathbb {E}\bigl |(N M)^{1/4} X_{i\mu } \bigr |^p \leqslant C_p. \end{aligned}$$
(1.16)

The assumption that (1.16) hold for all \(p \in \mathbb {N}\) may be easily relaxed. For instance, it is easy to check that our results and their proofs remain valid, after minor adjustments, if we only require that (1.16) holds for all \(p \leqslant C\) for some large enough constant \(C\). We do not pursue such generalizations further.

Our results concern the eigenvalues of \(Q\), denoted by

$$\begin{aligned} \mu _1 \geqslant \mu _2 \geqslant \cdots \geqslant \mu _M, \end{aligned}$$

and the associated unit eigenvectors of \(Q\), denoted by

$$\begin{aligned} \varvec{\xi }_1, \varvec{\xi }_2,\ldots , \varvec{\xi }_M \in \mathbb {R}^M. \end{aligned}$$

1.4 Sketch of behaviour of the principal components of \(Q\)

To guide the reader, we now give a heuristic description of the behaviour of the principal components of \(Q\). The spectrum of \(Q\) consists of a bulk spectrum and of outliers, i.e. eigenvalues separated from the bulk. The bulk contains order \(K\) eigenvalues, which are distributed on large scales according to the Marchenko–Pastur law (1.6). In addition, if \(\phi > 1\) there are \(M - K\) trivial eigenvalues at zero. Each \(d_i\) satisfying \(|d_i | > 1\) gives rise to an outlier located near its classical location

$$\begin{aligned} \theta (d) \mathrel {\mathop :}=\phi ^{1/2} + \phi ^{-1/2} + d + d^{-1}. \end{aligned}$$
(1.17)

Any \(d_i\) satisfying \(|d_i | < 1\) does not result in an outlier. We summarize this picture in Fig. 1. The creation or annihilation of an outlier as a \(d_i\) crosses \(\pm 1\) is known as the BBP phase transition [3]. It takes place on the scale \(\bigl ||d_i | - 1 \bigr | \asymp K^{-1/3}\). This scale has a simple heuristic explanation (we focus on the right edge of the spectrum). Suppose that \(d_1 \in (0,1)\) and all other \(d_i\)’s are zero. Then the top eigenvalue \(\mu _1\) exhibits universality, and fluctuates on the scale \(K^{-2/3}\) around \(\gamma _+\) (see Theorem 8.3 and Remark 8.7 below). Increasing \(d_1\) beyond the critical value \(1\), we therefore expect \(\mu _1\) to become an outlier when its classical location \(\theta (d_1)\) is located at a distance greater than \(K^{-2/3}\) from \(\gamma _+\). By a simple Taylor expansion of \(\theta \), the condition \(\theta (d_1) - \gamma _+ \gg K^{-2/3}\) becomes \(d_1 - 1 \gg K^{-1/3}\).
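The BBP dichotomy is straightforward to observe in simulation. The sketch below (a single supercritical spike, all sizes hypothetical) places the spike of \(\Sigma \) along the first coordinate vector and checks that the top eigenvalue of \(Q\) sits near the classical location (1.17), detached from the bulk edge \(\gamma _+\):

```python
import numpy as np

rng = np.random.default_rng(3)
M = N = 500
phi = M / N                                      # phi = 1 in this example

d1 = 3.0                                         # supercritical spike: d1 > 1
sigma1 = 1 + phi ** 0.5 * d1                     # spike eigenvalue of Sigma, cf. (1.12)

# Sigma = I_M + phi^{1/2} d1 v1 v1^* with spike direction v1 = e1.
T = np.eye(M)
T[0, 0] = np.sqrt(sigma1)

A = T @ rng.standard_normal((M, N))
Q = A @ A.T / N / phi ** 0.5                     # rescaled sample covariance (1.8)
mu1 = np.linalg.eigvalsh(Q)[-1]                  # top eigenvalue

gamma_plus = phi ** 0.5 + phi ** -0.5 + 2        # bulk edge (1.7)
theta1 = phi ** 0.5 + phi ** -0.5 + d1 + 1 / d1  # classical outlier location (1.17)

outlier_gap = abs(mu1 - theta1)                  # small for a well-separated spike
separation = mu1 - gamma_plus                    # outlier detached from the bulk
```

Replacing \(d_1\) with a subcritical value in \((0,1)\) makes the top eigenvalue fall back to a neighbourhood of \(\gamma _+\).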

Fig. 1 A typical configuration of \(\{d_i\}\) (above) and the resulting spectrum of \(Q\) (below). An order \(M\) of the \(d_i\)’s are zero, which is symbolized by the thicker dot at \(0\). Any \(d_i\) inside the grey interval \([-1,1]\) does not give rise to an outlier, while any \(d_i\) outside the grey interval gives rise to an outlier located near its classical location \(\theta (d_i)\) and separated from the bulk \([\gamma _-, \gamma _+]\).

Next, we outline the distribution of the outlier eigenvectors. Let \(\mu _i\) be an outlier with associated eigenvector \(\varvec{\xi }_i\). Then \(\varvec{\xi }_i\) is concentrated on a cone [8, 31, 33] with axis parallel to \(\mathbf{{v}}_i\), the corresponding eigenvector of the population covariance matrix \(\Sigma \). More precisely, assuming that the eigenvalue \(\sigma _i = 1 + \phi ^{1/2} d_i\) of \(\Sigma \) is simple, we have

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_i}\rangle ^2 \approx u(d_i), \end{aligned}$$
(1.18)

where we defined

$$\begin{aligned} u(d_i) \equiv u_\phi (d_i) \mathrel {\mathop :}=\frac{\sigma _i}{\phi ^{1/2} \theta (d_i)} (1 - d_i^{-2}) \end{aligned}$$
(1.19)

for \(d_i > 1\). The function \(u\) determines the aperture \(2 \arccos \sqrt{u(d_i)}\) of the cone. Note that \(u(d_i) \in (0,1)\) and \(u(d_i)\) converges to \(1\) as \(d_i \rightarrow \infty \). See Fig. 2.

Fig. 2 The eigenvector \(\varvec{\xi }_i\) associated with an outlier \(\mu _i\) is concentrated on a cone with axis parallel to \(\mathbf{{v}}_i\). The aperture of the cone is determined by \(u(d_i)\) defined in (1.19).

1.5 Summary of previous related results

There is an extensive literature on spiked covariance matrices. So far, most results have focused on the outlier eigenvalues of Example (1), with the nonzero \(d_i\) independent of \(N\) and \(\phi \) fixed. Eigenvectors and the non-outlier eigenvalues have received far less attention.

For the uncorrelated case \(\Sigma = I_M\) and Gaussian \(X\) in (1.10) with fixed \(\phi \), it was proved in [24] for the complex case and in [25] for the real case that the top eigenvalue, rescaled as \(K^{2/3}(\mu _1 - \gamma _+)\), is asymptotically distributed according to the Tracy–Widom law of the appropriate symmetry class [42, 43]. Subsequently, these results were shown to be universal, i.e. independent of the distribution of the entries of \(X\), in [36, 40]. The assumption that \(\phi \) be fixed was relaxed in [17, 35].

The study of covariance matrices with nontrivial population covariance matrix \(\Sigma \ne I_M\) goes back to the seminal paper of Johnstone [25], where the Gaussian spiked model was introduced. The BBP phase transition was established by Baik et al. [3] for complex Gaussian \(X\), fixed rank of \(\Sigma - I_M\), and fixed \(\phi \). Subsequently, the results of [3] were extended to the other Gaussian symmetry classes, such as real covariance matrices, in [11, 12]. The proofs of [3, 34] use an asymptotic analysis of Fredholm determinants, while those of [11, 12] use an explicit tridiagonal representation of \(X X^*\); both of these approaches rely heavily on the Gaussian nature of \(X\). See also [13] for a generalization of the BBP phase transition.

For the model from Example (1) with fixed nonzero \(\{d_i\}\) and \(\phi \), the almost sure convergence of the outliers was established in [4]. It was also shown in [4] that if \(|d_i | < 1\) for all \(i\), the top eigenvalue \(\mu _1\) converges to \(\gamma _+\). For this model, a central limit theorem of the outliers was proved in [2]. In [1], the almost sure convergence of the outliers was proved for a generalized spiked model whose population covariance matrix is of the block diagonal form \(\Sigma = {{\mathrm{diag}}}(A,T)\), where \(A\) is a fixed \(r \times r\) matrix and \(T\) is chosen so that the associated sample covariance matrix has no outliers.

In [8], the almost sure convergence of the projection of the outlier eigenvectors onto the finite-dimensional spike subspace was established, under the assumption that \(\phi \) and the nonzero \(d_i\) are fixed, and that \(B\) and \(T\) are both random and one of them is orthogonally invariant. In particular, the cone concentration from (1.18) was established in [8]. In [33], under the assumption that \(X\) is Gaussian and \(\phi \) and the nonzero \(d_i\) are fixed, a central limit theorem for a certain observable, the so-called sample vector, of the outlier eigenvectors was established. The result of [33] was extended to non-Gaussian entries for a special class of \(\Sigma \) in [39].

Moreover, in [9, 32] results analogous to those of [8] were obtained for the model from Example (2). Finally, a related class of models, the so-called deformed Wigner matrices, has been the subject of much attention in recent years; we refer to [27, 28, 37, 38] for more details; in particular, the joint distribution of all outliers was derived in [28].

1.6 Overview of results

In this subsection we give an informal overview of our results.

We establish results on the eigenvalues \(\mu _i\) and the eigenvectors \(\varvec{\xi }_i\) of \(Q\). Our results consist of large deviation bounds and asymptotic laws. We believe that all of our large deviation bounds from Theorems 2.3, 2.7, 2.11, 2.16, and 2.17 are optimal (up to the technical conditions in the definition of \(\prec \) given in Definition 2.1). We do not prove this. However, we expect that, by combining our method with the techniques of [28], one may also derive the asymptotic laws of all quantities on which we establish large deviation bounds, in particular proving the optimality of our large deviation bounds.

Our results on the eigenvalues of \(Q\) consist of two parts. First, we derive large deviation bounds on the locations of the outliers (Theorem 2.3). Second, we prove eigenvalue sticking for the non-outliers (Theorem 2.7), whereby each non-outlier “sticks” with high probability and very accurately to the eigenvalues of a related covariance matrix satisfying \(\Sigma = I_M\) and whose top eigenvalues exhibit universality. As a corollary (Remark 8.7), we prove that the top non-outlier eigenvalue of \(Q\) has asymptotically the Tracy–Widom-1 distribution. This sticking is very accurate if all \(d_i\)’s are separated from the critical point \(1\), and becomes less accurate if a \(d_i\) is in the vicinity of \(1\). Eventually, it breaks down precisely on the BBP transition scale \(|d_i - 1 | \asymp K^{-1/3}\), at which the Tracy–Widom-1 distribution is known not to hold for the top non-outlier eigenvalue. These results generalize those from [27, Theorem 2.7].

Next, we outline our results for the eigenvectors \(\varvec{\xi }_i\) of \(Q\). We consider the generalized components \(\langle {\mathbf{{w}}} , {\varvec{\xi }_i}\rangle \) of \(\varvec{\xi }_i\), where \(\mathbf{{w}} \in \mathbb {R}^M\) is an arbitrary deterministic vector. In our first result on the eigenvectors (Theorems 2.11 and 2.16), we establish large deviation bounds on the generalized components of outlier eigenvectors [and, more generally, of the outlier spectral projections defined in (2.11) below]. This result gives a quantitative version of the cone concentration from (1.18), which in particular allows us to track the strength of the concentration in the vicinity of the BBP transition and for overlapping outliers. Our results also establish the complete delocalization of an outlier eigenvector \(\varvec{\xi }_i\) in any direction orthogonal to the spike direction \(\mathbf{{v}}_i\), provided the outlier \(\mu _i\) is well separated from the bulk spectrum and other outliers. We say that the vector \(\varvec{\xi }_i\) is completely delocalized, or unbiased, in the direction \(\mathbf{{w}}\) if \(\langle {\mathbf{{w}}} , {\varvec{\xi }_i}\rangle ^2 \prec M^{-1}\), where “\(\prec \)” denotes a high probability bound up to powers of \(M^\varepsilon \) (see Definition 2.1).

If the outlier \(\mu _i\) approaches the bulk spectrum or another outlier, the cone concentration becomes less accurate. For the case of two nearby outlier eigenvalues, for instance, the cone concentration (1.18) of the eigenvectors breaks down when the distributions of the outlier eigenvalues have a nontrivial overlap. In order to understand this behaviour in more detail, we introduce the deterministic projection

$$\begin{aligned} \Pi _A \mathrel {\mathop :}=\sum _{i \in A} \mathbf{{v}}_i \mathbf{{v}}_i^*, \end{aligned}$$
(1.20)

where \(A \subset \{1,\ldots , M\}\). Then the cone concentration from (1.18) may be written as \(|\Pi _{\{i\}} \varvec{\xi }_i |^2 \approx u(d_i) |\varvec{\xi }_i |^2\). In contrast, in the degenerate case where \(d_1 = d_2 > 1\) and all other \(d_i\)’s are zero, (1.18) is replaced with

$$\begin{aligned} \langle {\varvec{\xi }_i} , {\Pi _{\{1,2\}} \varvec{\xi }_j}\rangle \approx \delta _{ij} \, u(d_1) |\varvec{\xi }_i | |\varvec{\xi }_j |, \end{aligned}$$
(1.21)

where \(i,j \in \{1,2\}\). We deduce that each \(\varvec{\xi }_i\) lies on the cone

$$\begin{aligned} |\Pi _{\{1,2\}} \varvec{\xi }_i |^2 \approx u(d_1) |\varvec{\xi }_i |^2, \end{aligned}$$
(1.22)

and that \(\Pi _{\{1,2\}} \varvec{\xi }_1 \perp \Pi _{\{1,2\}} \varvec{\xi }_2\). Moreover, we prove that \(\varvec{\xi }_i\) is completely delocalized in any direction orthogonal to \(\mathbf{{v}}_1\) and \(\mathbf{{v}}_2\). The interpretation is that \(\varvec{\xi }_1\) and \(\varvec{\xi }_2\) both lie on the cone (1.22), that they are orthogonal on both the range and null space of \(\Pi _{\{1,2\}}\), and that beyond these constraints their distribution is unbiased (i.e. isotropic). Finally, we note that the preceding discussion remains unchanged if one interchanges \(\varvec{\xi }_i\) and \(\mathbf{{v}}_i\). We refer to Example 2.15 below for more details.

In our second result on the eigenvectors (Theorem 2.17), we establish delocalization bounds for the generalized components of non-outlier eigenvectors \(\varvec{\xi }_i\). In particular, we prove complete delocalization of non-outlier eigenvectors in directions orthogonal to any spike \(\mathbf{{v}}_j\) whose value \(d_j\) is near the critical point 1. In addition, we prove that non-outlier eigenvectors away from the edge are completely delocalized in all directions. The complete delocalization in the direction \(\mathbf{{v}}_j\) breaks down if \(|d_j - 1 | \ll 1\). The interpretation of this result is that any spike \(d_j\) near the BBP transition point \(1\) causes all non-outlier eigenvectors \(\varvec{\xi }_i\) near the upper edge of the bulk spectrum to have a bias in the direction \(\mathbf{{v}}_j\), in contrast to the completely delocalized case where \(\varvec{\xi }_i\) is uniformly distributed on the unit sphere.

In our final result on the eigenvectors (Theorem 2.20), we give the asymptotic law of the generalized component \(\langle {\mathbf{{w}}} , {\varvec{\xi }_i}\rangle \) of a non-outlier eigenvector \(\varvec{\xi }_i\). In particular, we prove that this generalized component is asymptotically Gaussian and has a variance predicted by the delocalization bounds from Theorem 2.17. For instance, we prove that if \(|d_j - 1 | \gg K^{-1/3}\) then

$$\begin{aligned} \langle {\mathbf{{v}}_j} , {\varvec{\xi }_i}\rangle ^2 = \frac{\sigma _j}{M (d_j - 1)^2} \Theta , \end{aligned}$$
(1.23)

for all non-outlier indices \(i\) that are not too large (see Theorem 2.20 for a precise statement). Here \(\Theta \) is a random variable that converges in distribution to a chi-squared variable. If \(\varvec{\xi }_i\) were completely delocalized in the direction \(\mathbf{{v}}_j\), the right-hand side would be of order \(M^{-1}\). Suppose for simplicity that \(\phi \) is of order one. The bias of \(\varvec{\xi }_i\) in the direction \(\mathbf{{v}}_j\) emerges as soon as \(|d_j - 1 | \ll 1\), and reaches a magnitude of order \(M^{-1/3}\) for \(d_j\) near the BBP transition. This is much larger than the unbiased \(M^{-1}\). Note that this phenomenon applies simultaneously to all non-outlier eigenvectors near the right edge: the right-hand side of (1.23) does not depend on \(i\). Note also that the right-hand side of (1.23) is insensitive to the sign of \(d_j - 1\). In particular, the bias is also present for subcritical spikes. We conclude that even subcritical spikes are observable in the principal components. In contrast, if one only considers the eigenvalues of the principal components, the subcritical spikes cannot be detected; this follows from the eigenvalue sticking result in Theorem 2.7.

Finally, the proofs of universality of the non-outlier eigenvalues and eigenvectors require the universality of \(Q\) for the uncorrelated case \(\Sigma = I_M\) as input. This universality result is given in Theorem 8.3, which is also of some independent interest. It establishes the joint, fixed-index, universality of the eigenvalues and eigenvectors of \(Q\) (and hence, as a special case, the quantum unique ergodicity of the eigenvectors of \(Q\) mentioned in Sect. 1.1). It works for all eigenvalue indices \(i\) satisfying \(i \leqslant K^{1 - \tau }\) for any fixed \(\tau > 0\).

We conclude this subsection by outlining the key novelties of our work.

(i)

We introduce the general models \(Q\) from (1.10) and \(\dot{Q}\) from (2.23) below, which subsume and generalize several models considered previously in the literature. We allow the entries of \(X\) to be arbitrary random variables (up to a technical assumption on their tails). All quantities except \(r\) and the rank of \(\Sigma - I_M\) may depend on \(N\). We make no assumption on \(T\) beyond the bounded-rank condition on \(T T^* - I_M\). The dimensions \(M\) and \(N\) may be wildly different, and are only subject to the technical condition (1.9).

(ii)

    We study the behaviour of the principal components of \(Q\) near the BBP transition and when outliers collide. Our results hold for generalized components \(\langle {\mathbf{{w}}} , {\varvec{\xi }_i}\rangle \) of the eigenvectors in arbitrary directions \(\mathbf{{w}}\).

(iii)

    We obtain quantitative bounds (i.e. rates of convergence) on the outlier eigenvalues and the generalized components of the eigenvectors. We believe these bounds to be optimal.

(iv)

    We obtain precise information about the non-outlier principal components. A novel observation is that, provided there is a \(d_i\) satisfying \(|d_i - 1 | \ll 1\) (i.e. \(Q\) is near the BBP transition), all non-outlier eigenvectors near the edge will be biased in the direction of \(\mathbf{{v}}_i\). In particular, non-outlier eigenvectors, unlike non-outlier eigenvalues, retain some information about the subcritical spikes of \(\Sigma \).

(v)

    We establish the joint, fixed-index, universality of the eigenvalues and eigenvectors for the case \(\Sigma = I_M\). This result holds for any eigenvalue indices \(i\) satisfying \(i \leqslant K^{1 - \tau }\) for an arbitrary \(\tau > 0\). Note that previous works [29, 41] (established in the context of Wigner matrices) required either the much stronger condition \(i \leqslant (\log K)^{C \log \log K}\) or a four-moment matching condition.

We remark that the large deviation bounds derived in this paper also allow one to derive the joint distribution of the generalized components of the outlier eigenvectors; this will be the subject of future work.

1.7 Conventions

The fundamental large parameter is \(N\). All quantities that are not explicitly constant may depend on \(N\); we almost always omit the argument \(N\) from our notation.

We use \(C\) to denote a generic large positive constant, which may depend on some fixed parameters and whose value may change from one expression to the next. Similarly, we use \(c\) to denote a generic small positive constant. For two positive quantities \(A_N\) and \(B_N\) depending on \(N\) we use the notation \(A_N \asymp B_N\) to mean \(C^{-1} A_N \leqslant B_N \leqslant C A_N\) for some positive constant \(C\). For \(a < b\) we set \([\![{a,b}]\!] \mathrel {\mathop :}=[a,b] \cap \mathbb {Z}\). We use the notation \(\mathbf{{v}} = (v(i))_{i = 1}^M\) for vectors in \(\mathbb {R}^M\), and denote by \(|\cdot |= \Vert \cdot \Vert _2\) the Euclidean norm of vectors and by \(\Vert \cdot \Vert \) the corresponding operator norm of matrices. We use \(I_M\) to denote the \(M \times M\) identity matrix, which we also sometimes write simply as \(1\) when there is no risk of confusion.

We use \(\tau > 0\) in various assumptions to denote a positive constant that may be chosen arbitrarily small. A smaller value of \(\tau \) corresponds to a weaker assumption. All of our estimates depend on \(\tau \), and we neither indicate nor track this dependence.

2 Results

In this section we state our main results. The following notion of a high-probability bound was introduced in [18], and has been subsequently used in a number of works on random matrix theory. It provides a simple way of systematizing and making precise statements of the form “\(A\) is bounded with high probability by \(B\) up to small powers of \(N\)”.

Definition 2.1

(Stochastic domination) Let

$$\begin{aligned} A = \left( {A^{(N)}(u) \mathrel {\mathop :}N \in \mathbb {N}, u \in U^{(N)}}\right) , \qquad B = \left( {B^{(N)}(u) \mathrel {\mathop :}N \in \mathbb {N}, u \in U^{(N)}}\right) \end{aligned}$$

be two families of nonnegative random variables, where \(U^{(N)}\) is a possibly \(N\)-dependent parameter set. We say that \(A\) is stochastically dominated by \(B\), uniformly in \(u\), if for all (small) \(\varepsilon > 0\) and (large) \(D > 0\) we have

$$\begin{aligned} \sup _{u \in U^{(N)}} \mathbb {P}\Bigl [{A^{(N)}(u) > N^\varepsilon B^{(N)}(u)}\Bigr ] \leqslant N^{-D} \end{aligned}$$
(2.1)

for large enough \(N\geqslant N_0(\varepsilon , D)\). Throughout this paper the stochastic domination will always be uniform in all parameters (such as matrix indices) that are not explicitly fixed. Note that \(N_0(\varepsilon , D)\) may depend on the constants from (1.9) and (1.16) as well as any constants fixed in the assumptions of our main results. If \(A\) is stochastically dominated by \(B\), uniformly in \(u\), we use the notation \(A \prec B\). Moreover, if for some complex family \(A\) we have \(|A | \prec B\) we also write \(A = O_\prec (B)\).

Remark 2.2

Because of (1.9), all (or some) factors of \(N\) in Definition 2.1 could be replaced with \(M\) without changing the definition of stochastic domination.
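As a toy illustration of Definition 2.1 (the families \(A^{(N)}\) and \(B^{(N)}\) below are our own choice, not drawn from the paper), the following sketch probes the CLT-scale domination \(|\sum _\mu \varepsilon _\mu | \prec \sqrt{N}\) for independent signs \(\varepsilon _\mu \), at the single point \(\varepsilon = 0.25\):

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials, eps = 4_000, 500, 0.25

# A^(N) = |sum of N independent signs|, B^(N) = sqrt(N); the CLT suggests A < B
# in the sense of stochastic domination.
sums = rng.choice([-1.0, 1.0], size=(trials, N)).sum(axis=1)
A = np.abs(sums)
threshold = N**eps * np.sqrt(N)        # N^eps * B^(N)

# Hoeffding's inequality gives P[A > N^eps * B] <= 2 exp(-N^(2 eps) / 2),
# which is astronomically small here, so no trial should exceed the threshold.
exceed = float(np.mean(A > threshold))
print("empirical P[A > N^eps B] =", exceed)
```

Definition 2.1 of course quantifies over all \(\varepsilon \) and \(D\) and all large \(N\); the simulation only samples one point of that family.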

2.1 Eigenvalue locations

We begin with results on the locations of the eigenvalues of \(Q\). These results will also serve as a fundamental input for the proofs of the results on eigenvectors presented in Sects. 2.2 and 2.3.

Recall that \(Q\) has \(M - K\) zero eigenvalues. We shall therefore focus on the \(K\) nontrivial eigenvalues \(\mu _1 \geqslant \cdots \geqslant \mu _K\) of \(Q\). On the global scale, the eigenvalues of \(Q\) are distributed according to the Marchenko–Pastur law (1.6). This may be easily inferred from the fact that (1.6) gives the global density of the eigenvalues for the uncorrelated case \(\Sigma = I_M\), combined with eigenvalue interlacing (see Lemma 4.1 below). In this section we focus on local eigenvalue information.

We introduce the set of outlier indices

$$\begin{aligned} {\mathcal {O}} \mathrel {\mathop :}=\bigl \{{i \in {\mathcal {R}} \mathrel {\mathop :}|d_i | \geqslant 1 + K^{-1/3}}\bigr \}. \end{aligned}$$
(2.2)

As explained in Sect. 1.4, each \(i \in {\mathcal {O}}\) gives rise to an outlier of \(Q\) near the classical location \(\theta (d_i)\) defined in (1.17). In the definition (2.2), the lower bound \(1 + K^{-1/3}\) is chosen for definiteness; it could be replaced with \(1 + a K^{-1/3}\) for any fixed \(a > 0\). We denote by

$$\begin{aligned} s_{\pm } \mathrel {\mathop :}=\bigl |\bigl \{{i \in {\mathcal {O}} \mathrel {\mathop :}\pm d_i > 0}\bigr \} \bigr | \end{aligned}$$
(2.3)

the number of outliers to the left (\(s_-\)) and right (\(s_+\)) of the bulk spectrum.

For \(d \in {\mathcal {D}} \setminus [-1,1]\) we define

$$\begin{aligned} \Delta (d) \mathrel {\mathop :}={\left\{ \begin{array}{ll} \frac{\phi ^{1/2} \theta (d)}{1 + (|d | - 1)^{-1/2}} &{} \text {if } -\phi ^{-1/2} < d < -1 \\ (d - 1)^{1/2} &{} \text {if } 1 < d \leqslant 2 \\ 1 + \frac{d}{1 + \phi ^{-1/2}} &{} \text {if } d \geqslant 2. \end{array}\right. } \end{aligned}$$

The function \(\Delta (d)\) will be used to give an upper bound on the magnitude of the fluctuations of an outlier associated with \(d\). We give such a precise expression for \(\Delta \) in order to obtain sharp large deviation bounds for all \(d \in {\mathcal {D}}\!\setminus \![-1,1]\). (Note that the discontinuity of \(\Delta \) at \(d = 2\) is immaterial since \(\Delta \) is used as an upper bound with respect to \(\prec \). The ratio of the right- and left-sided limits at \(2\) of \(\Delta \) lies in \([1,3]\).)
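The bracketed claim can be verified directly: the left-sided limit of \(\Delta \) at \(d = 2\) is \((2 - 1)^{1/2} = 1\) and the right-sided limit is \(1 + 2/(1 + \phi ^{-1/2})\), which lies in \((1,3)\) for every \(\phi > 0\). A quick numerical sweep (a sketch; the grid of \(\phi \) values is arbitrary):

```python
import numpy as np

phi = np.logspace(-3, 3, 1001)           # aspect ratios phi = M / N
left = (2.0 - 1.0) ** 0.5                # lim_{d -> 2-} Delta(d) = (d - 1)^{1/2}
right = 1.0 + 2.0 / (1.0 + phi ** -0.5)  # lim_{d -> 2+} Delta(d)
ratio = right / left

print(float(ratio.min()), float(ratio.max()))  # stays inside (1, 3)
```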

Our result on the outlier eigenvalues is the following.

Theorem 2.3

(Outlier locations) Fix \(\tau > 0\). Then for \(i \in {\mathcal {O}}\) we have the estimate

$$\begin{aligned} |\mu _{i} - \theta (d_i) | \prec \Delta (d_i) \, K^{-1/2} \end{aligned}$$
(2.4)

provided that \(d_i > 0\) or \(|\phi - 1 | \geqslant \tau \).

Furthermore\(,\) the extremal non-outliers \(\mu _{s_+ + 1}\) and \(\mu _{K - s_-}\) satisfy

$$\begin{aligned} |\mu _{s_++1} - \gamma _+ | \prec K^{-2/3}, \end{aligned}$$
(2.5)

and\(,\) assuming in addition that \(|\phi -1 | \geqslant \tau ,\)

$$\begin{aligned} |\mu _{K-s_-} - \gamma _- | \prec K^{-2/3}. \end{aligned}$$
(2.6)

Remark 2.4

    Theorem 2.3 gives large deviation bounds for the locations of the outliers to the right of the bulk. Since \(\tau > 0\) may be arbitrarily small, Theorem 2.3 also gives the full information about the outliers to the left of the bulk except in the case where \(\phi < 1\) and \(\phi = 1 + o(1)\). Although our methods may be extended to this case as well, we exclude it here to avoid extraneous complications.

Remark 2.5

By definition of \(s_-\) and \({\mathcal {D}}\), if \(\phi >1\) then \(s_-=0\). Hence, by (2.6), if \(\phi > 1\) there are no outliers on the left of the bulk spectrum.

Remark 2.6

    Previously, the model from Example (1) in Sect. 1.2 with fixed nonzero \(\{d_i\}\) and \(\phi \) was investigated in [2, 4]. In [4], it was proved that each outlier eigenvalue \(\mu _i\) with \(i \in {\mathcal {O}}\) converges almost surely to \(\theta (d_i)\). Moreover, a central limit theorem for \(\mu _i\) was established in [2].

The locations of the non-outlier eigenvalues \(\mu _i\), \(i \notin {\mathcal {O}}\), are governed by eigenvalue sticking, whereby the eigenvalues of \(Q\) “stick” with high probability to eigenvalues of a reference matrix which has a trivial population covariance matrix. The reference matrix is \(Q\) from (1.10) with uncorrelated entries. More precisely, we set

$$\begin{aligned} H \mathrel {\mathop :}=Y Y^*, \qquad Y \mathrel {\mathop :}=(I_M, 0) O X, \end{aligned}$$
(2.7)

    where \(O \equiv O(T) \in \mathrm O(M + r)\) is a deterministic orthogonal matrix. It is easy to check that \(\mathbb {E}H = \phi ^{-1/2} I_M\), so that \(H\) corresponds to an uncorrelated population. The matrix \(O(T)\) is explicitly given in (8.1) below. In fact, in Theorem 8.3 below we prove the universality of the joint distribution of non-bulk eigenvalues and eigenvectors of \(H\). Here, by definition, we say that an index \(i \in [\![{1,K}]\!]\) is non-bulk if \(i \notin [\![{K^{1 - \tau }, K - K^{1 - \tau }}]\!]\) for some fixed \(\tau > 0\). In particular, the asymptotic distribution of the non-bulk eigenvalues and eigenvectors of \(H\) does not depend on the choice of \(O\). Note that for the special case \(r = 0\) the eigenvalues of \(H\) coincide with those of \(X X^*\). We denote by

$$\begin{aligned} \lambda _1\geqslant \lambda _2 \geqslant \cdots \geqslant \lambda _M \end{aligned}$$

the eigenvalues of \(H\).

Theorem 2.7

(Eigenvalue sticking) Define

$$\begin{aligned} \alpha _\pm \mathrel {\mathop :}=\min _{1 \leqslant i \leqslant M} |d_i \mp 1 |. \end{aligned}$$
(2.8)

Fix \(\tau > 0\). Then we have for all \(i \in [\![{1, (1 - \tau ) K}]\!]\)

$$\begin{aligned} |\mu _{i + s_+} - \lambda _i | \prec \frac{1}{K \alpha _+}. \end{aligned}$$
(2.9)

Similarly\(,\) if \(|\phi - 1 | \geqslant \tau \) then we have for all \(i \in [\![{\tau K, K}]\!]\)

$$\begin{aligned} |\mu _{i - s_-} - \lambda _i | \prec \frac{1}{K \alpha _-}. \end{aligned}$$
(2.10)

Remark 2.8

As outlined above, in Theorem 8.3 below we prove that the asymptotic joint distribution of the non-bulk eigenvalues of \(H\) is universal, i.e. it coincides with that of the Wishart matrix \(H_{\mathrm{{Wish}}} = X X^*\) with \(r = 0\) and \(X\) Gaussian. As an immediate corollary of Theorems 2.7 and 8.3, we obtain the universality of the non-outlier eigenvalues of \(Q\) with index \(i \leqslant K^{1 - \tau } \alpha _+^3\). This condition states simply that the right-hand side of (2.9) is much smaller than the scale on which the eigenvalue \(\lambda _i\) fluctuates, which is \(K^{-2/3} i^{-1/3}\). See Remark 8.7 below for a precise statement.

Remark 2.9

Theorem 2.7 is analogous to Theorem 2.7 of [27], where sticking was first established for Wigner matrices. Previously, eigenvalue sticking was established for a certain class of random perturbations of Wigner matrices in [6, 7]. We refer to [27, Remark 2.8] for a more detailed discussion.

Aside from holding for general covariance matrices of the form (1.10), Theorem 2.7 is stronger than its counterpart from [27] because it holds much further into the bulk: in [27, Theorem 2.7], sticking was established under the assumption that \(i \leqslant (\log K)^{C \log \log K}\).
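A numerical sketch of the sticking phenomenon (same assumed entry normalization \(\mathbb {E} X_{i\mu }^2 = (NM)^{-1/2}\) as above; the tolerance is deliberately generous, to absorb the \(N^\varepsilon \) factors implicit in \(\prec \)):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, d = 400, 800, 2.0
K = min(M, N)

# The same sample matrix feeds both Q (one spike, so s_+ = 1) and the
# reference matrix H = X X^T with trivial population covariance.
X = rng.standard_normal((M, N)) * (N * M) ** -0.25
sqrt_sigma = np.ones(M); sqrt_sigma[0] = np.sqrt(1.0 + d)
Q = (sqrt_sigma[:, None] * (X @ X.T)) * sqrt_sigma[None, :]

mu = np.linalg.eigvalsh(Q)[::-1]         # mu_1 >= mu_2 >= ...
lam = np.linalg.eigvalsh(X @ X.T)[::-1]  # lambda_1 >= lambda_2 >= ...

# (2.9) with alpha_+ = min_i |d_i - 1| = 1 here: |mu_{i+1} - lambda_i| < 1/K
# up to N^eps factors, much smaller than the edge spacing K^{-2/3}.
gaps = np.abs(mu[1:11] - lam[:10])
print(float(gaps.max()), 1.0 / K)
```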

Remark 2.10

The edge universality following from Theorem 2.7 (as explained in Remark 2.8) generalizes the recent result [5]. There, for the model from Example (1) in Sect. 1.2 with fixed nonzero \(\{d_i\}\) and \(\phi \), it was proved that if \(d_i < 1\) for all \(i\) and \(\Sigma \) is diagonal, then \(\mu _1\) converges (after a suitable affine transformation) in distribution to the Tracy–Widom-1 distribution.

2.2 Outlier eigenvectors

    We now state our main results for the outlier eigenvectors. Stating results on eigenvectors requires some care, since there is some arbitrariness in the definition of the eigenvector \(\varvec{\xi }_i\) of \(Q\). In order to get rid of the arbitrariness in the sign (or, in the complex case, the phase) of \(\varvec{\xi }_i\) we consider products of generalized components,

$$\begin{aligned} \langle {\mathbf{{v}}} , {\varvec{\xi }_i}\rangle \langle {\varvec{\xi }_i} , {\mathbf{{w}}}\rangle . \end{aligned}$$

It is easy to check that these products characterize the eigenvector \(\varvec{\xi }_i\) completely, up to the ambiguity of a global sign (or phase). More generally, one may consider the generalized components \(\langle {\mathbf{{v}}} , {(\cdot ) \, \mathbf{{w}}}\rangle \) of the (random) spectral projection

$$\begin{aligned} P_A \mathrel {\mathop :}=\sum _{i \in A} \varvec{\xi }_{i} \varvec{\xi }_{i}^*, \end{aligned}$$
(2.11)

where \(A \subset {\mathcal {O}}\).

In the simplest case \(A = \{i\}\) the generalized components of \(P_A\) characterize the generalized components of \(\varvec{\xi }_i\). The need to consider higher-dimensional projections arises if one considers degenerate or almost degenerate outliers. Suppose for example that \(d_1 \approx d_2\) and all other \(d_i\)’s are zero. Then the cone concentration (1.18) fails, to be replaced with (1.21). The failure of the cone concentration is also visible in our results as a blowup of the error bounds. This behaviour is not surprising, since for degenerate outliers \(d_1 = d_2\) it makes no sense to distinguish the associated spike eigenvectors \(\mathbf{{v}}_1\) and \(\mathbf{{v}}_2\); only the eigenspace matters. Correspondingly, we have to consider the orthogonal projection onto the eigenspace of the outliers in \(A\). See Example 2.15 below for a more detailed discussion.

For \(i \in [\![{1,M}]\!]\) we define \(\nu _i \geqslant 0\) through

$$\begin{aligned} \nu _i \equiv \nu _i(A) \mathrel {\mathop :}={\left\{ \begin{array}{ll} \min _{j \notin A} |d_i - d_j | &{} \text {if } i \in A \\ \min _{j \in A} |d_i - d_j | &{} \text {if } i \notin A. \end{array}\right. } \end{aligned}$$

    In other words, \(\nu _i(A)\) is the distance from \(d_i\) to the set \(\{d_j\}_{j \in A}\) or \(\{d_j\}_{j \notin A}\), whichever does not contain it. For a vector \(\mathbf{{w}} \in \mathbb {R}^M\) we also introduce the shorthand

$$\begin{aligned} w_i \mathrel {\mathop :}=\langle {\mathbf{{v}}_i} , {\mathbf{{w}}}\rangle \end{aligned}$$

to denote the components of \(\mathbf{{w}}\) in the eigenbasis of \(\Sigma \).

For definiteness, we only state our results for the outliers on the right-hand side of the bulk spectrum. Analogous results hold for the outliers on the left-hand side. Since the behaviour of the fluctuating error term is different in the regimes \(\mu _i - \gamma _+ \ll 1\) (near the bulk) and \(\mu _i - \gamma _+ \gg 1\) (far from the bulk), we split these two cases into separate theorems.

Theorem 2.11

(Outlier eigenvectors near bulk) Fix \(\tau > 0\). Suppose that \(A \subset {\mathcal {O}}\) satisfies \(1 + K^{-1/3} \leqslant d_i \leqslant \tau ^{-1}\) for all \(i \in A\). Define the deterministic positive quadratic form

$$\begin{aligned} \langle {\mathbf{{w}}} , {Z_A \mathbf{{w}}}\rangle \mathrel {\mathop :}=\sum _{i \in A} u(d_i) w_i^2, \end{aligned}$$

where we recall the definition (1.19) of \(u(d_i)\). Then for any deterministic \(\mathbf{{w}} \in \mathbb {R}^M\) we have

$$\begin{aligned} \langle {\mathbf{{w}}} , {P_A \mathbf{{w}}}\rangle&= \langle {\mathbf{{w}}} , {Z_A \mathbf{{w}}}\rangle + O_\prec \left[ \sum _{i \in A} \frac{w_i^2}{M^{1/2}(d_i - 1)^{1/2}} + \sum _{i = 1}^M \frac{\sigma _i w_i^2}{M \nu _i(A)^2}\right. \nonumber \\&\left. +\langle {\mathbf{{w}}} , {Z_A \mathbf{{w}}}\rangle ^{1/2} \left( {\sum _{i \notin A} \frac{\sigma _i w_i^2}{M \nu _i(A)^2}}\right) ^{1/2}\right] . \end{aligned}$$
(2.12)

Note that the last error term is zero if \(\mathbf{{w}}\) is in the subspace \({{\mathrm{Span}}}\{\mathbf{{v}}_i\}_{i \in A}\) or orthogonal to it.

Remark 2.12

Theorem 2.11 may easily also be stated for more general quantities of the form \(\langle {\mathbf{{v}}} , {P_A \mathbf{{w}}}\rangle \). We omit the precise statement; it is a trivial corollary of (5.2) below, which holds under the assumptions of Theorem 2.11.

    We emphasize that the set \(A\) in Theorem 2.11 may be chosen at will. If all outliers are well-separated, then the choice \(A = \{i\}\) gives the most precise information. However, as explained at the beginning of this subsection, the indices of outliers that are close to each other should be included in the same set \(A\). Thus, the freedom to choose \(|A | \geqslant 2\) is meant for degenerate or almost degenerate outliers. (In fact, as explained after (2.14) below, the correct notion of closeness of outliers is that of overlapping.)

We consider a few examples.

Example 2.13

Let \(A = \{i\}\) and \(\mathbf{{w}} = \mathbf{{v}}_i\). Then we get from (2.12)

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_i}\rangle ^2 = u(d_i) + O_\prec \Biggl [{\frac{1}{M^{1/2} (d_i - 1)^{1/2}} + \frac{\sigma _i}{M \nu _i^2}}\Biggr ]. \end{aligned}$$
(2.13)

This gives a precise version of the cone concentration from (1.18). Note that the cone concentration holds provided the error is much smaller than the main term \(u(d_i)\), which leads to the conditions

$$\begin{aligned} d_i - 1 \gg K^{-1/3} \qquad \mathrm{{and}} \qquad \nu _i \gg (d_i - 1)^{-1/2} K^{-1/2}\,; \end{aligned}$$
(2.14)

here we used that \(d_i \asymp 1\) and \(M \asymp (1 + \phi ) K\).

We claim that both conditions in (2.14) are natural and necessary. The first condition of (2.14) simply means that \(\mu _i\) is an outlier. The second condition of (2.14) is a non-overlapping condition. To understand it, recall from (2.4) that \(\mu _i\) fluctuates on the scale \((d_i - 1)^{1/2} K^{-1/2}\). Then \(\mu _i\) is a non-overlapping outlier if all other outliers are located with high probability at a distance greater than this scale from \(\mu _i\). Recalling the definition of the classical location \(\theta (d_i)\) of \(\mu _i\), the non-overlapping condition becomes

$$\begin{aligned} \min _{j \in {\mathcal {O}} \setminus \{i\}} |\theta (d_j) - \theta (d_i) | \gg (d_i - 1)^{1/2} K^{-1/2}. \end{aligned}$$
(2.15)

After a simple estimate using the definition of \(\theta \), we find that this is precisely the second condition of (2.14). The degeneracy or almost degeneracy of outliers discussed at the beginning of this subsection is hence to be interpreted more precisely in terms of overlapping of outliers.
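To spell out this estimate, assume the explicit form \(\theta (d) = \phi ^{1/2} + \phi ^{-1/2} + d + d^{-1}\) of the classical location (consistent with the normalization of Sect. 3; (1.17) is not restated in this section, so we take this form as given). Then for \(d_i, d_j > 1\),

$$\begin{aligned} \theta (d_j) - \theta (d_i) = (d_j - d_i) \left( {1 - \frac{1}{d_i d_j}}\right) , \qquad \bigl |\theta (d_j) - \theta (d_i) \bigr | \asymp |d_j - d_i | \, (d_i - 1) \end{aligned}$$

in the critical regime \(d_j \approx d_i \approx 1\), which is where (2.15) is binding. Inserting this into (2.15) gives \(\nu _i (d_i - 1) \gg (d_i - 1)^{1/2} K^{-1/2}\), i.e. \(\nu _i \gg (d_i - 1)^{-1/2} K^{-1/2}\).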

Provided \(\mu _i\) is well-separated from both the bulk spectrum and the other outliers, we find that the error in (2.13) is of order \(M^{-1/2}\).

Example 2.14

Take \(A = \{i\}\) and \(\mathbf{{w}} = \mathbf{{v}}_j\) with \(j \ne i\). Then we get from (2.12)

$$\begin{aligned} \langle {\mathbf{{v}}_j} , {\varvec{\xi }_i}\rangle ^2 \prec \frac{\sigma _j}{M (d_i - d_j)^2}. \end{aligned}$$
(2.16)

Suppose for simplicity that \(\phi \asymp 1\). Then, under the condition that \(|d_i - d_j | \asymp 1\), we find that \(\varvec{\xi }_i\) is completely delocalized in the direction \(\mathbf{{v}}_j\). In particular, if \(\nu _i \asymp 1\) then \(\varvec{\xi }_i\) is completely delocalized in any direction orthogonal to \(\mathbf{{v}}_i\).

As \(d_j\) approaches \(d_i\) the delocalization bound from (2.16) deteriorates, and eventually when \(\mu _i\) and \(\mu _j\) start overlapping, i.e. the second condition of (2.14) is violated, the right-hand side of (2.16) has the same size as the leading term of (2.13). This is again a manifestation of the fact that the individual eigenspaces of overlapping outliers cannot be distinguished.

Example 2.15

Suppose that we have an \(|A |\)-fold degenerate outlier, i.e. \(d_i = d_j\) for all \(i,j \in A\). Then from Theorem 2.11 and Remark 2.12 [see the estimate (5.2)] we get, for all \(i,j \in A\),

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle = \delta _{ij} u(d_i) + O_{\prec } \Biggl [{\frac{1}{M^{1/2} (d_i - 1)^{1/2}} + \frac{\sigma _i}{M \nu _i(A)^2}}\Biggr ]. \end{aligned}$$

    Defining the \(|A | \times |A |\) random matrix \(B = (B_{ij})_{i,j \in A}\) through \(B_{ij} \mathrel {\mathop :}=\langle {\mathbf{{v}}_i} , {\varvec{\xi }_j}\rangle \), we may write the left-hand side as \((B B^*)_{ij}\). We conclude that \(u(d_i)^{-1/2} B\) is approximately orthogonal, from which we deduce that \(u(d_i)^{-1/2} B^*\) is also approximately orthogonal. In other words, we may interchange the families \(\{\mathbf{{v}}_i\}_{i \in A}\) and \(\{\varvec{\xi }_i\}_{i \in A}\). More precisely, we get

$$\begin{aligned} \langle {\varvec{\xi }_i} , {\Pi _A \varvec{\xi }_j}\rangle = (B^* B)_{ij} = \delta _{ij} u(d_i) + O_{\prec } \Biggl [{\frac{1}{M^{1/2} (d_i - 1)^{1/2}} + \frac{\sigma _i}{M \nu _i(A)^2}}\Biggr ]. \end{aligned}$$

    This is the correct generalization of (2.13) from Example 2.13 to the degenerate case. The error term is the same as in (2.13), and its size and relation to the main term is exactly the same as in Example 2.13. Hence the discussion following (2.13) may be taken over verbatim to this case.

In addition, analogously to Example 2.14, for \(i \in A\) and \(j \notin A\) we find that (2.16) remains true. This establishes the delocalization of \(\varvec{\xi }_i\) in any direction within the null space of \(\Pi _A\).

These estimates establish the general cone concentration, with optimal rate of convergence, for degenerate outliers outlined around (1.22). The eigenvectors \(\{\varvec{\xi }_i\}_{i \in A}\) are all concentrated on the cone defined by \(|\Pi _A \varvec{\xi } |^2 = u(d_i) |\varvec{\xi } |^2\) (for some immaterial \(i \in A\)). Moreover, the eigenvectors \(\{\varvec{\xi }_i\}_{i \in A}\) are orthogonal on both the range and null space of \(\Pi _A\). Provided that the group \(\{d_i\}_{i \in A}\) is well-separated from \(1\) and all other \(d_i\)’s, the eigenvectors \(\{\varvec{\xi }_i\}_{i \in A}\) are completely delocalized on the null space of \(\Pi _A\).

We conclude this example by remarking that a similar discussion also holds for a group of outliers that is not degenerate, but nearly degenerate, i.e. \(|d_i - d_j | \ll |d_i - d_k |\) for all \(i,j \in A\) and \(k \notin A\). We omit the details.

The next result is the analogue of Theorem 2.11 for outliers far from the bulk.

Theorem 2.16

(Outlier eigenvectors far from bulk) Fix \(\tau > 0\). Suppose that \(A \subset {\mathcal {O}}\) satisfies \(d_i \geqslant 1 + \tau \) for all \(i \in A,\) and that there exists a positive \(d_A\) such that \(\tau d_A \leqslant d_i \leqslant \tau ^{-1} d_A\) for all \(i \in A\). Then for any deterministic \(\mathbf{{w}} \in \mathbb {R}^M\) we have

$$\begin{aligned} \langle {\mathbf{{w}}} , {P_A \mathbf{{w}}}\rangle&= \langle {\mathbf{{w}}} , {Z_A \mathbf{{w}}}\rangle + O_\prec \Biggl [\frac{1}{M^{1/2}(\phi ^{1/2} + d_A)} \sum _{i \in A} \sigma _i w_i^2\nonumber \\&+ \left( {1 + \frac{\phi ^{1/2} d_A^2}{\phi ^{1/2} + d_A}}\right) \sum _{i = 1}^M \frac{\sigma _i w_i^2}{M \nu _i(A)^2}\nonumber \\&+ \frac{d_A}{\phi ^{1/2} + d_A}\left( {\sum _{i \in A} \sigma _i w_i^2}\right) ^{1/2} \left( {\sum _{i \notin A} \frac{\sigma _i w_i^2}{M \nu _i(A)^2}}\right) ^{1/2} \Biggr ]. \end{aligned}$$
(2.17)

    We leave the discussion on the interpretation of the error in (2.17) to the reader; it is similar to that of Examples 2.13, 2.14, and 2.15.

2.3 Non-outlier eigenvectors

In this subsection we state our results on the non-outlier eigenvectors, i.e. on \(\varvec{\xi }_a\) for \(a \notin {\mathcal {O}}\). Our first result is a delocalization bound. In order to state it, we define for \(a \in [\![{1,K}]\!]\) the typical distance from \(\mu _a\) to the spectral edges \(\gamma _\pm \) through

$$\begin{aligned} \kappa _a \mathrel {\mathop :}=K^{-2/3} (a \wedge (K + 1 - a))^{2/3}. \end{aligned}$$
(2.18)

This quantity should be interpreted as a deterministic version of \(|\mu _a - \gamma _- |\wedge |\mu _a - \gamma _+ |\) for \(a \notin {\mathcal {O}}\); see Theorem 3.5 below.

Theorem 2.17

(Delocalization bound for non-outliers) Fix \(\tau > 0\). For \(a \in [\![{1, (1 - \tau )K}]\!] \setminus {\mathcal {O}}\) and deterministic \(\mathbf{{w}} \in \mathbb {R}^M\) we have

$$\begin{aligned} \langle {\mathbf{{w}}} , {\varvec{\xi }_a}\rangle ^2 \prec \frac{|\mathbf{{w}} |^2}{M} + \sum _{i = 1}^M \frac{\sigma _i w_i^2}{M ((d_i - 1)^2 + \kappa _a)}. \end{aligned}$$
(2.19)

Similarly\(,\) if \(|\phi - 1 | \geqslant \tau \) then for \(a \in [\![{\tau K, K}]\!]\!\setminus \!{\mathcal {O}}\) and deterministic \(\mathbf{{w}} \in \mathbb {R}^M\) we have

$$\begin{aligned} \langle {\mathbf{{w}}} , {\varvec{\xi }_a}\rangle ^2 \prec \frac{|\mathbf{{w}} |^2}{M} + \sum _{i = 1}^M \frac{\sigma _i w_i^2}{M ((d_i + 1)^2 + \kappa _a)}. \end{aligned}$$
(2.20)

For the following examples, we take \(\mathbf{{w}} = \mathbf{{v}}_i\) and \(a \in [\![{1, (1 - \tau )K}]\!]\!\setminus \!{\mathcal {O}}\). Under these assumptions (2.19) yields

$$\begin{aligned} \langle {\mathbf{{w}}} , {\varvec{\xi }_a}\rangle ^2 \prec \frac{1}{M} + \frac{\sigma _i}{M ((d_i - 1)^2 + \kappa _a)}. \end{aligned}$$
(2.21)

Example 2.18

Fix \(\tau > 0\). If \(|d_i - 1 | \geqslant \tau \) (\(d_i\) is separated from the transition point) or \(a \geqslant \tau K\) (\(\mu _a\) is in the bulk), then the right-hand side of (2.21) reads \((1 + \sigma _i) / M\). In particular, if the eigenvalue \(\sigma _i\) of \(\Sigma \) is bounded, \(\varvec{\xi }_a\) is completely delocalized in the direction \(\mathbf{{v}}_i\).

Example 2.19

Suppose that \(a \leqslant C\) (\(\mu _a\) is near the edge), which implies that \(\kappa _a \asymp K^{-2/3}\). Suppose moreover that \(d_i\) is near the transition point \(1\). Then we get

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2 \prec \frac{\sigma _i}{M ((d_i - 1)^2 + K^{-2/3})}. \end{aligned}$$

Therefore the delocalization bound for \(\varvec{\xi }_a\) in the direction of \(\mathbf{{v}}_i\) becomes worse as \(d_i\) approaches the critical point (from either side), from \((1 + \phi )^{1/2} M^{-1}\) for \(d_i\) separated from \(1\), to \((1 + \phi )^{-1/6} M^{-1/3}\) for \(d_i\) at the transition point \(1\).

Next, we derive the law of the generalized component \(\langle {\mathbf{{w}}} , {\varvec{\xi }_a}\rangle \) for non-outlier \(a\). In particular, this provides a lower bound complementing the upper bound from Theorem 2.17. Recall the definition (2.8) of \(\alpha _+\).

Theorem 2.20

(Law of non-outliers) Fix \(\tau > 0\). Then\(,\) for any deterministic \(a \in [\![{1, K^{1 - \tau } \alpha _+^3}]\!]{\setminus } {\mathcal {O}}\) and \(\mathbf{{w}} \in \mathbb {R}^M,\) there exists a random variable \(\Theta (a, \mathbf{{w}}) \equiv \Theta _N(a, \mathbf{{w}})\) satisfying

$$\begin{aligned} \langle {\mathbf{{w}}} , {\varvec{\xi }_a}\rangle ^2 = \sum _i \frac{\sigma _i w_i^2}{M (d_i - 1)^2} \, \Theta (a, \mathbf{{w}}) \end{aligned}$$

and

$$\begin{aligned} \Theta (a, \mathbf{{w}}) \longrightarrow \chi _1^2 \end{aligned}$$

in distribution as \(N \rightarrow \infty ,\) uniformly in \(a\) and \(\mathbf{{w}}\). Here \(\chi _1^2\) is a Chi-squared random variable \((\)i.e. the square of a standard normal\().\)

An analogous statement holds near the left spectral edge provided \(|\phi - 1 | \geqslant \tau ;\) we omit the details.
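For Gaussian \(X\) and \(\Sigma = I_M\) the eigenvectors of \(X X^*\) are Haar-distributed, so the statement of Theorem 2.20 can be probed directly: with all \(d_i = 0\) the prefactor equals \(|\mathbf{{w}} |^2 / M\), and \(\Theta (a, \mathbf{{w}}) = M \langle {\mathbf{{w}}} , {\varvec{\xi }_a}\rangle ^2\) should be approximately \(\chi _1^2\). A sketch over independent draws (our own setup, with the assumed entry normalization \(\mathbb {E} X_{i\mu }^2 = (NM)^{-1/2}\)):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, draws = 120, 240, 400
w = np.zeros(M); w[0] = 1.0            # arbitrary deterministic unit vector

# Sample Theta(a, w) = M <w, xi_a>^2 for the edge eigenvector (a non-outlier,
# since there are no spikes) across independent realizations of X.
theta_samples = np.empty(draws)
for t in range(draws):
    X = rng.standard_normal((M, N)) * (N * M) ** -0.25
    _, vecs = np.linalg.eigh(X @ X.T)
    xi_a = vecs[:, -1]
    theta_samples[t] = M * np.dot(w, xi_a) ** 2

# chi_1^2 has mean 1 and variance 2.
print(float(theta_samples.mean()), float(theta_samples.var()))
```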

Remark 2.21

More generally, our method also yields the asymptotic joint distribution of the family

$$\begin{aligned} \left( {\mu _{a_1},\ldots , \mu _{a_k}, \langle {\mathbf{{u}}_1} , {\varvec{\xi }_{b_1}}\rangle \langle {\varvec{\xi }_{b_1}} , {\mathbf{{w}}_1}\rangle ,\ldots , \langle {\mathbf{{u}}_k} , {\varvec{\xi }_{b_k}}\rangle \langle {\varvec{\xi }_{b_k}} , {\mathbf{{w}}_k}\rangle }\right) \end{aligned}$$
(2.22)

(after a suitable affine rescaling of the variables, as in Theorem 8.3 below), where \(a_1,\ldots , a_k, b_1,\ldots , b_k \in [\![{1, K^{1 - \tau } \alpha _+^3}]\!]{\setminus }{\mathcal {O}}\). We omit the precise statement, which is a universality result: it says essentially that the asymptotic distribution of (2.22) coincides with that under the standard Wishart ensemble (i.e. an uncorrelated Gaussian sample covariance matrix). The proof is a simple corollary of Theorem 2.7, Proposition 6.2, Proposition 6.3, and Theorem 8.3.

Remark 2.22

The restriction \(a \leqslant K^{1 - \tau } \alpha _+^3\) is the same as in Remarks 2.8 and 8.7. There, it is required for the eigenvalue sticking to be effective in the sense that the right-hand side of (2.9) is much smaller than the scale on which the eigenvalue \(\lambda _a\) fluctuates. Here, it ensures that the distribution of the eigenvector \(\varvec{\xi }_a\) is determined by the distribution of a single eigenvector of \(H\) (see Proposition 6.2).

Finally, instead of \(Q\) defined in (1.10), we may also consider

$$\begin{aligned} \dot{Q} \mathrel {\mathop :}=\frac{N}{N - 1} T X (I_N - \mathbf{{e}} \mathbf{{e}}^*) X^* T^*, \end{aligned}$$
(2.23)

where the vector \(\mathbf{{e}}\) was defined in (1.4). All of our results stated for \(Q\) also hold for \(\dot{Q}\).

Theorem 2.23

    Theorems 2.3, 2.7, 2.11, 2.16, 2.17, and 2.20 hold with \(\mu _i\) and \(\varvec{\xi }_i\) denoting the eigenvalues and eigenvectors of \(\dot{Q}\) instead of \(Q\). For Theorem 2.7, \(\lambda _i\) denotes the eigenvalues of \(\frac{N}{N - 1} Y (I_N - \mathbf{{e}} \mathbf{{e}}^*) Y^*\) instead of \(Y Y^*\) from (2.7).

3 Preliminaries

    The rest of this paper is devoted to the proofs of the results from Sects. 2.1–2.3. To clarify the presentation of the main ideas of the proofs, we shall first assume that

$$\begin{aligned} r = 0 \quad \hbox {and}\quad T = \Sigma ^{1/2}. \end{aligned}$$
(3.1)

    We make the assumption (3.1) throughout Sects. 3–7. The additional arguments required to relax the assumption (3.1) are presented in Sect. 8. Under the assumption (3.1) we have

$$\begin{aligned} Q = \Sigma ^{1/2} X X^* \Sigma ^{1/2}. \end{aligned}$$
(3.2)

Moreover, the extension of our results from \(Q\) to \(\dot{Q}\), and hence the proof of Theorem 2.23, is given in Sect. 9.

For an \(M \times M\) matrix \(A\) and \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\) we abbreviate

$$\begin{aligned} A_{\mathbf{{v}} \mathbf{{w}}} \mathrel {\mathop :}=\langle {\mathbf{{v}}} , {A \mathbf{{w}}}\rangle . \end{aligned}$$

We also write

$$\begin{aligned} A_{\mathbf{{v}} \mathbf{{e}}_i} \equiv A_{\mathbf{{v}} i}, \qquad A_{\mathbf{{e}}_i \mathbf{{v}}} \equiv A_{i \mathbf{{v}}}, \qquad A_{\mathbf{{e}}_i \mathbf{{e}}_j} \equiv A_{i j}, \end{aligned}$$

where \(\mathbf{{e}}_i \in \mathbb {R}^M\) denotes the \(i\)-th standard basis vector.

3.1 The isotropic local Marchenko–Pastur law

    In this section we recall the key tool of our analysis: the isotropic Marchenko–Pastur law from [10].

It is well known that the empirical distribution of the eigenvalues of the \(N\times N\) matrix \(X^*X\) has the same asymptotics as the Marchenko–Pastur law

$$\begin{aligned} \varrho _\phi (\mathrm {d}x) \mathrel {\mathop :}=\frac{\sqrt{\phi }}{2\pi }\frac{\sqrt{[{(x-\gamma _-)(\gamma _+-x)}]_+}}{x}\, \mathrm {d}x + (1-\phi )_+ \, \delta (\mathrm {d}x), \end{aligned}$$
(3.3)

where we recall the edges \(\gamma _\pm \) of the limiting spectrum defined in (1.7). Similarly, as noted in (1.6), the empirical distribution of the eigenvalues of the \(M \times M\) matrix \(X X^*\) has the same asymptotics as \(\varrho _{\phi ^{-1}}\).
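A quick simulation consistent with this paragraph (the entry normalization \(\mathbb {E} X_{i\mu }^2 = (NM)^{-1/2}\) is inferred from (3.4)–(3.5) below rather than restated here): the extreme nontrivial eigenvalues of \(X X^*\) should lie within \(O(K^{-2/3})\) of the spectral edges \(\gamma _\pm \).

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 400, 800
phi = M / N

# Entries with E X^2 = (NM)^{-1/2}; with M < N all M eigenvalues are nontrivial.
X = rng.standard_normal((M, N)) * (N * M) ** -0.25
eigs = np.linalg.eigvalsh(X @ X.T)

# Support edges of (3.3), obtained by setting the discriminant in (3.4) to zero.
gamma_minus = phi**0.5 + phi**-0.5 - 2.0
gamma_plus = phi**0.5 + phi**-0.5 + 2.0
print(float(eigs[0]), gamma_minus, float(eigs[-1]), gamma_plus)
```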

Note that (3.3) is normalized so that its integral is equal to one. The Stieltjes transform of the Marchenko–Pastur law (3.3) is

$$\begin{aligned} m_\phi (z)\mathrel {\mathop :}=\int \frac{\varrho _\phi (\mathrm {d}x)}{x-z} = \frac{\phi ^{ 1/2}-\phi ^{-1/2}-z+\mathrm {i}\sqrt{(z-\gamma _-)(\gamma _+-z)}}{2\,\phi ^{-1/2}\, z}, \end{aligned}$$
(3.4)

where the square root is chosen so that \(m_\phi \) is holomorphic in the upper half-plane and satisfies \(m_\phi (z) \rightarrow 0\) as \(z \rightarrow \infty \). The function \(m_\phi = m_\phi (z)\) is also characterized as the unique solution of the equation

$$\begin{aligned} m+\frac{1}{z+z\phi ^{-1/2}m-(\phi ^{ 1/2}-\phi ^{-1/2})} = 0 \end{aligned}$$
(3.5)

satisfying \(\hbox {Im }m (z) > 0\) for \(\hbox {Im }z >0\). The formulas (3.3)–(3.5) were originally derived for the case when \(\phi =M/N\) is independent of \(N\) (or, more precisely, when \(\phi \) has a limit in \((0,\infty )\) as \(N \rightarrow \infty \)). Our results allow \(\phi \) to depend on \(N\) under the constraint (1.9), so that \(m_\phi \) and \(\varrho _\phi \) may also depend on \(N\) through \(\phi \).
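Both the normalization of (3.3) and the characterization (3.5) can be checked numerically from the displayed formulas alone (a sketch; the choice \(\phi = 0.7\) and the spectral parameter are arbitrary):

```python
import numpy as np

phi = 0.7
gamma_m = phi**0.5 + phi**-0.5 - 2.0
gamma_p = phi**0.5 + phi**-0.5 + 2.0

# Stieltjes transform (3.4); the principal branch of the square root is the
# correct one for spectral parameters in the upper half-plane near the bulk.
def m_phi(z):
    root = np.sqrt((z - gamma_m) * (gamma_p - z) + 0j)
    return (phi**0.5 - phi**-0.5 - z + 1j * root) / (2.0 * phi**-0.5 * z)

z = 2.0 + 0.1j
m = m_phi(z)

# m solves the self-consistent equation (3.5) and has positive imaginary part.
residual = m + 1.0 / (z + z * phi**-0.5 * m - (phi**0.5 - phi**-0.5))

# The continuous part of (3.3) carries mass min(phi, 1) = 0.7 here; the atom
# at zero supplies the remaining (1 - phi)_+ = 0.3, for total mass one.
x = np.linspace(gamma_m, gamma_p, 400_001)[1:-1]
dx = x[1] - x[0]
density = (phi**0.5 / (2 * np.pi)) * np.sqrt((x - gamma_m) * (gamma_p - x)) / x
mass = float(density.sum() * dx)

print(abs(residual), m.imag, mass)
```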

Throughout the following we use a spectral parameter

$$\begin{aligned} z = E + \mathrm {i}\eta , \end{aligned}$$

with \(\eta > 0\), as the argument of Stieltjes transforms and resolvents. Define the resolvent

$$\begin{aligned} G(z) \mathrel {\mathop :}=(X X^*-z)^{-1}. \end{aligned}$$

For \(z\in \mathbb {C}\), define \(\kappa (z)\) to be the distance from \(E=\hbox {Re }z\) to the spectral edges \(\gamma _\pm \), i.e.

$$\begin{aligned} \kappa \equiv \kappa (z) \mathrel {\mathop :}=|\gamma _+-E |\wedge |\gamma _- -E |. \end{aligned}$$
(3.6)

Throughout the following we regard the quantities \(E(z)\), \(\eta (z)\), and \(\kappa (z)\) as functions of \(z\) and usually omit the argument unless it is needed to avoid confusion.

Sometimes we shall need the following notion of high probability.

Definition 3.1

An \(N\)-dependent event \(\Xi \equiv \Xi _N\) holds with high probability if \(1 - \mathbf{{1}} (\Xi ) \prec 0\).

Fix a (small) \(\omega \in (0,1)\) and define the domain

$$\begin{aligned} \mathbf{{S}} \equiv \mathbf{{S}}(\omega , K) \mathrel {\mathop :}=\bigl \{{z \in \mathbb {C}\mathrel {\mathop :}\kappa \leqslant \omega ^{-1} ,\, K^{-1+\omega } \leqslant \eta \leqslant \omega ^{-1} ,\, |z | \geqslant \omega }\bigr \}. \end{aligned}$$
(3.7)

Beyond the support of the limiting spectrum, one has stronger control all the way down to the real axis. For fixed (small) \(\omega > 0\) define the region

$$\begin{aligned} \widetilde{\mathbf{{S}}} \equiv \widetilde{\mathbf{{S}}}(\omega , K) \mathrel {\mathop :}=\bigl \{{z \in \mathbb {C}\mathrel {\mathop :}E \notin [\gamma _-, \gamma _+] ,\, K^{-2/3 + \omega } \leqslant \kappa \leqslant \omega ^{-1} ,\, |z | \geqslant \omega ,\, 0 < \eta \leqslant \omega ^{-1}}\bigr \} \end{aligned}$$
(3.8)

of spectral parameters separated from the asymptotic spectrum by \(K^{-2/3 + \omega }\), which may have an arbitrarily small positive imaginary part \(\eta \). Throughout the following we regard \(\omega \) as fixed once and for all, and do not track the dependence of constants on \(\omega \).

Theorem 3.2

(Isotropic local Marchenko–Pastur law [10]) Suppose that (1.15)\(,\) (1.9)\(,\) and (1.16) hold. Then

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {G(z) \mathbf{{w}}}\rangle - m_{\phi ^{-1}}(z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \sqrt{\frac{\hbox {Im }m_{\phi ^{-1}}(z)}{M \eta }} + \frac{1}{M \eta } \end{aligned}$$
(3.9)

uniformly in \(z \in \mathbf{{S}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\). Moreover\(,\)

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {G(z) \mathbf{{w}}}\rangle - m_{\phi ^{-1}}(z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \sqrt{\frac{\hbox {Im }m_{\phi ^{-1}}(z)}{M\eta }} \asymp \frac{1}{1 + \phi } (\kappa + \eta )^{-1/4} K^{-1/2}\qquad \end{aligned}$$
(3.10)

uniformly in \(z \in \widetilde{\mathbf{{S}}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).

Remark 3.3

The probabilistic estimates (3.9) and (3.10) of Theorem 3.2 may be strengthened to hold simultaneously for all \(z \in \mathbf{{S}}\) and for all \(z\in \widetilde{\mathbf{{S}}}\), respectively. For instance, (3.10) may be strengthened to

$$\begin{aligned} \mathbb {P}\left[ {\bigcap _{z \in \widetilde{\mathbf{{S}}}} \Biggl \{{\bigl |\langle {\mathbf{{v}}} , {G(z) \mathbf{{w}}}\rangle - m_{\phi ^{-1}}(z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \leqslant N^\varepsilon \frac{1}{1 \!+\! \phi } (\kappa \!+\! \eta )^{-1/4} K^{-1/2}}\Biggr \}}\right] \geqslant 1 \!-\! N^{-D}, \end{aligned}$$

for all \(\varepsilon > 0\), \(D > 0\), and \(N \geqslant N_0(\varepsilon , D)\). See [10, Remark 2.6].

The next results are on the nontrivial (i.e. nonzero) eigenvalues of \(H \mathrel {\mathop :}=XX^*\) as well as the corresponding eigenvectors. The matrix \(H\) has \(K\) nontrivial eigenvalues, which we order according to

$$\begin{aligned} \lambda _1 \geqslant \lambda _2 \geqslant \cdots \geqslant \lambda _K. \end{aligned}$$
(3.11)

(The remaining \(M - K\) eigenvalues of \(H\) are zero.) Moreover, we denote by

$$\begin{aligned} \varvec{\zeta }_1, \varvec{\zeta }_2 ,\ldots , \varvec{\zeta }_K \in \mathbb {R}^M \end{aligned}$$
(3.12)

the unit eigenvectors of \(H\) associated with the nontrivial eigenvalues \(\lambda _1 \geqslant \lambda _2 \geqslant \cdots \geqslant \lambda _{K}\).

Theorem 3.4

(Isotropic delocalization [10]) Fix \(\tau > 0,\) and suppose that (1.15)\(,\) (1.9)\(,\) and (1.16) hold. Then for \(i \in [\![{1,K}]\!]\) and any deterministic unit vector \(\mathbf{{v}} \in \mathbb {R}^M\) we have

$$\begin{aligned} \langle {\varvec{\zeta }_i} , {\mathbf{{v}}}\rangle ^2 \prec M^{-1} \end{aligned}$$
(3.13)

if either \(i \leqslant (1 - \tau ) K\) or \(|\phi - 1 | \geqslant \tau \).

The following result is on the rigidity of the nontrivial eigenvalues of \(H\). Let \(\gamma _1 \geqslant \gamma _2 \geqslant \cdots \geqslant \gamma _K\) be the classical eigenvalue locations according to \(\varrho _{\phi }\) [see (3.3)], defined through

$$\begin{aligned} \int _{\gamma _i}^\infty \varrho _{\phi }(\mathrm {d}x) = \frac{i}{N}. \end{aligned}$$
(3.14)

Theorem 3.5

(Eigenvalue rigidity [10]) Fix \(\tau > 0,\) and suppose that (1.15)\(,\) (1.9)\(,\) and (1.16) hold. Then for \(i \in [\![{1,K}]\!]\) we have

$$\begin{aligned} \bigl |\lambda _i-\gamma _i \bigr | \prec \left( {i \wedge (K +1 - i)}\right) ^{-1/3}K^{-2/3} \end{aligned}$$
(3.15)

if \(i \leqslant (1 - \tau ) K\) or \(|\phi - 1 | \geqslant \tau \).

3.2 Link to the semicircle law

It will often be convenient to replace the Stieltjes transform \(m_\phi (z)\) of \(\varrho _{\phi }(\mathrm {d}x)\) with the Stieltjes transform \(w_\phi (z)\) of the measure

$$\begin{aligned} \phi ^{1/2} x \varrho _{\phi ^{-1}}(\mathrm {d}x) = \frac{1}{2\pi }\sqrt{[{(x-\gamma _-)(\gamma _+-x)}]_+}\, \mathrm {d}x. \end{aligned}$$
(3.16)

Note that this is nothing but Wigner’s semicircle law centred at \(\phi ^{1/2} + \phi ^{-1/2}\). Thus,

$$\begin{aligned} w_\phi (z)&\mathrel {\mathop :}= \int \frac{\phi ^{1/2} x \varrho _{\phi ^{-1}}(\mathrm {d}x)}{x - z} = \phi ^{1/2}\left( {1 + z m_{\phi ^{-1}}(z)}\right) \nonumber \\&= \frac{\phi ^{ 1/2}+\phi ^{-1/2}-z+\mathrm {i}\sqrt{(z-\gamma _-)(\gamma _+-z)}}{2}, \end{aligned}$$
(3.17)

where in the last step we used (3.4). Note that

$$\begin{aligned} w_\phi = w_{\phi ^{-1}}. \end{aligned}$$

Using \(w_\phi \) we can write (3.5) as

$$\begin{aligned} z = (1 - \phi ^{-1/2} w_\phi ^{-1}) (\phi ^{1/2} - w_\phi ). \end{aligned}$$
(3.18)
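As a quick numerical sanity check (not part of the argument), one can verify that the explicit formula (3.17) satisfies (3.18) and the symmetry \(w_\phi = w_{\phi ^{-1}}\). The sketch below assumes only \(\gamma _\pm = \phi ^{1/2} + \phi ^{-1/2} \pm 2\), which follows from (3.16) being a semicircle of radius \(2\) centred at \(\phi ^{1/2} + \phi ^{-1/2}\).

```python
import numpy as np

def w_phi(z, phi):
    """Stieltjes transform (3.17) of the semicircle law (3.16).

    Uses gamma_pm = phi**0.5 + phi**-0.5 +/- 2 (the support of (3.16)); the
    principal branch of the square root gives Im w > 0 in the upper half-plane.
    """
    c = np.sqrt(phi) + 1 / np.sqrt(phi)
    return (c - z + 1j * np.sqrt((z - (c - 2)) * ((c + 2) - z))) / 2

phi = 2.5
z = 1.7 + 0.3j          # arbitrary spectral parameter in the upper half-plane
w = w_phi(z, phi)

# w solves w^2 - (c - z) w + 1 = 0, which is equivalent to (3.18)
assert abs((1 - 1 / (np.sqrt(phi) * w)) * (np.sqrt(phi) - w) - z) < 1e-12
# the symmetry w_phi = w_{phi^{-1}} stated below (3.17)
assert abs(w - w_phi(z, 1 / phi)) < 1e-12
```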

Lemma 3.6

For \(z \in \mathbf{{S}}\) and \(\phi \geqslant 1\) we have

$$\begin{aligned} |m_\phi (z) | \asymp |w_\phi (z) | \asymp 1, \qquad |1 - w_\phi (z)^2 | \asymp \sqrt{\kappa + \eta }, \end{aligned}$$
(3.19)

as well as

$$\begin{aligned} \hbox {Im }m_\phi (z) \asymp \hbox {Im }w_\phi (z) \asymp {\left\{ \begin{array}{ll} \sqrt{\kappa + \eta } &{} \text {if }E \in [\gamma _-, \gamma _+]\\ \frac{\eta }{\sqrt{\kappa + \eta }} &{} \text {if }E \notin [\gamma _-, \gamma _+]. \end{array}\right. } \end{aligned}$$
(3.20)

Similarly\(,\)

$$\begin{aligned} \hbox {Re }m_\phi (z) - I(z) \asymp \hbox {Re }w_\phi (z) - I(z) \asymp {\left\{ \begin{array}{ll} \frac{\eta }{\sqrt{\kappa + \eta }} + \kappa &{} \text {if }E \in [\gamma _-, \gamma _+]\\ \sqrt{\kappa + \eta } &{} \text {if }E \notin [\gamma _-, \gamma _+], \end{array}\right. } \end{aligned}$$
(3.21)

where \(I(z) \mathrel {\mathop :}=-1\) for \(E \geqslant \phi ^{1/2} + \phi ^{-1/2}\) and \(I(z) \mathrel {\mathop :}=+1\) for \(E < \phi ^{1/2} + \phi ^{-1/2}\). Finally\(,\) for \(z \in \mathbf{{S}}\) we have

$$\begin{aligned} \hbox {Im }m_{\phi ^{-1}}(z) \asymp \frac{1}{\phi } \hbox {Im }m_\phi (z). \end{aligned}$$
(3.22)

\((\)All implicit constants depend on \(\omega \) in the definition (3.7) of \(\mathbf{{S}}.)\)

Proof

The estimates (3.19) and (3.20) follow from the explicit expressions in (3.4) and (3.17). In fact, these estimates have already appeared in previous works. Indeed, for \(m_\phi \) the estimates (3.19) and (3.20) were proved in [10, Lemma 3.3]. In order to prove them for \(w_\phi \), we observe that the estimates (3.19) and (3.20) follow from the corresponding ones for the semicircle law, which were proved in [20, Lemma 4.3]. The estimates (3.21) follow from (3.20) and the elementary identity

$$\begin{aligned} \hbox {Re }w_\phi = - \frac{E - \phi ^{1/2} - \phi ^{-1/2}}{2 + \eta / \hbox {Im }w_\phi }, \end{aligned}$$

which can be derived from (3.18); the estimates for \(m_\phi \) are derived similarly. Finally, (3.22) follows easily from

$$\begin{aligned} m_{\phi ^{-1}}(z) = \frac{1}{\phi } \left( {m_\phi (z) + \frac{1 - \phi }{z}}\right) , \end{aligned}$$
(3.23)

which may itself be derived from (3.5). \(\square \)
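The identity (3.23) can likewise be checked numerically, outside the argument. The sketch below recovers \(m_{\phi ^{-1}}\) and \(m_\phi \) from \(w_\phi \) via the relation \(w_\phi = \phi ^{1/2}(1 + z\, m_{\phi ^{-1}})\) in (3.17) together with the symmetry \(w_\phi = w_{\phi ^{-1}}\).

```python
import numpy as np

def w(z, phi):
    # Stieltjes transform (3.17); gamma_pm = phi**0.5 + phi**-0.5 +/- 2
    c = np.sqrt(phi) + 1 / np.sqrt(phi)
    return (c - z + 1j * np.sqrt((z - (c - 2)) * ((c + 2) - z))) / 2

phi = 3.0
z = 2.2 + 0.7j

m_inv = (w(z, phi) / np.sqrt(phi) - 1) / z    # m_{phi^{-1}}(z), from (3.17)
m = (np.sqrt(phi) * w(z, phi) - 1) / z        # m_phi(z), using w_phi = w_{phi^{-1}}

# identity (3.23)
assert abs(m_inv - (m + (1 - phi) / z) / phi) < 1e-12
```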

In analogy to \(w_\phi \) [see (3.17)], we define the matrix-valued function

$$\begin{aligned} F(z) \mathrel {\mathop :}=\phi ^{1/2} (1 + z G(z)). \end{aligned}$$
(3.24)

Theorem 3.2 has the following analogue, which compares \(F\) with \(m_\phi \).

Lemma 3.7

Suppose that (1.15)\(,\) (1.9)\(,\) and (1.16) hold. Then

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {F(z) \mathbf{{w}}}\rangle - w_\phi (z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \sqrt{\frac{\hbox {Im }w_\phi (z)}{K \eta }} + \frac{1}{K \eta } \end{aligned}$$
(3.25)

uniformly in \(z \in \mathbf{{S}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\). Moreover\(,\)

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {F(z) \mathbf{{w}}}\rangle - w_\phi (z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \sqrt{\frac{\hbox {Im }w_\phi (z)}{K \eta }} \asymp (\kappa + \eta )^{-1/4} K^{-1/2} \end{aligned}$$
(3.26)

uniformly in \(z \in \widetilde{\mathbf{{S}}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).

Proof

The proof is an easy consequence of Theorem 3.2 and Lemma 3.6, combined with the fact that for \(z \in \mathbf{{S}}\) or \(z \in \widetilde{\mathbf{{S}}}\) we have \(|z | \asymp \phi ^{1/2}\) for \(\phi \geqslant 1\) and \(|z | \asymp \phi ^{-1/2}\) for \(\phi \leqslant 1\). \(\square \)

3.3 Extension of the spectral domain

In this section we extend the spectral domain on which Theorem 3.2 and Lemma 3.7 hold. The argument relies on the Helffer–Sjöstrand functional calculus [16]. Define the domains

$$\begin{aligned}&\widehat{\mathbf{{S}}} \equiv \widehat{\mathbf{{S}}}(\omega , K) \mathrel {\mathop :}=\bigl \{{z \in \mathbb {C}\mathrel {\mathop :}E \notin [\gamma _-, \gamma _+],\, \kappa \geqslant K^{-2/3 + \omega },\, \eta > 0}\bigr \}, \\&\quad \mathbf{{B}} \equiv \mathbf{{B}}(\omega ) \mathrel {\mathop :}=\{{z \in \mathbb {C}\mathrel {\mathop :}|z | < \omega }\}. \end{aligned}$$

Proposition 3.8

Fix \(\omega , \tau \in (0,1)\).

  1. (i)

    If \(\phi < 1 - \tau \) then

    $$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {G(z) \mathbf{{w}}}\rangle - m_{\phi ^{-1}}(z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \frac{1}{(\kappa + \eta )^2 + (\kappa + \eta )^{1/4}} K^{-1/2} \end{aligned}$$
    (3.27)

    uniformly for \(z \in \widehat{\mathbf{{S}}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).

  2. (ii)

    If \(|\phi - 1 | \leqslant \tau \) then (3.27) holds uniformly for \(z \in \widehat{\mathbf{{S}}} \setminus \mathbf{{B}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).

  3. (iii)

    If \(\phi > 1 + \tau \) then

    $$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {G(z) \mathbf{{w}}}\rangle - m_{\phi ^{-1}}(z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \frac{1}{\phi ^{1/2} |z | ((\kappa + \eta ) + (\kappa + \eta )^{1/4})} K^{-1/2}\qquad \end{aligned}$$
    (3.28)

    uniformly for \(z \in \widehat{\mathbf{{S}}} \setminus \{0\}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).

Proof

By polarization and linearity, we may assume that \(\mathbf{{w}} = \mathbf{{v}}\). Define the signed measure

$$\begin{aligned} \rho ^\Delta (\mathrm {d}x) \mathrel {\mathop :}=\sum _{i = 1}^M \langle {\mathbf{{v}}} , {\varvec{\zeta }_i}\rangle \langle {\varvec{\zeta }_i} , {\mathbf{{v}}}\rangle \, \delta _{\lambda _i}(\mathrm {d}x) - \varrho _{\phi ^{-1}}(\mathrm {d}x) , \end{aligned}$$
(3.29)

so that

$$\begin{aligned} m^\Delta (z) \mathrel {\mathop :}=\int \frac{\rho ^\Delta (\mathrm {d}x)}{x - z} = \langle {\mathbf{{v}}} , {G(z) \mathbf{{v}}}\rangle - m_{\phi ^{-1}}(z). \end{aligned}$$

The basic idea of the proof is to apply the Helffer–Sjöstrand formula to the function

$$\begin{aligned} f_z(x) \mathrel {\mathop :}=\frac{1}{x - z} - \frac{1}{x_0 - z}, \end{aligned}$$

where \(x_0\) is chosen below. To that end, we need a smooth compactly supported cutoff function \(\chi \) on the complex plane satisfying \(\chi (w) \in [0,1]\) and \(|\partial _{\bar{w}} \chi (w) | \leqslant C(\omega , \tau )\). We distinguish the three cases \(\phi < 1 - \tau \), \(|\phi - 1 | \leqslant \tau \), and \(\phi > 1 + \tau \).

Let us first focus on the case \(\phi < 1 - \tau \). Set \(x_0 \mathrel {\mathop :}=\phi ^{1/2} + \phi ^{-1/2}\) and choose a constant \(\omega ' = \omega '(\omega , \tau ) \in (0, \omega )\) small enough that \(\gamma _- \geqslant 4 \omega '\). We require that \(\chi \) be equal to \(1\) in the \(\omega '\)-neighbourhood of \([\gamma _-, \gamma _+]\) and \(0\) outside of the \(2\omega '\)-neighbourhood of \([\gamma _-,\gamma _+]\). By Theorem 3.5 we have \(\hbox {supp }\rho ^\Delta \subset \{\chi = 1\}\) with high probability. Now choose \(z\) satisfying \({{\mathrm{dist}}}(z, [\gamma _-, \gamma _+]) \geqslant 3 \omega '\). Then the Helffer–Sjöstrand formula [16] yields, for \(x \in \hbox {supp }\rho ^\Delta \),

$$\begin{aligned} f_z(x) = \frac{1}{\pi } \int _{\mathbb {C}} \frac{\partial _{\bar{w}} (f_z(w) \chi (w))}{x - w} \, \mathrm {d}w \end{aligned}$$
(3.30)

with high probability, where \(\mathrm {d}w\) denotes the two-dimensional Lebesgue measure in the complex plane. Noting that \(\int \mathrm {d}\rho ^{\Delta } = 0\), we may therefore write

$$\begin{aligned} m^\Delta (z) = \int \rho ^\Delta (\mathrm {d}x) \, f_z(x) = \frac{1}{\pi } \int _{\mathbb {C}} f_z(w) \, \partial _{\bar{w}} \chi (w) \, m^\Delta (w) \, \mathrm {d}w \end{aligned}$$
(3.31)

with high probability, where in the second step we used (3.30) and the fact that \(f_z\) is holomorphic away from \(z\). The integral is supported on the set \(\{\partial _{\bar{w}} \chi \ne 0\} \subset \{{w \mathrel {\mathop :}{{\mathrm{dist}}}(w,[\gamma _-, \gamma _+]) \in [\omega ', 2 \omega ']}\}\), on which we have the estimates \(|f_z(w) | \leqslant C (\kappa (z) + \eta (z))^{-2}\) and \(|m^\Delta (w) | \prec K^{-1/2}\), as follows from Theorem 3.2 applied to \(\mathbf{{S}}(\omega ', K)\) and (3.22). Recalling Remark 3.3, we may plug these estimates into the integral to get

$$\begin{aligned} |m^\Delta (z) | \prec (\kappa + \eta )^{-2} K^{-1/2}, \end{aligned}$$

which holds for \({{\mathrm{dist}}}(z, [\gamma _-, \gamma _+]) \geqslant 3 \omega '\). (Recall that \(|\partial _{\bar{w}} \chi (w) | \leqslant C\).) Combining this estimate with (3.10), the claim (3.27) follows for \(z \in \widehat{\mathbf{{S}}}\).

Next, we deal with the case \(|\phi - 1 | \leqslant \tau \). The argument is similar. We again choose \(x_0 \mathrel {\mathop :}=\phi ^{1/2} + \phi ^{-1/2}\). We require that \(\chi \) be equal to \(1\) in the \(\omega \)-neighbourhood of \([0, \gamma _+]\) and \(0\) outside of the \(2\omega \)-neighbourhood of \([0,\gamma _+]\). We may now repeat the above argument almost verbatim. For \({{\mathrm{dist}}}(z, [0, \gamma _+]) \geqslant 3 \omega \) and \(w \in \{\partial _{\bar{w}} \chi \ne 0\}\) we find that \(|f_z(w) | \leqslant C (\kappa (z) + \eta (z))^{-2}\) and \(|m^\Delta (w) | \prec K^{-1/2}\). Hence, recalling (3.10), we get (3.27) for \(z \in \widehat{\mathbf{{S}}}\!\setminus \!\mathbf{{B}}\).

Finally, suppose that \(\phi > 1 + \tau \). Now we set \(x_0 \mathrel {\mathop :}=0\). We choose the same \(\omega '\) and cutoff function \(\chi \) as in the case \(\phi < 1 - \tau \) above. Suppose that \({{\mathrm{dist}}}(z, [\gamma _-, \gamma _+]) \geqslant 3 \omega '\) and \(z \ne 0\). Then (3.30) holds with high probability for \(x \in \hbox {supp }\rho ^\Delta \!\setminus \! \{0\}\). Since \(f_z(0) = 0\), we therefore find that (3.31) holds. As above, we find that for \(w \in \{\partial _{\bar{w}} \chi \ne 0\}\) we have

$$\begin{aligned} |f_z(w) | \leqslant \frac{C \phi ^{1/2}}{|z | (\kappa (z) + \eta (z))} \end{aligned}$$

and \(|m^\Delta (w) | \prec \phi ^{-1} K^{-1/2}\). Recalling (3.10), we find that (3.28) follows easily. \(\square \)

Proposition 3.8 yields the following result for \(F\) defined in (3.24).

Corollary 3.9

Fix \(\omega , \tau \in (0,1)\).

  1. (i)

    If \(\phi < 1 - \tau \) then

    $$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {F(z) \mathbf{{w}}}\rangle - w_{\phi }(z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \frac{\phi ^{1/2} |z |}{(\kappa + \eta )^2 + (\kappa + \eta )^{1/4}} K^{-1/2} \end{aligned}$$
    (3.32)

    uniformly for \(z \in \widehat{\mathbf{{S}}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).

  2. (ii)

    If \(|\phi - 1 | \leqslant \tau \) then (3.32) holds uniformly for \(z \in \widehat{\mathbf{{S}}} \!\setminus \! \mathbf{{B}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).

  3. (iii)

    If \(\phi > 1 + \tau \) then

    $$\begin{aligned} \bigl |\langle {\mathbf{{v}}} , {F(z) \mathbf{{w}}}\rangle - w_{\phi }(z) \langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle \bigr | \prec \frac{1}{(\kappa + \eta ) + (\kappa + \eta )^{1/4}} K^{-1/2} \end{aligned}$$
    (3.33)

    uniformly for \(z \in \widehat{\mathbf{{S}}}\) and any deterministic unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\).

3.4 Identities for the resolvent and eigenvalues

In this section we derive the identities on which our analysis of the eigenvalues and eigenvectors relies. Recall the definition of the set \({\mathcal {R}}\) from (1.14). We write the population covariance matrix \(\Sigma \) from (1.12) as

$$\begin{aligned} \Sigma = 1 + \phi ^{1/2} V D V^*, \end{aligned}$$

where \(D = {{\mathrm{diag}}}(d_i)_{i \in {\mathcal {R}}}\) is an invertible diagonal \(|{\mathcal {R}} | \times |{\mathcal {R}} |\) matrix and \(V = [\mathbf{{v}}_i]_{i \in {\mathcal {R}}}\) is the matrix of eigenvectors \(\mathbf{{v}}_i\) of \(\Sigma \) indexed by the set \({\mathcal {R}}\). Note that \(V\) is an \(M \times |{\mathcal {R}} |\) isometry, i.e. \(V\) satisfies \(V^* V = I_{|{\mathcal {R}} |}\).

We use the definitions

$$\begin{aligned} G(z) \mathrel {\mathop :}=(H - z)^{-1} , \qquad \widetilde{G}(z) \mathrel {\mathop :}=(Q - z)^{-1}, \qquad F(z) \mathrel {\mathop :}=\phi ^{1/2} (1 + z G(z)), \end{aligned}$$

where \(H = X X^*\) and \(Q = \Sigma ^{1/2} H \Sigma ^{1/2}\). We introduce the \(|{\mathcal {R}} | \times |{\mathcal {R}} |\) matrix

$$\begin{aligned} W(z) \mathrel {\mathop :}=V^* F(z) V. \end{aligned}$$

We also denote by \(\sigma (A)\) the spectrum of a square matrix \(A\).

The following lemma collects the basic identities for analysing \(\sigma (Q)\) and \(\widetilde{G}\). We remark that versions of its part (i) have already appeared in several previous works on finite-rank deformations of random matrix ensembles [2, 7, 27, 37].

Lemma 3.10

  1. (i)

    Suppose that \(\mu \notin \sigma (H)\). Then \(\mu \in \sigma (Q)\) if and only if

    $$\begin{aligned} \det \left( {D^{-1} + W(\mu )}\right) = 0. \end{aligned}$$
    (3.34)
  2. (ii)

    We have

    $$\begin{aligned} \Sigma ^{1/2} \widetilde{G}(z) \Sigma ^{1/2} = G(z) - G(z) V \frac{\phi ^{1/2} z}{D^{-1} + W(z)} V^* G(z). \end{aligned}$$
    (3.35)

Proof

To prove (i), we write the condition \(\mu \in \sigma (Q)\) as

$$\begin{aligned} 0&= \det \left( {\Sigma ^{1/2} H \Sigma ^{1/2} - \mu }\right) = \det \left( {H - \mu \Sigma ^{-1}}\right) \det (\Sigma ) \\&= \det \left( {1 + G(\mu ) (1 - \Sigma ^{-1}) \mu }\right) \det (H - \mu ) \det (\Sigma ), \end{aligned}$$

where we used that \(\mu \notin \sigma (H)\). Using

$$\begin{aligned} 1 - \Sigma ^{-1} = V \frac{D}{\phi ^{-1/2} + D} V^*, \end{aligned}$$

the matrix identity \(\det (1 + XY) = \det (1 + YX)\), and \(\det (\Sigma ) \ne 0\), we find

$$\begin{aligned} 0 = \det \left( {1 + \frac{D}{\phi ^{-1/2} + D} \mu V^* G(\mu ) V}\right) , \end{aligned}$$

and the claim follows.

To prove (ii), we write

$$\begin{aligned} \Sigma ^{1/2} \widetilde{G}(z) \Sigma ^{1/2} = (H - \Sigma ^{-1} z)^{-1} = \left( {H - z + (1 - \Sigma ^{-1}) z}\right) ^{-1}. \end{aligned}$$

The claim now follows from the identity

$$\begin{aligned} (A + S B T)^{-1} = A^{-1} - A^{-1} S \left( {B^{-1} + T A^{-1} S}\right) ^{-1} T A^{-1} \end{aligned}$$
(3.36)

with \(A = H - z\), \(B = D (\phi ^{-1/2} + D)^{-1}\), \(S = V\), and \(T = zV^*\). \(\square \)
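Both parts of Lemma 3.10 are deterministic identities, so they can be sanity-checked on a small random instance; the sizes, the Gaussian sample matrix, and the entries of \(D\) below are arbitrary choices, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, r = 8, 12, 2
phi = M / N
X = rng.standard_normal((M, N)) / (M * N) ** 0.25
H = X @ X.T                                    # nonnegative definite

# Sigma = 1 + phi^{1/2} V D V^* with V an M x r isometry
V, _ = np.linalg.qr(rng.standard_normal((M, r)))
D = np.diag([1.5, -0.4])
Sigma = np.eye(M) + np.sqrt(phi) * V @ D @ V.T
evals, U = np.linalg.eigh(Sigma)
Sig_half = U @ np.diag(np.sqrt(evals)) @ U.T   # symmetric square root
Q = Sig_half @ H @ Sig_half

def W(z):
    G = np.linalg.inv(H - z * np.eye(M))
    F = np.sqrt(phi) * (np.eye(M) + z * G)     # F(z) as in (3.24)
    return V.T @ F @ V

Dinv = np.linalg.inv(D)
# (i), eq. (3.34): every eigenvalue mu of Q makes D^{-1} + W(mu) singular
for mu in np.linalg.eigvalsh(Q):
    s = np.linalg.svd(Dinv + W(mu), compute_uv=False)
    assert s[-1] < 1e-7 * max(1.0, s[0])

# (ii), eq. (3.35), at a complex spectral parameter
z = 0.9 + 0.5j
G = np.linalg.inv(H - z * np.eye(M))
Gt = np.linalg.inv(Q - z * np.eye(M))
lhs = Sig_half @ Gt @ Sig_half
rhs = G - G @ V @ (np.sqrt(phi) * z * np.linalg.inv(Dinv + W(z))) @ V.T @ G
assert np.allclose(lhs, rhs)
```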

The result (3.35), when restricted to the range of \(V\), has an alternative form (3.37) which is often easier to work with, since it collects all of the randomness in the single quantity \(W(z)\) on its right-hand side.

Lemma 3.11

We have

$$\begin{aligned} V^* \widetilde{G}(z) V \!=\! \frac{1}{\phi ^{1/2} z} \left( {D^{-1} \!-\! \frac{\sqrt{1 + \phi ^{1/2} D}}{D} \frac{1}{D^{-1} \!+\! W(z)} \frac{\sqrt{1 + \phi ^{1/2} D}}{D}}\right) .\quad \end{aligned}$$
(3.37)

Proof

From (3.35) we get

$$\begin{aligned}&(1 + \phi ^{1/2} D)^{1/2} \, V^* \widetilde{G} V \, (1 + \phi ^{1/2} D)^{1/2} \\&\quad =V^*GV - V^*GV \frac{1}{(D^{-1} + \phi ^{1/2})/(z \phi ^{1/2}) + V^*GV} V^*GV. \end{aligned}$$

Applying the identity

$$\begin{aligned} A - A \, (A + B)^{-1} A = B - B \, (A + B)^{-1} B \end{aligned}$$

to the right-hand side yields

$$\begin{aligned}&(1 + \phi ^{1/2} D)^{1/2} \, V^* \widetilde{G} V \, (1 + \phi ^{1/2} D)^{1/2}\nonumber \\&\quad = \frac{1}{\phi ^{1/2} z} \left( {D^{-1} + \phi ^{1/2} - (D^{-1} + \phi ^{1/2}) \frac{1}{D^{-1} + W} (D^{-1} + \phi ^{1/2})}\right) , \end{aligned}$$

from which the claim follows. \(\square \)
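The matrix identity used above holds for arbitrary square \(A\), \(B\) with \(A + B\) invertible: both sides equal \(A (A+B)^{-1} B = B (A+B)^{-1} A\). A minimal numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

inv = np.linalg.inv(A + B)
# A - A (A+B)^{-1} A = A (A+B)^{-1} B  and  B - B (A+B)^{-1} B = B (A+B)^{-1} A
assert np.allclose(A - A @ inv @ A, B - B @ inv @ B)
```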

4 Eigenvalue locations

In this section we prove Theorems 2.3 and 2.7. The arguments are similar to those of [27, Section 6], and we therefore only sketch the proofs. The proof of [27, Section 6] relies on three main steps: (i) establishing a forbidden region which contains with high probability no eigenvalues of \(Q\); (ii) a counting estimate for the special case where \(D\) does not depend on \(N\), which ensures that each connected component of the allowed region (complement of the forbidden region) contains exactly the right number of eigenvalues of \(Q\); and (iii) a continuity argument where the counting result of (ii) is extended to arbitrary \(N\)-dependent \(D\) using the gaps established in (i) and the continuity of the eigenvalues as functions of the matrix entries. The steps (ii) and (iii) are exactly the same as in [27], and will not be repeated here. The step (i) differs slightly from that of [27], and in the proofs below we explain these differences.

We need the following eigenvalue interlacing result, which is purely deterministic. It holds for any nonnegative definite \(M \times M\) matrix \(H\) and any rank-one deformation of the form \(Q = (1 + \tilde{d} \mathbf{{v}} \mathbf{{v}}^*)^{1/2} H (1 + \tilde{d} \mathbf{{v}} \mathbf{{v}}^*)^{1/2}\) with \(\tilde{d} \geqslant -1\) and \(\mathbf{{v}} \in \mathbb {R}^M\).

Lemma 4.1

(Eigenvalue interlacing) Let \(|{\mathcal {R}} | = 1\) and \(D = d \in {\mathcal {D}}\). For \(d > 0\) we have

$$\begin{aligned} \mu _1 \geqslant \lambda _1 \geqslant \mu _2 \geqslant \cdots \geqslant \lambda _{M-1} \geqslant \mu _M \geqslant \lambda _M \end{aligned}$$

and for \(d < 0\) we have

$$\begin{aligned} \lambda _1 \geqslant \mu _1 \geqslant \lambda _2 \geqslant \cdots \geqslant \mu _{M-1} \geqslant \lambda _M \geqslant \mu _M. \end{aligned}$$

Proof

Using a simple perturbation argument (using that eigenvalues depend continuously on the matrix entries), we may assume without loss of generality that \(\lambda _1,\ldots , \lambda _M\) are all positive and distinct. Writing \(\Sigma = 1 + \phi ^{1/2} d \mathbf{{v}} \mathbf{{v}}^*\), we get from (3.35) that

$$\begin{aligned} \widetilde{G}_{\mathbf{{v}} \mathbf{{v}}}(z)&= a^2 G_{\mathbf{{v}} \mathbf{{v}}}(z) - a^2 G_{\mathbf{{v}} \mathbf{{v}}}(z)^2 \frac{1}{b(z)^{-1} + G_{\mathbf{{v}} \mathbf{{v}}}(z)}, \qquad a \mathrel {\mathop :}=(\Sigma ^{-1/2})_{\mathbf{{v}} \mathbf{{v}}} , \\&\quad b(z) \mathrel {\mathop :}=\frac{z}{1 + \phi ^{-1/2} d^{-1}}. \end{aligned}$$

Note that \(a > 0\). Thus we get

$$\begin{aligned} \frac{1}{G_{\mathbf{{v}} \mathbf{{v}}}(z)} + b(z) = \frac{a^2}{\widetilde{G}_{\mathbf{{v}} \mathbf{{v}}}(z)}. \end{aligned}$$

Writing this in spectral decomposition yields

$$\begin{aligned} \left( {\sum _i \frac{\langle {\mathbf{{v}}} , {\varvec{\zeta }_i}\rangle ^2}{\lambda _i - z}}\right) ^{-1} = a^2 \left( {\sum _i \frac{\langle {\mathbf{{v}}} , {\varvec{\xi }_i}\rangle ^2}{\mu _i - z}}\right) ^{-1} - b(z). \end{aligned}$$
(4.1)

As above, a simple perturbation argument implies that we may without loss of generality assume that all scalar products in (4.1) are nonzero. Now take \(z \in (0, \infty )\). Note that \(b(z)\) and \(d\) have the same sign.

To conclude the proof, we observe that the left-hand side of (4.1) defines a function of \(z \in (0,\infty )\) with \(M - 1\) singularities and \(M\) zeros, which is smooth and decreasing away from the singularities. Moreover, its zeros are the eigenvalues \(\lambda _1,\ldots , \lambda _M\). The interlacing property now follows from the fact that \(z\) is an eigenvalue of \(Q\) if and only if the left-hand side of (4.1) is equal to \(-b(z)\). \(\square \)
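Lemma 4.1 is deterministic and can be tested directly on a toy rank-one deformation; the sizes, the Gaussian sample matrix, and the value \(d = 0.8 > 0\) below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 6, 10
phi = M / N
X = rng.standard_normal((M, N)) / (M * N) ** 0.25
H = X @ X.T                                   # nonnegative definite

v = rng.standard_normal(M)
v /= np.linalg.norm(v)
d = 0.8                                       # d > 0
# Sigma = 1 + phi^{1/2} d vv^*; its square root is 1 + c vv^* with (1+c)^2 = 1 + phi^{1/2} d
c = np.sqrt(1 + np.sqrt(phi) * d) - 1
Sig_half = np.eye(M) + c * np.outer(v, v)
Q = Sig_half @ H @ Sig_half

lam = np.sort(np.linalg.eigvalsh(H))[::-1]    # lambda_1 >= ... >= lambda_M
mu = np.sort(np.linalg.eigvalsh(Q))[::-1]     # mu_1 >= ... >= mu_M

# mu_1 >= lambda_1 >= mu_2 >= lambda_2 >= ... >= mu_M >= lambda_M
assert np.all(mu >= lam - 1e-10)
assert np.all(lam[:-1] >= mu[1:] - 1e-10)
```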

Corollary 4.2

For the rank-\(|{\mathcal {R}} |\) model (1.10) we have

$$\begin{aligned} \mu _i \in [\lambda _{i + r}, \lambda _{i - r}] \qquad (i \in [\![{1,M}]\!]), \end{aligned}$$

with the convention that \(\lambda _{i} = 0\) for \(i > K\) and \(\lambda _i = \infty \) for \(i < 1\).

We now move on to the proof of Theorem 2.3. Note that the function \(\theta \) defined in (1.17) may be extended to a biholomorphic function from \(\{\zeta \in \mathbb {C}\mathrel {\mathop :}|\zeta | > 1\}\) to \(\{z \in \mathbb {C}\mathrel {\mathop :}z - (\phi ^{1/2} + \phi ^{-1/2}) \notin [-2,2]\}\). Moreover, using (3.18) it is easy to check that for \(|\zeta | > 1\) we have

$$\begin{aligned} w_\phi (z) = - \frac{1}{\zeta } \Longleftrightarrow z = \theta (\zeta ). \end{aligned}$$
(4.2)
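The function \(\theta \) is defined in (1.17), outside the present excerpt; expanding (3.18) as \(z = \phi ^{1/2} + \phi ^{-1/2} - w_\phi - w_\phi ^{-1}\) and substituting \(w_\phi = -1/\zeta \) shows that (4.2) forces \(\theta (\zeta ) = \phi ^{1/2} + \phi ^{-1/2} + \zeta + \zeta ^{-1}\). The sketch below checks (4.2) numerically under this assumption, evaluating \(w_\phi \) via (3.17) slightly above the real axis to select the correct branch of the square root.

```python
import numpy as np

phi = 2.0
c = np.sqrt(phi) + 1 / np.sqrt(phi)

def w(z):
    # Stieltjes transform (3.17), with gamma_pm = c +/- 2
    return (c - z + 1j * np.sqrt((z - (c - 2)) * ((c + 2) - z))) / 2

zeta = 1.8                       # |zeta| > 1
theta = c + zeta + 1 / zeta      # assumed closed form of theta(zeta), see lead-in
# (4.2): w_phi(theta(zeta)) = -1/zeta
assert abs(w(theta + 1e-9j) + 1 / zeta) < 1e-6
```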

Throughout the following we shall make use of the subsets of outliers

$$\begin{aligned} {\mathcal {O}}_\tau ^\pm \mathrel {\mathop :}=\bigl \{{i \mathrel {\mathop :}\pm d_i \geqslant 1 + K^{-1/3 + \tau }}\bigr \} \end{aligned}$$

for \(\tau \geqslant 0\). Note that \({\mathcal {O}} = {\mathcal {O}}_0^+ \cup {\mathcal {O}}_0^-\).

Proof of Theorem 2.3

The proof of Theorem 2.3 is similar to that of [27, Equation (2.20)]. We focus first on the outliers to the right of the bulk spectrum. Let \(\varepsilon > 0\). We shall prove that there exists an event \(\Xi \) of high probability (see Definition 3.1) such that for all \(i \in {\mathcal {O}}_{4 \varepsilon }^+\) we have

$$\begin{aligned} \mathbf{{1}} (\Xi ) |\mu _i - \theta (d_i) | \leqslant C \Delta (d_i) K^{-1/2 + \varepsilon } \end{aligned}$$
(4.3)

and for \(i \in [\![{|{\mathcal {O}}_{4 \varepsilon }^+ | + 1, |{\mathcal {O}}_{4 \varepsilon }^+ | + r}]\!]\) we have

$$\begin{aligned} \mathbf{{1}} (\Xi ) |\mu _i - \gamma _+ | \leqslant C K^{-2/3 + 8 \varepsilon }. \end{aligned}$$
(4.4)

Before proving (4.3) and (4.4), we show how they imply (2.4) for \(d_i > 0\) and (2.6). From (4.4) we get for \(i\) satisfying \(K^{-1/3} \leqslant d_i - 1 \leqslant K^{-1/3 + 4 \varepsilon }\)

$$\begin{aligned} \mathbf{{1}} (\Xi ) |\mu _i - \theta (d_i) |&\leqslant \mathbf{{1}} (\Xi ) \left( {|\mu _i - \gamma _+ | + |\theta (d_i) - \gamma _+ |}\right) \nonumber \\&\leqslant C K^{-2/3 + 8 \varepsilon } \leqslant C \Delta (d_i) K^{-1/2 + 8 \varepsilon }. \end{aligned}$$
(4.5)

Since \(\varepsilon > 0\) was arbitrary, (2.4) for \(d_i > 0\) and (2.6) follow from (4.3) and (4.5).

What remains is the proof of (4.3) and (4.4). As in [27, Proposition 6.5], the first step is to prove that with high probability there are no eigenvalues outside a neighbourhood of the classical outlier locations \(\theta (d_i)\). To that end, we define for each \(i \in {\mathcal {O}}_\varepsilon ^+\) the interval

$$\begin{aligned} I_i(D) \mathrel {\mathop :}=\Bigl [{\theta (d_i) - \Delta (d_i) K^{-1/2 + \varepsilon } ,\, \theta (d_i) + \Delta (d_i) K^{-1/2 + \varepsilon }}\Bigr ]. \end{aligned}$$

Moreover, we set \(I_0 \mathrel {\mathop :}=[0, \theta (1 + K^{-1/3 + 2 \varepsilon })]\).

We now claim that with high probability the complement of the set \(I(D) \mathrel {\mathop :}=I_0 \cup \bigcup _{i \in {\mathcal {O}}_{\varepsilon }^+} I_i(D)\) contains no eigenvalues of \(Q\). Indeed, from Theorem 3.5 and Corollary 3.9 combined with Remark 3.3 (with small enough \(\omega \equiv \omega (\varepsilon )\)), we find that there exists an event \(\Xi \) of high probability such that \(|\lambda _i - \gamma _+ | \leqslant K^{-2/3 + \varepsilon }\) for \(i \in [\![{1,2r}]\!]\) and

$$\begin{aligned} \mathbf{{1}} (\Xi ) \bigl \Vert W(x) - w_{\phi }(x) \bigr \Vert \leqslant {\mathcal {E}}(x) \, K^{-1/2 + \varepsilon /2} \end{aligned}$$

for all \(x \notin I_0\), where we defined

$$\begin{aligned} {\mathcal {E}}(x) \mathrel {\mathop :}={\left\{ \begin{array}{ll} \kappa (x)^{-1/4} &{} \text {if } \kappa (x) \leqslant 1\\ \frac{1}{\kappa (x)^2} \left( {1 + \frac{\kappa (x)}{1 + \phi ^{-1/2}}}\right) &{} \text {if } \kappa (x) > 1. \end{array}\right. } \end{aligned}$$

In particular, we have \(\mathbf{{1}} (\Xi ) \lambda _1 \leqslant \theta (1 + K^{-1/3 + \varepsilon })\). Hence we find from (3.34) that on the event \(\Xi \) the value \(x \notin I_0\) is an eigenvalue of \(Q\) if and only if the matrix

$$\begin{aligned} \mathbf{{1}} (\Xi ) \left( {D^{-1} + W(x)}\right) = \mathbf{{1}} (\Xi ) \left( {D^{-1} + w_\phi (x) + O({\mathcal {E}}(x) K^{-1/2 + \varepsilon /2})}\right) \end{aligned}$$

is singular. Since \(-d_i^{-1} = w_\phi (\theta (d_i))\) for \(i \in {\mathcal {O}}_\varepsilon ^+\), we conclude from the definition of \(I(D)\) that it suffices to show that if \(x \notin I(D)\) then

$$\begin{aligned} \min _{i \in {\mathcal {O}}_\varepsilon ^+} \bigl |w_\phi (x) - w_\phi (\theta (d_i)) \bigr | \gg {\mathcal {E}}(x) K^{-1/2 + \varepsilon /2}. \end{aligned}$$
(4.6)

We prove (4.6) using the two following observations. First, \(w_\phi \) is monotone increasing on \((\gamma _+, \infty )\) and

$$\begin{aligned} w_\phi '(x) \asymp (d_i^2 - 1)^{-1} \qquad (x \in I_i(D)), \end{aligned}$$

as follows from (4.2). Second,

$$\begin{aligned} \Delta (d_i) \asymp \frac{{\mathcal {E}}(\theta (d_i))}{|w_\phi '(\theta (d_i)) |} = (d_i^2 - 1) {\mathcal {E}}(\theta (d_i)). \end{aligned}$$

We omit further details, which may be found e.g. in [27, Section 6]. Thus we conclude that on the event \(\Xi \) the complement of \(I(D)\) contains no eigenvalues of \(Q\).

The next step of the proof consists in making sure that the allowed neighbourhoods \(I_i(D)\) contain exactly the right number of outliers; the counting argument (sketched in the steps (ii) and (iii) at the beginning of this section) follows that of [27, Section 6]. First we consider the case \(D = D(0)\) where for all \(i \ne j \in {\mathcal {O}}_{\varepsilon }^+\) we have \(d_i(0),d_j(0) \geqslant 2\) and \(|d_i(0) - d_j(0) | \geqslant 1\), and show that each interval in the family \(\{I_i(D(0)) \mathrel {\mathop :}i \in {\mathcal {O}}_{\varepsilon }^+\}\) contains exactly one eigenvalue of \(Q\) (see [27, Proposition 6.6]). We then deduce the general case by a continuity argument, by choosing an appropriate continuous path \((D(t))_{t \in [0,1]}\) joining the initial configuration \(D(0)\) to the desired final configuration \(D = D(1)\). The continuity argument requires the existence of a gap in the set \(I(D)\) to the left of \(\bigcup _{i \in {\mathcal {O}}_{4 \varepsilon }^+} I_i(D)\). The existence of such a gap follows easily from the definition of \(I(D)\) and the fact that \(|{\mathcal {R}} |\) is bounded. The details are the same as in [27, Section 6.5]. Hence (4.3) follows. Moreover, (4.4) follows from the same argument combined with Corollary 4.2 for a lower bound on \(\mu _i\). This concludes the analysis of the outliers to the right of the bulk spectrum.

The case of outliers to the left of the bulk spectrum is analogous. Here we assume that \(\phi < 1 - \tau \). The argument is exactly the same as for \(d_i > 0\), except that we use the bound (3.32) to the left of the bulk spectrum as well as \(|\lambda _i - \gamma _- | \leqslant K^{-2/3 + \varepsilon }\) for \(i \in [\![{K-2r, K}]\!]\) with high probability. \(\square \)

Proof of Theorem 2.7

We only give the proof of (2.9); the proof of (2.10) is analogous. Fix \(\varepsilon > 0\). By Theorem 2.3, Theorem 3.5, Theorem 3.2, Lemma 3.7, and Remark 3.3, there exists a high-probability event \(\Xi \equiv \Xi _N(\varepsilon )\) satisfying the following conditions.

  1. (i)

    We have

    $$\begin{aligned}&\mathbf{{1}} (\Xi ) |\mu _{s_+ + 1} - \gamma _+ | \leqslant K^{-2/3 + \varepsilon },\nonumber \\&\mathbf{{1}} (\Xi ) |\lambda _i-\gamma _i | \leqslant i^{-1/3}K^{-2/3 + \varepsilon }\quad (i \leqslant (1 - \tau )K). \end{aligned}$$
    (4.7)
  2. (ii)

    For \(z \in \mathbf{{S}}(\varepsilon , K)\) we have

    $$\begin{aligned} \mathbf{{1}} (\Xi ) \bigl \Vert W(z) - w_{\phi }(z) \bigr \Vert \leqslant K^\varepsilon \left( {\sqrt{\frac{\hbox {Im }w_{\phi }(z)}{K \eta }} + \frac{1}{K \eta }}\right) \end{aligned}$$
    (4.8)

    and

    $$\begin{aligned} \max _{i,j} \bigl |\langle {\mathbf{{v}}_i} , {G(z) \mathbf{{v}}_j}\rangle - m_{\phi ^{-1}}(z) \delta _{ij} \bigr | \leqslant K^\varepsilon \left( {\sqrt{\frac{\hbox {Im }m_{\phi ^{-1}}(z)}{M \eta }} + \frac{1}{M \eta }}\right) . \end{aligned}$$
    (4.9)

For the following we fix a realization \(H \in \Xi \). We suppose first that

$$\begin{aligned} \alpha _+ \geqslant K^{-1/3 + \varepsilon }, \end{aligned}$$
(4.10)

and define \(\eta \mathrel {\mathop :}=K^{-1 + 2 \varepsilon } \alpha _+^{-1}\). Now suppose that \(x\) satisfies

$$\begin{aligned} x \in \bigl [{\gamma _+ - 1, \gamma _+ + K^{-2/3 + 2 \varepsilon }}\bigr ], \qquad {{\mathrm{dist}}}(x, \sigma (H)) \;>\; \eta . \end{aligned}$$
(4.11)

We shall show, using (3.34), that any \(x\) satisfying (4.11) cannot be an eigenvalue of \(Q\). First we deduce from (4.8) that

$$\begin{aligned} \bigl \Vert W(x) - W(x + \mathrm {i}\eta ) \bigr \Vert \leqslant C (1 + \phi ) \max _{i} \hbox {Im }G_{\mathbf{{v}}_i \mathbf{{v}}_i}(x + \mathrm {i}\eta ). \end{aligned}$$
(4.12)

The estimate (4.12) follows by spectral decomposition of \(F(\cdot )\) together with the estimate \(2 |\lambda _i - x | \geqslant \sqrt{(\lambda _i - x)^2 + \eta ^2}\) for all \(i\); for instance, writing a resolvent entry in spectral form, \(G_{\mathbf{{v}} \mathbf{{v}}}(z) = \sum _k |\langle {\mathbf{{u}}_k} , {\mathbf{{v}}}\rangle |^2 (\lambda _k - z)^{-1}\) with \(\mathbf{{u}}_k\) the eigenvectors of \(H\), this estimate yields \(\bigl |G_{\mathbf{{v}} \mathbf{{v}}}(x) - G_{\mathbf{{v}} \mathbf{{v}}}(x + \mathrm {i}\eta ) \bigr | \leqslant 2 \hbox {Im }G_{\mathbf{{v}} \mathbf{{v}}}(x + \mathrm {i}\eta )\). We get from (4.12) and Lemma 3.6 that

$$\begin{aligned} W(x)&= w_\phi (x + \mathrm {i}\eta ) + O \left( {\hbox {Im }w_\phi (x + \mathrm {i}\eta ) + \frac{K^{\varepsilon }}{K \eta }}\right) \nonumber \\&= -1 + O \left( {\sqrt{\kappa (x)} + \sqrt{\eta } + K^{-\varepsilon } \alpha _+}\right) , \end{aligned}$$

where we use the notation \(A = B + O(t)\) to mean \(\Vert A - B \Vert \leqslant C t\). Recalling (3.34), we conclude that on the event \(\Xi \) the value \(x\) is not an eigenvalue of \(Q\) provided

$$\begin{aligned} \min _i|1/d_i - 1 | \geqslant K^{\varepsilon /2} \left( {\sqrt{\kappa (x)} + \sqrt{\eta } + K^{-\varepsilon } \alpha _+}\right) . \end{aligned}$$

It is easy to check that this condition is satisfied if

$$\begin{aligned} \kappa (x) + \eta \leqslant C K^{-\varepsilon } \alpha _+^2, \end{aligned}$$

which holds provided that

$$\begin{aligned} \kappa (x) \leqslant C K^{-\varepsilon } \alpha _+^2, \end{aligned}$$

where we used (4.10); indeed, \(\eta = K^{-1 + 2 \varepsilon } \alpha _+^{-1} \leqslant C K^{-\varepsilon } \alpha _+^2\) precisely when \(\alpha _+^3 \geqslant C^{-1} K^{-1 + 3 \varepsilon }\), which follows from (4.10). Recalling (4.7), we therefore conclude that for \(i \leqslant K^{1 - 2 \varepsilon } \alpha _+^3\) the set

$$\begin{aligned} \Bigl \{{x \in \bigl [{\lambda _{i - r - 1}, \gamma _+ + K^{-2/3 + 2 \varepsilon }}\bigr ] \mathrel {\mathop :}{{\mathrm{dist}}}(x, \sigma (H)) > K^{-1 + 2 \varepsilon }\alpha _+^{-1}}\Bigr \} \end{aligned}$$

contains no eigenvalue of \(Q\).
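To sketch how the rigidity bound enters here, take \(\kappa (x)\) to denote the distance from \(x\) to \(\gamma _+\), and use the standard square-root decay of the classical locations near the edge, \(\gamma _+ - \gamma _j \leqslant C (j/K)^{2/3}\) (both are conventions not restated in this excerpt). Then for \(r + 1 < i \leqslant K^{1 - 2 \varepsilon } \alpha _+^3\) and \(x \in [\lambda _{i - r - 1}, \gamma _+]\) we have, by (4.7),

$$\begin{aligned} \kappa (x) \leqslant \gamma _+ - \lambda _{i - r - 1} \leqslant (\gamma _+ - \gamma _{i - r - 1}) + K^{-2/3 + \varepsilon } \leqslant C \left( {\frac{i}{K}}\right) ^{2/3} + K^{-2/3 + \varepsilon } \leqslant C K^{-\varepsilon } \alpha _+^2, \end{aligned}$$

where in the last step we used \(i \leqslant K^{1 - 2 \varepsilon } \alpha _+^3\) together with \(\alpha _+^2 \geqslant K^{-2/3 + 2 \varepsilon }\) from (4.10).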

The next step of the proof is a counting argument (sketched in the steps (ii) and (iii) at the beginning of this section), which uses the eigenvalue interlacing from Lemma 4.1. The details are the same as in [27, Section 6], and are hence omitted here. The counting argument implies that for \(i \leqslant K^{1 - 2 \varepsilon } \alpha _+^3\) and assuming (4.10) we have

$$\begin{aligned} |\mu _{i + s_+} - \lambda _i | \leqslant C K^{-1 + 2 \varepsilon }\alpha _+^{-1}. \end{aligned}$$
(4.13)

What remains is to check (4.13) for the cases \(\alpha _+ < K^{-1/3 + \varepsilon }\) and \(i > K^{1 - 2 \varepsilon } \alpha _+^3\).

Suppose first that \(\alpha _+ < K^{-1/3 + \varepsilon }\). Then using the rigidity from (4.7) and interlacing from Corollary 4.2 we find

$$\begin{aligned} |\mu _{i + s_+} - \lambda _i | \leqslant C \, i^{-1/3}K^{-2/3 + \varepsilon } \leqslant C K^{-1 + 2 \varepsilon }\alpha _+^{-1}, \end{aligned}$$

where we used the trivial bound \(i \geqslant 1\). Similarly, if \(i > K^{1 - 2 \varepsilon } \alpha _+^3\) satisfies \(i \leqslant (1 - \tau )K\), we may repeat the same estimate.
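In the second case, the explicit computation reads (using only \(i > K^{1 - 2 \varepsilon } \alpha _+^3\)):

$$\begin{aligned} i^{-1/3} K^{-2/3 + \varepsilon } \leqslant \bigl ({K^{1 - 2 \varepsilon } \alpha _+^3}\bigr )^{-1/3} K^{-2/3 + \varepsilon } = K^{-1 + 5 \varepsilon /3} \alpha _+^{-1} \leqslant K^{-1 + 2 \varepsilon } \alpha _+^{-1}, \end{aligned}$$

so that the rigidity from (4.7) and the interlacing from Corollary 4.2 again give \(|\mu _{i + s_+} - \lambda _i | \leqslant C K^{-1 + 2 \varepsilon } \alpha _+^{-1}\).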

We conclude that (4.13) holds under the sole assumption that \(i \leqslant (1 - \tau )K\). Since \(\varepsilon > 0\) was arbitrary, (2.9) follows. \(\square \)

5 Outlier eigenvectors

In this section we focus on the outlier eigenvectors \(\xi _a\), \(a \in {\mathcal {O}}\). Here we in fact prove Theorem 2.11 under the stronger assumption

$$\begin{aligned} 1 + K^{-1/3 + \tau } \leqslant d_i \leqslant \tau ^{-1} \qquad (i \in A) \end{aligned}$$
(5.1)

instead of \(1 + K^{-1/3} \leqslant d_i \leqslant \tau ^{-1}\). Improving the lower bound from \(1 + K^{-1/3 + \tau }\) to the claimed \(1 + K^{-1/3}\) requires a completely different approach, relying on eigenvector delocalization bounds; it is presented in Sect. 6 in conjunction with results for the non-outlier eigenvectors \(\xi _a\), \(a \notin {\mathcal {O}}\).

The proof of Theorem 2.16 is similar to that of Theorem 2.11; one has to adapt the proof to cover the range \(d_i \in [1 + \tau , \infty )\) instead of \(d_i \in [1 + K^{-1/3}, \tau ^{-1}]\). The key input is the extension of the spectral domain from Corollary 3.9. For the sake of brevity we omit the details of the proof of Theorem 2.16, and focus solely on Theorem 2.11.

The following proposition is the main result of this section.

Proposition 5.1

Fix \(\tau > 0\). Suppose that \(A\) satisfies (5.1). Then for all \(i,j = 1,\ldots , M\) we have

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle&= \delta _{ij} \mathbf{{1}} (i \in A) u(d_i) + O_\prec \Biggl [ \frac{\mathbf{{1}} (i,j \in A)}{(d_i - 1)^{1/4} (d_j - 1)^{1/4} M^{1/2}}\nonumber \\&+ \frac{\sqrt{\sigma _i \sigma _j}}{M} \left( {\frac{1}{\nu _i} + \frac{\mathbf{{1}} (i \in A)}{d_i - 1}}\right) \left( {\frac{1}{\nu _j} + \frac{\mathbf{{1}} (j \in A)}{d_j - 1}}\right) \nonumber \\&+ \frac{\mathbf{{1}} (i \in A) \mathbf{{1}} (j \notin A) (d_i - 1)^{1/2} \sqrt{\sigma _j}}{(1 + \phi )^{1/4} \nu _j M^{1/2}} + (i \leftrightarrow j) \Biggr ], \end{aligned}$$
(5.2)

where the symbol \((i \leftrightarrow j)\) denotes the preceding terms with \(i\) and \(j\) interchanged.

Note that, under the assumption (5.1), Theorem 2.11 is an easy consequence of Proposition 5.1. As explained above, the proof of Theorem 2.11 in full generality is given in Sect. 6, where we give the additional argument required to relax (5.1).

The rest of this section is devoted to the proof of Proposition 5.1.

5.1 Non-overlapping outliers

We first prove a slightly stronger version of (5.2) under the additional non-overlapping condition

$$\begin{aligned} \nu _i(A) \geqslant (d_i - 1)^{-1/2} K^{-1/2 + \delta } \end{aligned}$$
(5.3)

for all \(i \in A\), where \(\delta > 0\) is a constant. This is a precise version of the second condition of (2.14), whose interpretation was given below (2.14): an outlier indexed by \(A\) cannot overlap with an outlier indexed by \(A^c\). Note, however, that there is no restriction on the outliers indexed by \(A\) overlapping among themselves. The assumption (5.3) will be removed in Sect. 5.2. The main estimate for non-overlapping outliers is the following.

Proposition 5.2

Fix \(\tau > 0\) and \(\delta > 0\). Suppose that \(A\) satisfies (5.1) and (5.3) for all \(i \in A\). Then for all \(i,j = 1,\ldots , M\) we have

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle&= \delta _{ij} \mathbf{{1}} (i \in A) u(d_i) + O_\prec \Biggl [ \frac{\mathbf{{1}} (i,j \in A)}{(d_i - 1)^{1/4} (d_j - 1)^{1/4} M^{1/2}}\nonumber \\&+ \frac{\sqrt{\sigma _i \sigma _j}}{M} \left( {\frac{1}{\nu _i} + \frac{\mathbf{{1}} (i \in A)}{d_i - 1}}\right) \left( {\frac{1}{\nu _j} + \frac{\mathbf{{1}} (j \in A)}{d_j - 1}}\right) \nonumber \\&+ \frac{\mathbf{{1}} (i \in A) \mathbf{{1}} (j \notin A) (d_i - 1)^{1/2} \sqrt{\sigma _j}}{(1 + \phi )^{1/4} |d_i - d_j | M^{1/2}} + (i \leftrightarrow j) \Biggr ]. \end{aligned}$$
(5.4)

Remark 5.3

The only difference between (5.2) and (5.4) is the term proportional to \(\mathbf{{1}} (j \notin A)\) on the last line. In order to prove (5.2) without the overlapping condition (5.3), it is necessary to start from the stronger bound (5.4); see Sect. 5.2 below.

The rest of this subsection is devoted to the proof of Proposition 5.2. We begin by defining \(\omega \mathrel {\mathop :}=\tau / 2\) and letting \(\varepsilon < \min \{{\tau /3, \delta }\}\) be a positive constant to be determined later. We choose a high-probability event \(\Xi \equiv \Xi _N(\varepsilon , \tau )\) (see Definition 3.1) satisfying the following conditions.

  1. (i)

    We have

    $$\begin{aligned} \mathbf{{1}} (\Xi ) \bigl |W_{ij}(z) - w_\phi (z) \delta _{ij} \bigr | \leqslant |z - \gamma _+ |^{-1/4} K^{-1/2 + \varepsilon } \end{aligned}$$
    (5.5)

    for \(i,j \in {\mathcal {R}}\), large enough \(K\), and all \(z\) in the set

    $$\begin{aligned} \bigl \{{z \in \mathbb {C}\mathrel {\mathop :}\hbox {Re }z \geqslant \gamma _+ + K^{-2/3 + \omega } ,\, |z | \leqslant \omega ^{-1}}\bigr \}. \end{aligned}$$
    (5.6)
  2. (ii)

    For all \(i\) satisfying \(1 + K^{-1/3} \leqslant d_i \leqslant \omega ^{-1}\) we have

    $$\begin{aligned} \mathbf{{1}} (\Xi ) |\mu _{i} - \theta (d_i) | \leqslant (d_i - 1)^{1/2} \, K^{-1/2 + \varepsilon }. \end{aligned}$$
    (5.7)
  3. (iii)

    We have

    $$\begin{aligned} \mathbf{{1}} (\Xi ) |\mu _{s_++1} - \gamma _+ | \leqslant K^{-2/3+\varepsilon }. \end{aligned}$$
    (5.8)

Note that such an event \(\Xi \) exists. Indeed, (5.7) and (5.8) may be satisfied using Theorem 2.3, and (5.5) using Theorem 3.7 combined with Remark 3.3.

For the sequel we fix a realization \(H \in \Xi \) satisfying the conditions (i)–(iii) above. Hence, the rest of the proof of Proposition 5.2 is entirely deterministic, and the randomness only enters in ensuring that \(\Xi \) has high probability. Our starting point is a contour integral representation of the projection \(P_A\). In order to construct the contour, we define for each \(i \in A\) the radius

$$\begin{aligned} \rho _i \mathrel {\mathop :}=\frac{\nu _i \wedge (d_i - 1)}{2}. \end{aligned}$$
(5.9)

We define the contour \(\Gamma \mathrel {\mathop :}=\partial \Upsilon \) as the boundary of the union of discs \(\Upsilon \mathrel {\mathop :}=\bigcup _{i \in A} B_{\rho _i}(d_i)\), where \(B_\rho (d)\) is the open disc of radius \(\rho \) around \(d\). We shall sometimes need the decomposition \(\Gamma = \bigcup _{i \in A} \Gamma _i\), where \(\Gamma _i \mathrel {\mathop :}=\Gamma \cap \partial B_{\rho _i}(d_i)\). See Fig. 3 for an illustration of \(\Gamma \).

Fig. 3

The integration contour \(\Gamma = \bigcup _{i \in A} \Gamma _i\). In this example \(\Gamma \) consists of two components, and we have \(|{\mathcal {R}} | = 6\) with \(A = \{2,3,4,5\}\). We draw the locations of \(d_i\) with \(i \in A\) using black dots and the other \(d_i\) using white dots. The contour is constructed by drawing circles of radius \(\rho _i\) around each \(d_i\) for \(i \in A\) (depicted with dotted lines). The piece \(\Gamma _i\) consists of the points on the circle centred at \(d_i\) that lie outside all other circles

We shall have to use the estimate (5.5) on the set \(\overline{\theta (\Upsilon )}\). Its applicability is an immediate consequence of the following lemma.

Lemma 5.4

The set \(\overline{\theta (\Upsilon )}\) is contained in the set (5.6).

Proof

It is easy to check that \(\theta (\zeta ) \leqslant \omega ^{-1}\) for all \(\zeta \in \Upsilon \). In order to check the lower bound on \(\hbox {Re }\theta (\zeta )\), we note that for any \(\alpha \in (0,1)\) there exists a constant \(c \equiv c(\alpha , \tau )\) such that

$$\begin{aligned} \hbox {Re }\theta (\zeta ) \geqslant \gamma _+ + c (\hbox {Re }\zeta - 1)^2 \end{aligned}$$

for \(\hbox {Re }\zeta \geqslant 1\), \(|\hbox {Im }\zeta | \leqslant \alpha (\hbox {Re }\zeta - 1)\), and \(|\zeta | \leqslant \tau ^{-1}\). By (5.9) every \(\zeta \in \Upsilon \) lies in some disc \(B_{\rho _i}(d_i)\) with \(\rho _i \leqslant (d_i - 1)/2\), so that \(\hbox {Re }\zeta \geqslant d_i - \rho _i \geqslant 1 + (d_i - 1)/2 \geqslant 1 + K^{-1/3 + \tau }/2\) by (5.1); moreover, the tangent lines from the point \(1\) to \(\partial B_{\rho _i}(d_i)\) make an angle at most \(\arcsin (1/2) = \pi /6\) with the real axis, so that \(|\hbox {Im }\zeta | \leqslant \tan (\pi /6) (\hbox {Re }\zeta - 1)\). Hence the claim follows by choosing \(\alpha = 1/\sqrt{3}\). \(\square \)

Lemma 5.5

Each outlier \(\mu _{i}\), \(i \in A\), lies in \(\theta (\Upsilon )\), and all other eigenvalues of \(Q\) lie in the complement of \(\overline{\theta (\Upsilon )}\).

Proof

It suffices to prove that (a) for each \(i \in A\) we have \(\mu _{i} \in \theta (B_{\rho _i}(d_i))\) and (b) all the other eigenvalues \(\mu _j\) satisfy \(\mu _j \notin \theta (B_{\rho _i}(d_i))\) for all \(i \in A\).

In order to prove (a), we note that

$$\begin{aligned} \rho _i \geqslant \frac{1}{2} \, (d_i - 1)^{-1/2} K^{-1/2 + \delta }, \end{aligned}$$
(5.10)

for \(i \in A\), as follows from (5.3) and (5.1). Using

$$\begin{aligned} |\theta '(\zeta ) | \asymp |\zeta - 1 | \qquad (\hbox {Re }\zeta \geqslant 1 ,\, |\zeta | \leqslant \tau ^{-1}), \end{aligned}$$
(5.11)

it is then not hard to get (a) from (5.10) and (5.7).
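Indeed (a sketch): by (5.11) the image \(\theta (B_{\rho _i}(d_i))\) contains a disc around \(\theta (d_i)\) of radius of order

$$\begin{aligned} \rho _i \, |\theta '(d_i) | \asymp \rho _i (d_i - 1) \geqslant c \, (d_i - 1)^{1/2} K^{-1/2 + \delta }, \end{aligned}$$

where the last step is (5.10), while (5.7) places \(\mu _i\) within distance \((d_i - 1)^{1/2} K^{-1/2 + \varepsilon }\) of \(\theta (d_i)\); since \(\varepsilon < \delta \), the claim (a) follows for large enough \(K\).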

In order to prove (b), we consider the two cases (i) \(1 + K^{-1/3} \leqslant d_j \leqslant \omega ^{-1}\) with \(j \notin A\), and (ii) \(j \geqslant s_+ + 1\). In the case (i), the claim (b) follows using (5.7), (5.11), and (5.3). In the case (ii), the claim (b) follows from (5.8) and the estimate

$$\begin{aligned} |\theta (\zeta ) - \gamma _+ | \asymp |\zeta - 1 |^2 \qquad (\hbox {Re }\zeta \geqslant 1 , |\zeta | \leqslant \tau ^{-1}). \end{aligned}$$
(5.12)

This concludes the proof. \(\square \)

Using the spectral decomposition of \(\widetilde{G}(z)\), Lemma 5.5, and the residue theorem, we may write the projection \(P_A\) as

$$\begin{aligned} P_A = - \frac{1}{2 \pi \mathrm {i}} \oint _{\theta (\Gamma )} \widetilde{G}(z) \, \mathrm {d}z = - \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma } \widetilde{G}(\theta (\zeta )) \, \theta '(\zeta ) \, \mathrm {d}\zeta . \end{aligned}$$

Hence we get from (3.37) that

$$\begin{aligned} V^* P_A V = \phi ^{-1/2} \, \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma } \frac{\sqrt{1 + \phi ^{1/2} D}}{D} \frac{1}{D^{-1} + W(\theta (\zeta ))} \frac{\sqrt{1 + \phi ^{1/2} D}}{D} \frac{\theta '(\zeta )}{\theta (\zeta )} \, \mathrm {d}\zeta .\nonumber \\ \end{aligned}$$
(5.13)

This is the desired integral representation of \(P_A\).

We first use (5.13) to compute \(\langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle \) in the case \(i,j \in {\mathcal {R}}\), where \(\mathbf{{v}}_i\) and \(\mathbf{{v}}_j\) lie in the range of \(V\). In that case we get from (5.13) that

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle = \frac{\sqrt{\sigma _i \sigma _j}}{\phi ^{1/2}d_i d_j} \, \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma } \left( {\frac{1}{D^{-1} + W(\theta (\zeta ))}}\right) _{ij} \frac{\theta '(\zeta )}{\theta (\zeta )} \, \mathrm {d}\zeta . \end{aligned}$$

We now perform a resolvent expansion on the denominator

$$\begin{aligned} D^{-1} + W(\theta ) = (D^{-1} + w_\phi (\theta )) - \Delta (\theta ) , \qquad \Delta (\theta ) \mathrel {\mathop :}=w_\phi (\theta ) - W(\theta ). \end{aligned}$$
(5.14)

Thus we get

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle = \frac{\sqrt{\sigma _i \sigma _j}}{\phi ^{1/2}d_i d_j} \left( {S^{(0)}_{ij} + S^{(1)}_{ij} + S^{(2)}_{ij}}\right) , \end{aligned}$$
(5.15)

where we defined

$$\begin{aligned} S^{(0)}_{ij}&\mathrel {\mathop :}= \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma } \left( {\frac{1}{D^{-1} + w_\phi (\theta (\zeta ))}}\right) _{ij} \frac{\theta '(\zeta )}{\theta (\zeta )} \, \mathrm {d}\zeta ,\end{aligned}$$
(5.16)
$$\begin{aligned} S^{(1)}_{ij}&\mathrel {\mathop :}= \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma } \left( {\frac{1}{D^{-1} + w_\phi (\theta (\zeta ))} \Delta (\theta (\zeta )) \frac{1}{D^{-1} + w_\phi (\theta (\zeta ))}}\right) _{ij} \frac{\theta '(\zeta )}{\theta (\zeta )} \, \mathrm {d}\zeta ,\qquad \end{aligned}$$
(5.17)
$$\begin{aligned} S^{(2)}_{ij}&\mathrel {\mathop :}= \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma } \left( \frac{1}{D^{-1} + w_\phi (\theta (\zeta ))} \Delta (\theta (\zeta )) \frac{1}{D^{-1} + W(\theta (\zeta ))} \right. \nonumber \\&\left. \times \Delta (\theta (\zeta )) \frac{1}{D^{-1} + w_\phi (\theta (\zeta ))}\right) _{ij} \frac{\theta '(\zeta )}{\theta (\zeta )} \, \mathrm {d}\zeta . \end{aligned}$$
(5.18)

We begin by computing

$$\begin{aligned} S^{(0)}_{ij} = \delta _{ij} \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma } \frac{1}{d_i^{-1} - \zeta ^{-1}} \, \frac{\theta '(\zeta )}{\theta (\zeta )} \, \mathrm {d}\zeta = \delta _{ij} \mathbf{{1}} (i \in A) \frac{d_i^2 - 1}{\theta (d_i)}, \end{aligned}$$
(5.19)

where we used Cauchy’s theorem, (4.2), and the fact that \(d_i\) lies in \(\Upsilon \) if and only if \(i \in A\).
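For the reader's convenience, here is the residue computation behind (5.19); we use \(\theta '(\zeta ) = 1 - \zeta ^{-2}\), which we take as given from (4.2) (not restated in this section). Writing

$$\begin{aligned} \frac{1}{d_i^{-1} - \zeta ^{-1}} = \frac{d_i \zeta }{\zeta - d_i}, \end{aligned}$$

the integrand has a simple pole at \(\zeta = d_i\), which lies inside \(\Gamma \) precisely when \(i \in A\), while \(\theta \) has no zeros on \(\Upsilon \) by Lemma 5.4. Hence

$$\begin{aligned} \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma } \frac{d_i \zeta }{\zeta - d_i} \, \frac{\theta '(\zeta )}{\theta (\zeta )} \, \mathrm {d}\zeta = \mathbf{{1}} (i \in A) \, d_i^2 \, \frac{\theta '(d_i)}{\theta (d_i)} = \mathbf{{1}} (i \in A) \, \frac{d_i^2 - 1}{\theta (d_i)}. \end{aligned}$$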

Next, we estimate

$$\begin{aligned} S^{(1)}_{ij} \!=\! d_i d_j \, \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma } \frac{f_{ij}(\zeta )}{(\zeta \!-\! d_i)(\zeta \!-\! d_j)} \, \mathrm {d}\zeta , \quad f_{ij}(\zeta ) \mathrel {\mathop :}=\zeta ^2 \Delta (\theta (\zeta )) \frac{\theta '(\zeta )}{\theta (\zeta )},\quad \end{aligned}$$
(5.20)

using the fact that \(f_{ij}\) is holomorphic inside \(\Gamma \) and satisfies the bounds

$$\begin{aligned} |f_{ij}(\zeta ) | \leqslant C \phi ^{1/2} (1 \!+\! \phi )^{-1} |\zeta \!-\! 1 |^{1/2} K^{-1/2 + \varepsilon }, \quad |f_{ij}'(\zeta ) | \leqslant C \phi ^{1/2} (1 \!+\! \phi )^{-1} |\zeta \!-\! 1 |^{-1/2} K^{-1/2 + \varepsilon }.\nonumber \\ \end{aligned}$$
(5.21)

The first bound of (5.21) follows from (5.5), (5.11), and (5.12). The second bound of (5.21) follows by plugging the first one into

$$\begin{aligned} f_{ij}'(\zeta ) = \frac{1}{2 \pi \mathrm {i}} \oint _{{\mathcal {C}}} \frac{f_{ij}(\xi )}{(\xi - \zeta )^2} \, \mathrm {d}\xi , \end{aligned}$$

where the contour \({\mathcal {C}}\) is the circle of radius \(|\zeta - 1 |/2\) centred at \(\zeta \). (By assumptions on \(\varepsilon \) and \(\omega \), the function \(f_{ij}\) is holomorphic in a neighbourhood of the closed interior of \({\mathcal {C}}\).)
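Spelling this out: on \({\mathcal {C}}\) we have \(|\xi - \zeta | = |\zeta - 1 |/2\) and hence \(|\xi - 1 | \leqslant \frac{3}{2} |\zeta - 1 |\), so the first bound of (5.21) gives

$$\begin{aligned} |f_{ij}'(\zeta ) | \leqslant \frac{\sup _{\xi \in {\mathcal {C}}} |f_{ij}(\xi ) |}{|\zeta - 1 |/2} \leqslant \frac{C \phi ^{1/2} (1 + \phi )^{-1} \bigl ({\tfrac{3}{2} |\zeta - 1 |}\bigr )^{1/2} K^{-1/2 + \varepsilon }}{|\zeta - 1 |/2} \leqslant C \phi ^{1/2} (1 + \phi )^{-1} |\zeta - 1 |^{-1/2} K^{-1/2 + \varepsilon }, \end{aligned}$$

which is the second bound of (5.21) (with a larger constant \(C\)).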

In order to estimate (5.20), we consider the three cases (i) \(i,j \in A\), (ii) \(i \in A\), \(j \notin A\), (iii) \(i \notin A\), \(j \in A\). Note that (5.20) vanishes if \(i,j \notin A\). We start with the case (i). Suppose first that \(i \ne j\) and \(d_i \ne d_j\). Then we find

$$\begin{aligned} |S^{(1)}_{ij} |&= |d_i d_j | \biggl |\frac{f_{ij}(d_i) - f_{ij}(d_j)}{d_i - d_j} \biggr | \leqslant \frac{|d_i d_j |}{|d_i - d_j |} \biggl |\int _{d_i}^{d_j} |f'_{ij}(t) | \, \mathrm {d}t \biggr | \\&\leqslant \frac{C |d_i d_j | \phi ^{1/2}}{(1 + \phi ) (d_i - 1)^{1/4} (d_j - 1)^{1/4}} K^{-1/2 + \varepsilon }. \end{aligned}$$

A simple limiting argument shows that this bound is also valid when \(d_i = d_j\), and in particular when \(i = j\). Next, in the case (ii) we get from (5.21)

$$\begin{aligned} |S^{(1)}_{ij} | = \frac{|d_i d_j f_{ij}(d_i) |}{|d_i - d_j |} \leqslant \frac{C |d_i d_j | \phi ^{1/2} (d_i - 1)^{1/2}}{(1 + \phi ) |d_i - d_j |} K^{-1/2 + \varepsilon }. \end{aligned}$$

A similar estimate holds for the case (iii). Putting all three cases together, we find

$$\begin{aligned} |S^{(1)}_{ij} |&\leqslant \frac{C \mathbf{{1}} (i,j \in A) |d_i d_j | \phi ^{1/2}}{(1 + \phi ) (d_i - 1)^{1/4} (d_j - 1)^{1/4}} K^{-1/2 + \varepsilon } \nonumber \\&+ \!\frac{C \mathbf{{1}} (i \in A) \mathbf{{1}} (j \notin A)|d_i d_j | \phi ^{1/2} (d_i - 1)^{1/2}}{(1 \!+\! \phi ) |d_i \!-\! d_j |} K^{-1/2 \!+\! \varepsilon } + (i \leftrightarrow j).\quad \end{aligned}$$
(5.22)

What remains is the estimate of \(S_{ij}^{(2)}\). Here residue calculations are unavailable, and the precise choice of the contour \(\Gamma \) is crucial. We use the following basic estimate to control the integral.

Lemma 5.6

For \(k \in A,\) \(l \in {\mathcal {R}},\) and \(\zeta \in \Gamma _k\) we have

$$\begin{aligned} |\zeta - d_l | \asymp \rho _k + |d_k - d_l |. \end{aligned}$$

Proof

The upper bound \(|\zeta - d_l | \leqslant \rho _k + |d_k - d_l |\) is trivial, so that we only focus on the lower bound. Suppose first that \(l \notin A\). Then we get \(|\zeta - d_l | \geqslant |d_k - d_l | - \rho _k\), from which the claim follows since \(|d_k - d_l | \geqslant 2 \rho _k\) by (5.9).

For the remainder of the proof we may therefore suppose that \(l \in A\). Define \(\delta \mathrel {\mathop :}=|d_k - d_l | - \rho _k - \rho _l\), the distance between the discs \(B_{\rho _k}(d_k)\) and \(B_{\rho _l}(d_l)\) (see Fig. 3). We consider the two cases \(4 \delta \leqslant |d_k - d_l |\) and \(4 \delta > |d_k - d_l |\) separately.

Suppose first that \(4 \delta \leqslant |d_k - d_l |\). Then by definition of \(\delta \) we have \(|d_k - d_l | \leqslant \frac{4}{3} (\rho _k + \rho _l)\). Now a simple estimate using the definition of \(\rho _i\) yields \(\rho _k / 5 \leqslant \rho _l \leqslant 5 \rho _k\), from which we conclude \(|d_k - d_l | \leqslant 8 \rho _k\). The claim now follows from the bound \(|\zeta - d_l | \geqslant \rho _l\).

Suppose now that \(4 \delta > |d_k - d_l |\). Hence \(\rho _k + \rho _l \leqslant \frac{3}{4} |d_k - d_l |\), so that in particular \(\rho _k \leqslant |d_k - d_l |\). Thus we get

$$\begin{aligned} |\zeta - d_l | \geqslant |d_k - d_l | - \rho _k - \rho _l \geqslant \frac{1}{4} |d_k - d_l | \geqslant \frac{1}{8} \left( {|d_k - d_l | + \rho _k}\right) \!. \end{aligned}$$

This concludes the proof. \(\square \)

From (5.18), (5.5), (5.11), and (5.12) we get

$$\begin{aligned} \bigl |S_{ij}^{(2)} \bigr | \leqslant C \oint _{\Gamma } \frac{\phi ^{1/2} |d_i d_j | K^{-1 + 2 \varepsilon }}{(1 + \phi ) |\zeta - d_i | |\zeta - d_j |} \biggl \Vert \frac{1}{D^{-1} + W(\theta (\zeta ))} \biggr \Vert \, |\mathrm {d}\zeta |, \end{aligned}$$
(5.23)

where we also used the estimate \(|\theta (\zeta ) | \asymp \phi ^{-1/2} (1 + \phi )\) for \(\zeta \in \Gamma \).

In order to estimate the matrix norm, we observe that for \(\zeta \in \Gamma _k\) we have on the one hand

$$\begin{aligned} \Vert W(\theta ) - w_\phi (\theta ) \Vert \leqslant (d_k - 1)^{-1/2} K^{-1/2 + \varepsilon } \end{aligned}$$

from (5.5) and on the other hand

$$\begin{aligned} |w_\phi (\theta ) - d_l^{-1} | \geqslant c (|\zeta - d_l | \wedge 1) \geqslant c |\zeta - d_k | = c \rho _k \geqslant c (d_k - 1)^{-1/2} K^{-1/2 + \delta } \end{aligned}$$

for any \(l \in {\mathcal {R}}\), where in the last step we used (5.10). Since \(\varepsilon < \delta \), these estimates combined with a resolvent expansion give the bound

$$\begin{aligned} \biggl \Vert \frac{1}{D^{-1} + W(\theta (\zeta ))} \biggr \Vert \leqslant \frac{1}{\min _{b \in {\mathcal {R}}} |w_\phi (\theta ) - d_b^{-1} | - \Vert W(\theta ) - w_\phi (\theta ) \Vert } \leqslant \frac{C}{\rho _k} \end{aligned}$$

for \(\zeta \in \Gamma _k\). Decomposing the integration contour in (5.23) as \(\Gamma = \bigcup _{k \in A} \Gamma _k\), and recalling that \(\Gamma _k\) has length bounded by \(2 \pi \rho _k\), we get from Lemma 5.6

$$\begin{aligned} \bigl |S_{ij}^{(2)} \bigr |&\leqslant C \sum _{k \in A} \sup _{\zeta \in \Gamma _k} \frac{\phi ^{1/2} |d_i d_j | K^{-1 + 2 \varepsilon }}{(1 + \phi ) |\zeta - d_i | |\zeta - d_j |} \nonumber \\&\leqslant C \sum _{k \in A} \frac{\phi ^{1/2} |d_i d_j | K^{-1 + 2 \varepsilon }}{(1 + \phi ) (\rho _k + |d_k - d_i |) (\rho _k + |d_k - d_j |)}. \end{aligned}$$
(5.24)

We estimate the right-hand side using Cauchy–Schwarz. For \(i \notin A\) we find, using (5.9),

$$\begin{aligned} \sum _{k \in A} \frac{1}{(\rho _k + |d_k - d_i |)^2} \leqslant \sum _{k \in A} \frac{1}{|d_k - d_i |^2} \leqslant \frac{C}{\nu _i^2}. \end{aligned}$$
(5.25)

For \(i \in A\) we use (5.9) and the estimate \(\rho _k + |d_i - d_k | \geqslant \rho _i\) for all \(k \in A\) to get

$$\begin{aligned} \sum _{k \in A} \frac{1}{(\rho _k + |d_k - d_i |)^2} \leqslant \frac{C}{\rho _i^2} \leqslant \frac{C}{\nu _i^2} + \frac{C}{(d_i - 1)^2}. \end{aligned}$$
(5.26)

From (5.24), (5.25), and (5.26), we get

$$\begin{aligned} \bigl |S_{ij}^{(2)} \bigr | \leqslant \frac{C \phi ^{1/2} |d_i d_j | K^{-1 + 2 \varepsilon }}{1 + \phi } \left( {\frac{1}{\nu _i} + \frac{\mathbf{{1}} (i \in A)}{d_i - 1}}\right) \left( {\frac{1}{\nu _j} + \frac{\mathbf{{1}} (j \in A)}{d_j - 1}}\right) . \end{aligned}$$
(5.27)

Recall that \(M \asymp (1 + \phi ) K\). Hence, plugging (5.19), (5.22), and (5.27) into (5.15), we find

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle&= \delta _{ij} \mathbf{{1}} (i \in A) u(d_i) + O \Biggl [ \frac{\mathbf{{1}} (i,j \in A) K^\varepsilon }{(d_i - 1)^{1/4} (d_j - 1)^{1/4} M^{1/2}}\nonumber \\&+ \frac{\sqrt{\sigma _i \sigma _j} K^{2 \varepsilon }}{M} \left( {\frac{1}{\nu _i} + \frac{\mathbf{{1}} (i \in A)}{d_i - 1}}\right) \left( {\frac{1}{\nu _j} + \frac{\mathbf{{1}} (j \in A)}{d_j - 1}}\right) \nonumber \\&+ \frac{\mathbf{{1}} (i \in A) \mathbf{{1}} (j \notin A) (d_i - 1)^{1/2} \sqrt{\sigma _j} K^\varepsilon }{(1 + \phi )^{1/4} |d_i - d_j | M^{1/2}} + (i \leftrightarrow j) \Biggr ]. \end{aligned}$$
(5.28)

We have proved (5.28) under the assumption that \(i, j \in {\mathcal {R}}\). The general case is an easy corollary. For general \(i,j \in [\![{1,M}]\!]\), we define \(\widehat{{\mathcal {R}}} \mathrel {\mathop :}={\mathcal {R}} \cup \{i,j\}\) and consider

$$\begin{aligned} \widehat{\Sigma }\mathrel {\mathop :}=1 + \phi ^{1/2} \widehat{V} \widehat{D} \widehat{V}^*, \qquad \widehat{V} \mathrel {\mathop :}=[\mathbf{{v}}_k]_{k \in \widehat{{\mathcal {R}}}} , \qquad \widehat{D} \mathrel {\mathop :}={{\mathrm{diag}}}(\widehat{d}_k)_{k \in \widehat{{\mathcal {R}}}}, \end{aligned}$$

where \(\widehat{d}_k \mathrel {\mathop :}=d_k\) for \(k \in {\mathcal {R}}\) and \(\widehat{d}_k \in (0,1/2)\) for \(k \in \widehat{{\mathcal {R}}} \!\setminus \! {\mathcal {R}}\). Since \(|\widehat{{\mathcal {R}}} | \leqslant r + 2\) and \(\widehat{D}\) is invertible, we may apply the result (5.28) to this modified model. Taking the limit \(\widehat{d}_k \rightarrow 0\) for \(k \in \widehat{{\mathcal {R}}} \!\setminus \! {\mathcal {R}}\) in (5.28) then yields (5.28) for general \(i,j\). Since \(\varepsilon \) may be chosen arbitrarily small, this concludes the proof of Proposition 5.2.

5.2 Removing the non-overlapping assumption

In this subsection we complete the proof of Proposition 5.1 by extending Proposition 5.2 to the case where (5.3) does not hold.

Proof of Proposition 5.1

Let \(\delta < \tau /4\). We say that \(i,j \in {\mathcal {O}}_{\tau /2}^+\) overlap if \(|d_i - d_j | \leqslant (d_i - 1)^{-1/2} K^{-1/2 + \delta }\) or \(|d_i - d_j | \leqslant (d_j - 1)^{-1/2} K^{-1/2 + \delta }\). For \(A \subset {\mathcal {O}}_{\tau }^+\) we introduce sets \(S(A), L(A) \subset {\mathcal {O}}_{\tau /2}^+\) satisfying \(S(A) \subset A \subset L(A)\). Informally, \(S(A)\) is the largest subset of \(A\) that does not overlap with its complement. It is constructed by successively choosing an index \(k \in A\) that overlaps with an index of \(A^c\) and removing \(k\) from \(A\); this process is repeated until no such \(k\) exists. One can check that the result is independent of the choice of \(k\) at each step. Note that \(S(A)\) may be empty.

Informally, \(L(A)\) is the smallest subset of \({\mathcal {O}}_{\tau /2}^+\) containing \(A\) that does not overlap with its complement. It is constructed by successively choosing an index \(k \in {\mathcal {O}}_{\tau /2}^+{\setminus } A\) that overlaps with an index of \(A\) and adding \(k\) to \(A\); this process is repeated until no such \(k\) exists. One can check that the result is independent of the choice of \(k\) at each step. See Fig. 4 for an illustration of \(S(A)\) and \(L(A)\). Throughout the following we shall repeatedly make use of the fact that, for any \(A \subset {\mathcal {O}}_{\tau }^+\), Proposition 5.2 is applicable with \((\tau , A)\) replaced by \((\tau /2, S(A))\) or \((\tau /2, L(A))\).

After these preparations, we move on to the proof of (5.2). We divide the argument into four steps.

(a) \(i = j \notin A\). We consider two cases, \(i \notin L(A)\) and \(i \in L(A)\). Suppose first that \(i \notin L(A)\). Using that \(|{\mathcal {R}} |\) is bounded, it is not hard to see that \(\nu _i(A) \asymp \nu _i(L(A))\). We now invoke Proposition 5.2 and get

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_i}\rangle \leqslant \langle {\mathbf{{v}}_i} , {P_{L(A)} \mathbf{{v}}_i}\rangle \prec \frac{\sigma _i}{M \nu _i(L(A))^2} \leqslant C \frac{\sigma _i}{M \nu _i(A)^2}. \end{aligned}$$
(5.29)

In the complementary case, \(i \in L(A)\), a simple argument yields

$$\begin{aligned} \nu _i(A) \leqslant C (d_i - 1)^{-1/2} K^{-1/2 + \delta } \leqslant C \nu _i(L(A)), \end{aligned}$$
(5.30)

as well as \(\sigma _i \asymp 1 + \phi ^{1/2}\). From Proposition 5.2 we therefore get

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_i}\rangle&\leqslant \langle {\mathbf{{v}}_i} , {P_{L(A)} \mathbf{{v}}_i}\rangle \prec \frac{d_i - 1}{1 + \phi ^{1/2}} + \frac{1}{(d_i - 1)^{1/2} M^{1/2}}\\&+ \frac{1 + \phi ^{1/2}}{M \nu _i(L(A))^2} + \frac{1 + \phi ^{1/2}}{M (d_i - 1)^2}\\&\leqslant C K^{2 \delta } \frac{d_i - 1}{1 + \phi ^{1/2}} \leqslant C K^{2 \delta } \frac{1 + \phi ^{1/2}}{M \nu _i(A)^2} \leqslant C K^{2 \delta } \frac{\sigma _i}{M \nu _i(A)^2}, \end{aligned}$$

where we used that \(M \asymp (1 + \phi ) K\). Recalling (5.29), we conclude

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_i}\rangle \prec K^{2 \delta } \frac{\sigma _i}{M \nu _i(A)^2} \qquad (i \notin A). \end{aligned}$$
(5.31)

(b) \(i = j \in A\). We consider the two cases \(i \in S(A)\) and \(i \notin S(A)\). Suppose first that \(i \in S(A)\). We write

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_i}\rangle = \langle {\mathbf{{v}}_i} , {P_{S(A)} \mathbf{{v}}_i}\rangle + \langle {\mathbf{{v}}_i} , {P_{A \setminus S(A)} \mathbf{{v}}_i}\rangle . \end{aligned}$$
(5.32)

We compute the first term of (5.32) using Proposition 5.2 and the observation that \(\nu _i(A) \asymp \nu _i(S(A))\):

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_{S(A)} \mathbf{{v}}_i}\rangle = u(d_i) + O_\prec \Biggl [{\frac{1}{(d_i - 1)^{1/2} M^{1/2}} + \frac{\sigma _i}{M} \left( {\frac{1}{\nu _i(A)^2} + \frac{1}{(d_i - 1)^2}}\right) }\Biggr ]. \end{aligned}$$

In order to estimate the second term of (5.32), we note that \(\nu _i(A) \asymp \nu _i(A {\setminus } S(A))\). We therefore apply (5.31) with \(A\) replaced by \(A \setminus S(A)\) to get

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_{A \setminus S(A)} \mathbf{{v}}_i}\rangle \prec K^{2 \delta } \frac{\sigma _i}{M \nu _i(A)^2}. \end{aligned}$$

Going back to (5.32), we have therefore proved that

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_i}\rangle = u(d_i) + K^{2 \delta } O_\prec \Biggl [{\frac{1}{(d_i - 1)^{1/2} M^{1/2}} + \frac{\sigma _i}{M} \left( {\frac{1}{\nu _i(A)^2} + \frac{1}{(d_i - 1)^2}}\right) }\Biggr ]\nonumber \\ \end{aligned}$$
(5.33)

for \(i \in S(A)\).

Next, we consider the case \(i \notin S(A)\). Now we have (5.30), so that Proposition 5.2 yields

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_i}\rangle \!\leqslant \! \langle {\mathbf{{v}}_i} , {P_{L(A)} \mathbf{{v}}_i}\rangle \prec u(d_i) \!+\! \frac{1}{(d_i - 1)^{1/2} M^{1/2}} \!+\! \frac{\sigma _i}{M} \left( {\frac{1}{\nu _i(A)^2} \!+\! \frac{1}{(d_i - 1)^2}}\right) . \end{aligned}$$

By (5.30) and \(M \asymp (1 + \phi ) K\), we have

$$\begin{aligned} u(d_i) = \frac{\sigma _i}{\phi ^{1/2} \theta (d_i)} (1 - d_i^{-2}) \leqslant C K^{2 \delta } \frac{1}{(d_i - 1)^{1/2} M^{1/2}}, \end{aligned}$$

from which we deduce (5.33) also in the case \(i \notin S(A)\).

(c) \(i \ne j\) and \(i \notin A\) or \(j \notin A\). From cases (a) and (b) [i.e. (5.31) and (5.33)], combined with the estimate

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle \bigr |^2 \leqslant \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_i}\rangle \langle {\mathbf{{v}}_j} , {P_A \mathbf{{v}}_j}\rangle , \end{aligned}$$

we find, assuming \(i \notin A\) or \(j \notin A\), that (5.2) holds with an additional factor \(K^{2 \delta }\) multiplying the right-hand side.
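The quadratic estimate used here is simply Cauchy–Schwarz: since \(P_A\) is an orthogonal projection (\(P_A = P_A^2 = P_A^*\)),

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle \bigr | = \bigl |\langle {P_A \mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle \bigr | \leqslant \Vert P_A \mathbf{{v}}_i \Vert \, \Vert P_A \mathbf{{v}}_j \Vert = \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_i}\rangle ^{1/2} \, \langle {\mathbf{{v}}_j} , {P_A \mathbf{{v}}_j}\rangle ^{1/2}. \end{aligned}$$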

(d) \(i \ne j\) and \(i,j \in A\). We now deal with the last remaining case by using the splitting

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle = \langle {\mathbf{{v}}_i} , {P_{S(A)} \mathbf{{v}}_j}\rangle + \langle {\mathbf{{v}}_i} , {P_{A \setminus S(A)} \mathbf{{v}}_j}\rangle . \end{aligned}$$
(5.34)

The goal is to show that

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle \bigr |&\prec \frac{K^{2 \delta }}{(d_i - 1)^{1/4} (d_j - 1)^{1/4} M^{1/2}} \nonumber \\&+ K^{2 \delta } \frac{\sqrt{\sigma _i \sigma _j}}{M} \left( {\frac{1}{\nu _i(A)} + \frac{1}{d_i - 1}}\right) \left( {\frac{1}{\nu _j(A)} + \frac{1}{d_j - 1}}\right) . \end{aligned}$$
(5.35)

Note that here \(\sigma _i \asymp \sigma _j \asymp 1 + \phi ^{1/2}\). We consider the four cases (i) \(i,j \in S(A)\), (ii) \(i \in S(A)\) and \(j \notin S(A)\), (iii) \(i \notin S(A)\) and \(j \in S(A)\), and (iv) \(i,j \notin S(A)\).

Consider first the case (i). The first term of (5.34) is bounded using Proposition 5.2 combined with \(\nu _i(A) \asymp \nu _i(S(A))\) and \(\nu _j(A) \asymp \nu _j(S(A))\). The second term of (5.34) is bounded using (5.2) from case (c) combined with \(\nu _i(A) \leqslant C \nu _i(A {\setminus } S(A))\) and \(\nu _j(A) \leqslant C \nu _j(A {\setminus } S(A))\). This yields (5.35) for \(\langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle \) in the case (i).

Next, consider the case (ii). For the first term of (5.34) we use the estimates

$$\begin{aligned} \nu _i(S(A))&\asymp \nu _i(A), \qquad \nu _j(A) \leqslant C (d_j - 1)^{-1/2} K^{-1/2 + \delta } \leqslant C \nu _j(S(A)),\nonumber \\ \nu _i(A)&\leqslant C |d_i - d_j |. \end{aligned}$$
(5.36)

Thus we get from (5.4)

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}_i} , {P_{S(A)} \mathbf{{v}}_j}\rangle \bigr |&\prec \frac{1 + \phi ^{1/2}}{M \nu _i(S(A)) \nu _j(S(A))} + \frac{1 + \phi ^{1/2}}{M \nu _j(S(A)) (d_i - 1)} + \frac{(d_i - 1)^{1/2}}{M^{1/2} |d_i - d_j |} \nonumber \\&\leqslant \frac{1 + \phi ^{1/2}}{M \nu _i(A) \nu _j(A)} + \frac{1 + \phi ^{1/2}}{M \nu _j(A) (d_i - 1)} + \frac{(d_i - 1)^{1/2}}{M^{1/2} |d_i - d_j |}. \end{aligned}$$
(5.37)

In order to estimate the last term, we first assume that \(d_j \leqslant d_i\) and \(d_i - 1 \leqslant 2 |d_i - d_j |\). Then we find

$$\begin{aligned} \frac{(d_i - 1)^{1/2}}{M^{1/2} |d_i - d_j |} \leqslant \frac{2}{M^{1/2} (d_i - 1)^{1/2}} \leqslant \frac{2}{M^{1/2} (d_i - 1)^{1/4} (d_j - 1)^{1/4}}. \end{aligned}$$
(5.38)

In the opposite case, if \(d_i \leqslant d_j\) or \(d_i - 1 \geqslant 2 |d_i - d_j |\), we have \(d_i - 1 \leqslant 2 (d_j - 1)\). Therefore, using (5.36) and the estimate \(M \asymp (1 + \phi ) K\), we get

$$\begin{aligned} \frac{(d_i - 1)^{1/2}}{M^{1/2} |d_i - d_j |} \leqslant \frac{C (d_j - 1)^{1/2}}{M^{1/2} \nu _i(A)} \leqslant K^\delta \frac{1 + \phi ^{1/2}}{M \nu _i(A) \nu _j(A)}. \end{aligned}$$
(5.39)

Putting (5.37), (5.38), and (5.39) together, we may estimate the first term of (5.34) in the case (ii) as

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}_i} , {P_{S(A)} \mathbf{{v}}_j}\rangle \bigr |&\prec \frac{1 + \phi ^{1/2}}{M \nu _i(A) \nu _j(A)} + K^\delta \frac{1 + \phi ^{1/2}}{M \nu _j(A) (d_i - 1)}\nonumber \\&\quad + \frac{1}{M^{1/2} (d_i - 1)^{1/4} (d_j - 1)^{1/4}}. \end{aligned}$$
(5.40)

For the second term of (5.34) in the case (ii) we use the estimates

$$\begin{aligned} \nu _i(A {\setminus } S(A))&\asymp \nu _i(A), \qquad \nu _j(A) \leqslant C \nu _j(A {\setminus } S(A)) \leqslant C (d_j - 1)^{-1/2} K^{-1/2 + \delta } ,\nonumber \\&\quad \nu _i(A) \leqslant C |d_i - d_j |. \end{aligned}$$

Thus we get from case (c) that

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}_i} , {P_{A \setminus S(A)} \mathbf{{v}}_j}\rangle \bigr | \prec \frac{1+ \phi ^{1/2}}{M \nu _i(A)} \left( {\frac{1}{\nu _j(A)} + \frac{1}{d_j - 1}}\right) + \frac{(d_j - 1)^{1/2}}{M^{1/2} |d_i - d_j |}, \end{aligned}$$

where the last term is bounded by \(K^\delta \frac{1 + \phi ^{1/2}}{M \nu _i(A) \nu _j(A)}\). Recalling (5.40), we find (5.35) in the case (ii). The case (iii) is dealt with in the same way.

What remains therefore is case (iv). For the first term of (5.34) we use the estimates

$$\begin{aligned} \nu _i(A) \leqslant C \nu _i(S(A)), \qquad \nu _j(A) \leqslant C \nu _j(S(A)). \end{aligned}$$

Thus we get from (5.4) that

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}_i} , {P_{S(A)} \mathbf{{v}}_j}\rangle \bigr | \prec \frac{1 + \phi ^{1/2}}{M \nu _i(A) \nu _j(A)}. \end{aligned}$$

For the second term of (5.34) we use the estimates

$$\begin{aligned} \nu _i(A)&\leqslant C \nu _i(A {\setminus } S(A)) \leqslant C (d_i - 1)^{-1/2} K^{-1/2 + \delta },\nonumber \\ \nu _j(A)&\leqslant C \nu _j(A {\setminus } S(A)) \leqslant C (d_j - 1)^{-1/2} K^{-1/2 + \delta }. \end{aligned}$$

Therefore we get from case (b) that

$$\begin{aligned}&\langle {\mathbf{{v}}_i} , {P_{A \setminus S(A)} \mathbf{{v}}_i}\rangle \prec \frac{d_i - 1}{1 + \phi ^{1/2}} + \frac{1}{(d_i - 1)^{1/2} M^{1/2}} \\&\quad + \frac{1 + \phi ^{1/2}}{M} \left( {\frac{1}{\nu _i(A {\setminus } S(A))^2} + \frac{1}{(d_i - 1)^2}}\right) \leqslant C K^{2 \delta } \frac{1 + \phi ^{1/2}}{M \nu _i(A)^2}, \end{aligned}$$

and a similar estimate holds for \(\langle {\mathbf{{v}}_j} , {P_{A \setminus S(A)} \mathbf{{v}}_j}\rangle \). Thus we conclude that

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}_i} , {P_{A \setminus S(A)} \mathbf{{v}}_j}\rangle \bigr | \leqslant \langle {\mathbf{{v}}_i} , {P_{A \setminus S(A)} \mathbf{{v}}_i}\rangle ^{1/2} \, \langle {\mathbf{{v}}_j} , {P_{A \setminus S(A)} \mathbf{{v}}_j}\rangle ^{1/2} \prec C K^{2 \delta } \frac{1 + \phi ^{1/2}}{M \nu _i(A) \nu _j(A)}, \end{aligned}$$

which is (5.35). This concludes the analysis of case (iv), and hence of case (d).

Conclusion of the proof Putting the cases (a)–(d) together, we have proved that (5.2) holds for arbitrary \(i,j\) with an additional factor \(K^{2 \delta }\) multiplying the error term on the right-hand side. Since \(\delta > 0\) can be chosen arbitrarily small, (5.2) follows. This concludes the proof of Proposition 5.1. \(\square \)

Fig. 4

The construction of the sets \(S(A)\) and \(L(A)\). The black and white dots are the outlier indices \(\{{d_i \mathrel {\mathop :}i \in {\mathcal {O}}_{\tau /2}^+}\}\), contained in the interval \([1, \infty )\). Around each outlier index \(d_i\) we draw a grey circle of radius \((d_i - 1)^{-1/2} K^{-1/2 + \delta }\). By definition, two dots overlap if one is contained in the grey circle of the other. The three pictures depict (from top to bottom) the sets \(A\), \(S(A)\), and \(L(A)\), respectively. In each case, the given set is drawn using black dots and its complement using white dots

6 Non-outlier eigenvectors

In this section we focus on the non-outlier eigenvectors \(\varvec{\xi }_a\), \(a \notin {\mathcal {O}}\), as well as outlier eigenvectors close to the bulk spectrum. We derive isotropic delocalization bounds for \(\varvec{\xi }_a\) and establish the asymptotic law of the generalized components of \(\varvec{\xi }_a\). We also use the former result to complete the proof of Theorem 2.11 on the outlier eigenvectors.

In Sect. 6.1 we derive isotropic delocalization bounds on \(\varvec{\xi }_a\) for \({{\mathrm{dist}}}(d_a, [-1,1]) \leqslant K^{-1/3 + \tau }\). In Sect. 6.2 we use these bounds to prove Theorem 2.11 and to complete the proof of Theorem 2.17 started in Sect. 5. Next, in Sect. 6.3 we derive the law of the generalized components of \(\varvec{\xi }_a\) for \(a \notin {\mathcal {O}}\). This argument requires two tools as input: level repulsion (Proposition 6.3) and quantum unique ergodicity (see Sect. 1.1) of the eigenvectors \(\varvec{\zeta }_b\) of \(H\) (Proposition 6.6). Both are explained in detail and proved below.

6.1 Bound on the spectral projections in the neighbourhood of the bulk spectrum

We first consider eigenvectors near the right edge of the bulk spectrum. Recall the typical distance from \(\mu _a\) to the spectral edges, denoted by \(\kappa _a\) and defined in (2.18).

Proposition 6.1

(Eigenvectors near the right edge) Fix \(\tau \in (0,1/3)\). For \(a \in [\![{s_+ + 1, (1 - \tau )K}]\!]\) we have

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2 \prec \frac{1}{M} + \frac{\sigma _i}{M (|d_i - 1 |^2 + \kappa _a)}. \end{aligned}$$
(6.1)

Moreover, if \(a \in [\![{1, s_+}]\!]\) satisfies \(d_a \leqslant 1 + K^{-1/3 + \tau }\), then

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2 \prec K^{3 \tau } \left( {\frac{1}{M} + \frac{\sigma _i}{M (|d_i - 1 |^2 + \kappa _a)}}\right) . \end{aligned}$$
(6.2)

Proposition 6.1 has a close analogue for the left edge of the bulk spectrum, which holds under the additional condition \(|\phi - 1 | \geqslant \tau \); we omit its detailed statement.
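The \(1/M\) delocalization scale of Proposition 6.1 is easy to probe numerically. The following minimal simulation is not part of the source; it assumes a pure-noise Gaussian sample matrix (no spikes), so that every rescaled overlap \(M \langle \mathbf{w}, \varvec{\xi }_a\rangle ^2\) should be of order one:

```python
import numpy as np

# Minimal sketch (assumption: pure-noise Gaussian samples, no spikes),
# illustrating the isotropic delocalization scale 1/M of Proposition 6.1.
rng = np.random.default_rng(0)
M, N = 200, 400
A = rng.standard_normal((M, N))
Q = A @ A.T / N                  # sample covariance matrix as in (1.1)

_, xi = np.linalg.eigh(Q)        # columns of xi are the eigenvectors xi_a
w = np.ones(M) / np.sqrt(M)      # a fixed deterministic unit vector

overlaps = M * (xi.T @ w) ** 2   # M * <w, xi_a>^2 for every a

# The xi_a form an orthonormal basis, so the overlaps average to exactly 1 ...
print(abs(overlaps.mean() - 1.0) < 1e-10)
# ... and delocalization keeps each of them of order one (threshold ad hoc).
print(overlaps.max() < 40.0)
```

With spikes added to the population covariance, the overlap in an outlier direction would instead concentrate around \(u(d_i)\), as in Sect. 5.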

Proof of Proposition 6.1

Suppose first that \(i \in {\mathcal {R}}\). Let \(\varepsilon > 0\) and set \(\omega \mathrel {\mathop :}=\varepsilon / 2\). Using (3.25), Remark 3.3, Theorem 2.3, and (3.15), we choose a high-probability event \(\Xi \) satisfying (4.7), (4.8), and

$$\begin{aligned} \mathbf{{1}} (\Xi ) |\mu _i \!-\! \theta (d_i) | \leqslant (d_i - 1)^{1/2} K^{-1/2 + \varepsilon } \quad (1 + K^{-1/3} \leqslant d_i \!\leqslant \! 1 \!+\! K^{-1/3 \!+\! \tau }).\qquad \end{aligned}$$
(6.3)

For the following we fix a realization \(H \in \Xi \). We choose the spectral parameter \(z = \mu _a + \eta \), where \(\eta > 0\) is the smallest (in fact unique) solution of the equation \(\hbox {Im }w_\phi (\mu _a + \mathrm {i}\eta ) = K^{-1 + 6 \varepsilon } \eta ^{-1}\). Hence (4.8) reads

$$\begin{aligned} \Vert W(z) - w_\phi (z) \Vert \leqslant \frac{K^{2 \varepsilon }}{K \eta }. \end{aligned}$$
(6.4)

Abbreviating \(\kappa \equiv \kappa (\mu _a)\), we find from (3.20) that

$$\begin{aligned} \eta \asymp \frac{K^{6\varepsilon }}{K \sqrt{\kappa } + K^{2/3 + 2\varepsilon }} \qquad (\mu _a \leqslant \gamma _+ + K^{-2/3 + 4 \varepsilon }) \end{aligned}$$
(6.5)

and

$$\begin{aligned} \eta \asymp K^{-1/2 + 3 \varepsilon } \kappa ^{1/4} \qquad (\mu _a \geqslant \gamma _+ + K^{-2/3 + 4 \varepsilon }). \end{aligned}$$
(6.6)

Armed with these definitions, we may begin the estimate of \(\langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2\). The starting point is the bound

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2 \leqslant \eta \hbox {Im }\widetilde{G}_{\mathbf{{v}}_i \mathbf{{v}}_i}(z), \end{aligned}$$
(6.7)

which follows easily by spectral decomposition. Since \(i \in {\mathcal {R}}\), we get from (3.37), omitting the arguments \(z\) for brevity,

$$\begin{aligned} \phi ^{1/2} z \, \widetilde{G}_{\mathbf{{v}}_i \mathbf{{v}}_i}&= \frac{1}{d_i} - \frac{\sigma _i}{d_i^2} \left( {\frac{1}{D^{-1} + W}}\right) _{ii} \nonumber \\&=\frac{1}{d_i} - \frac{\sigma _i}{d_i^2}\left[ \frac{1}{d_i^{-1} + w_\phi } + \frac{1}{(d_i^{-1} + w_\phi )^2} \left( ({w_\phi - W}) \right. \right. \nonumber \\&\quad \left. \left. + ({w_\phi - W}) \frac{1}{D^{-1} + W} ({w_\phi - W})\right) _{ii}\right] , \end{aligned}$$
(6.8)

where the last step follows from a resolvent expansion as in (5.14). We estimate the error terms using

$$\begin{aligned} \min _j |d_j^{-1} + w_\phi | \geqslant \hbox {Im }w_\phi = \frac{K^{6 \varepsilon }}{K \eta } \gg \frac{K^{2 \varepsilon }}{K \eta } \geqslant \Vert W - w_\phi \Vert , \end{aligned}$$

where we used the definition of \(\eta \) and (6.4). Hence a resolvent expansion yields

$$\begin{aligned} \biggl \Vert \frac{1}{D^{-1} + W} \biggr \Vert \leqslant \frac{2}{\hbox {Im }w_\phi } = 2 K^{1 - 6 \varepsilon } \eta . \end{aligned}$$
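Spelled out (a standard Neumann-series step, included here for convenience), the expansion behind this bound is

```latex
\frac{1}{D^{-1} + W}
 = \frac{1}{D^{-1} + w_\phi}
   \sum_{n \geqslant 0}
   \Bigl[ (w_\phi - W) \, \frac{1}{D^{-1} + w_\phi} \Bigr]^n ,
\qquad
\Bigl\| \frac{1}{D^{-1} + W} \Bigr\|
 \leqslant \frac{1}{\operatorname{Im} w_\phi}
   \sum_{n \geqslant 0} K^{-4 \varepsilon n}
 \leqslant \frac{2}{\operatorname{Im} w_\phi} ,
```

where we used that \(D^{-1} + w_\phi \) is diagonal with entries \(d_j^{-1} + w_\phi \), so that \(\Vert (D^{-1} + w_\phi )^{-1} \Vert \leqslant (\operatorname{Im} w_\phi )^{-1}\), and that \(\Vert W - w_\phi \Vert \leqslant K^{-4 \varepsilon } \operatorname{Im} w_\phi \leqslant \tfrac{1}{2} \operatorname{Im} w_\phi \) by the preceding display.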

We therefore get from (6.8) that

$$\begin{aligned} \phi ^{1/2} z \, \widetilde{G}_{\mathbf{{v}}_i \mathbf{{v}}_i} = \frac{w_\phi - \phi ^{1/2}}{1 + d_i w_\phi } + O \left( {\frac{\sigma _i}{|1 + d_i w_\phi |^2} \frac{K^{2 \varepsilon }}{K \eta }}\right) . \end{aligned}$$
(6.9)

Next, we claim that for any fixed \(\delta \in [0, 1/3-\varepsilon )\) we have the lower bound

$$\begin{aligned} |1 + d w_\phi | \geqslant c \left( {K^{-2 \delta } |d - 1 | + \hbox {Im }w_\phi }\right) \end{aligned}$$
(6.10)

whenever \(\mu _a \in [\theta (0), \theta (1 + K^{-1/3 + \delta + \varepsilon })]\). To prove (6.10), suppose first that \(|d - 1 | \geqslant 1/2\). By (3.21), there exists a constant \(c_0 > 0\) such that for \(\kappa \leqslant c_0\) we have \(|\hbox {Re }w_\phi + 1 | \leqslant 1/4\). Thus we get, for \(\kappa \leqslant c_0\),

$$\begin{aligned} |1 + d w_\phi | \geqslant |1 + d \hbox {Re }w_\phi | \asymp |d-1 | + \hbox {Im }w_\phi , \end{aligned}$$

where we used that \(\hbox {Im }w_\phi \leqslant C\) by (3.19). Moreover, if \(\kappa \geqslant c_0\) we find from (3.20) that \(\hbox {Im }w_\phi \geqslant c\), from which we get

$$\begin{aligned} |1 + d w_\phi | \geqslant |1 + d \hbox {Re }w_\phi | + |d | \hbox {Im }w_\phi \geqslant c (1 + |d |) \asymp |d-1 | + \hbox {Im }w_\phi , \end{aligned}$$

where in the second step we used \(|\hbox {Re }w_\phi | \leqslant C\) as follows from (3.19). This concludes the proof of (6.10) for the case \(|d - 1 | \geqslant 1/2\).

Suppose now that \(|d - 1 | \leqslant 1/2\). Then we get

$$\begin{aligned} |1 + d w_\phi | \asymp |1 + d \hbox {Re }w_\phi | + |d | \hbox {Im }w_\phi \geqslant \left( {|d - 1 | - |\hbox {Re }w_\phi + 1 |}\right) _+ + \hbox {Im }w_\phi . \end{aligned}$$

We shall estimate this using the following elementary bound, valid for \(x, y, z \geqslant 0\) (here \(M \geqslant 1\) denotes an arbitrary constant, not the matrix dimension):

$$\begin{aligned} (x-y)_+ + z \geqslant \frac{x}{3M} + \frac{z}{3} \qquad \text {if } y \leqslant M z \text { for some } M \geqslant 1. \end{aligned}$$
(6.11)
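The bound (6.11) admits a short two-case verification (not in the source; recall \(x, y, z \geqslant 0\) and that \(M \geqslant 1\) here is an arbitrary constant, not the matrix dimension):

```latex
\begin{aligned}
y \leqslant \tfrac{2x}{3} &:\quad
 (x - y)_+ + z \;\geqslant\; \tfrac{x}{3} + z
 \;\geqslant\; \tfrac{x}{3M} + \tfrac{z}{3} ; \\
y > \tfrac{2x}{3} &:\quad
 z \;\geqslant\; \tfrac{y}{M} \;>\; \tfrac{2x}{3M} ,
 \quad\text{so}\quad
 (x - y)_+ + z \;\geqslant\; \tfrac{z}{3} + \tfrac{2z}{3}
 \;\geqslant\; \tfrac{z}{3} + \tfrac{4x}{9M}
 \;\geqslant\; \tfrac{x}{3M} + \tfrac{z}{3} .
\end{aligned}
```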

For \(\mu _a \in [\theta (0), \theta (1)]\) we get from (6.11) with \(M = C\), recalling (3.20) and (3.21), that \(|1 + d w_\phi | \geqslant c(|d - 1 | + \hbox {Im }w_\phi )\). By a similar argument, for \(\mu _a \in [\theta (1), \theta (1 + K^{-1/3 + \delta + \varepsilon })]\) we set \(M = K^{2 \delta }\) and get (6.10) using (6.5) and (6.6). This concludes the proof of (6.10).

Going back to (6.7), we find using (6.9)

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2&\leqslant \eta \phi ^{-1/2} \hbox {Im }\left( {\frac{w_\phi - \phi ^{1/2}}{z (1 + d_i w_\phi )}}\right) + \frac{C \sigma _i}{|1 + d_i w_\phi |^2} \frac{K^{2 \varepsilon }}{K \eta } \nonumber \\&= \frac{\eta ^2}{\phi ^{1/2} |z |^2} \hbox {Re }\frac{\phi ^{1/2} - w_\phi }{1 + d_i w_\phi } + \frac{\eta \mu _a}{\phi ^{1/2} |z |^2} \hbox {Im }\frac{w_\phi - \phi ^{1/2}}{1 + d_i w_\phi } \nonumber \\&\quad + \frac{C \sigma _i}{\phi ^{1/2} |z | |1 + d_i w_\phi |^2} \frac{K^{2 \varepsilon }}{K}. \end{aligned}$$
(6.12)

Using \(|z | \asymp \mu _a \asymp \phi ^{-1/2} + \phi ^{1/2}\) and (6.10), we estimate the first term on the right-hand side of (6.12) as

$$\begin{aligned} \frac{\eta ^2}{\phi ^{1/2} |z |^2} \hbox {Re }\frac{\phi ^{1/2} - w_\phi }{1 + d_i w_\phi }&\leqslant \frac{\eta ^2}{(1 + \phi ) |1 + d_i w_\phi |} \leqslant \frac{\eta ^2}{(1 + \phi ) \hbox {Im }w_\phi }\\&\leqslant C \frac{\eta ^3 K}{1 + \phi } \leqslant C \frac{K^{12 \varepsilon + 3 \delta }}{M}, \end{aligned}$$

where in the last step we used that \(\eta \leqslant K^{-2/3 + 4 \varepsilon + \delta }\), as follows from (6.5) and (6.6).

Next, we estimate the second term of (6.12) as

$$\begin{aligned} \frac{\eta \mu _a}{\phi ^{1/2} |z |^2} \hbox {Im }\frac{w_\phi - \phi ^{1/2}}{1 + d_i w_\phi }&= \frac{\eta \mu _a}{\phi ^{1/2} |z |^2} \frac{\sigma _i \hbox {Im }w_\phi }{|1 + d_i w_\phi |^2} \asymp \frac{\sigma _i \eta \hbox {Im }w_\phi }{(1 + \phi ) |1 + d_i w_\phi |^2} \\&\leqslant \frac{C \sigma _i K^{6 \varepsilon }}{M |1 + d_i w_\phi |^2}. \end{aligned}$$

We estimate the last term of (6.12) as

$$\begin{aligned} \frac{C \sigma _i}{\phi ^{1/2} |z | |1 + d_i w_\phi |^2} \frac{K^{2 \varepsilon }}{K} \leqslant \frac{C \sigma _i K^{2 \varepsilon }}{M |1 + d_i w_\phi |^2}. \end{aligned}$$

Putting all three estimates together, we conclude that

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2 \leqslant \frac{K^{12 \varepsilon + 3 \delta }}{M} + \frac{C \sigma _i K^{6 \varepsilon }}{M |1 + d_i w_\phi |^2}. \end{aligned}$$
(6.13)

In order to estimate the denominator of (6.13) from below using (6.10), we need a suitable lower bound on \(\hbox {Im }w_\phi (\mu _a + \mathrm {i}\eta )\). First, if \(a \geqslant s_+ + 1\) then we get from (4.7), Corollary 4.2, (6.5), and (3.20) that

$$\begin{aligned} \hbox {Im }w_\phi (\mu _a + \mathrm {i}\eta ) \geqslant c \sqrt{\kappa _a}, \end{aligned}$$

in which case we get by choosing \(\delta = 0\) in (6.10) that

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2 \leqslant \frac{K^{12 \varepsilon }}{M} + \frac{C \sigma _i K^{6 \varepsilon }}{M (|d_i - 1 |^2 + \kappa _a)}. \end{aligned}$$
(6.14)

Next, if \(a \leqslant s_+\) satisfies \(d_a \leqslant 1 + K^{-1/3 + \tau }\) we get from (6.5), (6.6), and (3.20) that

$$\begin{aligned} \hbox {Im }w_\phi (\mu _a + \mathrm {i}\eta ) \geqslant c \sqrt{\eta } \geqslant c K^{-1/3 + 2 \varepsilon }. \end{aligned}$$

In this case we have \(\mu _a \leqslant \theta (1 + K^{-1/3 + \tau + \varepsilon })\) by (6.3), so that setting \(\delta = \tau \) in (6.10) yields

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2 \leqslant \frac{K^{12 \varepsilon + 3 \tau }}{M} + \frac{C \sigma _i K^{6 \varepsilon + 2 \tau }}{M (|d_i - 1 |^2 + \kappa _a)}. \end{aligned}$$
(6.15)

Since \(\varepsilon > 0\) was arbitrary, (6.1) and (6.2) follow from (6.14) and (6.15) respectively. This concludes the proof of Proposition 6.1 in the case \(i \in {\mathcal {R}}\).

Finally, the case \(i \notin {\mathcal {R}}\) is handled by replacing \({\mathcal {R}}\) with \(\widehat{{\mathcal {R}}} \mathrel {\mathop :}={\mathcal {R}} \cup \{i\}\) and using a limiting argument, exactly as after (5.28). \(\square \)

6.2 Proof of Theorems 2.11 and 2.17

We now have all the ingredients needed to prove Theorems 2.11 and 2.17.

Proof of Theorem 2.17

The estimate (2.19) is an immediate corollary of (6.1) from Proposition 6.1. The estimate (2.20) is proved similarly (see also the remark following Proposition 6.1). \(\square \)

Proof of Theorem 2.11

We prove Theorem 2.11 using Propositions 5.1, 5.2, and 6.1. First we remark that it suffices to prove that (5.2) holds for \(A \subset {\mathcal {O}}\) satisfying \(1 + K^{-1/3} \leqslant d_k \leqslant \tau ^{-1}\) for all \(k \in A\). Indeed, supposing this is done, we get the estimate

$$\begin{aligned} \langle {\mathbf{{w}}} , {P_A \mathbf{{w}}}\rangle&= \langle {\mathbf{{w}}} , {Z_A \mathbf{{w}}}\rangle + O_\prec \Biggl [ \sum _{i \in A} \frac{w_i^2}{M^{1/2}(d_i - 1)^{1/2}} + \sum _{i \in A} \frac{\sigma _i w_i^2}{M (d_i - 1)^2} \\&+ \sum _{i = 1}^M \frac{\sigma _i w_i^2}{M \nu _i^2} + \langle {\mathbf{{w}}} , {Z_A \mathbf{{w}}}\rangle ^{1/2} \left( {\sum _{i \notin A} \frac{\sigma _i w_i^2}{M \nu _i^2}}\right) ^{1/2} \Biggr ], \end{aligned}$$

from which Theorem 2.11 follows by noting that the second error term may be absorbed into the first, recalling that \(\sigma _i \asymp 1 + \phi ^{1/2}\) for \(i \in A\), that \(M \asymp (1 + \phi ) K\), and that \(d_i - 1 \geqslant K^{-1/3}\).

Fix \(\varepsilon > 0\). Note that there exists some \(s \in [1, |{\mathcal {R}} |]\) satisfying the following gap condition: for all \(k\) such that \(d_k > 1 + s K^{-1/3 + \varepsilon }\) we have \(d_k \geqslant 1 + (s+1) K^{-1/3 + \varepsilon }\). The idea of the proof is to split \(A = A_0 \sqcup A_1\), such that \(d_k \leqslant 1 + s K^{-1/3 + \varepsilon }\) for \(k \in A_0\) and \(d_k \geqslant 1 + (s + 1) K^{-1/3 + \varepsilon }\) for \(k \in A_1\). Note that such a splitting exists by the above gap property. Without loss of generality, we assume that \(A_0 \ne \emptyset \) (for otherwise the claim follows from Proposition 5.1).

It suffices to consider the six cases (a) \(i,j \in A_0\), (b) \(i \in A_0\) and \(j \in A_1\), (c) \(i \in A_0\) and \(j \notin A\), (d) \(i,j \in A_1\), (e) \(i \in A_1\) and \(j \notin A\), (f) \(i,j \notin A\).

(a) \(i,j \in A_0\)

We split

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle = \langle {\mathbf{{v}}_i} , {P_{A_0} \mathbf{{v}}_j}\rangle + \langle {\mathbf{{v}}_i} , {P_{A_1} \mathbf{{v}}_j}\rangle . \end{aligned}$$
(6.16)

We apply Cauchy–Schwarz and Proposition 6.1 to the first term, and Proposition 5.1 to the second term. Using the above gap condition, we find

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle \bigr |&\prec \frac{K^{3 \varepsilon } \sqrt{\sigma _i \sigma _j}}{M |d_i - 1 | |d_j - 1 |} + \frac{\sqrt{\sigma _i \sigma _j}}{M} \left( {\frac{1}{\nu _i} + \frac{1}{d_i - 1}}\right) \left( {\frac{1}{\nu _j} + \frac{1}{d_j - 1}}\right) \\&= \delta _{ij}u(d_i) + K^{3 \varepsilon } \, O \left[ \frac{1}{(d_i - 1)^{1/4} (d_j - 1)^{1/4} M^{1/2}} \right. \\&\left. \quad + \frac{\sqrt{\sigma _i \sigma _j}}{M} \left( {\frac{1}{\nu _i} + \frac{1}{d_i - 1}}\right) \left( {\frac{1}{\nu _j} + \frac{1}{d_j - 1}}\right) \right] , \end{aligned}$$

where the last step follows from \(d_i - 1 \leqslant C K^{-1/3 + \varepsilon }\).

(b) \(i \in A_0\) and \(j \in A_1\)

For this case it is crucial to use the stronger bound (5.4) and not (5.2). Hence, we need the non-overlapping condition (5.3). To that end, we assume first that (5.3) holds with \(\delta \mathrel {\mathop :}=\varepsilon \). Thus, by the above gap assumption (5.3) also holds for \(A_1\). In this case we get from (6.16) and Propositions 5.2 and 6.1 that

$$\begin{aligned} \bigl |\langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle \bigr |&\prec \frac{K^{3 \varepsilon } \sqrt{\sigma _i \sigma _j}}{M |d_i - 1 | |d_j - 1 |} + \frac{\sqrt{\sigma _i \sigma _j}}{M} \left( {\frac{1}{\nu _i} + \frac{1}{d_i - 1}}\right) \left( {\frac{1}{\nu _j} + \frac{1}{d_j - 1}}\right) \\&+\frac{(d_j - 1)^{1/2} \sqrt{\sigma _i}}{(1 + \phi ^{1/4}) |d_i - d_j | M^{1/2}}. \end{aligned}$$

Clearly, the first two terms are bounded by the right-hand side of (5.2) times \(K^{3 \varepsilon }\). The last term is estimated as

$$\begin{aligned} \frac{(d_j - 1)^{1/2} \sqrt{\sigma _i}}{(1 + \phi ^{1/4}) |d_i - d_j | M^{1/2}} \asymp \frac{(d_j - 1)^{1/2}}{|d_i - d_j | M^{1/2}} \leqslant \frac{1}{(d_i - 1)^{1/4} (d_j - 1)^{1/4} M^{1/2}}, \end{aligned}$$

where we used that \(d_i - 1 \leqslant d_j - 1 \leqslant C |d_i - d_j |\) by the above gap condition. This concludes the proof in the case where the non-overlapping condition (5.3) holds.

If (5.3) does not hold, we replace \(A_1\) with the smaller set \(S(A_1)\) defined in Sect. 5.2. Then we proceed as above, except that we have to deal in addition with the term \(\langle {\mathbf{{v}}_i} , {P_{A_1 \setminus S(A_1)} \mathbf{{v}}_j}\rangle \). The details are analogous to those of Sect. 5.2, and we omit them here.

(c), (e), (f) \(j \notin A\)

We use the splitting (6.16) and apply Cauchy–Schwarz and Proposition 6.1 to the first term, and Proposition 5.1 to the second term. Since \(\nu _j(A_1) \leqslant |d_j - 1 |\) in all cases, it is easy to prove that (6.16) is bounded by \(K^{3 \varepsilon }\) times the right-hand side of (5.2).

(d) \(i,j \in A_1\)

From (6.16) and Propositions 6.1 and 5.1 we get

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {P_A \mathbf{{v}}_j}\rangle&= \delta _{ij}u(d_i) + O_\prec \left[ \frac{K^{3 \varepsilon } \sqrt{\sigma _i \sigma _j}}{M |d_i - 1 | |d_j - 1 |} + \frac{1}{(d_i - 1)^{1/4} (d_j - 1)^{1/4} M^{1/2}} \right. \\&\left. + \frac{\sqrt{\sigma _i \sigma _j}}{M} \left( {\frac{1}{\nu _i} + \frac{1}{d_i - 1}}\right) \left( {\frac{1}{\nu _j} + \frac{1}{d_j - 1}}\right) \right] , \end{aligned}$$

from which we get (5.2) with the error term multiplied by \(K^{3 \varepsilon }\).

Conclusion of the proof We have proved that, for all \(i,j \in [\![{1,M}]\!]\) and \(A\) satisfying the assumptions of Theorem 2.11, the estimate (5.2) holds with an additional factor \(K^{3 \varepsilon }\) multiplying the error term. Since \(\varepsilon \) was arbitrary, we get (5.2). This concludes the proof. \(\square \)

6.3 The law of the non-outlier eigenvectors

For \(a \leqslant K/2\) define

$$\begin{aligned} \Delta _a \mathrel {\mathop :}=K^{-2/3} a^{-1/3}, \end{aligned}$$
(6.17)

the typical distance between \(\lambda _{a+1}\) and \(\lambda _a\). More precisely, the classical locations \(\gamma _a\) defined in (3.14) satisfy \(\gamma _a - \gamma _{a+1} \asymp \Delta _a\) for \(a \leqslant K/2\).
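The scale (6.17) can be read off heuristically from the square-root decay of the limiting spectral density at the soft edge (a sketch, not in the source, using the edge asymptotics behind (3.14)): writing \(\kappa _a\) for the distance of \(\gamma _a\) from the edge \(\gamma _+\),

```latex
\frac{a}{K} \;\asymp\; \int_0^{\kappa_a} \sqrt{\kappa} \, \mathrm{d}\kappa
 \;\asymp\; \kappa_a^{3/2}
\quad\Longrightarrow\quad
\kappa_a \;\asymp\; \Bigl( \frac{a}{K} \Bigr)^{2/3} ,
\qquad
\gamma_a - \gamma_{a+1} \;\approx\; \frac{\mathrm{d} \kappa_a}{\mathrm{d} a}
 \;\asymp\; K^{-2/3} a^{-1/3} \;=\; \Delta_a .
```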

We may now state the main result behind the proof of Theorem 2.20. Recall the definitions (2.8) of \(\alpha _+\) and (2.3) of \(s_+\), the number of outliers to the right of the bulk spectrum. Recall also from (3.11) and (3.12) that \(\{\lambda _a\}\) and \(\{\varvec{\zeta }_a\}\) denote the eigenvalues and eigenvectors of \(H = X X^*\).

Proposition 6.2

Let \(s_+ + 1 \leqslant a \leqslant K^{1 - \tau } \alpha _+^3\) and define \(b \mathrel {\mathop :}=a - s_+\). Define the event

$$\begin{aligned} \Omega \equiv \Omega _{a, b, \tau }&\mathrel {\mathop :}=\left\{ |\mu _{b' + s_+} - \lambda _{b'} | \leqslant K^{-\tau /4} \Delta _a\ \mathrm{{ for }}\ |b' - b | \leqslant 1\right\} \\&\quad \quad \cap \ \Bigl \{{|\lambda _{b'} - \lambda _b | \geqslant K^{-\tau /6} \Delta _a\ \mathrm{{ for }}\ |b' - b | = 1}\Bigr \}. \end{aligned}$$

Then

$$\begin{aligned} \mathbf{{1}} (\Omega ) \langle {\mathbf{{w}}} , {\varvec{\xi }_a}\rangle ^2&= \mathbf{{1}} (\Omega ) \biggl |\biggl \langle {\sum _i \frac{\sqrt{\sigma _i} \, w_i}{d_i - 1} \, \mathbf{{v}}_i} ,\, {\varvec{\zeta }_b}\biggr \rangle \biggr |^2 \nonumber \\&\quad + O_\prec \Biggl [{\frac{K^{-1/3 + \tau /5} a^{1/3}}{\alpha _+} \sum _i \frac{\sigma _i w_i^2}{M (d_i - 1)^2}}\Biggr ]. \end{aligned}$$
(6.18)

Informally, Proposition 6.2 expresses generalized components of the eigenvectors of \(Q\) in terms of generalized components of eigenvectors of \(H\), under the assumption that \(\Omega \) has high probability. We first show how Proposition 6.2 implies Theorem 2.20. This argument requires two key tools. The first one is level repulsion, which, together with the eigenvalue sticking from Theorem 2.7, will imply that \(\Omega \) indeed has high probability. The second tool is quantum unique ergodicity (see Sect. 1.1) of the eigenvectors of \(H\), which establishes the law of their generalized components.

The precise statement of level repulsion sufficient for our needs is as follows.

Proposition 6.3

(Level repulsion) Fix \(\tau \in (0, 1)\). For any \(\varepsilon > 0\) there exists a \(\delta > 0\) such that for all \(a \leqslant K^{1 - \tau }\) we have

$$\begin{aligned} \mathbb {P}\left( {|\lambda _a - \lambda _{a+1} | \leqslant \Delta _a K^{-\varepsilon }}\right) \leqslant K^{-\delta }. \end{aligned}$$
(6.19)

The proof of Proposition 6.3 consists of two steps: (i) establishing (6.19) for the case of Gaussian \(X\) and (ii) a comparison argument showing that if \(X^{(1)}\) and \(X^{(2)}\) are two matrix ensembles satisfying (1.15) and (1.16), and if (6.19) holds for \(X^{(1)}\), then (6.19) also holds for \(X^{(2)}\). Both steps have already appeared, in a somewhat different form, in the literature. Step (i) is performed in Lemma 6.4 below, and step (ii) in Lemma 6.5 below. Together, Lemmas 6.4 and 6.5 immediately yield Proposition 6.3.
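As an informal illustration of (6.19), not part of the proof, one can check by Monte Carlo that the top gap of \(H = X X^*\) lives on the scale \(\Delta _1 = K^{-2/3}\) and that much smaller gaps are rare. The entry normalization \(X_{i \mu } \sim {\mathcal {N}}(0, 1/N)\) and all thresholds below are ad hoc assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, trials = 100, 200, 200
K = min(M, N)
delta_1 = K ** (-2.0 / 3.0)      # Delta_a of (6.17) with a = 1

gaps = np.empty(trials)
for t in range(trials):
    X = rng.standard_normal((M, N)) / np.sqrt(N)   # assumed entry variance 1/N
    lam = np.linalg.eigvalsh(X @ X.T)              # ascending eigenvalues of H
    gaps[t] = (lam[-1] - lam[-2]) / delta_1        # top gap on the scale Delta_1

print(np.median(gaps))           # of order one
print(np.mean(gaps < 1e-3))      # level repulsion: tiny normalized gaps are rare
```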

Lemma 6.4

(Level repulsion for the Gaussian case) Proposition 6.3 holds if \(X\) is Gaussian.

Proof

We mimic the proof of Theorem 3.2 in [14]. Indeed, the proof from [14, Appendix D] carries over almost verbatim. The key input is the eigenvalue rigidity from Theorem 3.5, which for the model of [14] was established by a different method. As in [14], we condition on the eigenvalues \(\{\lambda _i \mathrel {\mathop :}i > K^{1 - \tau }\}\). On the conditioned measure, level repulsion follows as in [14]. Finally, thanks to Theorem 3.5 we know that the frozen eigenvalues \(\{\lambda _i \mathrel {\mathop :}i > K^{1 - \tau }\}\) lie with high probability near their classical locations. Note that for \(\phi \approx 1\), the rigidity estimate (3.15) only holds for indices \(i \leqslant (1 - \tau ) K\); however, this is enough for the argument of [14, Appendix D], which is insensitive to the locations of eigenvalues at a distance of order one from the right edge \(\gamma _+\). We omit the full details. \(\square \)

Lemma 6.5

(Stability of level repulsion) Let \(X^{(1)}\) and \(X^{(2)}\) be two matrix ensembles satisfying (1.15) and (1.16). Suppose that Proposition 6.3 holds for \(X^{(1)}\). Then Proposition 6.3 also holds for \(X^{(2)}\).

The proof of Lemma 6.5 relies on Green function comparison, and is given in Sect. 7.4.

The second tool behind the proof of Theorem 2.20 is the quantum unique ergodicity of the eigenvectors \(\varvec{\zeta }_a\) of the matrix \(H = X X^*\), stated in Proposition 6.6 below. As noted in Sect. 1.1, quantum unique ergodicity is a term borrowed from quantum chaos that describes the complete “flatness” of the eigenvectors of \(H\). Here “flatness” means that the eigenvectors are asymptotically uniformly distributed on the unit sphere of \(\mathbb {R}^M\). The first result on quantum unique ergodicity of Wigner matrices is [29], where the quantum unique ergodicity of eigenvectors near the spectral edge was established. Under an additional four-moment matching condition, this result was extended to the bulk. Subsequently, this second result was derived using a different method in [41]. Recently, a new approach to the quantum unique ergodicity was developed in [15], where quantum unique ergodicity is established for all eigenvectors of generalized Wigner matrices. In this paper, we adopt the approach of [29], based on Green function comparison. As compared to the method of [15], its first advantage is that it is completely local in the spectrum, and in particular when applied near the right-hand edge of the spectrum it is insensitive to the presence of a hard edge at the origin. The second advantage of the current method is that it is very robust and may be used to establish the asymptotic joint distribution of an arbitrary family of generalized components of eigenvectors, as in Remark 6.7 below; we remark that such joint laws cannot currently be analysed using the method of [15]. On the other hand, our results only hold for eigenvector indices \(a\) satisfying \(a \leqslant K^{1 - \tau }\) for some \(\tau > 0\), while those of [15] admit \(\tau = 0\).

Our proof of quantum unique ergodicity generalizes that of [29] in three directions. First, we extend the method of [29] to sample covariance matrices (in fact to general sample covariance matrices of the form (1.10) with \(\Sigma = T T^* = I_M\); see Sect. 8). Second, we consider generalized components \(\langle {\mathbf{{w}}} , {\varvec{\zeta }_a}\rangle \) of the eigenvectors instead of the Cartesian components \(\varvec{\zeta }_a(i)\). The third and deepest generalization is that we establish quantum unique ergodicity much further into the bulk, requiring only that \(a \leqslant K^{1 - \tau }\) instead of the assumption \(a \leqslant (\log K)^{C \log \log K}\) from [29].

Proposition 6.6

(Quantum unique ergodicity) Fix \(\tau \in (0,1)\). Then for any \(a \leqslant K^{1 - \tau }\) and deterministic unit vector \(\mathbf{{w}} \in \mathbb {R}^M\) we have

$$\begin{aligned} M \langle {\mathbf{{w}}} , {\varvec{\zeta }_a}\rangle ^2 \; \longrightarrow \; \chi _1^2, \end{aligned}$$
(6.20)

in the sense of moments, uniformly in \(a\) and \(\mathbf{{w}}\).
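Proposition 6.6 is easy to probe numerically (a sketch, not from the source; the Gaussian normalization \(X_{i \mu } \sim {\mathcal {N}}(0, 1/N)\), the eigenvector index, and the sample sizes are ad hoc choices). For Gaussian \(X\) the eigenbasis of \(H = X X^*\) is exactly Haar-distributed, so \(M \langle {\mathbf{{w}}} , {\varvec{\zeta }_a}\rangle ^2\) should reproduce the first two moments of \(\chi _1^2\), namely mean \(1\) and variance \(2\):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, trials = 60, 120, 400
w = rng.standard_normal(M)
w /= np.linalg.norm(w)           # a unit vector, held fixed over all trials

samples = np.empty(trials)
for t in range(trials):
    X = rng.standard_normal((M, N)) / np.sqrt(N)   # assumed entry variance 1/N
    _, zeta = np.linalg.eigh(X @ X.T)              # columns = eigenvectors of H
    samples[t] = M * np.dot(w, zeta[:, -2]) ** 2   # rescaled squared component

print(samples.mean())            # should be close to E[chi_1^2] = 1
print(samples.var())             # should be close to Var[chi_1^2] = 2
```

At finite \(M\) the exact variance of a Haar component is \(3M/(M+2) - 1 < 2\), so a slight downward bias relative to \(2\) is expected at \(M = 60\).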

Remark 6.7

For simplicity, and bearing the application to Theorem 2.20 in mind, in Proposition 6.6 we establish the convergence of a single generalized component of a single eigenvector. However, our method may be easily extended to yield

$$\begin{aligned} \left( {M \langle {\mathbf{{v}}_1} , {\varvec{\zeta }_{a_1}}\rangle \langle {\varvec{\zeta }_{a_1}} , {\mathbf{{w}}_1}\rangle ,\ldots , M \langle {\mathbf{{v}}_k} , {\varvec{\zeta }_{a_k}}\rangle \langle {\varvec{\zeta }_{a_k}} , {\mathbf{{w}}_k}\rangle }\right) \;\overset{d}{\sim }\; (Z_1,\ldots , Z_k) \end{aligned}$$

for any deterministic unit vectors \(\mathbf{{v}}_1,\ldots , \mathbf{{v}}_k, \mathbf{{w}}_1,\ldots , \mathbf{{w}}_k \in \mathbb {R}^M\) and \(a_1 < \cdots < a_k \leqslant K^{1 - \tau }\), where we use the notation \(A_N \overset{d}{\sim }B_N\) to mean that \(A_N\) and \(B_N\) are tight, and \(\lim _{N \rightarrow \infty }\mathbb {E}(f(A_N) - f(B_N)) = 0\) for all polynomially bounded and continuous \(f\). Here \((Z_1,\ldots , Z_k)\) is a family of independent random variables defined by \(Z_i = A_i B_i\), where \(A_i\) and \(B_i\) are jointly Gaussian with covariance matrix

$$\begin{aligned} \begin{pmatrix} 1 & \langle {\mathbf{{v}}_i} , {\mathbf{{w}}_i}\rangle \\ \langle {\mathbf{{v}}_i} , {\mathbf{{w}}_i}\rangle & 1 \end{pmatrix} . \end{aligned}$$

The proof of this generalization of Proposition 6.6 follows that of Proposition 6.6 presented in Sect. 7, requiring only heavier notation. In fact, our method may also be used to prove the universality of the joint eigenvalue–eigenvector distribution for any matrix \(Q\) of the form (1.10) with \(\Sigma = T T^* = I_M\); see Theorem 8.3 below for a precise statement.

The proof of Proposition 6.6 is postponed to Sect. 7.

Assuming Propositions 6.2, 6.3, and 6.6, we may now complete the proof of Theorem 2.20.

Proof of Theorem 2.20

Abbreviating \(b \mathrel {\mathop :}=a - s_+\) and

$$\begin{aligned} \mathbf{{u}} \mathrel {\mathop :}=\frac{1}{\sqrt{M}} \sum _i \frac{\sqrt{\sigma _i}\, w_i}{d_i - 1} \, \mathbf{{v}}_i, \end{aligned}$$

we define

$$\begin{aligned} \widehat{\Theta }(a, \mathbf{{w}}) \mathrel {\mathop :}=M \frac{\langle {\mathbf{{u}}} , {\varvec{\zeta }_b}\rangle ^2}{|\mathbf{{u}} |^2}. \end{aligned}$$

Then, by assumption on \(a\), we may rewrite (6.18) as

$$\begin{aligned} \mathbf{{1}} (\Omega ) \langle {\mathbf{{w}}} , {\varvec{\xi }_a}\rangle ^2 = \mathbf{{1}} (\Omega ) |\mathbf{{u}} |^2 \, \widehat{\Theta }(a, \mathbf{{w}}) + O_\prec (K^{-2\tau /15} |\mathbf{{u}} |^2). \end{aligned}$$

Moreover, by Theorem 2.7 and Proposition 6.3, we have \(\mathbb {P}(\Omega ) \geqslant 1 - K^{-c}\) for some constant \(c > 0\). Finally, by Proposition 6.6 we have \(\widehat{\Theta }(a, \mathbf{{w}}) \rightarrow \chi _1^2\) in distribution (even in the sense of moments). The claim now follows easily. \(\square \)

The remainder of this section is devoted to the proof of Proposition 6.2.

Proof of Proposition 6.2

We define the contour \(\Gamma _a\) as the positively oriented circle of radius \(K^{-\tau /5} \Delta _a\) with centre \(\lambda _b\). Let \(\varepsilon > 0\), and choose a high-probability event \(\Xi \) such that (4.7), (4.8), and (4.9) hold. For the following we fix a realization \(H \in \Omega \cap \Xi \). Define

$$\begin{aligned} Y_a(\mathbf{{w}}) \mathrel {\mathop :}=- \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma _a} \phi ^{1/2} z \, \widetilde{G}_{\mathbf{{w}} \mathbf{{w}}}(z) \, \mathrm {d}z. \end{aligned}$$

By the residue theorem and the definition of \(\Omega \), we find

$$\begin{aligned} Y_a(\mathbf{{w}}) = \phi ^{1/2} \mu _a \langle {\mathbf{{w}}} , {\varvec{\xi }_a}\rangle ^2. \end{aligned}$$
(6.21)
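The identity (6.21) is an instance of the residue theorem for the resolvent. The following numerical sketch (numpy; a generic small symmetric matrix stands in for the actual model, and the prefactor \(\phi ^{1/2}\) is dropped, so \(\mu _a\) becomes a plain eigenvalue \(\lambda _a\)) checks that \(-\frac{1}{2\pi \mathrm {i}} \oint z \, G_{\mathbf{{w}} \mathbf{{w}}}(z) \, \mathrm {d}z\) around an isolated eigenvalue \(\lambda _a\) returns \(\lambda _a \langle \mathbf{{w}}, \varvec{\xi }_a\rangle ^2\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
H = rng.standard_normal((n, n)); H = (H + H.T) / 2   # toy symmetric matrix
lam, V = np.linalg.eigh(H)
w = rng.standard_normal(n); w /= np.linalg.norm(w)

a = 4                                                # encircle the eigenvalue lam[a]
gap = min(lam[a] - lam[a - 1], lam[a + 1] - lam[a])
r = 0.4 * gap                                        # contour stays inside the spectral gap

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
total = 0j
for t in theta:
    z = lam[a] + r * np.exp(1j * t)
    G_ww = w @ np.linalg.solve(H - z * np.eye(n), w)  # G_ww(z) = <w, (H - z)^{-1} w>
    total += z * G_ww * 1j * r * np.exp(1j * t)       # integrand z G_ww(z) z'(t)
integral = -total * (2 * np.pi / len(theta)) / (2j * np.pi)

exact = lam[a] * (w @ V[:, a]) ** 2                   # residue predicted as in (6.21)
print(abs(integral - exact))                          # quadrature error is tiny
```

The equispaced trapezoid rule converges geometrically for this periodic analytic integrand, so a few hundred nodes are far more than enough.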

To simplify notation, suppose now that \(i \in {\mathcal {R}}\) and consider \(\mathbf{{w}} = \mathbf{{v}}_i\). From (3.37) we find that

$$\begin{aligned} Y_a(\mathbf{{v}}_i) = \frac{\sigma _i}{d_i^2} \, \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma _a} \left( {\frac{1}{D^{-1} + W(z)}}\right) _{ii} \, \mathrm {d}z. \end{aligned}$$
(6.22)

In order to compute (6.22), we need precise estimates for \(W\) on \(\Gamma _a\). Because the contour \(\Gamma _a\) crosses the branch cut of \(w_\phi \), we should not compare \(W(z)\) to \(w_\phi (z)\) for \(z \in \Gamma _a\). Instead, we compare \(W(z)\) to \(w_\phi (z_0)\), where

$$\begin{aligned} z_0 \mathrel {\mathop :}=\lambda _b + \mathrm {i}\eta , \qquad \eta \mathrel {\mathop :}=K^{-\tau /5} \Delta _a. \end{aligned}$$

We claim that

$$\begin{aligned} \Vert W(z) - w_\phi (z_0) \Vert \leqslant C K^{-1 + \varepsilon } \eta ^{-1} \end{aligned}$$
(6.23)

for all \(z \in \Gamma _a\). To see this, we split

$$\begin{aligned} \Vert W(z) - w_\phi (z_0) \Vert \leqslant \Vert W(z) - W(z_0) \Vert + \Vert W(z_0) - w_\phi (z_0) \Vert . \end{aligned}$$
(6.24)

We estimate the first term of (6.24) by spectral decomposition, using that \({{\mathrm{dist}}}(z, \sigma (H)) \geqslant c \eta \), similarly to (4.12). The result is

$$\begin{aligned} \Vert W(z) - W(z_0) \Vert&\leqslant C (1 + \phi ) \max _{i} \hbox {Im }G_{\mathbf{{v}}_i \mathbf{{v}}_i}(z_0)\\&\leqslant C (1 + \phi ) \left( {\hbox {Im }m_{\phi ^{-1}}(z_0) + \frac{K^\varepsilon }{1 + \phi } \frac{1}{K \eta }}\right) \\&\leqslant C \hbox {Im }w_\phi (z_0) + C K^{-1 + \varepsilon } \eta ^{-1}\\&\leqslant C K^{-1 + \varepsilon } \eta ^{-1}, \end{aligned}$$

where we used (4.7), (4.9), and Lemma 3.6. Moreover, we estimate the second term of (6.24) using (4.8) as

$$\begin{aligned} \Vert W(z_0) - w_\phi (z_0) \Vert \leqslant K^{-1 + \varepsilon } \eta ^{-1}. \end{aligned}$$

This concludes the proof of (6.23).

Next, we claim that

$$\begin{aligned} |1 + d_i w_\phi (z_0) | \geqslant c |d_i - 1 |. \end{aligned}$$
(6.25)

The proof of (6.25) is analogous to that of (6.10), using (4.7) and the assumption on \(a\); we omit the details.

Armed with (6.23) and (6.25), we may analyse (6.22). A resolvent expansion in the matrix \(w_\phi (z_0) - W(z)\) yields

$$\begin{aligned} Y_a(\mathbf{{v}}_i)&= \frac{\sigma _i}{d_i^2} \, \frac{1}{2 \pi \mathrm {i}} \oint _{\Gamma _a} \left( \frac{1}{d_i^{-1} + w_\phi (z_0)} + \frac{w_\phi (z_0) - W_{ii}(z)}{(d_i^{-1} + w_\phi (z_0))^2} \right. \nonumber \\&\left. + \left( {\frac{w_\phi (z_0) - W(z)}{d_i^{-1} + w_\phi (z_0)} \frac{1}{D^{-1} + W(z)} \frac{w_\phi (z_0) - W(z)}{d_i^{-1} + w_\phi (z_0)}}\right) _{ii}\right) \, \mathrm {d}z. \end{aligned}$$
(6.26)

We estimate the third term using the bound

$$\begin{aligned} \biggl \Vert \frac{1}{D^{-1} + W(z)} \biggr \Vert \leqslant \frac{C}{\alpha _+}. \end{aligned}$$
(6.27)

To prove (6.27), we note first that by (6.25) we have

$$\begin{aligned} \min _i \bigl |d_i^{-1} + w_\phi (z_0) \bigr | \geqslant \min _i \frac{|1 + d_i w_\phi (z_0) |}{|d_i |} \geqslant c \, \min _i \frac{|d_i - 1 |}{|d_i |} \geqslant c \, \alpha _+. \end{aligned}$$

By (6.23) and assumption on \(a\), it is easy to check that

$$\begin{aligned} \min _i \bigl |d_i^{-1} + w_\phi (z_0) \bigr | \geqslant K^{\tau /5} \Vert W(z) - w_\phi (z_0) \Vert , \end{aligned}$$

from which (6.27) follows.

We may now return to (6.26). The first term vanishes, the second is computed by spectral decomposition of \(W\), and the third is estimated using (6.27). This gives

$$\begin{aligned} Y_a(\mathbf{{v}}_i) = \frac{\phi ^{1/2} \sigma _i \lambda _b}{(1 + d_i w_\phi (z_0))^2} \langle {\mathbf{{v}}_i} , {\varvec{\zeta }_b}\rangle ^2 + O \left( {\frac{\sigma _i}{|d_i - 1 |^2 K} \, \frac{K^{2 \varepsilon }}{K \eta \alpha _+}}\right) , \end{aligned}$$

where we also used (6.25).

Recalling (6.21) and (4.7), we therefore get

$$\begin{aligned} \langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2 = \frac{\sigma _i \lambda _b / \mu _a}{(1 + d_i w_\phi (z_0))^2} \langle {\mathbf{{v}}_i} , {\varvec{\zeta }_b}\rangle ^2 + O \left( {\frac{\sigma _i}{|d_i - 1 |^2 M} \, \frac{K^{-1/3 + 2 \varepsilon + \tau /5} a^{1/3}}{\alpha _+}}\right) , \end{aligned}$$

where we used \(\phi ^{1/2}\mu _a \asymp 1 + \phi \).

In order to simplify the leading term, we use

$$\begin{aligned} \frac{-1}{1 + d_i w_\phi (z_0)} = \frac{1}{d_i - 1} + O\left( {\frac{K^{-1/3 + \varepsilon } a^{1/3}}{|d_i - 1 | \alpha _+}}\right) , \end{aligned}$$

as follows from

$$\begin{aligned} |w_\phi (z_0) + 1 | \leqslant C \sqrt{\kappa (z_0) + \eta } \leqslant K^{- 1/3 + \varepsilon } a^{1/3}, \end{aligned}$$

where we used Lemma 3.6. Moreover, we use that

$$\begin{aligned} \lambda _b / \mu _a = 1 + O(K^{-2/3}). \end{aligned}$$
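For the reader's convenience, the elementary algebra behind the expansion of \(-1/(1 + d_i w_\phi (z_0))\) above is

```latex
\frac{-1}{1 + d_i w_\phi(z_0)} - \frac{1}{d_i - 1}
  = \frac{-(d_i - 1) - (1 + d_i w_\phi(z_0))}{(1 + d_i w_\phi(z_0))\,(d_i - 1)}
  = \frac{-\,d_i\,(w_\phi(z_0) + 1)}{(1 + d_i w_\phi(z_0))\,(d_i - 1)}\,,
```

and the claimed error bound follows from \(|1 + d_i w_\phi (z_0)| \geqslant c \, \alpha _+ |d_i|\) (established in the proof of (6.27)) together with the bound on \(|w_\phi (z_0) + 1|\).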

Using that \(\Xi \) has high probability for all \(\varepsilon > 0\) and recalling the isotropic delocalization bound (3.13), we therefore get, now for random \(H\), that

$$\begin{aligned} \mathbf{{1}} (\Omega ) \langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle ^2 = \mathbf{{1}} (\Omega ) \frac{\sigma _i}{(d_i - 1)^2} \langle {\mathbf{{v}}_i} , {\varvec{\zeta }_b}\rangle ^2 + O_\prec \left( {\frac{\sigma _i}{|d_i - 1 |^2 M} \, \frac{K^{-1/3 + \tau /5} a^{1/3}}{\alpha _+}}\right) .\nonumber \\ \end{aligned}$$
(6.28)

We proved (6.28) under the assumption that \(i \in {\mathcal {R}}\), but a continuity argument analogous to that given after (5.28) implies that (6.28) holds for all \(i \in [\![{1,M}]\!]\). The above argument may be repeated verbatim to yield

$$\begin{aligned} \mathbf{{1}} (\Omega ) \langle {\mathbf{{v}}_i} , {\varvec{\xi }_a}\rangle \langle {\varvec{\xi }_a} , {\mathbf{{v}}_j}\rangle&= \mathbf{{1}} (\Omega ) \frac{\sqrt{\sigma _i\sigma _j}}{(d_i - 1)(d_j - 1)} \langle {\mathbf{{v}}_i} , {\varvec{\zeta }_b}\rangle \langle {\varvec{\zeta }_b} , {\mathbf{{v}}_j}\rangle \\&+ O_\prec \left( {\frac{\sqrt{\sigma _i \sigma _j}}{|d_i - 1 | |d_j - 1 | M} \, \frac{K^{-1/3 + \tau /5} a^{1/3}}{\alpha _+}}\right) . \end{aligned}$$

Since we may always choose the basis \(\{\mathbf{{v}}_i\}_{i = 1}^M\) so that at most \(|{\mathcal {R}} | + 1\) components of \((w_1,\ldots , w_M)\) are nonzero, the claim now follows easily. \(\square \)

7 Quantum unique ergodicity near the soft edge of \(H\)

This section is devoted to the proof of Proposition 6.6.

Lemma 7.1

Fix \(\tau \in (0,1)\). Let \(h\) be a smooth function satisfying

$$\begin{aligned} |h'(x) | \leqslant C (1 + |x |)^{C} \end{aligned}$$
(7.1)

for some positive constant \(C\). Let \(a \leqslant K^{1 - \tau }\) and suppose that \(\lambda _a\) satisfies (6.19) with some constants \(\varepsilon \) and \(\delta \). Then for small enough \(\delta _1 = \delta _1(\varepsilon , \delta )\) and \(\delta _2 = \delta _2(\varepsilon , \delta , \delta _1)\) the following holds. Defining

$$\begin{aligned} \eta \mathrel {\mathop :}=\Delta _a K^{ -2 \varepsilon }, \quad E^\pm \mathrel {\mathop :}=E\pm K^{ \delta _1}\eta , \quad I\mathrel {\mathop :}=\bigl [{\gamma _a - K^{ \delta _2} \Delta _a,\, \gamma _a +K^{ \delta _2} \Delta _a}\bigr ],\quad \end{aligned}$$
(7.2)

we have

$$\begin{aligned} \mathbb {E}\, h \left( {M \langle {\mathbf{{w}}} , {\varvec{\zeta }_a}\rangle ^2}\right) \!-\! \mathbb {E}\,h \left( {\frac{M}{\pi }\int _{I} \hbox {Im }G_{\mathbf{{w}} \mathbf{{w}} }(E \!+\! \mathrm {i}\eta ) \, \chi (E) \, \mathrm {d}E}\right) \!=\! O(K^{-\delta _2/2}),\quad \end{aligned}$$
(7.3)

where we defined \(\chi (E) \mathrel {\mathop :}=\mathbf{{1}} (\lambda _{a+1} \leqslant E^- \leqslant \lambda _a)\).

Proof

By the assumption (7.1) on \(h\), rigidity (3.15), and delocalization (3.13), we can write

$$\begin{aligned} \mathbb {E}\,h \left( {M \langle {\mathbf{{w}}} , {\mathbf{{\varvec{\zeta }}}_a}\rangle ^2}\right) = \mathbb {E}\,h \left( {\frac{M\eta }{\pi }\, \int _{I\cap [\alpha ,\beta ]} \frac{ \langle {\mathbf{{w}}} , {\mathbf{{\varvec{\zeta }}}_a}\rangle ^2}{(E-\lambda _a)^2+\eta ^2} \mathrm {d}E}\right) + O(K^{-\delta _1/2}) \end{aligned}$$
(7.4)

provided that

$$\begin{aligned} \alpha \leqslant \lambda _a^-, \qquad \beta \geqslant \lambda _a^+, \end{aligned}$$

where we defined \(\lambda _a^\pm \mathrel {\mathop :}=\lambda _a \pm K^{\delta _1}\eta \). For the following we choose

$$\begin{aligned} \alpha \mathrel {\mathop :}=\lambda _a^- \wedge \lambda _{a + 1}^+, \qquad \beta \mathrel {\mathop :}=\lambda _a^+. \end{aligned}$$

Now from (6.19) we get \(\mathbb {P}(\lambda _{a+1}^+\geqslant \lambda _a^-)\leqslant K^{-\delta }\) for \(\delta _1 < \varepsilon \). For \(\delta _1 < \delta \wedge \varepsilon \), we therefore get

$$\begin{aligned} \mathbb {E}\,h \left( {M \bigl |\langle {\mathbf{{w}}} , {\mathbf{{\varvec{\zeta }}}_a }\rangle \bigr |^2}\right) = \mathbb {E}\,h\left( \frac{M\eta }{\pi }\int _{I} \frac{\bigl |\langle {\mathbf{{w}}} , {\mathbf{{\varvec{\zeta }}}_a }\rangle \bigr |^2}{(E-\lambda _a)^2+\eta ^2} \, \chi (E) \, \mathrm {d}E \right) + O(K^{-\delta _1/2}).\nonumber \\ \end{aligned}$$
(7.5)

In order to obtain (7.3), we have to rewrite the integrand on the right-hand side of (7.5) in terms of

$$\begin{aligned} \hbox {Im }G_{\mathbf{{w}}\mathbf{{w}}}(E+ \mathrm {i}\eta ) = \sum _{b \ne a} \frac{\eta \langle {\mathbf{{w}}} , {\varvec{\zeta }_b}\rangle ^2}{(E-\lambda _b)^2+\eta ^2} +\frac{\eta \langle {\mathbf{{w}}} , {\mathbf{{\varvec{\zeta }}}_a}\rangle ^2}{(E-\lambda _a)^2+\eta ^2}. \end{aligned}$$
(7.6)

Hence (7.5) and (7.6) combined with the mean value theorem imply that the left-hand side of (7.3) is bounded by

$$\begin{aligned} K^{C \delta _2} \mathbb {E}\, \sum _{b \ne a} \frac{M\eta }{\pi }\, \int _{I} \frac{\langle {\mathbf{{w}}} , {\varvec{\zeta } _b}\rangle ^2}{(E-\lambda _b)^2+\eta ^2}\, \chi (E) \, \mathrm {d}E + C K^{-\delta _1 / 2} \end{aligned}$$
(7.7)

for any fixed \(\delta _2 \in (0,\delta _1)\). When applying the mean value theorem, we estimated the value of \(h'(\cdot )\) using (7.1), the fact that all terms on the right-hand side of (7.6) are nonnegative, and the estimate

$$\begin{aligned} M \int _I \hbox {Im }G_{\mathbf{{w}} \mathbf{{w}}}(E + \mathrm {i}\eta ) \, \mathrm {d}E \prec K^{\delta _2}. \end{aligned}$$
(7.8)

The proof of (7.8) follows by using the spectral decomposition from (7.6) with the delocalization bound (3.13); for \(|b - a | \geqslant K^{\delta _2}\) we use the rigidity bound (3.15), and for \(|b - a | \leqslant K^{\delta _2}\) we estimate the integral using \(\int \frac{\eta }{e^2 + \eta ^2} \, \mathrm {d}e = \pi \). We omit the full details.
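The mechanism behind these estimates, namely that \(\frac{1}{\pi } \int _I \hbox {Im }G_{\mathbf{{w}} \mathbf{{w}}}(E + \mathrm {i}\eta ) \, \mathrm {d}E\) essentially recovers \(\langle \mathbf{{w}}, \varvec{\zeta }_a\rangle ^2\) when the window \(I\) isolates \(\lambda _a\) on a scale much larger than \(\eta \), can be checked on a toy example (numpy; the diagonal spectrum, window, and scale \(\eta = 10^{-2}\) are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
lam = np.linspace(0.0, 5.0, n)               # toy spectrum with unit gaps
w = rng.standard_normal(n); w /= np.linalg.norm(w)
zeta2 = w ** 2                               # <w, zeta_b>^2 in the eigenbasis

eta = 1e-2                                   # spectral scale, well below the gaps
a = 2
E = np.linspace(lam[a] - 0.5, lam[a] + 0.5, 4001)   # window isolating lam[a]
# Im G_ww(E + i eta) via the spectral decomposition, cf. (7.6)
ImG = np.sum(eta * zeta2[None, :] / ((E[:, None] - lam[None, :]) ** 2 + eta ** 2),
             axis=1)
dE = E[1] - E[0]
approx = np.sum((ImG[:-1] + ImG[1:]) / 2) * dE / np.pi   # (1/pi) \int_I Im G_ww dE

print(abs(approx - zeta2[a]))                # small: Lorentzian tails contribute O(eta)
```

The error is \(O(\eta )\) from the Lorentzian tails, in line with \(\int \frac{\eta }{e^2 + \eta ^2} \, \mathrm {d}e = \pi \).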

Next, using the eigenvalue rigidity from (3.15), it is not hard to see that there exists a constant \(C_1\) such that the contribution of \(|b - a| \geqslant K^{C_1\delta _2}\) to (7.7) is bounded by \(K^{-\delta _2}\). In order to prove (7.3), therefore, it suffices to prove

$$\begin{aligned} \mathbb {E}\, \sum _{b \ne a \mathrel {\mathop :}|b -a| \leqslant K^{C_1\delta _2} }\frac{M \eta }{\pi }\, \int _{I} \frac{\langle {\mathbf{{w}}} , {\varvec{\zeta } _b}\rangle ^2}{(E-\lambda _b)^2+\eta ^2}\, \chi (E) \, \mathrm {d}E = O(K^{-\delta _2/2}). \end{aligned}$$
(7.9)

For \(b > a\), we get using (3.13) that

$$\begin{aligned}&\sum _{b > a : |b - a| \leqslant K^{C_1\delta _2}} \frac{M\eta }{\pi }\, \mathbb {E}\int _{I} \frac{\langle {\mathbf{{w}}} , {\varvec{\zeta } _b}\rangle ^2}{(E-\lambda _b)^2+\eta ^2}\, \chi (E) \, \mathrm {d}E\\&\quad \leqslant K^{C_1\delta _2}\, \mathbb {E}\int _{\lambda _{a +1}^+}^{\infty } \frac{\eta }{(E-\lambda _{a + 1})^2+\eta ^2} \, \mathrm {d}E\leqslant C K^{ C_1\delta _2 -\delta _1/2}, \end{aligned}$$

which is the right-hand side of (7.9) provided \(\delta _2\) is chosen small enough. Here in the first step we replaced \(\lambda _b\) with \(\lambda _{a+1}\) using the estimates \(\lambda _b \leqslant \lambda _{a+1} \leqslant E - K^{\delta _1} \eta \) valid for \(b > a\) and \(E\) in the support of \(\chi \).

For \(b < a\), we partition \(I = I_1 \cup I_2\) with \(I_1 \cap I_2 = \emptyset \) and

$$\begin{aligned} I_1 \mathrel {\mathop :}=\Big \{E\in I \,\mathrel {\mathop :}\,\exists \, b < a, \; |b -a| \leqslant K^{C_1\delta _2} ,\, |E-\lambda _b|\leqslant \eta K^{\delta _1} \Big \}. \end{aligned}$$

As above, we find

$$\begin{aligned} \sum _{b < a: |b - a| \leqslant K^{C_1\delta _2}} \frac{M\eta }{\pi }\, \mathbb {E}\int _{I_2} \frac{\langle {\mathbf{{w}}} , {\varvec{\zeta } _b}\rangle ^2}{(E-\lambda _b)^2+\eta ^2}\, \chi (E) \, \mathrm {d}E \leqslant K^{ C_2\delta _2 -\delta _1/2}. \end{aligned}$$

Let us therefore consider the integral over \(I_{1}\). One readily finds, for \(\lambda _a \leqslant \lambda _{a -1} \leqslant \lambda _b\), that

$$\begin{aligned} \frac{ 1\, }{(E-\lambda _b)^2+\eta ^2} \, \mathbf{{1}} (E^- \leqslant \lambda _a) \leqslant \frac{K^{2\delta _1}}{(\lambda _b-\lambda _a)^2+\eta ^2} \leqslant \frac{K^{2\delta _1}}{(\lambda _{a - 1}-\lambda _a)^2+\eta ^2}. \end{aligned}$$

Using delocalization (3.13) we therefore find that

$$\begin{aligned}&\sum _{b < a: |b -a| \leqslant K^{C_1\delta _2}} \frac{M\eta }{\pi }\, \mathbb {E}\int _{I_1} \frac{\langle {\mathbf{{w}}} , {\varvec{\zeta } _b}\rangle ^2}{(E-\lambda _b)^2+\eta ^2}\, \chi (E) \, \mathrm {d}E\nonumber \\&\quad \leqslant K^{C_1\delta _2+2\delta _1} \mathbb {E}\, \frac{\eta ^2}{(\lambda _{a - 1}-\lambda _a)^2+\eta ^2}. \end{aligned}$$
(7.10)

The expectation \(\mathbb {E}\, \frac{\eta ^2}{(\lambda _{a - 1}-\lambda _a)^2+\eta ^2}\) in (7.10) is bounded by \(\mathbb {P}(|\lambda _{a - 1}-\lambda _a|\leqslant \Delta _a K^{-\varepsilon }) +O(K^{-\varepsilon })\). Using (6.19), we therefore obtain

$$\begin{aligned} \sum _{b< a : |b -a | \leqslant K^{C_1\delta _2}} \frac{M\eta }{\pi }\, \mathbb {E}\int _{I_1} \frac{\langle {\mathbf{{w}}} , {\varvec{\zeta } _b}\rangle ^2}{(E-\lambda _b)^2+\eta ^2}\, \chi (E) \, \mathrm {d}E \leqslant K^{C_1\delta _2+2\delta _1-\delta }. \end{aligned}$$

This concludes the proof. \(\square \)

In the next step, stated in Lemma 7.2 below, we replace the sharp cutoff function \(\chi \) in (7.3) with a smooth function of \(H\). Note first that from Lemma 7.1 and the rigidity (3.15), we get

$$\begin{aligned}&\mathbb {E}\, h \left( {M \langle {\mathbf{{w}}} , {\mathbf{{\varvec{\zeta }}}_a}\rangle ^2}\right) - \mathbb {E}\,h \left( {\frac{M}{\pi }\int _{I} \hbox {Im }G_{\mathbf{{w}} \mathbf{{w}} }(E + \mathrm {i}\eta ) \, \mathbf{{1}} \left( {{\mathcal {N}}( E^-,\tilde{E})= a}\right) \, \mathrm {d}E}\right) \nonumber \\&\quad = O(K^{-\delta _2/2}), \end{aligned}$$
(7.11)

where \(\tilde{E} \mathrel {\mathop :}=\gamma _+ + 1\) and \({\mathcal {N}}( E^-,\tilde{E}) \mathrel {\mathop :}=|\{i \mathrel {\mathop :}E^- < \lambda _i < \tilde{E}\} |\) is an eigenvalue counting function.

Next, for any \(E_1, E_2 \in [\gamma _- - 1, \gamma _+ + 1]\) and \(\tilde{\eta }>0\) we define \(f(\lambda ) \equiv f_{E_1,E_2,\tilde{\eta }}(\lambda )\) to be the characteristic function of \([E_1, E_2]\) smoothed on scale \(\tilde{\eta }\): \(f = 1\) on \([E_1, E_2]\), \(f = 0\) on \(\mathbb {R}\setminus [E_1-\tilde{\eta }, E_2+\tilde{\eta }]\) and \(|f' |\leqslant C\,\tilde{\eta }^{-1}\), \(|f''|\leqslant C\,\tilde{\eta }^{-2}\). Moreover, let \(q \equiv q_a:\mathbb {R}\rightarrow \mathbb {R}_+\) be a smooth cutoff function concentrated around \(a\), satisfying

$$\begin{aligned} q(x) \!=\! q_a(x) = 1 \quad \text {if} \quad |x - a| \leqslant 1/3, \qquad q(x) \!=\! 0 \quad \text {if} \quad |x - a| \geqslant 2/3, \quad |q' | \leqslant 6.\nonumber \\ \end{aligned}$$
(7.12)
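The role of the smoothed indicator \(f\) is that \(\hbox {Tr }f(H)\) counts eigenvalues exactly whenever no eigenvalue falls into the smoothing layers of width \(\tilde{\eta }\). A minimal numpy sketch (with a \(C^1\) polynomial ramp standing in for the \(C^\infty \) cutoff of the text, and illustrative sizes):

```python
import numpy as np

def smooth_indicator(x, E1, E2, eta):
    """Characteristic function of [E1, E2] smoothed on scale eta (C^1 ramp)."""
    def ramp(t):                                 # 0 for t <= 0, 1 for t >= 1
        t = np.clip(t, 0.0, 1.0)
        return 3 * t ** 2 - 2 * t ** 3
    return ramp((x - (E1 - eta)) / eta) * ramp(((E2 + eta) - x) / eta)

rng = np.random.default_rng(3)
n = 10
H = rng.standard_normal((n, n)); H = (H + H.T) / 2
lam = np.linalg.eigvalsh(H)                      # ascending

E1 = (lam[5] + lam[6]) / 2                       # window edge in the middle of a gap
E2 = lam[-1] + 1.0                               # window holds the top 4 eigenvalues
eta = 0.1 * (lam[6] - lam[5])                    # smoothing scale below the gap
count = np.sum(smooth_indicator(lam, E1, E2, eta))
print(count)                                     # exactly 4.0: Tr f(H) is an exact count here
```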

The following result is the appropriate smoothed version of (7.11). It is a simple extension of Lemma 3.2 and Equation (5.8) from [29], and its proof is omitted.

Lemma 7.2

Let \(\tilde{E} \mathrel {\mathop :}=\gamma _+ + 1\) and

$$\begin{aligned} \tilde{\eta }\mathrel {\mathop :}=\eta K^{-\varepsilon } = \Delta _a K^{-3\varepsilon }, \end{aligned}$$

and abbreviate \(q \equiv q_a\) and \(f_E \equiv f_{ E^-, \tilde{E},\tilde{\eta }}\). Then under the assumptions of Lemma 7.1 we have

$$\begin{aligned} \mathbb {E}h \left( {M \langle {\mathbf{{w}}} , {\mathbf{{\varvec{\zeta }}}_a}\rangle ^2}\right) - \mathbb {E}h \left( {\frac{M}{\pi }\int _{I} \hbox {Im }G_{\mathbf{{w}} \mathbf{{w}} }(E + \mathrm {i}\eta ) q\left( {\hbox {Tr }f_E (H)}\right) \mathrm {d}E}\right) = O(K^{-\delta _2/2}).\nonumber \\ \end{aligned}$$
(7.13)

We may now conclude the proof of Proposition 6.6.

Proof of Proposition 6.6

The basic strategy of the proof is to compare the distribution of \(\langle {\mathbf{{w}}} , {\mathbf{{\varvec{\zeta }}}_a}\rangle ^2\) under a general \(X\) to that under a Gaussian \(X\). In the latter case, by orthogonal invariance of \(H = X X^*\), we know that \(\varvec{\zeta }_a\) is uniformly distributed on the unit sphere of \(\mathbb {R}^M\), so that \(M\langle {\mathbf{{w}}} , {\mathbf{{\varvec{\zeta }}}_a}\rangle ^2 \rightarrow \chi _1^2\) in distribution.

For the comparison argument, we use the Green function comparison method applied to the Helffer–Sjöstrand representation of \(f(H)\). Using Lemma 7.2 it suffices to estimate

$$\begin{aligned} (\mathbb {E}-\mathbb {E}^{\mathrm{{Gauss}}}) \,h \left( {\frac{M}{\pi }\int _{I} \hbox {Im }G_{\mathbf{{w}} \mathbf{{w}} }(E + \mathrm {i}\eta ) \, q(\hbox {Tr }f_E(H))\, \mathrm {d}E}\right) , \end{aligned}$$
(7.14)

where \(\mathbb {E}^{\mathrm{{Gauss}}}\) denotes the expectation with respect to Gaussian \(X\). Now we express \(f (H)\) in terms of Green functions using Helffer–Sjöstrand functional calculus. Recall the definition of \(\kappa _a\) from (2.18). Let \(g(y)\) be a smooth cutoff function with support in \([-\kappa _a, \kappa _a]\), with \(g(y)=1\) for \(|y| \leqslant \kappa _a/2\) and \(\Vert g^{(n)}\Vert _\infty \leqslant C\kappa _a^{-n}\), where \(g^{(n)}\) denotes the \(n\)-th derivative of \(g\). Then, similarly to (3.30), we have (see e.g. Equation (B.12) of [21])

$$\begin{aligned} f_E (\lambda ) = \frac{1}{2\pi }\int _{\mathbb {R}^2}\frac{\mathrm {i}\sigma f_E ''(e)g (\sigma )+ \mathrm {i}f_E (e) g'(\sigma )-\sigma f_E '(e)g'(\sigma )}{\lambda -e-\mathrm {i}\sigma } \, \mathrm {d}e \, \mathrm {d}\sigma . \end{aligned}$$

Thus we get the functional calculus, with \(G(z)=(H-z)^{-1}\),

$$\begin{aligned} \hbox {Tr }f_E (H)&= \frac{1}{2\pi }\int _{\mathbb {R}^2}\left( {\mathrm {i}\sigma f_E ''(e)g (\sigma )\!+\! \mathrm {i}f_E (e) g'(\sigma )\!-\!\sigma f_E '(e)g'(\sigma )}\right) \hbox {Tr }G(e \!+\! \mathrm {i}\sigma ) \, \mathrm {d}e \, \mathrm {d}\sigma \nonumber \\&= \frac{1}{2\pi }\int _{\mathbb {R}^2}\left( {\mathrm {i}f_E (e) g'(\sigma )-\sigma f_E '(e)g'(\sigma )}\right) \hbox {Tr }G(e + \mathrm {i}\sigma )\, \mathrm {d}e \, \mathrm {d}\sigma \nonumber \\&\qquad {}+{} \frac{\mathrm {i}}{2 \pi } \int _{|\sigma | > \tilde{\eta }K^{-d \varepsilon }} \mathrm {d}\sigma \, g(\sigma ) \int \mathrm {d}e \; f''_E(e) \, \sigma \hbox {Tr }G(e + \mathrm {i}\sigma ) \nonumber \\&\qquad {}+{} \frac{\mathrm {i}}{2 \pi } \int _{-\tilde{\eta }K^{-d \varepsilon }}^{\tilde{\eta }K^{-d \varepsilon }} \mathrm {d}\sigma \int \mathrm {d}e \; f''_E(e) \, \sigma \hbox {Tr }G(e + \mathrm {i}\sigma ) . \end{aligned}$$
(7.15)

One can easily extend (3.9) to \(\eta \) satisfying the lower bound \(\eta > 0\) instead of \(\eta \geqslant K^{-1 + \omega }\) in (3.7); the proof is identical to that of [29, Lemma 5.1]. Thus we have, for \(e \in [\gamma _+ - 1, \gamma _+ + 1]\) and \(\sigma \in (0, 1)\),

$$\begin{aligned} \sigma \hbox {Tr }G(e + \mathrm {i}\sigma ) = O_\prec (1). \end{aligned}$$
(7.16)

Therefore, by the trivial symmetry \(\sigma \mapsto -\sigma \) combined with complex conjugation, the third term on the right-hand side of (7.15) satisfies

$$\begin{aligned} \frac{\mathrm {i}}{2 \pi } \int _{-\tilde{\eta }K^{-d \varepsilon }}^{\tilde{\eta }K^{-d \varepsilon }} \mathrm {d}\sigma \int \mathrm {d}e \; f''_E(e) \, \sigma \hbox {Tr }G(e + \mathrm {i}\sigma ) = O_\prec ( K^{-d \varepsilon }), \end{aligned}$$
(7.17)

where we used that \(\int |f''_E(e) | \, \mathrm {d}e = O(\tilde{\eta }^{-1})\). Next, we note that (3.9) and Lemma 3.6 imply

$$\begin{aligned} \frac{M}{\pi }\int _{I} \hbox {Im }G_{\mathbf{{w}} \mathbf{{w}} }(E + \mathrm {i}\eta ) \, \mathrm {d}E = O_\prec (K^{3 \varepsilon }). \end{aligned}$$
(7.18)

Recalling (7.1) and using the mean value theorem, we find from (7.13), (7.17), and (7.18) that for large enough \(d\), in order to estimate (7.14), and hence prove (6.20), it suffices to prove the following lemma. Note that there we choose \(X^{(1)}\) to be the original ensemble and \(X^{(2)}\) to be the Gaussian ensemble. \(\square \)

Lemma 7.3

Suppose that the two \(M \times N\) matrix ensembles \(X^{(1)}\) and \(X^{(2)}\) satisfy (1.15) and (1.16). Suppose that the assumptions of Lemma 7.1 hold\(,\) and recall the notations

$$\begin{aligned} \eta \!\mathrel {\mathop :}=\! \Delta _a K^{-2 \varepsilon }, \qquad \tilde{\eta }\!\mathrel {\mathop :}=\!\Delta _a K^{-3\varepsilon }, \qquad E^\pm \!\mathrel {\mathop :}=\! E\pm K^{ \delta _1}\eta , \qquad \tilde{E} \!\mathrel {\mathop :}=\! \gamma _+ + 1,\qquad \end{aligned}$$
(7.19)

as well as

$$\begin{aligned} f_E \equiv f_{E^-, \tilde{E}, \tilde{\eta }}, \qquad I \mathrel {\mathop :}=\bigl [{\gamma _a - K^{ \delta _2} \Delta _a ,\, \gamma _a +K^{ \delta _2} \Delta _a}\bigr ], \end{aligned}$$
(7.20)

where \(f_{E^-, \tilde{E}, \tilde{\eta }}\) was defined above (7.12). Recall \(q \equiv q_a\) from (7.12). Finally\(,\) suppose that \(\varepsilon >4\delta _1\) and \(\delta _1>4 \delta _2\).

Then for any \(d > 1\) and for small enough \(\varepsilon \equiv \varepsilon (\tau ,d) > 0\) and \(\delta _2 \equiv \delta _2(\varepsilon , \delta , \delta _1)\) we have

$$\begin{aligned} \bigl [{\mathbb {E}^{(1)} - \mathbb {E}^{(2)}}\bigr ] \,h \biggl [{ \int _{I} x(E) \, q \left( {y(E) }\right) \, \mathrm {d}E }\biggr ] = O(K^{-\delta _2}), \end{aligned}$$
(7.21)

where we defined

$$\begin{aligned} x(E) \mathrel {\mathop :}=\frac{M }{\pi }\hbox {Im }G_{\mathbf{{w}} \mathbf{{w}}}(E + \mathrm {i}\eta ) \end{aligned}$$
(7.22)

and

$$\begin{aligned} y(E)&\mathrel {\mathop :}= \frac{1}{2\pi }\int _{\mathbb {R}^2} \mathrm {i}\sigma f_E''(e) g (\sigma ) \hbox {Tr }G(e + \mathrm {i}\sigma )\, \mathbf{{1}} \left( {|\sigma | > \tilde{\eta }K^{-d \varepsilon }}\right) \,\mathrm {d}e \, \mathrm {d}\sigma \nonumber \\&+\frac{1}{2\pi }\int _{\mathbb {R}^2} \left( { \mathrm {i}f_E(e) g'(\sigma )- \sigma f_E'(e)g'(\sigma )}\right) \hbox {Tr }G(e + \mathrm {i}\sigma ) \, \mathrm {d}e \, \mathrm {d}\sigma . \end{aligned}$$
(7.23)

The rest of this section is devoted to the proof of Lemma 7.3.

7.1 Proof of Lemma 7.3 I: preparations

We shall use the Green function comparison method [22, 23, 29] to prove Lemma 7.3. For definiteness, we assume throughout the remainder of Sect. 7 that \(\phi \geqslant 1\). The case \(\phi < 1\) is dealt with similarly, and we omit the details.

We first collect some basic identities and estimates that serve as a starting point for the Green function comparison argument. We work on the product probability space of the ensembles \(X^{(1)}\) and \(X^{(2)}\). We fix a bijective ordering map \(\Phi \) on the index set of the matrix entries,

$$\begin{aligned} \Phi \mathrel {\mathop :}\{(i, \mu ) \mathrel {\mathop :}1\leqslant i\leqslant M, \; 1\leqslant \mu \leqslant N \} \;\longrightarrow \; [\![{1, MN}]\!], \end{aligned}$$

and define the interpolating matrix \(X_\gamma \), \(\gamma \in [\![{1, MN}]\!]\), through

$$\begin{aligned} (X_\gamma )_{i \mu } \mathrel {\mathop :}={\left\{ \begin{array}{ll} X_{i \mu }^{(1)} &{} \text {if } \Phi (i,\mu ) > \gamma \\ X_{i \mu }^{(2)} &{} \text {if } \Phi (i,\mu ) \leqslant \gamma . \end{array}\right. } \end{aligned}$$

In particular, \(X_0 = X^{(1)}\) and \( X_{MN} = X^{(2)}\). Hence we have the telescopic sum

$$\begin{aligned}&\big [ \mathbb {E}^{(1)} - \mathbb {E}^{(2)} \big ] \,h \left[ \int _{I } x(E) \, q (y(E) ) \, \mathrm {d}E \right] \nonumber \\&\qquad = \sum _{\gamma = 1}^{MN} \Big [ \mathbb {E}^{X_{\gamma -1}} - \mathbb {E}^{X_{\gamma }} \Big ] \,h \left[ \int _{I } x(E) \, q (y(E) ) \, \mathrm {d}E\right] \end{aligned}$$
(7.24)

(in self-explanatory notation).
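The interpolation can be made concrete in a few lines (numpy; the toy sizes and the statistic \(F\) are illustrative choices). For any fixed realization the sum telescopes exactly:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 4, 5
X1 = rng.standard_normal((M, N))                 # stands in for X^(1)
X2 = rng.standard_normal((M, N))                 # stands in for X^(2)

def F(X):                                        # any statistic of X works here
    return np.sum(np.linalg.eigvalsh(X @ X.T) ** 2)

pairs = [(i, mu) for i in range(M) for mu in range(N)]   # the ordering Phi

def X_gamma(gamma):                              # X^(2) entries where Phi <= gamma, X^(1) elsewhere
    X = X1.copy()
    for (i, mu) in pairs[:gamma]:
        X[i, mu] = X2[i, mu]
    return X

telescope = sum(F(X_gamma(g - 1)) - F(X_gamma(g)) for g in range(1, M * N + 1))
print(abs((F(X1) - F(X2)) - telescope))          # the sum telescopes exactly, cf. (7.24)
```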

Let us now fix a \(\gamma \) and let \((b,\beta )\) be determined by \(\Phi (b, \beta ) = \gamma \). Throughout the following we consider \(b, \beta \) to be arbitrary but fixed and often omit dependence on them from the notation. Our strategy is to compare \(X_{\gamma -1}\) with \(X_\gamma \) for each \(\gamma \). In the end we shall sum up the differences in the telescopic sum (7.24).

Note that \(X_{\gamma - 1}\) and \(X_\gamma \) differ only in the matrix entry indexed by \((b,\beta )\). Thus we may write

$$\begin{aligned} X_{\gamma -1}&= \bar{X} + \widetilde{U}, \qquad \widetilde{U}_{i \mu } \mathrel {\mathop :}=\delta _{i b} \delta _{\mu \beta } X^{(1)}_{b\beta } , \nonumber \\ X_\gamma&= \bar{X} + U, \qquad U_{i \mu } \mathrel {\mathop :}=\delta _{i b} \delta _{\mu \beta } X^{(2)}_{b\beta } . \end{aligned}$$
(7.25)

Here \(\bar{X}\) is the matrix obtained from \(X_{\gamma }\) (or, equivalently, from \(X_{\gamma - 1}\)) by setting the entry indexed by \((b, \beta )\) to zero. Next, we define the resolvents

$$\begin{aligned} T(z) \mathrel {\mathop :}=(\bar{X}\bar{X}^*-z)^{-1},\qquad \qquad S(z) \mathrel {\mathop :}=(X_\gamma X_\gamma ^*-z)^{-1}. \end{aligned}$$
(7.26)

We shall show that the difference between the expectations \(\mathbb {E}^{X_{\gamma }}\) and \(\mathbb {E}^{\bar{X}}\) depends only on the first two moments of \(X^{(2)}_{b \beta }\), up to an error term that is negligible even after summation over \(\gamma \). Together with the same argument applied to \(\mathbb {E}^{X_{\gamma -1}}\), and the fact that the second moments of \(X^{(1)}_{b \beta }\) and \(X^{(2)}_{b \beta }\) coincide, this will prove Lemma 7.3.

We define \(x_T(E)\) and \(y_T(E)\) as in (7.22) and (7.23) with \(G\) replaced by \(T\), and similarly \(x_S(E)\) and \(y_S(E)\) with \(G\) replaced by \(S\). Throughout the following we use the notation \(\mathbf{{w}} = (w(i))_{i = 1}^M\) for the components of \(\mathbf{{w}}\). In order to prove (7.21) using (7.24), it is enough to prove that for some constant \(c > 0\) we have

$$\begin{aligned}&\mathbb {E}h \left[ \int _{I} x_S(E) \, q \left( {y_S(E) }\right) \, \mathrm {d}E \right] - \mathbb {E}h \left[ \int _{I} x_T(E) \, q \left( {y_T(E) }\right) \, \mathrm {d}E \right] \nonumber \\&\qquad =\mathbb {E}{\mathcal {A}}+O (K^{-c}) \left( {\phi ^{-1}K^{-2} +K^{-1} |w(b) |^2}\right) , \end{aligned}$$
(7.27)

where \({\mathcal {A}}\) is a polynomial of degree two in \(U_{b \beta }\) whose coefficients are \(\bar{X}\)-measurable.

The rest of this section is therefore devoted to the proof of (7.27). Recall that we assume throughout that \(\phi \geqslant 1\) for definiteness; in particular, \(K = N\).

We begin by collecting some basic identities from linear algebra. In addition to \(G(z) \mathrel {\mathop :}=(X X^* - z)^{-1}\) we introduce the auxiliary resolvent \(R(z) \mathrel {\mathop :}=(X^* X - z)^{-1}\). Moreover, for \(\mu \in [\![{1,N}]\!]\) we split

$$\begin{aligned} X = X_{[\mu ]} + X^{[\mu ]} , \qquad (X_{[\mu ]})_{i \nu } \mathrel {\mathop :}=\mathbf{{1}} (\nu = \mu ) X_{i \nu }, \qquad (X^{[\mu ]})_{i \nu } \mathrel {\mathop :}=\mathbf{{1}} (\nu \ne \mu ) X_{i \nu }. \end{aligned}$$

We also define the resolvent \(G^{[\mu ]} \mathrel {\mathop :}=(X^{[\mu ]} (X^{[\mu ]})^* - z)^{-1}\). A simple Neumann series yields the identity

$$\begin{aligned} G = G^{[\mu ]} - \frac{G^{[\mu ]} X_{[\mu ]} X_{[\mu ]}^* G^{[\mu ]}}{1 + (X^* G^{[\mu ]} X)_{\mu \mu }}. \end{aligned}$$
(7.28)

Moreover, from [10, Equation (3.11)], we find

$$\begin{aligned} z R_{\mu \mu } = - \frac{1}{1 + (X^* G^{[\mu ]} X)_{\mu \mu }}. \end{aligned}$$
(7.29)

From (7.28) and (7.29) we easily get

$$\begin{aligned} GX_{[\mu ]} = -z \,R_{\mu \mu } \, G^{[\mu ]}X_{[\mu ]}, \qquad X_{[\mu ]}^* G = -z R_{\mu \mu } X_{[\mu ]}^* G^{[\mu ]}. \end{aligned}$$
(7.30)
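The identities (7.28)–(7.30) are exact linear-algebra facts of Sherman–Morrison type, and can be verified numerically on a small random matrix (numpy; the sizes, the index \(\mu \), and the spectral parameter \(z\) are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, mu = 5, 7, 3
z = 0.8 + 0.3j
X = rng.standard_normal((M, N))
X_col = np.zeros_like(X); X_col[:, mu] = X[:, mu]    # X_[mu]: only column mu kept
X_min = X - X_col                                    # X^[mu]: column mu zeroed out

G  = np.linalg.inv(X @ X.T - z * np.eye(M))
Gm = np.linalg.inv(X_min @ X_min.T - z * np.eye(M))  # G^[mu]
R  = np.linalg.inv(X.T @ X - z * np.eye(N))

denom = 1 + (X.T @ Gm @ X)[mu, mu]
rhs728 = Gm - (Gm @ X_col @ X_col.T @ Gm) / denom    # right-hand side of (7.28)

print(np.max(np.abs(G - rhs728)))                    # (7.28)
print(abs(z * R[mu, mu] + 1 / denom))                # (7.29)
print(np.max(np.abs(G @ X_col + z * R[mu, mu] * (Gm @ X_col))))  # first identity of (7.30)
```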

Throughout the following we shall make use of the fundamental error parameter

$$\begin{aligned} \Psi (z) \mathrel {\mathop :}=\sqrt{\frac{\hbox {Im }m_\phi (z)}{N \eta }} + \frac{1}{N \eta }, \end{aligned}$$
(7.31)

which is analogous to the right-hand side of (3.9) and will play a similar role. We record the following estimate, which is analogous to Theorem 3.2.

Lemma 7.4

Under the assumptions of Theorem 3.2 we have\(,\) for \(z \in \mathbf{{S}},\)

$$\begin{aligned} \bigl |(GX )_{\mathbf{{w}} \mu } \bigr | \prec \phi ^{-1/4} \Psi \end{aligned}$$
(7.32)

and

$$\begin{aligned} \bigl |(X^*GX)_{\mu \nu } - \delta _{\mu \nu }(1+zm_{\phi }) \bigr | \prec \phi ^{1/2}\Psi . \end{aligned}$$
(7.33)

Proof

This result is a generalization of (5.22) in [36]. The key identity is (7.30). Since \(G^{[\mu ]}\) is independent of \((X_{i \mu })_{i = 1}^M\), we may apply the large deviation estimate [10, Lemma 3.1] to \(G^{[\mu ]}X_{[\mu ]}\). Moreover, \(|R_{\mu \mu } | \prec 1\), as follows from Theorem 3.2 applied to \(X^*\), and Lemma 3.6. Thus we get

$$\begin{aligned} |(GX )_{\mathbf{{w}} \mu } |&\prec \phi ^{1/2} \left( {(M N)^{-1/2} \sum _{i = 1}^M |G_{\mathbf{{w}} i}^{[\mu ]} |^2}\right) ^{1/2} = \phi ^{1/4} \left( {\frac{1}{N \eta } \hbox {Im }G_{\mathbf{{w}} \mathbf{{w}}}^{[\mu ]}}\right) ^{1/2} \nonumber \\&\prec \phi ^{1/4} \left( {\frac{1}{N \eta } \left( {\hbox {Im }m_{\phi ^{-1}} + \phi ^{-1} \Psi }\right) }\right) ^{1/2} \leqslant C \phi ^{-1/4} \Psi , \end{aligned}$$

where the second step follows by spectral decomposition, the third step from Theorem 3.2 applied to \(X^{[\mu ]}\) as well as (3.22), and the last step by definition of \(\Psi \). This concludes the proof of (7.32).

Finally, (7.33) follows easily from Theorem 3.2 applied to the identity \(X^*G X = 1 + zR\). \(\square \)
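To spell out this last step: componentwise, the identity \(X^* G X = 1 + zR\) gives

$$\begin{aligned} (X^*GX)_{\mu \nu } - \delta _{\mu \nu }(1+zm_{\phi }) = z \left( {R_{\mu \nu } - \delta _{\mu \nu } m_{\phi }}\right) , \end{aligned}$$

so that (7.33) reduces to the entrywise local law for the resolvent \(R\), i.e. to Theorem 3.2 applied to \(X^*\).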

After these preparations, we continue the proof of (7.27). We first expand the difference between \(S\) and \(T\) in terms of \(V\) [see (7.25)]. We use the resolvent expansion: for any \(m \in \mathbb {N}\) we have

$$\begin{aligned} S&= T + \sum _{k=1}^m (-1)^{k}[T( \bar{X}U^*+U \bar{X}^* + U U^* )]^kT \nonumber \\&+ (-1)^{m+1} [T( \bar{X}U^*+U \bar{X}^*+ U U^* )]^{m+1} S \end{aligned}$$
(7.34)

and

$$\begin{aligned} T = S + \sum _{k=1}^m [S( XU^*\!+\!UX^* \!+\! U U^* )]^kS \!+\! [S( XU^*+UX^*\!+\! U U^* )]^{m+1} T.\qquad \end{aligned}$$
(7.35)

Note that Theorem 3.2 and Lemma 7.4 immediately yield for \(z \in \mathbf{{S}}\)

$$\begin{aligned}&\bigl |S_{\mathbf{{v}}\mathbf{{w}}}-\langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle m_{\phi ^{-1}} \bigr | \prec \phi ^{-1}\Psi , \qquad |(SX_\gamma )_{\mathbf{{v}}i} | \prec \phi ^{-1/4}\Psi , \\&\quad \bigl |(X_\gamma ^* SX_\gamma )_{\mu \nu } - \delta _{\mu \nu }(1+zm_{\phi }) \bigr | \prec \phi ^{1/2}\Psi \end{aligned}$$

We may extend these estimates to analogous ones with \(\bar{X}\) and \(T\) in place of \(X_\gamma \) and \(S\). Indeed, using the facts \(\Vert R \Vert \leqslant \eta ^{-1}\), \(\Psi \geqslant N^{-1/2}\), and \(|U_{b \beta } | \prec \phi ^{-1/4}N^{-1/2}\) (which are easily derived from the definitions of the objects on the left-hand sides) combined with (7.35), we get the following result.

Lemma 7.5

For \(A\in \{S,T\}\) and \(B\in \{X_\gamma , \bar{X}\}\) we have

$$\begin{aligned}&\bigl |A _{\mathbf{{v}}\mathbf{{w}}}-\langle {\mathbf{{v}}} , {\mathbf{{w}}}\rangle m_{\phi ^{-1}} \bigr | \prec \phi ^{-1}\Psi , \qquad |( A B)_{\mathbf{{v}}i} | \prec \phi ^{-1/4}\Psi ,\\&\bigl |(B^* A B )_{\mu \nu } - \delta _{\mu \nu }(1+zm_{\phi }(z)) \bigr | \prec \phi ^{1/2}\Psi . \end{aligned}$$

The final tool that we shall need is the following lemma, which collects basic algebraic properties of stochastic domination \(\prec \). We shall use them tacitly throughout the following. Their proof is an elementary exercise using union bounds and Cauchy–Schwarz. See [10, Lemma 3.2] for a more general statement.

Lemma 7.6

  1. (i)

    Suppose that \(A(v) \prec B(v)\) uniformly in \(v \in V\). If \(|V | \leqslant N^C\) for some constant \(C\) then \(\sum _{v \in V} A(v) \prec \sum _{v \in V} B(v)\).

  2. (ii)

    Suppose that \(A_{1} \prec B_{1}\) and \(A_{2} \prec B_{2}\). Then \(A_{1} A_{2} \prec B_{1} B_{2}\).

  3. (iii)

    Suppose that \(\Psi \geqslant N^{-C}\) is deterministic and \(A\) is a nonnegative random variable satisfying \(\mathbb {E}A^2 \leqslant N^{C}\). Then \(A \prec \Psi \) implies that \(\mathbb {E}A \prec \Psi \).

If the above random variables depend on an additional parameter \(u\) and all hypotheses are uniform in \(u\) then so are the conclusions.
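To illustrate the union bound behind (i): by the definition of \(\prec \) [see (2.1)], for any \(\varepsilon , D > 0\) and \(N\) large enough we have \(\mathbb {P}[A(v) > N^{\varepsilon } B(v)] \leqslant N^{-D-C}\) uniformly in \(v \in V\), so that

$$\begin{aligned} \mathbb {P}\biggl [{\sum _{v \in V} A(v) > N^{\varepsilon } \sum _{v \in V} B(v)}\biggr ] \leqslant \sum _{v \in V} \mathbb {P}\bigl [{A(v) > N^{\varepsilon } B(v)}\bigr ] \leqslant N^{C} \cdot N^{-D-C} = N^{-D}. \end{aligned}$$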

7.2 Proof of Lemma 7.3 II: the main expansion

Lemma 7.5 contains the a-priori estimates needed to control the resolvent expansion (7.34). The precise form that we shall need is contained in the following lemma, which is our main expansion. Define the control parameter

$$\begin{aligned} \Psi _b(z) \mathrel {\mathop :}=\phi ^{1/2} |w(b) | + \Psi (z), \end{aligned}$$

where we recall the notation \(\mathbf{{w}} = (w(i))_{i = 1}^M\) for the components of \(\mathbf{{w}}\).

Lemma 7.7

(Resolvent expansion of \(x(E)\) and \(y(E))\) The following results hold for \(E \in I\). \((\)Recall the definition (7.20). For brevity\(,\) we omit \(E\) from our notation\(.)\)

  1. (i)

    We have the expansion

    $$\begin{aligned} x_S - x_T = \sum _{l=1}^3 x_l \, U_{b\beta }^l + O_\prec \left( {\phi ^{-1} N^{-1} (\phi ^{1/4} |w(b) | + \Psi )^2}\right) , \end{aligned}$$
    (7.36)

where \(x_l\) is a polynomial\(,\) with a constant number of terms\(,\) in the variables

    $$\begin{aligned} \bigl \{{T_{bb}, T_{\mathbf{{w}} b}, T_{b \mathbf{{w}}}, (T\bar{X})_{\mathbf{{w}} \beta }, (\bar{X}^*T)_{\beta \mathbf{{w}}}, (\bar{X}^*T\bar{X})_{\beta \beta }}\bigr \}. \end{aligned}$$

    In each term of \(x_l,\) the index \(\mathbf{{w}}\) appears exactly twice\(,\) while the indices \(b\) and \(\beta \) each appear exactly \(l\) times.

    Moreover\(,\) we have the estimates

    $$\begin{aligned} |x_1 | + |x_3 | \prec \phi ^{-1/4}N \Psi \Psi _b, \qquad |x_2 | \prec \phi ^{-1/2}N \Psi _b^2+N\Psi ^2, \end{aligned}$$
    (7.37)

    where the spectral parameter on the right-hand side is \(z = E + \mathrm {i}\eta \).

  2. (ii)

    We have the expansion

    $$\begin{aligned} \hbox {Tr }S - \hbox {Tr }T = \sum _{l=1}^3 J_l U_{b\beta }^l + O_\prec \left( {\phi ^{-1} N^{-1} \Psi ^2}\right) , \end{aligned}$$
    (7.38)

where \(J_l\) is a polynomial\(,\) with a constant number of terms\(,\) in the variables

    $$\begin{aligned} \bigl \{{T_{bb}, (T^2)_{bb}, (T^2 \bar{X})_{b\beta }, (\bar{X}^*T^2)_{\beta b}, (\bar{X}^*T\bar{X})_{\beta \beta }, (\bar{X}^*T^2\bar{X})_{\beta \beta }}\bigr \}. \end{aligned}$$

    In each term of \(J_l,T^2\) appears exactly once\(,\) while the indices \(b\) and \(\beta \) each appear exactly \(l\) times.

    Moreover\(,\) for \(z \in \mathbf{{S}}\) we have the estimates

    $$\begin{aligned} |J_1 | + |J_3 | \prec \phi ^{-1/4}N\Psi ^2, \qquad |J_2 | \prec N \Psi ^2. \end{aligned}$$
    (7.39)
  3. (iii)

    Defining

    $$\begin{aligned}&y_l \mathrel {\mathop :}=\frac{1}{2\pi }\int _{\mathbb {R}^2} J_{l} \, \Big ( \mathrm {i}\sigma f_E''(e) g(\sigma ) \, \mathbf{{1}} \left( {|\sigma | > \tilde{\eta }N^{-d \varepsilon }}\right) \\&\quad \qquad +\mathrm {i}f_E(e) g'(\sigma )- \sigma f_E'(e)g'(\sigma )\Big )\,\mathrm {d}e \, \mathrm {d}\sigma , \end{aligned}$$

    we have the expansion

    $$\begin{aligned} y_S - y_T = \sum _{l=1}^3 y_{l} U_{b\beta }^l+ O_\prec \left( {N^{C \varepsilon } \phi ^{-1} N^{-2} \kappa _a^{1/2}}\right) \end{aligned}$$
    (7.40)

    together with the bounds

    $$\begin{aligned} |y_1 | + |y_3 | \prec \phi ^{-1/4}N^{C\varepsilon }\kappa _a^{1/2}, \qquad |y_2 | \prec N^{C\varepsilon }\kappa _a^{1/2}. \end{aligned}$$
    (7.41)

    Here all constants \(C\) depend on the fixed parameter \(d\).

Proof

The proof is an application of the resolvent expansion (7.34) with \(m = 3\) to the definitions of \(x\) and \(y\).

We begin with part (i). The expansion (7.36) is obtained by expanding the resolvent \(S_{\mathbf{{w}} \mathbf{{w}}}\) in the definition of \(x_S\) using (7.34) with \(m = 3\). The terms are regrouped according to the power, \(l\), of \(U_{b \beta }\). The error term of (7.36) contains all terms with \(l \geqslant 4\). It is a simple matter to check that the polynomials \(x_l\), for \(l = 1,2,3\), have the claimed algebraic properties. In order to establish the claimed bounds on the terms of the expansion, we use Lemma 7.5 to derive the estimates

$$\begin{aligned}&|T_{\mathbf{{w}} b} | \prec \phi ^{-1} \Psi _b, \qquad |(T\bar{X})_{\mathbf{{w}} \beta } | \prec \phi ^{-1/4}\Psi , \qquad |T_{bb} | \prec \phi ^{-1/2},\nonumber \\&\quad |(\bar{X}^*T\bar{X})_{\beta \beta } | \prec \phi ^{1/2}, \end{aligned}$$
(7.42)

and the same estimates hold if \(T\) is replaced by \(S\). Note that in (7.42) we used the bound \(|m_{\phi ^{-1}} | \asymp \phi ^{-1/2}\), which follows from the identity

$$\begin{aligned} m_{\phi ^{-1}}(z) = \frac{1}{\phi } \left( {m_\phi (z) + \frac{1 - \phi }{z}}\right) \end{aligned}$$

and Lemma 3.6. Using (7.42), it is not hard to conclude the proof of part (i).

Part (ii) is proved in the same way as part (i), simply by setting \(\mathbf{{w}} = \mathbf{{e}}_i\) and summing over \(i = 1,\ldots , M\).

What remains is to prove the bounds in part (iii). To that end, we integrate by parts, first in \(e\) and then in \(\sigma \), in the term containing \( f_E''(e)\), and obtain

$$\begin{aligned}&\int _{\mathbb {R}^2} \mathrm {i}\sigma f_E''(e) g (\sigma ) J_l(e + \mathrm {i}\sigma )\, \mathbf{{1}} (|\sigma | > \tilde{\eta }_d) \,\mathrm {d}e \, \mathrm {d}\sigma \\&\quad =\sum _{\pm } \mp \int \tilde{\eta }_d f_E'(e) g(\pm \tilde{\eta }_d) J_l(e \pm \mathrm {i}\tilde{\eta }_d)\,\mathrm {d}e \\&\qquad + \int _{\mathbb {R}^2} \left( {\sigma g' (\sigma ) + g(\sigma )}\right) f_E'(e) J_l(e + \mathrm {i}\sigma )\, \mathbf{{1}} (|\sigma | > \tilde{\eta }_d) \,\mathrm {d}e \, \mathrm {d}\sigma , \end{aligned}$$

where we abbreviated \(\tilde{\eta }_d \mathrel {\mathop :}=\tilde{\eta }N^{-d \varepsilon }\). Thus we get the bound

$$\begin{aligned} |y_l(E) |&\leqslant \int \mathrm {d}e \, \tilde{\eta }_d | f_E'(e) | \bigl |J_l(e + \mathrm {i}\tilde{\eta }_d) \bigr | \nonumber \\&+ \int \mathrm {d}e \, \mathrm {d}\sigma \, \left( {|f_E(e) g'(\sigma ) | + |\sigma f_E'(e)g'(\sigma ) |}\right) \bigl |J_l(e + \mathrm {i}\sigma ) \bigr | \nonumber \\&+\int \mathrm {d}e \int _{\tilde{\eta }_d}^\infty \mathrm {d}\sigma \, \left( {\sigma |g' (\sigma ) | + g(\sigma )}\right) |f_E'(e) | \bigl |J_l(e + \mathrm {i}\sigma ) \bigr |. \end{aligned}$$
(7.43)

Using (7.43), the conclusion of the proof of part (iii) follows by a careful estimate of each term on the right-hand side, using part (ii) as input. The ingredients are the definitions of \(\tilde{\eta }_d\) and \(\kappa _a\), as well as the estimate

$$\begin{aligned} \Psi ^2(z) \leqslant C \frac{\sqrt{\kappa } + \sqrt{\eta }}{N \eta } + \frac{C}{N^2 \eta ^2}. \end{aligned}$$
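Indeed, by (7.31) and the elementary inequality \((a+b)^2 \leqslant 2a^2 + 2b^2\),

$$\begin{aligned} \Psi ^2(z) \leqslant \frac{2 \hbox {Im }m_\phi (z)}{N \eta } + \frac{2}{N^2 \eta ^2}, \end{aligned}$$

and \(\hbox {Im }m_\phi \leqslant C (\sqrt{\kappa } + \sqrt{\eta })\) by the standard square-root behaviour of the Marchenko–Pastur law near the spectral edge.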

The same argument yields the error bound in (7.40). This concludes the proof. \(\square \)

Armed with the expansion from Lemma 7.7, we may do a Taylor expansion of \(q\). To that end, we record the estimate \( \int _I |x_T(E) | \, \mathrm {d}E \prec N^{C \varepsilon }\), as follows from Lemma 7.5. Hence using Lemma 7.7 and expanding \(q(y_S(E))\) around \(q(y_T(E))\) with a fourth-order remainder term, we get

$$\begin{aligned}&\int _{I} x_S(E) \, q \left( {y_S(E) }\right) \, \mathrm {d}E - \int _{I} x_T(E) \, q \left( {y_T(E) }\right) \, \mathrm {d}E \nonumber \\&\quad =\; \sum _{\mathbf{{l}} \in {\mathcal {L}}} A_{\mathbf{{l}}} \, U_{b\beta }^{|\mathbf{{l}} |} +O_\prec \left( {\phi ^{-1} N^{-2+C\varepsilon }\kappa _a^{1/2} + \phi ^{-1/2}N^{-1+C\varepsilon } \Delta _a |w(b) |^2}\right) ,\quad \end{aligned}$$
(7.44)

where we defined

$$\begin{aligned} {\mathcal {L}}&\mathrel {\mathop :}= \bigl \{{\mathbf{{l}} = (l_0,\ldots , l_m) \in [\![{0,3}]\!] \times [\![{1,3}]\!]^m \mathrel {\mathop :}m \in \mathbb {N},\, 1 \leqslant |\mathbf{{l}} | \leqslant 3}\bigr \},\\&\quad |\mathbf{{l}} | \mathrel {\mathop :}=\sum _{i = 0}^m l_i, \end{aligned}$$

as well as the polynomial

$$\begin{aligned} A_{\mathbf{{l}}} \mathrel {\mathop :}=\int _I \frac{q^{(m)}(y_T)}{m!} x_{l_0} y_{l_1} \cdots y_{l_m} \, \mathrm {d}E, \end{aligned}$$
(7.45)

where we abbreviated \(m \equiv m(\mathbf{{l}})\). Here we use the convention that \(x_0 \mathrel {\mathop :}=x_T\). Note that \({\mathcal {L}}\) is a finite set (it has 14 elements), and for each \(\mathbf{{l}} \in {\mathcal {L}}\) the polynomial \(A_{\mathbf{{l}}}\) is independent of \(U_{b \beta }\). In the estimate of the error term on the right-hand side of (7.44) we also used the fact that for \(E\in I\) we have \(\Psi (E+\mathrm {i}\eta ) \leqslant N^{C\varepsilon }\kappa _a^{1/2}\).
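Explicitly, the 14 elements of \({\mathcal {L}}\), ordered by \(m \equiv m(\mathbf{{l}})\), are

$$\begin{aligned} m = 0&: \quad (1), \, (2), \, (3), \\ m = 1&: \quad (0,1), \, (0,2), \, (0,3), \, (1,1), \, (1,2), \, (2,1), \\ m = 2&: \quad (0,1,1), \, (0,1,2), \, (0,2,1), \, (1,1,1), \\ m = 3&: \quad (0,1,1,1). \end{aligned}$$

Note that \(A_{(0,1,2)} = A_{(0,2,1)}\), since the product \(y_{l_1} \cdots y_{l_m}\) in (7.45) is symmetric in \(l_1, \ldots , l_m\).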

Next, using Lemma 7.7 and \(N \Delta _a \asymp \kappa _a^{-1/2}\), we find

$$\begin{aligned} |A_{\mathbf{{l}}} | \prec N^{C \varepsilon } {\left\{ \begin{array}{ll} \phi ^{-1/4} \left( {\kappa _a^{1/2} + \phi ^{1/2} |w(b) |}\right) &{} \text {if } |\mathbf{{l}} | = 1\\ \kappa _a^{1/2} + \kappa _a^{-1/2} \phi ^{1/2} |w(b) |^2 &{} \text {if } |\mathbf{{l}} | = 2\\ \phi ^{-1/4} \left( {\kappa _a^{1/2} + \phi ^{1/2} |w(b) | + \phi |w(b) |^2}\right) &{} \text {if } |\mathbf{{l}} | = 3. \end{array}\right. } \end{aligned}$$
(7.46)

Using (7.44) and (7.46), we may do a Taylor expansion of \(h\) on the left-hand side of (7.27). This yields

$$\begin{aligned}&h \left[ \int _{I} x_S \, q \left( {y_S }\right) \, \mathrm {d}E \right] - h \left[ \int _{I} x_T \, q \left( {y_T }\right) \, \mathrm {d}E \right] = \sum _{k=1}^3 \frac{1}{k!}h^{(k)}(A_{\mathbf{{0}}}) \left( {\sum _{\mathbf{{l}} \in {\mathcal {L}}} A_{\mathbf{{l}}} U_{b\beta }^{|\mathbf{{l}} |}}\right) ^{k} \nonumber \\&\quad +O_\prec \left( {\phi ^{-1} N^{-2+C\varepsilon }\kappa _a^{1/2} + N^{ -2+C\varepsilon } \kappa _a^{-1/2} |w(b) |^2 }\right) , \end{aligned}$$
(7.47)

where we abbreviated \(A_{\mathbf{{0}}} \mathrel {\mathop :}=\int _I x_0 \, \mathrm {d}E\). Since \(a \leqslant N^{1 - \tau }\), it is easy to see that, by choosing \(\varepsilon \) small enough depending on \(\tau \), the error term in (7.47) is bounded by \(N^{-c} (\phi ^{-1} N^{-2} + N^{-1} |w(b) |^2)\) for some positive constant \(c\). Taking the expectation and recalling that \(|U_{b \beta } | \prec \phi ^{-1/4} N^{-1/2}\), we therefore get

$$\begin{aligned}&\mathbb {E}h \left[ \int _{I} x_S \, q \left( {y_S }\right) \, \mathrm {d}E \right] - \mathbb {E}h \left[ \int _{I} x_T \, q \left( {y_T }\right) \, \mathrm {d}E \right] \nonumber \\&\quad =\mathbb {E}{\mathcal {A}} +O_\prec \left( {N^{-c} \left( {\phi ^{-1} N^{-1} + N^{-1} |w(b) |^2}\right) }\right) \nonumber \\&\qquad + \mathbb {E}U_{b \beta }^3 \, \mathbb {E}\sum _{k=1}^3 \frac{1}{k!}h^{(k)}(A_{\mathbf{{0}}}) \sum _{\mathbf{{l}}_1,\ldots , \mathbf{{l}}_{k} \in {\mathcal {L}}} \mathbf{{1}} \left( {\sum _{i = 1}^{k} |\mathbf{{l}}_i | = 3}\right) \prod _{i = 1}^{k} A_{\mathbf{{l}}_i} , \end{aligned}$$
(7.48)

where \(\mathbb {E}{\mathcal {A}}\) is as described after (7.27), i.e. it depends on the random variable \(U_{b \beta }\) only through its first two moments.

At this point we note that if we make the stronger assumption that the first three moments of \(X^{(1)}\) and \(X^{(2)}\) match (which, in the ultimate application to the proof of Proposition 6.6, means that \(\mathbb {E}X_{i \mu }^3 = 0\)), the proof is now complete. Indeed, in that case we may allow \({\mathcal {A}}\) to be a polynomial of degree three in \(U_{b \beta }\) with \(\bar{X}\)-measurable coefficients, and we may absorb the last line of (7.48) into \(\mathbb {E}{\mathcal {A}}\). This completes the proof of (7.27), and hence of Lemma 7.3, for the special case that the third moments of \(X^{(1)}\) and \(X^{(2)}\) match.

For the general case, we still have to estimate the last line of (7.48). The terms that we need to analyse are

$$\begin{aligned}&h^{(3)}(A_{\mathbf{{0}}})\, A_{(1)}^m A_{(0,1)}^n \qquad (m+n=3), \end{aligned}$$
(7.49a)
$$\begin{aligned}&h^{(2)}(A_{\mathbf{{0}}})\, \left( {A_{(2)} + A_{(0,2)} + A_{(1,1)} + A_{(0,1,1)}}\right) \left( {A_{(1)} + A_{(0,1)}}\right) , \end{aligned}$$
(7.49b)
$$\begin{aligned}&h^{(1)}(A_{\mathbf{{0}}})\, \left( {A_{(3)}+A_{(0,3)}+ A_{(1,2)} + A_{(2,1)} + A_{(0,1,2)} +A_{(1,1,1)}+A_{(0,1,1,1)}}\right) .\nonumber \\ \end{aligned}$$
(7.49c)

These terms are dealt with in the following lemma.

Lemma 7.8

Let \(Y\) denote any term of (7.49). Then there is a constant \(c > 0\) such that

$$\begin{aligned} |\mathbb {E}Y | \leqslant N^{-c}\left( {\phi ^{-1/4}N^{-1/2} +\phi ^{3/4} |w(b) |^2}\right) . \end{aligned}$$
(7.50)

Plugging the estimate of Lemma 7.8 into (7.48), and recalling that \(\mathbb {E}U_{b \beta }^3 \leqslant C \phi ^{-3/4} N^{-3/2}\), it is easy to complete the proof of (7.27), and hence of Lemma 7.3. Lemma 7.8 is proved in the next subsection.

7.3 Proof of Lemma 7.3 III: the terms of order three and proof of Lemma 7.8

Recall that we assume \(\phi \geqslant 1\), i.e. \(K = N\); the case \(\phi \leqslant 1\) is dealt with analogously, and we omit the details.

We first remark that using the bounds (7.46) we find

$$\begin{aligned} |\mathbb {E}Y | \leqslant N^{C \varepsilon } \left( {\kappa _a^{-1/2} \phi ^{3/4} |w(b) |^2 + \phi ^{1/4} |w(b) | + \phi ^{-1/4} \kappa _a^{1/2}}\right) . \end{aligned}$$
(7.51)

Comparing this to (7.50), we see that we need to gain an additional factor \(N^{-1/2}\). How to do so is the content of this subsection.

The basic idea behind the additional factor \(N^{-1/2}\) is that the expectation \(\mathbb {E}Y\) is smaller than the typical size \(\sqrt{\mathbb {E}|Y |^2}\) of \(Y\) by a factor \(N^{-1/2}\). This is a rather general property of random variables which can be written, up to a negligible error term, as a polynomial of odd degree in the entries \(\{\bar{X}_{i \beta }\}_{i = 1}^M\). A systematic representation of a large family of random variables in terms of polynomials was first given in [18], and was combined with a parity argument in [10]. Subsequently, an analogous parity argument for more singular functions was developed in [44]. Following [44], we refer to the process of representing a random variable \(Y\) as a polynomial in \(\{\bar{X}_{i \beta }\}_{i = 1}^M\) up to a negligible error term as the polynomialization of \(Y\).

We shall develop a new approach to the polynomialization of the variables (7.49). The main reason is that these variables have a complicated algebraic structure, which needs to be combined with the Helffer–Sjöstrand representation (7.23). These difficulties lead us to define a family of graded polynomials (given in Definitions 7.10–7.12), which is general enough to cover the polynomialization of all terms from (7.49) and imposes conditions on the coefficients that ensure the gain of \(N^{-1/2}\). The basic structure behind these polynomials is a classification based on the \(\ell ^2\)- and \(\ell ^3\)-norms of their coefficients.

Let us outline the rough idea of the parity argument. We use the notations \(\bar{X} = \bar{X}_{[\beta ]} + \bar{X}^{[\beta ]}\) and \(T^{[\beta ]}(z) \mathrel {\mathop :}=(\bar{X}^{[\beta ]} (\bar{X}^{[\beta ]})^* - z)^{-1}\), in analogy to those introduced before (7.28). A simple example of a polynomial is

$$\begin{aligned} {\mathcal {P}}_2 = (\bar{X}^* T^{[\beta ]} \bar{X})_{\beta \beta } = \sum _{i,j} T^{[\beta ]}_{ij} \bar{X}_{i \beta } \bar{X}_{j \beta }. \end{aligned}$$

This is a polynomial of degree two. Note that the coefficients \(T^{[\beta ]}_{ij}\) are \(\bar{X}^{[\beta ]}\)-measurable, i.e. independent of \(\bar{X}_{[\beta ]}\). It is not hard to see that \(\mathbb {E}{\mathcal {P}}_2\) is of the same order as \(\sqrt{\mathbb {E}|{\mathcal {P}}_2 |^2}\), so that taking the expectation of \({\mathcal {P}}_2\) does not yield better bounds. The situation changes drastically if the polynomial has odd degree. Consider for instance the polynomial

$$\begin{aligned} {\mathcal {P}}_3 \mathrel {\mathop :}=(\bar{X}^* T^{[\beta ]} \bar{X})_{\beta \beta } (T^{[\beta ]} \bar{X})_{\mathbf{{w}} \beta } = \sum _{i,j,k} T^{[\beta ]}_{ij} T^{[\beta ]}_{\mathbf{{w}} k} \bar{X}_{i \beta } \bar{X}_{j \beta } \bar{X}_{k \beta }. \end{aligned}$$

Now we have \(|\mathbb {E}{\mathcal {P}}_3 | \lesssim N^{-1/2} \sqrt{\mathbb {E}|{\mathcal {P}}_3 |^2}\). The reason for this gain of a factor \(N^{-1/2}\) is clear: taking the expectation forces all three summation indices \(i,j,k\) to coincide.
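To make the mechanism explicit: since the coefficients are \(\bar{X}^{[\beta ]}\)-measurable and the entries \(\bar{X}_{i \beta }\) are independent with mean zero, the expectation \(\mathbb {E}[\bar{X}_{i \beta } \bar{X}_{j \beta } \bar{X}_{k \beta }]\) vanishes unless each of the three indices coincides with another one, which forces \(i = j = k\). Hence

$$\begin{aligned} \mathbb {E}{\mathcal {P}}_3 = \sum _i \mathbb {E}\bigl [{T^{[\beta ]}_{ii} T^{[\beta ]}_{\mathbf{{w}} i}}\bigr ] \, \mathbb {E}\bar{X}_{i \beta }^3, \end{aligned}$$

and only a single summation survives.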

In the following we define a large family of \(\mathbb {Z}_2\)-graded polynomials that is sufficiently general to cover the polynomializations of the terms (7.49). We shall introduce a notation \(O_{\prec , *}(A)\), which generalizes the notation \(O_{\prec }(A)\) from Definition 2.1; here \(* \in \{{\mathrm{{even}}, \mathrm{{odd}}}\}\) denotes the parity of the polynomial, and \(A\) its size. We always have the trivial bound \(O_{\prec , *}(A) = O_{\prec }(A)\). In addition, we roughly have the estimates

$$\begin{aligned} \mathbb {E}O_{\prec , \mathrm{{even}}}(A) \!\lesssim \! A, \qquad \mathbb {E}O_{\prec , \mathrm{{odd}}}(A) \!\lesssim \! N^{-1/2} A. \end{aligned}$$

The need to gain an additional factor \(N^{-1/2}\) from odd polynomials imposes nontrivial constraints on the polynomial coefficients, which are carefully stated in Definitions 7.10–7.12; they have been tailored to the class of polynomials generated by the terms (7.49).

We now move on to the proof of Lemma 7.8. We recall that we assume throughout that \(\phi \geqslant 1\). We first introduce a family of graded polynomials suitable for our purposes. It depends on a constant \(C_0\), which we shall choose during the proof to be some large but fixed number.

Definition 7.9

(Admissible weights) Let \(\varrho = (\varrho _i \mathrel {\mathop :}i \in [\![{1, \phi N}]\!])\) be a family of deterministic nonnegative weights. We say that \(\varrho \) is an admissible weight if

$$\begin{aligned} \frac{1}{N^{1/2} \phi ^{1/4}} \left( {\sum _i \varrho _i^2}\right) ^{1/2} \!\leqslant \! 1, \qquad \frac{1}{N^{1/2} \phi ^{1/4}} \left( {\sum _i \varrho _i^3}\right) ^{1/3} \!\leqslant \! N^{-1/6}. \end{aligned}$$
(7.52)
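Two simple examples may help orient the reader (recall \(\phi \geqslant 1\); these examples are for illustration only and are not used below). The constant weight \(\varrho _i \equiv \phi ^{-1/4}\) is admissible, since

$$\begin{aligned} \frac{\phi ^{-1/4} (\phi N)^{1/2}}{N^{1/2} \phi ^{1/4}} = 1, \qquad \frac{\phi ^{-1/4} (\phi N)^{1/3}}{N^{1/2} \phi ^{1/4}} = \phi ^{-1/6} N^{-1/6} \leqslant N^{-1/6}. \end{aligned}$$

A weight concentrated on a single index, \(\varrho _i = A \, \delta _{ij}\) for some fixed \(j\), is admissible precisely when \(A \leqslant N^{1/3} \phi ^{1/4}\); here the \(\ell ^3\)-condition in (7.52) is the binding one, which illustrates how (7.52) rules out excessively concentrated coefficients.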

Definition 7.10

(\(O_{\prec , d}(\cdot )\)) For a given degree \(d \in \mathbb {N}\) let

$$\begin{aligned} {\mathcal {P}} \!=\! \sum _{i_1,\ldots , i_d = 1}^{\phi N} V_{i_1 \cdots i_d} \bar{X}_{i_1 \beta } \cdots \bar{X}_{i_d \beta } \end{aligned}$$
(7.53)

be a polynomial in \(\bar{X}\). Analogously to the notation \(O_\prec (\cdot )\) introduced in Definition 2.1, we write \({\mathcal {P}} = O_{\prec , d}(A)\) if the following conditions are satisfied.

  1. (i)

    \(A\) is deterministic and \(V_{i_1 \cdots i_d}\) is \(\bar{X}^{[\beta ]}\)-measurable.

  2. (ii)

    There exist admissible weights \(\varrho ^{(1)},\ldots , \varrho ^{(d)}\) such that

    $$\begin{aligned} |V_{i_1 \cdots i_d} | \prec A \, \varrho ^{(1)}_{i_1} \cdots \varrho ^{(d)}_{i_d}. \end{aligned}$$
    (7.54)
  3. (iii)

    We have the deterministic bound \(|V_{i_1 \cdots i_d} | \leqslant N^{C_0}\).

The above definition extends trivially to the case \(d = 0\), where \({\mathcal {P}} \!=\! V\) is \(\bar{X}^{[\beta ]}\)-measurable.

Definition 7.11

(\(O_{\prec , \diamond }(\cdot )\)) Let \({\mathcal {P}}\) be a polynomial of the form

$$\begin{aligned} {\mathcal {P}} = \sum _{i = 1}^{\phi N} V_i \left( {\bar{X}_{i \beta }^2 - \frac{1}{N \phi ^{1/2}}}\right) . \end{aligned}$$
(7.55)

We write \({\mathcal {P}} = O_{\prec , \diamond }(A)\) if \(V_i\) is \(\bar{X}^{[\beta ]}\)-measurable, \(|V_i | \leqslant N^{C_0}\), and \(|V_i | \prec A\) for some deterministic \(A\).

Definition 7.12

(Graded polynomials) We write \({\mathcal {P}} = O_{\prec , \mathrm{{even}}}(A)\) if \({\mathcal {P}}\) is a sum of at most \(C_0\) terms of the form

$$\begin{aligned} A {\mathcal {P}}_0 \prod _{s = 1}^m {\mathcal {P}}_s, \qquad {\mathcal {P}}_0 = O_{\prec , 2n}(1) , \qquad {\mathcal {P}}_s = O_{\prec , \diamond }(1), \end{aligned}$$

where \(n,m \leqslant C_0\) and \(A\) is deterministic.

Moreover, we write \({\mathcal {P}} = O_{\prec , \mathrm{{odd}}}(A)\) if \({\mathcal {P}} = \widehat{{\mathcal {P}}}\, {\mathcal {P}}_{\mathrm{{even}}}\), where \(\widehat{{\mathcal {P}}} = O_{\prec , 1}(1)\) and \({\mathcal {P}}_{\mathrm{{even}}} = O_{\prec , \mathrm{{even}}}(A)\).

Definitions 7.10–7.12 refine Definition 2.1 in the sense that

$$\begin{aligned} {\mathcal {P}} = O_{\prec , d}(A) \quad \text {or} \quad {\mathcal {P}} = O_{\prec , \diamond }(A) \Longrightarrow {\mathcal {P}} = O_{\prec }(A). \end{aligned}$$
(7.56)

Indeed, let \({\mathcal {P}} = O_{\prec , d}(A)\) be of the form (7.53). Then a simple large deviation estimate (e.g. a trivial extension of [19, Theorem B.1(iii)]) yields

$$\begin{aligned} |{\mathcal {P}} | \prec \left( {\left( {N \phi ^{1/2}}\right) ^{-d} \sum _{i_1,\ldots , i_d} |V_{i_1 \cdots i_d} |^2}\right) ^{1/2} \prec A, \end{aligned}$$

where the last step follows from the definition of admissible weights. Similarly, if \({\mathcal {P}} = O_{\prec , \diamond }(A)\) is of the form (7.55), a large deviation estimate (e.g. [19, Theorem B.1(i)]) yields

$$\begin{aligned} |{\mathcal {P}} | \prec \left( {N^{-2} \phi ^{-1} \sum _i |V_i |^2}\right) ^{1/2} \prec N^{-1/2} A \leqslant A. \end{aligned}$$

Note that terms of the form \(O_{\prec , \mathbf{{\cdot }}}(A)\) satisfy simple algebraic rules. For instance, we have

$$\begin{aligned} O_{\prec , \mathrm{{even}}}(A_1) + O_{\prec , \mathrm{{even}}}(A_2) = O_{\prec , \mathrm{{even}}}(A_1 + A_2), \end{aligned}$$

and

$$\begin{aligned} O_{\prec , \mathrm{{odd}}}(A_1) \, O_{\prec , \mathrm{{even}}}(A_2) = O_{\prec , \mathrm{{odd}}}(A_1 A_2) \end{aligned}$$

after possibly increasing \(C_0\). (As with the standard big O notation, such expressions are to be read from left to right.) We stress that such operations may be performed an arbitrary, but bounded, number of times. All of the following arguments involve at most \(C_0\) such algebraic operations on graded polynomials, provided \(C_0\) is chosen large enough.

The point of the graded polynomials is that bounds of the form (7.56) are improved if \(d\) is odd and we take the expectation. The precise statement is the following.

Lemma 7.13

Let \({\mathcal {P}} = O_{\prec , \mathrm{{odd}}}(A)\) for some deterministic \(A \leqslant N^C\). Then for any fixed \(D > 0\) we have

$$\begin{aligned} |\mathbb {E}{\mathcal {P}} | \prec N^{-1/2} A + N^{-D}. \end{aligned}$$

Proof

It suffices to set \(A = 1\) and consider \({\mathcal {P}} = \widehat{{\mathcal {P}}} {\mathcal {P}}_0 \prod _{s = 1}^{m}{\mathcal {P}}_s\), where \(\widehat{{\mathcal {P}}}\), \({\mathcal {P}}_0\), and \({\mathcal {P}}_s\) are as in Definition 7.12. By linearity, it suffices to consider

$$\begin{aligned} {\mathcal {P}} = \sum _{i_0} W_{i_0} \bar{X}_{i_0 \beta } \sum _{i_1,\ldots , i_d} V_{i_1 \cdots i_d} \bar{X}_{i_1 \beta } \cdots \bar{X}_{i_d \beta } \prod _{l = d+1}^{d+m} \left( {\sum _{i_l} V^{(l)}_{i_l} \left( {\bar{X}_{i_l \beta }^2 - \frac{1}{N \phi ^{1/2}}}\right) }\right) , \end{aligned}$$

where \(d = 2n\) is even. We suppose that \(|W_{i_0} | \prec \varrho ^{(0)}_{i_0}\), \(|V_{i_1 \cdots i_d} | \prec \varrho ^{(1)}_{i_1} \cdots \varrho ^{(d)}_{i_d}\), and \(|V^{(l)}_{i_l} | \prec 1\) for \(l = d+1,\ldots , d+m\). Here \(\varrho ^{(k)}_{i_k}\) denotes an admissible weight (see Definition 7.9). Thus we have

$$\begin{aligned} |\mathbb {E}{\mathcal {P}} | \prec \sum _{i_0,\ldots , i_{d+m}} \varrho _{i_0}^{(0)} \cdots \varrho _{i_d}^{(d)} \Biggl |\mathbb {E}\left( {\bar{X}_{i_0 \beta } \cdots \bar{X}_{i_d \beta } \prod _{l = d+1}^{d+m} \left( {\bar{X}_{i_l \beta }^2 - \frac{1}{N \phi ^{1/2}}}\right) }\right) \Biggr | + N^{-D}, \end{aligned}$$

where the term \(N^{-D}\) comes from the trivial deterministic bound \(|V_{i_1 \cdots i_d} | \leqslant N^C\) on the low-probability event of \(\prec \) [i.e. the event inside \(\mathbb {P}[\,\cdot \,]\) in (2.1)] in (7.54), and analogous bounds for the other \(\bar{X}^{[\beta ]}\)-measurable coefficients.

The expectation imposes that each summation index \(i_0,\ldots , i_{d+m}\) coincide with at least one other summation index. Thus we get

$$\begin{aligned} |\mathbb {E}{\mathcal {P}} | \prec \sum _{i_0,\ldots , i_{d+m}} \tilde{\varrho }^{(0)}_{i_0} \cdots \tilde{\varrho }^{(d)}_{i_d} I(i_0,\ldots , i_{d+m}) \frac{1}{(N \phi ^{1/2})^m} + N^{-D}, \end{aligned}$$
(7.57)

where the indicator function \(I(\cdot )\) imposes the condition that each summation index must coincide with at least one other, and we introduced the weight \(\tilde{\varrho }_i^{(k)} \mathrel {\mathop :}=N^{-1/2} \phi ^{-1/4} \varrho ^{(k)}_i\). Note that

$$\begin{aligned} \sum _i \tilde{\varrho }^{(k)}_i \leqslant N^{1/2} \phi ^{1/2} , \qquad \sum _i \left( {\tilde{\varrho }^{(k)}_i}\right) ^2 \leqslant 1 , \qquad \sum _i \left( {\tilde{\varrho }_i^{(k)}}\right) ^q \leqslant N^{-q/6} \quad (q \geqslant 3).\nonumber \\ \end{aligned}$$
(7.58)

Here for \(q > 3\) we used the inequality \(\Vert \tilde{\varrho }^{(k)} \Vert _{\ell ^q} \leqslant \Vert \tilde{\varrho }^{(k)} \Vert _{\ell ^p}\) for \(q \geqslant p\). The indicator function \(I\) on the right-hand side of (7.57) imposes a reduction in the number of independent summation indices. We may write \(I = \sum _{P} I_P\) as a sum over all partitions \(P\) of the set \([\![{0, d+m}]\!]\) with blocks of size at least two, whereby

$$\begin{aligned} I_P(i_0,\ldots , i_{d+m}) = \prod _{p \in P} \mathbf{{1}} (i_k = i_l \,\mathrm{{ for\,\, all }}\, k,l \in p). \end{aligned}$$

Hence the summation over \(i_0,\ldots , i_{d+m}\) factors into a product over the blocks of \(P\). We shall show that the contribution of each block is at most one, and that there is a block whose contribution is at most \(N^{-1/2}\).

Fix \(p \in P\) and denote by \(S_p\) the contribution of the block \(p\) to the summation in the main term of (7.57). Define \(s \mathrel {\mathop :}=|p \cap [\![{0,d}]\!] |\) and \(t \mathrel {\mathop :}=|p \cap [\![{d+1, d+m}]\!] |\). By definition of \(P\), we have \(s + t \geqslant 2\). By the inequality of arithmetic and geometric means, we have

$$\begin{aligned} S_p \leqslant \max _k \sum _{i} \left( {\tilde{\varrho }_i^{(k)}}\right) ^s \frac{1}{(N \phi ^{1/2})^t}. \end{aligned}$$

Using (7.58) it is easy to conclude that

$$\begin{aligned} S_p \leqslant {\left\{ \begin{array}{ll} 1 &{} \text {if } (s,t) = (2,0)\\ N^{-1/2} &{} \text {if } (s,t) \ne (2,0). \end{array}\right. } \end{aligned}$$
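In more detail, the cases are checked as follows (recall \(\phi \geqslant 1\) and \(s + t \geqslant 2\)): if \((s,t) = (2,0)\) then \(S_p \leqslant \sum _i (\tilde{\varrho }_i^{(k)})^2 \leqslant 1\); in all other cases,

$$\begin{aligned} s = 0, \; t \geqslant 2&: \quad S_p \leqslant \phi N \, (N \phi ^{1/2})^{-t} \leqslant N^{-1}, \\ s = 1, \; t \geqslant 1&: \quad S_p \leqslant N^{1/2} \phi ^{1/2} \, (N \phi ^{1/2})^{-t} \leqslant N^{-1/2}, \\ s = 2, \; t \geqslant 1&: \quad S_p \leqslant (N \phi ^{1/2})^{-t} \leqslant N^{-1/2}, \\ s \geqslant 3&: \quad S_p \leqslant N^{-s/6} \, (N \phi ^{1/2})^{-t} \leqslant N^{-1/2}, \end{aligned}$$

where we used (7.58).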

Moreover, since \(d\) is even, the number \(d+1\) of indices in \([\![{0,d}]\!]\) is odd, so these indices cannot all be matched in pairs among themselves; hence at least one block of \(P\) satisfies \((s,t) \ne (2,0)\).

Thus we find that

$$\begin{aligned} |\mathbb {E}{\mathcal {P}} | \prec \sum _{P} \prod _{p \in P} S_p + N^{-D} \leqslant C_{d+m} N^{-1/2} + N^{-D}. \end{aligned}$$

Since \(d+m \leqslant 2 C_0\), the proof is complete. \(\square \)

In order to apply Lemma 7.13 to the terms \(Y\) from (7.49), we need to expand \(Y\) in terms of graded polynomials. This expansion is summarized in the following result, which gives the polynomializations of the coefficients of the terms from (7.49). For an arbitrary unit vector \(\mathbf{{v}} \in \mathbb {R}^M\) we define the control parameter

$$\begin{aligned} \Psi ^{\mathbf{{v}}} \mathrel {\mathop :}=\Psi + (N^{-1} \Vert \mathbf{{v}} \Vert _{\infty })^{1/3} , \qquad \Vert \mathbf{{v}} \Vert _{\infty } \mathrel {\mathop :}=\max _i |v(i) |. \end{aligned}$$

Lemma 7.14

Fix \(D > 0\). Then there exists \(C_0 = C_0(D)\) such that for any unit vector \(\mathbf{{v}} \in \mathbb {R}^M\) we have

$$\begin{aligned} T_{\mathbf{{v}} \mathbf{{v}}}&= T_{\mathbf{{v}} \mathbf{{v}}}^{[\beta ]} + O_{\prec , \mathrm{{even}}}(\phi ^{-1} (\Psi ^{\mathbf{{v}}})^2) + O_\prec (N^{-D}), \end{aligned}$$
(7.59)
$$\begin{aligned} T_{bb}&= O_{\prec , \mathrm{{even}}}(\phi ^{-1/2}) + O_\prec (N^{-D}), \end{aligned}$$
(7.60)
$$\begin{aligned} T_{\mathbf{{w}} b}&= O_{\prec , \mathrm{{even}}}(\phi ^{-1} \Psi _b) + O_\prec (N^{-D}), \end{aligned}$$
(7.61)
$$\begin{aligned} (T \bar{X})_{\mathbf{{v}} \beta }&= O_{\prec , \mathrm{{odd}}}(\phi ^{-1/4} \Psi ^{\mathbf{{v}}}) + O_\prec (N^{-D}), \end{aligned}$$
(7.62)
$$\begin{aligned} (\bar{X}^* T \bar{X})_{\beta \beta }&= O_{\prec , \mathrm{{even}}}(\phi ^{1/2}) + O_\prec (N^{-D}), \end{aligned}$$
(7.63)

uniformly for \(z \in \mathbf{{S}}\).

Proof

We begin by noting that (3.9) applied to \(X^{[\beta ]}\) and (3.22) combined with a large deviation estimate (see [19, Theorem B.1]) yields

$$\begin{aligned} (X^* G^{[\beta ]} X)_{\beta \beta } = \phi ^{1/2} m_{\phi ^{-1}} + O_\prec (\phi ^{-1/2} \Psi ). \end{aligned}$$

Using (7.35) and Lemma 7.5, it is not hard to deduce that

$$\begin{aligned} (\bar{X}^* T^{[\beta ]} \bar{X})_{\beta \beta } = \phi ^{1/2} m_{\phi ^{-1}} + O_\prec (\phi ^{-1/2} \Psi ). \end{aligned}$$

Thus for any fixed \(n\) we may expand

$$\begin{aligned} -\frac{1}{1 + (\bar{X}^* T^{[\beta ]} \bar{X})_{\beta \beta }}&= - \frac{1}{1 + \phi ^{1/2} m_{\phi ^{-1}} - \left( {\phi ^{1/2} m_{\phi ^{-1}} - (\bar{X}^* T^{[\beta ]} \bar{X})_{\beta \beta }}\right) }\\&= -\sum _{k = 0}^n (z m_\phi )^{k + 1} \left( {\phi ^{1/2} m_{\phi ^{-1}} - (\bar{X}^* T^{[\beta ]} \bar{X})_{\beta \beta }}\right) ^k \\&+O_\prec (\phi ^{1/2} \Psi ^{n+1}), \end{aligned}$$

where in the second step we used (3.5) and (3.23). Now we split

$$\begin{aligned} (\bar{X}^* T^{[\beta ]} \bar{X})_{\beta \beta } - \phi ^{1/2} m_{\phi ^{-1}}&= \sum _{i \ne j} T^{[\beta ]}_{ij} \bar{X}_{i \beta } \bar{X}_{j \beta } \\&+ \sum _i T^{[\beta ]}_{ii} \left( {\bar{X}_{i \beta }^2 - \frac{1}{N \phi ^{1/2}}}\right) \\&+ \sum _i \frac{1}{N \phi ^{1/2}} \left( {T_{ii}^{[\beta ]} - m_{\phi ^{-1}}}\right) \\&= O_{\prec ,2}(\phi ^{-1/2} \Psi ) + O_{\prec , \diamond }(\phi ^{-1/2}) + O_{\prec , 0}(\phi ^{-1/2} \Psi )\\&= O_{\prec , \mathrm{{even}}}(\phi ^{-1/2}), \end{aligned}$$

where in the second step we used the estimates \(|T_{ij}^{[\beta ]} - \delta _{ij} m_{\phi ^{-1}} | \prec \phi ^{-1} \Psi \) and \(|m_{\phi ^{-1}} | \leqslant C \phi ^{-1/2}\). Since \(|z m_\phi | \leqslant C \phi ^{1/2}\), we therefore conclude that

$$\begin{aligned} -\frac{1}{1 + (\bar{X}^* T^{[\beta ]} \bar{X})_{\beta \beta }} = O_{\prec , \mathrm{{even}}}(n \phi ^{1/2}) + O_\prec (\phi ^{1/2} \Psi ^{n+1}). \end{aligned}$$

From (7.31) and the definition of \(\eta \), we readily find that \(\Psi \leqslant N^{-c \tau }\) for some constant \(c\). Therefore choosing \(n \equiv n(\tau ,D)\) large enough yields

$$\begin{aligned} -\frac{1}{1 + (\bar{X}^* T^{[\beta ]} \bar{X})_{\beta \beta }} = O_{\prec , \mathrm{{even}}}(\phi ^{1/2}) + O_\prec (\phi ^{1/2} N^{-D}). \end{aligned}$$
(7.64)

Having established (7.64), the remainder of the proof is relatively straightforward. From (7.29) and (7.30) we get

$$\begin{aligned} (T \bar{X})_{\mathbf{{v}} \beta } = \frac{1}{1 + (\bar{X}^* T^{[\beta ]} \bar{X})_{\beta \beta }} (T^{[\beta ]} \bar{X})_{\mathbf{{v}} \beta }. \end{aligned}$$

Moreover, using \(\Psi \geqslant c N^{-1/2}\) and \(T_{\mathbf{{v}} i}^{[\beta ]} = v(i) m_{\phi ^{-1}} + O_\prec (\phi ^{-1} \Psi ) = O_\prec (\phi ^{-1/2} |v(i) | + \phi ^{-1} \Psi )\), we find

$$\begin{aligned} \frac{1}{N \phi ^{1/2}} \sum _i \bigl |T_{\mathbf{{v}} i}^{[\beta ]} \bigr |^2 \prec \phi ^{-3/2} \Psi ^2, \qquad \frac{1}{N^{3/2} \phi ^{3/4}} \sum _i \bigl |T_{\mathbf{{v}} i}^{[\beta ]} \bigr |^3 \prec N^{-1/2} \phi ^{-9/4} (\Psi ^{\mathbf{{v}}})^3. \end{aligned}$$

We conclude that

$$\begin{aligned} (T^{[\beta ]} \bar{X})_{\mathbf{{v}} \beta } = \sum _{i} T_{\mathbf{{v}} i}^{[\beta ]} \bar{X}_{i \beta } = O_{\prec , 1}(\phi ^{-3/4} \Psi ^{\mathbf{{v}}}). \end{aligned}$$
(7.65)

Now (7.62) follows easily from (7.65) and (7.64).

Moreover, (7.59) and (7.61) follow from (7.28) combined with (7.65) and (7.64). For (7.61) we estimate the second term in (7.28) by

$$\begin{aligned} O_{\prec , \mathrm{{even}}}\left( {\phi ^{1/2} \phi ^{-3/2} \Psi ^{\mathbf{{w}}} \Psi ^{\mathbf{{e}}_b}}\right) = O_{\prec , \mathrm{{even}}}\left( {\phi ^{-1} (\Psi + N^{-1/3})^2}\right) = O_{\prec , \mathrm{{even}}}\left( {\phi ^{-1} \Psi }\right) , \end{aligned}$$

where in the last step we used that \(\Psi \geqslant c N^{-1/2}\). Moreover, (7.60) is a trivial consequence of (7.59). Finally, (7.63) follows from (7.28) and (7.64) combined with

$$\begin{aligned} (\bar{X}^* T^{[\beta ]} \bar{X})_{\beta \beta } = O_{\prec , 2}(1). \end{aligned}$$

This concludes the proof. \(\square \)
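The large deviation estimate invoked at the start of the proof reduces, in its simplest instance, to the concentration of a quadratic form \(x^* B x\) around \(N^{-1} \hbox {Tr }B\) when \(x\) has independent entries of variance \(1/N\) independent of \(B\). A minimal numerical sketch of this phenomenon (the diagonal matrix \(B\) and the dimension are arbitrary stand-ins, not the resolvents of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4000
# Deterministic stand-in for the resolvent G^{[beta]}, which in the proof
# is independent of the column of X it is tested against.
B = np.diag(2.0 + np.sin(np.arange(N)))

# Column with independent entries of mean 0 and variance 1/N.
x = rng.standard_normal(N) / np.sqrt(N)

quad = x @ B @ x           # the quadratic form x^T B x
target = np.trace(B) / N   # its deterministic approximation N^{-1} Tr B

# The fluctuation is of order sqrt(Tr B^2)/N, much smaller than target here.
assert abs(quad - target) < 0.2 * target
```

The relative fluctuation is of order \(\sqrt{\hbox {Tr }B^2}/\hbox {Tr }B \sim N^{-1/2}\), which is the mechanism behind the \(O_\prec (\phi ^{-1/2} \Psi )\) error terms above.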

Note that the upper bounds in Lemma 7.14 are the same as those of (7.42), except that \(\Psi \) is replaced with the larger quantity \(\Psi ^{\mathbf{{v}}}\). In order to get back to \(\Psi \) from \(\Psi ^{\mathbf{{v}}}\), we use the following trivial result.

Lemma 7.15

We have

$$\begin{aligned} \Psi ^{\mathbf{{v}}} \prec \Psi \end{aligned}$$

if

$$\begin{aligned} \Psi \geqslant N^{-1/3} \qquad \text {or} \qquad \Vert \mathbf{{v}} \Vert _{\infty } \prec N^{-1/2}. \end{aligned}$$
(7.66)

Proof

The claim follows immediately from the lower bound \(\Psi \geqslant N^{-1/2}\), valid for all \(z \in \mathbf{{S}}\). \(\square \)

In each application of Lemma 7.14, we shall verify one of the conditions of (7.66). The first condition is verified for \(\eta \leqslant N^{-2/3}\), which always holds for the coefficients of \(x_1\), \(x_2\), and \(x_3\) (recall (7.2)).

The second condition of (7.66) will be verified when computing the coefficients of \(y_1\), \(y_2\), and \(y_3\). To that end, we make use of the freedom of the choice of basis when computing the trace in the definition of \(J_1\), \(J_2\), and \(J_3\). We shall choose a basis that is completely delocalized. The following simple result guarantees the existence of such a basis.

Lemma 7.16

There exists an orthonormal basis \(\mathbf{{w}}_1,\ldots , \mathbf{{w}}_M\) of \(\mathbb {R}^M\) satisfying

$$\begin{aligned} |w_i(j) | \prec M^{-1/2} \end{aligned}$$
(7.67)

uniformly in \(i\) and \(j\).

Proof

Let the matrix \([\mathbf{{w}}_1 \cdots \mathbf{{w}}_M]\) of orthonormal basis vectors be uniformly distributed on the orthogonal group \(\mathrm O(M)\). Then each \(\mathbf{{w}}_i\) is uniformly distributed on the unit sphere, and by standard Gaussian concentration arguments one finds that \(|w_i(j) | \prec M^{-1/2}\). In particular, there exists an orthonormal basis \(\mathbf{{w}}_1,\ldots , \mathbf{{w}}_M\) satisfying (7.67). In fact, a slightly more careful analysis shows that one can choose \(|w_i(j) | \leqslant (2 + \varepsilon ) (\log M)^{1/2} M^{-1/2}\) for any fixed \(\varepsilon > 0\) and large enough \(M\). \(\square \)

We may now derive estimates on the matrix \(T^2\) by writing \((T^2)_{jk} = \sum _{i}T_{j \mathbf{{w}}_i} T_{\mathbf{{w}}_i k}\), where \(\{\mathbf{{w}}_i\}\) is a basis satisfying (7.67). From Lemmas 7.14 and 7.15 we get the following result.

Lemma 7.17

Fix \(D > 0\). Then there exists \(C_0 = C_0(D)\) such that

$$\begin{aligned} \hbox {Tr }T&= \hbox {Tr }T^{[\beta ]} + O_{\prec , \mathrm{{even}}}(\phi ^{-1} N \Psi ^2) + O_\prec (N^{-D}), \end{aligned}$$
(7.68)
$$\begin{aligned} (T^2)_{bb}&= O_{\prec , \mathrm{{even}}}(N \phi ^{-1} \Psi ^2) + O_\prec (N^{-D}), \end{aligned}$$
(7.69)
$$\begin{aligned} (T^2 \bar{X})_{b \beta }&= O_{\prec , \mathrm{{odd}}}(\phi ^{-1/4} N \Psi ^2) + O_\prec (N^{-D}), \end{aligned}$$
(7.70)
$$\begin{aligned} (\bar{X}^* T^2 \bar{X})_{\beta \beta }&= O_{\prec , \mathrm{{even}}}(\phi ^{1/2} N \Psi ^2) + O_\prec (N^{-D}), \end{aligned}$$
(7.71)

uniformly for \(z \in \mathbf{{S}}\).

Proof

We prove (7.70); the other estimates are proved similarly. We choose a basis \(\mathbf{{w}}_1,\ldots , \mathbf{{w}}_M\) as in Lemma 7.16, and write

$$\begin{aligned}&(T^2 \bar{X})_{b \beta } = \sum _i T_{b \mathbf{{w}}_i} (T\bar{X})_{\mathbf{{w}}_i \beta } \\&\quad = \sum _{i = 1}^{N\phi } O_{\prec , \mathrm{{even}}} \left( {\phi ^{-1} \Psi + \phi ^{-1/2} |w_i(b) |}\right) \, O_{\prec , \mathrm{{odd}}}(\phi ^{-1/4} \Psi ) + O_{\prec }(N^{-D}), \end{aligned}$$

where we used (7.61) with \(\mathbf{{w}}\) replaced by \(\mathbf{{w}}_i\), (7.62), and Lemma 7.15. Summing over \(i\), and recalling that \(\Psi \geqslant N^{-1/2}\), it is easy to conclude (7.70). \(\square \)
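Both ingredients of this argument are elementary to check numerically: a Haar-distributed orthonormal basis satisfies the delocalization bound (7.67), and the resolution of identity \((T^2)_{jk} = \sum _{i} T_{j \mathbf{{w}}_i} T_{\mathbf{{w}}_i k}\) holds for any orthonormal basis. A minimal sketch, with a random symmetric matrix as an arbitrary stand-in for \(T\) (the dimension and constants are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 300

# Haar-distributed orthogonal matrix via QR of a Gaussian matrix;
# the sign fix on the diagonal of R makes the distribution exactly Haar.
Z = rng.standard_normal((M, M))
Q, R = np.linalg.qr(Z)
W = Q * np.sign(np.diag(R))          # columns w_1, ..., w_M

# Delocalization (7.67): all entries are O(sqrt(log M / M)).
assert np.abs(W).max() < 5 * np.sqrt(np.log(M) / M)

# Resolution of identity: (T^2)_{jk} = sum_i T_{j w_i} T_{w_i k}.
T = rng.standard_normal((M, M))
T = (T + T.T) / 2                    # symmetric stand-in for T
lhs = T @ T
rhs = (T @ W) @ (W.T @ T)            # sum_i (T w_i)(w_i^T T)
assert np.allclose(lhs, rhs)
```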

In particular, as in (7.39) we find

$$\begin{aligned} J_1, J_3 = O_{\prec , \mathrm{{odd}}}(\phi ^{-1/4} N \Psi ^2) + O_\prec (N^{-D}), \qquad J_2 = O_{\prec , \mathrm{{even}}} (N \Psi ^2) + O_{\prec }(N^{-D}), \end{aligned}$$
(7.72)

where the parity of \(J_i\) follows easily from its definition.

The estimates from Lemma 7.14 are compatible with integration in the following sense. Suppose that \({\mathcal {P}}(s)\) depends on a parameter \(s \in S\), where \(S \subset \mathbb {R}^k\) has bounded volume, and that \({\mathcal {P}}(s) = O_{\prec , *}(A(s)) + O_\prec (N^{-D})\) uniformly in \(s \in S\), where \(A(s)\) is a deterministic function of \(s\) and \(* \in \{\mathrm{{even}}, \mathrm{{odd}}\}\) denotes the parity of \({\mathcal {P}}\). Suppose in addition that \({\mathcal {P}}(s)\) is Lipschitz continuous with Lipschitz constant \(N^C\). Then, analogously to Remark 3.3, we have

$$\begin{aligned} \int _S {\mathcal {P}}(s) \, \mathrm {d}s = O_{\prec , *}\left( {\int _S A(s) \, \mathrm {d}s}\right) + O_\prec \left( {\int _S A(s) \, \mathrm {d}s \, N^{-D}}\right) . \end{aligned}$$

Lemmas 7.14 and 7.17 provide the key estimates on the coefficients appearing in (7.49). We claim that all estimates of Lemma 7.7, along with (7.42), remain valid, in the sense that an estimate of the form \(|u | \prec v\) is to be replaced with

$$\begin{aligned} u = O_{\prec , *}(v) + O_\prec (N^{-D}), \end{aligned}$$

where \(* \in \{\mathrm{{even}}, \mathrm{{odd}}\}\) denotes the parity of the polynomialization of \(u\). Indeed, for the estimates (7.37) on \(x_i\), we always have \(\hbox {Im }z = \eta \leqslant N^{-2/3}\), so that by Lemma 7.15 we have \(\Psi ^{\mathbf{{v}}} \prec \Psi \). Thus we get from Lemma 7.14 that

$$\begin{aligned} x_1, x_3&= O_{\prec , \mathrm{{odd}}}\left( {\phi ^{-1/4}N \Psi \Psi _b}\right) + O_\prec (N^{-D}),\\ x_2&= O_{\prec , \mathrm{{even}}} \left( {\phi ^{-1/2}N \Psi _b^2+N\Psi ^2}\right) + O_\prec (N^{-D}), \end{aligned}$$

where the parity of \(x_i\) may be easily deduced from their definitions. Moreover, for the estimates (7.41) we use (7.72) to get

$$\begin{aligned} y_1,y_3&= O_{\prec , \mathrm{{odd}}} \left( {\phi ^{-1/4}N^{C\varepsilon }\kappa _a^{1/2}}\right) + O_\prec (N^{-D}),\\ y_2&= O_{\prec , \mathrm{{even}}} \left( {N^{C\varepsilon }\kappa _a^{1/2}}\right) + O_\prec (N^{-D}). \end{aligned}$$

Note that, thanks to Lemmas 7.14 and 7.17, we have obtained exactly the same upper bounds on the coefficients \(x_i\) and \(y_i\) as the ones obtained in Lemma 7.7, but we have in addition expressed them, up to a negligible error, as graded polynomials, to which Lemma 7.13 is applicable.

In addition to the coefficients \(x_i\) and \(y_i\), we have to control the coefficient \(q^{(m)}(y_T)\) in the definition (7.45) of \(A_{\mathbf{{l}}}\). We in fact claim that

$$\begin{aligned} q^{(m)}(y_T) = O_{\prec , \mathrm{{even}}}(N^{C \varepsilon }) + O_\prec (N^{-D}). \end{aligned}$$
(7.73)

This follows from the estimate

$$\begin{aligned} y_{T} = y_{T^{[\beta ]}} + O_{\prec , \mathrm{{even}}}(N^{C \varepsilon } \kappa _a) + O_\prec (N^{-D}) = O_{\prec , \mathrm{{even}}}(N^{C \varepsilon }) + O_\prec (N^{-D}), \end{aligned}$$

which may be derived from (7.68), combined with a Taylor expansion of \(q^{(m)}\). Similarly, we find that

$$\begin{aligned} h^{(k)}(A_{\mathbf{{0}}}) = O_{\prec , \mathrm{{even}}}(N^{C \varepsilon }) + O_\prec (N^{-D}) \qquad (k = 1,2,3). \end{aligned}$$
(7.74)

We may now put everything together. Noting that the degree of the polynomializations of the expressions (7.49) is always odd, we obtain, in analogy to (7.51), that

$$\begin{aligned} Y = O_{\prec , \mathrm{{odd}}} \left( {N^{C \varepsilon } \left( {\kappa _a^{-1/2} \phi ^{3/4} |w(b) |^2 + \phi ^{1/4} |w(b) | + \phi ^{-1/4} \kappa _a^{1/2}}\right) }\right) + O_\prec (N^{-D}) \end{aligned}$$

for \(Y\) being any term of (7.49). Hence Lemma 7.8 follows from Lemma 7.13 and Young’s inequality.

7.4 Stability of level repulsion: proof of Lemma 6.5

This is a Green function comparison argument, using the machinery introduced in Sect. 7.1. A similar comparison argument was given in Propositions 2.4 and 2.5 of [29]. The details in the sample covariance case and for indices \(a\) satisfying \(a \leqslant K^{1 - \tau }\) follow an argument very similar to (in fact simpler than) the one from Sects. 7.1–7.3. As in the proofs of Propositions 2.4 and 2.5 of [29], one writes the level repulsion condition in terms of resolvents. In our case, one uses the representation (7.15) as the starting point. Then the machinery of Sects. 7.1–7.3 may be applied with minor modifications. We omit the details.

8 Extension to general \(T\) and universality for the uncorrelated case

In this section we relax the assumption (3.1), and hence extend all arguments of Sects. 3–7 to cover general \(T\). We also prove the fixed-index joint eigenvector–eigenvalue universality of the matrix \(H\) defined in (2.7), for indices bounded by \(K^{1 - \tau }\) for some \(\tau > 0\).

Bearing the applications in the current paper in mind, we state the results of this section for the matrix \(H\) from (2.7), but it is a triviality that all results and their proofs carry over to the case of an arbitrary \(Q\) from (1.10) provided that \(\Sigma = T T^* = I_M\).

8.1 The isotropic Marchenko–Pastur law for \(Y Y^*\)

We start with the singular value decomposition of \(T\), which we write as

$$\begin{aligned} T = O' (\Lambda , 0) O'' = O' \Lambda (I_M, 0) O'', \end{aligned}$$

where \(O' \in \mathrm O(M)\) and \(O'' \in \mathrm O(M+r)\) are orthogonal matrices, \(0\) is the \(M \times r\) zero matrix, and \(\Lambda \) is an \(M \times M\) diagonal matrix containing the singular values of \(T\). Setting

$$\begin{aligned} \Sigma ^{1/2} = O' \Lambda (O')^* , \quad O \mathrel {\mathop :}=\begin{pmatrix} O' &{} 0\\ 0 &{} I_r \end{pmatrix} O'', \end{aligned}$$
(8.1)

we have

$$\begin{aligned} T = \Sigma ^{1/2} (I_M, 0) O. \end{aligned}$$

We conclude that

$$\begin{aligned} Q = \Sigma ^{1/2} H \Sigma ^{1/2} , \end{aligned}$$

where \(H \mathrel {\mathop :}=Y Y^*\) and \(Y \mathrel {\mathop :}=(I_M, 0) O X\) were defined in (2.7). Comparing this to (3.2), we find that to relax the assumption (3.1) we have to generalize the arguments of Sects. 3–7 by replacing \(X X^*\) with \(H = Y Y^*\).
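The factorization (8.1) and the resulting identity \(Q = \Sigma ^{1/2} H \Sigma ^{1/2}\) are direct consequences of the singular value decomposition and are easy to verify numerically. A sketch with small arbitrary dimensions (the values of \(M\), \(r\), \(N\) are illustrative, and \(Q = (TX)(TX)^*\) is assumed, as in (1.10)):

```python
import numpy as np

rng = np.random.default_rng(2)
M, r, N = 5, 2, 40
T = rng.standard_normal((M, M + r))

# Singular value decomposition T = O' (Lambda, 0) O'' as in the text.
Op, s, Opp = np.linalg.svd(T, full_matrices=True)  # O' is M x M, O'' is (M+r) x (M+r)
Lam = np.diag(s)                                   # M x M diagonal of singular values

# Definitions (8.1).
Sigma_half = Op @ Lam @ Op.T
O = np.block([[Op, np.zeros((M, r))],
              [np.zeros((r, M)), np.eye(r)]]) @ Opp

IM0 = np.hstack([np.eye(M), np.zeros((M, r))])     # the matrix (I_M, 0)
assert np.allclose(T, Sigma_half @ IM0 @ O)        # T = Sigma^{1/2} (I_M, 0) O

# Consequently Q = Sigma^{1/2} H Sigma^{1/2} with H = Y Y^*, Y = (I_M, 0) O X.
X = rng.standard_normal((M + r, N)) / np.sqrt(N)
Y = IM0 @ O @ X
Q = T @ X @ X.T @ T.T
assert np.allclose(Q, Sigma_half @ (Y @ Y.T) @ Sigma_half)
```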

The generalization of \(G = (X X^* - z)^{-1}\) is the resolvent of \(Y Y^*\),

$$\begin{aligned} \widehat{G}(z) \mathrel {\mathop :}=(Y Y^* - z)^{-1}. \end{aligned}$$

We also abbreviate

$$\begin{aligned} G' \mathrel {\mathop :}=(O X X^* O^* - z)^{-1}. \end{aligned}$$

Throughout the following we identify \(\mathbf{{w}} \in \mathbb {R}^M\) with its natural embedding \(\left( {\begin{array}{c}\mathbf{{w}}\\ 0\end{array}}\right) \in \mathbb {R}^{M+r}\). Thus, for example, for \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\) we may write \(G'_{\mathbf{{v}} \mathbf{{w}}}\).

Theorem 8.1

(Local laws for \(Y Y^*)\) Theorem 3.2 remains valid with \(G\) replaced by \(\widehat{G}\). Moreover\(,\) Theorems 3.4 and 3.5 remain valid for \(\varvec{\zeta }_i\) and \(\lambda _i\) denoting the eigenvectors and eigenvalues of \(Y Y^*\).

Proof

It suffices to prove the first sentence, since all claims in the second sentence follow from the isotropic law (see [10] for more details). We only prove (3.9) for \(\widehat{G}\); the other bound, (3.10) for \(\widehat{G}\), is proved similarly. To simplify the presentation, we suppose that \(r = 1\); the case \(r \geqslant 2\) is a trivial extension. Abbreviate \(\bar{M} \mathrel {\mathop :}=M + 1\). Noting that \(Y_{i \mu } = \mathbf{{1}} (i \ne \bar{M}) (OX)_{i \mu }\), we find from [10, Definition 3.5 and Equation (3.7)] that

$$\begin{aligned} \widehat{G}_{\mathbf{{v}} \mathbf{{w}}} = G'_{\mathbf{{v}} \mathbf{{w}}} - \frac{G'_{\mathbf{{v}} \bar{M}} G'_{\bar{M} \mathbf{{w}}}}{G'_{\bar{M} \bar{M}}}. \end{aligned}$$
(8.2)

Since \(G' = O G O^*\), we have \(G'_{\mathbf{{v}} \mathbf{{w}}} = G_{O^* \mathbf{{v}} \, O^* \mathbf{{w}}}\). Hence, using (3.9) and (8.2), the proof will be complete provided we can show that

$$\begin{aligned} \biggl |\frac{G'_{\mathbf{{v}} \bar{M}} G'_{\bar{M} \mathbf{{w}}}}{G'_{\bar{M} \bar{M}}} \biggr | \prec \Phi , \qquad \Phi \mathrel {\mathop :}=\sqrt{\frac{\hbox {Im }m_{\phi ^{-1}}(z)}{M \eta }} + \frac{1}{M \eta } \asymp \phi ^{-1} \Psi , \end{aligned}$$
(8.3)

where we recall the definition (7.31) of \(\Psi \). In fact, from Lemma 3.6 and (3.23) we find that \(\Phi / |m_{\phi ^{-1}} | \leqslant N^{-c}\) for some positive constant \(c\) depending on \(\tau \). Hence (3.9) yields

$$\begin{aligned} \biggl |\frac{G'_{\mathbf{{v}} \bar{M}} G'_{\bar{M} \mathbf{{w}}}}{G'_{\bar{M} \bar{M}}} \biggr | \prec \Phi ^2 / |m_{\phi ^{-1}} | \leqslant \Phi . \end{aligned}$$

This concludes the proof. \(\square \)
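The identity (8.2) is the standard relation between the resolvent of a symmetric matrix and that of its principal minor: for \(r = 1\), \(Y Y^*\) is the top-left \(M \times M\) block of \(O X X^* O^*\). A quick numerical sanity check of (8.2), with small arbitrary dimensions (in the code the deleted index \(\bar{M} = M + 1\) is the 0-based index \(M\)):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 6, 30
Mbar = M  # 0-based index of the deleted row/column (the last one)

X = rng.standard_normal((M + 1, N)) / np.sqrt(N)
# An orthogonal O; exact Haar distribution is not needed for the identity.
O, _ = np.linalg.qr(rng.standard_normal((M + 1, M + 1)))

A = O @ X @ X.T @ O.T          # the matrix O X X^* O^*
Y = (O @ X)[:M]                # Y = (I_M, 0) O X: last row deleted
z = 2.0 + 0.5j

Gp = np.linalg.inv(A - z * np.eye(M + 1))          # G'
Ghat = np.linalg.inv(Y @ Y.T - z * np.eye(M))      # resolvent of the minor

# (8.2): Ghat_{vw} = G'_{vw} - G'_{v Mbar} G'_{Mbar w} / G'_{Mbar Mbar}
rhs = Gp[:M, :M] - np.outer(Gp[:M, Mbar], Gp[Mbar, :M]) / Gp[Mbar, Mbar]
assert np.allclose(Ghat, rhs)
```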

Having established Theorem 8.1, all arguments from Sects. 3–6 that use it as input may be taken over verbatim, after replacing \(G\) by \(\widehat{G}\). More precisely, all results from Sects. 3–6 remain valid for a general \(Q\), with the exception of Proposition 6.3, Lemmas 6.4 and 6.5, and Proposition 6.6. Therefore we have completed the proofs of all of our main results except Theorem 2.20.

In order to prove Theorem 2.20, we still have to prove Lemmas 6.4 and 6.5 and Proposition 6.6 for \(Y Y^*\) instead of \(X X^*\). Lemma 6.4 is easy: for Gaussian \(X\) we have \(Y \overset{d}{=}(I_M, 0) X = \widetilde{X}\), where \(\widetilde{X}\) is the \(M \times N\) matrix obtained from \(X\) by deleting its bottom \(r\) rows.

The proofs of Lemma 6.5 and Proposition 6.6 rely on Green function comparison. What remains, therefore, is to extend the argument of Sect. 7 from \(H = X X^*\) to \(H = Y Y^*\).

8.2 Quantum unique ergodicity for \(YY^*\)

In this subsection we prove Proposition 6.6 for the eigenvectors \(\varvec{\zeta }_a\) of \(H = Y Y^*\). As explained in Sect. 7.4, the proof of Lemma 6.5 is analogous and therefore omitted. We proceed exactly as in Sect. 7, replacing \(G\) with \(\widehat{G}\). It suffices to prove the following result.

Lemma 8.2

Lemma 7.3 remains valid if \(x(E)\) and \(y(E)\) are replaced with \(\widehat{x}(E)\) and \(\widehat{y}(E),\) obtained from the definitions (7.22) and (7.23) by replacing \(G\) with \(\widehat{G}\).

Proof

We take over the notation from the proof of Theorem 8.1, and to simplify notation again assume that \(r = 1\). As in Sect. 7, we suppose for definiteness that \(\phi \geqslant 1\). Defining \(\mathbf{{u}} \mathrel {\mathop :}=O \mathbf{{w}}\) and \(\mathbf{{r}} \mathrel {\mathop :}=O \mathbf{{e}}_{M+1}\), we have \(\langle {\mathbf{{u}}} , {\mathbf{{r}}}\rangle = 0\) and, using (8.2),

$$\begin{aligned} \widehat{G}_{\mathbf{{w}} \mathbf{{w}}}= G_{\mathbf{{u}} \mathbf{{u}}} - \frac{G_{\mathbf{{u}} \mathbf{{r}}} G_{\mathbf{{r}} \mathbf{{u}}}}{G_{\mathbf{{r}} \mathbf{{r}}}}, \qquad \hbox {Tr }\widehat{G} = \hbox {Tr }G - \frac{(G^2)_{\mathbf{{r}} \mathbf{{r}}}}{G_{\mathbf{{r}} \mathbf{{r}}}}. \end{aligned}$$

We conclude that

$$\begin{aligned} \widehat{x}(E) = x(E) - \frac{M}{\pi } \hbox {Im }\left( {\frac{G_{\mathbf{{u}} \mathbf{{r}}} G_{\mathbf{{r}} \mathbf{{u}}}}{G_{\mathbf{{r}} \mathbf{{r}}}}}\right) (E + \mathrm {i}\eta ). \end{aligned}$$

Recalling (3.9) and (7.31), we find that the second term is stochastically dominated by

$$\begin{aligned} M \frac{\phi ^{-2} \Psi ^2}{\phi ^{-1/2}} \leqslant N \Psi ^2 \leqslant C N \frac{1}{N^2 \eta ^2} = \frac{C N^{4 \varepsilon }}{N \Delta _a} \, \frac{1}{\Delta _a} \leqslant C N^{4 \varepsilon } \kappa _a^{1/2} \frac{1}{\Delta _a}, \end{aligned}$$

where in the second step we used that \(\Psi \leqslant C (N \eta )^{-1}\), as follows from Lemma 3.6 and the definition of \(\eta \) in (7.2). Recalling the definitions from (7.2), we therefore conclude that for small enough \(\varepsilon \equiv \varepsilon (\tau )\) we have

$$\begin{aligned} |I | \sup _{E \in I} \bigl |\widehat{x}(E) - x(E) \bigr | \prec N^{-c} \end{aligned}$$
(8.4)

for some positive constant \(c\) depending on \(\tau \).

Similarly, we have for any \(z \in \mathbf{{S}}\)

$$\begin{aligned} \bigl |\hbox {Tr }G - \hbox {Tr }\widehat{G} \bigr | \prec N \Psi ^2 \leqslant N^{-c} \eta ^{-1} \end{aligned}$$
(8.5)

for some positive constant \(c\) depending on \(\tau \). Plugging (8.5) into the definition of \(\widehat{y}(E)\) and estimating the error term using integration by parts, as in (7.43), we get

$$\begin{aligned} \sup _{E \in I} \bigl |\widehat{y}(E) - y(E) \bigr | \prec N^{-c}. \end{aligned}$$

Using the mean value theorem and the bound \(|y(E) | \prec 1\), we therefore get

$$\begin{aligned} h \biggl [{ \int _{I} \widehat{x}(E) \, q \left( {\widehat{y}(E) }\right) \, \mathrm {d}E }\biggr ] = h \biggl [{ \int _{I} x(E) \, q \left( {y(E) }\right) \, \mathrm {d}E }\biggr ] + O_\prec (N^{-c}). \end{aligned}$$

Combined with (7.21), this concludes the proof. \(\square \)

This concludes the proof of Theorem 2.20 for the case of general \(T\).

8.3 The joint eigenvalue–eigenvector universality of \(YY^*\) near the spectral edges

In this section we observe that the technology developed in Sect. 7 allows us to establish the universality of the joint eigenvalue–eigenvector distribution of \(Q\) provided that \(\Sigma = I_M\). Without loss of generality, we consider the case where \(Q\) is given by \(H = Y Y^*\) defined in (2.7). This result applies to arbitrary eigenvalue and eigenvector indices bounded by \(K^{1 - \tau }\), and in particular does not need to invoke eigenvalue correlation functions.

This result generalizes the quantum unique ergodicity from Proposition 6.6 and its extension from Remark 6.7 by also including the distribution of the eigenvalues. The universality of both the eigenvalues and the eigenvectors is formulated in the sense of fixed indices. A result in a similar spirit was given in [29, Theorem 1.6], except that the upper bound \((\log K)^{C \log \log N}\) on the eigenvalue and eigenvector indices from [29] is improved all the way to \(K^{1 - \tau }\), for any \(\tau > 0\). A result covering all eigenvalue and eigenvector indices, i.e. with an index upper bound \(K\), was given in [29, Theorem 1.10] and [41, Theorem 8], but under a four-moment matching assumption. Theorem 8.3 is a true universality result in that it does not require any moment matching, but it does require the index upper bound \(K^{1 - \tau }\) instead of \(K\) on the eigenvalue and eigenvector indices.

In addition, Theorem 8.3 extends the previous results from [29] and [41] by considering arbitrary generalized components \(\langle {\varvec{\zeta }_a} , {\mathbf{{v}}}\rangle \) of the eigenvectors. Finally, Theorem 8.3 holds for the general class of covariance matrices defined in (2.7).

Theorem 8.3

(Universality for the uncorrelated case) Fix \(\tau > 0,\) \(k = 1,2,3, \ldots ,\) and \(r = 0,1,2, \ldots \). Choose an observable \(h \in C^4(\mathbb {R}^{2k})\) satisfying

$$\begin{aligned} |\partial ^\alpha h(x) | \leqslant C(1 + |x |)^C \end{aligned}$$

for some constant \(C > 0\) and for all \(\alpha \in \mathbb {N}^{2k}\) satisfying \(|\alpha | \leqslant 4\). Let \(X\) be an \((M + r) \times N\) matrix\(,\) and define \(H\) through (2.7) for some orthogonal \(O \in \mathrm O(M + r)\). Denote by \(\lambda _1 \geqslant \cdots \geqslant \lambda _M\) the eigenvalues of \(H\) and by \(\varvec{\zeta }_1,\ldots , \varvec{\zeta }_M\) the associated unit eigenvectors. Let \(\mathbb {E}^{(1)}\) and \(\mathbb {E}^{(2)}\) denote the expectations with respect to two laws on \(X,\) both of which satisfy (1.15) and (1.16). Recall the definition (6.17) of \(\Delta _a,\) the typical distance between \(\lambda _a\) and \(\lambda _{a+1},\) and (3.14) of the classical location \(\gamma _a\).

Then for any indices \(a_1,\ldots , a_k, b_1,\ldots , b_k \in [\![{1, K^{1 - \tau }}]\!]\) and deterministic unit vectors \(\mathbf{{u}}_1, \mathbf{{w}}_1,\ldots , \mathbf{{u}}_k, \mathbf{{w}}_k \in \mathbb {R}^M\) we have

$$\begin{aligned}&(\mathbb {E}^{(1)} - \mathbb {E}^{(2)}) \, h \left( {\frac{\lambda _{a_1} - \gamma _{a_1}}{\Delta _{a_1}},\ldots , \frac{\lambda _{a_k} - \gamma _{a_k}}{\Delta _{a_k}}, M \langle {\mathbf{{u}}_1} , {\varvec{\zeta }_{b_1}}\rangle \langle {\varvec{\zeta }_{b_1}} , {\mathbf{{w}}_1}\rangle ,\ldots , M \langle {\mathbf{{u}}_k} , {\varvec{\zeta }_{b_k}}\rangle \langle {\varvec{\zeta }_{b_k}} , {\mathbf{{w}}_k}\rangle }\right) \\&\quad = O(N^{-c}) \end{aligned}$$

for some constant \(c \equiv c(\tau , k, r, h) > 0\).

Proof

The proof is a Green function comparison argument, a minor modification of that developed in Sect. 7. We write the distribution of \(\lambda _a - \gamma _a\) in terms of the resolvent \(\widehat{G}\), starting from the Helffer–Sjöstrand representation (7.15), exactly as in [29, Sections 4 and 5]. We omit further details. \(\square \)

Remark 8.4

In particular, Theorem 8.3 establishes the fixed-index universality of eigenvalues with indices bounded by \(K^{1 - \tau }\). Indeed, we may choose \(\mathbb {E}^{(2)}\) to be the expectation with respect to a Gaussian law, in which case \(H \overset{d}{=}\widetilde{X} \widetilde{X}^*\), where \(\widetilde{X}\) is an \(M \times N\) Gaussian matrix. (For example, the top eigenvalue of \(H\) is distributed according to the Tracy–Widom-1 distribution, etc.)

We note that even this fixed-index universality of eigenvalues is a new result, having previously only been established under the four-moment matching condition [29, 41] (in the context of Wigner matrices).

Remark 8.5

We formulated Theorem 8.3 for the real symmetric covariance matrices of the form (2.7), but it and its proof remain valid for complex Hermitian covariance matrices, as well as Wigner matrices (both real symmetric and complex Hermitian).

Remark 8.6

Assuming \(|\phi - 1 | > \tau \), the condition \(a \leqslant K^{1 - \tau }\) on the indices in Theorem 8.3 may be replaced with \(a \notin [\![{K^{1 - \tau }, K - K^{1 - \tau }}]\!]\).

Remark 8.7

Combining Theorems 8.3 and 2.7, we get the following universality result for \(Q\). Fix \(\tau > 0\), \(k = 1,2,3, \ldots \), and \(r = 0,1,2, \ldots \). For any continuous and bounded function \(h\) on \(\mathbb {R}^k\) we have

$$\begin{aligned}&\lim _{N \rightarrow \infty } \left[ \mathbb {E}\, h\left( {\frac{\mu _{s_+ + a_1} - \gamma _{a_1}}{\Delta _{a_1}}, \ldots , \frac{\mu _{s_+ + a_k} - \gamma _{a_k}}{\Delta _{a_k}}}\right) \right. \\&\left. \quad - \mathbb {E}^{\mathrm{{Wish}}} \, h \left( {\frac{\lambda _{a_1} - \gamma _{a_1}}{\Delta _{a_1}},\ldots , \frac{\lambda _{a_k} - \gamma _{a_k}}{\Delta _{a_k}}}\right) \right] = 0 \end{aligned}$$

for any indices \(a_1,\ldots , a_k \leqslant K^{1 - \tau } \alpha _+^3\). Here \(\mathbb {E}^{\mathrm{{Wish}}}\) denotes expectation with respect to the Wishart case, where \(r = 0\), \(T = I_M\), and \(X\) is Gaussian. A similar result holds near the left edge provided that \(|\phi - 1 | \geqslant \tau \).

9 Extension to \(\dot{Q}\) and proof of Theorem 2.23

In this section we explain how to extend our analysis from \(Q\) defined in (1.10) to \(\dot{Q}\) defined in (2.23), hence proving Theorem 2.23. We define the resolvent

$$\begin{aligned} \dot{G}(z) \mathrel {\mathop :}=\left( {X (1 - \mathbf{{e}} \mathbf{{e}}^*) X^* - z}\right) ^{-1}, \end{aligned}$$

which will replace \(G(z) = (X X^* - z)^{-1}\) when analysing \(\dot{Q}\) instead of \(Q\). We begin by noting that the isotropic local laws hold also for \(\dot{G}\).

Theorem 9.1

(Local laws for \(X (1 - \mathbf{{e}} \mathbf{{e}}^*) X^*)\) Theorem 3.2 remains valid with \(G\) replaced by \(\dot{G}\). Moreover\(,\) Theorems 3.4 and 3.5 remain valid for \(\varvec{\zeta }_i\) and \(\lambda _i\) denoting the eigenvectors and eigenvalues of \(X (1 - \mathbf{{e}} \mathbf{{e}}^*) X^*\).

Proof

As in the proof of Theorem 8.1, we only prove (3.9) for \(\dot{G}\). Using the identity (3.36) we get

$$\begin{aligned} \dot{G} = \left( {X X^* - z - X \mathbf{{e}} \mathbf{{e}}^* X^*}\right) ^{-1} = G + \frac{1}{1 - (X^* G X)_{\mathbf{{e}} \mathbf{{e}}}} \, G X \mathbf{{e}} \mathbf{{e}}^* X^* G. \end{aligned}$$
(9.1)

Using (3.9), the proof will be complete provided we can show that

$$\begin{aligned} \Biggl |\frac{(GX)_{\mathbf{{v}} \mathbf{{e}}} (X^* G)_{\mathbf{{e}} \mathbf{{w}}}}{1 - (X^* G X)_{\mathbf{{e}} \mathbf{{e}}}} \Biggr | \prec \Phi \end{aligned}$$
(9.2)

for unit vectors \(\mathbf{{v}}, \mathbf{{w}} \in \mathbb {R}^M\), where \(\Phi \) was defined in (8.3). Recall the definition \(R(z) \mathrel {\mathop :}=(X^* X - z)^{-1}\). From the elementary identity \(X^*G X = 1 + zR\) and Theorem 3.2 applied to \(X^*\) instead of \(X\), we get

$$\begin{aligned} \biggl |\frac{1}{1 - (X^* G X)_{\mathbf{{e}} \mathbf{{e}}}} \biggr | \prec \frac{1}{|z | |m_\phi |} \leqslant \frac{C}{1 + \phi ^{1/2}}, \end{aligned}$$

where in the last step we used (3.19) and (3.23). Using Lemma 9.3 below with \(\mathbf{{x}} = \mathbf{{e}}\), (9.2) therefore follows provided we can prove that

$$\begin{aligned} \phi ^{1/2} (1 + \phi ^{1/2}) \Phi ^2 \leqslant C \Phi . \end{aligned}$$

This is an immediate consequence of the estimate \((1 + \phi ) \Phi \leqslant C\), which itself easily follows from the definition (3.7) of \(\mathbf{{S}}\) and (3.22). \(\square \)
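The identity (9.1) is a rank-one (Sherman–Morrison) resolvent expansion, and can be sanity-checked numerically. A sketch with small arbitrary dimensions, taking \(\mathbf{{e}}\) to be the constant unit vector:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 5, 25
X = rng.standard_normal((M, N)) / np.sqrt(N)
e = np.ones(N) / np.sqrt(N)    # constant unit vector
z = 1.5 + 0.3j

G = np.linalg.inv(X @ X.T - z * np.eye(M))
Gdot = np.linalg.inv(X @ (np.eye(N) - np.outer(e, e)) @ X.T - z * np.eye(M))

u = X @ e
# (9.1): Gdot = G + G X e e^* X^* G / (1 - (X^* G X)_{ee})
rhs = G + np.outer(G @ u, u @ G) / (1 - u @ G @ u)
assert np.allclose(Gdot, rhs)
```

Note that for \(\hbox {Im }z > 0\) the denominator \(1 - (X^* G X)_{\mathbf{{e}} \mathbf{{e}}}\) has nonzero imaginary part, so the expansion is always well defined.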

Next, we deal with the quantum unique ergodicity of \(X (1 - \mathbf{{e}} \mathbf{{e}}^*) X^*\). As explained in Sect. 8.2, it suffices to prove the following result.

Lemma 9.2

Lemma 7.3 remains valid if \(x(E)\) and \(y(E)\) are replaced with \(\dot{x}(E)\) and \(\dot{y}(E),\) obtained from the definitions (7.22) and (7.23) by replacing \(G\) with \(\dot{G}\).

Proof

The proof mirrors closely that of Lemma 8.2, using the identity (9.1) instead of (8.2) as input. We omit the details. \(\square \)

Using Theorem 9.1 and Lemma 9.2, combined with the results of Sect. 8.2, we conclude the proof of Theorem 2.23. To be precise, the arguments of Sects. 8 and 9 have to be successively combined so as to obtain the isotropic local laws and quantum unique ergodicity of the matrix \(Y (1 - \mathbf{{e}} \mathbf{{e}}^*) Y^*\). This has to be done in the following order. First, using Theorem 9.1 and Lemma 9.2, one establishes the local laws and quantum unique ergodicity for \(X (1 - \mathbf{{e}} \mathbf{{e}}^*) X^*\). Second, using these results as input, one repeats the arguments of Sect. 8, except that \(X X^*\) is replaced with \(X (1 - \mathbf{{e}} \mathbf{{e}}^*) X^*\); this is a trivial modification of the arguments presented in Sect. 8. Thus we get the local laws and quantum unique ergodicity for the matrix

$$\begin{aligned} Y (1 - \mathbf{{e}} \mathbf{{e}}^*) Y^* = (I_M,0) O X (I_N - \mathbf{{e}} \mathbf{{e}}^*) X^* O^* (I_M,0)^*. \end{aligned}$$

Moreover, we find that Theorem 8.3 also holds if \(H = Y Y^*\) from (2.7) is replaced with \(Y (1 - \mathbf{{e}} \mathbf{{e}}^*) Y^*\).

All that remains is the proof of the following estimate, which generalizes (7.32).

Lemma 9.3

For \(z \in \mathbf{{S}}\) and deterministic unit vectors \(\mathbf{{v}} \in \mathbb {R}^M\) and \(\mathbf{{x}} \in \mathbb {R}^N,\) we have

$$\begin{aligned} \bigl |(GX )_{\mathbf{{v}} \mathbf{{x}}} \bigr | \prec \phi ^{1/4} (1 + \phi ^{1/2}) \Phi , \end{aligned}$$
(9.3)

where \(\Phi \) was defined in (8.3).

Proof

In the case where \(\mathbf{{x}} = \mathbf{{e}}_\mu \) is a standard unit vector of \(\mathbb {R}^N\), (9.3) is a trivial extension of (7.32) (which was proved under the assumption that \(\phi \geqslant 1\)). For general \(\mathbf{{x}}\), the proof requires more work. Indeed, writing \((GX )_{\mathbf{{v}} \mathbf{{x}}} = \sum _\mu (GX )_{\mathbf{{v}} \mu } \, x(\mu )\) and estimating \(|(GX )_{\mathbf{{v}} \mu } |\) by \(O_\prec (\phi ^{1/4} (1 + \phi ^{1/2}) \Phi )\) leads to a bound proportional to the \(\ell ^1\)-norm of \(\mathbf{{x}}\) instead of its \(\ell ^2\)-norm. In order to obtain the sharp bound, which is proportional to the \(\ell ^2\)-norm, we need to exploit cancellations among the summands. This phenomenon is related to the fluctuation averaging from [18], and was previously exploited in [10] to obtain the isotropic laws from Theorem 3.2. It is best exploited by estimating the \(p\)-th moment for an even integer \(p\),

$$\begin{aligned} \mathbb {E}\bigl |(GX )_{\mathbf{{v}} \mathbf{{x}}} \bigr |^p = |z |^p \sum _{\mu _1,\ldots , \mu _p} x(\mu _1) \cdots x(\mu _p) \, \mathbb {E}\left( {R_{\mu _1\mu _1} \, (G^{[\mu _1]} X)_{\mathbf{{v}} \mu _1} \cdots \overline{R_{\mu _p \mu _p} \, (G^{[\mu _p]} X)_{\mathbf{{v}} \mu _p}}}\right) ; \end{aligned}$$
(9.4)

here we used the first identity of (7.30). A similar argument was given in [10, Section 5]. The basic idea is to make all resolvents on the right-hand side of (9.4) independent of the columns of \(X\) indexed by \(\{\mu _1,\ldots , \mu _p\}\) (see [10, Definition 3.7]). As in [10, Section 5], we do this using the identities from [10, Lemma 3.8] for the entries of \(R\). In addition, for the entries of \(G\) we use the identity (in the notation of [10, Definition 3.7])

$$\begin{aligned} G^{[T]}_{\mathbf{{v}} \mathbf{{w}}} = G^{[T \mu ]}_{\mathbf{{v}} \mathbf{{w}}} + z R^{[T]}_{\mu \mu } \sum _{k,l = 1}^M G^{[T \mu ]}_{\mathbf{{v}} k} G^{[T \mu ]}_{l \mathbf{{w}}} X_{k \mu } X_{l \mu }, \end{aligned}$$
(9.5)

which follows from (7.28) and (7.29). As in [10, Section 5], the resulting expansion may be conveniently organized using graphs, and brutally truncated after a number of steps that depends only on \(p\) and \(\omega \) (here \(\omega \) is the constant from \(\mathbf{{S}}\) in (3.7)). The key observation is that, once the expansion is performed, we may sum over the pairings of the variables \(\{X_{k \mu _i} \mathrel {\mathop :}k \in [\![{1,N}]\!], i \in [\![{1,p}]\!]\}\); we find that each independent summation index \(\mu _i\) comes with a weight bounded by \(x(\mu _i)^2 + N^{-1}\), which sums to \(O(1)\). We refer to [10] for the full details of the method, and leave the modifications outlined above to the reader. \(\square \)
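The cancellation being exploited here is visible numerically: for a flat test vector \(\mathbf{{x}}\), the naive termwise (\(\ell ^1\)-type) bound \(\sum _\mu |(GX)_{\mathbf{{v}} \mu } | \, |x(\mu ) |\) overestimates \(|(GX)_{\mathbf{{v}} \mathbf{{x}}} |\) by a factor of order \(\sqrt{N}\). A small illustration of the phenomenon only, not of the moment-method proof (all dimensions and the spectral parameter are arbitrary choices, with \(z\) taken away from the spectrum):

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 200, 400
X = rng.standard_normal((M, N)) / np.sqrt(N)
z = 5.0 + 0.1j                      # spectral parameter away from the spectrum
G = np.linalg.inv(X @ X.T - z * np.eye(M))

v = np.zeros(M)
v[0] = 1.0                          # a standard basis vector
x = np.ones(N) / np.sqrt(N)         # flat unit vector in R^N

row = v @ G @ X                     # the row ((GX)_{v mu})_mu
actual = abs(row @ x)               # |(GX)_{vx}|
l1_bound = np.abs(row) @ np.abs(x)  # naive termwise bound

# The naive bound loses a factor of order sqrt(N) due to cancellations.
assert actual < 0.5 * l1_bound
```

The gap between `actual` and `l1_bound` is exactly the gap between the \(\ell ^2\)- and \(\ell ^1\)-norms of the row, which the pairing argument above captures.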