1 Introduction

A considerable literature on random matrices focuses on Hermitian or symmetric matrices with independent entries. These models are paradigms for the local eigenvalue statistics of many random Hamiltonians, as envisioned by Wigner. The study of non-Hermitian random matrices goes back to Ginibre, then at Princeton and motivated by Wigner. Ginibre described his viewpoint on the problem as follows [12]:

Apart from the intrinsic interest of the problem, one may hope that the methods and results will provide further insight in the cases of physical interest or suggest as yet lacking applications.

In fact the eigenvalue statistics found by Ginibre, in the case of Gaussian complex or real entries, correspond to two-dimensional gases with distinct temperatures and symmetry conditions; this is therefore a model for many interacting particle systems in dimension 2 (see e.g. [10], Chap. 15). The spectral statistics found in [12] in the complex case are the following: given an \(N\times N\) matrix with independent entries \(\frac{1}{\sqrt{N}}z_{ij}\), the \(z_{ij}\)’s being identically distributed according to the standard complex Gaussian measure \(\mu _g=\frac{1}{\pi }e^{-|z|^2}\mathrm{d A}(z)\) (where \(\mathrm{d A}\) denotes the Lebesgue measure on \(\mathbb{C }\)), its eigenvalues \(\mu _1,\dots ,\mu _N\) have a probability density proportional to

$$\begin{aligned} \prod _{i<j}|\mu _i-\mu _j|^2e^{-N\sum _{k}|\mu _k|^2}, \end{aligned}$$

with respect to the Lebesgue measure on \(\mathbb C ^{N}\). This law is a determinantal point process (because of the Vandermonde determinant) with an explicit kernel given by (see [12, 16] for a proof)

$$\begin{aligned} K_N(z_1,z_2)=\frac{N}{\pi }e^{-\frac{N}{2}(|z_1|^2+|z_2|^2)} \sum _{\ell =0}^{N-1}\frac{(N z_1\overline{z_2})^\ell }{\ell !}, \end{aligned}$$

with respect to the Lebesgue measure on \(\mathbb C \). This integrability property allowed Ginibre to derive the circular law for the eigenvalues, i.e., the empirical spectral distribution converges to the uniform measure on the unit disk,

$$\begin{aligned} \frac{1}{\pi }1\!\!1_{|z|<1}\mathrm{d A}(z). \end{aligned}$$
(1.1)
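The circular law is easy to visualize numerically. The following is a minimal sketch (our illustration, not part of the original derivation): sample a complex Ginibre matrix and check that the fraction of eigenvalues in a centered disk of radius \(r<1\) is close to \(r^2\), the mass assigned by (1.1).

```python
import numpy as np

# Sample a complex Ginibre matrix: i.i.d. standard complex Gaussian
# entries scaled by 1/sqrt(N), so each entry has variance 1/N.
rng = np.random.default_rng(0)
N = 500
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
mu = np.linalg.eigvals(X)

frac_half = float(np.mean(np.abs(mu) < 0.5))    # circular law predicts ~ 0.25
frac_disk = float(np.mean(np.abs(mu) < 1.05))   # essentially all eigenvalues
print(frac_half, frac_disk)
```

The sizes \(N=500\) and the radii are illustrative; the fluctuations of these counts are much smaller than for independent points, a fact quantified by the local law below.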

This phenomenon is the non-Hermitian counterpart of the semicircular law for Wigner random Hermitian matrices, and of the quarter-circular law for Marchenko–Pastur random covariance matrices.

In the case of real Gaussian entries, the joint distribution of the eigenvalues is more complicated but still integrable, allowing Edelman [7] to prove the limiting circular law as well; for more precise asymptotic properties of the real Ginibre ensemble, see [4, 11, 21]. We note also that the (right) eigenvalues of the quaternionic Ginibre ensemble were recently shown to converge to a (non-uniform) measure on the unit ball of the quaternions [3].

For non-Gaussian entries, there is no explicit formula for the eigenvalues. Furthermore, the spectral measure, as a measure on \(\mathbb C \), cannot be characterized by computing \({{\mathrm{Tr}}}(M^\alpha \bar{M}^\beta )\). Thus the moment method, which is the standard way to prove the semicircle law, cannot be applied to this problem. Nevertheless, Girko [13] partially proved that the spectral measure of a non-Hermitian matrix \(M\) with independent entries converges to the circular law (1.1). The key insight of this work was the introduction of the Hermitization technique, which allowed him to translate the convergence of complex empirical measures into the convergence of logarithmic transforms of a family of Hermitian matrices. More precisely, if we denote the original non-Hermitian matrix by \(X\) and the eigenvalues of \(X\) by \(\mu _j\), then for any \(C^2\) function \(F\) we have the identity

$$\begin{aligned} \frac{1}{N} \sum _{j=1}^N F (\mu _j) = \frac{1}{4\pi N} \int \Delta F(z) {{\mathrm{Tr}}}\log (X^* - z^* ) (X-z) \mathrm{d A}(z). \end{aligned}$$
(1.2)
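The identity (1.2) rests on the elementary determinant identity \({{\mathrm{Tr}}}\log (X^* - z^* ) (X-z)=\log |\det (X-z)|^2=2\sum _j\log |z-\mu _j|\). A quick numerical sanity check of this identity (a sketch we add for illustration, with sample sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
X = rng.standard_normal((N, N)) / np.sqrt(N)
z = 0.3 + 0.2j

mu = np.linalg.eigvals(X)
lhs = 2.0 * np.sum(np.log(np.abs(mu - z)))                 # 2 sum_j log|mu_j - z|
Y = X - z * np.eye(N)
rhs = np.sum(np.log(np.linalg.eigvalsh(Y.conj().T @ Y)))   # Tr log (X - z)*(X - z)
print(lhs, rhs)   # equal up to rounding
```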

From this formula, it is clear that the small eigenvalues of the Hermitian matrix \((X^* - z^* ) (X-z) \) play a special role due to the logarithmic singularity at \(0\). The key question is to estimate the smallest eigenvalues of \((X^* - z^* ) (X-z)\), or in other words, the smallest singular values of \( (X-z)\). This problem was not treated in [13], but the gap was filled in a series of papers. First, Bai [1] was able to treat the logarithmic singularity assuming bounded density and bounded high moments for the entries of the matrix (see also [2]). Lower bounds on the smallest singular values were given by Rudelson and Vershynin [19, 20]; subsequently Tao and Vu [22], Pan and Zhou [17] and Götze and Tikhomirov [14] weakened the moment and smoothness assumptions for the circular law, until the optimal \(\text{ L }^2\) assumption, under which the circular law was proved in [23].
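As a numerical illustration of the quantity at stake (a sketch with illustrative sizes; this is no substitute for the lower bounds of [19, 20]): for \(z\) inside the bulk, the smallest singular value of \(X-z\) is typically of order \(N^{-1}\), in particular far above the scale at which the logarithm in (1.2) could cause trouble.

```python
import numpy as np

rng = np.random.default_rng(2)
N, z = 200, 0.5
# smallest singular value of X - z for a sample real matrix X
smin = np.linalg.svd(rng.standard_normal((N, N)) / np.sqrt(N) - z * np.eye(N),
                     compute_uv=False).min()
print(smin)   # typically of order 1/N
```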

The purpose of this paper is to prove a local version of the circular law, up to the optimal scale \(N^{-1/2 + {\varepsilon }}\) (see Sect. 2 for a precise statement). Below this scale, detailed local statistics will be important and that is beyond the scope of the current paper. The main tool of this paper is a detailed analysis of the self-consistent equations of the Green functions

$$\begin{aligned} G_{ij}(w) = [(X^* - z^* ) (X-z) - w]^{-1}_{ij}. \end{aligned}$$

Our method is related to the proof of the local semicircular law in [9] and of the local Marchenko–Pastur law in [18]. We are able to control \(G_{ij}(E + \mathrm{i}\eta )\) for the energy parameter \(E\) in any compact set and sufficiently small \(\eta \). This provides sufficient information to use the formula (1.2) for functions \(F\) at scales \(N^{-1/2+ {\varepsilon }}\). We also note that a local Marchenko–Pastur law for \(X^*X\) was proved in [5], simultaneously with the present article.

Finally, we remark that the local circular law demonstrates that the eigenvalue distribution in the unit disk is extremely “uniform”. If the eigenvalues were distributed as i.i.d. uniform points, or according to any other statistics with summable decay of correlations, there would be large holes and clusters of eigenvalues in the disk. While the usual circular law does not rule out these phenomena, the local law established in this paper does. This implies that the eigenvalue statistics cannot be given by any probability law with summable decay of correlations.
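This repulsion effect can be seen in a small simulation (our illustration; the \(N^{-3/4}\) order of the smallest Ginibre gap is a classical heuristic, not a result of this paper): the minimal gap between Ginibre eigenvalues is typically much larger than for the same number of i.i.d. uniform points in the disk, whose minimal gap is of order \(N^{-1}\).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
eig = np.linalg.eigvals(X)
# i.i.d. points, uniform on the unit disk, same number of points
unif = np.sqrt(rng.uniform(0, 1, N)) * np.exp(2j * np.pi * rng.uniform(0, 1, N))

def min_gap(pts):
    d = np.abs(pts[:, None] - pts[None, :])
    d[np.diag_indices_from(d)] = np.inf    # ignore the zero diagonal
    return float(d.min())

gap_eig, gap_unif = min_gap(eig), min_gap(unif)
print(gap_eig, gap_unif)   # the Ginibre gap is typically much larger
```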

2 The local circular law

We first introduce some notation. Let \(X\) be an \(N \times N\) matrix with independent centered entries of variance \( N^{-1} \). The matrix elements can be either real or complex, but for the sake of simplicity we will consider real entries in this paper. Denote the eigenvalues of \(X\) by \(\mu _j, j=1, \ldots , N\). We will use the following notion of stochastic domination, which simplifies the presentation of the results and their proofs.

Definition 2.1

(Stochastic domination) Let \(W=(W_N)_{N\ge 1}\) be a family of random variables and \(\Psi =(\Psi _N)_{N\ge 1}\) be a family of deterministic parameters. We say that \(W\) is stochastically dominated by \(\Psi \) if for any \( \sigma > 0\) and \(D > 0\) we have

$$\begin{aligned} \mathbb P \Bigl [{\bigl |W_N \bigr | > N^\sigma \Psi _N }\Bigr ] \;\le \; N^{-D} \end{aligned}$$

for sufficiently large \(N\). We denote this stochastic domination property by

$$\begin{aligned} W \;\prec \; \Psi \,,\quad or \quad W ={{\mathrm{O}}}_\prec (\Psi ). \end{aligned}$$

In this paper, we will assume that the probability distributions for the matrix elements have the uniform subexponential decay property, i.e.,

$$\begin{aligned} \sup _{(i,j)\in [\![ 1,N ]\!] ^2} \mathbb{P }\left( |\sqrt{N}X_{i,j}|>\lambda \right) \le \vartheta ^{-1} e^{-\lambda ^\vartheta } \end{aligned}$$
(2.1)

for some constant \(\vartheta >0\) independent of \(N\). This condition can of course be weakened to the assumption that sufficiently high moments are bounded, but the error estimates in the following theorem would be weakened as well. We now state our local circular law, which holds up to the optimal scale \(N^{-1/2+{\varepsilon }}\).

Theorem 2.2

Let \(X\) be an \(N \times N\) matrix with independent centered entries of variance \( N^{-1} \). Suppose that the probability distributions of the matrix elements satisfy the uniform subexponential decay condition (2.1). We assume that for some fixed \( \tau >0\) and for any \(N\) we have \(\tau \le ||z_0|-1|\le \tau ^{-1} \) (\(z_0\) can depend on \(N\)). Let \(f \) be a smooth non-negative function which may depend on \(N\), such that \(\Vert f\Vert _\infty \le C\), \(\Vert f^{\prime }\Vert _\infty \le N^C\) and \(f(z)=0\) for \(|z|\ge C\), for some constant \(C\) independent of \(N\). Let \(f_{z_0}(z)=N^{2a}f(N^{a}(z-z_0))\) be the approximate delta function obtained by rescaling \(f\) to scale \(N^{-a}\) around \(z_0\). We denote by \(D\) the unit disk. Then for any \(a\in (0,1/2]\),

$$\begin{aligned} \left( N^{-1} \sum _{j}f_{z_0} (\mu _j)-\frac{1}{\pi }\int _D f_{z_0}(z) \, \mathrm{d A}(z) \right) \prec N^{-1+2a } \Vert \Delta f \Vert _{L_1}. \end{aligned}$$
(2.2)
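A crude numerical illustration of the scale in Theorem 2.2 (a sketch using an indicator function instead of the smooth \(f\) of the theorem, with illustrative sizes): counting eigenvalues in a disk of radius \(N^{-a}\) around \(z_0=0\) should give about \(N\cdot (N^{-a})^2=N^{1-2a}\) of them.

```python
import numpy as np

rng = np.random.default_rng(4)
N, a = 500, 0.25
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
mu = np.linalg.eigvals(X)

count = int(np.sum(np.abs(mu) < N ** (-a)))   # eigenvalues within N^{-a} of z_0 = 0
print(count, N ** (1 - 2 * a))                # count fluctuates around N^{1-2a}
```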

3 Hermitization and local Green function estimate

In the following, we will use the notation

$$\begin{aligned} Y_z=X-z I \quad \end{aligned}$$

where \(I\) is the identity operator. Let \(\lambda _j(z)\) be the \(j\)th eigenvalue (in increasing order) of \(Y^*_z Y_z \). We will generally omit the \(z\)-dependence in this notation. Thanks to the Hermitization technique of Girko [13], the first step in proving the local circular law is to understand the local statistics of the eigenvalues of \(Y^*_z Y_z\), for \(z\) strictly inside the unit circle. In this section, we first recall some well-known facts about the Stieltjes transform of the empirical measure of the eigenvalues of \(Y^*_z Y_z\). We then present the key estimate concerning the Green function of \(Y^*_z Y_z\) in almost optimal spectral windows. This result will be used later on to prove a local version of the circular law.

3.1 Properties of the limiting density of the Hermitization matrix

Define the Green function of \(Y^*_z Y_z\) and its trace by

$$\begin{aligned} G(w):=G(w,z)&= (Y^*_z Y_z-w)^{-1},\nonumber \\ m(w):=m(w,z)&= \frac{1}{N}{{\mathrm{Tr}}}G(w,z) =\frac{1}{N}\sum _{j=1}^N\frac{1}{ \lambda _j(z) - w}, \quad w = E + \mathrm{i}\eta . \end{aligned}$$

We will also need the following version of the Green function later on:

$$\begin{aligned} \mathcal{G }(w):=\mathcal{G }(w,z)= ( Y_z Y^*_z-w)^{-1}. \quad \end{aligned}$$

As we will see, with high probability \(m(w,z)\) converges to \(m_\mathrm{c}(w,z)\) pointwise, as \(N\rightarrow \infty \) where \( m_\mathrm{c}(w,z)\) is the unique solution of

$$\begin{aligned} m_\mathrm{c}^{-1}=-w(1+m_\mathrm{c})+|z|^2(1+m_\mathrm{c})^{-1} \end{aligned}$$
(3.1)

with positive imaginary part (see Sect. 3 in [14] for the existence and uniqueness of such a solution). The limit \( m_\mathrm{c}(w,z)\) is the Stieltjes transform of a density \( \rho _\mathrm{c} (x,z)\) and we have

$$\begin{aligned} m_\mathrm{c}(w,z)= \int _\mathbb R \frac{\rho _\mathrm{c} (x,z)}{x-w}\mathrm{d}x \end{aligned}$$

whenever \(\eta >0\). The function \(\rho _\mathrm{c} (x,z)\) is the limiting eigenvalue density of the matrix \(Y^*_z Y_z\) (cf. Lemmas 4.2 and 4.3 in [1]). Let

$$\begin{aligned} \lambda _\pm :=\lambda _{\pm }(z):=\frac{( \alpha \pm 3)^3}{8(\alpha \pm 1)} ,\quad \alpha :=\sqrt{1+8|z|^2}. \end{aligned}$$
(3.2)

Note that \(\lambda _-\) has the same sign as \(|z|-1\). The following two propositions summarize the properties of \(\rho _\mathrm{c}\) and \(m_\mathrm{c}\) that we will need to understand the main results in this section. They will be proved in Appendix A. In the following, we use the notation \(A\sim B\) when \(c B \le A\le c^{-1}B\), where \(c>0\) is independent of \(N\).
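Clearing denominators in (3.1) gives the cubic \(w m_\mathrm{c}^3+2w m_\mathrm{c}^2+(w+1-|z|^2)m_\mathrm{c}+1=0\), which can be solved numerically. The following sketch (our illustration, with sample values) checks the residual of (3.1) for the root with positive imaginary part, and compares the top of a sample spectrum of \(Y^*_z Y_z\) with \(\lambda _+\) from (3.2).

```python
import numpy as np

def m_c(w, z):
    # cubic form of (3.1); select the root with positive imaginary part
    r2 = abs(z) ** 2
    roots = np.roots([w, 2 * w, w + 1 - r2, 1])
    return roots[np.argmax(roots.imag)]

z, w = 0.5, 1.0 + 0.5j
m = m_c(w, z)
res = abs(1 / m + w * (1 + m) - abs(z) ** 2 / (1 + m))   # residual of (3.1)

alpha = np.sqrt(1 + 8 * abs(z) ** 2)
lam_plus = (alpha + 3) ** 3 / (8 * (alpha + 1))          # edge from (3.2)

rng = np.random.default_rng(5)
N = 1000
Y = rng.standard_normal((N, N)) / np.sqrt(N) - z * np.eye(N)
lam_max = np.linalg.eigvalsh(Y.conj().T @ Y).max()
print(res, lam_max, lam_plus)   # residual ~ 0; lam_max close to lambda_+
```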

Proposition 3.1

The limiting density \(\rho _\mathrm{c}\) is compactly supported and the following properties regarding \(\rho _\mathrm{c}\) hold.

  1. (i)

    The support of \(\rho _\mathrm{c}(x, z)\) is \([\max \{0,\lambda _-\}, \lambda _+]\).

  2. (ii)

    As \(x\rightarrow \lambda _+\) from below, the behavior of \(\rho _\mathrm{c}(x, z)\) is given by \(\rho _\mathrm{c}(x, z)\sim \sqrt{\lambda _+-x}. \)

  3. (iii)

    For any \({\varepsilon }>0\), if \( \max \{0,\lambda _-\}+{\varepsilon }\le x \le \lambda _+-{\varepsilon }\), then \(\rho _\mathrm{c}(x, z)\sim 1\).

  4. (iv)

    Near \(\max \{0,\lambda _-\}\), the behavior of \(\rho _\mathrm{c}(x, z)\) can be classified as follows.

    • If \(|z|\ge 1+\tau \) for some fixed \(\tau >0\), then \(\lambda _-> {\varepsilon }(\tau ) > 0 \) and \(\rho _\mathrm{c}(x, z)\sim 1\!\!1_{x>\lambda _-}\sqrt{ x-\lambda _-}\).

    • If \(|z|\le 1-\tau \) for some fixed \(\tau >0\), then \(\lambda _-< - {\varepsilon }(\tau ) < 0\) and \(\rho _\mathrm{c}(x, z)\sim 1/ \sqrt{x} \).

    All of the estimates in this proposition are uniform in \(|z|<1-\tau \), or \(\tau ^{-1}\ge |z|\ge 1+\tau \) for fixed \(\tau >0\).

Proposition 3.2

The preceding Proposition implies that, uniformly in \(w\) in any compact set,

$$\begin{aligned} |m_\mathrm{c}(w,z)|={{\mathrm{O}}}(|w|^{-1/2} ) \end{aligned}$$

Moreover, the following estimates on \(m_\mathrm{c}(w,z)\) hold.

  • If \(|z|\ge 1+\tau \) for some fixed \(\tau >0\), then \(m_\mathrm{c}\sim 1\) for \(w\) in any compact set.

  • If \(|z|\le 1-\tau \) for some fixed \(\tau >0\), then \(m_\mathrm{c}\sim |w|^{-1/2} \) for \(w\) in any compact set.

3.2 Concentration estimate of the Green function up to the optimal scale

We now state precisely the estimate regarding the convergence of \(m\) to \(m_\mathrm{c}\). Since the matrix \(Y^*_z Y_z\) is symmetric, we will follow the approach of [9]. We will use extensively the following definition of high probability events.

Definition 3.3

(High probability events) Define

$$\begin{aligned} \varphi \;\mathrel {\mathop :}=\; (\log N)^{\log \log N}\,. \end{aligned}$$
(3.3)

Let \(\zeta > 0\). We say that an \(N\)-dependent event \(\Omega \) holds with \(\zeta \) -high probability if there is some constant \(C\) such that

$$\begin{aligned} \mathbb P (\Omega ^c) \;\le \; N^C \exp (-\varphi ^\zeta ) \end{aligned}$$

for large enough \(N\).
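For orientation (a numerical aside, not from the paper): \(\varphi \) is sub-polynomial, i.e. \(\varphi \le N^{\sigma }\) for every \(\sigma >0\) once \(N\) is large enough, although for small \(\sigma \) this only takes effect for astronomically large \(N\).

```python
import numpy as np

def phi(N):
    # phi = (log N)^{log log N}, cf. (3.3)
    return np.log(N) ** np.log(np.log(N))

print(phi(1e8), (1e8) ** 0.1)                      # phi still exceeds N^{0.1} at N = 1e8
print(phi(np.exp(400.0)), np.exp(400.0) ** 0.1)    # but falls below it for huge N
```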

For \( \alpha \ge 0\), define the \(z\)-dependent set

$$\begin{aligned} \underline{\mathrm{S}}(\alpha )\;\mathrel {\mathop :}=\; \bigl \{{w \!\in \! \mathbb C \,\mathrel {\mathop :}\,\max (\lambda _-/5, 0) \!\le \! E \!\le \! 5\lambda _+ \,,\; \varphi ^\alpha N^{-1} |m_\mathrm{c}|^{-1}\!\le \! \eta \le 10 }\bigr \},\quad \end{aligned}$$
(3.4)

where \(\varphi \) is defined in (3.3). Here we have suppressed the explicit \(z\)-dependence. Notice that for \(|z|<1-{\varepsilon }\), as \(|m_\mathrm{c}|\sim |w|^{-1/2}\) we allow \(\eta \sim |w| \sim {N^{-2} \varphi ^{2\alpha }}\) in the set \(\underline{\mathrm{S}} (\alpha )\). This is a key feature of our approach, which shows that the Green function estimates hold down to a scale much smaller than the typical \(N^{-1}\) value of \(\eta \).

Theorem 3.4

(Strong local Green function estimates) Suppose \(\tau \le ||z|-1|\le \tau ^{-1} \) for some \(\tau >0\) independent of \(N\). Then for any \(\zeta >0\), there exists \(C_\zeta >0\) such that the following event holds with \( \zeta \)-high probability:

$$\begin{aligned} \bigcap _{w \in \underline{\mathrm{S}} (C_\zeta )} \biggl \{{|m(w)-m_\mathrm{c}(w)| \le \varphi ^{C_\zeta } \frac{1}{N\eta }}\biggr \}. \end{aligned}$$
(3.5)

Moreover, the individual matrix elements of the Green function satisfy, with \( \zeta \)-high probability,

$$\begin{aligned} \bigcap _{w \in \underline{\mathrm{S}} (C_\zeta )} \biggl \{{\max _{ij}\left| G_{ij}-m_\mathrm{c}\delta _{ij}\right| \le \varphi ^{C_\zeta } \left( \sqrt{\frac{{{\mathrm{Im}}}\, m_\mathrm{c} }{N\eta }}+ \frac{1}{N\eta }\right) }\biggr \}. \end{aligned}$$
(3.6)
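A numerical sketch of (3.5) (our illustration, with sample values of \(N\), \(z\) and \(w\)): compare the empirical Stieltjes transform of \(Y^*_z Y_z\) with \(m_\mathrm{c}\) computed from the cubic form of (3.1), at a mesoscopic \(\eta \).

```python
import numpy as np

rng = np.random.default_rng(6)
N, z = 1000, 0.5
Y = rng.standard_normal((N, N)) / np.sqrt(N) - z * np.eye(N)
lam = np.linalg.eigvalsh(Y.conj().T @ Y)

w = 1.0 + 0.1j                                  # E = 1 in the bulk, eta = 0.1 >> 1/N
m_emp = np.mean(1.0 / (lam - w))                # empirical Stieltjes transform
roots = np.roots([w, 2 * w, w + 1 - abs(z) ** 2, 1])
m_th = roots[np.argmax(roots.imag)]             # m_c: root of (3.1) with Im > 0
err = abs(m_emp - m_th)
print(err)                                      # of order 1/(N*eta)
```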

4 Properties of \(\rho _\mathrm{c}\) and \(m_\mathrm{c}\)

We have introduced some basic properties of \(\rho _\mathrm{c}\) and \(m_\mathrm{c}\) in Proposition 3.1 and 3.2. In this section, we collect some more useful properties used in this paper, proved in Appendix A. Recall that \(w = E + \mathrm{i}\eta , \alpha =\sqrt{1+8|z|^2}\) from (3.2), and define \(\kappa := \kappa (w, z) \) as the distance from \(E\) to \(\{\lambda _+, \lambda _-\}\):

$$\begin{aligned} \kappa =\min \{|E-\lambda _-|, |E-\lambda _+|\}. \end{aligned}$$
(4.1)

For \(|z| < 1\), we have \(\lambda _- < 0\) (see Proposition 3.1), so in this case we define \(\kappa :=|E-\lambda _+|\).

Lemma 4.1

There exists \(\tau _0>0\) such that for any \(\tau \le \tau _0\) if \(|z|\le 1-\tau \) and \(|w|\le \tau ^{-1} \) then the following properties concerning \(m_\mathrm{c}\) hold. All constants in the following estimates depend on \(\tau \).

  • Case 1: \(E\ge \lambda _+\) and \(|w-\lambda _+|\ge \tau \). We have

    $$\begin{aligned} |{{\mathrm{Re}}}m_\mathrm{c}|\sim 1, \quad -\frac{1}{2}\le {{\mathrm{Re}}}m_\mathrm{c} <0 , \quad {{\mathrm{Im}}}\, m_\mathrm{c}\sim \eta . \end{aligned}$$
    (4.2)
  • Case 2: \(|w-\lambda _+|\le \tau \) (Notice that there is no restriction on whether \(E\le \lambda _+\) or not ). We have

    $$\begin{aligned} m_\mathrm{c}(w, z)=- \frac{2}{3+\alpha } +\sqrt{\frac{8(1+\alpha )^3}{\alpha (3+\alpha )^5}}\, (w-\lambda _+ )^{1/2} +{{\mathrm{O}}}(\lambda _+-w), \end{aligned}$$
    (4.3)

    and

$$\begin{aligned} {{\mathrm{Im}}}\, m_\mathrm{c}\sim \left\{ \begin{array}{ll} \frac{\eta }{\sqrt{ \kappa }} &\quad \text{if } \kappa \ge \eta \text{ and } E\ge \lambda _+, \\ \sqrt{ \eta } &\quad \text{if } \kappa \le \eta \text{ or } E\le \lambda _+. \end{array}\right. \end{aligned}$$
    (4.4)
  • Case 3: \(|w|\le \tau \). We have

$$\begin{aligned} m_\mathrm{c}(w,z)=\mathrm{i}\frac{\sqrt{1-|z|^2}}{\sqrt{w}} +\frac{1-2|z|^2}{2|z|^2-2}+{{\mathrm{O}}}(\sqrt{w}) \end{aligned}$$
    (4.5)

    as \(w\rightarrow 0\), and

    $$\begin{aligned} {{\mathrm{Im}}}\, m_\mathrm{c}(w,z)\sim |w|^{-1/2}. \end{aligned}$$
    (4.6)
  • Case 4: \(|w|\ge \tau , |w-\lambda _+|\ge \tau \) and \(E\le \lambda _+\). We have

    $$\begin{aligned} |m_\mathrm{c}|\sim 1,\quad {{\mathrm{Im}}}\, m_\mathrm{c}\sim 1. \end{aligned}$$
    (4.7)

Here Case 1 covers the regime where \(E \ge \lambda _+\) and \(w\) is far away from \(\lambda _+\). Case 2 concerns the regime where \(w\) is near \(\lambda _+\), while Case 3 concerns \(w\) near the origin. Finally, Case 4 covers the \(w\) not included in the first three cases.
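The blow-up in Case 3 can be checked numerically (a sketch with sample values): solving the cubic form \(w m^3+2w m^2+(w+1-|z|^2)m+1=0\) of (3.1) at two small values of \(w\), the growth of \({{\mathrm{Im}}}\, m_\mathrm{c}\) matches the \(|w|^{-1/2}\) rate of (4.6).

```python
import numpy as np

def m_c(w, r2):
    roots = np.roots([w, 2 * w, w + 1 - r2, 1])   # cubic form of (3.1)
    return roots[np.argmax(roots.imag)]           # root with Im > 0

r2 = 0.25                                         # |z| = 0.5
m1, m2 = m_c(1e-4j, r2), m_c(1e-6j, r2)
ratio = m2.imag / m1.imag
print(m1.imag, m2.imag, ratio)   # Im m_c grows like |w|^{-1/2}: ratio ~ 10
```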

Lemma 4.2

There exists \( \tau _0>0\) such that for any \(\tau \le \tau _0\), if \(|z|\ge 1+\tau \) and \(|w|\le \tau ^{-1}\) then the following properties concerning \(m_\mathrm{c}\) hold. All constants in the following estimates depend on \(\tau \). Recall from (3.2) that \(\lambda _-=\frac{( \alpha -3)^3}{8(\alpha -1)} >0\).

  • Case 1: \(E\ge \lambda _+\) and \(|w-\lambda _+|\ge \tau \). We have

    $$\begin{aligned} |{{\mathrm{Re}}}m_\mathrm{c}|\sim 1,\quad -\frac{1}{2}\le {{\mathrm{Re}}}m_\mathrm{c}<0 , \quad {{\mathrm{Im}}}\, m_\mathrm{c}\sim \eta . \end{aligned}$$
  • Case 2: \(E\le \lambda _-\) and \(|w-\lambda _-|\ge \tau \). We have

    $$\begin{aligned} |{{\mathrm{Re}}}m_\mathrm{c}|\sim 1,\quad 0\le {{\mathrm{Re}}}m_\mathrm{c}, \quad {{\mathrm{Im}}}\, m_\mathrm{c}\sim \eta . \end{aligned}$$
  • Case 3: \(|\kappa +\eta |\le \tau \). We have

$$\begin{aligned} m_\mathrm{c}(w, z)&= \frac{2}{-3\mp \alpha } + \sqrt{\frac{8(\pm 1+\alpha )^3}{\pm \alpha (\pm 3+\alpha )^5}}\, (w-\lambda _\pm )^{1/2} +{{\mathrm{O}}}(\lambda _\pm -w), \nonumber \\ {{\mathrm{Im}}}\, m_\mathrm{c}&\sim \left\{ \begin{array}{ll} \frac{\eta }{\sqrt{ \kappa }} &\quad \text{if } \kappa \ge \eta \text{ and } E\notin [\lambda _-, \lambda _+], \\ \sqrt{\eta } &\quad \text{if } \kappa \le \eta \text{ or } E\in [\lambda _-, \lambda _+]. \end{array} \right. \end{aligned}$$
    (4.8)
  • Case 4: \(|w|\ge \tau \), \(|w-\lambda _+|\ge \tau \) and \(\lambda _- \le E\le \lambda _+\). We have

    $$\begin{aligned} |m_\mathrm{c}|\sim 1,\quad {{\mathrm{Im}}}\, m_\mathrm{c}\sim 1. \end{aligned}$$

Here Case 1 covers the regime \(E \ge \lambda _+\) and \(w\) is far away from \(\lambda _+\). Case 2 concerns the regime \(E \le \lambda _-\) and \(w\) is far away from \(\lambda _-\). Case 3 is for \(w\) near \(\lambda _\pm \). Finally Case 4 is for \(w\) not covered by the first three cases.

The following lemma concerns the two cases covered in Lemmas 4.1 and 4.2, i.e., \(z\) is either strictly inside or outside of the unit disk.

Lemma 4.3

There exists \( \tau _0 >0\) such that for any \(\tau \le \tau _0\), if either the conditions \(|z| \le 1-\tau \) and \(|w|\le \tau ^{-1}\) hold, or the conditions \(|z| \ge 1+\tau \), \(|w|\le \tau ^{-1}\), \({{\mathrm{Re}}}\, w \ge \lambda _-/5\) hold, then we have the following three bounds concerning \(m_\mathrm{c}\) (all constants in the following estimates depend on \(\tau \)):

$$\begin{aligned}&\displaystyle |m_\mathrm{c}+1|\sim |m_\mathrm{c}|\sim |w|^{-1/2},\end{aligned}$$
(4.9)
$$\begin{aligned}&\displaystyle \left| {{\mathrm{Im}}}\frac{1}{ w(1+m_\mathrm{c})} \right| \le C {{\mathrm{Im}}}\, m_\mathrm{c}, \end{aligned}$$
(4.10)
$$\begin{aligned}&\displaystyle \left| (-1 + |z^2|) \left( m_\mathrm{c}-\frac{-2}{3+\alpha } \right) \left( m_\mathrm{c}-\frac{-2}{3-\alpha }\right) \right| \ge C\frac{\sqrt{\kappa +\eta }}{ |w|}. \end{aligned}$$
(4.11)

5 Proof of Theorem 2.2, local circular law in the bulk

Our main tool in this section will be Theorem 3.4, which critically uses the hypothesis \(||z|-1|\ge \tau \): when \(z\) is on the unit circle, the self-consistent equation (a fixed point equation for the function \(g(m)=(1+w m(1+m)^2)/(|z|^2-1)\), see (6.21) later in this paper) becomes unstable.

We follow Girko’s idea [13] of Hermitization, which can be reformulated as the following identity (see e.g. [15]): for any smooth \(F\)

$$\begin{aligned} \frac{1}{N} \sum _{j=1}^N F (\mu _j)&=\frac{1}{4\pi N} \int \Delta F(z) \sum _j \log ( z-\mu _j )(\bar{z} - \bar{\mu }_j)\,\mathrm{d A}(z)\\&=\frac{1}{4\pi N} \int \Delta F(z) {{\mathrm{Tr}}}\log Y^*_z Y_z \,\mathrm{d A}(z). \end{aligned}$$
(5.1)

We will use the notation \(z= z(\xi )=z_0+N^{-a} \xi \). Choosing \(F= f_{z_0}\) defined in Theorem 2.2 and changing the variable to \(\xi \), we can rewrite the identity (5.1) as

$$\begin{aligned} N^{-1} \sum _j f_{z_0}(\mu _j)&= \frac{1}{4\pi } N^{-1+2a}\int (\Delta f)(\xi ) {{\mathrm{Tr}}}\log Y^*_z Y_z \mathrm{d A}(\xi )\nonumber \\&= \frac{1}{4\pi } N^{-1+2a}\int (\Delta f)(\xi ) \sum _j \log \lambda _j(z) \mathrm{d A}(\xi ). \end{aligned}$$

Recall that \(\lambda _j(z)\)’s are the ordered eigenvalues of \(Y_z^* Y_z \), and define \(\gamma _j(z)\) as the classical location of \(\lambda _j(z)\), i.e.

$$\begin{aligned} \int _{0 }^ {\gamma _j(z) }\rho _\mathrm{c}(x,z)\mathrm{d}x=j/N. \end{aligned}$$
(5.2)
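The classical locations can be computed numerically from (5.2): recover \(\rho _\mathrm{c}(x,z)={{\mathrm{Im}}}\, m_\mathrm{c}(x+\mathrm{i}0)/\pi \) from the cubic form of (3.1) and integrate. The sketch below (our illustration, with sample values) uses the substitution \(x=t^2\) to tame the \(x^{-1/2}\) singularity of \(\rho _\mathrm{c}\) at the origin when \(|z|<1\).

```python
import numpy as np

z = 0.5
r2 = z ** 2
alpha = np.sqrt(1 + 8 * r2)
lam_plus = (alpha + 3) ** 3 / (8 * (alpha + 1))   # upper edge from (3.2)

def rho(x, eps=1e-9):
    # density = Im m_c(x + i0) / pi, from the cubic form of (3.1)
    w = x + 1j * eps
    return np.roots([w, 2 * w, w + 1 - r2, 1]).imag.max() / np.pi

t = np.linspace(1e-4, np.sqrt(lam_plus), 4000)    # x = t^2
vals = np.array([rho(ti ** 2) * 2 * ti for ti in t])
cdf = np.cumsum(vals) * (t[1] - t[0])
mass = float(cdf[-1])                             # should be ~ 1

N = 100
gamma = [float(t[np.searchsorted(cdf, j / N)] ** 2) for j in range(1, N)]
print(mass, gamma[49])   # total mass ~ 1; gamma_50 sits in the bulk
```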

Suppose we have

$$\begin{aligned} \left| \int \Delta f(\xi ) \left( \sum _j \log \lambda _j(z (\xi ))- \sum _j \log \gamma _j(z(\xi ))\right) \mathrm{d A}(\xi ) \right| \prec \Vert \Delta f \Vert _{L_1}. \end{aligned}$$
(5.3)

Thanks to Proposition 3.1, one can check that uniformly in \( |z| < 1-\tau \), and also in the domain \(1+\tau \le |z|\le \tau ^{-1}\) (\(\tau >0\)), for any \(\delta >0\) we have

$$\begin{aligned} \left| \sum _j \log \gamma _j(z)- N \left( \int _0^\infty (\log x)\rho _\mathrm{c}(x,z) \mathrm{d}x\right) \right| \le N^{\delta } \end{aligned}$$

for large enough \(N\). We therefore have

$$\begin{aligned} N^{-1} \sum _j f_{z_0}(\mu _j) = \frac{1}{4\pi }\int f(\xi )\left( \int _0^\infty (\log x)\Delta _z\rho _\mathrm{c}(x,z) \mathrm{d}x\right) \mathrm{d A}(\xi ) +{{\mathrm{O}}}_\prec \left( N^{-1+2a}\Vert \Delta f \Vert _{L_1}\right) \nonumber \\ \end{aligned}$$
(5.4)

where we have used that

$$\begin{aligned}&\frac{1}{4\pi } N^{2a}\int \Delta f(\xi ) \int _0^\infty (\log x)\rho _\mathrm{c}(x,z) \mathrm{d}x \mathrm{d A}(\xi ) \\&\quad = \frac{1}{4\pi }\int f(\xi )\left( \int _0^\infty (\log x)\Delta _z\rho _\mathrm{c}(x,z) \mathrm{d}x\right) \mathrm{d A}(\xi ). \end{aligned}$$

It is known, by Lemma 4.4 of [1], that

$$\begin{aligned} \int _0^\infty (\log x)\Delta _z\rho _\mathrm{c}(x,z) \,\mathrm{d}x =4 \chi _D (z). \end{aligned}$$
(5.5)
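For \(|z|<1\), this is consistent with the classical evaluation \(\int _0^\infty (\log x)\rho _\mathrm{c}(x,z)\,\mathrm{d}x=|z|^2-1\), the logarithmic potential of the circular law, whose \(z\)-Laplacian is \(4\) (this evaluation is standard potential theory, not taken from this paper). A numerical sketch, with sample values:

```python
import numpy as np

z = 0.5
r2 = z ** 2
alpha = np.sqrt(1 + 8 * r2)
lam_plus = (alpha + 3) ** 3 / (8 * (alpha + 1))

def rho(x, eps=1e-9):
    # density = Im m_c(x + i0) / pi, from the cubic form of (3.1)
    w = x + 1j * eps
    return np.roots([w, 2 * w, w + 1 - r2, 1]).imag.max() / np.pi

t = np.linspace(1e-4, np.sqrt(lam_plus), 4000)            # x = t^2 kills the 1/sqrt(x) blow-up
vals = np.array([rho(ti ** 2) * 2 * ti * np.log(ti ** 2) for ti in t])
log_pot = float(((vals[:-1] + vals[1:]) / 2).sum() * (t[1] - t[0]))
print(log_pot, r2 - 1)   # both ~ -0.75 for |z| = 0.5
```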

Combining (5.4) and (5.5), we have proved (2.2) provided that we can prove (5.3). To prove (5.3), we need the following rigidity estimate which is a consequence of Theorem 3.4.

Lemma 5.1

Suppose \(\tau \le ||z|-1|\le \tau ^{-1} \) for some \(\tau >0\) independent of \(N\). Then for any \(\zeta >0\), there exists \(C_\zeta >0\) such that the following event holds with \( \zeta \)-high probability: for any \( \varphi ^{C_\zeta }<j<N-\varphi ^{C_\zeta }\) we have

$$\begin{aligned} \gamma _{j-\varphi ^{C_\zeta }}\le \lambda _j\le \gamma _{j+\varphi ^{C_\zeta }}. \end{aligned}$$
(5.6)

and in the case \(|z|\le 1-\tau \),

$$\begin{aligned} \frac{| \lambda _{j }-\gamma _j|}{\gamma _j}\le \frac{C \varphi ^{C_\zeta }}{ j (1- \frac{j}{N})^{1/3} }, \end{aligned}$$
(5.7)

in the case \(|z|\ge 1+\tau \),

$$\begin{aligned} \frac{| \lambda _{j }-\gamma _j|}{\gamma _j}\le \frac{C \varphi ^{C_\zeta }}{ (\min \{\frac{j}{N}, 1-\frac{j}{N}\})^{1/3}N }. \end{aligned}$$
(5.8)

Proof

First, with (3.5) and the definition (3.4), for any \(\zeta \) there exists \(C_\zeta >0\) such that

$$\begin{aligned} \max _{E+i\eta \in \underline{\mathrm{S}}(C_\zeta )} { \eta | m(E+\mathrm{i}\eta )- m_\mathrm{c}(E+\mathrm{i}\eta )|\le C\varphi ^{2C_\zeta }N^{-1}} \end{aligned}$$
(5.9)

holds with \( \zeta \)-high probability. It also implies that for \(\eta =\varphi ^{C_\zeta }N^{-1}|m_\mathrm{c} |^{-1}\),

$$\begin{aligned} \eta \,{{\mathrm{Im}}}\, m(E+\mathrm{i}\eta ) \le C\varphi ^{2C_\zeta }N^{-1}.\quad \end{aligned}$$
(5.10)

Then using the fact that \( \eta \, {{\mathrm{Im}}}\,m(E+\mathrm{i}\eta ) \) and \( \eta \,{{\mathrm{Im}}}\, m_\mathrm{c}(E+\mathrm{i}\eta ) \) are increasing in \(\eta \), we obtain that (5.10) holds for any \(0\le \eta \le {{\mathrm{O}}}( \varphi ^{C_\zeta }N^{-1}|m_\mathrm{c} |^{-1})\) with \( \zeta \)-high probability. Notice that \({{\mathrm{Im}}}\, m\) and \({{\mathrm{Im}}}\, m_\mathrm{c}\) are positive. Define the interval

$$\begin{aligned} I_E=[E_1,E_2]=[\gamma _j, 4\lambda _+] \end{aligned}$$

and define \(\eta _j\ge 0\) as the smallest positive solution of

$$\begin{aligned} \eta _{j} =2\varphi ^{C_\zeta }|m_\mathrm{c}(E_j+ \mathrm{i}\eta _j)|^{-1}N^{-1},\quad j=1, \;2. \end{aligned}$$

Since

$$\begin{aligned} \#\{j: E-\eta \le \lambda _j\le E+\eta \}\le CN\eta {{\mathrm{Im}}}\, m(E+\mathrm{i}\eta ), \end{aligned}$$

we have by (5.10) that

$$\begin{aligned} \#\{j: E_1-\eta _1\le \lambda _j\le E_1+\eta _1\} + \#\{j: E_2-\eta _2\le \lambda _j\le E_2+\eta _2\}\le C\varphi ^{2C_\zeta }.\nonumber \\ \end{aligned}$$
(5.11)
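The counting bound used above is deterministic: each \(\lambda _j\) with \(|\lambda _j-E|\le \eta \) contributes at least \(1/(2N\eta )\) to \({{\mathrm{Im}}}\, m(E+\mathrm{i}\eta )\), so \(\#\{j: |\lambda _j-E|\le \eta \}\le 2N\eta \,{{\mathrm{Im}}}\, m(E+\mathrm{i}\eta )\) for any point configuration. A sketch with an arbitrary stand-in spectrum (our illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
lam = rng.uniform(0, 5, 1000)        # arbitrary stand-in spectrum
N, E, eta = len(lam), 2.0, 0.05

m = np.mean(1.0 / (lam - (E + 1j * eta)))      # empirical Stieltjes transform
count = int(np.sum(np.abs(lam - E) <= eta))    # eigenvalues in the window
bound = 2 * N * eta * m.imag
print(count, bound)   # count <= bound holds deterministically
```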

We now use the Helffer–Sjöstrand functional calculus (see e.g. [6]). Let \(\chi (\eta )\) be a smooth cutoff function with support in \([-1,1]\), with \(\chi (\eta )= 1\) for \( |\eta |\le 1/2\) and with bounded derivatives. Then for any \(q: \mathbb R \rightarrow \mathbb R \),

$$\begin{aligned} q(\lambda )=\frac{1}{2\pi }\int _\mathbb{R ^2}\frac{\mathrm{i}y q^{\prime \prime }(x)\chi (y)+\mathrm{i}(q(x)+\mathrm{i}y q^{\prime }(x))\chi ^{\prime }(y)}{\lambda -x-\mathrm{i}y}\mathrm{d}x\mathrm{d}y. \end{aligned}$$
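This representation can be verified numerically (a sketch; the choices \(q(x)=e^{-x^2}\), \(\lambda =0.3\) and the \(\cos ^2\) ramp for \(\chi \) are ours, for illustration). By the \(\eta \rightarrow -\eta \) symmetry of the integrand it suffices to integrate over \(\eta >0\) and take twice the real part.

```python
import numpy as np

lam = 0.3
x = np.linspace(-5.0, 5.0, 2001)
y = np.linspace(0.01, 1.0, 500)      # y > 0 half-plane; strip y < 0.01 is negligible
X, Y = np.meshgrid(x, y)

q   = np.exp(-X ** 2)
qp  = -2 * X * q                     # q'
qpp = (4 * X ** 2 - 2) * q           # q''
chi  = np.where(Y <= 0.5, 1.0, np.cos(np.pi * (Y - 0.5)) ** 2)   # smooth cutoff
chip = np.where((Y > 0.5) & (Y < 1.0), -np.pi * np.sin(2 * np.pi * (Y - 0.5)), 0.0)

integrand = (1j * Y * qpp * chi + 1j * (q + 1j * Y * qp) * chip) / (lam - X - 1j * Y)

dx, dy = x[1] - x[0], y[1] - y[0]
row = ((integrand[:, :-1] + integrand[:, 1:]) / 2).sum(axis=1) * dx   # trapezoid in x
val = float((2 * ((row[:-1] + row[1:]) / 2).sum() * dy / (2 * np.pi)).real)
print(val, np.exp(-lam ** 2))   # both ~ 0.914
```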

To prove (5.6), we choose \(q\) to be supported in \([E_1 , E_2 ]\) such that \(q(x)=1\) if \(x\in [E_1+\eta _1, E_2-\eta _2]\) and \(|q^{\prime }|\le C(\eta _{i})^ {-1}, |q^{\prime \prime }|\le C(\eta _{i})^{-2}\) if \(|x-E_i|\le \eta _i\). We now claim that

$$\begin{aligned} \left| \int q(\lambda )\Delta \rho (\lambda )\mathrm{d}\lambda \right| \le C\varphi ^{2C_\zeta }N^{-1},\quad \mathrm{where}\ \Delta \rho =\rho -\rho _\mathrm{c},\ \rho =\frac{1}{N}\sum _j \delta _{\lambda _j(z)}.\nonumber \\ \end{aligned}$$
(5.12)

Combining (5.12) and (5.11), we have for any \(1\le j\le N\),

$$\begin{aligned} \#\{k: \lambda _k\ge \gamma _j\}-(N-j)={{\mathrm{O}}}(\varphi ^{2C_\zeta }) \end{aligned}$$

which implies (5.6) with \(C_\zeta \) in (5.6) replaced by \(2 C_\zeta \).

It remains to prove (5.12). Since \(q\) and \(\chi \) are real, with \(\Delta m=m-m_\mathrm{c}\)

$$\begin{aligned} \left| \int q(\lambda )\Delta \rho (\lambda )\mathrm{d}\lambda \right|&\le C\int _\mathbb{R ^2} \big ( |q(E)| +|\eta | |q^{\prime }(E)|\big ) |\chi ^{\prime }(\eta )| | \Delta m(E+\mathrm{i}\eta )| \mathrm{d}E\mathrm{d}\eta \nonumber \\&+C\sum _i\left| \ \int _{|\eta |\le \eta _i}\int _{|E-E_i|\le \eta _i} \eta q^{\prime \prime }(E) \chi (\eta ) {{\mathrm{Im}}}\Delta m(E+\mathrm{i}\eta )\mathrm{d}E\mathrm{d}\eta \right| \nonumber \\&+ C\sum _i\left| \ \int _{|\eta |\ge \eta _i}\int _{|E-E_i|\le \eta _i} \eta q^{\prime \prime }(E)\chi (\eta ) {{\mathrm{Im}}}\Delta m(E+\mathrm{i}\eta )\mathrm{d}E \mathrm{d}\eta \right| ,\nonumber \\ \end{aligned}$$
(5.13)

The first term is estimated by

$$\begin{aligned} \int _\mathbb{R ^2} ( |q(E)| +|\eta | |q^{\prime }(E)|) |\chi ^{\prime }(\eta )| | \Delta m(E+\mathrm{i}\eta )| \mathrm{d}E\mathrm{d}\eta \le CN^{-1}\varphi ^{C_\zeta }, \end{aligned}$$
(5.14)

using (3.5) and the fact that the support of \(\chi ^{\prime }\) is contained in \(\{1/2\le |\eta |\le 1\}\).

For the second term in the r.h.s. of (5.13), with \(|q^{\prime \prime }|\le C\eta _i^{-2}\), (5.9) and (5.10), we obtain

$$\begin{aligned} \text{ second } \text{ term } \text{ in } \text{ r.h.s. } \text{ of } (5.13) \le CN^{-1}\varphi ^{C_\zeta }. \end{aligned}$$
(5.15)

We now integrate the third term in (5.13) by parts, first in \(E\), then in \(\eta \) (using the Cauchy–Riemann equation \(\frac{\partial }{\partial E}{{\mathrm{Im}}}(\Delta m)=-\frac{\partial }{\partial \eta } {{\mathrm{Re}}}(\Delta m)\)), so that

$$\begin{aligned}&\int \eta q^{\prime \prime }(E) \chi (\eta ) {{\mathrm{Im}}}(\Delta m(E+\mathrm{i}\eta ))\mathrm{d}E\mathrm{d}\eta \nonumber \\&\quad = -\int _{|E-E_i|\le \eta _i} \eta _i \chi (\eta _i) q^{\prime } (E){{\mathrm{Re}}}(\Delta m(E+\mathrm{i}\eta _i)) \mathrm{d}E \\&\quad \quad - \int (\eta \chi ^{\prime }(\eta )+\chi (\eta )) q^{\prime }(E){{\mathrm{Re}}}(\Delta m(E+\mathrm{i}\eta ))\mathrm{d}E\mathrm{d}\eta . \end{aligned}$$

We therefore can bound the third term in (5.13) with absolute value by

$$\begin{aligned}&C\sum _i\int _{ |E-E_i|\le \eta _i}\eta _i |q^{\prime }(E)| |{{\mathrm{Re}}}{ \Delta } m(E+\mathrm{i}\eta _i)|\mathrm{d}E \nonumber \\&\quad + C\sum _i \eta _i^{-1}\int _{\eta _i\le \eta \le 1}\int _{ |E-E_i|\le \eta _i} |{{\mathrm{Re}}}{ \Delta } m(E+\mathrm{i}\eta )|\mathrm{d}E\mathrm{d}\eta \nonumber \\&\quad +\int _\mathbb{R ^2} |\eta | |q^{\prime }(E)| |\chi ^{\prime }(\eta )| | \Delta m(E+\mathrm{i}\eta )| \mathrm{d}E\mathrm{d}\eta \end{aligned}$$
(5.16)

where the last term can be bounded as the first term in r.h.s. of (5.13). By using (5.9) we have

$$\begin{aligned} (5.16)\le CN^{-1}\varphi ^{C_\zeta }\!+\! CN^{-1}\varphi ^{C_\zeta }\sum _i\eta _i^{-1} \int _{|E-E_i|\le \eta _i}\mathrm{d}E \int _{\eta _i\le \eta \le 1}\frac{1}{\eta N}\mathrm{d}\eta \le CN^{-1}\varphi ^{C_\zeta +1} \end{aligned}$$

where we used \(\eta _i\ge N^{-C}\). Together with (5.14) and (5.15), we obtain (5.12) and complete the proof of (5.6).

Now we prove (5.7). Using (5.2) and Proposition 3.1, we have

$$\begin{aligned} \gamma _j={{\mathrm{O}}}( j^2N^{-2}), \quad j \le N/2; \qquad \gamma _j=\lambda _+ -{{\mathrm{O}}}\left( \left( \frac{ N-j}{ N} \right) ^{2/3}\right) \!, \quad j \ge N/2.\nonumber \\ \end{aligned}$$
(5.17)

One can check easily that

$$\begin{aligned} \gamma _j- \gamma _{j-1}={{\mathrm{O}}}\left( \frac{j}{N^{5/3}(N-j)^{1/3}} \right) \end{aligned}$$

and for \(j\ge 2\)

$$\begin{aligned} \frac{| \gamma _j-\gamma _{j\pm 1}|}{\gamma _j}\le C j^{-1}N^{1/3}(N-j)^{-1/3}\le \frac{C \varphi ^{C_\zeta } }{ j (1-\frac{j}{N})^{1/3} }. \end{aligned}$$
(5.18)

Combining (5.18) with (5.6), we obtain (5.7).

For (5.8), the proof is similar to the above reasoning but simpler: in this case \(\gamma _j\sim 1\) for \(j\le N/2\). For \(j\ge N/2\), \(\gamma _j\) is bounded as in (5.17), and one can check, using Proposition 3.1, that if \(1+\tau \le |z|\le \tau ^{-1}\) then

$$\begin{aligned} \gamma _j- \gamma _{j-1}={{\mathrm{O}}}\left( \left( \min \left\{ \frac{j}{N}, 1-\frac{j}{N}\right\} \right) ^{-1/3}N^{-1} \right) \end{aligned}$$

which implies (5.8).\(\square \)

We return to the proof of the local circular law, Theorem 2.2. We now only need to prove (5.3) from Lemma 5.1. From (5.7) and (5.8), we have

$$\begin{aligned} \left| \log \lambda _j(z)-\log \gamma _j(z)\right| \le C \frac{| \lambda _{j }-\gamma _j|}{\gamma _j}\le \frac{C \varphi ^{C_\zeta }}{ j (1- \frac{j}{N})^{1/3} }, \quad |z|\le 1-\tau \end{aligned}$$

and

$$\begin{aligned} \left| \log \lambda _j(z)\!-\!\log \gamma _j(z)\right| \!\le \! C \frac{| \lambda _{j }\!-\!\gamma _j|}{\gamma _j}\le \frac{C \varphi ^{C_\zeta }}{ (\min \{\frac{j}{N}, 1-\frac{j}{N}\})^{1/3}N }, \quad 1\!+\!\tau \le |z|\le \tau ^{-1}. \end{aligned}$$

Notice that, for large enough \(C\), there is a constant \(c>0\) such that for any \(j\) we have

$$\begin{aligned} \lambda _j\le N^C \end{aligned}$$

with probability larger than \(1-\exp ({{ -N^c}})\) (for this elementary fact, one can for example note that the entries of \(X\) are smaller than \(1\) with probability greater than \(1-\vartheta ^{-1}e^{-N^\vartheta }\) by the subexponential decay assumption (2.1) and then use \(\sum \lambda _j={{\mathrm{Tr}}}Y^* Y \)), so together with the above bounds on \(\left| \log \lambda _j(z)-\log \gamma _j(z)\right| \) this proves that for any \(\zeta >0\), there exists \(C_\zeta >0\) such that

$$\begin{aligned} \left| \sum _{j> \varphi ^{C_\zeta }} \left( \log \lambda _j(z)-\log \gamma _j(z)\right) \right| \le \varphi ^{2C_\zeta } \end{aligned}$$
(5.19)

with \( \zeta \)-high probability. Furthermore, one can see that our estimates hold uniformly for \(z\)’s in this region.
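The trace argument invoked above (each \(\lambda _j\) is dominated by \(\sum _j \lambda _j = {{\mathrm{Tr}}}\, Y^*Y\)) can be illustrated numerically; the Gaussian sample and the size below are ad hoc choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N, z = 200, 0.5
X = rng.standard_normal((N, N)) / np.sqrt(N)   # entries of size O(N^{-1/2})
Y = X - z * np.eye(N)

lam = np.linalg.eigvalsh(Y.T @ Y)              # lambda_1 <= ... <= lambda_N
# Every eigenvalue is dominated by the trace, which is O(N) once the
# (rescaled) matrix entries are bounded:
trace = np.trace(Y.T @ Y)
assert lam[-1] <= trace
print(lam[-1], trace / N)
```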

On the other hand, the following important Lemma 5.2 holds, concerning the smallest eigenvalue. It implies that

$$\begin{aligned} \sum _{j\le \varphi ^{C_\zeta }} | \log \lambda _j(z) | \prec 1 \end{aligned}$$

holds uniformly for \(z\) in any fixed compact set. It is easy to check that for any \(\delta >0\), for large enough \(N\),

$$\begin{aligned} \sum _{j\le \varphi ^{C_\zeta }} | \log \gamma _j(z) | \le N^{ \delta }. \end{aligned}$$

Hence we can extend the summation in (5.19) to all \(j \ge 1\), which gives (5.3) and completes the proof of Theorem 2.2.

Lemma 5.2

(Lower bound on the smallest eigenvalue) Under the same assumptions as Theorem 2.2,

$$\begin{aligned} | \log \lambda _1(z) | \prec 1 \end{aligned}$$

holds uniformly for \(z\) in any fixed compact set.

Proof

This lemma follows from [20] or Theorem 2.1 of [22], which gives the required estimate uniformly in \(z\). Note that the typical size of \(\lambda _1\) is \(N^{-2}\) [20], and we only need a much weaker bound of the type \(\mathbb{P }(\lambda _1(z)\le e^{-N^{{\varepsilon }}})\le N^{-C}\) for any \({\varepsilon },C>0\). This estimate is very simple to prove if, for example, the entries of \(X\) have a density bounded by \(N^C\). Then, from the variational characterization \(\lambda _1(z)=\min _{|u|=1}\Vert X(z)u\Vert ^2\), one easily gets

$$\begin{aligned}&\lambda _1(z)^{1/2}\ge N^{-1/2}\min _{k\in [\![ 1,N]\!]}\text{ dist }(X(z)e_k,\text{ span }\{X(z)e_\ell ,\ell \ne k\})\nonumber \\&\quad = N^{-1/2}\min _{k\in [\![ 1,N]\!]}|\langle X(z)e_k, u_k(z)\rangle |, \end{aligned}$$

where \(u_k(z)\) is a unit vector independent of \(X(z)e_k\). By conditioning on \(u_k(z)\), the result of this lemma is straightforward since the matrix entries have a density.\(\square \)
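The distance bound used in this proof, \(\lambda _1(z)^{1/2}\ge N^{-1/2}\min _k \mathrm{dist}(X(z)e_k,\mathrm{span}\{X(z)e_\ell ,\ell \ne k\})\), can be checked on a small random sample; the complex Gaussian model, the size, and the value of \(z\) below are assumptions of this sketch, not of the lemma:

```python
import numpy as np

rng = np.random.default_rng(1)
N, z = 60, 0.3 + 0.4j
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
M = X - z * np.eye(N)                             # stand-in for X(z)

smin = np.linalg.svd(M, compute_uv=False)[-1]     # lambda_1(z)^{1/2}

# dist(X(z)e_k, span{X(z)e_l, l != k}) via orthogonal projection
dists = []
for k in range(N):
    Q, _ = np.linalg.qr(np.delete(M, k, axis=1))  # orthonormal basis of the span
    v = M[:, k]
    dists.append(np.linalg.norm(v - Q @ (Q.conj().T @ v)))

assert smin >= min(dists) / np.sqrt(N) - 1e-12
```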

6 Weak local Green function estimate

In this section, we make a first step towards Theorem 3.4, with a weaker version of it, stated hereafter.

Theorem 6.1

(Weak local Green function estimates) Under the assumptions of Theorem 3.4, the following event holds with \( \zeta \)-high probability (see (3.4) for the definition of \(\underline{\mathrm{S}}\)):

$$\begin{aligned} \bigcap _{w \in \underline{\mathrm{S}} (b )} \biggl \{{\max _{ij}|G_{ij}(w)-m_\mathrm{c}(w)\delta _{ij}| \le \varphi ^{C_\zeta } \frac{1}{| w^{1/2}|} \left( \frac{| w^{1/2}|}{N\eta } \right) ^{1/4} }\biggr \}, \quad b > 5 C_\zeta .\qquad \end{aligned}$$
(6.1)

This theorem will be proved in the subsequent subsections.

6.1 Identities for Green functions and their minors

There are many different ways to form minors for the matrices \(Y^* Y\) and \( Y Y^* \). We will use the following definition (where we use the notation \([\![ a, b]\!]=[a,b]\cap \mathbb Z \)).

Definition 6.2

Let \(\mathbb{T }, \mathbb{U } \subset [\![1, N]\!]\). Then we define \(Y^{(\mathbb{T }, \mathbb{U })}\) as the \( (N-|\mathbb{U }|)\times ( N-|\mathbb{T }|) \) matrix obtained by removing all columns of \(Y\) indexed by \(i \in \mathbb{T }\) and all rows of \(Y\) indexed by \(i \in \mathbb{U }\). Notice that we keep the labels of indices of \(Y\) when defining \(Y^{(\mathbb{T }, \mathbb{U })}\).

Let \(\mathbf{y}_i\) be the \(i \)th column of \(Y\) and \(\mathbf{y}^{(\mathbb{S })}_i\) be the vector obtained by removing \(\mathbf{y}_i (j) \) for all \( j \in \mathbb{S }\). Similarly, we define \(\mathrm{y}_i\) to be the \(i \)th row of \(Y\). Define

$$\begin{aligned} G^{(\mathbb{T }, \mathbb{U })}&= \Big [ (Y^{(\mathbb{T }, \mathbb{U })})^* Y^{(\mathbb{T }, \mathbb{U })}- w\Big ]^{-1},\ \ m_G^{(\mathbb{T }, \mathbb{U })} =\frac{1}{N}{{\mathrm{Tr}}}G^{(\mathbb{T }, \mathbb{U })}, \\ \mathcal{G }^{(\mathbb{T }, \mathbb{U })}&= \Big [ Y^{(\mathbb{T }, \mathbb{U })}(Y^{(\mathbb{T }, \mathbb{U })})^*- w \Big ]^{-1},\ \ m_\mathcal{G }^{(\mathbb{T }, \mathbb{U })} =\frac{1}{N}{{\mathrm{Tr}}}\mathcal{G }^{(\mathbb{T }, \mathbb{U })}. \end{aligned}$$

By definition, \(m^{(\emptyset , \emptyset )} = m\). Since the eigenvalues of \(Y^* Y \) and \(Y Y^*\) are the same except for the zero eigenvalues, it is easy to check that

$$\begin{aligned} m_G^{(\mathbb{T }, \mathbb{U })}(w) =m_\mathcal{G }^{(\mathbb{T }, \mathbb{U })} +\frac{|\mathbb{T }|-|\mathbb{U }|}{N w}. \end{aligned}$$
(6.2)

For \(|\mathbb{U }|=| \mathbb{T }|\), we define

$$\begin{aligned} m ^{(\mathbb{T }, \mathbb{U })}:= m_G^{(\mathbb{T }, \mathbb{U })} = m_\mathcal{G }^{(\mathbb{T }, \mathbb{U })} \end{aligned}$$
(6.3)

By definition, \(G^{(\mathbb{T }, \mathbb{U })} \) is a \((N-|\mathbb{T }|)\times (N-|\mathbb{T }|)\) matrix and \(\mathcal{G }^{(\mathbb{T }, \mathbb{U })} \) is a \((N-|\mathbb{U }|)\times (N-|\mathbb{U }|)\) matrix. For \(i\) or \(j\in \mathbb{T }, G_{ij}^{(\mathbb{T }, \mathbb{U })}\) has no meaning from the previous definition. But we define \(G_{ij}^{(\mathbb{T }, \mathbb{U })} = 0\) whenever either \(i\) or \(j \in \mathbb{T }\). Similar convention applies to \(\mathcal{G }_{i j}^{(\mathbb{T }, \mathbb{U })}\), which is zero if \(i\) or \(j \in \mathbb{U }\).

Notice that we can view \(Y_z Y^*_z = (W_{z^*})^* W_{z^*} \) where \( W_{z^*} = Y^*_z\), so all properties of \(G^{(\mathbb{T }, \mathbb{U })}\) have parallel versions for \(\mathcal{G }^{(\mathbb{U }, \mathbb{T })}\). We shall call this property row–column reflection symmetry, i.e., we interchange \(G^{(\mathbb{U },\mathbb{T })}, Y, z, \mathbf{y}_i \) with \(\mathcal{G }^{(\mathbb{T }, \mathbb{U })}, Y^*, z^*, \mathrm{y}_i \). Here \(\mathbf{y}_i\) is an \(N\times 1 \) column vector and \(\mathrm{y}_i\) a \(1\times N \) row vector. The following lemma provides the formulas relating Green functions and their minors.

Lemma 6.3

(Relation between \(G, G^{(\mathbb{T },\emptyset )}\)and \(G^{( \emptyset , \mathbb{T })}\)) For \(i,j \ne k \) ( \(i = j\) is allowed) we have

$$\begin{aligned} G_{ij}^{(k,\emptyset )}&= G_{ij}-\frac{G_{ik}G_{kj}}{G_{kk}} ,\quad \mathcal{G }_{ij}^{(\emptyset ,k)}=\mathcal{G }_{ij}-\frac{\mathcal{G }_{ik} \mathcal{G }_{kj}}{\mathcal{G }_{kk}},\end{aligned}$$
(6.4)
$$\begin{aligned} G^{ ( \emptyset ,i)}&= G+\frac{(G \mathrm{y} _i^*) \, ( \mathrm{y} _i G)}{1- \mathrm{y} _i G \mathrm{y} _i ^*} ,\quad G =G^{ ( \emptyset ,i)}-\frac{( G^{ ( \emptyset ,i)} \mathrm{y} _i^*) \, ( \mathrm{y} _i G^{ ( \emptyset , i)})}{1+ \mathrm{y} _i G^{ ( \emptyset ,i)} \mathrm{y} _i ^* }, \end{aligned}$$
(6.5)

and

$$\begin{aligned} \mathcal{G }^{ (i,\emptyset )} =\mathcal{G }+\frac{(\mathcal{G }\mathbf{y}_i) \, (\mathbf{y}_i^* \mathcal{G })}{1- \mathbf{y}_i^* \mathcal{G }\mathbf{y}_i } ,\quad \mathcal{G }=\mathcal{G }^{ (i,\emptyset )}-\frac{(\mathcal{G }^{ (i,\emptyset )} \mathbf{y}_i) \, ( { \mathbf{y}_i}^ * \mathcal{G }^{ (i,\emptyset )})}{1+ \mathbf{y}_i^*\mathcal{G }^{ (i,\emptyset )} \mathbf{y}_i }. \end{aligned}$$

Furthermore, the following crude bound on the difference between \(m\) and \(m_G^{(\mathbb U , \mathbb T )}\) holds: for \(\mathbb{U }, \mathbb{T }\subset [\![1, N]\!]\) we have

$$\begin{aligned} |m-m^{(\mathbb{U }, \mathbb{T })}_G|+|m-m^{(\mathbb{U }, \mathbb{T })}_\mathcal{G }| \le \frac{|\mathbb{U }|+|\mathbb{T }|}{N\eta }.\quad \end{aligned}$$
(6.6)

Proof

By the row–column reflection symmetry, we only need to prove those formulas involving \(G\). We first prove (6.4). A lemma concerning Green functions of matrices and their minors was proved in [8, 9]; it is stated as Lemma 9.2 in Appendix B. Let

$$\begin{aligned} H:= Y^* Y \end{aligned}$$
(6.7)

For \(\mathbb T \subset [\![1, N]\!]\), denote by \(H^{[\mathbb T ]}\) the \((N-|\mathbb T |)\times (N-|\mathbb T |)\) minor of \(H\) obtained by removing the rows and columns indexed by \(i\in \mathbb T \). Following the convention in Definition 9.1, we define

$$\begin{aligned} G^{[\mathbb T ]}=(H^{[\mathbb T ]}-wI)^{-1}. \end{aligned}$$
(6.8)

By definition, we have

$$\begin{aligned} G^{[\mathbb T ]}=G^{(\mathbb T ,\emptyset )}. \end{aligned}$$
(6.9)

Then we can apply (9.4) to \(G^{(\mathbb T ,\emptyset )}\) and obtain (6.4).
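As a numerical sanity check of (6.4), outside the proof, one can compare the minor resolvent with the Schur-type formula directly; the size, the removed index, and the spectral parameter below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
N, k, w = 8, 2, 0.4 + 0.3j
Y = rng.standard_normal((N, N)) / np.sqrt(N)

G = np.linalg.inv(Y.T @ Y - w * np.eye(N))
Yk = np.delete(Y, k, axis=1)                       # remove the k-th column of Y
Gk = np.linalg.inv(Yk.T @ Yk - w * np.eye(N - 1))  # G^{(k, empty)}

# (6.4): G^{(k)}_{ij} = G_{ij} - G_{ik} G_{kj} / G_{kk}, for i, j != k
idx = [i for i in range(N) if i != k]
rhs = G[np.ix_(idx, idx)] - np.outer(G[idx, k], G[k, idx]) / G[k, k]
assert np.allclose(Gk, rhs)
```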

We now prove (6.5). Recall the rank one perturbation formula

$$\begin{aligned} (A + \mathbf{v}^* \mathbf{v})^{-1} = A^{-1} - \frac{ (A^{-1} \mathbf{v}^*) (\mathbf{v}A^{-1})}{ 1 + \mathbf{v}A^{-1} \mathbf{v}^*} \end{aligned}$$

where \(\mathbf{v}\) is a row vector and \(\mathbf{v}^*\) is its Hermitian conjugate. Together with

$$\begin{aligned} G^{-1}=Y^* Y-w I= \sum _j \mathrm{y}_j^* \mathrm{y}_j-w I =\left( G^{(\emptyset , i)}\right) ^{-1}+\mathrm{y}_i^* \mathrm{y}_i \end{aligned}$$

we obtain (6.5).
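The rank one perturbation step can likewise be confirmed numerically for the first identity of (6.5); the size, the removed row, and the spectral parameter below are arbitrary choices made for this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
N, i, w = 8, 3, 0.5 + 0.1j
Y = rng.standard_normal((N, N)) / np.sqrt(N)

G = np.linalg.inv(Y.T @ Y - w * np.eye(N))
yi = Y[i:i + 1, :]                                 # y_i: the i-th row of Y (1 x N)
Yi = np.delete(Y, i, axis=0)                       # Y with the i-th row removed
Gi = np.linalg.inv(Yi.T @ Yi - w * np.eye(N))      # G^{(empty, i)}

# (6.5): G^{(empty,i)} = G + (G y_i^*)(y_i G) / (1 - y_i G y_i^*),
# which is the rank one perturbation formula applied to G^{-1} - y_i^* y_i.
corr = (G @ yi.conj().T) @ (yi @ G) / (1 - (yi @ G @ yi.conj().T))[0, 0]
assert np.allclose(Gi, G + corr)
```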

We now prove (6.6). With (6.4), we have

$$\begin{aligned} m_{G}^{(i,\emptyset )}-m=-\frac{1}{N}\frac{ \sum _{j}G_{ji}G_{ij}}{G_{ii}}. \end{aligned}$$

Moreover, by diagonalization in an orthonormal basis and the obvious identity \(|(\lambda -w )^{-2}|=\eta ^{-1}{{\mathrm{Im}}}[(\lambda -w )^{-1}]\) (\(\lambda \in \mathbb{R }\)), we have

$$\begin{aligned} \left| \sum _{j}G_{ji}G_{ij}\right| =|[G^2]_{ii}|\le \frac{{{\mathrm{Im}}}G_{ii}}{\eta }, \end{aligned}$$

so we have proved that

$$\begin{aligned} |m-m^{(i, \emptyset )}_G| \le \frac{1 }{N\eta }. \end{aligned}$$
(6.10)

By (6.3), (6.10) holds for \(m_\mathcal{G }^{( i, \emptyset )}\) as well. Similar arguments can be used to prove (6.6) for \(m_ \mathcal{G }^{(i, j)}, m_ G^{(i, j)}\) and the general cases. This completes the proof of Lemma 6.3.\(\square \)
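A quick numerical check of the bound \(|[G^2]_{ii}|\le \eta ^{-1}{{\mathrm{Im}}}\, G_{ii}\) used in the proof of (6.10); the sample size and spectral parameter below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)
N, E, eta = 10, 0.7, 0.05
w = E + 1j * eta
Y = rng.standard_normal((N, N)) / np.sqrt(N)

G = np.linalg.inv(Y.T @ Y - w * np.eye(N))
G2diag = np.diag(G @ G)                  # [G^2]_{ii} = sum_j G_{ij} G_{ji}
# Spectral decomposition plus the triangle inequality gives
# |[G^2]_{ii}| <= eta^{-1} Im G_{ii}:
assert np.all(np.abs(G2diag) <= np.imag(np.diag(G)) / eta + 1e-12)
```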

The next step is to derive equations between the matrix and its minors. The main results are stated as the following Lemma 6.5. We first need the following definition.

Definition 6.4

In the following, \(\mathbb E _X\) denotes integration with respect to the random variable \(X\). For any \(\mathbb{T }\subset [\![ 1, N]\!]\), we introduce the notations

$$\begin{aligned} Z^{(\mathbb{T })}_{i }:=(1-\mathbb E _{\mathrm{y}_i}) \mathrm{y}^{(\mathbb{T })}_i G^{(\mathbb{T }, i)} \mathrm{y}_i^{(\mathbb{T })*} \end{aligned}$$

and

$$\begin{aligned} \mathcal{Z }^{(\mathbb{T })}_{i }:=(1-\mathbb E _{\mathbf{y}_i}) \mathbf{y}_i^{(\mathbb{T }) *} \mathcal{G }^{(i, \mathbb{T })} \mathbf{y}_i^{(\mathbb{T })}. \end{aligned}$$

Recall by our convention that \(\mathbf{y}_i\) is a \(N\times 1 \) column vector and \(\mathrm{y}_i\) is a \(1\times N \) row vector. For simplicity we will write

$$\begin{aligned} Z _{i } =Z^ {(\emptyset )}_{i}, \quad \mathcal{Z }_{i} =\mathcal{Z }^ {(\emptyset )}_{i}. \end{aligned}$$

Lemma 6.5

(Identities for \(G, \mathcal{G }, Z\)and \(\mathcal{Z }\)) For any \( \mathbb T \subset [\![ 1, N]\!]\), we have

$$\begin{aligned} G^{(\emptyset , \mathbb{T })} _{ii}&= - w^{-1}\left[ 1+ m_\mathcal{G }^{(i, \mathbb{T })}+ |z|^2 \mathcal{G }_{ii}^{(i, \mathbb{T })} +\mathcal{Z }^{(\mathbb{T })}_{i } \right] ^{-1},\end{aligned}$$
(6.11)
$$\begin{aligned} {G_{ij} ^{(\emptyset , \mathbb{T }) } }&= -wG_{ii}^{(\emptyset , \mathbb{T }) } G^{(i,\mathbb{T })}_{jj} \left( \mathbf{y}_i^{(\mathbb{T })*} \mathcal{G }^{(ij, \mathbb{T })} \mathbf{y}_j^{(\mathbb{T })}\right) , \quad i\ne j, \end{aligned}$$
(6.12)

where, by definition, \(\mathcal{G }_{ii}^{(i,\mathbb{T })}=0\) if \(i\in \mathbb{T }\). Similar results hold for \(\mathcal{G }\):

$$\begin{aligned} \left[ \mathcal{G }^{(\mathbb{T }, \emptyset )} _{ii} \right] ^{-1}&= - w\left[ 1+ m_ G^{(\mathbb{T },i)}+ |z|^2 G_{ii}^{(\mathbb{T },i)} + Z^{(\mathbb{T })}_{i } \right] \end{aligned}$$
(6.13)
$$\begin{aligned} {\mathcal{G }_{ij}^{(\mathbb{T }, \emptyset )}}&= -w\mathcal{G }_{ii}^{(\mathbb{T }, \emptyset )}\mathcal{G }^{(\mathbb{T }, i)}_{jj} \left( \mathrm{y}_i^{(\mathbb{T })} G^{( \mathbb{T },ij)} \mathrm{y}_j^{(\mathbb{T })*}\right) , \quad i\ne j. \end{aligned}$$
(6.14)

Proof

By the row–column reflection symmetry, we only need to prove the \(G\) part of this lemma. Furthermore, for simplicity, we prove the case \(\mathbb{T }=\emptyset \); the general case can be proved in the same way.

We first prove (6.11). Let \(H=Y^* Y\). Similarly to (6.7) and (6.8), we define \(G^{[i]}\) and \(H^{[i]}\). Then using (9.2) and (6.9), we have

$$\begin{aligned} \left[ G _{ii} \right] ^{-1}=h_{ii}-w-\sum _{k,l\ne i}h _{ik}G^{(i, \emptyset )}_{kl}h_{li}. \end{aligned}$$

From the definition of \( H \), we have \(h _{ik}= \mathbf{y}_i^* \mathbf{y}_k \). Then

$$\begin{aligned} \left[ G _{ii} \right] ^{-1}= \mathbf{y}_i^* \mathbf{y}_i -w- \mathbf{y}_i^* Y^{(i, \emptyset )}G^{(i, \emptyset )} \left( Y^{(i, \emptyset )}\right) ^* \mathbf{y}_i. \end{aligned}$$
(6.15)

For any matrix \(A\), we have the identity

$$\begin{aligned} A (A^* A - w)^{-1} A^*=1+ w ( A A^* - w)^{-1}, \end{aligned}$$
(6.16)

and as a consequence

$$\begin{aligned} Y^{(i, \emptyset )}G^{(i, \emptyset )} \left( Y^{(i, \emptyset )}\right) ^*= 1+ w\mathcal{G }^{(i, \emptyset )}. \end{aligned}$$
(6.17)

Combining (6.15) and (6.17), we have

$$\begin{aligned} \left[ G _{ii} \right] ^{-1}=-w-w\,\mathbf{y}_i^* \mathcal{G }^{(i, \emptyset )} \mathbf{y}_i \end{aligned}$$
(6.18)

We now write

$$\begin{aligned} \mathbf{y}_i^* \mathcal{G }^{(i, \emptyset )} \mathbf{y}_i = \mathbb E _{\mathbf{y}_i}\mathbf{y}_i^* \mathcal{G }^{(i, \emptyset )} \mathbf{y}_i +\mathcal{Z }_i \end{aligned}$$

By definition

$$\begin{aligned} \mathbb E _{\mathbf{y}_i}\mathbf{y}_i^* \mathcal{G }^{(i, \emptyset )} \mathbf{y}_i =\frac{1}{N}{{\mathrm{Tr}}}\mathcal{G }^{(i, \emptyset )}+|z|^2\mathcal{G }^{(i, \emptyset )} _{ii}=m_\mathcal{G }^{(i, \emptyset )}+|z|^2 \mathcal{G }^{(i, \emptyset )} _{ii} \end{aligned}$$

which completes the proof of (6.11).

We now prove (6.12). As above, using now (9.3), we have

$$\begin{aligned} {G_{ij} ^{(\emptyset ,\mathbb{T }) } } = G_{ii}^{(\emptyset ,\mathbb{T }) } G^{(i,\mathbb{T })}_{jj} \left( h_{ij}-\sum _{kl\ne ij}h _{ik}G^{(ij, \emptyset )}_{kl}h_{lj} \right) \end{aligned}$$

where

$$\begin{aligned} h_{ij}-\sum _{kl\ne ij}h _{ik}G^{(ij, \emptyset )}_{kl}h_{lj} = \mathbf{y}_i^* \mathbf{y}_j - \mathbf{y}_i^* Y^{(ij, \emptyset )}G^{(ij, \emptyset )} \left( Y^{(ij, \emptyset )}\right) ^* \mathbf{y}_j. \end{aligned}$$

Then using (6.16) again, we obtain (6.12).\(\square \)

6.2 The self-consistent equation and its stability

We now derive the self-consistent equation for \(m(w)\) and its stability estimates. Following [9], we introduce the following control parameter:

Definition 6.6

Define the control parameter

$$\begin{aligned} \Psi = \left( \sqrt{\frac{{{\mathrm{Im}}}\, m_\mathrm{c}+\Lambda }{N\eta }}+\frac{1}{N\eta }\right) , \quad \Lambda = |m-m_\mathrm{c}| \end{aligned}$$

Notice that all quantities depend on \(w\) and \(z\). Furthermore, if \(\Lambda \le C |m_c|\) then for \(w \in \underline{\mathrm{S}} (b)\) (see (3.4)),

$$\begin{aligned} |m_\mathrm{c}|^{-1} \Psi \le \frac{ 1}{\sqrt{ N\eta |m_c|}}+\frac{1}{N\eta |m_c| } \le C \varphi ^{-b/2}. \end{aligned}$$
(6.19)

The quantity \(|m_\mathrm{c}|^{-1} \Psi \) will be our controlling small parameter in this paper.

Before we start to prove Theorem 3.4, we make the following observation. The parameter \(z\) can be either inside the unit ball or outside of it. Recall the properties of \(m_\mathrm{c}\) in Section 4. By Lemma 3.1, the limiting density \(\rho _c\) of \(YY^*\) is supported on \([\lambda _-, \lambda _+]\), where \(\lambda _- < 0\) and \(\lambda _+ \sim 1\) when \(|z| \le 1 - \tau \). Since \(\lambda _- < 0\) in this case, we will never approach \(\lambda _-\). On the other hand, we will have to consider the behavior when \(w \sim 0\). When \( 1+ \tau \le |z| \le \tau ^{-1}\), we have \(\lambda _-> 0\) and \(w\) stays away from the origin by definition of \(\underline{\mathrm{S}} (C_\zeta )\), i.e., the condition \(E \ge \lambda _-/5\). Our approach to the local Green function estimates will use the self-consistent equation of \(m(w)\). This approach depends crucially on the stability properties of this equation, which can be divided roughly into three cases: \(w\) near the edges \(\lambda _\pm \), \(w \sim 0\), or \(w\) in the bulk (defined here as the rest of possible \(w \in \underline{\mathrm{S}} (C_\zeta )\)). From Lemma 4.1 and Lemma 4.2, the behavior of \(m_c\) near the edges \(\lambda _\pm \) when \( |z| \ge 1 + \tau \) is identical to its behavior near the edge \(\lambda _+\) when \( |z| \le 1 - \tau \). In the bulk, the behavior in both cases is the same. Thus we will only consider the case \( |z| \le 1 - \tau \), since it covers all three different behaviors. Hence from now on, we will assume that \(|z| \le 1 -\tau \). We emphasize that \({{\mathrm{Im}}}\, m_\mathrm{c} \ll |m_\mathrm{c}|\) when \(|\lambda _+-w|\ll 1\). All stability results concerning the self-consistent equation will be proved under the following assumption (6.20).

Lemma 6.7

(Self-consistent equation) Suppose \(|z| \le 1 -\tau \) for some \(\tau > 0\). Then there exists a small constant \( \alpha > 0\) independent of \(N\) such that if the estimate

$$\begin{aligned} \Lambda \le \alpha |m_{c}| \end{aligned}$$
(6.20)

holds for some \(|w|\le C\) on a set \(A\) in the probability space of the matrix elements of \(X\), then on the set \(A\) we have with \( \zeta \)-high probability

$$\begin{aligned} w\, m (1+ m)^2 - m |z|^2 + 1 + m = \Upsilon , \quad \Upsilon = {{\mathrm{O}}}\left( \varphi ^{ Q_\zeta } \Psi \right) \!, \end{aligned}$$
(6.21)

provided that \( w \in \underline{\mathrm{S}} (b)\) for some \(b > 5 Q_\zeta \) with \(Q_\zeta \) defined in Lemma 10.1.

Proof

By (4.9), (4.10) and (6.20), for \(|z| \le 1-\tau \) the following inequalities hold on the set \(A\):

$$\begin{aligned} |w|^{-1}\frac{1}{ |1+ m|^2 }&\le |w|^{-1}\frac{1}{ |1+ m_\mathrm{c} + {{\mathrm{O}}}(\Lambda ) |^2 } \le C,\end{aligned}$$
(6.22)
$$\begin{aligned} \left| {{\mathrm{Im}}}\frac{1}{ w(1+ m) } \right|&\le \left| {{\mathrm{Im}}}\frac{1}{ w(1+ m_\mathrm{c}) } \right| + \left| \frac{1}{ w(1+ m_\mathrm{c}) } (m-m_\mathrm{c}) \frac{1}{ (1+ m) } \right| \nonumber \\&\le {{\mathrm{Im}}}\, m_\mathrm{c} + C \Lambda . \end{aligned}$$
(6.23)

Furthermore, using (6.22), (4.9), (4.10), (6.20) and (3.1), we have in the set \(A\)

$$\begin{aligned} 1+ m - \frac{|z|^2}{ w(1+ m) } = 1+ m_\mathrm{c} - \frac{|z|^2}{ w(1+ m_\mathrm{c}) } + {{\mathrm{O}}}(\Lambda ) = \frac{1}{ w m_\mathrm{c}} + {{\mathrm{O}}}(\Lambda ).\qquad \end{aligned}$$
(6.24)

The origin of the self-consistent equation (6.21) relies on the choice \(\mathbb T = \{i\}\) in (6.13):

$$\begin{aligned} \left[ \mathcal{G }^{(i, \emptyset )} _{ii} \right] ^{-1} = - w\left[ 1+ m_ G^{( i ,i)}+Z^{(i)}_{i } \right] . \end{aligned}$$
(6.25)

By definition of \(\Psi \) and (6.6),

$$\begin{aligned} |m_ G^{( i ,i)}- m| \le \frac{C}{ N \eta } \le C \Psi . \end{aligned}$$
(6.26)

Moreover, we have from (10.1) that with \( \zeta \)-high probability in \(A\)

$$\begin{aligned} | Z^{(i)}_{i }| \le \varphi ^{ Q_\zeta /2} \sqrt{\frac{{{\mathrm{Im}}}\, m_G^{(i,i)}+ |z|^2 {{\mathrm{Im}}}\, G^{(i, i)}_{ii} }{N\eta }} \le \varphi ^{ Q_\zeta /2} \Psi \end{aligned}$$
(6.27)

where we have used (6.26), (6.20) and, by definition, \(G^{(i, i)}_{ii} = 0\). We would like to estimate \( (\mathcal{G }_{ii}^{(i,\emptyset )})^{-1}\) in (6.25) by treating \(( 1+ m )\) as the main term and the rest as error terms. By (6.20) and (6.19), the ratio between the error terms and the main term for \( w \in \underline{\mathrm{S}} (b)\) with \(b > 5 Q_\zeta \) is bounded by

$$\begin{aligned} |m|^{-1} | Z^{(i)}_{i }| + |m|^{-1} |m_ G^{( i ,i)}- m| \le \varphi ^{- Q_\zeta }. \end{aligned}$$
(6.28)

Therefore for any \( w \in \underline{\mathrm{S}} (b)\) with \(b > 5 Q_\zeta \) we have with \( \zeta \)-high probability

$$\begin{aligned} \mathcal{G }_{ii}^{(i,\emptyset )} = - \frac{1}{ w( 1+ m )} + \mathcal E _1 \end{aligned}$$
(6.29)

where

$$\begin{aligned} \mathcal E _1 = w^{-1}\frac{1}{ (1+ m)^2 }\Big [ m_ G^{( i ,i)}- m + Z^{(i)}_{i } \Big ] + {{\mathrm{O}}}\left( \frac{|Z^{(i)}_{i }|^2 + \frac{1}{ (N \eta )^2} }{ |w| |1+ m|^3 } \right) = {{\mathrm{O}}}(\varphi ^{ Q_\zeta /2} \Psi )\nonumber \\ \end{aligned}$$
(6.30)

where we have used (6.22) and \( |m_c| \sim |w|^{-1/2}\). Together with (6.23), we thus have with \( \zeta \)-high probability

$$\begin{aligned} \left| {{\mathrm{Im}}}\mathcal{G }_{ii}^{(i,\emptyset )} \right| \le \left| {{\mathrm{Im}}}\frac{1}{ w( 1+ m )} \right| + {{\mathrm{O}}}(\varphi ^{ Q_\zeta /2} \Psi ) \le {{\mathrm{Im}}}m_\mathrm{c} + C \Lambda + {{\mathrm{O}}}(\varphi ^{ Q_\zeta /2} \Psi ).\nonumber \\ \end{aligned}$$
(6.31)

Using this estimate, (6.6) and (6.29), we can estimate \(\mathcal{Z }_{i }:= \mathcal{Z }_{i }^{(\emptyset )}\) by

$$\begin{aligned} |\mathcal{Z }_{i }|&\le \varphi ^{Q_\zeta /2} \sqrt{\frac{{{\mathrm{Im}}}m_\mathcal{G }^{(i,\emptyset )}+ |z|^2 {{\mathrm{Im}}}\, \mathcal{G }^{(i, \emptyset )}_{ii} }{N\eta }}\nonumber \\&\le \varphi ^{Q_\zeta /2}\sqrt{\frac{{{\mathrm{Im}}}\, m+ {{\mathrm{Im}}}\, m_\mathrm{c} + \Lambda + \varphi ^{Q_\zeta /2} \Psi }{N\eta }} + \frac{ \varphi ^{Q_\zeta }}{ N \eta } \le \varphi ^{ Q_\zeta } \Psi \nonumber \\ \end{aligned}$$
(6.32)

We can now use (6.32), (6.29) and (6.6) to estimate the right hand side of (6.11) such that

$$\begin{aligned} G _{ii}&= - w^{-1}\left[ 1+ m_\mathcal{G }^{(i, \emptyset )}+ |z|^2 \mathcal{G }_{ii}^{(i, \emptyset )} +\mathcal{Z }_{i } \right] ^{-1} \nonumber \\&= - w^{-1}\left[ 1+ m- \frac{ |z|^2 }{ w( 1+ m)} + (m_\mathcal{G }^{(i, \emptyset )} - m) + \mathcal E _1 + \mathcal{Z }_{i } \right] ^{-1} \end{aligned}$$
(6.33)
$$\begin{aligned}&= - w^{-1} \left[ 1+ m -\frac{ |z|^2 }{ w(1+ m)} \right] ^{-1} - \mathcal E _2 \end{aligned}$$
(6.34)

where \(\mathcal E _1\) and \(\mathcal{Z }_{i }\) are bounded in (6.30) and (6.32) and \(\mathcal E _2\) is bounded by

$$\begin{aligned} \mathcal E _2 = {{\mathrm{O}}}\left( w^{-1} \left[ 1+ m - \frac{ |z|^2 }{ w(1+ m)} \right] ^{-2} \varphi ^{ Q_\zeta } \Psi \right) \le {{\mathrm{O}}}(\varphi ^{ Q_\zeta } \Psi ). \end{aligned}$$

In the last inequality, we have used (6.24) to bound \( 1+ m - \frac{ |z|^2 }{ w(1+ m)} \) and (4.9) for \(m_\mathrm{c}\).

Summing over the index \(i\) in (6.34), we have

$$\begin{aligned} 0= w m + \left[ 1+ m -\frac{ |z|^2 }{ w(1+ m)} \right] ^{-1} + {{\mathrm{O}}}( |w|\varphi ^{ Q_\zeta } \Psi ) \end{aligned}$$
(6.35)

Hence we have proved

$$\begin{aligned} w m ( 1+ m)^2 - m |z|^2 + 1+m = {{\mathrm{O}}}\Big [ \big (|w| |m+1|^2+|z|^2 \big )\varphi ^{ Q_\zeta } \Psi \Big ] \end{aligned}$$

Together with the assumption (6.20) on \(\Lambda \) and (4.9) on the order of \(m_\mathrm{c}\), this proves (6.21).\(\square \)
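As an informal numerical illustration of (6.21), outside the proof, one can sample \(Y = X - z\) with i.i.d. entries of variance \(1/N\) and check that the empirical Stieltjes transform nearly annihilates the cubic \(w m(1+m)^2 - m|z|^2 + 1 + m\); the size, \(z\), \(w\), and the tolerance below are ad hoc choices:

```python
import numpy as np

rng = np.random.default_rng(6)
N, z = 1000, 0.5
w = 0.8 + 0.2j                                   # eta well above 1/N
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
Y = X - z * np.eye(N)

# Empirical Stieltjes transform m(w) = N^{-1} Tr (Y^* Y - w)^{-1}
m = np.trace(np.linalg.inv(Y.conj().T @ Y - w * np.eye(N))) / N
resid = w * m * (1 + m) ** 2 - m * abs(z) ** 2 + 1 + m   # Upsilon in (6.21)
print(abs(resid))   # small: m is close to the exact root m_c of the cubic
```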

Corollary 6.8

Under the assumptions of Lemma 6.7, the following properties hold. Let \(\mathbb{T }, \mathbb{U }\subset [\![ 1, N]\!]\) be such that \(i\notin \mathbb{T }\) and \(|\mathbb{T }|+|\mathbb{U }|\le C\). For any \(\zeta >0\) and \( w \in \underline{\mathrm{S}} (b)\) for some \(b > 5 Q_\zeta \) with \(Q_\zeta \) defined in Lemma 10.1, we have with \( \zeta \)-high probability for any \(i\in \mathbb{U }\) that

$$\begin{aligned} G_{ii}^{(\mathbb{T }, \mathbb{U })}-G^{( \emptyset , i)}_{ii}={{\mathrm{O}}}(\varphi ^{ Q_\zeta }\Psi ). \end{aligned}$$
(6.36)

If \(i \not \in \mathbb{U }\), then

$$\begin{aligned} G_{ii}^{(\mathbb{T },\mathbb{U })}-G _{ii}={{\mathrm{O}}}(\varphi ^{ Q_\zeta }\Psi ). \end{aligned}$$
(6.37)

Proof

We first prove the case \(i \not \in \mathbb{U }\). We claim that the parallel version of (6.34) holds as well, i.e.,

$$\begin{aligned} G_{ii}^{(\mathbb{T }, \mathbb{U })}= - w^{-1} \left[ 1+ m -\frac{ |z|^2 }{ w(1+ m)} \right] ^{-1} + {{\mathrm{O}}}(\varphi ^{Q_\zeta }\Psi ) \end{aligned}$$
(6.38)

Comparing (6.38) with (6.34), we have proved (6.37).

We now prove the case \(i \in \mathbb{U }\). By row–column symmetry, we have

$$\begin{aligned} G^{( \mathbb{T }, \mathbb{U })}&= \Big [ (Y^{(\mathbb{T }, \mathbb{U })})^* Y^{(\mathbb{T }, \mathbb{U })} - w \Big ]^{-1} = \Big [ A^{( \mathbb{U }, \mathbb{T })} (A^{( \mathbb{U }, \mathbb{T })})^* - w \Big ]^{-1}\nonumber \\&:= \mathcal{G }(A)^{(\mathbb{U } , \mathbb{T })}, \quad A = Y^* . \end{aligned}$$

Hence we have to prove, for \(i \in \mathbb{U }\) and \(i \not \in \mathbb{T }\), that

$$\begin{aligned} \mathcal{G }(A)_{ii}^{(\mathbb{U } , \mathbb{T })}-\mathcal{G }(A)_{ii}^{(i ,\emptyset )}={{\mathrm{O}}}(\varphi ^{Q_\zeta }\Psi ). \end{aligned}$$

We will omit \(A\) in the following argument.

One can extend (6.25)–(6.30) to \(\mathcal{G }_{ii}^{(\mathbb{U } , \mathbb{T })}\) and obtain

$$\begin{aligned} \mathcal{G }_{ii}^{(\mathbb{U } , \mathbb{T })} = - \frac{1}{ w( 1+ m )} + \mathcal E _1^{(\mathbb{T }, \mathbb{U })},\quad \mathcal E _1^{(\mathbb{T }, \mathbb{U })}={{\mathrm{O}}}( \varphi ^{Q_\zeta } \Psi ) \end{aligned}$$
(6.39)

as in (6.29). Comparing (6.39) with the equation for \(\mathcal{G }_{ii}^{(i ,\emptyset )}\) (6.29), we obtain (6.36) in the case \(i\in \mathbb{U }\).\(\square \)

We define for any sequence \(A_i\) (\(1\le i\le N\)) the quantity

$$\begin{aligned}{}[A]:= N^{-1} \sum _i A_i. \end{aligned}$$

In applications, we often use \( A=Z\) or \(A=\mathcal{Z }\). Define

$$\begin{aligned} \mathcal D (m)=m^{-1} + w + wm-\frac{|z|^2}{1+m}. \end{aligned}$$

The following lemma is our stability estimate for the equation \( \mathcal D (m)=0\). Notice that it is a deterministic result. It assumes that \(|\mathcal D (m)| \) has a crude upper bound and then derives a more precise estimate on \(\Lambda =|m-m_c|\).

Lemma 6.9

(Stability of the self-consistent Equation) Suppose that \(1 - |z|^2 > t > 0\). Let \(\delta : \mathbb C \mapsto \mathbb R _+\) be a continuous function satisfying the bound

$$\begin{aligned} |\delta (w)| \le (\log N)^{-8} |w^{1/2}|. \end{aligned}$$
(6.40)

Suppose that, for a fixed \(E\) with \( 0 \le E \le C\) for some constant \(C\) independent of \(N\), (6.20) and the estimate

$$\begin{aligned} |\Upsilon (m)(w, z)| =|\mathcal D (m)m(1+m) (w, z)|\le \delta (w) |m_\mathrm{c} |^2 \end{aligned}$$
(6.41)

hold for \(10\ge \eta \ge \tilde{\eta }\), for some \(\tilde{\eta }\) which may depend on \(N\). Denote \( {\varepsilon }^2 := \kappa + \eta \), where \(\kappa = |E- \lambda _+|\) as in (4.1), recalling that in our case \(1 - |z|^2 > t > 0\). Then there is an \(M_0\), large enough and independent of \(N\), such that for any fixed \(M>M_0\) and \(N\) large enough (depending on \(M\)), the following estimates for \( \Lambda = |m-m_\mathrm{c}|\) hold for \(10\ge \eta \ge \tilde{\eta }\):

$$\begin{aligned}&\text {Case 1}: \; \Lambda \le \frac{M^{3/2} \delta }{ |w| } \quad \text {or} \; \Lambda \ge \frac{1}{M ^2|w^{ 1/2}|} \quad \text {if} \; {\varepsilon }^2 \ge 1/M^2 \end{aligned}$$
(6.42)
$$\begin{aligned}&\text {Case 2a}: \; \Lambda \le \frac{M \delta }{ {{\varepsilon }} } \quad \text {or} \; \Lambda \ge \frac{2M \delta }{ {{\varepsilon }} } \quad \text {if} \; {\varepsilon }^2 \le 1/M^2 \; \text {and} \; \delta \le \frac{ {\varepsilon }^2 }{ M^{3/2}} \end{aligned}$$
(6.43)
$$\begin{aligned}&\text {Case 2b}: \; \Lambda \le M \sqrt{\delta }, \quad \text {or} \; \Lambda \ge 2 M \sqrt{\delta }\quad \text {if} \; {\varepsilon }^2 \le 1/M^2 \; \text {and} \; \delta \ge \frac{ {\varepsilon }^2 }{ M^{3/2}} \nonumber \\ \end{aligned}$$
(6.44)

The three upper bounds (i.e., the first inequalities in (6.42)–(6.44)) can be summarized as

$$\begin{aligned} \Lambda \le C \frac{ \delta (w)|w|^{-1}}{ \sqrt{\kappa +\eta +\delta }} . \end{aligned}$$
(6.45)

Proof

Define the polynomial

$$\begin{aligned} P_{w, z}(x) = w x (1+x)^2 +x(1-|z|^2) +1. \end{aligned}$$

By definition of \(\Upsilon \) (6.21), we have

$$\begin{aligned} P_{w, z}(m) = w m (1+m)^2 +m (1-|z|^2) +1 = \Upsilon = \mathcal D (m) m (1+m). \end{aligned}$$

Since \(P_{w, z}(m_\mathrm{c}) = 0\), we have

$$\begin{aligned}&\displaystyle w u^3+ B(w,z) u^2 + A(w,z) u =\Upsilon , \quad u = m-m_\mathrm{c},\\&\displaystyle B= w( 3 m_\mathrm{c} + 2),\\&\displaystyle A (w, z) = w (3 m_\mathrm{c} + 1) (m_\mathrm{c} + 1) + 1- |z|^2= 2 w m_\mathrm{c} (1+ m_\mathrm{c}) - \frac{1}{m_\mathrm{c}}. \end{aligned}$$

By definition of \(P_{w, z}\), we can express \(A\) and \(B\) by

$$\begin{aligned} P_{w, z}^{\prime } (m_\mathrm{c} (w, z) ) = A(w, z), \quad P_{w, z}^{\prime \prime } (m_\mathrm{c} (w, z) ) = 2B(w, z). \end{aligned}$$

Case 1: In this case, we claim that the following estimates concerning \(A\) and \(B\) hold:

$$\begin{aligned} |A| \ge C /M , \quad B = {{\mathrm{O}}}(|w^{ 1/2}|). \end{aligned}$$
(6.46)

Since \(A\) and \(B\) are explicit functions of \(m_\mathrm{c}\), Eq. (6.46) simply records properties of the solution \(m_\mathrm{c}\) of the third order polynomial \(P_{w, z}\). We now give a sketch of the proof. Consider first the case \(|w| \ll 1\). Then (6.46) follows from (4.9), (4.10), (4.6) and the definitions of \(A\) and \(B\).

We now assume that \( w\sim 1\). Clearly, \( |B|\le {{\mathrm{O}}}(1) \sim |w^{ 1/2}|\), which gives (6.46) for \(B\). To prove \(|A|\ge C/M\), note that by the definition of \(m_\mathrm{c}\) (3.1), we have \(w= \frac{-1 - m_\mathrm{c} + m_\mathrm{c} |z|^2}{ m_\mathrm{c} (1 + m_\mathrm{c})^2 }\). Thus we can rewrite \(A\) as

$$\begin{aligned} A&= \frac{-1 - 3 m_\mathrm{c} + 2 m_\mathrm{c}^2 (-1 + |z|^2)}{m_\mathrm{c}(1+m_\mathrm{c})}=\frac{2(-1 + |z|^2)}{m_\mathrm{c}(1+m_\mathrm{c})}(m_\mathrm{c}-a_+)(m_\mathrm{c}-a_-), \\ a_\pm&:= \frac{3 \pm \sqrt{ 1 + 8 |z|^2}}{ 4 (-1 + |z|^2)}=\frac{-2}{3\mp \sqrt{ 1 + 8 |z|^2}}\, . \end{aligned}$$

By (4.9) and (4.11) (where \(\alpha = \sqrt{ 1 + 8 |z|^2}\)), we obtain (6.46).

We now prove (6.42) by contradiction. If (6.42) is violated then with \(u = m-m_c\) we have

$$\begin{aligned} |\Upsilon | = |u| | A(w,z) + B(w,z) u + w u^2 | \ge \frac{ M^{3/2} \delta }{ |w| } \left[ {\frac{C}{M}} - \frac{C_2}{ M^{2}} - \frac{ C_3}{M^4} \right] \ge \frac{ C \sqrt{M} \ \delta }{ |w| }, \end{aligned}$$

where in the last inequality we used that \(M\) is a large constant. By (6.41) and (4.9), \(|\Upsilon | \le C \delta / |w|\). Thus we have

$$\begin{aligned} \frac{ C \sqrt{M} \delta }{ |w| } \le |\Upsilon | \le \frac{C \delta }{ |w|} \end{aligned}$$

which is a contradiction provided that \(M\) is large enough.

Case 2: \( {\varepsilon }^2 := \kappa + \eta \le 1/M^2 \). Note in this case \(w\sim 1\). Then by (4.3) we have

$$\begin{aligned} B \sim 1, \quad A(\lambda _+, z) = 0 \end{aligned}$$
(6.47)

where the last equation can be checked by direct computation and we used \(|z|^2<1-t<1\). There is a more intrinsic reason why the last identity for \(A\) holds: \(\lambda _+\) is a point at which the polynomial \(P_{w, z} (m)|_{w = \lambda _+}\) has a double root. Therefore, we have \(0=P^{\prime }_{w, z} (m_\mathrm{c} (\lambda _+, z) ) = A(\lambda _+, z)\).

Notice that when \(\kappa + \eta \) is small enough, we can approximate \( A(w, z) \) by linearizing around \(w= \lambda _+\). Thus, using the relation \(A(w, z) = P^{\prime }_{w, z} (m_\mathrm{c} (w, z) )\), we have

$$\begin{aligned}&A(w, z) \sim P_{w, z}^{\prime \prime } (m_\mathrm{c} (\lambda _+, z) ) (m_\mathrm{c} (w, z) - m_\mathrm{c} (\lambda _+, z)) \nonumber \\&\quad + \frac{ \partial P_{w, z}}{ \partial w} (m_\mathrm{c} (\lambda _+, z) ) (w - \lambda _+) \sim \sqrt{\kappa + \eta }= {\varepsilon }\end{aligned}$$
(6.48)

where we have used that \(P_{w, z}^{\prime \prime } (m_\mathrm{c} (\lambda _+, z) ) = B(\lambda _+, z) \sim 1, \frac{ \partial P_{w, z}}{ \partial w} (m_\mathrm{c} (\lambda _+, z) ) \sim 1\) and, by (4.3), that \( (m_\mathrm{c} (w, z) - m_\mathrm{c} (\lambda _+, z)) \sim \sqrt{\kappa + \eta }\). While we can also check the conclusion of (6.48) by direct computation, the current derivation provides a more intrinsic reason why it is correct.

Case 2a: Suppose (6.43) is violated. We first choose \(M\) large enough so that \(|m_\mathrm{c} (1 + m_\mathrm{c}) | \le M^{1/4} \) in this regime. Then by (6.47) and (6.48), with \(w\sim 1 \), we have

$$\begin{aligned} C \delta M^ {1/4}&\ge |\Upsilon | = |u| | A(w,z) + B(w,z) u + w u^2 | \nonumber \\&\ge \frac{\delta M}{{\varepsilon }} \left[ C_1{\varepsilon }-\frac{C_2 M\delta }{{\varepsilon }} - \frac{ C_3 M^2 \delta ^2}{{\varepsilon }^2 } \right] \ge C_1\delta M /2, \end{aligned}$$

which is a contradiction provided that \(M\) is large enough. Here we have used the restriction on \({\varepsilon }\) and \(\delta \) in (6.43), namely \({\varepsilon }\ge M^{3/4} \sqrt{\delta }\), that \(M\) is a large enough constant, and that \(\delta \ll 1\).

Case 2b: Suppose (6.44) is violated. Similarly we have

$$\begin{aligned} C \delta M^{1/4} \!&\ge \! |\Upsilon | =|u| | B(w,z) u +A(w,z) + w u^2 | \!\ge \! |u| \left[ C_1 M \sqrt{\delta }-C_2 {\varepsilon }- C_3 M^2 \delta \right] \\ \!&\ge \! C_1 |u| \left[ M \sqrt{\delta }/2 -C_2 {\varepsilon }\right] \ge C_1 M^2 \delta /4 \end{aligned}$$

which is a contradiction. Here we have used the restriction on \({\varepsilon }\) and \(\delta \) in (6.44) and that \(M\) is a large enough constant, so that \( C_2{\varepsilon }\le C_2 M^{3/4} \sqrt{\delta }\le M \sqrt{\delta }/20\).\(\square \)

With a slightly stronger condition on \(\delta \) and an initial estimate \(\Lambda \ll 1\) when \(\eta \sim 1\), the first inequalities in (6.42)–(6.44), i.e., (6.45), always hold. We state this as the following Corollary, which is a deterministic statement.

Corollary 6.10

(Deterministic continuity argument) Suppose that the assumptions of Lemma 6.9 hold. If we have

$$\begin{aligned} \Lambda (E+10\mathrm{i})\ll 1 \end{aligned}$$

and that \(\delta \) is decreasing in \(\eta \) for \({\varepsilon }=\sqrt{\kappa +\eta }\) small enough, then (6.45) holds for all \(\eta \in [\tilde{\eta }, 10]\).

Proof

By assumption, \(\Lambda (E+10 \mathrm{i})\ll 1\), so the left inequality of (6.42) holds for \(\eta = 10\). By continuity of \(\Lambda \), the same inequality,

$$\begin{aligned} \Lambda \le \frac{M^{3/2} \delta }{ |w| }, \end{aligned}$$

holds for \(w=E+\mathrm{i}\eta \) as long as \(\eta \in [\widetilde{\eta }, 10]\) and \({\varepsilon }\ge 1/M\).

Suppose that as \(\eta \) decreases, we get to Case 2a. Notice that when we decrease \(\eta \), by the conditions on \({\varepsilon }\) we will not go back to Case 1 from either Case 2a or Case 2b. For any \({\varepsilon }\le 1/M\) with \(M\) large, we have

$$\begin{aligned} \frac{M^{3/2} \delta }{ |w| } \le \frac{M \delta }{ {2\,{\varepsilon }}}. \end{aligned}$$

Hence at the transition point from Case 1 to Case 2a, the inequality \( \Lambda (E+ i \eta ) \le \frac{M \delta }{ {{\varepsilon }} } \) holds. Thus by continuity of \(\Lambda \), the bound \( \Lambda (E+ i \eta ) \le \frac{M \delta }{ {{\varepsilon }} } \) in (6.43) holds until we leave Case 2a.

It is possible that we cross from Case 2a to Case 2b. At the transition point, we have \( \delta = \frac{ {\varepsilon }^2 }{M^{3/2}}\) and thus

$$\begin{aligned} \frac{M \delta }{ {{\varepsilon }} } \le \frac{1}{2} M \sqrt{\delta }\end{aligned}$$

for \(M\) large. Hence the first inequality of Case 2b, i.e., \(\Lambda \le M \sqrt{\delta }\) holds. By continuity, this bound continues to hold unless we leave Case 2b. Since \(\delta \) is decreasing in \(\eta \) when \({\varepsilon }\) is small, once we get to Case 2b, we will not go back to Case 2a (or Case 1 as explained before).
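For completeness, we spell out the substitution behind the last display: at the transition point \(\delta = {\varepsilon }^2/M^{3/2}\), so

$$\begin{aligned} \frac{M \delta }{ {\varepsilon }} = \frac{M}{{\varepsilon }}\cdot \frac{{\varepsilon }^2}{M^{3/2}} = \frac{{\varepsilon }}{M^{1/2}}, \qquad \frac{1}{2} M \sqrt{\delta }= \frac{1}{2} M \cdot \frac{{\varepsilon }}{M^{3/4}} = \frac{1}{2} M^{1/4} {\varepsilon }, \end{aligned}$$

and the claimed inequality reduces to \(M^{3/4} \ge 2\), which certainly holds for \(M\) large.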

It is possible that Case 2a is bypassed and we get to Case 2b directly from Case 1. Notice that \({\varepsilon }=1/M\) at such a transition point and we have \(|w| \sim 1\). Furthermore, by (6.40), we get \(\delta \le 1/\log N \) at the transition point. Putting these together, we have for \(M\) large,

$$\begin{aligned} \frac{M^{3/2} \delta }{ |w| } \le \frac{1}{2} M \sqrt{\delta }. \end{aligned}$$

Hence the bound \( \Lambda (E+ i \eta ) \le M \sqrt{\delta }\) in (6.44) holds.\(\square \)

6.3 The large \(\eta \) case

Our method to estimate the Green functions and the Stieltjes transform is to fix the energy \(E\) and apply a continuity argument in \(\eta \) by first showing that the crude bound in Lemma 6.9 holds for large \(\eta \). In order to start this scheme, we need to establish estimates on the Green functions when \(\eta ={{\mathrm{O}}}(1)\). This is the main focus of this subsection. We start with the following lemma which provides a crude bound on the Green functions.

Lemma 6.11

For any \(w\in \mathrm{S}(0)\) and \(\eta >c>0\) for fixed \(c\), we have the bound

$$\begin{aligned} \max _{i,j\notin U} |G^{(\mathbb U ,\mathbb T )}_{ij}(w)| \le C \;. \end{aligned}$$
(6.49)

for some \(C>0\). Notice that this bound is deterministic, i.e., independent of the randomness.

Proof

By definition, we have

$$\begin{aligned} \left| G_{ij}\right| =\left| \sum _{\alpha }\frac{\mathbf{u}_\alpha (i)\overline{\mathbf{u}}_ \alpha (j)}{\lambda _\alpha -w}\right| \le \frac{1}{\eta }\sum _{\alpha } |\mathbf{u}_\alpha (i)||\mathbf{u}_ \alpha (j)| \le \frac{1}{\eta }\Big (\sum _{\alpha }|\mathbf{u}_\alpha (i)|^2\Big )^{1/2}\Big (\sum _{\alpha }|\mathbf{u}_\alpha (j)|^2\Big )^{1/2} = \frac{1}{\eta }\le C \end{aligned}$$

where we have used \(|\lambda _\alpha -w |\ge {{\mathrm{Im}}}\, w=\eta \), the Cauchy–Schwarz inequality, and the normalization of the eigenvectors. Furthermore, \(G^{(\mathbb U ,\mathbb T )}_{ij}\) can be bounded similarly.\(\square \)
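As a sanity check of the deterministic bound (6.49), here is a minimal numerical sketch for a sample Hermitian matrix; the matrix ensemble, the size, and the spectral parameter are arbitrary illustrative choices, since the eigenvector expansion above applies to any Hermitian matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
N, eta = 200, 0.5

# a sample Hermitian matrix and its resolvent G = (H - w)^{-1}
X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (X + X.conj().T) / np.sqrt(2 * N)
w = 1.3 + 1j * eta
G = np.linalg.inv(H - w * np.eye(N))

# |G_ij| <= ||G|| <= 1/dist(w, spec(H)) <= 1/Im(w) = 1/eta
assert np.max(np.abs(G)) <= 1 / eta + 1e-10
```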

The main result of this subsection is the following bound on \(\Lambda \).

Lemma 6.12

For any \(\zeta >0\) and \({\varepsilon }>0\), we have

$$\begin{aligned} \max _{w \in \underline{\mathrm{S}} (0), \eta = 10 }\Lambda (w)\le N^{-1/2+{\varepsilon }} \end{aligned}$$
(6.50)

with \( \zeta \)-high probability.

Proof

From (6.25)–(6.27), for \(\eta = {{\mathrm{O}}}(1)\) we have

$$\begin{aligned} \left[ \mathcal{G }^{(i, \emptyset )} _{ii} \right] ^{-1} = - w\left[ 1+ m_ G^{( i ,i)}+Z^{(i)}_{i } \right] , \quad |m_ G^{( i ,i)}- m| \le \frac{ C}{ N }\, . \end{aligned}$$

From (6.49), we have \(|G_{ij}|+|\mathcal{G }_{ij}|\le \eta ^{-1}\le {{\mathrm{O}}}(1) \) and \(|m_G^{(i,i)}|\le {{\mathrm{O}}}(1) \). Hence the large deviation estimate (6.27) becomes, with \( \zeta \)-high probability,

$$\begin{aligned} | Z^{(i)}_{i }| \le \varphi ^{C_\zeta } \sqrt{\frac{{{\mathrm{Im}}}\, m_G^{(i,i)}}{N}} \le \varphi ^{C_\zeta } N^{-1/2}. \end{aligned}$$
(6.51)

Thus for any \({\varepsilon }>0\) we have

$$\begin{aligned} \mathcal{G }_{ii}^{(i,\emptyset )}= - \frac{1}{ w(1+ m + {{\mathrm{O}}}( N^{-1/2 + {\varepsilon }}) ) }\, . \end{aligned}$$

Together with (6.11), we obtain

$$\begin{aligned} G^{-1}_{ii} = - w- wm_\mathcal{G }^{(i, \emptyset )}+\frac{|z|^2}{1+ m + {{\mathrm{O}}}( N^{-1/2 + {\varepsilon }}) }-w \mathcal Z _{i }. \end{aligned}$$

By an argument similar to the one used in (6.51), we can estimate \( \mathcal Z _{i }\) by

$$\begin{aligned} |\mathcal Z _i|\le N^{-1/2+{\varepsilon }} \end{aligned}$$

for any \({\varepsilon }>0\) with \( \zeta \)-high probability. This implies that, with \( \zeta \)-high probability,

$$\begin{aligned} G^{-1}_{ii} = - w- wm +\frac{|z|^2}{1+m +{{\mathrm{O}}}(N^{-1/2+{\varepsilon }})}+ {{\mathrm{O}}}(wN^{-1/2+{\varepsilon }}). \end{aligned}$$
(6.52)

For any \(\eta \) fixed, we claim that the following inequality between the real and imaginary parts of \(m\) holds:

$$\begin{aligned} |{{\mathrm{Re}}}m| \le {2}\sqrt{\frac{{{\mathrm{Im}}}\, m}{\eta }}. \end{aligned}$$
(6.53)

To prove this, we note that for any \(\ell \ge 1\)

$$\begin{aligned} N^{-1} \sum _{ |\lambda _j-E| \ge \ell \eta } \frac{ E - \lambda _j}{ (E- \lambda _j)^2 + \eta ^2 }&\le \frac{1}{ \ell \eta },\\ N^{-1} \sum _{ |\lambda _j-E| \le \ell \eta } \frac{| E - \lambda _j |}{ (E- \lambda _j)^2 + \eta ^2 }&\le N^{-1} \sum _{ |\lambda _j-E| \le \ell \eta } \frac{ \ell \eta }{ (E- \lambda _j)^2 + \eta ^2 } \le \ell {{\mathrm{Im}}}\, m. \end{aligned}$$

Summing up these two inequalities, we obtain \(|{{\mathrm{Re}}}\, m| \le (\ell \eta )^{-1} + \ell \, {{\mathrm{Im}}}\, m\); optimizing with \(\ell = (\eta \, {{\mathrm{Im}}}\, m)^{-1/2}\), we have proved (6.53).
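The inequality (6.53) holds for any real eigenvalue configuration. A minimal numerical spot check (with an arbitrary sample spectrum, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)
N, eta = 1000, 0.3

lam = rng.uniform(0.0, 4.0, size=N)   # an arbitrary sample spectrum
for E in np.linspace(-1.0, 5.0, 25):
    # Stieltjes transform m(E + i*eta) of the empirical spectral measure
    m = np.mean(1.0 / (lam - E - 1j * eta))
    # m has positive imaginary part, and (6.53) controls its real part
    assert m.imag > 0
    assert abs(m.real) <= 2 * np.sqrt(m.imag / eta)
```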

Assume that \({{\mathrm{Im}}}\, m \le c (\log N)^{-1}\). From (6.53), we have \(|m| \le c (\log N)^{-1/2}\). Together with \({{\mathrm{Im}}}\, w = \eta \sim 1\),

$$\begin{aligned} |m|&= N^{-1} \left| \sum _i G _{ii} \right| = N^{-1} \left| \sum _i \left( - w- wm + \frac{|z|^2}{1+m } \right) ^{-1} \right| \nonumber \\&\quad +{{\mathrm{O}}}(N^{-1/2+{\varepsilon }})\ge { \left| -w+|z|^2+o(1)\right| ^{-1}} \ge C \end{aligned}$$

for some constant \(C\). This contradicts \(|m| \le c (\log N)^{-1/2}\) and we can thus assume that \({{\mathrm{Im}}}\, m \ge c (\log N)^{-1}\) when \(\eta \sim 1\) and \(w={{\mathrm{O}}}(1)\). In this case, we also have

$$\begin{aligned} |1+m|\ge C (\log N)^{-1}. \end{aligned}$$

Then (6.52) implies for any \({\varepsilon }>0\) that with \( \zeta \)-high probability

$$\begin{aligned} G _{ii} = \left( - w- wm + \frac{|z|^2}{1+m } \right) ^{-1}+{{\mathrm{O}}}(N^{-1/2+{\varepsilon }}) \end{aligned}$$

Summing up all \(i\), we have the following equation for \(m\) with \( \zeta \)-high probability:

$$\begin{aligned} m = \frac{-1-m }{w(1+m)^2-|z|^2} +{{\mathrm{O}}}(N^{-1/2+{\varepsilon }}). \end{aligned}$$

We can rewrite this equation into the following form:

$$\begin{aligned} P_{w, z} (m)= w(1+m)^2m -|z|^2m+m+1={{\mathrm{O}}}(N^{-1/2+{\varepsilon }})\, . \end{aligned}$$
(6.54)

It can be checked (by computer calculation or by a rather involved but elementary algebraic computation) that for \(0 \le E \le 5\lambda _+\) and \(\eta = O(1)\), the third order polynomial \( P_{w, z} (m)\) has no double root and there is only one root with positive real part. We denote this root by \(m_1\) and the other two roots by \(m_2\) and \(m_3\). For \(0 \le E \le 5\lambda _+\) and \(t \le \eta \le t^{-1}\) for any fixed \(t\), the three roots are separated by order one due to compactness. Since there is no double root, we have \( |P^{\prime }_{w, z} (m_1) | \ge c > 0\) whenever \(0 \le E \le 5\lambda _+\) and \(t \le \eta \le t^{-1}\). Thus the stability of (6.54) is trivial and we have proved that in this range of parameters

$$\begin{aligned} \left| m(w, z)- m_1(w, z)\right| ={{\mathrm{O}}}(N^{-1/2+{\varepsilon }}) \end{aligned}$$

for any \({\varepsilon }>0\) with \( \zeta \)-high probability.\(\square \)

6.4 Proof of the weak local Green function estimates

In this subsection, we finish the proof of Theorem 6.1. We fix an energy \(E\) and we will decrease the imaginary part \(\eta \) of \(w= E + i \eta \). Recall that all stability results are based on assumption (6.20), i.e., \( \Lambda \le \alpha |m_c| \sim \alpha |w|^{-1/2}\) for some small constant \(\alpha \), which so far was established only for large \(\eta \) in (6.50). We would like to know that this condition continues to hold for smaller \(\eta \). More precisely, suppose that (6.20) holds in a set \(A\) for all \(w=E+\eta i\) with \(\eta \in [\widetilde{\eta }, 10]\) where \(\widetilde{\eta }\) satisfies

$$\begin{aligned} \widetilde{\eta }\ge \varphi ^b N^{-1}|w|^{1/2}, \quad b > 5 Q_\zeta . \end{aligned}$$
(6.55)

We can choose \( \tilde{\eta }= \eta _1< \eta _2< \cdots < \eta _n = 10\) such that \( |\eta _{i+1} - \eta _i | \le N^{-20}\) and \(n = O(N^{20})\). By (6.21) and (6.50) we have with \( \zeta \)-high probability in \(A\),

$$\begin{aligned} \Upsilon (w) \le {{\mathrm{O}}}(\varphi ^{Q_\zeta }\Psi )(w) \le \varphi ^{Q_\zeta } \sqrt{ \frac{|w|^{-1/2}}{ N \eta } } \end{aligned}$$
(6.56)

for all \(w = E + i \eta _j\) for all \(1 \le j \le n\). Since \(\Lambda ( E + i \eta ) \) is continuous in \(\eta \) at a scale, say, \(N^{-10}\), (6.56) holds for all \(\eta \in [\widetilde{\eta }, 10]\) with \( \zeta \)-high probability in \(A\). Hence for \(\widetilde{\eta }\) satisfying (6.55) the estimate (6.41) holds with

$$\begin{aligned} \delta =C\varphi ^{Q_\zeta } |w|\left( \frac{|w|^{-1/2} }{N\eta }\right) ^{1/2} \end{aligned}$$

With this choice, we can check that the assumption on \(\delta \), (6.40), holds as well. Furthermore, \(\delta \) is decreasing in \(\eta \) when \({\varepsilon }=\sqrt{\kappa +\eta }\) is small enough. By Corollary 6.10, (6.45) holds for all \(\eta \in [\widetilde{\eta }, 10]\).

For \(|z| < 1-t\) for some \( t >0\), if \(\kappa \ll 1\) then \(|w| \sim 1\) and (6.45) implies

$$\begin{aligned} \Lambda \le C \sqrt{ \delta (w)} \le \varphi ^{Q_\zeta /2} \left( \frac{1 }{N\eta }\right) ^{1/4}. \end{aligned}$$

If \(\kappa \ge c > 0\) for some \(c> 0\) then

$$\begin{aligned} \Lambda \le C \delta (w) |w|^{-1} \le C \varphi ^{Q_\zeta }\left( \frac{|w|^{-1/2} }{N\eta }\right) ^{1/2} \le C \varphi ^{ Q_\zeta } \frac{1}{| w^{1/2}|} \left( \frac{| w|^{ 1/2}}{N\eta } \right) ^{1/4}\!\!\!.\qquad \end{aligned}$$
(6.57)

Combining both cases, for any \(w\in \underline{\mathrm{S}}(b), b > 5 Q_\zeta \), we have with \( \zeta \)-high probability in \(A\) that

$$\begin{aligned} \Lambda \le \varphi ^{ Q_\zeta } \frac{1}{| w^{1/2}|} \left( \frac{| w|^{ 1/2}}{N\eta } \right) ^{1/4} \le C \varphi ^{ -Q_\zeta /5} | w|^{ - 1/2} \sim C \varphi ^{ -Q_\zeta /5} |m_\mathrm{c}|. \end{aligned}$$
(6.58)

Suppose that \(\hat{\eta }:= \widetilde{\eta }- N^{-20} \in \underline{\mathrm{S}} (b)\) for some \(b > 5 Q_\zeta \). Then for any \( \eta \in [\widetilde{\eta }- N^{-20}, \widetilde{\eta }]\), by (6.58) and the continuity of \(\Lambda \), we have

$$\begin{aligned} \Lambda (E + i \eta )&\le \Lambda (E + i \widetilde{\eta }) + N^{-10} \le C \varphi ^{ -Q_\zeta /5} | w|^{ - 1/2} + N^{-10}\nonumber \\&\le \alpha |m_c(E + i \hat{\eta })| /2 \end{aligned}$$

Thus the condition (6.20) in Lemma 6.7 is satisfied with \( \zeta \)-high probability in \(A\). Since we can start this procedure with \(\widetilde{\eta }= 10\) and there are only \(N^C\) steps to get to \(\widetilde{\eta }= \varphi ^{5 Q_\zeta } N^{-1}|w|^{1/2}\), we have proved that (6.58) holds for all \(w \in \underline{\mathrm{S}} (b)\) with \(b > 5 Q_\zeta \). Notice that from now on the assumption (6.20) holds with \( \zeta \)-high probability.

We can now prove the estimate (6.1) on the diagonal term. Comparing (6.35) with (6.38) (with \(\mathbb{T }=\mathbb{U }=\emptyset \)), for any \(w\in \underline{\mathrm{S}}(b), b > 5 Q_\zeta \), we have with \( \zeta \)-high probability

$$\begin{aligned} |G_{ii}-m|\le O(\varphi ^{ Q_\zeta }\Psi ) \end{aligned}$$
(6.59)

By definition of \(\Psi \), (6.58) and \(m_c\sim |w^{-1/2}| \), we have

$$\begin{aligned} \Psi = \left( \sqrt{\frac{{{\mathrm{Im}}}\, m_\mathrm{c}+\Lambda }{N\eta }}+\frac{1}{N\eta }\right) \le \left( \sqrt{\frac{|w|^{-1/2} }{N\eta }}+\frac{1}{N\eta }\right) . \end{aligned}$$

Using the restriction on \(\eta \) so that \(N \eta \ge |w|^{1/2} \varphi ^{ 5 Q_\zeta }\), we have

$$\begin{aligned} \Psi \le C \sqrt{\frac{ |w|^{-1/2} }{N\eta }} \le C |w|^{-1/2} \left( \frac{\sqrt{w}}{N\eta }\right) ^{1/4}. \end{aligned}$$
(6.60)

With (6.57) and (6.59), we have thus proved that

$$\begin{aligned} \max _i \big | G_{ii}- m_\mathrm{c} \big | \le \varphi ^{Q_\zeta }|w^{-1/2}|\left( \frac{\sqrt{w}}{N\eta }\right) ^{1/4} \end{aligned}$$

for any \(w\in \underline{\mathrm{S}}(b), b > 5 Q_\zeta \). Hence the estimate (6.1) on the diagonal element \(G_{ii}\) holds.

To conclude Theorem 6.1, it remains to prove the estimate on the off-diagonal elements. Recall the identity (6.12) for \(G_{ij}\) and the Eqs. (10.3) and (10.4). We can estimate the off-diagonal Green function by

$$\begin{aligned} \Big | G_{ij} \Big |&= \Big | w G_{ii} G^{(i,\emptyset )}_{jj} |z|^2 \mathcal{G }^{(ij, \emptyset )}_{ij} \Big | \nonumber \\&+ {{\mathrm{O}}}\left( \varphi ^{Q_\zeta } \sqrt{\frac{{{\mathrm{Im}}}\, m_\mathcal{G }^{(ij,\emptyset )} +|z|^2{{\mathrm{Im}}}\mathcal{G }^{(ij, \emptyset )}_{ii}+|z|^2 {{\mathrm{Im}}}\mathcal{G }^{(ij, \emptyset )}_{jj}}{N\eta }} \right) , \quad i\ne j,\nonumber \\ \Big | G_{ij} \Big |&= \Big | |z|^2 \mathcal{G }^{(ij, \emptyset )}_{ij} \Big | + {{\mathrm{O}}}\left( \varphi ^{Q_\zeta } \Psi \right) , \quad i\ne j. \end{aligned}$$
(6.61)

Here we have used \(|G_{ii} G^{(i,\emptyset )}_{jj}|=O(|w|^{-1})\), which follows from (6.36), \(\Lambda \ll m_c\) and \(|m_c|\sim |w^{-1/2}|\).

Recall the identity (6.14) that

$$\begin{aligned} {\mathcal{G }_{ij}^{(ij, \emptyset )}} = -w\mathcal{G }_{ii}^{(ij, \emptyset )}\mathcal{G }^{(ij, i)}_{jj}\left( \mathrm{y}_i^{(ij)} G^{(ij,ij)} \mathrm{y}_j^{(ij)*}\right) , \quad i\ne j. \end{aligned}$$

By (10.2), we have

$$\begin{aligned} \left| \left( \mathrm{y}_i^{(ij)} G^{(ij,ij)} \mathrm{y}_j^{(ij)*}\right) \right| \le \varphi ^{Q_\zeta } \sqrt{\frac{|{{\mathrm{Im}}}\, m_ G^{(ij,ij)}| }{N\eta }}\, . \end{aligned}$$

where we have used (10.4) and that, by definition, \({{\mathrm{Im}}}\, G^{(ij,ij)}_{ii}= 0= {{\mathrm{Im}}}\, G^{(ij,ij)}_{jj} \). Therefore, we have with \( \zeta \)-high probability,

$$\begin{aligned} \Big | {\mathcal{G }_{ij}^{(ij, \emptyset )}} \Big | \le \varphi ^{Q_\zeta } \sqrt{\frac{ {{\mathrm{Im}}}\, m_\mathrm{c}+\Lambda +(N\eta )^{-1} }{N\eta }}\le \varphi ^{Q_\zeta } \Psi , \quad i\ne j, \end{aligned}$$
(6.62)

where we also used \(|\mathcal{G }_{ii}^{(ij, \emptyset )}\mathcal{G }^{(ij, i)}_{jj} |\le C|m_c|^2\le C |w|^{-1}\). Together with (6.61) and (6.36), we have proved that with \( \zeta \)-high probability

$$\begin{aligned} \Big | G_{ij} \Big | \le \varphi ^{Q_\zeta }\Psi , \quad i\ne j\, . \end{aligned}$$
(6.63)

With (6.60), this proves Theorem 6.1 for the off-diagonal elements provided that \(w\in \underline{\mathrm{S}}(b)\) with \( b > 5 Q_\zeta \). Finally, we rename \(b\) as \(C_\zeta \) and this concludes the proof of Theorem 6.1.

7 Proof of the strong local Green function estimates

Lemma 6.7 provides an error estimate to the self-consistent equation of \(m\) linearly in \(\Psi \). The following Lemma improves this estimate to quadratic in \(\Psi \). This is the key improvement leading to a proof of the strong local Green function estimates, i.e., Theorem 3.4.

Lemma 7.1

For any \(\zeta >1\), there exists \(R_\zeta > 0 \) such that the following statement holds. Suppose for some deterministic number \(\widetilde{\Lambda }(w, z)\) (which can depend on \(\zeta \)) we have

$$\begin{aligned} \Lambda (w, z) \le \widetilde{\Lambda }(w, z) \ll m_c (w, z) \end{aligned}$$

for \( w \in \underline{\mathrm{S}} ( b), b > 5 R_\zeta \), in a set \(\Xi \) with \(\mathbb P (\Xi ^c) \le e^{-p_{N}(\log N)^2 }\), where \(p_N\) satisfies

$$\begin{aligned} \varphi \le p_N \le \varphi ^{ 2\zeta }. \end{aligned}$$
(7.1)

Then there exists a set \(\Xi ^{\prime }\) such that \( \mathbb P (\Xi ^{\prime c}) \le e^{-p_{N} } \) and

$$\begin{aligned} \mathcal D (m(w,z))\le \frac{1}{2} \varphi ^{R_\zeta } |m_\mathrm{c}|^{-3} \widetilde{\Psi }^2 , \quad \widetilde{\Psi }:=\sqrt{ \frac{{{\mathrm{Im}}}\,m_\mathrm{c}+\widetilde{\Lambda }}{N\eta }} + \frac{1}{N\eta },\quad \mathrm{in}\quad \Xi ^{\prime }. \end{aligned}$$
(7.2)

Notice that the probability deteriorates in the exponent by a \((\log N)^{-2}\) factor.

We remark that, by Lemma 4.1, \({{\mathrm{Im}}}\, m_\mathrm{c} \ll |m_\mathrm{c}|\) when \(\eta + \kappa \ll 1\). Hence we have to track the dependence of \({{\mathrm{Im}}}\, m_\mathrm{c}\) carefully in the previous Lemma. This is one major difference between the weak and strong local Green function estimates. Similar phenomena occur for the Stieltjes transforms of the eigenvalue distributions of Wigner matrices. Lemma 7.1 will be proved later in this section; we now use it to prove Theorem 3.4. We first give a heuristic argument.

Suppose that we have the estimate (7.2) with \(\widetilde{\Psi }\) replaced by \(\Psi \). We assume \(\Lambda \ge (N\eta )^{-1} \) for convenience so that \(\Psi ^2 \sim ({{\mathrm{Im}}}\, m_\mathrm{c}+\Lambda )/(N\eta ) \) (If this assumption is violated then (3.5) holds automatically and we have nothing to prove). Then we can apply Corollary 6.10 by choosing

$$\begin{aligned} \delta = \varphi ^{R_\zeta }|w|^{3/2} \left[ \frac{{{\mathrm{Im}}}\, m_\mathrm{c}+\Lambda }{N\eta } \right] \end{aligned}$$
(7.3)

which implies (6.45). Consider first the case \(\kappa + \eta \sim {{\mathrm{O}}}(1)\). Using (6.45) with the choice of \(\delta \) in (7.3) and \(\kappa +\eta +\delta \ge {{\mathrm{O}}}(1)\), we have

$$\begin{aligned} \Lambda \le \varphi ^{R_\zeta } |w|^{1/2} \left[ \frac{{{\mathrm{Im}}}\, m_\mathrm{c}+\Lambda }{N\eta } \right] . \end{aligned}$$

When \(\eta \) satisfies the condition (6.55), the coefficient of \(\Lambda \) on the right side of the last equation is smaller than \(1/2\). Hence, using \({{\mathrm{Im}}}\, m_\mathrm{c}\le |m_\mathrm{c}|\le C |w|^{-1/2}\) (see Proposition 3.2), we have

$$\begin{aligned} \Lambda \le C \varphi ^{R_\zeta }\left[ \frac{ |w|^{ 1/2} {{\mathrm{Im}}}m_\mathrm{c}}{N \eta } \right] \le C\varphi ^{R_\zeta } \frac{1}{ N\eta }\, . \end{aligned}$$

We now consider the case \(\kappa + \eta \ll 1\) and thus \(|w| \sim {{\mathrm{O}}}(1)\). From the first inequality of (6.45), we have

$$\begin{aligned} \Lambda \le C \frac{ \delta (w)|w|^{-1}}{ \sqrt{\kappa +\eta +\delta (w)}} \le C \sqrt{\delta (w) }. \end{aligned}$$
(7.4)

Also, in the regime \(\kappa + \eta \ll 1\), (4.4) asserts that

$$\begin{aligned} {{\mathrm{Im}}}\, m_\mathrm{c} \le C \sqrt{\kappa +\eta } , \quad \frac{ {{\mathrm{Im}}}\, m_\mathrm{c} }{ N \eta \sqrt{\kappa +\eta + \delta }} \le \frac{C}{ N \eta }\, . \end{aligned}$$

Using the choice of \(\delta \) in (7.3), we have

$$\begin{aligned} \Lambda \le C \varphi ^{R_\zeta } |w|^{1/2} \frac{ {{\mathrm{Im}}}\, m_\mathrm{c}+\Lambda }{ N \eta \sqrt{\kappa +\eta +\delta }} \!\le \! C\varphi ^{R_\zeta } \frac{1 }{ N \eta } + C \varphi ^{R_\zeta } \frac{ \Lambda }{ N \eta \sqrt{\kappa +\eta +\delta }} \!\le \! C^{\prime } \varphi ^{R_\zeta } \frac{1 }{ N \eta } \end{aligned}$$

where we have used (7.4) to absorb the last term involving \(\Lambda \) in the last inequality with a change of constant \(C\). This completes the heuristic proof of Theorem 3.4. We now give a formal proof of this theorem assuming Lemma 7.1.

Proof of Theorem 3.4

We first prove (3.6) assuming (3.5). By (6.63) and the definition of \(\Psi \), we have for \(i\ne j\),

$$\begin{aligned} \Big | G_{ij} \Big | \le \varphi ^{{ R_\zeta }} \left[ \sqrt{ \frac{{{\mathrm{Im}}}\,m_\mathrm{c}+ \Lambda }{N\eta } } +\frac{1}{N\eta } \right] \le \varphi ^{{ R_\zeta }} \left[ \sqrt{ \frac{{{\mathrm{Im}}}\,m_\mathrm{c}}{N\eta } } +\frac{1}{N\eta } \right] \end{aligned}$$

where we have used (3.5) in the last step. This proves (3.6).

The main task in proving Theorem 3.4 is to prove (3.5). We first consider the case that \(|z| \le 1-t\). We assume that \( \zeta \) is large enough, e.g., \(\zeta \ge 10\). By Theorem 6.1 and \(m_c \sim |w|^{-1/2}\) (4.9) for \(|z| < 1-t\), there exists a constant \(C_{\zeta +5}\) such that for any \( w \in \underline{\mathrm{S}} (b), b > 5 C_{\zeta +5}\) and \(\alpha \ll 1\), we have

$$\begin{aligned} \Lambda ( w)\le \Lambda _1:= \alpha |m_\mathrm{c}| \sim O(\alpha |w|^{-1/2}), \end{aligned}$$
(7.5)

holds with probability larger than \(1-\exp (-\varphi ^{\zeta +5})\) (here we have replaced \(\zeta \) in Theorem 6.1 by \(\zeta + 5\) for the convenience of the following argument). Since \(\underline{\mathrm{S}}(b)\) is decreasing in \(b\), we can choose \(D_\zeta = 5 \max (C_{\zeta +5}, R_\zeta ) \) so that we can apply Lemma 7.1 with \(p_N = \varphi ^{\zeta +5}\) (which guarantees (7.1)). Together with \(\Lambda _1 \le |m_c|\), we have, for any \( w \in \underline{\mathrm{S}}(D_\zeta )\) fixed,

$$\begin{aligned} \mathcal D (m)\le \frac{1}{2}\varphi ^{R_\zeta } |m_\mathrm{c}|^{-3} \Psi _1^2 , \quad \Psi _1 :=\sqrt{ \frac{{{\mathrm{Im}}}\,m_\mathrm{c}+ |m_c| }{N\eta }} +\frac{1}{N\eta }, \end{aligned}$$
(7.6)

holds with probability larger than \(1-\exp (-\varphi ^{\zeta +5}(\log N)^{ -2})\). Notice that the application of Lemma 7.1 causes the probability in the exponent to deteriorate by a \((\log N)^{-2}\) factor.

Using (7.6), we can apply Corollary 6.10 with

$$\begin{aligned} \delta =\delta _1:= \varphi ^{ R_\zeta } |m_\mathrm{c}|^{-3} \Psi _1^2. \end{aligned}$$
(7.7)

Here the assumption of \(\Lambda (E+10\mathrm{i})\) is guaranteed by (7.5). By definition of \(\Psi _1\) (7.6) and \(|m_c| \sim |w|^{-1/2}\) (4.9), for \(w \in \underline{\mathrm{S}}(D_\zeta )\), we have

$$\begin{aligned} \delta \le \varphi ^{ R_\zeta }\frac{|w|}{N\eta } \ll (\log N)^{-8} |w|^{1/2}. \end{aligned}$$

Furthermore, it is easy to prove that \(\delta \) is decreasing in \(\eta \) when \(\kappa +\eta \) is small. We have thus verified the assumptions on \(\delta \) in Corollary 6.10 with the choice \(\delta = \delta _1\) given in (7.7). From (6.45), we obtain for \(w \in \underline{\mathrm{S}}(D_\zeta )\), with \(C_0\) being the \(C\) in (6.45),

$$\begin{aligned} \Lambda \le C_0\frac{\delta _1 |w|^{-1}}{\sqrt{\kappa +\eta +\delta _1}}\le C_0 \frac{ \varphi ^{ R_\zeta } }{N\eta \sqrt{\kappa +\eta +\delta _1}} \end{aligned}$$

holds with probability larger than \(1-\exp (-\varphi ^{\zeta +5}(\log N)^{- 2})\). We have thus proved (3.5) provided that \(\kappa +\eta \ge (\log N)^{-1}\).

We now prove (3.5) when \(\kappa +\eta \le (\log N)^{-1} \). We have in this case \( |w|\sim 1\). We apply Lemma 7.1 with \(\widetilde{\Lambda }= \Lambda _1= |m_c| \sim 1 \) given by (7.5). Thus (7.6) holds and we apply Corollary 6.10 with \(\delta = \delta _1\) (7.7). Since \(\Lambda _1\ge (N\eta )^{-1}\) and \({{\mathrm{Im}}}\, m_c\sim \sqrt{\kappa +\eta }\) (4.4), the conclusion of Corollary 6.10 implies that for \(w \in \underline{\mathrm{S}}(D_\zeta )\),

$$\begin{aligned} \Lambda \le C_0 \varphi ^{R_\zeta } |w|^{1/2} \frac{ {{\mathrm{Im}}}\, m_\mathrm{c}+\Lambda _1 }{ N \eta \sqrt{\kappa +\eta +\delta _1}} \le C_1\varphi ^{R_\zeta } \frac{1 }{ N \eta } + C_1 \varphi ^{R_\zeta } \frac{ \Lambda _1 }{ N \eta \sqrt{\delta _1}} \end{aligned}$$

holds with probability larger than \(1-\exp (-\varphi ^{\zeta +5}(\log N)^{- 2})\). Here \(C_1\) depends only on \(C_0\). From the definition of \(\delta _1\) and \(\Psi _1\), we have

$$\begin{aligned} \varphi ^{R_\zeta } \frac{ \Lambda _1 }{ N \eta \sqrt{\delta _1}} \le \varphi ^{ R_\zeta /2} \frac{ |m_c|^{3/2} }{ N \eta } \frac{ \Lambda _1 }{ \Psi _1} \le C_2\varphi ^{ R_\zeta /2 } \left( \frac{\Lambda _1 }{N\eta }\right) ^{1/2}, \end{aligned}$$

where for the last inequality we used

$$\begin{aligned} \Psi _1\ge \sqrt{ \Lambda _1 /( N\eta )}. \end{aligned}$$

Since \(\Lambda _1\ge (N\eta )^{-1} \), combining the last two inequalities, for \(w \in \underline{\mathrm{S}}(D_\zeta )\), we have

$$\begin{aligned} N \eta |\Lambda | \le C_3 \varphi ^{ R_\zeta }+ C_3 \varphi ^{ R_\zeta /2 } \left( N\eta \Lambda _1 \right) ^{1/2} \le \varphi ^{ R_\zeta } \left( N\eta \Lambda _1 \right) ^{1/2} \end{aligned}$$
(7.8)

holds with probability larger than \(1-\exp (-\varphi ^{\zeta +5}(\log N)^{- 2})\) for some \(C_3\). Notice that we have used \( N \eta \ge \varphi ^{5 R_\zeta }\) in the last step in (7.8).

Repeating this process with the choices

$$\begin{aligned} N \eta \Lambda _2 { :}= \varphi ^{ R_\zeta } \left( N\eta \Lambda _1 \right) ^{1/2} ,\quad \Psi _2 :=\sqrt{\frac{{{\mathrm{Im}}}\,m_\mathrm{c}+\Lambda _2}{N\eta } } +\frac{1}{N\eta } ,\quad \delta _2:= \varphi ^{ R_\zeta } |m_\mathrm{c}|^{-3} \Psi _2^2, \end{aligned}$$

for \(w \in \underline{\mathrm{S}}(D_\zeta )\), we obtain that

$$\begin{aligned} N \eta |\Lambda | \le { C_3} \varphi ^{ R_\zeta }+ {C_3} \varphi ^{ R_\zeta /2 } \left( N\eta \Lambda _2 \right) ^{1/2} \le \varphi ^{ R_\zeta } \left( N\eta \Lambda _2 \right) ^{1/2} \end{aligned}$$

holds with probability larger than \(1- \exp (-\varphi ^{\zeta +5}(\log N)^{-4})\). Notice that the last constant \(C_3\) is the same as the one appearing in (7.8) and it does not change in the iteration procedure. We now iterate this process \(K\) times to obtain

$$\begin{aligned} N \eta |\Lambda | \le \varphi ^{ R_\zeta } \left( N\eta \Lambda _K \right) ^{1/2} \le \varphi ^{ 2 R_\zeta }\left( N\eta \Lambda _1 \right) ^{1/2^K} \end{aligned}$$

holds with probability larger than \(1- \exp (-\varphi ^{\zeta +5}(\log N)^{- 2 K})\). We need \(K\) so large that

$$\begin{aligned} \left( \Lambda _1 N\eta \right) ^{1/(2^K)} \le (C N)^{1/(2^K)} \le \varphi , \end{aligned}$$

i.e.,

$$\begin{aligned} K\ge \frac{\left( \log \log ( CN)-\log \log \varphi \right) }{\log 2} = \frac{\left( \log \log ( CN)-2\log \log \log N \right) }{\log 2} \end{aligned}$$

On the other hand, we need \(K\) small enough so that

$$\begin{aligned} 1-\exp (-\varphi ^{\zeta +5}(\log N)^{-2K})\ge 1 -\exp (-\varphi ^{\zeta }), \quad \text { i.e.,} \; \varphi ^{5}(\log N)^{-2K} \ge 1 . \end{aligned}$$
(7.9)

We note that it also guarantees (7.1), since \(\varphi ^{\zeta +5}\ge p_1\ge p_2\ge \cdots \ge p_K\ge \varphi \). We choose \(K = \log \log N/\log 2 \) and we have thus proved that

$$\begin{aligned} N\eta |\Lambda | \le \varphi ^{ 2 R_\zeta +{1}} \end{aligned}$$
(7.10)

with probability larger than \(1- \exp (-\varphi ^{\zeta })\), which implies (3.5) when \(\kappa +\eta \le (\log N)^{-1}\). This completes the proof of Theorem 3.4.\(\square \)
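The counting in the iteration above can be illustrated by a short simulation. The sketch below uses the choice \(\varphi = (\log N)^{\log \log N}\), which is standard for this class of arguments but defined outside this section (hence an assumption here), along with a hypothetical stand-in value for \(R_\zeta \); it checks that \(K = \log \log N/\log 2\) steps drive \(N\eta \Lambda _1 \le CN\) below \(\varphi ^{2R_\zeta +1}\), as in (7.10):

```python
import math

N = 10**6                       # sample matrix size
logN = math.log(N)
phi = logN ** math.log(logN)    # assumed: phi = (log N)^{log log N}
R = 1                           # hypothetical stand-in for R_zeta

K = math.ceil(math.log2(logN))  # K = log log N / log 2, so 2^K >= log N
x = float(N)                    # x_1 = N*eta*Lambda_1 <= C*N (take C = 1)
for _ in range(K):
    # one iteration step: N*eta*Lambda_{k+1} = phi^R (N*eta*Lambda_k)^{1/2}
    x = phi**R * math.sqrt(x)

# unrolled: x_K <= phi^{2R} N^{1/2^K}, and N^{1/2^K} <= phi, giving (7.10)
assert N ** (1.0 / 2**K) <= phi
assert x <= phi ** (2 * R + 1)
```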

7.1 Proof of Lemma 7.1

The first step in proving Lemma 7.1 is to derive a second order self-consistent equation which identifies the first order dependence of the correction in the self-consistent equation derived in Lemma 6.7. The second order error terms will be bounded by \(\Psi ^2\); the first order terms are averages of \(Z^{(i)}_i\) and \(\mathcal{Z }_i\). In Lemma 7.3, the averages of \(Z^{(i)}_i\) and \(\mathcal{Z }_i\) will be estimated by \(\Psi ^2\). This improvement from the naive order \(\Psi \) to \(\Psi ^2\) is the key ingredient to obtain the strong local law. We remark that \({{\mathrm{Im}}}\, m_\mathrm{c} \ll |m_\mathrm{c}|\) when \(\eta + \kappa \ll 1\). Hence the dependence on \({{\mathrm{Im}}}\, m_\mathrm{c}\) versus \(m_\mathrm{c}\) has to be tracked carefully. We now state the second order self-consistent equation as the following lemma.

Lemma 7.2

(Second order self-consistent equation) For any constant \(\zeta >0\), there exists \(C_\zeta >0\) such that for \( w\in \underline{\mathrm{S}}(b), b\ge 5C_\zeta \) with \( \zeta \)-high probability

$$\begin{aligned} \mathcal D (m) \le {{\mathrm{O}}}\left( \varphi ^{C_\zeta } \frac{1 }{m_\mathrm{c}^3} \Psi ^2 + w[\mathcal Z ]+m_\mathrm{c}^{-2}[ Z_*^*]\right) \end{aligned}$$
(7.11)

where

$$\begin{aligned}{}[ Z_*^*] = N^{-1} \sum _i Z^ {(i)}_{i}, \quad [ \mathcal Z ] = N^{-1} \sum _i \mathcal Z _{i}\, . \end{aligned}$$

Proof

We have proved the weak local Green function estimate, i.e., Theorem 6.1, in Sect. 6. This in particular implies that (6.20) holds with \( \zeta \)-high probability in \(\underline{\mathrm{S}}(b)\) for large enough \(b\). With this remark in mind, we now prove Lemma 7.2.

We first take the inverse of both sides of (6.33) and sum over \(i\) to get, with \( \zeta \)-high probability,

$$\begin{aligned} { N^{-1} } \sum _i G_{ii}^{-1}&= -w-wm+\frac{|z|^2}{1+m}+w[\mathcal Z ] { - \frac{|z|^2}{ (1+ m)^2} [ Z_*^*] } \nonumber \\&+ { N^{-1} } \sum _i {{\mathrm{O}}}\left( \frac{ (Z^ {(i)}_{i})^2 + \frac{1}{ (N \eta )^2} }{ (1+ m)^3} \right) \nonumber \\&+ |w| {{\mathrm{O}}}\left( \frac{1}{N}\sum _i m_\mathcal{G }^{(i,\emptyset )}-m\right) +|m_\mathrm{c}|^{-2}{{\mathrm{O}}}\left( \left| \frac{1}{N}\sum _i m^{(i,i)}-m \right| \right) ,\nonumber \\ \end{aligned}$$
(7.12)

where we have used (6.30) and the bound (6.22). Recall the estimates of \(\mathcal Z _i\) and \(Z^{(i)}_i\) by \(\Psi \) in (6.27) and (6.32). Hence we have

$$\begin{aligned} {N^{-1} } \sum _i G_{ii}^{-1}&= -w-wm+\frac{|z|^2}{1+m}+\varphi ^{C_\zeta } {{\mathrm{O}}}(m_\mathrm{c}^{-3} \Psi ^2) \nonumber \\&+{{\mathrm{O}}}(w[\mathcal Z ])+{{\mathrm{O}}}(m_\mathrm{c}^{-2}[ Z_*^*] ) + |w| {{\mathrm{O}}}\left( \frac{1}{N}\sum _i m_\mathcal{G }^{(i,\emptyset )}-m\right) \nonumber \\&+|m_\mathrm{c}|^{-2}{{\mathrm{O}}}\left( \left| \frac{1}{N}\sum _i m^{(i,i)}-m \right| \right) . \end{aligned}$$
(7.13)

By (6.59)–(6.60), we have

$$\begin{aligned} |G_{ii}-m|\le {{\mathrm{O}}}(\varphi ^{Q_\zeta }\Psi ) \ll |m_\mathrm{c}|, \end{aligned}$$
(7.14)

where \(b\ge 5 Q_\zeta \) and \(Q_\zeta \) is defined in Lemma 10.1. We now perform the expansion \(G_{ii} ^{-1} = [(G_{ii}-m) + m] ^{-1}\) to have

$$\begin{aligned} G_{ii} ^{-1}=m ^{-1}- \frac{G_{ii}-m}{m^2}+O( \varphi ^{{ 2}Q_\zeta } |m_\mathrm{c}|^{-3}\Psi ^2). \end{aligned}$$
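The expansion above is a second-order Taylor expansion of \(x\mapsto x^{-1}\) around \(m\), with increment \(G_{ii}-m\). A minimal numerical sanity check of the error size (the complex values for \(m\) and \(G_{ii}-m\) are purely illustrative, not from the model):

```python
# Check of 1/(m + d) = 1/m - d/m^2 + O(|d|^2 / |m|^3),
# used above with d = G_ii - m and |d| << |m|.
m = 0.8 + 0.5j          # plays the role of m
d = 0.01 - 0.02j        # plays the role of G_ii - m

exact = 1.0 / (m + d)
approx = 1.0 / m - d / m**2
err = abs(exact - approx)

# The remainder is of size |d|^2 / |m|^3, much smaller than the
# first-order error |d| / |m|^2.
assert err < 10 * abs(d) ** 2 / abs(m) ** 3
assert err < abs(exact - 1.0 / m)
```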

Using this approximation in (7.13), we have

$$\begin{aligned}&m^{-1} +w+wm-\frac{|z|^2}{1+m} = \varphi ^{{ 2}Q_\zeta } {{\mathrm{O}}}(m_\mathrm{c}^{-3} \Psi ^2) +{{\mathrm{O}}}(w[\mathcal Z ])+{{\mathrm{O}}}(m_\mathrm{c}^{-2}[ Z_*^*]) \qquad \qquad \quad \end{aligned}$$
(7.15)
$$\begin{aligned}&+ |w| {{\mathrm{O}}}\left( \frac{1}{N}\sum _i m_\mathcal{G }^{(i,\emptyset )}-m\right) +|m_\mathrm{c}|^{-2} {{\mathrm{O}}}\left( \left| \frac{1}{N}\sum _i m^{(i,i)}-m \right| \right) . \end{aligned}$$
(7.16)

Using (6.2), we have

$$\begin{aligned} \frac{1}{N}\sum _i m_\mathcal{G }^{(i,\emptyset )}-m=\frac{1}{N}\sum _i m_G^{(i,\emptyset )}-m +\frac{C}{Nw}. \end{aligned}$$

Furthermore, with (6.4) we have

$$\begin{aligned} m_ G^{(i,\emptyset )}-m = \frac{1}{N} \left( G_{ii}+\sum _{j \ne i}\frac{ G_{ji} G_{ij} }{ G_{ii}}\right) =\frac{1}{N}\sum _j \frac{ G_{ji} G_{ij}}{ G_{ii}} = {{\mathrm{O}}}\left( \frac{{{\mathrm{Im}}}G_{ii}}{N\eta |G_{ii}|}\right) .\nonumber \\ \end{aligned}$$
(7.17)

The diagonal element \(G_{ii}\) can be estimated by (7.14) so that

$$\begin{aligned} \left| \frac{{{\mathrm{Im}}}\, G_{ii}}{N\eta |G_{ii}|} \right| \le \varphi ^{Q_\zeta } \frac{{{\mathrm{Im}}}\, m_\mathrm{c}+\Lambda + \Psi }{N\eta |m_\mathrm{c}|} \le \varphi ^{Q_\zeta } \frac{\Psi ^2 }{ |m_\mathrm{c}| }. \end{aligned}$$

Therefore, we have

$$\begin{aligned} {{\mathrm{O}}}\left( \frac{1}{N}\sum _i m_\mathcal{G }^{(i,\emptyset )}\!-\!m\right) \!\le \!{{\mathrm{O}}}\left( \frac{1}{N}\sum _i m_G^{(i,\emptyset )}-m\right) \!+\!\frac{C}{N |w| } \!\le \! \varphi ^{Q_\zeta } |m_\mathrm{c}|^{-1} \Psi ^2 \!+\!\frac{C}{N |w| }.\nonumber \\ \end{aligned}$$
(7.18)

Notice that only the imaginary part of \(m_\mathrm{c}\) appears, through \(\Psi \), rather than \(m_\mathrm{c}\) itself, which can be much bigger near the spectral edge.

We now estimate the last term in (7.16). Notice that \(\mathcal{G }^{(i, \emptyset )}\) is the Green function of the matrix \(A^* A\) where \(A = (Y^{(i, \emptyset )})^*\). Then \(m^{(i,i)}\) is the Stieltjes transform of the Green function of \((A^{(i)})^{*} A^{(i)}\), where \(A^{(i)} = Y^{(i, i)}\). Thus we can apply (7.17) (which holds for matrices of the form \(A^* A\) with \(A\) not necessarily a square matrix) to get

$$\begin{aligned} |m_\mathcal{G }^{(i,\emptyset )} - m^{(i,i)}|\le {{\mathrm{O}}}(\frac{{{\mathrm{Im}}}\mathcal{G }^{(i,\emptyset )}_{ii}}{N\eta |\mathcal{G }^{(i,\emptyset )}_{ii}|}) . \end{aligned}$$

By (6.31), we have

$$\begin{aligned} {{\mathrm{Im}}}\, \mathcal{G }^{(i , \emptyset )} _{ii}\le C \left( {{\mathrm{Im}}}\, m_\mathrm{c}+ \Lambda + \varphi ^{C_\zeta } \Psi \right) \!. \end{aligned}$$

By (6.30) and (6.29),

$$\begin{aligned} |\mathcal{G }^{(i , \emptyset )}_{ii}|\sim |w^{-1/2}|\sim |m_\mathrm{c}|. \end{aligned}$$

These estimates imply that

$$\begin{aligned} \left| \frac{1}{N}\sum _i m^{(i,i)}\!-\!m \right| \!\le \! \left| \frac{1}{N}\sum _i m_\mathcal{G }^{(i,\emptyset )}\!-\!m \right| \!+\!\frac{1}{N}\sum _i|m^{(i,i)}\!-\!m_\mathcal{G }^{(i,\emptyset )}|\!\le \! \varphi ^{ Q_\zeta } |m_\mathrm{c}|^{-1}\Psi ^2.\nonumber \\ \end{aligned}$$
(7.19)

Inserting (7.18) and (7.19) into (7.15), we obtain

$$\begin{aligned} \mathcal D (m) \le {{\mathrm{O}}}\left( \varphi ^{{ 2}Q_\zeta }\left( \frac{1 }{m_\mathrm{c}^3} \Psi ^2+N^{-1}\right) + w[\mathcal Z ]+m_\mathrm{c}^{-2}[ Z_*^*]\right) . \end{aligned}$$

To conclude Lemma 7.2, we choose \(C_\zeta =2Q_\zeta \) and it remains to prove \( |\frac{1}{m_\mathrm{c}^3}\Psi ^2|\ge {{\mathrm{O}}}(N^{-1})\). By the definition of \(\Psi \) and the fact that \( |m_\mathrm{c}| \sim |w|^{-1/2}\) (see (4.9)), this inequality follows from the following property of \({{\mathrm{Im}}}\, m_\mathrm{c}\):

$$\begin{aligned} \left| \frac{{{\mathrm{Im}}}\, m_\mathrm{c}}{N\eta }\right| \ge {{\mathrm{O}}}(N^{-1}). \end{aligned}$$

This estimate on \({{\mathrm{Im}}}\, m_\mathrm{c}\) is a direct consequence of (4.2), (4.4), (4.6) and (4.7). This completes the proof of Lemma 7.2 (after increasing \(C_\zeta \) by 1).\(\square \)
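Such a lower bound is elementary for the Stieltjes transform of a measure with bounded support; a sketch, under the assumption (consistent with the cited properties (4.2)–(4.7)) that \(m_\mathrm{c}(w)=\int \frac{\rho _\mathrm{c}(x)\,\mathrm{d}x}{x-w}\) for a probability density \(\rho _\mathrm{c}\) of bounded support, with \(w=E+\mathrm{i}\eta \) in a bounded domain:

```latex
{{\mathrm{Im}}}\, m_\mathrm{c}(E+\mathrm{i}\eta )
  = \int \frac{\eta \,\rho _\mathrm{c}(x)\,\mathrm{d}x}{(x-E)^2+\eta ^2}
  \;\ge \; \frac{\eta }{\sup _{x\in {{\mathrm{supp}}}\,\rho _\mathrm{c}}(x-E)^2+\eta ^2}
  \;\ge \; c\,\eta ,
\qquad \text{hence}\qquad
\left| \frac{{{\mathrm{Im}}}\, m_\mathrm{c}}{N\eta }\right| \ge \frac{c}{N}.
```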

We now estimate the averages \([\mathcal Z ]\) and \( [ Z_*^*] \). Our goal is to capture cancellation effects due to the average over the index \(i\). This is the content of the next lemma, to be proved in the next subsection. Clearly this lemma completes the proof of Lemma 7.1.

Lemma 7.3

For any \(\zeta >1\), there exists \(R_\zeta > 0 \) such that the following statement holds. Suppose for some deterministic number \(\widetilde{\Lambda }(w, z)\) (which can depend on \(\zeta \)) we have

$$\begin{aligned} \Lambda (w, z) \le \widetilde{\Lambda }(w, z) \ll m_c (w, z) \end{aligned}$$

for \( w \in \underline{\mathrm{S}} ( b)\), \(b > 5 R_\zeta \), in a set \(\Xi \) with \(\mathbb P (\Xi ^c) \le e^{-p_{N}(\log N)^2 }\), where \(p_N\) satisfies

$$\begin{aligned} \varphi \le p_N \le \varphi ^{ 2\zeta }. \end{aligned}$$
(7.20)

Then there exists a set \(\Xi ^{\prime }\) such that \( \mathbb P (\Xi ^{\prime c}) \le e^{-p_{N} } \) and

$$\begin{aligned} \big | [\mathcal{Z }]\big |+\big |[ Z_*^*]\big | \le \varphi ^{C_\zeta } |w|^{1/2} \widetilde{\Psi }^2 \quad \mathrm{in }\; \Xi ^{\prime }, \end{aligned}$$
(7.21)

where \(\widetilde{\Psi }\) is defined in (7.2).

7.2 Strong bounds on \([Z]\)

In this subsection, we prove Lemma 7.3. The main tool is the abstract cancellation Lemma 11.1.

We first perform a cutoff for all random variables \(X_{ij}\) in \(X\) so that \( |X_{ij}| \le N^{10}\). Due to the subexponential decay assumption, the probability of the complement of this event is at most \(e^{-N^c}\), which is negligible.

Define \(P_i\) and \(\mathcal P _i\) as the operators of taking the expectation with respect to the \(i\)th row and the \(i\)th column, respectively. Let

$$\begin{aligned} Q_i=1-P_i,\quad \mathcal Q _i=1-\mathcal P _i. \end{aligned}$$

With this convention and Lemma 6.5, we can rewrite \( \mathcal{Z }_i\) and \(Z_i^{(i)}\), from Definition 6.4, as

$$\begin{aligned} \mathcal{Z }_i=\mathcal Q _i \left( wG_{ii}\right) ^{-1}, \quad Z_i^{(i)}= Q_i \left( w\mathcal{G }^{(i, \emptyset )}_{ii}\right) ^{-1}. \end{aligned}$$
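The defining property of these fluctuation operators, \(\mathbb E \, Q_i f = 0\) (and likewise for \(\mathcal Q _i\)), can be illustrated on a toy example; the bits \(x, y\) below are mere stand-ins for the randomness of the \(i\)th row and of the rest of the matrix, not part of the actual model:

```python
from fractions import Fraction

# P_x averages out the randomness of coordinate x; Q_x = 1 - P_x extracts
# the fluctuation.  With f a function of two independent uniform bits,
# E[Q_x f] vanishes identically (exact arithmetic via Fraction).
vals = [Fraction(0), Fraction(1)]

def f(x, y):
    return x * y + 3 * x + y + 7   # arbitrary test function

def P_x(g, y):                     # expectation over x only, y frozen
    return sum(g(x, y) for x in vals) / len(vals)

def Q_x(g, x, y):                  # fluctuation part (1 - P_x) g
    return g(x, y) - P_x(g, y)

mean_Q = sum(Q_x(f, x, y) for x in vals for y in vals) / 4
assert mean_Q == 0                 # E[Q_x f] = 0 exactly
```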

By definition, for any \(i,j, \mathbb{U }, \mathbb T \), we know \(|G^{\mathbb{U }, \mathbb T }_{ij}|\le \eta ^{-1}\). From the identities of \(G_{ii}\) and \(\mathcal{G }^{(i, \emptyset )}_{ii}\) in Lemma 6.5 and \(|X_{ij}|\le N^C\), we have, for any \(1\le i\le N\),

$$\begin{aligned} |G_{ii}|^{-1} +|\mathcal{G }^{(i, \emptyset )}_{ii}|^{-1} \le N^C. \end{aligned}$$
(7.22)

Let \(D_\zeta =\max \{C_{6\zeta + 10}, Q_{6\zeta + 10}+1\}\) with \(C_{\zeta }\) defined in Lemma 6.1 and \(Q_\zeta \) in Lemma 10.1. Then for any fixed \(\mathbb{T }, \mathbb{U }\) with \(|\mathbb{T }|, |\mathbb{U }|\le p\), there exists a set \(\Xi _{ \mathbb{T }, \mathbb{U }}\) with

$$\begin{aligned} P(\Xi _{ \mathbb{T }, \mathbb{U }})\ge 1-e^{-\varphi ^{{ 6} \zeta +10}} \end{aligned}$$

such that for any \({w}\in \underline{\mathrm{S}}(b)\), \(b>5D_\zeta \), the following properties hold:

  (i)

    for \(w \in \underline{\mathrm{S}}(b )\)

    $$\begin{aligned} \Lambda \le \varphi ^{-D_\zeta /4}|w^{-1/2}|, \quad \Psi \le \varphi ^{-2D_\zeta }|w^{-1/2}| \end{aligned}$$
    (7.23)
  (ii)

    for \(w \in \underline{\mathrm{S}} (b )\)

    $$\begin{aligned} \max _{ij}|G_{ij}(z)-m_\mathrm{c}(z)\delta _{ij}| \le \varphi ^{D_\zeta }\frac{1}{| w^{1/2}|} \left( \frac{| w^{1/2}|}{N\eta } \right) ^{1/4} , \quad b > 5 D_\zeta .\qquad \end{aligned}$$
    (7.24)
  (iii)

    for any \(i\ne j\),

    $$\begin{aligned} |(1-\mathbb E _{\mathbf{y}_i})\mathbf{y}_i^* \mathcal G ^{(i \mathbb{T }, \emptyset )} \mathbf{y}_i| +|\mathbf{y}_i^* \mathcal G ^{(ij \mathbb{T }, \emptyset )} \mathbf{y}_j|\le \varphi ^{D_\zeta }\Psi \end{aligned}$$
    (7.25)
    $$\begin{aligned} |(1-\mathbb E _{ \mathrm{y}_i})\mathrm{y}_i^{(i)} G^{(i , i \mathbb{U })} ( \mathrm{y}_i^{(i)}) ^* | +| \mathrm{y}_i^{(i)} G^{(i , ij \mathbb{U })} ( \mathrm{y}_j^{(i)}) ^* | \le \varphi ^{D_\zeta }\Psi \end{aligned}$$
    (7.26)
  (iv)

    for any \(i \) and \(\mathbb{T }, \mathbb{U }\): \(|\mathbb{T }|+|\mathbb{U }|\le p\),

    $$\begin{aligned} \left| \mathcal{G }^{(i\mathbb{T },\emptyset )}_{ii}- \frac{-1}{w(1+m^{(i\mathbb{T },\emptyset )} )}\right| \le \varphi ^{D_\zeta }\Psi \end{aligned}$$
    (7.27)

Here (i) and (ii) follow from Lemma 6.1; (iv) follows from (6.39), and the case (iii) with \(\mathbb{T } = \emptyset = \mathbb{U }\) follows from Lemma 10.1 and (6.62). The general case, i.e., \(\mathbb{T }, \mathbb{U }\ne \emptyset \), can be proved similarly using (6.6). Furthermore, since \(|\mathbb{T }|, |\mathbb{U }|\le p\) and \(p\le \varphi ^{2\zeta }\), there exists a set \(\Xi _0\) with

$$\begin{aligned} P(\Xi _0)\ge 1-e^{-\varphi ^{2 \zeta +5}} \end{aligned}$$

such that for any \({w}\in \underline{\mathrm{S}}(b)\), \(b>5D_\zeta \), the above properties (7.23)–(7.27) hold for all \(|\mathbb{T }|, |\mathbb{U }|\le p\). The reason is that the number of pairs \(\mathbb{T }, \mathbb{U }\) satisfying \(|\mathbb{T }|, |\mathbb{U }|\le p\) is bounded by \(N^{2p}\le e^{\varphi ^{4\zeta +1}}\), where we have used (7.20).

Since \(\Psi \) is monotone in \(\Lambda \), we can replace \(\Psi \) in (7.25)–(7.27) by \(\widetilde{\Psi }\) in the set \(\Xi \cap \Xi _0\). By (7.20), we have \(\mathbb P [\Xi _0^c] \ll e^{-p_{N}(\log N)^2 }\). For notational simplicity we will use \(\Xi \) for the set \(\Xi \cap \Xi _0\) from now on. We claim that, for any \(i\in A\subset [\![ 1, N]\!]\), \(|A|\le p\), there exist decompositions

$$\begin{aligned}&\mathcal Q _{A} \left( wG_{ii}\right) ^{-1} \;=\; { { \mathcal{Z }}}_{i, A}+ \mathcal Q _{A}\mathbf{1}(\Xi ^c) \widetilde{ { \mathcal{Z }}}_{i, A}\end{aligned}$$
(7.28)
$$\begin{aligned}&Q_{A} \left( w\mathcal{G }^{(i,\emptyset )}_{ii}\right) ^{-1} \;=\; { { Z}}_{i, A} + Q_{A}\mathbf{1}(\Xi ^c)\widetilde{ { Z}}_{i, A} \end{aligned}$$
(7.29)

so that (11.2) holds with \(\mathcal Y =|w|^{-1/2}\) and \(\mathcal X =\varphi ^{D_\zeta +2 \zeta }|w^{ 1/2}|\widetilde{\Psi }\). Notice that the condition \(\mathcal X <1\) follows from \(\widetilde{\Lambda }\ll |m_\mathrm{c}|\) and \(N\eta \ge \varphi ^{5D_\zeta } |m_\mathrm{c}|\), provided \( {w}\in \underline{\mathrm{S}}(b)\) with \(b>5D_\zeta \) large enough. Thus we obtain that

$$\begin{aligned} \mathbb E \left[ |\mathcal{Z }|^p \right] + \mathbb E \left[ | Z^*_*|^p \right] \le |w^{1/2}|^{ p} (Cp)^{4p}(\varphi ^{2D_\zeta +4\zeta }\widetilde{\Psi }^2)^{ p} \end{aligned}$$
(7.30)

Choosing \(C_\zeta =2D_\zeta +20\zeta \), one can see that (7.21) follows from (7.20), (7.30) and the Markov inequality.

It remains to prove (7.28) and (7.29). We prove (7.28) first. For simplicity, we assume that \({A = \{ 1, \ldots , |A |\}}\). Denote the first \(|A|\) columns of \(Y_z\) by \(\mathbf{a}\), so that \(\mathbf a\) is an \(N \times |A|\) matrix. Similarly, denote by \(B \) the matrix obtained after removing the first \(|A|\) columns of \(Y\). Then we have the identity

$$\begin{aligned} Y^* Y - w \;=\; \begin{pmatrix} \mathbf{a}^* \mathbf{a}- w &{} \mathbf{a}^* B\\ B^* \mathbf{a}&{} B^* B - w \end{pmatrix}\,. \end{aligned}$$

Recall the identity (6.16): for any matrix \(M\),

$$\begin{aligned} M (M^*M - w)^{-1} M^*=1+ w ( M M^* - w)^{-1}. \end{aligned}$$
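This identity holds for any rectangular \(M\) and any \(w\) away from the spectra of \(M^*M\) and \(MM^*\); a quick numerical check on a random complex \(5\times 3\) matrix (the sizes are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
w = 0.3 + 0.7j                 # spectral parameter with Im w > 0

n, k = M.shape
# M (M*M - w)^{-1} M*  versus  I + w (M M* - w)^{-1}
lhs = M @ np.linalg.inv(M.conj().T @ M - w * np.eye(k)) @ M.conj().T
rhs = np.eye(n) + w * np.linalg.inv(M @ M.conj().T - w * np.eye(n))
assert np.allclose(lhs, rhs)
```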

Then we have for \(i,j\in A\)

$$\begin{aligned} G_{ij}&= \Biggl ({\frac{1}{\mathbf{a}^* \mathbf{a}- w - \mathbf{a}^* B (B^* B - w)^{-1} B^* \mathbf{a}}}\Biggr )_{ij} \nonumber \\&= \Biggl ({ {1 \over \mathbf{a}^* \mathbf{a}-w - \mathbf{a}^* (1+ w ( B B^* - w)^{-1} )\, \mathbf{a}}}\Biggr )_{ij} \; \nonumber \\&= \Biggl ({ {1 \over - w - w\, \mathbf{a}^* \mathcal G ^{(A, \emptyset )} \, \mathbf{a}} }\Biggr )_{ij}\;, \quad \mathcal G ^{(A, \emptyset )} = ( B B^* - w)^{-1}. \end{aligned}$$
(7.31)
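The Schur complement computation (7.31) can be checked numerically; in this sketch a generic random square matrix stands in for \(Y\), and the dimensions and the size of \(A\) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 6, 2                              # k = |A| columns indexed by A
Y = rng.standard_normal((N, N)) / np.sqrt(N)
w = 0.2 + 0.5j

a, B = Y[:, :k], Y[:, k:]                # first |A| columns and the rest
G = np.linalg.inv(Y.conj().T @ Y - w * np.eye(N))

# \mathcal G^{(A, \emptyset)} = (B B^* - w)^{-1}
calG = np.linalg.inv(B @ B.conj().T - w * np.eye(N))
# (7.31): the A-block of G equals (-w - w a* calG a)^{-1}
block = np.linalg.inv(-w * np.eye(k) - w * (a.conj().T @ calG @ a))
assert np.allclose(G[:k, :k], block)
```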

Rewrite

$$\begin{aligned} I + \mathbf{a}^* \mathcal G ^{(A, \emptyset )} \, \mathbf{a}= \alpha (I+ R), \quad R := \alpha ^{-1} \left( \mathbf{a}^* \mathcal G ^{(A, \emptyset )} \, \mathbf{a}+I-\alpha I\right) \end{aligned}$$

where

$$\begin{aligned} \alpha := \left( N^{-1} \sum _{j= 1}^N \mathcal G ^{(A, \emptyset )}_{jj}+|z|^2 \frac{-1}{w(1+m_\mathcal{G }^{(A,\emptyset )} )}+1\right) = m_\mathcal{G }^{(A,\emptyset )} -\frac{|z|^2}{w (1+m_\mathcal{G }^{(A,\emptyset )} ) }+1. \end{aligned}$$

We will prove \(\Vert R\Vert \ll 1\) with high probability. Using (3.1), \(\Lambda \ll m_\mathrm{c}\) (7.24) and (6.6), we have

$$\begin{aligned} \alpha \sim w^{-1/2}, \quad \mathrm{in }\; \Xi \end{aligned}$$

By (7.25), (7.27) and (6.6), we have

$$\begin{aligned} \alpha R_{ii}\!&= \! (1-\mathbb E _{\mathbf{y}_i})\mathbf{y}_i^* \mathcal G ^{(A, \emptyset )} \mathbf{y}_i\!+\!|z|^2\left( \mathcal G ^{(A, \emptyset )}_{ii}\!-\! \frac{-1}{w(1+m_\mathcal{G }^{(A,\emptyset )} )}\right) \!=\!O( \varphi ^{D_\zeta }\widetilde{\Psi }), \quad \mathrm{in }\; \Xi ,\\ \alpha R_{ij}&= \mathbf{y}_i^* \mathcal G ^{(A, \emptyset )} \mathbf{y}_j\le O( \varphi ^{D_\zeta }\widetilde{\Psi }), \quad \mathrm{in }\; \Xi . \end{aligned}$$

Therefore, we have the bound

$$\begin{aligned} \Vert \mathbf{1} (\Xi ) R \Vert&= O( \varphi ^{D_\zeta } \widetilde{\Psi }\alpha ^{-1})=O(\varphi ^{D_\zeta } |w|^{ 1/2}\widetilde{\Psi })\ll 1, \quad \nonumber \\ \Vert \mathbf{1} (\Xi ) R^k \Vert&= O( \varphi ^{D_\zeta } \widetilde{\Psi }\alpha ^{-1})^k |A|^{k-1},\quad k=1,2,\dots \end{aligned}$$
(7.32)

With (7.31) and the definition of \(R\), we have \(-w \alpha G_{ij} = [(I+R)^{-1} ]_{ij}\) for \(i,j\in A\). Therefore,

$$\begin{aligned} -w G_{ii} \alpha = [(I+R)^{-1} ]_{ii} =1+ \sum _{j=1}^{|A|-1} ((-R)^j)_{ii} +\alpha w\sum _{j\in A} ((-R)^{|A|})_{ij}G_{ji} \end{aligned}$$
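The display above is the finite Neumann expansion \((I+R)^{-1}=\sum _{j=0}^{K-1}(-R)^j+(-R)^K(I+R)^{-1}\) with \(K=|A|\), read off entrywise; a minimal numerical check of this algebraic identity (the matrix size and \(K\) are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4
R = 0.1 * rng.standard_normal((3, 3))    # small norm, so I + R is invertible

inv = np.linalg.inv(np.eye(3) + R)
# Truncated Neumann series plus the exact remainder term
partial = sum(np.linalg.matrix_power(-R, j) for j in range(K))
remainder = np.linalg.matrix_power(-R, K) @ inv
assert np.allclose(inv, partial + remainder)
```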

Then, together with (7.32), (7.24) and \(m_\mathrm{c}\sim |w^{-1/2}|\sim \alpha \), we have thus proved that

$$\begin{aligned} -w G_{ii} \alpha = 1+ \sum _{j=1}^{{|A|} -1} (R^j)_{ii} + {{\mathrm{O}}}\left( {|A|}\varphi ^{D_\zeta } |w|^{1/2}\widetilde{\Psi }\right) ^{ {|A|} } , \quad \mathrm{in }\; \Xi \end{aligned}$$

Thus

$$\begin{aligned} \frac{-1}{wG_{ii} }&= \alpha U_{A} + {{\mathrm{O}}}(|w|^{-1/2} ( |A|^2\varphi ^{D_\zeta } |w|^{1/2}\widetilde{\Psi })^{{|A|} })\nonumber \\&= \alpha U_{A} + {{\mathrm{O}}}(|w|^{-1/2} ( |A| \varphi ^{ D_\zeta +2\zeta } |w|^{1/2}\widetilde{\Psi })^{{|A|} }) , \quad \mathrm{in }\; \Xi \end{aligned}$$
(7.33)

where we used \(|A|\le p\le \varphi ^{2\zeta }\) and \(U_{A} \) is a linear combination of the following products of \((R^j)_{ii}\)’s

$$\begin{aligned} \prod _k (R^{j_k})_{ii},\quad \quad \quad 0\le \sum _k j_k\le {|A|} -1. \end{aligned}$$

Notice we have

$$\begin{aligned} \mathcal Q _A\left( \prod _k \alpha (R^{j_k})_{ii}\right) = 0 ,\quad \quad \quad \end{aligned}$$
(7.34)

provided that \(0\le \sum _k j_k\le {|A|} -1\). This is because \(\alpha \) is independent of \(\{\mathbf{y}_k: k\in A\} \) and \(R_{ab}\) is independent of \(\{\mathbf{y}_k: k\in A, k\ne a,b\}\). Hence there exists \( \ell \in A\) such that \(\mathbf{y}_\ell \) does not appear in \(\prod _k \alpha (R^{j_k})_{ii}\), and this proves (7.34). Therefore, we have proved that

$$\begin{aligned} \mathcal Q _A \alpha U_{A}=0. \end{aligned}$$
(7.35)

Define \(\Omega _A\) as the probability space of the columns \(\{\mathbf{y}_k: k\in A\} \) and \(\Omega _{A^c}\) as the one of the columns \(\{\mathbf{y}_k: k\in A^c\} \). Then the full probability space factorizes as \( \Omega = \Omega _A\times \Omega _{A^c}\). Define \(\pi _{A^c}\) to be the projection onto \(\Omega _{A^c}\) and \(\Xi ^*=\pi ^{-1}_{A^c}\left( \pi _{A^c} (\Xi )\right) \). Then \(\mathbf{1}( \Xi ^*)\) is independent of \(\{\mathbf{y}_k: k\in A\} \). Hence we can extend (7.35) to

$$\begin{aligned} \mathcal Q _A \mathbf{1}( \Xi ^*)\alpha U_{A}= 0. \quad \end{aligned}$$

Let

$$\begin{aligned} \widetilde{ { \mathcal{Z }}}_{i, A}=\left( wG_{ii}\right) ^{-1} + \mathbf{1}(\Xi ^*{\setminus }\Xi ) \alpha U_{A}, \quad \mathcal{Z }_{i, A} = \mathcal Q _{A} \mathbf{1}( \Xi ) \left[ \left( wG_{ii}\right) ^{-1} + \alpha U_{A} \right] \end{aligned}$$

so that (11.1) is satisfied, i.e.,

$$\begin{aligned}&\mathcal{Z }_{i, A} + \mathcal Q _{A} \mathbf{1}( \Xi ^c ) \widetilde{ { \mathcal{Z }}}_{i, A}\\&\quad = \mathcal Q _{A} \mathbf{1}( \Xi ) \left[ \left( wG_{ii}\right) ^{-1} + \alpha U_{A} \right] + \mathcal Q _{A} \mathbf{1}( \Xi ^c ) \left[ \left( wG_{ii}\right) ^{-1} + \mathbf{1}(\Xi ^*{\setminus }\Xi ) \alpha U_{A} \right] \\&\quad = \mathcal Q _{A} \left( wG_{ii}\right) ^{-1} + \mathcal Q _{A} \left[ \mathbf{1}( \Xi ) \alpha U_{A} + \mathbf{1}( \Xi ^c ) \mathbf{1}(\Xi ^*{\setminus }\Xi ) \alpha U_{A} \right] \\&\quad = \mathcal Q _{A} \left( wG_{ii}\right) ^{-1} + \mathcal Q _{A}\, \mathbf{1}( \Xi ^* ) \alpha U_{A} = \mathcal Q _{A} \left( wG_{ii}\right) ^{-1}, \end{aligned}$$

where in the last line we used \(\mathbf{1}( \Xi ^c )\mathbf{1}(\Xi ^*{\setminus }\Xi )=\mathbf{1}(\Xi ^*{\setminus }\Xi )\), \(\mathbf{1}( \Xi )+\mathbf{1}(\Xi ^*{\setminus }\Xi )=\mathbf{1}( \Xi ^*)\) and the extended version of (7.35).

By (7.33), \( |\mathcal{Z }_{i, A}| \le {{\mathrm{O}}}(|w|^{-1/2} ( |A| \varphi ^{D_\zeta +2\zeta } |w|^{1/2}\widetilde{\Psi })^{{|A|} })\) in \(\Xi \). We now prove that

$$\begin{aligned} \big |\widetilde{{ \mathcal{Z }}}_{i, A}\big |= \big |\left( wG_{ii}\right) ^{-1}+ \mathbf{1}(\Xi ^*{\setminus }\Xi ) \alpha U_{A}\big |\le N^{C|A|}. \end{aligned}$$
(7.36)

By (7.22), we have \(\left( wG_{ii}\right) ^{-1}={{\mathrm{O}}}(N^C)\). Notice that \(\alpha \) is independent of \(\{\mathbf{y}_k: k\in A\} \). Since \(\alpha \sim |w^{-1/2}|\) in \(\Xi \), the same asymptotics holds in \(\Xi ^*{\setminus } \Xi \). By the definitions of \(U_{A}\) (see (7.33)) and \(R\), and the assumption \(X_{ij}=O(N^{C})\), we obtain (7.36) and this completes the proof of (7.28). Similarly, we can prove (7.29), and this completes the proof of Lemma 7.3. \(\square \)