Abstract
The circular law asserts that the spectral measure of eigenvalues of rescaled random matrices without symmetry assumption converges to the uniform measure on the unit disk. We prove a local version of this law at any point \(z\) away from the unit circle. More precisely, if \( | |z| - 1 | \ge \tau \) for arbitrarily small \(\tau > 0\), the circular law is valid around \(z\) up to scale \(N^{-1/2+ {\varepsilon }}\) for any \({\varepsilon }> 0\) under the assumption that the distributions of the matrix entries satisfy a uniform subexponential decay condition.
1 Introduction
A considerable literature about random matrices focuses on Hermitian or symmetric matrices with independent entries. These models are paradigms for the local eigenvalue statistics of many random Hamiltonians, as envisioned by Wigner. The study of non-Hermitian random matrices goes back to Ginibre, who was then in Princeton and motivated by Wigner. Ginibre described his viewpoint on the problem as follows [12]:
Apart from the intrinsic interest of the problem, one may hope that the methods and results will provide further insight in the cases of physical interest or suggest as yet lacking applications.
In fact, the eigenvalue statistics found by Ginibre, in the case of Gaussian complex or real entries, correspond to two-dimensional gases with distinct temperatures and symmetry conditions; this is therefore a model for many interacting particle systems in dimension 2 (see e.g. [10] chap. 15). The spectral statistics found in [12] in the complex case are the following: given an \(N\times N\) matrix with independent entries \(\frac{1}{\sqrt{N}}z_{ij}\), the \(z_{ij}\)’s being identically distributed according to the standard complex Gaussian measure \(\mu _g=\frac{1}{\pi }e^{-|z|^2}\mathrm{d A}(z)\) (where \(\mathrm{d A}\) denotes the Lebesgue measure on \(\mathbb{C }\)), its eigenvalues \(\mu _1,\dots ,\mu _N\) have a probability density proportional to
with respect to the Lebesgue measure on \(\mathbb C ^{N}\). This law is a determinantal point process (because of the Vandermonde determinant) with an explicit kernel given by (see [12, 16] for a proof)
with respect to the Lebesgue measure on \(\mathbb C \). This integrability property allowed Ginibre to derive the circular law for the eigenvalues, i.e., the empirical spectral distribution converges to the uniform measure on the unit disk,
This phenomenon is the non-Hermitian counterpart of the semicircular law for Wigner random Hermitian matrices and of the Marchenko–Pastur law for random covariance matrices.
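As a quick numerical illustration (a sketch with numpy; the matrix size and tolerances are illustrative and not part of any statement in the paper), one can sample a complex Ginibre matrix and check the two simplest consequences of the circular law: the spectral radius is close to \(1\), and the second moment of the empirical spectral measure is close to \(\frac{1}{\pi }\int _D |z|^2\, \mathrm{d A}(z) = 1/2\).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300
# i.i.d. standard complex Gaussian entries, rescaled so each entry has variance 1/N
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
mu = np.linalg.eigvals(X)

radius = np.abs(mu).max()                 # spectral radius: close to 1 for large N
second_moment = np.mean(np.abs(mu) ** 2)  # close to 1/2, the second moment of the uniform disk
```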
In the case of real Gaussian entries, the joint distribution of the eigenvalues is more complicated but still integrable, allowing Edelman [7] to prove the limiting circular law as well; for more precise asymptotic properties of the real Ginibre ensemble, see [4, 11, 21]. We note also that the (right) eigenvalues of the quaternionic Ginibre ensemble were recently shown to converge to a (non-uniform) measure on the unit ball of the field of quaternions [3].
For non-Gaussian entries, there is no explicit formula for the eigenvalues. Furthermore, the spectral measure, as a measure on \(\mathbb C \), cannot be characterized by computing \({{\mathrm{Tr}}}(M^\alpha \bar{M}^\beta )\). Thus the moment method, which is the popular way to prove the semicircle law, cannot be applied to solve this problem. Nevertheless, Girko [13] partially proved that the spectral measure of a non-Hermitian matrix \(M\) with independent entries converges to the circular law (1.1). The key insight of this work was the introduction of the Hermitization technique, which allowed him to translate the convergence of complex empirical measures into the convergence of logarithmic transforms of a family of Hermitian matrices. More precisely, if we denote the original non-Hermitian matrix by \(X\) and the eigenvalues of \(X\) by \(\mu _j\), then for any \(\fancyscript{C}^2\) function \(F\) we have the identity
From this formula, it is clear that the small eigenvalues of the Hermitian matrix \((X^* - z^* ) (X-z) \) play a special role due to the logarithmic singularity at \(0\). The key question is to estimate the smallest eigenvalues of \((X^* - z^* ) (X-z)\), or in other words, the smallest singular values of \( (X-z)\). This problem was not treated in [13], and the gap was later remedied in a series of papers. First, Bai [1] was able to treat the logarithmic singularity assuming bounded density and bounded high moments for the entries of the matrix (see also [2]). Lower bounds on the smallest singular values were given by Rudelson and Vershynin [19, 20]; subsequently Tao and Vu [22], Pan and Zhou [17] and Götze and Tikhomirov [14] weakened the moment and smoothness assumptions for the circular law, until the optimal \(\text{ L }^2\) assumption, under which the circular law was proved in [23].
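The Hermitization identity ultimately rests on the elementary fact \(\sum _j \log |\mu _j - z|^2 = \log \det \big ( (X-z)^*(X-z)\big )\), i.e., the sum over the eigenvalues of \(X\) equals the sum of \(\log \lambda _j\) over the eigenvalues of the Hermitized matrix. This can be checked numerically (a sketch with numpy; sizes and tolerances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
X = rng.standard_normal((N, N)) / np.sqrt(N)
z = 0.3 + 0.2j

Y = X - z * np.eye(N)
mu = np.linalg.eigvals(X)                 # eigenvalues of the non-Hermitian matrix X
lam = np.linalg.eigvalsh(Y.conj().T @ Y)  # eigenvalues of the Hermitization (X-z)^*(X-z)

lhs = np.sum(np.log(np.abs(mu - z) ** 2))  # sum of log|mu_j - z|^2 over eigenvalues of X
rhs = np.sum(np.log(lam))                  # log det of (X-z)^*(X-z)
```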
The purpose of this paper is to prove a local version of the circular law, up to the optimal scale \(N^{-1/2 + {\varepsilon }}\) (see Sect. 2 for a precise statement). Below this scale, detailed local statistics become important, which is beyond the scope of the current paper. The main tool of this paper is a detailed analysis of the self-consistent equations of the Green functions
Our method is related to the proof of a local semicircular law in [9] and to a local Marchenko–Pastur law in [18]. We are able to control \(G_{ij}(E + \mathrm{i}\eta )\) for the energy parameter \(E\) in any compact set and sufficiently small \(\eta \). This provides sufficient information to use the formula (1.2) for functions \(F\) at scale \(N^{-1/2+ {\varepsilon }}\). We also notice that a local Marchenko–Pastur law for \(X^*X\) was proved in [5], simultaneously with the present article.
Finally, we remark that the local circular law demonstrates that the eigenvalue distribution in the unit disk is extremely “uniform”. If the eigenvalues were distributed according to i.i.d. uniform points, or any other point process with summable decay of correlations, then there would be large holes and clusters of eigenvalues in the disk. While the usual circular law does not rule out these phenomena, the local law established in this paper does. This implies that the eigenvalue statistics cannot be given by any point process with summable decay of correlations.
2 The local circular law
We first introduce some notation. Let \(X\) be an \(N \times N\) matrix with independent centered entries of variance \( N^{-1} \). The matrix elements can be either real or complex, but for the sake of simplicity we will consider real entries in this paper. Denote the eigenvalues of \(X\) by \(\mu _j\), \(j=1, \ldots , N\). We will use the following notion of stochastic domination, which simplifies the presentation of the results and their proofs.
Definition 2.1
(Stochastic domination) Let \(W=(W_N)_{N\ge 1}\) be a family of random variables and \(\Psi =(\Psi _N)_{N\ge 1}\) be deterministic parameters. We say that \(W\) is stochastically dominated by \(\Psi \) if for any \( \sigma > 0\) and \(D > 0\) we have
for sufficiently large \(N\). We denote this stochastic domination property by
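In its standard form, which we assume here (the displays are omitted above), the condition and the notation read

$$\begin{aligned} \mathbb{P }\left( |W_N| > N^{\sigma } \Psi _N \right) \le N^{-D}, \qquad \text{written } W \prec \Psi . \end{aligned}$$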
In this paper, we will assume that the probability distributions for the matrix elements have the uniform subexponential decay property, i.e.,
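In a standard formulation, which we assume here, the omitted condition (2.1) reads, uniformly in \(i,j\) and \(N\),

$$\begin{aligned} \mathbb{P }\left( \sqrt{N}\, |X_{ij}| > \lambda \right) \le \vartheta ^{-1} e^{-\lambda ^{\vartheta }} \qquad \text{for all } \lambda \ge 1, \end{aligned}$$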
for some constant \(\vartheta >0\) independent of \(N\). This condition can of course be weakened to a hypothesis of boundedness of sufficiently high moments, but the error estimates in the following theorem would be weakened as well. We now state our local circular law, which holds up to the optimal scale \(N^{-1/2+{\varepsilon }}\).
Theorem 2.2
Let \(X\) be an \(N \times N\) matrix with independent centered entries of variance \( N^{-1} \). Suppose that the probability distributions of the matrix elements satisfy the uniform subexponential decay condition (2.1). We assume that for some fixed \( \tau >0\), for any \(N\) we have \(\tau \le ||z_0|-1|\le \tau ^{-1} \) (\(z_0\) can depend on \(N\)). Let \(f \) be a smooth non-negative function which may depend on \(N\), such that \(\Vert f\Vert _\infty \le C, \Vert f^{\prime }\Vert _\infty \le N^C\) and \(f(z)=0\) for \(|z|\ge C\), for some constant \(C\) independent of \(N\). Let \(f_{z_0}(z)=N^{2a}f(N^{a}(z-z_0))\) be the approximate delta function obtained by rescaling \(f\) to the scale \(N^{-a}\) around \(z_0\). We denote by \(D\) the unit disk. Then for any \(a\in (0,1/2]\),
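Schematically, and under the assumption that the omitted display takes its standard form, the conclusion reads

$$\begin{aligned} \left| \frac{1}{N} \sum _j f_{z_0}(\mu _j) - \frac{1}{\pi }\int _{D} f_{z_0}(z)\, \mathrm{d A}(z) \right| \prec N^{-1+2a}\, \Vert \Delta f\Vert _{\mathrm{L}^1} . \end{aligned}$$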
3 Hermitization and local Green function estimate
In the following, we will use the notation
where \(I\) is the identity operator. Let \(\lambda _j(z)\) be the \(j\)th eigenvalue (in increasing order) of \(Y^*_z Y_z \). We will generally omit the \(z\)-dependence in these notations. Thanks to the Hermitization technique of Girko [13], the first step in proving the local circular law is to understand the local statistics of the eigenvalues of \(Y^*_z Y_z\) for \(z\) strictly inside the unit circle. In this section, we first recall some well-known facts about the Stieltjes transform of the empirical measure of the eigenvalues of \(Y^*_z Y_z\). We then present the key estimate concerning the Green function of \(Y^*_z Y_z\) in almost optimal spectral windows. This result will be used later on to prove a local version of the circular law.
3.1 Properties of the limiting density of the Hermitization matrix
Define the Green function of \(Y^*_z Y_z\) and its trace by
We will also need the following version of the Green function later on:
As we will see, with high probability \(m(w,z)\) converges to \(m_\mathrm{c}(w,z)\) pointwise as \(N\rightarrow \infty \), where \( m_\mathrm{c}(w,z)\) is the unique solution of
with positive imaginary part (see Sect. 3 in [14] for the existence and uniqueness of such a solution). The limit \( m_\mathrm{c}(w,z)\) is the Stieltjes transform of a density \( \rho _\mathrm{c} (x,z)\) and we have
whenever \(\eta >0\). The function \(\rho _\mathrm{c} (x,z)\) is the limiting eigenvalue density of the matrix \(Y^*_z Y_z\) (cf. Lemmas 4.2 and 4.3 in [1]). Let
Note that \(\lambda _-\) has the same sign as \(|z|-1\). The following two propositions summarize the properties of \(\rho _\mathrm{c}\) and \(m_\mathrm{c}\) that we will need to understand the main results in this section. They will be proved in Appendix A. In the following, we use the notation \(A\sim B\) when \(c B \le A\le c^{-1}B\), where \(c>0\) is independent of \(N\).
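As a numerical sanity check, one can compare \(m_\mathrm{c}\) with the empirical Stieltjes transform \(m(w,z)=\frac{1}{N}{{\mathrm{Tr}}}\, (Y^*_z Y_z - w)^{-1}\). This is a sketch with numpy; it assumes that the omitted Eq. (3.2) is the fixed-point relation for \(g(m)=(1+w m(1+m)^2)/(|z|^2-1)\) quoted in Sect. 5, equivalently the cubic \(w m^3 + 2w m^2 + (w+1-|z|^2)m + 1 = 0\); the matrix size and tolerance are illustrative.

```python
import numpy as np

def m_c(w, z_abs2):
    """Stieltjes-transform branch of the cubic, selected by continuation in Im(w)."""
    m = -1.0 / complex(w.real, 10.0)  # asymptotically m ~ -1/w high in the upper half-plane
    for eta in np.linspace(10.0, w.imag, 400):
        wk = complex(w.real, eta)
        roots = np.roots([wk, 2.0 * wk, wk + 1.0 - z_abs2, 1.0])
        m = roots[np.argmin(np.abs(roots - m))]  # follow the branch continuously
    return m

rng = np.random.default_rng(2)
N, z = 400, 0.5
X = rng.standard_normal((N, N)) / np.sqrt(N)
Y = X - z * np.eye(N)
lam = np.linalg.eigvalsh(Y.T @ Y)

w = complex(1.0, 0.5)
m_emp = np.mean(1.0 / (lam - w))  # empirical Stieltjes transform m(w, z)
m_the = m_c(w, z ** 2)
```

By the uniqueness of the solution with positive imaginary part, the branch obtained by continuation from \(m \approx -1/w\) is the Stieltjes transform \(m_\mathrm{c}\).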
Proposition 3.1
The limiting density \(\rho _\mathrm{c}\) is compactly supported and the following properties regarding \(\rho _\mathrm{c}\) hold.

- (i) The support of \(\rho _\mathrm{c}(x, z)\) is \([\max \{0,\lambda _-\}, \lambda _+]\).
- (ii) As \(x\rightarrow \lambda _+\) from below, the behavior of \(\rho _\mathrm{c}(x, z)\) is given by \(\rho _\mathrm{c}(x, z)\sim \sqrt{\lambda _+-x}\).
- (iii) For any \({\varepsilon }>0\), if \( \max \{0,\lambda _-\}+{\varepsilon }\le x \le \lambda _+-{\varepsilon }\), then \(\rho _\mathrm{c}(x, z)\sim 1\).
- (iv) Near \(\max \{0,\lambda _-\}\), the behavior of \(\rho _\mathrm{c}(x, z)\) can be classified as follows.
  - If \(|z|\ge 1+\tau \) for some fixed \(\tau >0\), then \(\lambda _-> {\varepsilon }(\tau ) > 0 \) and \(\rho _\mathrm{c}(x, z)\sim 1\!\!1_{x>\lambda _-}\sqrt{ x-\lambda _-}\).
  - If \(|z|\le 1-\tau \) for some fixed \(\tau >0\), then \(\lambda _-< - {\varepsilon }(\tau ) < 0\) and \(\rho _\mathrm{c}(x, z)\sim 1/ \sqrt{x} \).

All of the estimates in this proposition are uniform in \(|z|<1-\tau \), or \(\tau ^{-1}\ge |z|\ge 1+\tau \), for fixed \(\tau >0\).
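A numerical sketch of item (i) for \(|z|>1\) (with numpy): we use the formula \(\lambda _-=(\alpha -3)^3/(8(\alpha -1))\) with \(\alpha =\sqrt{1+8|z|^2}\), recalled in Sect. 4, together with, as an assumption, the analogous formula \(\lambda _+=(\alpha +3)^3/(8(\alpha +1))\) for the upper edge; sizes and tolerances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, z = 400, 1.3
alpha = np.sqrt(1 + 8 * z ** 2)
lam_minus = (alpha - 3) ** 3 / (8 * (alpha - 1))  # lower edge, positive for |z| > 1
lam_plus = (alpha + 3) ** 3 / (8 * (alpha + 1))   # upper edge (assumed '+' branch)

X = rng.standard_normal((N, N)) / np.sqrt(N)
Y = X - z * np.eye(N)
lam = np.linalg.eigvalsh(Y.T @ Y)  # spectrum of Y_z^* Y_z, expected inside [lam_minus, lam_plus]
```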
Proposition 3.2
The preceding Proposition implies that, uniformly in \(w\) in any compact set,
Moreover, the following estimates on \(m_\mathrm{c}(w,z)\) hold.

- If \(|z|\ge 1+\tau \) for some fixed \(\tau >0\), then \(m_\mathrm{c}\sim 1\) for \(w\) in any compact set.
- If \(|z|\le 1-\tau \) for some fixed \(\tau >0\), then \(m_\mathrm{c}\sim |w|^{-1/2} \) for \(w\) in any compact set.
3.2 Concentration estimate of the Green function up to the optimal scale
We now state precisely the estimate regarding the convergence of \(m\) to \(m_\mathrm{c}\). Since the matrix \(Y^*_z Y_z\) is symmetric, we will follow the approach of [9]. We will use extensively the following definition of high probability events.
Definition 3.3
(High probability events) Define
Let \(\zeta > 0\). We say that an \(N\)-dependent event \(\Omega \) holds with \(\zeta \) -high probability if there is some constant \(C\) such that
for large enough \(N\).
For \( \alpha \ge 0\), define the \(z\)-dependent set
where \(\varphi \) is defined in (3.3). Here we have suppressed the explicit \(z\)-dependence. Notice that for \(|z|<1-{\varepsilon }\), since \(|m_\mathrm{c}|\sim |w|^{-1/2}\), we allow \(\eta \sim |w| \sim {N^{-2} \varphi ^{2\alpha }}\) in the set \(\underline{\mathrm{S}} (\alpha )\). This is a key feature of our approach, which shows that the Green function estimates hold down to a scale much smaller than the typical value \(\eta \sim N^{-1}\).
Theorem 3.4
(Strong local Green function estimates) Suppose \(\tau \le ||z|-1|\le \tau ^{-1} \) for some \(\tau >0\) independent of \(N\). Then for any \(\zeta >0\), there exists \(C_\zeta >0\) such that the following event holds with \( \zeta \)-high probability:
Moreover, the individual matrix elements of the Green function satisfy, with \( \zeta \)-high probability,
4 Properties of \(\rho _\mathrm{c}\) and \(m_\mathrm{c}\)
We have introduced some basic properties of \(\rho _\mathrm{c}\) and \(m_\mathrm{c}\) in Proposition 3.1 and 3.2. In this section, we collect some more useful properties used in this paper, proved in Appendix A. Recall that \(w = E + \mathrm{i}\eta , \alpha =\sqrt{1+8|z|^2}\) from (3.2), and define \(\kappa := \kappa (w, z) \) as the distance from \(E\) to \(\{\lambda _+, \lambda _-\}\):
For \(|z| < 1\), we have \(\lambda _- < 0\) (see Proposition 3.1), so in this case we define \(\kappa :=|E-\lambda _+|\).
Lemma 4.1
There exists \(\tau _0>0\) such that for any \(\tau \le \tau _0\), if \(|z|\le 1-\tau \) and \(|w|\le \tau ^{-1} \), then the following properties concerning \(m_\mathrm{c}\) hold. All constants in the following estimates depend on \(\tau \).
- Case 1: \(E\ge \lambda _+\) and \(|w-\lambda _+|\ge \tau \). We have
  $$\begin{aligned} |{{\mathrm{Re}}}m_\mathrm{c}|\sim 1, \quad -\frac{1}{2}\le {{\mathrm{Re}}}m_\mathrm{c} <0 , \quad {{\mathrm{Im}}}\, m_\mathrm{c}\sim \eta . \end{aligned}$$ (4.2)
- Case 2: \(|w-\lambda _+|\le \tau \) (notice that there is no restriction on whether \(E\le \lambda _+\) or not). We have
  $$\begin{aligned} m_\mathrm{c}(w, z)=- \frac{2}{3+\alpha } +\sqrt{\frac{8(1+\alpha )^3}{\alpha (3+\alpha )^5}}\, (w-\lambda _+ )^{1/2} +{{\mathrm{O}}}(\lambda _+-w), \end{aligned}$$ (4.3)
  and
  $$\begin{aligned} {{\mathrm{Im}}}\, m_\mathrm{c}\sim \left\{ \begin{array}{ll} \frac{\eta }{\sqrt{ \kappa }} &{}\quad \text{ if } \kappa \ge \eta \text{ and } E\ge \lambda _+, \\ \sqrt{ \eta } &{}\quad \text{ if } \kappa \le \eta \text{ or } E\le \lambda _+. \end{array}\right. \end{aligned}$$ (4.4)
- Case 3: \(|w|\le \tau \). We have
  $$\begin{aligned} m_\mathrm{c}(w,z)=\mathrm{i}\frac{(1-|z|^2)}{\sqrt{w}} +\frac{1-2|z|^2}{2|z|^2-2}+{{\mathrm{O}}}(\sqrt{w}) \end{aligned}$$ (4.5)
  as \(w\rightarrow 0\), and
  $$\begin{aligned} {{\mathrm{Im}}}\, m_\mathrm{c}(w,z)\sim |w|^{-1/2}. \end{aligned}$$ (4.6)
- Case 4: \(|w|\ge \tau \), \(|w-\lambda _+|\ge \tau \) and \(E\le \lambda _+\). We have
  $$\begin{aligned} |m_\mathrm{c}|\sim 1,\quad {{\mathrm{Im}}}\, m_\mathrm{c}\sim 1. \end{aligned}$$ (4.7)
Here Case 1 covers the regime where \(E \ge \lambda _+\) and \(w\) is far away from \(\lambda _+\). Case 2 concerns the regime where \(w\) is near \(\lambda _+\), while Case 3 is for \(w\) near the origin. Finally, Case 4 covers the \(w\) not treated in the first three cases.
Lemma 4.2
There exists \( \tau _0>0\) such that for any \(\tau \le \tau _0\), if \(|z|\ge 1+\tau \) and \(|w|\le \tau ^{-1}\) then the following properties concerning \(m_\mathrm{c}\) hold. All constants in the following estimates depend on \(\tau \). Recall from (3.2) that \(\lambda _-=\frac{( \alpha -3)^3}{8(\alpha -1)} >0\).
- Case 1: \(E\ge \lambda _+\) and \(|w-\lambda _+|\ge \tau \). We have
  $$\begin{aligned} |{{\mathrm{Re}}}m_\mathrm{c}|\sim 1,\quad -\frac{1}{2}\le {{\mathrm{Re}}}m_\mathrm{c}<0 , \quad {{\mathrm{Im}}}\, m_\mathrm{c}\sim \eta . \end{aligned}$$
- Case 2: \(E\le \lambda _-\) and \(|w-\lambda _-|\ge \tau \). We have
  $$\begin{aligned} |{{\mathrm{Re}}}m_\mathrm{c}|\sim 1,\quad 0\le {{\mathrm{Re}}}m_\mathrm{c}, \quad {{\mathrm{Im}}}\, m_\mathrm{c}\sim \eta . \end{aligned}$$
- Case 3: \(|\kappa +\eta |\le \tau \). We have
  $$\begin{aligned} m_\mathrm{c}(w, z)&= \frac{2}{-3\mp \alpha } + \sqrt{\frac{8(\pm 1+\alpha )^3}{\pm \alpha (\pm 3+\alpha )^5}}\, (w-\lambda _\pm )^{1/2} +{{\mathrm{O}}}(\lambda _\pm -w), \nonumber \\ {{\mathrm{Im}}}\, m_\mathrm{c}&\sim \left\{ \begin{array}{ll} \frac{\eta }{\sqrt{ \kappa }} &{}\quad \text{ if } \kappa \ge \eta \text{ and } E\notin [\lambda _-, \lambda _+], \\ \sqrt{\eta } &{}\quad \text{ if } \kappa \le \eta \text{ or } E\in [\lambda _-, \lambda _+]. \end{array} \right. \end{aligned}$$ (4.8)
- Case 4: \(|w|\ge \tau \), \(|w-\lambda _+|\ge \tau \) and \(\lambda _- \le E\le \lambda _+\). We have
  $$\begin{aligned} |m_\mathrm{c}|\sim 1,\quad {{\mathrm{Im}}}\, m_\mathrm{c}\sim 1. \end{aligned}$$
Here Case 1 covers the regime where \(E \ge \lambda _+\) and \(w\) is far away from \(\lambda _+\). Case 2 concerns the regime where \(E \le \lambda _-\) and \(w\) is far away from \(\lambda _-\). Case 3 is for \(w\) near \(\lambda _\pm \). Finally, Case 4 covers the \(w\) not treated in the first three cases.
The following lemma concerns the two cases covered in Lemmas 4.1 and 4.2, i.e., \(z\) either strictly inside or strictly outside the unit disk.
Lemma 4.3
There exists \( \tau _0 >0\) such that for any \(\tau \le \tau _0\), if either the conditions \(|z| \le 1-\tau \) and \(|w|\le \tau ^{-1}\) hold, or the conditions \(|z| \ge 1+\tau \), \(|w|\le \tau ^{-1}\), \({{\mathrm{Re}}}\, w \ge \lambda _-/5\) hold, then we have the following three bounds concerning \(m_\mathrm{c}\) (all constants in the following estimates depend on \(\tau \)):
5 Proof of Theorem 2.2, local circular law in the bulk
Our main tool in this section will be Theorem 3.4, which critically uses the hypothesis \(||z|-1|\ge \tau \): when \(z\) is on the unit circle, the self-consistent equation (which is a fixed point equation for the function \(g(m)=(1+w m(1+m)^2)/(|z|^2-1)\), see (6.21) later in this paper) becomes unstable.
We follow Girko’s idea [13] of Hermitization, which can be reformulated as the following identity (see e.g. [15]): for any smooth \(F\)
We will use the notation \(z= z(\xi )=z_0+N^{-a} \xi \). Choosing \(F= f_{z_0}\) defined in Theorem 2.2 and changing the variable to \(\xi \), we can rewrite the identity (5.1) as
Recall that the \(\lambda _j(z)\)’s are the ordered eigenvalues of \(Y_z^* Y_z \), and define \(\gamma _j(z)\) as the classical location of \(\lambda _j(z)\), i.e.
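In the standard convention, which we assume here, the classical locations are determined by

$$\begin{aligned} \int _{-\infty }^{\gamma _j(z)} \rho _\mathrm{c}(x,z)\, \mathrm{d}x = \frac{j}{N}, \qquad j = 1, \dots , N. \end{aligned}$$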
Suppose we have
Thanks to Proposition 3.1, one can check that uniformly in \( |z| < 1-\tau \), and also in the domain \(1+\tau \le |z|\le \tau ^{-1}\) (\(\tau >0\)), for any \(\delta >0\) we have
for large enough \(N\). We therefore have
where we have used that
It is known, by Lemma 4.4 of [1], that
Combining (5.4) and (5.5), we have proved (2.2) provided that we can prove (5.3). To prove (5.3), we need the following rigidity estimate which is a consequence of Theorem 3.4.
Lemma 5.1
Suppose \(\tau \le ||z|-1|\le \tau ^{-1} \) for some \(\tau >0\) independent of \(N\). Then for any \(\zeta >0\), there exists \(C_\zeta >0\) such that the following event holds with \( \zeta \)-high probability: for any \( \varphi ^{C_\zeta }<j<N-\varphi ^{C_\zeta }\) we have
and in the case \(|z|\le 1-\tau \),
in the case \(|z|\ge 1+\tau \),
Proof
First, with (3.5) and the definition (3.4), for any \(\zeta \) there exists \(C_\zeta >0\) such that
holds with \( \zeta \)-high probability. It also implies that for \(\eta =\varphi ^{C_\zeta }N^{-1}|m_\mathrm{c} |^{-1}\),
Then using the fact that \( \eta \, {{\mathrm{Im}}}\,m(E+\mathrm{i}\eta ) \) and \( \eta \, {{\mathrm{Im}}}\, m_\mathrm{c}(E+\mathrm{i}\eta ) \) are increasing in \(\eta \), we obtain that (5.10) holds for any \(0\le \eta \le {{\mathrm{O}}}( \varphi ^{C_\zeta }N^{-1}|m_\mathrm{c} |^{-1})\) with \( \zeta \)-high probability. Notice that \({{\mathrm{Im}}}\, m\) and \({{\mathrm{Im}}}\, m_\mathrm{c}\) are positive numbers. Define the interval
and define \(\eta _j\ge 0\) as the smallest positive solution of
Since
we have by (5.10) that
Using the Helffer–Sjöstrand functional calculus (see e.g. [6]), letting \(\chi (\eta )\) be a smooth cutoff function with support in \([-1,1]\), with \(\chi (\eta )= 1\) for \( |\eta |\le 1/2\) and with bounded derivatives, we have for any \(q: \mathbb R \rightarrow \mathbb R \),
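The omitted display is the Helffer–Sjöstrand formula; in a standard form (which we assume here), with the almost-analytic extension \(\tilde{q}(x+\mathrm{i}\eta ) := (q(x)+\mathrm{i}\eta \, q^{\prime }(x))\chi (\eta )\), it reads

$$\begin{aligned} q(\lambda ) = \frac{1}{2\pi } \int _{\mathbb{R }^2} \frac{\mathrm{i}\eta \, q^{\prime \prime }(x)\chi (\eta ) + \mathrm{i}\,\big (q(x)+\mathrm{i}\eta \, q^{\prime }(x)\big )\, \chi ^{\prime }(\eta )}{\lambda - x - \mathrm{i}\eta }\, \mathrm{d}x\, \mathrm{d}\eta . \end{aligned}$$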
To prove (5.6), we choose \(q\) to be supported in \([E_1 , E_2 ]\) such that \(q(x)=1\) if \(x\in [E_1+\eta _1, E_2-\eta _2]\) and \(|q^{\prime }|\le C(\eta _{i})^ {-1}, |q^{\prime \prime }|\le C(\eta _{i})^{-2}\) if \(|x-E_i|\le \eta _i\). We now claim that
Combining (5.12) and (5.11), we have for any \(1\le j\le N\),
which implies (5.6) with \(C_\zeta \) in (5.6) replaced by \(2 C_\zeta \).
It remains to prove (5.12). Since \(q\) and \(\chi \) are real, with \(\Delta m=m-m_\mathrm{c}\)
The first term is estimated by
using (3.5) and the fact that the support of \(\chi ^{\prime }\) is contained in \(1/2\le |\eta |\le 1\).
For the second term in the r.h.s. of (5.13), with \(|q^{\prime \prime }|\le C\eta _i^{-2}\), (5.9) and (5.10), we obtain
We now integrate the third term in (5.13) by parts first in \(E\), then in \(\eta \) (and use the Cauchy-Riemann equation \(\frac{\partial }{\partial E}{{\mathrm{Im}}}(\Delta m)=-\frac{\partial }{\partial \eta } {{\mathrm{Re}}}(\Delta m))\) so that
We can therefore bound the absolute value of the third term in (5.13) by
where the last term can be bounded as the first term in r.h.s. of (5.13). By using (5.9) we have
where we used \(\eta _i\ge N^{-C}\). Together with (5.14) and (5.15), we obtain (5.12) and complete the proof of (5.6).
Now we prove (5.7). Using (5.2) and Proposition 3.1, we have
One can check easily that
and for \(j\ge 2\)
Combining (5.18) with (5.6), we obtain (5.7).
For (5.8), the proof is similar to the above reasoning, but simpler: in this case \(\gamma _j\sim 1\) for \(j\le N/2\). For \(j\ge N/2\), \(\gamma _j\) is bounded as in (5.17), and one can check that if \(1+\tau \le |z|\le \tau ^{-1}\), then by Proposition 3.1 we have
which implies (5.8).\(\square \)
We return to the proof of the local circular law, Theorem 2.2. We now only need to prove (5.3) from Lemma 5.1. From (5.7) and (5.8), we have
and
Notice that, for large enough \(C\), there is a constant \(c>0\) such that for any \(j\) we have
with probability larger than \(1-\exp ({{ -N^c}})\) (for this elementary fact, one can for example see that the entries of \(X\) are smaller than \(1\) with probability greater than \(1-\vartheta ^{-1}e^{-N^\vartheta }\) by the subexponential decay assumption (2.1), and then use \(\sum \lambda _j={{\mathrm{Tr}}}Y^* Y \)), so together with the above bounds on \(\left| \log \lambda _j(z)-\log \gamma _j(z)\right| \) this proves that for any \(\zeta >0\), there exists \(C_\zeta >0\) such that
with \( \zeta \)-high probability. Furthermore, one can see that our estimates hold uniformly for \(z\) in this region.
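The elementary fact \(\sum _j \lambda _j = {{\mathrm{Tr}}}\, Y^* Y\) invoked above, which forces every individual \(\lambda _j\) to be \({{\mathrm{O}}}(N)\) with overwhelming probability, can be checked directly (a numpy sketch; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
N, z = 100, 0.5
X = rng.standard_normal((N, N)) / np.sqrt(N)
Y = X - z * np.eye(N)
lam = np.linalg.eigvalsh(Y.T @ Y)

total = lam.sum()          # equals Tr Y^* Y exactly
trace = np.trace(Y.T @ Y)  # of order N, so every lambda_j is O(N)
```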
On the other hand, the following important Lemma 5.2, concerning the smallest eigenvalue, holds. It implies that
holds uniformly for \(z\) in any fixed compact set. It is easy to check that for any \(\delta >0\), for large enough \(N\),
Hence we can extend the summation in (5.19) to all \(j \ge 1\), which gives (5.3) and completes the proof of Theorem 2.2.
Lemma 5.2
(Lower bound on the smallest eigenvalue) Under the same assumptions as Theorem 2.2,
holds uniformly for \(z\) in any fixed compact set.
Proof
This lemma follows from [20] or Theorem 2.1 of [22], which gives the required estimate uniformly in \(z\). Note that the typical size of \(\lambda _1\) is \(N^{-2}\) [20], and we need a much weaker bound of type \(\mathbb{P }(\lambda _1(z)\le e^{-N^{{\varepsilon }}})\le N^{-C}\) for any \({\varepsilon },C>0\). This estimate is very simple to prove if, for example, the entries of \(X\) have a density bounded by \(N^C\). Then, from the variational characterization \(\lambda _1(z)=\min _{|u|=1}\Vert X(z)u\Vert ^2\), one easily gets
where \(u_k(z)\) is a unit vector independent of \(X(z)e_k\). By conditioning on \(u_k(z)\), the result of this lemma is straightforward since the matrix entries have a density.\(\square \)
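The variational characterization used in the proof identifies \(\lambda _1(z)\) with the square of the smallest singular value of \(X - z\); this can be checked numerically (a numpy sketch; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N, z = 80, 0.3
X = rng.standard_normal((N, N)) / np.sqrt(N)
Y = X - z * np.eye(N)

lam1 = np.linalg.eigvalsh(Y.T @ Y)[0]          # smallest eigenvalue of Y^* Y
smin = np.linalg.svd(Y, compute_uv=False)[-1]  # smallest singular value of X - z
```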
6 Weak local Green function estimate
In this section, we make a first step towards Theorem 3.4, with a weaker version of it, stated hereafter.
Theorem 6.1
(Weak local Green function estimates) Under the assumptions of Theorem 3.4, the following event holds with \( \zeta \)-high probability (see (3.4) for the definition of \(\underline{\mathrm{S}}\)):
This theorem will be proved in the subsequent subsections.
6.1 Identities for Green functions and their minors
There are many different ways to form minors for the matrices \(Y^* Y\) and \( Y Y^* \). We will use the following definition (where we use the notation \([\![ a, b]\!]=[a,b]\cap \mathbb Z \)).
Definition 6.2
Let \(\mathbb{T }, \mathbb{U } \subset [\![1, N]\!]\). Then we define \(Y^{(\mathbb{T }, \mathbb{U })}\) as the \( (N-|\mathbb{U }|)\times ( N-|\mathbb{T }|) \) matrix obtained by removing all columns of \(Y\) indexed by \(i \in \mathbb{T }\) and all rows of \(Y\) indexed by \(i \in \mathbb{U }\). Notice that we keep the labels of indices of \(Y\) when defining \(Y^{(\mathbb{T }, \mathbb{U })}\).
Let \(\mathbf{y}_i\) be the \(i \)th column of \(Y\) and \(\mathbf{y}^{(\mathbb{S })}_i\) be the vector obtained by removing \(\mathbf{y}_i (j) \) for all \( j \in \mathbb{S }\). Similarly, we define \(\mathrm{y}_i\) to be the \(i \)th row of \(Y\). Define
By definition, \(m^{(\emptyset , \emptyset )} = m\). Since the eigenvalues of \(Y^* Y \) and \(Y Y^*\) are the same except for the zero eigenvalues, it is easy to check that
For \(|\mathbb{U }|=| \mathbb{T }|\), we define
By definition, \(G^{(\mathbb{T }, \mathbb{U })} \) is an \((N-|\mathbb{T }|)\times (N-|\mathbb{T }|)\) matrix and \(\mathcal{G }^{(\mathbb{T }, \mathbb{U })} \) is an \((N-|\mathbb{U }|)\times (N-|\mathbb{U }|)\) matrix. For \(i\) or \(j\in \mathbb{T }\), \(G_{ij}^{(\mathbb{T }, \mathbb{U })}\) has no meaning from the previous definition, but we define \(G_{ij}^{(\mathbb{T }, \mathbb{U })} = 0\) whenever \(i\) or \(j \in \mathbb{T }\). A similar convention applies to \(\mathcal{G }_{i j}^{(\mathbb{T }, \mathbb{U })}\), which is zero if \(i\) or \(j \in \mathbb{U }\).
Notice that we can view \(Y_z Y^*_z = (W_{z^*})^* W_{z^*} \) where \( W_{z^*} = Y^*_z\), so all properties of \(G^{(\mathbb{T }, \mathbb{U })}\) have parallel versions for \(\mathcal{G }^{(\mathbb{U }, \mathbb{T })}\). We shall call this property row–column reflection symmetry, i.e., we interchange \(G^{(\mathbb{U },\mathbb{T })}, Y, z, \mathbf{y}_i \) with \(\mathcal{G }^{(\mathbb{T }, \mathbb{U })}, Y^*, z^*, \mathrm{y}_i \). Here \(\mathbf{y}_i\) is an \(N\times 1 \) column vector and \(\mathrm{y}_i\) a \(1\times N \) row vector. The following lemma provides the formulas relating Green functions and their minors.
Lemma 6.3
(Relation between \(G, G^{(\mathbb{T },\emptyset )}\)and \(G^{( \emptyset , \mathbb{T })}\)) For \(i,j \ne k \) ( \(i = j\) is allowed) we have
and
Furthermore, the following crude bound on the difference between \(m\) and \(m_G^{(\mathbb U , \mathbb T )}\) holds: for \(\mathbb{U }, \mathbb{T }\subset [\![1, N]\!]\) we have
Proof
By the row–column reflection symmetry, we only need to prove the formulas involving \(G\). We first prove (6.4). In [8, 9], a lemma was proved concerning the Green functions of matrices and their minors; it is stated as Lemma 9.2 in Appendix B. Let
For \(\mathbb T \subset [\![1, N]\!]\), denote by \(H^{[\mathbb T ]}\) the \((N-|\mathbb T |)\times (N-|\mathbb T |)\) minor of \(H\) obtained by removing the rows and columns indexed by \(i\in \mathbb T \). Following the convention in Definition 9.1, we define
By definition, we have
Then we can apply (9.4) to \(G^{(\mathbb T ,\emptyset )}\) and obtain (6.4).
We now prove (6.5). Recall the rank one perturbation formula
where \(\mathbf{v}\) is a row vector and \(\mathbf{v}^*\) is its Hermitian conjugate. Together with
we obtain (6.5).
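The rank one perturbation formula recalled above is of Sherman–Morrison type; under the assumption that the omitted display is the standard identity \((A + \mathbf{v}^* \mathbf{v})^{-1} = A^{-1} - A^{-1}\mathbf{v}^* \mathbf{v} A^{-1} / (1 + \mathbf{v} A^{-1} \mathbf{v}^*)\) for a row vector \(\mathbf{v}\), it can be checked numerically (a numpy sketch; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 30
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
v = rng.standard_normal((1, n))                  # a row vector; v^* v is rank one

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + v.T @ v)                 # direct inverse of the perturbed matrix
rhs = Ainv - (Ainv @ v.T @ v @ Ainv) / (1.0 + (v @ Ainv @ v.T)[0, 0])
```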
We now prove (6.6). With (6.4), we have
Moreover, by diagonalization in an orthonormal basis and the obvious identity \(|(\lambda -w)^{-2}|=\eta ^{-1}{{\mathrm{Im}}}[(\lambda -w)^{-1}]\) (\(\lambda \in \mathbb{R }\)), we have
so we have proved that
By (6.3), (6.10) holds for \(m_\mathcal{G }^{( i, \emptyset )}\) as well. Similar arguments can be used to prove (6.6) for \(m_ \mathcal{G }^{(i, j)}, m_ G^{(i, j)}\) and the general cases. This completes the proof of Lemma 6.3.\(\square \)
The next step is to derive equations between the matrix and its minors. The main results are stated as the following Lemma 6.5. We first need the following definition.
Definition 6.4
In the following, \(\mathbb E _X\) means the integration with respect to the random variable \(X\). For any \(\mathbb{T }\subset [\![ 1, N]\!]\), we introduce the notations
and
Recall by our convention that \(\mathbf{y}_i\) is a \(N\times 1 \) column vector and \(\mathrm{y}_i\) is a \(1\times N \) row vector. For simplicity we will write
Lemma 6.5
(Identities for \(G, \mathcal{G }, Z\)and \(\mathcal{Z }\)) For any \( \mathbb T \subset [\![ 1, N]\!]\), we have
where, by definition, \(\mathcal{G }_{ii}^{(i,\mathbb{T })}=0\) if \(i\in \mathbb{T }\). Similar results hold for \(\mathcal{G }\):
Proof
By the row–column reflection symmetry, we only need to prove the \(G\) part of this lemma. Furthermore, for simplicity, we prove the case \(\mathbb T =\emptyset \); the general case can be proved in the same way.
We first prove (6.11). Let \(H=Y^* Y\). Similarly to (6.7) and (6.8), we define \(G^{[i]}\) and \(H^{[i]}\). Then using (9.2) and (6.9), we have
From the definition of \( H \), we have \(h _{ik}= \mathbf{y}_i^* \mathbf{y}_k \). Then
For any matrix \(A\), we have the identity
and as a consequence
Combining (6.15) and (6.17), we have
We now write
By definition
which completes the proof of (6.11).
We now prove (6.12). As above, using now (9.3), we have
where
Then using (6.16) again, we obtain (6.12).\(\square \)
6.2 The self-consistent equation and its stability
We now derive the self-consistent equation for \(m(w)\) and its stability estimates. Following [9], we introduce the following control parameter:
Definition 6.6
Define the control parameter
Notice that all quantities depend on \(w\) and \(z\). Furthermore, if \(\Lambda \le C |m_c|\) then for \(w \in \underline{\mathrm{S}} (b)\) (see (3.4)),
The quantity \(|m_\mathrm{c}|^{-1} \Psi \) will be our controlling small parameter in this paper.
Before we start to prove Theorem 3.4, we make the following observation. The parameter \(z\) can be either inside the unit ball or outside of it. Recall the properties of \(m_\mathrm{c}\) in Sect. 4. By Proposition 3.1, the limiting density \(\rho _c\) of \(YY^*\) is supported on \([\max \{0,\lambda _-\}, \lambda _+]\), where \(\lambda _- < 0\) and \(\lambda _+ \sim 1\) when \(|z| \le 1 - \tau \). Since \(\lambda _- < 0\) in this case, we will never approach \(\lambda _-\). On the other hand, we will have to consider the behavior when \(w \sim 0\). When \( 1+ \tau \le |z| \le \tau ^{-1}\), we have \(\lambda _-> 0\) and \(w\) stays away from the origin by definition of \(\underline{\mathrm{S}} (C_\zeta )\), i.e., the condition \(E \ge \lambda _-/5\). Our approach to the local Green function estimates will use the self-consistent equation for \(m(w)\). This approach depends crucially on the stability properties of this equation, which can be divided roughly into three cases: \(w\) near the edges \(\lambda _\pm \), \(w \sim 0\), or \(w\) in the bulk (defined here as the rest of the possible \(w \in \underline{\mathrm{S}} (C_\zeta )\)). From Lemmas 4.1 and 4.2, the behavior of \(m_c\) near the edges \(\lambda _\pm \) when \( |z| \ge 1 + \tau \) is identical to its behavior near the edge \(\lambda _+\) when \( |z| \le 1 - \tau \). In the bulk, the behavior in both cases is the same. Thus we will only consider the case \( |z| \le 1 - \tau \), since it covers all three different behaviors. Hence from now on, we will assume that \(|z| \le 1 -\tau \). We emphasize that \({{\mathrm{Im}}}\, m_\mathrm{c} \ll |m_\mathrm{c}|\) when \(|\lambda _+-w|\ll 1\). All stability results concerning the self-consistent equation will be under the following assumption (6.20).
Lemma 6.7
(Self-consistent equation) Suppose \(|z| \le 1 -\tau \) for some \(\tau > 0\). Then there exists a small constant \( \alpha > 0\) independent of \(N\) such that if the estimate
holds for some \(|w|\le C\) on a set \(A\) in the probability space of matrix elements for \(X\), then in the set \(A\) we have with \( \zeta \)-high probability
provided that \( w \in \underline{\mathrm{S}} (b)\) for some \(b > 5 Q_\zeta \) with \(Q_\zeta \) defined in Lemma 10.1.
Proof
By (4.9), (4.10) and (6.20), for \(|z| \le 1-\tau \) the following inequalities hold on the set \(A\):
Furthermore, using (6.22), (4.9), (4.10), (6.20) and (3.1), we have in the set \(A\)
The origin of the self-consistent equation (6.21) relies on the choice \(\mathbb T = \{i\}\) in (6.13):
By definition of \(\Psi \) and (6.6),
Moreover, we have from (10.1) that with \( \zeta \)-high probability in \(A\)
where we have used (6.26), (6.20) and, by definition, \(G^{(i, i)}_{ii} = 0\). We would like to estimate \( (\mathcal{G }_{ii}^{(i,\emptyset )})^{-1}\) in (6.25) by treating \(( 1+ m )\) as the main term and the rest as error terms. From Eqs. (6.20) and (6.19), the ratio between the error terms and the main term for \( w \in \underline{\mathrm{S}} (b)\) with \(b > 5 Q_\zeta \) is bounded by
Therefore for any \( w \in \underline{\mathrm{S}} (b)\) with \(b > 5 Q_\zeta \) we have with \( \zeta \)-high probability
where
where we have used (6.22) and \( |m_c| \sim |w|^{-1/2}\). Together with (6.23), we thus have with \( \zeta \)-high probability
Using this estimate, (6.6) and (6.29), we can estimate \(\mathcal{Z }_{i }:= \mathcal{Z }_{i }^{(\emptyset )}\) by
We can now use (6.32), (6.29) and (6.6) to estimate the right hand side of (6.11) such that
where \(\mathcal E _1\) and \(\mathcal{Z }_{i }\) are bounded in (6.30) and (6.32) and \(\mathcal E _2\) is bounded by
In the last inequality, we have used (6.24) to bound \( 1+ m - \frac{ |z|^2 }{ w(1+ m)} \) and (4.9) for \(m_\mathrm{c}\).
Summing over the index \(i\) in (6.34), we have
Hence we have proved
Together with the assumption (6.20) on \(\Lambda \) and (4.9) on the order of \(m_\mathrm{c}\), this proves (6.21).\(\square \)
Corollary 6.8
Under the assumptions of Lemma 6.7, the following properties hold. Let \(\mathbb{T }, \mathbb{U }\subset [\![ 1, N]\!]\) be such that \(i\notin \mathbb{T }\) and \(|\mathbb{T }|+|\mathbb{U }|\le C\). For any \(\zeta >0\) and \( w \in \underline{\mathrm{S}} (b)\) for some \(b > 5 Q_\zeta \) with \(Q_\zeta \) defined in Lemma 10.1, we have with \( \zeta \)-high probability for any \(i\in \mathbb{U }\) that
If \(i \not \in \mathbb{U }\), then
Proof
We first prove the case \(i \not \in \mathbb{U }\). We claim that the parallel version of (6.34) holds as well, i.e.,
Comparing (6.38) with (6.34), we have proved (6.37).
We now prove the case \(i \in \mathbb{U }\). By row–column symmetry, we have
Hence we have to prove, for \(i \in \mathbb{U }\) and \(i \not \in \mathbb{T }\), that
We will omit \(A\) in the following argument.
One can extend (6.25)–(6.30) to \(\mathcal{G }_{ii}^{(\mathbb{U } , \mathbb{T })}\) and obtain
as in (6.29). Comparing (6.39) with the equation for \(\mathcal{G }_{ii}^{(i ,\emptyset )}\) (6.29), we obtain (6.36) in the case \(i\in \mathbb{U }\).\(\square \)
We define for any sequence \(A_i\) (\(1\le i\le N\)) the quantity
In application, we often use \( A=Z\) or \(A=\mathcal{Z }\). Define
The following lemma is our stability estimate for the equation \( \mathcal D (m)=0\). Notice that it is a deterministic result. It assumes a crude upper bound on \(|\mathcal D (m)| \) and then derives a more precise estimate on \(\Lambda =|m-m_c|\).
Lemma 6.9
(Stability of the self-consistent equation) Suppose that \(1 - |z|^2 > t > 0\). Let \(\delta : \mathbb C \to \mathbb R _+\) be a continuous function satisfying the bound
Suppose that, for a fixed \(E\) with \( 0 \le E \le C\) for some constant \(C\) independent of \(N\), (6.20) and the estimate
hold for \(10\ge \eta \ge \tilde{\eta }\) for some \(\tilde{\eta }\) which may depend on \(N\). Denote \( {\varepsilon }^2 := \kappa + \eta \) where \(\kappa = |E- \lambda _+|\) (4.1) in our case that \(1 - |z|^2 > t > 0\). Then there is an \(M_0\) large enough independent of \(N\) such that for any fixed \(M>M_0\) and \(N\) large enough (depending on \(M\)) the following estimates for \( \Lambda = |m-m_\mathrm{c}|\) hold for \(10\ge \eta \ge \tilde{\eta }\):
The three upper bounds (i.e., the first inequalities in (6.42)–(6.44)) can be summarized as
Proof
Define the polynomial
By definition of \(\Upsilon \) (6.21), we have
Since \(P_{w, z}(m_\mathrm{c}) = 0\), we have
By definition of \(P_{w, z}\), we can express \(A\) and \(B\) by
Case 1: \( {\varepsilon }^2 := \kappa + \eta \ge 1/M^2 \). In this case, we claim that the following estimates concerning \(A\) and \(B\) hold:
Since \(A\) and \(B\) are explicit functions of \(m_\mathrm{c}\), Eq. (6.46) simply expresses properties of the solution \(m_\mathrm{c}\) of the third order polynomial \(P_{w, z}(m)\). We now give a sketch of the proof. Consider first the case \(|w| \ll 1\). Then (6.46) follows from (4.9), (4.10), (4.6) and the definitions of \(A\) and \(B\).
We now assume that \( w\sim 1\). Clearly, \( |B|\le {{\mathrm{O}}}(1) \sim |w^{ 1/2}|\), which gives (6.46) for \(B\). To prove \(|A|\ge C/M\), note that by the definition of \(m_\mathrm{c}\) (3.1) we have \(w= \frac{-1 - m_\mathrm{c} + m_\mathrm{c} |z|^2}{ m_\mathrm{c} (1 + m_\mathrm{c})^2 }\). Thus we can rewrite \(A\) as
By (4.9) and (4.11) (where \(\alpha = \sqrt{ 1 + 8 |z|^2}\)), we obtain (6.46).
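For orientation, clearing denominators in the relation \(w= \frac{-1 - m_\mathrm{c} + m_\mathrm{c} |z|^2}{ m_\mathrm{c} (1 + m_\mathrm{c})^2 }\) suggests the following explicit form of the cubic; this is our reconstruction from the definitions (consistent with the relations \(A = P^{\prime }_{w, z}(m_\mathrm{c})\) and \(B = P^{\prime \prime }_{w, z}(m_\mathrm{c})\) used at \(w = \lambda _+\) in Case 2 below), not a quoted equation:
$$\begin{aligned} P_{w, z}(m) = w\, m (1+m)^2 + (1+m) - |z|^2 m , \qquad P_{w, z}(m_\mathrm{c} + u) = A u + \tfrac{1}{2} B u^2 + w\, u^3 . \end{aligned}$$
In particular, \(P_{w, z}(m_\mathrm{c})=0\) is then equivalent to the defining equation (3.1) for \(m_\mathrm{c}\).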
We now prove (6.42) by contradiction. If (6.42) is violated then with \(u = m-m_c\) we have
where \(M\) is a large constant in the last inequality. By (6.41) and (4.9), \(|\Upsilon | \le C \delta / |w|\). Thus we have
which is a contradiction provided that \(M\) is large enough.
Case 2: \( {\varepsilon }^2 := \kappa + \eta \le 1/M^2 \). Note in this case \(w\sim 1\). Then by (4.3) we have
where the last equation can be checked by direct computation and we used \(|z|^2<1-t<1\). There is a more intrinsic reason why the last equation for \(A\) holds: \(\lambda _+\) is a point at which the polynomial \(P_{w, z} (m)|_{w = \lambda _+}\) has a double root. Therefore, we have \(0=P^{\prime }_{w, z} (m_\mathrm{c} (\lambda _+, z) ) = A(\lambda _+, z)\).
Notice that when \(\kappa + \eta \) is small enough, we can approximate \( A(w, z) \) by linearizing around \(w= \lambda _+\). Thus by the relation \(P^{\prime }_{w, z} (m_\mathrm{c} (\lambda _+, z) ) = A(\lambda _+, z) = 0\), we have
where we have used that \(P_{w, z}^{\prime \prime } (m_\mathrm{c} (\lambda _+, z) ) = B(\lambda _+, z) \sim 1\), that \(\frac{ \partial P_{w, z}}{ \partial w} (m_\mathrm{c} (\lambda _+, z) ) \sim 1\) and, by (4.3), that \( (m_\mathrm{c} (w, z) - m_\mathrm{c} (\lambda _+, z)) \sim \sqrt{\kappa + \eta }\). While we can also check the conclusion of (6.48) by direct computation, the current derivation provides a more intrinsic reason why it is correct.
Case 2a: Suppose (6.43) is violated. We first choose \(M\) large enough so that \(|m_\mathrm{c} (1 + m_\mathrm{c}) | \le M^{1/4} \) in this regime. Then by (6.47) and (6.48), with \(w\sim 1 \), we have
which is a contradiction provided that \(M\) is large enough. Here we have used the restriction on \({\varepsilon }\) and \(\delta \) in (6.43), namely \({\varepsilon }\ge M^{3/4} \sqrt{\delta }\), that \(M\) is a large enough constant, and that \(\delta \ll 1\).
Case 2b: Suppose (6.44) is violated. Similarly we have
which is a contradiction. Here we have used the restriction on \({\varepsilon }\) and \(\delta \) in (6.44) and that \(M\) is a large enough constant, so that \( C_2{\varepsilon }\le C_2 M^{3/4} \sqrt{\delta }\le M \sqrt{\delta }/20\).\(\square \)
With a slightly stronger condition on \(\delta \) and an initial estimate \(\Lambda \ll 1\) when \(\eta \sim 1\), the first inequalities in (6.42)–(6.44), i.e., (6.45), always hold. We state this as the following corollary, which is a deterministic statement.
Corollary 6.10
(Deterministic continuity argument) Suppose that the assumptions of Lemma 6.9 hold. If we have
and that \(\delta \) is decreasing in \(\eta \) for \({\varepsilon }=\sqrt{\kappa +\eta }\) small enough, then (6.45) holds for all \(\eta \in [\tilde{\eta }, 10]\).
Proof
By assumption, \(\Lambda (E+10 \mathrm{i})\ll 1\), so the left inequality of (6.42) holds for \(\eta = 10\). By continuity of \(\Lambda \), the same inequality,
holds for \(w=E+\mathrm{i}\eta \) as long as \(\eta \in [\widetilde{\eta }, 10]\) and \({\varepsilon }\ge 1/M\).
Suppose that as \(\eta \) decreases, we get to Case 2a. Notice that when we decrease \(\eta \), by the conditions on \({\varepsilon }\) we will not go back to Case 1 from either Case 2a or Case 2b. For any \({\varepsilon }\le 1/M\) with \(M\) large, we have
Hence at the transition point from Case 1 to Case 2a, the inequality \( \Lambda (E+ i \eta ) \le \frac{M \delta }{ {{\varepsilon }} } \) holds. Thus by continuity of \(\Lambda \), the bound \( \Lambda (E+ i \eta ) \le \frac{M \delta }{ {{\varepsilon }} } \) in (6.43) holds until we leave Case 2a.
It is possible that we cross from Case 2a to Case 2b. At the transition point, we have \( \delta = \frac{ {\varepsilon }^2 }{M^{3/2}}\) and thus
for \(M\) large. Hence the first inequality of Case 2b, i.e., \(\Lambda \le M \sqrt{\delta }\), holds. By continuity, this bound continues to hold unless we leave Case 2b. Since \(\delta \) is decreasing in \(\eta \) when \({\varepsilon }\) is small, once we get to Case 2b, we will not go back to Case 2a (or to Case 1, as explained before).
It is possible that Case 2a is skipped and we get to Case 2b directly from Case 1. Notice that \({\varepsilon }=1/M\) at such a transition point and we have \(|w| \sim 1\). Furthermore, by (6.40), we get \(\delta \le 1/\log N \) at the transition point. Putting these together, we have for \(M\) large,
Hence the bound \( \Lambda (E+ i \eta ) \le M \sqrt{\delta }\) in (6.44) holds.\(\square \)
6.3 The large \(\eta \) case
Our method to estimate the Green functions and the Stieltjes transform is to fix the energy \(E\) and apply a continuity argument in \(\eta \), by first showing that the crude bound in Lemma 6.9 holds for large \(\eta \). In order to start this scheme, we need to establish estimates on the Green functions when \(\eta ={{\mathrm{O}}}(1)\). This is the main focus of this subsection. We start with the following lemma, which provides a crude bound on the Green functions.
Lemma 6.11
For any \(w\in \mathrm{S}(0)\) and \(\eta >c>0\) for fixed \(c\), we have the bound
for some \(C>0\). Notice that this bound is deterministic, i.e., independent of the randomness.
Proof
By definition, we have
where we have used \(|\lambda _\alpha -w |\ge {{\mathrm{Im}}}w=\eta \). Furthermore, \(G^{(\mathbb U ,\mathbb T )}_{ij}\) can be bounded similarly.\(\square \)
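For completeness, here is a sketch of the standard spectral-decomposition bound behind this estimate (our gloss, with \(\{u_\alpha \}\) an orthonormal eigenbasis of the underlying self-adjoint matrix and \(\lambda _\alpha \) its eigenvalues):
$$\begin{aligned} |G_{ij}(w)| = \Big | \sum _\alpha \frac{ u_\alpha (i)\, \overline{u_\alpha (j)}}{\lambda _\alpha - w} \Big | \le \frac{1}{\eta } \sum _\alpha |u_\alpha (i)|\, | u_\alpha (j)| \le \frac{1}{\eta } \Big ( \sum _\alpha |u_\alpha (i)|^2 \Big )^{1/2} \Big ( \sum _\alpha |u_\alpha (j)|^2 \Big )^{1/2} = \frac{1}{\eta } , \end{aligned}$$
using \(|\lambda _\alpha - w| \ge \eta \) in the first step and the Cauchy–Schwarz inequality in the second.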
The main result of this subsection is the following bound on \(\Lambda \).
Lemma 6.12
For any \(\zeta >0\) and \({\varepsilon }>0\), we have
with \( \zeta \)-high probability.
Proof
From (6.25)–(6.27), for \(\eta = {{\mathrm{O}}}(1)\) we have
From (6.49), we have \(|G_{ij}|+|\mathcal{G }_{ij}|\le \eta ^{-1}\le {{\mathrm{O}}}(1) \) and \(|m_G^{(i,i)}|\le {{\mathrm{O}}}(1) \). Hence the large deviation estimate (6.27) becomes, with \( \zeta \)-high probability,
Thus for any \({\varepsilon }>0\) we have
Together with (6.11), we obtain
By an argument similar to the one used in (6.51), we can estimate \( \mathcal Z _{i }\) by
for any \({\varepsilon }>0\) with \( \zeta \)-high probability. This implies that, with \( \zeta \)-high probability,
For any \(\eta \) fixed, we claim that the following inequality between the real and imaginary parts of \(m\) holds:
To prove this, we note that for any \(\ell \ge 1\)
Summing up these two inequalities and optimizing over \(\ell \), we have proved (6.53).
Assume that \({{\mathrm{Im}}}\, m \le c (\log N)^{-1}\). From (6.53), we have \(|m| \le c (\log N)^{-1/2}\). Together with \({{\mathrm{Im}}}\, w = \eta \sim 1\),
for some constant \(C\). This contradicts \(|m| \le c (\log N)^{-1/2}\) and we can thus assume that \({{\mathrm{Im}}}\, m \ge c (\log N)^{-1}\) when \(\eta \sim 1\) and \(w={{\mathrm{O}}}(1)\). In this case, we also have
Then (6.52) implies for any \({\varepsilon }>0\) that with \( \zeta \)-high probability
Summing up all \(i\), we have the following equation for \(m\) with \( \zeta \)-high probability:
We can rewrite this equation into the following form:
It can be checked (by computer calculation or by a rather complicated but elementary algebraic calculation) that for \(0 \le E \le 5\lambda _+\) and \(\eta = O(1)\), the third order polynomial \( P_{w, z} (m)\) has no double root and there is only one root with positive real part. We denote this root by \(m_1\) and the other two roots by \(m_2\) and \(m_3\). For \(0 \le E \le 5\lambda _+\) and \(t \le \eta \le t^{-1}\) for any fixed \(t\), the three roots are separated by order one due to compactness. Since there is no double root, we have \( |P^{\prime }_{w, z} (m_1) | \ge c > 0\) whenever \(0 \le E \le 5\lambda _+\) and \(t \le \eta \le t^{-1}\). Thus the stability of (6.54) is trivial and we have proved that in this range of parameters
for any \({\varepsilon }>0\) with \( \zeta \)-high probability.\(\square \)
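The no-double-root claim lends itself to the computer check mentioned above. The sketch below is a heuristic verification only, not part of the proof: it assumes the explicit cubic \(P_{w,z}(m)= w m(1+m)^2 + (1+m) - |z|^2 m\) obtained by clearing denominators in (3.1), locates \(\lambda _+\) by bisection as the largest real \(w\) beyond which the cubic no longer has a complex-conjugate pair of roots, and then measures the minimal root separation on a grid \(0\le E\le 5\lambda _+\), \(\eta =1\).

```python
import numpy as np

def p_coeffs(w, z2):
    # Coefficients of P_{w,z}(m) = w*m*(1+m)^2 + (1+m) - |z|^2*m
    # in the monomial basis: w*m^3 + 2w*m^2 + (w + 1 - |z|^2)*m + 1.
    return [w, 2 * w, w + 1 - z2, 1]

def has_complex_roots(w, z2, tol=1e-9):
    # For real w inside the support of rho_c, the cubic has a
    # complex-conjugate pair of roots; outside, all three roots are real.
    return np.max(np.abs(np.roots(p_coeffs(w, z2)).imag)) > tol

def lambda_plus(z2, lo=0.5, hi=50.0):
    # Bisect for the right edge lambda_+ of the support of rho_c, i.e.,
    # the w > 0 where the complex pair degenerates into a double root.
    assert has_complex_roots(lo, z2) and not has_complex_roots(hi, z2)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if has_complex_roots(mid, z2) else (lo, mid)
    return 0.5 * (lo + hi)

def min_root_gap(z2, eta=1.0, n_grid=200):
    # Minimal pairwise distance between the three roots of P_{w,z}
    # over w = E + i*eta with 0 <= E <= 5*lambda_+.
    lp = lambda_plus(z2)
    gap = np.inf
    for E in np.linspace(0.0, 5.0 * lp, n_grid):
        r = np.roots(p_coeffs(E + 1j * eta, z2))
        for a in range(3):
            for b in range(a + 1, 3):
                gap = min(gap, abs(r[a] - r[b]))
    return gap
```

For \(z=0\) the bisection recovers the familiar Marchenko–Pastur edge \(\lambda _+ = 4\), and for, e.g., \(|z|^2 = 1/4\) the three roots stay uniformly separated on the whole grid, consistent with the compactness argument above.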
6.4 Proof of the weak local Green function estimates
In this subsection, we finish the proof of Theorem 6.1. We fix an energy \(E\) and decrease the imaginary part \(\eta \) of \(w= E + i \eta \). Recall that all stability results are based on assumption (6.20), i.e., \( \Lambda \le \alpha |m_c| \sim \alpha |w|^{-1/2}\) for some small constant \(\alpha \), which so far was established only for large \(\eta \) in (6.50). We would like to know that this condition continues to hold for smaller \(\eta \). More precisely, suppose that (6.20) holds in a set \(A\) for all \(w=E+\eta i\) with \(\eta \in [\widetilde{\eta }, 10]\), where \(\widetilde{\eta }\) satisfies
We can choose \( \tilde{\eta }= \eta _1 < \eta _2 < \cdots < \eta _n = 10\) such that \( |\eta _{i+1} - \eta _i | \le N^{-20}\) and \(n = O(N^{20})\). By (6.21) and (6.50), we have with \( \zeta \)-high probability in \(A\),
for all \(w = E + i \eta _j\) for all \(1 \le j \le n\). Since \(\Lambda ( E + i \eta ) \) is continuous in \(\eta \) at a scale, say, \(N^{-10}\), (6.56) holds for all \(\eta \in [\widetilde{\eta }, 10]\) with \( \zeta \)-high probability in \(A\). Hence for \(\widetilde{\eta }\) satisfying (6.55) the estimate (6.41) holds with
With this choice, we can check that the assumption on \(\delta \), (6.40), holds as well. Furthermore, \(\delta \) is decreasing in \(\eta \) when \({\varepsilon }=\sqrt{\kappa +\eta }\) is small enough. By Corollary 6.10, (6.45) holds for all \(\eta \in [\widetilde{\eta }, 10]\).
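The continuity of \(\Lambda \) at scale \(N^{-10}\) used in the grid argument can be quantified as follows (a sketch, our gloss, using only the standard Stieltjes transform bound):
$$\begin{aligned} |\partial _\eta \, m (E+ \mathrm{i}\eta ) | \le \frac{1}{N} \sum _\alpha \frac{1}{ |\lambda _\alpha - w|^{2}} \le \frac{1}{\eta ^{2}} , \end{aligned}$$
and the same bound holds for \(m_\mathrm{c}\). Since \(\eta \ge \widetilde{\eta }\ge c N^{-1}\) on the grid, one grid step \(|\eta _{i+1} - \eta _i| \le N^{-20}\) changes \(\Lambda = |m - m_\mathrm{c}|\) by at most \(O(N^{-18})\), which is far below the scale \(N^{-10}\).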
For \(|z| < 1-t\) for some \( t >0\), if \(\kappa \ll 1\) then \(|w| \sim 1\) and (6.45) implies
If \(\kappa \ge c > 0\) for some \(c> 0\) then
Combining both cases, for any \(w\in \underline{\mathrm{S}}(b), b > 5 Q_\zeta \), we have with \( \zeta \)-high probability in \(A\) that
Suppose that \(\hat{\eta }:= \widetilde{\eta }- N^{-20} \in \underline{\mathrm{S}} (b)\) for some \(b > 5 Q_\zeta \). Then for any \( \eta \in [\widetilde{\eta }- N^{-20}, \widetilde{\eta }]\), by (6.58) and the continuity of \(\Lambda \), we have
Thus the condition (6.20) in Lemma 6.7 is satisfied with \( \zeta \)-high probability in \(A\). Since we can start this procedure with \(\widetilde{\eta }= 10\) and there are only \(N^C\) steps to get to \(\widetilde{\eta }= \varphi ^{5 Q_\zeta } N^{-1}|w|^{1/2}\), we have proved that (6.58) holds for all \(w \in \underline{\mathrm{S}} (b)\) with \(b > 5 Q_\zeta \). Notice that from now on the assumption (6.20) holds with \( \zeta \)-high probability.
We can now prove the estimate (6.1) on the diagonal term. Comparing (6.35) with (6.38)(\(\mathbb{T }=\mathbb{U }=\emptyset \)), for any \(w\in \underline{\mathrm{S}}(b), b > 5 Q_\zeta \), we have with \( \zeta \)-high probability
By definition of \(\Psi \), (6.58) and \(m_c\sim |w^{-1/2}| \), we have
Using the restriction on \(\eta \) so that \(N \eta \ge |w|^{1/2} \varphi ^{ 5 Q_\zeta }\), we have
With (6.57) and (6.59), we have thus proved that
for any \(w\in \underline{\mathrm{S}}(b), b > 5 Q_\zeta \). Hence the estimate (6.1) on the diagonal element \(G_{ii}\) holds.
To conclude Theorem 6.1, it remains to prove the estimate on the off-diagonal elements. Recall the identity (6.12) for \(G_{ij}\) and the Eqs. (10.3) and (10.4). We can estimate the off-diagonal Green function by
Here we have used \(|G_{ii} G^{(i,\emptyset )}_{jj}|=O(|w|^{-1})\), which follows from (6.36), \(\Lambda \ll |m_c|\) and \(|m_c|\sim |w^{-1/2}|\).
Recall the identity (6.14) that
By (10.2), we have
where we have used (10.4) and that, by definition, \({{\mathrm{Im}}}\, G^{(ij,ij)}_{ii}= 0= {{\mathrm{Im}}}\, G^{(ij,ij)}_{jj} \). Therefore, we have with \( \zeta \)-high probability,
where we also used \(|\mathcal{G }_{ii}^{(ij, \emptyset )}\mathcal{G }^{(ij, i)}_{jj} |\le C|m_c|^2\le C |w|^{-1}\). Together with (6.61) and (6.36), we have proved that with \( \zeta \)-high probability
With (6.60), this proves Theorem 6.1 for the off-diagonal elements provided that \(w\in \underline{\mathrm{S}}(b)\) with \( b > 5 Q_\zeta \). Finally, we rename \(b\) as \(C_\zeta \) and this concludes the proof of Theorem 6.1.
7 Proof of the strong local Green function estimates
Lemma 6.7 provides an error estimate for the self-consistent equation of \(m\) that is linear in \(\Psi \). The following lemma improves this estimate to one quadratic in \(\Psi \). This is the key improvement leading to a proof of the strong local Green function estimates, i.e., Theorem 3.4.
Lemma 7.1
For any \(\zeta >1\), there exists \(R_\zeta > 0 \) such that the following statement holds. Suppose for some deterministic number \(\widetilde{\Lambda }(w, z)\) (which can depend on \(\zeta \)) we have
for \( w \in \underline{\mathrm{S}} ( b), b > 5 R_\zeta \), in a set \(\Xi \) with \(\mathbb P (\Xi ^c) \le e^{-p_{N}(\log N)^2 }\) and \(p_N\) satisfies that
Then there exists a set \(\Xi ^{\prime }\) such that \( \mathbb P (\Xi ^{\prime c}) \le e^{-p_{N} } \) and
Notice that the probability deteriorates in the exponent by a \((\log N)^{-2}\) factor.
We remark that, by Lemma 4.1, \({{\mathrm{Im}}}\, m_\mathrm{c} \ll |m_\mathrm{c}|\) when \(\eta + \kappa \ll 1\). Hence we have to track the dependence on \({{\mathrm{Im}}}\, m_\mathrm{c}\) carefully in the previous lemma. This is one major difference between the weak and strong local Green function estimates. Similar phenomena occur for the Stieltjes transforms of the eigenvalue distributions of Wigner matrices. Lemma 7.1 will be proved later in this section; we now use it to prove Theorem 3.4. We first give a heuristic argument.
Suppose that we have the estimate (7.2) with \(\widetilde{\Psi }\) replaced by \(\Psi \). We assume \(\Lambda \ge (N\eta )^{-1} \) for convenience so that \(\Psi ^2 \sim ({{\mathrm{Im}}}\, m_\mathrm{c}+\Lambda )/(N\eta ) \) (If this assumption is violated then (3.5) holds automatically and we have nothing to prove). Then we can apply Corollary 6.10 by choosing
which implies (6.45). Consider first the case \(\kappa + \eta \sim {{\mathrm{O}}}(1)\). Using (6.45) with the choice of \(\delta \) in (7.3) and \(\kappa +\eta +\delta \ge {{\mathrm{O}}}(1)\), we have
When \(\eta \) satisfies the condition (6.55), the coefficient of \(\Lambda \) on the right side of the last equation is smaller than \(1/2\). Hence, using \({{\mathrm{Im}}}\, m_\mathrm{c}\le |m_\mathrm{c}|\le C |w|^{-1/2}\) (see Proposition 3.2), we have
We now consider the case \(\kappa + \eta \ll 1\) and thus \(|w| \sim {{\mathrm{O}}}(1)\). From the first inequality of (6.45), we have
Also, in the regime \(\kappa + \eta \ll 1\), (4.4) asserts that
Using the choice of \(\delta \) in (7.3), we have
where we have used (7.4) to absorb the last term involving \(\Lambda \) in the last inequality with a change of constant \(C\). This completes the heuristic proof of Theorem 3.4. We now give a formal proof of this theorem assuming Lemma 7.1.
Proof of Theorem 3.4
We first prove (3.6) assuming (3.5). By (6.63) and the definition of \(\Psi \), we have for \(i\ne j\),
where we have used (3.5) in the last step. This proves (3.6).
The main task in proving Theorem 3.4 is to prove (3.5). We first consider the case that \(|z| \le 1-t\). We assume that \( \zeta \) is large enough, e.g., \(\zeta \ge 10\). By Theorem 6.1 and \(m_c \sim |w|^{-1/2}\) (4.9) for \(|z| < 1-t\), there exists a constant \(C_{\zeta +5}\) such that for any \( w \in \underline{\mathrm{S}} (b), b > 5 C_{\zeta +5}\) and \(\alpha \ll 1\), we have
holds with the probability larger than \(1-\exp (-\varphi ^{\zeta +5})\) (here we have replaced \(\zeta \) in Theorem 6.1 by \(\zeta + 5\) for the convenience of the following argument). Since \(\underline{\mathrm{S}}(b)\) is decreasing in \(b\), we can choose \(D_\zeta = 5 \max (C_{\zeta +5}, R_\zeta ) \) so that we can apply Lemma 7.1 with \(p_N = \varphi ^{\zeta +5}\) (which guarantees (7.1)). Together with \(\Lambda _1 \le |m_c|\), we have, for any \( w \in \underline{\mathrm{S}}(D_\zeta )\) fixed,
holds with the probability larger than \(1-\exp (-\varphi ^{\zeta +5}(\log N)^{ -2})\). Notice that the application of Lemma 7.1 causes the probability in the exponent to deteriorate by a \((\log N)^{-2}\) factor.
Using (7.6), we can apply Corollary 6.10 with
Here the assumption of \(\Lambda (E+10\mathrm{i})\) is guaranteed by (7.5). By definition of \(\Psi _1\) (7.6) and \(|m_c| \sim |w|^{-1/2}\) (4.9), for \(w \in \underline{\mathrm{S}}(D_\zeta )\), we have
Furthermore, it is easy to prove that \(\delta \) is decreasing in \(\eta \) when \(\kappa +\eta \) is small. We have thus verified the assumptions on \(\delta \) in Corollary 6.10 with the choice \(\delta = \delta _1\) given in (7.7). From (6.45), we obtain for \(w \in \underline{\mathrm{S}}(D_\zeta )\), with \(C_0\) being the \(C\) in (6.45),
holds with the probability larger than \(1-\exp (-\varphi ^{\zeta +5}(\log N)^{- 2})\). We have thus proved (3.5) provided that \(\kappa +\eta \ge (\log N)^{-1}\).
We now prove (3.5) when \(\kappa +\eta \le (\log N)^{-1} \). We have in this case \( |w|\sim 1\). We apply Lemma 7.1 with \(\widetilde{\Lambda }= \Lambda _1= |m_c| \sim 1 \) given by (7.5). Thus (7.6) holds and we apply Corollary 6.10 with \(\delta = \delta _1\) (7.7). Since \(\Lambda _1\ge (N\eta )^{-1}\) and \({{\mathrm{Im}}}\, m_c\sim \sqrt{\kappa +\eta }\) (4.4), the conclusion of Corollary 6.10 implies that for \(w \in \underline{\mathrm{S}}(D_\zeta )\),
holds with probability larger than \(1-\exp (-\varphi ^{\zeta +5}(\log N)^{- 2})\). Here \(C_1\) depends only on \(C_0\). From the definition of \(\delta _1\) and \(\Psi _1\), we have
where for the last inequality we used
Since \(\Lambda _1\ge (N\eta )^{-1} \), combining the last two inequalities, for \(w \in \underline{\mathrm{S}}(D_\zeta )\), we have
holds with the probability larger than \(1-\exp (-\varphi ^{\zeta +5}(\log N)^{- 2})\) for some \(C_3\). Notice that we have used \( N \eta \ge \varphi ^{5 R_\zeta }\) in the last step in (7.8).
Repeating this process with the choices
for \(w \in \underline{\mathrm{S}}(D_\zeta )\), we obtain that
holds with the probability larger than \(1- \exp (-\varphi ^{\zeta +5}(\log N)^{-4})\). Notice that the last constant \(C_3\) is the same as the one appearing in (7.8) and it does not change in the iteration procedure. We now iterate this process \(K\) times to have
holds with the probability larger than \(1- \exp (-\varphi ^{\zeta +5}(\log N)^{- 2 K})\). We need \(K\) so large that
i.e.,
On the other hand, we need \(K\) small enough so that
We note that this condition also guarantees (7.1), since \(\varphi ^{\zeta +5}\ge p_1\ge p_2\ge \cdots \ge p_K\ge \varphi \). We choose \(K = \log \log N/\log 2 \), and we have thus proved that
with the probability larger than \(1- \exp (-\varphi ^{\zeta })\) which implies (3.5) when \(\kappa +\eta \le (\log N)^{-1}\). This completes the proof of Theorem 3.4.\(\square \)
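The choice \(K = \log \log N/\log 2\) reflects the standard bootstrap mechanics: each application of the quadratic estimate halves the logarithmic distance of \(\Lambda _k\) from its fixed point. The following toy model is a heuristic illustration only; the constant \(a\) stands in for \(1/(N\eta )\), and the recursion \(\Lambda _{k+1} = \sqrt{a \Lambda _k}\) caricatures the quadratic improvement of Lemma 7.1.

```python
import math

def bootstrap(lam0, a, K):
    # Toy recursion Lambda_{k+1} = sqrt(a * Lambda_k): its fixed point is
    # Lambda* = a, and log(Lambda_k / Lambda*) halves at every step, since
    # Lambda_k = a**(1 - 2**(-k)) * lam0**(2**(-k)).
    lam = lam0
    for _ in range(K):
        lam = math.sqrt(a * lam)
    return lam

a = 1e-6                                         # stand-in for 1/(N*eta)
K = math.ceil(math.log2(math.log(1.0 / a))) + 2  # ~ log log N iterations
final = bootstrap(1.0, a, K)                     # within a constant of a
```

Starting from \(\Lambda _1 = O(1)\), after \(K \sim \log _2 \log N\) steps the toy iterate is within a constant factor of the fixed point, which is why iterating the lemma only \(O(\log \log N)\) times suffices.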
7.1 Proof of Lemma 7.1
The first step in proving Lemma 7.1 is to derive a second order self-consistent equation which identifies the first order dependence of the correction in the self-consistent equation derived in Lemma 6.7. The second order error terms will be bounded by \(\Psi ^2\); the first order terms take the form of averages of \(Z^{(i)}_i\) and \(\mathcal{Z }_i\). In Lemma 7.3, the averages of \(Z^{(i)}_i\) and \(\mathcal{Z }_i\) will be estimated by \(\Psi ^2\). This improvement from the naive order \(\Psi \) to \(\Psi ^2\) is the key ingredient in obtaining the strong local law. We remark that \({{\mathrm{Im}}}\, m_\mathrm{c} \ll |m_\mathrm{c}|\) when \(\eta + \kappa \ll 1\). Hence the dependence on \({{\mathrm{Im}}}\, m_\mathrm{c}\) versus \(m_c\) has to be tracked carefully. We now state the second order self-consistent equation as the following lemma.
Lemma 7.2
(Second order self-consistent equation) For any constant \(\zeta >0\), there exists \(C_\zeta >0\) such that for \( w\in \underline{\mathrm{S}}(b), b\ge 5C_\zeta \) with \( \zeta \)-high probability
where
Proof
We have proved the weak local Green function estimate, i.e., Theorem 6.1, in Sect. 6. This in particular implies that (6.20) holds with \( \zeta \)-high probability in \(\underline{\mathrm{S}}(b)\) for large enough \(b\). With this remark in mind, we now prove Lemma 7.2.
We first take the inverse of both sides of (6.33) and sum over \(i\) to get, with \( \zeta \)-high probability,
where we have used (6.30) and the bound (6.22). Recall the estimates of \(\mathcal Z _i\) and \(Z^{(i)}_i\) by \(\Psi \) in (6.27) and (6.32). Hence we have
where \(b\ge 5 Q_\zeta \) and \(Q_\zeta \) is defined in Lemma 10.1. We now perform the expansion \(G_{ii} ^{-1} = [(G_{ii}-m) + m] ^{-1}\) to have
Using this approximation in (7.13), we have
Using (6.2), we have
Furthermore, with (6.4) we have
The diagonal element \(G_{ii}\) can be estimated by (7.14) so that
Therefore, we have
Notice that only the imaginary part of \(m_\mathrm{c}\) appears through \(\Psi \) instead of \(m_\mathrm{c}\) which can be much bigger near the spectral edge.
We now estimate the last term in (7.16). Notice that \(\mathcal{G }^{(i, \emptyset )}\) is the Green function of the matrix \(A^+ A\) where \(A = (Y^{(i, \emptyset )})^*\). Then \(m^{(i,i)}\) is the Green function of \(A^{(i, ), +} A^{(i, )}\) where we have used \(A^{(i, )} = Y^{(i, i)}\). Thus we can apply (7.17) (which holds for matrices of the form \(A^+ A\) with A not necessarily a square matrix) to get
By (6.31), we have
These estimates imply that
Inserting (7.18) and (7.19) into (7.15), we obtain
To conclude Lemma 7.2, we choose \(C_\zeta =2Q_\zeta \) and it remains to prove \( |\frac{1}{m_\mathrm{c}^3}\Psi ^2|\ge {{\mathrm{O}}}(N^{-1})\). By definition of \(\Psi \) and the fact that \( |m_\mathrm{c}| \sim |w|^{-1/2}\) (4.9), this inequality follows from the following property of \({{\mathrm{Im}}}\, m_c\):
This estimate on \({{\mathrm{Im}}}\, m_c\) is a direct consequence of (4.2), (4.4), (4.6) and (4.7). This completes the proof of Lemma 7.2 (with \(C_\zeta \) increased by 1).\(\square \)
We now estimate the averages \([\mathcal Z ]\) and \( [ Z_*^*] \). Our goal is to capture the cancellation effects due to the average over the indices \(i\). This is the content of the next lemma, to be proved in the next subsection. Clearly, this lemma completes the proof of Lemma 7.1.
Lemma 7.3
For any \(\zeta >1\), there exists \(R_\zeta > 0 \) such that the following statement holds. Suppose for some deterministic number \(\widetilde{\Lambda }(w, z)\) (which can depend on \(\zeta \)) we have
for \( w \in \underline{\mathrm{S}} ( b), b > 5 R_\zeta \), in a set \(\Xi \) with \(\mathbb P (\Xi ^c) \le e^{-p_{N}(\log N)^2 }\) and \(p_N\) satisfies that
Then there exists a set \(\Xi ^{\prime }\) such that \( \mathbb P (\Xi ^{\prime c}) \le e^{-p_{N} } \) and
where \(\widetilde{\Psi }\) is defined in (7.2).
7.2 Strong bounds on \([Z]\)
In this subsection, we prove Lemma 7.3. The main tool is the abstract cancellation Lemma 11.1.
We first perform a cutoff of all random variables \(X_{ij}\) in \(X\) so that \( |X_{ij}| \le N^{10}\). Due to the subexponential decay assumption, the probability of the complement of this event is \(e^{-N^c}\), which is negligible.
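A sketch of this standard estimate (our gloss; we assume the subexponential decay in the form \(\mathbb{P }(|\sqrt{N} X_{ij}| > t) \le C e^{-t^{\vartheta }}\) for some \(\vartheta > 0\), which is a reading of the assumption rather than a quoted normalization): by the union bound,
$$\begin{aligned} \mathbb{P }\Big ( \max _{i,j} |X_{ij}| > N^{10} \Big ) \le N^2 \max _{i,j} \mathbb{P }\big ( |\sqrt{N} X_{ij}| > N^{10.5} \big ) \le C N^{2} e^{- N^{10.5\, \vartheta }} \le e^{-N^{c}} \end{aligned}$$
for \(N\) large and any \(c < 10.5\, \vartheta \).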
Define \(P_i\) and \(\mathcal P _i\) as the expectation operators with respect to the \(i\)th row and the \(i\)th column, respectively. Let
With this convention and Lemma 6.5, we can rewrite \( \mathcal{Z }_i\) and \(Z_i^{(i)}\), from Definition 6.4, as
By definition, for any \(i,j, \mathbb{U }, \mathbb T \), we know \(|G^{(\mathbb{U }, \mathbb T )}_{ij}|\le \eta ^{-1}\). From the identities of \(G_{ii}\) and \(\mathcal{G }^{(i, \emptyset )}_{ii}\) in Lemma 6.5 and \(|X_{ij}|\le N^C\), we have, for any \(1\le i\le N\),
Let \(D_\zeta =\max \{C_{6\zeta + 10}, Q_{6\zeta + 10}+1\}\) with \(C_{\zeta }\) defined in Lemma 6.1 and \(Q_\zeta \) in Lemma 10.1. Then for any fixed \(\mathbb{T }, \mathbb{U }\): \(|\mathbb{T }|, |\mathbb{U }|\le p\) there exists a set \(\Xi _{ \mathbb{T }, \mathbb{U }}\) with
such that for any \({w}\in \underline{\mathrm{S}}(b), b>5D_\zeta \), the following properties hold:

(i) for \(w \in \underline{\mathrm{S}}(b )\),
$$\begin{aligned} \Lambda \le \varphi ^{-D_\zeta /4}|w^{-1/2}|, \quad \Psi \le \varphi ^{-2D_\zeta }|w^{-1/2}| \end{aligned}$$(7.23)

(ii) for \(w \in \underline{\mathrm{S}} (b )\),
$$\begin{aligned} \max _{ij}|G_{ij}(z)-m_\mathrm{c}(z)\delta _{ij}| \le \varphi ^{D_\zeta }\frac{1}{| w^{1/2}|} \left( \frac{| w^{1/2}|}{N\eta } \right) ^{1/4} , \quad b > 5 D_\zeta . \end{aligned}$$(7.24)

(iii) for any \(i\ne j\),
$$\begin{aligned} |(1-\mathbb E _{\mathbf{y}_i})\mathbf{y}_i^* \mathcal G ^{(i \mathbb{T }, \emptyset )} \mathbf{y}_i| +|\mathbf{y}_i^* \mathcal G ^{(ij \mathbb{T }, \emptyset )} \mathbf{y}_j|\le \varphi ^{D_\zeta }\Psi \end{aligned}$$(7.25)
$$\begin{aligned} |(1-\mathbb E _{ \mathrm{y}_i})\mathrm{y}_i^{(i)} G^{(i , i \mathbb{U })} ( \mathrm{y}_i^{(i)}) ^* | +| \mathrm{y}_i^{(i)} G^{(i , ij \mathbb{U })} ( \mathrm{y}_j^{(i)}) ^* | \le \varphi ^{D_\zeta }\Psi \end{aligned}$$(7.26)

(iv) for any \(i \) and \(\mathbb{T }, \mathbb{U }\) with \(|\mathbb{T }|+|\mathbb{U }|\le p\),
$$\begin{aligned} \left| \mathcal{G }^{(i\mathbb{T },\emptyset )}_{ii}- \frac{-1}{w(1+m^{(i\mathbb{T },\emptyset )} )}\right| \le \varphi ^{D_\zeta }\Psi \end{aligned}$$(7.27)
Here (i) and (ii) follow from Lemma 6.1; (iv) follows from (6.39) and the case (iii) with \(\mathbb{T } = \emptyset = \mathbb{U }\) follows from Lemma 10.1 and (6.62). The general case, i.e., \(\mathbb{T }, \mathbb{U }\ne \emptyset \) can be proved similarly using (6.6). Furthermore, since \(|\mathbb{T }|, |\mathbb{U }|\le p\) and \(p\le \varphi ^{2\zeta }\), there exists a set \(\Xi _0\) with
such that for any \({w}\in \underline{\mathrm{S}}(b), b>5D_\zeta \), the above properties (7.23)–(7.27) hold for all \(|\mathbb{T }|, |\mathbb{U }|\le p\). The reason is that the number of pairs \(\mathbb{T }, \mathbb{U }\) satisfying \(|\mathbb{T }|, |\mathbb{U }|\le p\) is bounded by \(N^{2p}\le \varphi ^{4\zeta +1}\), where we have used (7.20).
Since \(\Psi \) is monotone in \(\Lambda \), we can replace \(\Psi \) in (7.25)–(7.27) by \(\widetilde{\Psi }\) in the set \(\Xi \cap \Xi _0\). By (7.20), we have \(\mathbb P [\Xi _0^c] \ll e^{-p_{N}(\log N)^2 }\). For notational simplicity we will write \(\Xi \) for the set \(\Xi \cap \Xi _0\) from now on. We claim that, for any \(i\in A\subset [\![ 1, N]\!]\) with \(|A|\le p\), there exist decompositions
so that (11.2) holds with \(\mathcal Y =|w|^{-1/2}\) and \(\mathcal X =\varphi ^{D_\zeta +2 \zeta }|w^{ 1/2}|\widetilde{\Psi }\). Notice that the condition \(\mathcal X <1\) follows from \(\widetilde{\Lambda }\ll |m_c|\) and \(N\eta \ge \varphi ^{5D_\zeta } |m_c|\) for \( {w}\in \underline{\mathrm{S}}(b)\), \(b>5D_\zeta \), provided \(N\) is large enough. Thus we obtain that
Choosing \(C_\zeta =2D_\zeta +20\zeta \), one can see that (7.21) follows from (7.20), (7.30) and the Markov inequality.
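The last step is the standard high-moment Markov argument; schematically (with \(K\) standing for the threshold implicit in (7.21), whose exact form we do not reproduce here):

```latex
% For an even integer p and any threshold K > 0,
\[
  \mathbb{P}\bigl[\,|\mathcal{Z}_{i}| \ge K\,\bigr]
  \;\le\; K^{-p}\;\mathbb{E}\,|\mathcal{Z}_{i}|^{p},
\]
% and with p as in (7.20) the moment bound (7.30) makes the right-hand
% side super-polynomially small once K carries the extra factor
% \varphi^{C_\zeta}, C_\zeta = 2D_\zeta + 20\zeta.
```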
It remains to prove (7.28) and (7.29); we start with (7.28). For simplicity, we assume that \({A = \{ 1, \ldots , |A |\}}\). Denote the first \(|A|\) columns of \(Y_z\) by \(\mathbf{a}\), so that \(\mathbf a\) is an \(N \times |A|\) matrix. Similarly, denote by \(B \) the matrix obtained after removing the first \(|A|\) columns of \(Y\). Then we have the identity
Recall the identity (6.16): for any matrix \(M\),
Then we have for \(i,j\in A\)
Rewrite
where
We will prove that \(\Vert R\Vert \ll 1\) with high probability. Using (3.1), \(\Lambda \ll m_\mathrm{c}\) (7.24) and (6.6), we have
By (7.25), (7.27) and (6.6), we have
Therefore, we have the bound
With (7.31) and the definition of \(R\), we have \(-w \alpha G_{ij} = [(I+R)^{-1} ]_{ij}\) for \(i,j\in A\). Therefore,
Together with (7.32), (7.24) and \(m_c\sim |w^{-1/2}|\sim \alpha \), we have thus proved that, in \(\Xi \),
Thus
where we used \(|A|\le p\le \varphi ^{2\zeta }\) and \(U_{A} \) is a linear combination of the following products of \((R^j)_{ii}\)’s
Notice we have
provided that \(0\le \sum _k j_k\le {|A|} -1\). This is because \(\alpha \) is independent of \(\{\mathbf{y}_k: k\in A\} \) and \(R_{ab}\) is independent of \(\{\mathbf{y}_k: k\in A, k\ne a,b\}\). Hence there exists \( \ell \in A\) such that \(\mathbf{y}_\ell \) does not appear in \(\prod _k \alpha (R^{j_k})_{ii}\), which proves (7.34). Therefore, we have proved that
Define \(\Omega _A\) as the probability space of the columns \(\{\mathbf{y}_k: k\in A\} \) and \(\Omega _{A^c}\) as that of the columns \(\{\mathbf{y}_k: k\in A^c\} \), so that the full probability space factorizes as \( \Omega = \Omega _A\times \Omega _{A^c}\). Define \(\pi _{A^c}\) to be the projection onto \(\Omega _{A^c}\) and set \(\Xi ^*= \pi ^{-1}_{A^c}\left( \pi _{A^c} (\Xi )\right) \). Then \(\mathbf{1}( \Xi ^*)\) is independent of \(\{\mathbf{y}_k: k\in A\} \). Hence we can extend (7.35) to
Let
so that (11.1) is satisfied, i.e.,
By (7.33), \( |\mathcal{Z }_{i, A}| \le {{\mathrm{O}}}(|w|^{-1/2} ( |A| \varphi ^{D_\zeta +2\zeta } |w|^{1/2}\widetilde{\Psi })^{{|A|} })\) in \(\Xi \). We now prove that
By (7.22), we have \(\left( wG_{ii}\right) ^{-1}={{\mathrm{O}}}(N^C)\). Notice that \(\alpha \) is independent of \(\{\mathbf{y}_k: k\in A\} \). Since \(\alpha \sim |w^{-1/2}|\) in \(\Xi \), the same asymptotics hold in \(\Xi ^*{\setminus } \Xi \). By the definitions of \(U_{A}\) (7.33) and \(R\), and the assumption \(X_{ij}=O(N^{C})\), we obtain (7.36), which completes the proof of (7.28). The bound (7.29) can be proved similarly, and this completes the proof of Lemma 7.3.
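The cylinder-set construction \(\Xi ^* = \pi ^{-1}_{A^c}(\pi _{A^c}(\Xi ))\) used in this proof can be checked on a toy example. The minimal sketch below (two \(\pm 1\) coordinates playing the roles of \(\Omega _A\) and \(\Omega _{A^c}\), and an arbitrary event standing in for \(\Xi \)) is purely illustrative and not part of the argument.

```python
import itertools

# Toy version of the cylinder construction: Omega = Omega_A x Omega_{A^c},
# here two +/-1 coordinates; xi is an arbitrary event in the product space.
omega_A = [-1, 1]
omega_Ac = [-1, 1]
xi = {(-1, -1), (1, 1)}

proj = {y_c for (_, y_c) in xi}                            # pi_{A^c}(Xi)
xi_star = {(y_a, y_c) for y_a in omega_A for y_c in proj}  # the cylinder Xi^*

# 1(Xi^*) depends on the Omega_{A^c} coordinate only, hence is independent
# of the Omega_A coordinate, and Xi is contained in Xi^*.
indicator = {pt: pt in xi_star for pt in itertools.product(omega_A, omega_Ac)}
```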
Notes
Strictly speaking, this bound was proved for identically distributed entries, but the proof extends to the case of distinct distributions, provided that, for example, a uniform subexponential decay holds.
References
Bai, Z.D.: Circular law. Ann. Probab. 25(1), 494–529 (1997)
Bai, Z.D., Silverstein, J.: Spectral Analysis of Large Dimensional Random Matrices. In: Mathematics Monograph Series, vol. 2. Science Press, Beijing (2006)
Benaych-Georges, F., Chapon, F.: Random right eigenvalues of Gaussian quaternionic matrices. In: Random Matrices: Theory and Applications, vol. 2 (2012)
Borodin, A., Sinclair, C.D.: The Ginibre ensemble of real random matrices and its scaling limits. Commun. Math. Phys. 291(1), 177–224 (2009)
Cacciapuoti, C., Maltsev, A., Schlein, B.: Local Marchenko–Pastur law at the hard edge of sample covariance matrices. J. Math. Phys. (2012, to appear)
Davies, E.B.: The functional calculus. J. Lond. Math. Soc. (2) 52(1), 166–176 (1997)
Edelman, A.: The probability that a random real Gaussian matrix has \(k\) real eigenvalues, related distributions, and the circular law. J. Multivar. Anal. 60(2), 203–232 (1997)
Erdős, L., Yau, H.-T., Yin, J.: Bulk universality for generalized Wigner matrices. Probab. Theory Related Fields 154(1–2), 341–407 (2012)
Erdős, L., Yau, H.-T., Yin, J.: Rigidity of Eigenvalues of Generalized Wigner Matrices. Adv. Math. 229(3), 1435–1515 (2012)
Forrester, P.J.: Log-gases and random matrices. London Mathematical Society Monographs Series 34. Princeton University Press, Princeton (2010)
Forrester, P.J., Nagao, T.: Eigenvalue statistics of the real Ginibre ensemble. Phys. Rev. Lett. 99 (2007)
Ginibre, J.: Statistical ensembles of complex, quaternion, and real matrices. J. Math. Phys. 6, 440–449 (1965)
Girko, V.L.: The circular law. Teor. Veroyatnost. i Primenen. 29(4), 669–679 (1984)
Götze, F., Tikhomirov, A.: The circular law for random matrices. Ann. Probab. 38(4), 1444–1491 (2010)
Guionnet, A., Krishnapur, M., Zeitouni, O.: The single ring theorem. Ann. Math. 174(2), 1189–1217 (2011)
Mehta, M.: Random Matrices, 3rd edn. Pure and Applied Mathematics (Amsterdam), vol. 142. Elsevier, Amsterdam (2004)
Pan, G., Zhou, W.: Circular law, extreme singular values and potential theory. J. Multivar. Anal. 101(3), 645–656 (2010)
Pillai, N., Yin, J.: Universality of Covariance matrices. preprint arXiv:1110.2501 (2011)
Rudelson, M.: Invertibility of random matrices: Norm of the inverse. Ann. Math. 168(2), 575–600 (2008)
Rudelson, M., Vershynin, R.: The Littlewood-Offord problem and invertibility of random matrices. Adv. Math. 218(2), 600–633 (2008)
Sinclair, C.D.: Averages over Ginibre’s ensemble of random real matrices. Int. Math. Res. Not. IMRN (2007), no. 5
Tao, T., Vu, V.: Random matrices: the circular law. Commun. Contemp. Math. 10(2), 261–307 (2008)
Tao, T., Vu, V.: Random matrices: universality of ESDs and the circular law. With an appendix by Manjunath Krishnapur. Ann. Probab. 38(5), 2023–2065 (2010)
P. Bourgade was partially supported by NSF grant DMS-1208859. H.-T. Yau was partially supported by NSF grants DMS-0757425, 0804279. J. Yin was partially supported by NSF grants DMS-1001655, 1207961.
Appendices
Appendix A: Proof of the properties of \(m_\mathrm{c}\) and \(\rho _\mathrm{c}\)
In this appendix we prove Lemmas 4.1, 4.2 and 4.3. We can solve for \(m_\mathrm{c}\) explicitly via the following formula.
Lemma 8.1
(Explicit expression of \(m_\mathrm{c}\)) For any \(E\in \mathbb R \), let
Then we have
where we use the convention \(x^{1/3}={{\mathrm{sgn}}}(x)|x|^{1/3}\). Moreover, for general \(w\in \mathbb C \), \(m_\mathrm{c}(w, z)\) is the analytic extension of \( \lim _{\eta \rightarrow 0^+}m_\mathrm{c}(E+\mathrm{i}\eta , z) \).
Proof of Lemma 8.1
By definition, \(m_\mathrm{c}\) is an analytic function, so we only need to prove (8.1). By definition, \(m_\mathrm{c}\) is one of the three solutions of (3.1), and is the one with positive imaginary part. Solving this degree-three polynomial equation explicitly shows that there is exactly one such solution, with the limit (8.1) close to the real axis.\(\square \)
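The branch-selection recipe of Lemma 8.1 (solve the polynomial self-consistent equation, then pick the unique root in the upper half plane) can be illustrated numerically. We do not reproduce the cubic (3.1) here; the sketch below instead uses the quadratic self-consistent equation \(m^2+wm+1=0\) of the semicircle law as a stand-in, which follows the same recipe.

```python
import numpy as np

# Branch selection for a polynomial self-consistent equation, illustrated on
# the semicircle law (a stand-in for the cubic (3.1), not the actual
# equation): the Stieltjes transform m(w) solves m^2 + w*m + 1 = 0, and for
# Im w > 0 exactly one root has positive imaginary part.

def stieltjes_semicircle(w):
    roots = np.roots([1.0, w, 1.0])      # the two solutions of m^2 + w m + 1 = 0
    return roots[np.argmax(roots.imag)]  # unique root in the upper half plane

E, eta = 0.5, 1e-6
w = E + 1j * eta
m = stieltjes_semicircle(w)
rho = m.imag / np.pi                     # density recovered as eta -> 0+,
                                         # here close to sqrt(4 - E^2)/(2*pi)
```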
Since \(\rho _\mathrm{c} (E)= \frac{1}{\pi }{{\mathrm{Im}}}\, m_\mathrm{c}(E+\mathrm{i}0^+)\), by (8.1) and \(A_+\ge A_-\), we have: for \(0\le E\le \uplambda _+\),
With Lemma 8.1 and (8.2), one can easily prove Proposition 3.1.
Proof of Lemma 4.1
By definition,
so for the first case this implies
Moreover, recall that \(\alpha =\sqrt{1+8|z|^2}\), so (still in the first case)
From (8.3) we also easily obtain \(|m_\mathrm{c}|\sim 1\); we have therefore obtained the l.h.s. of (4.2). Similarly, one can prove \({{\mathrm{Im}}}\, m_\mathrm{c}\sim \eta \) thanks to
and complete the proof for the first case.
For the second case, it is easy to prove (4.3) when \(w=\uplambda _+\) by an explicit calculation. One then obtains (4.3) by expanding \(m_\mathrm{c}\) around \(m_\mathrm{c}(\uplambda _+, z)\), using (3.1). The estimate (4.4) follows directly from (4.3).
Similarly, for the third case, first note that \(m_\mathrm{c}=\infty \), i.e., \(m_\mathrm{c}^{-1}=0\), when \(w=0\); one can then easily obtain (4.5) in case 3 by solving (3.1) and expanding \(m_\mathrm{c}^{-1}\) around \((m_\mathrm{c}(0, z))^{-1}\). The estimate (4.6) follows directly from (4.5). The fourth case follows from
and the properties of \(\rho \) stated in Proposition 3.1.\(\square \)
Proof of Lemma 4.2
This is similar to the proof of Lemma 4.1.\(\square \)
Proof of Lemma 4.3
We prove this lemma in the case \(|z|\le 1-\tau \); the other cases can be proved similarly. Note first that (4.9) is a consequence of all possible cases in Lemma 4.1.
We now prove (4.10) in the four cases classified in Lemma 4.1. In the first case, if additionally \(\eta \sim 1 \), then since \(0>{{\mathrm{Re}}}(m_\mathrm{c})>-1/2\), the l.h.s. of (4.10) is bounded by \({{\mathrm{O}}}(1)\), which implies (4.10). Still in the first case, if \(\eta \) is small enough, then since \(|{{\mathrm{Re}}}\, w|\sim (1+m_\mathrm{c})\sim 1\) and \(|{{\mathrm{Im}}}\, (m_\mathrm{c})| \sim \eta \), we have
which gives (4.10) in the first case. In the same way we get (4.10) in the second case, where \({{\mathrm{Im}}}\, m_\mathrm{c}\ge c\eta \). For the third case, using (4.5), one can easily prove (4.10). Finally, the fourth case is simple since the l.h.s. in (4.10) is clearly \({{\mathrm{O}}}(1)\).
We now prove (4.11). Using (4.6) and (4.7) (note that \(\alpha =\sqrt{1+8|z|^2} \) is a real number), we have that, in cases three and four,
For case two, using (4.3),
Note that \(m_\mathrm{c}(\uplambda _+)= - 2/(3+\alpha ) \). For case one, with (8.4), it is easy to prove that either \({{\mathrm{Im}}}\, m_\mathrm{c}\sim 1\) or \({{\mathrm{Re}}}\,m_\mathrm{c}-m_\mathrm{c}(\uplambda _+)={{\mathrm{Re}}}\, m_\mathrm{c}+ 2/(3+\alpha ) \sim 1\). This implies \(\left| m_\mathrm{c}-\frac{-2}{3+\alpha }\right| \sim 1\), which completes the proof.\(\square \)
Appendix B: Perturbation theorem
In this section, we state a theorem relating the Green function \(G\) of the matrix \(H\) to the Green functions of its minors. This theorem was proved in [8]. We first introduce some notation (here we use the upper index \([\,\cdot \,]\) instead of the \((\,\cdot \,)\) of [8], since the upper index \((\,\cdot \,)\) has already been used in the main part of the paper).
Definition 9.1
Let \(H\) be an \(N\times N\) matrix, let \(\mathbb{T } \subset [\![ 1, N]\!]\), and let \(H^{[\mathbb T ]}\) be the \((N-|\mathbb T |)\times (N-|\mathbb T |)\) minor of \(H\) obtained by removing the rows and columns indexed by \(i\in \mathbb T \). For \(\mathbb T =\emptyset \), we define \(H^{[\emptyset ]}=H\). For any \(\mathbb{T }\subset [\![ 1, N]\!]\) we introduce the following notations:
The following formulas were proved in Lemma 4.2 from [8].
Lemma 9.2
(Self-consistent perturbation formulas) Let \(\mathbb T \subset [\![ 1, N]\!]\). For simplicity, we use the notation \([i \,\mathbb{T }]\) for \([\{i\}\cup \mathbb{T }]\) and \([i j \,\mathbb{T }]\) for \([\{i,j\}\cup \mathbb{T }]\). Then we have the following identities:
-
(i)
For any \(i\notin \mathbb{T }\)
$$\begin{aligned} G^{[\mathbb{T }]}_{ii}=\left( K^{[i\,\mathbb{T }]}_{ii}\right) ^{-1}. \end{aligned}$$(9.2) -
(ii)
For \(i\ne j\) and \(i,j\notin \mathbb{T }\)
$$\begin{aligned} G^{[\mathbb{T }]}_{ij}=-G^{[\mathbb{T }]}_{jj}G_{ii}^{[j\,\mathbb{T }]} K^{[ij\,\,\mathbb{T }]}_{ij}= -G^{[\mathbb{T }]}_{ii}G_{jj}^{[i\,\mathbb{T }]}K^{[ij\,\,\mathbb{T }]}_{ij}. \end{aligned}$$(9.3) -
(iii)
For any indices \(i,j,k \notin \mathbb{T }\) with \(k \not \in \{i , j\}\) (but \(i = j\) is allowed)
$$\begin{aligned} G^{[\mathbb{T }]}_{ij}-G^{[k\,\,\mathbb{T }]}_{ij}=G^{[\mathbb{T }]}_{ik}G^{[\mathbb{T }]}_{kj} (G^{[\mathbb{T }]}_{kk})^{-1} . \end{aligned}$$(9.4)
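Identity (9.4) is the standard Schur-complement resolvent identity and can be verified numerically. The sketch below (random Hermitian test matrix; the sizes, spectral parameter and indices are arbitrary choices, not from the paper) is only a sanity check, stated for \(\mathbb T =\emptyset \).

```python
import numpy as np

# Numerical sanity check of (9.4) for T = empty:
#   G_ij - G^{[k]}_ij = G_ik G_kj / G_kk   for i, j != k.
rng = np.random.default_rng(0)
N, w = 6, 1.0 + 0.5j
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / 2                        # Hermitian test matrix

G = np.linalg.inv(H - w * np.eye(N))            # full Green function

k = 3
keep = [a for a in range(N) if a != k]
Hk = H[np.ix_(keep, keep)]                      # minor: row and column k removed
Gk = np.linalg.inv(Hk - w * np.eye(N - 1))      # Green function of the minor

i, j = 0, 4                                     # original indices, both != k
ii, jj = keep.index(i), keep.index(j)           # their positions in the minor
lhs = G[i, j] - Gk[ii, jj]
rhs = G[i, k] * G[k, j] / G[k, k]
```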
Appendix C: Large deviation estimates
In order to obtain the self-consistent equations for the Green functions, we needed the following large deviation estimate.
Lemma 10.1
(Large deviation estimate) For any \(\zeta >0\), there exists \(Q_\zeta >0\) such that for \(\mathbb{T }\subset [\![ 1, N]\!], |\mathbb{T }| \le N/2\) the following estimates hold with \( \zeta \)-high probability:
Furthermore, for \(i\ne j\), we have
where
We first recall the following large deviation estimates concerning independent random variables, which were proved in Appendix B of [8].
Lemma 10.2
Let \(a_i\) (\(1\le i\le N\)) be independent complex random variables with mean zero, variance \(\sigma ^2\) and having a uniform subexponential decay
with some \(\vartheta >0\). Let \(A_i, B_{ij}\in \mathbb C \) (\(1\le i,j\le N\)). Then there exists a constant \(0< \phi <1\), depending on \(\vartheta \), such that for any \(\xi > 1\) we have
for any sufficiently large \(N\ge N_0\), where \(N_0=N_0(\vartheta )\) depends on \(\vartheta \).
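As an illustration of the scale in Lemma 10.2, the Monte Carlo sketch below checks that the linear form \(\sum _i A_i a_i\) fluctuates on the scale \(\sigma (\sum _i |A_i|^2)^{1/2}\). The choices (Gaussian \(a_i\), which in particular satisfy the subexponential decay condition, and random fixed coefficients \(A_i\)) are illustrative assumptions, not from the paper.

```python
import numpy as np

# Monte Carlo illustration of the fluctuation scale in Lemma 10.2: for
# independent centered a_i with variance sigma^2, the linear form
# sum_i A_i a_i fluctuates on the scale sigma * (sum_i |A_i|^2)^{1/2};
# the high-probability bounds carry additional logarithmic factors.
rng = np.random.default_rng(42)
N, trials = 2000, 500
A = rng.standard_normal(N)                      # fixed deterministic coefficients
scale = np.sqrt(np.sum(np.abs(A) ** 2))         # predicted fluctuation scale

samples = rng.standard_normal((trials, N)) @ A  # realizations of sum_i A_i a_i
emp_std = samples.std()                         # close to scale (sigma = 1 here)
```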
Proof of Lemma 10.1
We will only prove the assertions of this lemma concerning the Green function \(G\). The analogous statements for \(\mathcal{G }\) can be proved using the row–column symmetry. From now on, we will only prove statements concerning \(G\) whenever identical proofs are valid for \(\mathcal{G }\), and we will not repeat this comment.
We first prove (10.1) by writing
with \(Y=X-zI\). Since \(G^{(\mathbb{T }, i)}_{ii}\) is independent of \(\mathrm{y}_i\), the first term on the right-hand side vanishes. For any \(\zeta >0\), we apply (10.6) and (10.7) in Lemma 10.2 with \(\phi \xi = \zeta \log \log N\). Setting \(\xi = Q_\zeta /2\), the last term in (10.8) is bounded by
with \( \zeta \)-high probability. Similarly, with (10.5), the second term on the right hand side is bounded by
The proofs for the other bounds follow from similar arguments.\(\square \)
Appendix D: Abstract decoupling lemma
We recall an abstract cancellation Lemma proved in [18].
Lemma 11.1
Let \(\mathcal I \) be a finite set which may depend on \(N\) and
Let \({S}_1, \dots , {S}_N\) be random variables which depend on the independent random variables \(\{x_\alpha , \alpha \in \mathcal I \}\). In applications, we often take \(\mathcal I = [\![ 1, N]\!]\) and \(\mathcal I _i = \{ i \}\).
Recall that \(\mathbb E _i\) denotes the conditional expectation with respect to the complement of \(\{x_\alpha , \alpha \in \mathcal I _i \}\), i.e., we integrate out the variables \(\{x_\alpha , \alpha \in \mathcal I _i \}\). Define the commuting projection operators
For \(A\subset [\![ 1, N]\!]\)
We use the notation
Let \(p \) be an even integer. Suppose that for some constants \(C_0, c_0>0\) there is a set \(\Xi \) (the “good configurations”) such that the following assumptions hold:
-
(i)
(Bound on \(Q_A S_i\) in \(\Xi \)). There exist deterministic positive numbers \(\mathcal X <1\) and \(\mathcal Y \) such that for any set \(A\subset [\![ 1, N]\!]\) with \(i\in A\) and \(|A | \le p\), \(Q_{A}S_i\) in \(\Xi \) can be written as the sum of two random variables
$$\begin{aligned} ( Q_{A} S_i )= \mathbf{Z}_{i, A}+ Q_{A}\mathbf{1}(\Xi ^c) \widetilde{ \mathbf{Z}}_{i, A}, \quad \mathrm{in}\quad \Xi \end{aligned}$$(11.1)and
$$\begin{aligned} \; | \mathbf{Z}_{i, A} |\le \mathcal Y \big (C_0\mathcal X |A| \big )^{ |A|} ,\quad | \widetilde{\mathbf{Z}}_{i, A} |\le \mathcal Y N^{C_0|A|} \end{aligned}$$(11.2) -
(ii)
(Crude bound on \( S_i\)).
$$\begin{aligned} \max _i | S_i | \;\le \; \mathcal Y N^{C_0}\,. \end{aligned}$$ -
(iii)
(\(\Xi \) has high probability).
$$\begin{aligned} \mathbb P [\Xi ^c] \;\le \; \mathrm{e}^{-c_0(\log N)^{3/2} p }\,. \end{aligned}$$
Then, under the assumptions (i)–(iii), we have
for some \(C>0\) and any sufficiently large \(N \).
Roughly speaking, this lemma improves the estimate on \( \mathbf{Z}_i\) from \(\mathcal X \) to \(\mathcal X ^2\) after averaging over \(i\).
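The commuting projections \(Q_i = 1 - \mathbb E _i\) of Lemma 11.1 can be checked on a toy example. In the sketch below (an illustrative assumption, not from [18]) a random variable of two independent \(\pm 1\) coordinates is stored as a \(2\times 2\) array, and \(\mathbb E _i\) is the uniform average over coordinate \(i\).

```python
import numpy as np

# Toy check of the commuting projections Q_i = 1 - E_i in Lemma 11.1:
# a random variable S(x_1, x_2) of two independent coordinates, each uniform
# on {-1, +1}, is stored as a 2 x 2 array S[a, b] = S(x_1 = a, x_2 = b).
rng = np.random.default_rng(1)
S = rng.standard_normal((2, 2))

def E(i, f):
    # integrate out coordinate i (uniform average), keeping the array shape
    return f.mean(axis=i, keepdims=True) * np.ones_like(f)

def Q(i, f):
    return f - E(i, f)

Q1Q2 = Q(0, Q(1, S))
Q2Q1 = Q(1, Q(0, S))
# The Q_i commute and are projections: Q1 Q2 = Q2 Q1 and Q_i^2 = Q_i.
```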
Bourgade, P., Yau, HT. & Yin, J. Local circular law for random matrices. Probab. Theory Relat. Fields 159, 545–595 (2014). https://doi.org/10.1007/s00440-013-0514-z
Keywords
- Local circular law
- Universality
Mathematics Subject Classification (2010)
- 15B52
- 82B44