1 Introduction

1.1 Overview

Let A be the adjacency matrix of a graph with vertex set \([N]\!=\! \{1, \dots , N\}\). We are interested in the geometric structure of the eigenvectors of A, in particular their spatial localization. An \(\ell ^2\)-normalized eigenvector \(\varvec{\mathrm {w}} = (w_x)_{x \in [N]}\) gives rise to a probability measure \(\sum _{x \in [N]} w_x^2 \delta _x\) on the set of vertices. Informally, \(\varvec{\mathrm {w}}\) is delocalized if its mass is approximately uniformly distributed throughout [N], and localized if its mass is essentially concentrated in a small number of vertices.

There are several ways of quantifying spatial localization. One is the notion of concentration of mass, sometimes referred to as scarring [49], stating that there is some set \({{\mathcal {B}}} \subset [N]\) of small cardinality and a small \(\varepsilon > 0\) such that \(\sum _{x \in {{\mathcal {B}}}} w_x^2 = 1 - \varepsilon \). In this case, it is also of interest to characterize the geometric structure of the vertex set \({{\mathcal {B}}}\) and of the eigenvector \(\varvec{\mathrm {w}}\) restricted to \({{\mathcal {B}}}\). Another convenient quantifier of spatial localization is the \(\ell ^p\)-norm \(\Vert \varvec{\mathrm {w}} \Vert _p\) for \(2 \leqslant p \leqslant \infty \). It has the following interpretation: if the mass of \(\varvec{\mathrm {w}}\) is uniformly distributed over some set \({{\mathcal {B}}} \subset [N]\) then \(\Vert \varvec{\mathrm {w}} \Vert _p^2 = |{{\mathcal {B}}} |^{-1 + 2/p}\). Focusing on the \(\ell ^\infty \)-norm for definiteness, we define the localization exponent \(\gamma (\varvec{\mathrm {w}})\) through

$$\begin{aligned} \Vert \varvec{\mathrm {w}} \Vert _\infty ^2 =:N^{-\gamma (\varvec{\mathrm {w}})}\,. \end{aligned}$$
(1.1)

Thus, \(0 \leqslant \gamma (\varvec{\mathrm {w}}) \leqslant 1\), and \(\gamma (\varvec{\mathrm {w}}) = 0\) corresponds to localization at a single vertex while \(\gamma (\varvec{\mathrm {w}}) = 1\) to complete delocalization.
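To fix ideas, the localization exponent can be read off directly from any given normalized vector. The following minimal sketch (in Python; the two vectors are synthetic illustrations rather than eigenvectors of any particular graph) evaluates \(\gamma (\varvec{\mathrm {w}})\) from (1.1) in the two extreme cases.

```python
import numpy as np

def localization_exponent(w):
    """gamma(w) defined through ||w||_inf^2 = N^{-gamma(w)}, i.e.
    gamma(w) = -log(||w||_inf^2) / log(N), for an l^2-normalized w."""
    N = len(w)
    return -np.log(np.max(np.abs(w)) ** 2) / np.log(N)

N = 10_000
w_loc = np.zeros(N); w_loc[0] = 1.0      # all mass on a single vertex
w_flat = np.ones(N) / np.sqrt(N)         # mass uniformly spread over [N]
print(localization_exponent(w_loc))      # 0.0: localization at one vertex
print(localization_exponent(w_flat))     # 1.0: complete delocalization
```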

In this paper we address the question of spatial localization for the random Erdős–Rényi graph \({\mathbb {G}}(N,d/N)\). We consider the limit \(N \rightarrow \infty \) with \(d \equiv d_N\). It is well known that \({\mathbb {G}}(N,d/N)\) undergoes a dramatic change in behaviour at the critical scale \(d \asymp \log N\), which is the scale at and below which the vertex degrees do not concentrate. Thus, for \(d \gg \log N\), with high probability all degrees are approximately equal and the graph is homogeneous. On the other hand, for \(d \lesssim \log N\), the degrees do not concentrate and the graph becomes highly inhomogeneous: it contains for instance hubs of exceptionally large degree, leaves, and isolated vertices. As long as \(d > 1\), the graph has with high probability a unique giant component, and we shall always restrict our attention to it.

Here we propose the Erdős–Rényi graph at criticality as a simple and natural model on which to address the question of spatial localization of eigenvectors. It has the following attributes.

  (i) Its graph structure provides an intrinsic and nontrivial notion of distance.

  (ii) Its spectrum splits into a delocalized phase and a semilocalized phase. The transition between the phases is sharp, in the sense of a discontinuity in the localization exponent.

  (iii) Both phases are amenable to rigorous analysis.

Our results are summarized in the phase diagram of Fig. 1, which is expressed in terms of the parameter b parametrizing \(d = b \log N\) on the critical scale and the eigenvalue \(\lambda \) of \(A / \sqrt{d}\) associated with the eigenvector \(\varvec{\mathrm {w}}\). To the best of our knowledge, the phase coexistence for the critical Erdős–Rényi graph established in this paper had previously not been analysed even in the physics literature.

Fig. 1

The phase diagram of the adjacency matrix \(A / \sqrt{d}\) of the Erdős–Rényi graph \({\mathbb {G}}(N,d/N)\) at criticality, where \(d = b \log N\) with b fixed. The horizontal axis records the location in the spectrum and the vertical axis the sparseness parameter b. The spectrum is confined to the coloured region. In the red region the eigenvectors are delocalized while in the blue region they are semilocalized. The grey regions have width o(1) and are not analysed in this paper. For \(b > b_*\) the spectrum is asymptotically contained in \([-2,2]\) and the semilocalized phase does not exist. For \(b < b_*\) a semilocalized phase emerges in the region \((-\lambda _{\max }(b), -2) \cup (2, \lambda _{\max }(b))\) for some explicit \(\lambda _{\max }(b) > 2\)

Throughout the following, we always exclude the largest eigenvalue of A, its Perron–Frobenius eigenvalue, which is an outlier separated from the rest of the spectrum. The delocalized phase is characterized by a localization exponent asymptotically equal to 1. It exists for all fixed \(b > 0\) and consists asymptotically of energies in \((-2,0) \cup (0,2)\). The semilocalized phase is characterized by a localization exponent asymptotically less than 1. It exists only when \(b < b_*\), where

$$\begin{aligned} b_* :=\frac{1}{2\log 2 - 1} \approx 2.59\,. \end{aligned}$$
(1.2)

It consists asymptotically of energies in \((-\lambda _{\max }(b), -2) \cup (2, \lambda _{\max }(b))\), where \(\lambda _{\max }(b) > 2\) is an explicit function of b (see (1.14) below). The density of states at energy \(\lambda \in {\mathbb {R}}\) is equal to \(N^{\rho _b(\lambda ) + o(1)}\), where \(\rho _b\) is an explicit exponent defined in (1.14) below and illustrated in Fig. 2. It has a discontinuity at 2 (and similarly at \(-2\)), jumping from \(\rho _b(2^-) = 1\) to \(\rho _b(2^+) = 1 - b / b_*\). The localization exponent \(\gamma (\varvec{\mathrm {w}})\) from (1.1) of an eigenvector \(\varvec{\mathrm {w}}\) with associated eigenvalue \(\lambda \) satisfies with high probability

$$\begin{aligned} \gamma (\varvec{\mathrm {w}}) = 1 + o(1) \;\; \text {if} \;\; |\lambda | < 2\,, \qquad \gamma (\varvec{\mathrm {w}}) \leqslant \rho _b(\lambda ) + o(1) \;\; \text {if} \;\; |\lambda | > 2\,. \end{aligned}$$

This establishes a discontinuity, in the limit \(N \rightarrow \infty \), in the localization exponent \(\gamma (\varvec{\mathrm {w}})\) as a function of \(\lambda \) at the energies \(\pm 2\). See Fig. 2 for an illustration; we also refer to Appendix A.1 for a simulation depicting the behaviour of \(\Vert \varvec{\mathrm {w}} \Vert _\infty \) throughout the spectrum. Moreover, in the semilocalized phase scarring occurs in the sense that a fraction \(1 - o(1)\) of the mass of the eigenvectors is supported in a set of at most \(N^{\rho _b(\lambda ) + o(1)}\) vertices.

Fig. 2

The behaviour of the exponents \(\rho _b\) and \(\gamma \) as a function of the energy \(\lambda \). The dark blue curve is the exponent \(\rho _b(\lambda )\) characterizing the density of states \(N^{\rho _b(\lambda ) + o(1)}\) of the matrix \(A / \sqrt{d}\) at energy \(\lambda \). The entire blue region (light and dark blue) is the asymptotically allowed region of the localization exponent \(\gamma (\varvec{\mathrm {w}})\) of an eigenvector of \(A / \sqrt{d}\) as a function of the associated eigenvalue \(\lambda \). Here \(d = b \log N\) with \(b = 1\) and \(\lambda _{\max }(b) \approx 2.0737\). We only plot a neighbourhood of the threshold energy 2. The discontinuity at 2 of \(\rho _b\) is from \(\rho _b(2^-) = 1\) to \(\rho _b(2^+) = 1 - b / b_* = 2 - 2 \log 2\)

The eigenvalues in the semilocalized phase were analysed in [10], where it was proved that they arise precisely from vertices x of abnormally large degree, \(D_x \geqslant 2 d\). More precisely, it was proved in [10] that each vertex x with \(D_x \geqslant 2 d\) gives rise to two eigenvalues of \(A / \sqrt{d}\) near \(\pm \Lambda (D_x / d)\), where \(\Lambda (\alpha ) :=\frac{\alpha }{\sqrt{\alpha -1}}\). The same result for the O(1) largest degree vertices was independently proved in [54] by a different method. We refer also to [14, 15] for an analysis in the supercritical and subcritical phases.

In the current paper, we prove that the eigenvector \(\varvec{\mathrm {w}}\) associated with an eigenvalue \(\lambda \) in the semilocalized phase is highly concentrated around resonant vertices at energy \(\lambda \), which are defined as the vertices x such that \(\Lambda (D_x/d)\) is close to \(\lambda \). For this reason, we also call the resonant vertices localization centres. With high probability, and after a small pruning of the graph, all balls \(B_r(x)\) of a certain radius \(r \gg 1\) around the resonant vertices are disjoint, and within any such ball \(B_r(x)\) the eigenvector \(\varvec{\mathrm {w}}\) is an approximately radial exponentially decaying function. The number of resonant vertices at energy \(\lambda \) is comparable to the density of states, \(N^{\rho _b(\lambda ) + o(1)}\), which is much less than N. See Fig. 3 for a schematic illustration of the mass distribution of \(\varvec{\mathrm {w}}\).

Fig. 3

A schematic representation of the geometric structure of a typical eigenvector in the semilocalized phase. The giant component of the graph is depicted in pale blue. The eigenvector’s mass (depicted in dark blue) is concentrated in a small number of disjoint balls centred around resonant vertices (drawn in white), and within each ball the mass decays exponentially in the radius. The mass outside the balls is an asymptotically vanishing proportion of the total mass

The behaviour of the critical Erdős–Rényi graph described above has some similarities to, but also important differences from, that of the Anderson model [11]. The Anderson model on \({\mathbb {Z}}^n\) with \(n \geqslant 3\) is conjectured to exhibit a metal-insulator, or delocalization-localization, transition: for weak enough disorder, the spectrum splits into a delocalized phase in the middle of the spectrum and a localized phase near the spectral edges. See e.g. [8, Figure 1.2] for a phase diagram of its conjectured behaviour. So far, only the localized phase of the Anderson model has been understood rigorously, in the landmark works [4, 39] and many subsequent developments. The phase diagram for the Anderson model bears some similarity to that of Fig. 1, in which one can interpret 1/b as the disorder strength, since smaller values of b lead to stronger inhomogeneities in the graph.

As is apparent from the proofs in [4, 39], in the localized phase the local structure of an eigenvector of the Anderson model is similar to that of the critical Erdős–Rényi graph described above: exponentially decaying around well-separated localization centres associated with resonances near the energy \(\lambda \) of the eigenvector. The localization centres arise from exceptionally large local averages of the potential. The phenomenon of localization can be heuristically understood using the following well-known rule of thumb: one expects localization around a single localization centre if the level spacing is much larger than the tunnelling amplitude between localization centres. It arises from perturbation theory around the block diagonal model where the complement of balls \(B_r(x)\) around localization centres is set to zero. On a very elementary level, this rule is illustrated by the matrix \(H(t) = \bigl ( {\begin{smallmatrix} 0 & t \\ t & 1 \end{smallmatrix}} \bigr )\), whose eigenvectors are localized for \(t = 0\), remain essentially localized for \(t \ll 1\), where perturbation theory around H(0) is valid, and become delocalized for \(t \gtrsim 1\), where perturbation theory around H(0) fails.
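To make this \(2 \times 2\) illustration concrete, the following sketch (Python; the values of t are arbitrary) computes the lowest eigenvector of H(t) and its overlap with the standard basis vector \((1,0)\), which is the corresponding eigenvector of H(0).

```python
import numpy as np

def lowest_eigvec(t):
    """Lowest eigenvector of H(t) = [[0, t], [t, 1]]."""
    H = np.array([[0.0, t], [t, 1.0]])
    _, v = np.linalg.eigh(H)             # eigenvalues in increasing order
    return v[:, 0]

for t in [0.01, 0.1, 1.0]:
    print(t, abs(lowest_eigvec(t)[0]))   # ~1.000, ~0.995, ~0.85
```

For \(t \ll 1\) the overlap stays close to 1 (perturbation theory around H(0) is valid), while for \(t \gtrsim 1\) the two basis states hybridize and the overlap drops well below 1.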

More precisely, it is a general heuristic that the tunnelling amplitude decays exponentially in the distance between the localization centres [25]. Denoting by \(\beta (\lambda ) > 1\) the rate of exponential decay at energy \(\lambda \), the rule of thumb hence reads

$$\begin{aligned} \beta (\lambda )^{-L}\ll \varepsilon (\lambda )\,, \end{aligned}$$
(1.3)

where L is the distance between the localization centres and \(\varepsilon (\lambda )\) the level spacing at energy \(\lambda \). For the Anderson model restricted to a finite cube of \({\mathbb {Z}}^n\) with side length \(N^{1/n}\), the level spacing \(\varepsilon (\lambda )\) is of order \(N^{-1}\) (see [57] and [8, Chapter 4]) whereas the diameter of the graph is of order \(N^{1/n}\). Hence, the rule of thumb (1.3) becomes

$$\begin{aligned} \beta (\lambda )^{-N^{1/n}}\ll N^{-1}\,, \end{aligned}$$

which is satisfied and one therefore expects localization. For the critical Erdős–Rényi graph, the level spacing \(\varepsilon (\lambda )\) is \(N^{-\rho _b(\lambda )+o(1)}\) but the diameter of the giant component is only \(\frac{\log N}{\log d}\). Hence, the rule of thumb (1.3) becomes

$$\begin{aligned} N^{-\frac{\log \beta (\lambda )}{\log d}}\ll N^{-\rho _b(\lambda )+o(1)}\,, \end{aligned}$$

which is never satisfied because \(\frac{\log \beta (\lambda )}{\log d}\rightarrow 0\) as \(N \rightarrow \infty \). Thus, the rule of thumb (1.3) is satisfied in the localized phase of the Anderson model but not in the semilocalized phase of the critical Erdős–Rényi graph. The underlying reason behind this difference is that the diameter of the Anderson model is polynomial in N, while the diameter of the critical Erdős–Rényi graph is logarithmic in N. Thus, the critical Erdős–Rényi graph is far more connected than the Anderson model; this property tends to push it more towards the delocalized behaviour of mean-field systems. As noted above, another important difference between the localized phase of the Anderson model and the semilocalized phase of the critical Erdős–Rényi graph is that the density of states is of order N in the former and a fractional power of N in the latter.

Up to now we have focused on the Erdős–Rényi graph on the critical scale \(d \asymp \log N\). It is natural to ask whether this assumption can be relaxed without changing its behaviour. The question of the upper bound on d is simple: as explained above, there is no semilocalized phase for \(d > b_* \log N\), and the delocalized phase is completely understood up to \(d \leqslant N/2\), thanks to Theorem 1.8 below and [35, 42]. The lower bound is more subtle. In fact, it turns out that all of our results remain valid throughout the regime

$$\begin{aligned} \sqrt{\log N} \ll d \leqslant O(\log N)\,. \end{aligned}$$
(1.4)

The lower bound \(\sqrt{\log N}\) is optimal in the sense that below it both phases are disrupted and the phase diagram from Fig. 1 no longer holds. Indeed, for \(d \lesssim \sqrt{\log N}\) a new family of localized states, associated with so-called tuning forks at the periphery of the graph, appears throughout the delocalized and semilocalized phases. We refer to Sect. 1.5 below for more details.

Previously, strong delocalization with localization exponent \(\gamma (\varvec{\mathrm {w}}) = 1 + o(1)\) has been established for many mean-field models, such as Wigner matrices [1, 34,35,36,37], supercritical Erdős–Rényi graphs [35, 42], and random regular graphs [12, 13]. All of these models are homogeneous and only have a delocalized phase.

Although a rigorous understanding of the metal-insulator transition for the Anderson model is still elusive, some progress has been made for random band matrices. Random band matrices [23, 40, 47, 58] constitute an attractive model interpolating between the Anderson model and mean-field Wigner matrices. They retain the n-dimensional structure of the Anderson model but have proved somewhat more amenable to rigorous analysis. They are conjectured [40] to have a phase diagram similar to that of the Anderson model in dimensions \(n \geqslant 3\). As for the Anderson model, dimensions \(n > 1\) have so far seen little progress, but for \(n = 1\) much has been understood both in the localized [48, 50] and the delocalized [20,21,22, 28,29,30,31,32,33, 43, 51, 52, 59] phases. A simplification of band matrices is the ultrametric ensemble [41], where the Euclidean metric of \({\mathbb {Z}}^n\) is replaced with an ultrametric arising from a tree structure. For this model, a phase transition was rigorously established in [56].

Another modification of the n-dimensional Anderson model is the Anderson model on the Bethe lattice, an infinite regular tree corresponding to the case \(n = \infty \). For it, the existence of a delocalized phase was shown in [5, 38, 44]. In [6, 7] it was shown that for unbounded random potentials the delocalized phase exists for arbitrarily weak disorder. It extends beyond the spectrum of the unperturbed adjacency matrix into the so-called Lifschitz tails, where the density of states is very small. The authors showed that, through the mechanism of resonant delocalization, the exponentially decaying tunnelling amplitudes between localization centres are counterbalanced by an exponentially large number of possible channels through which tunnelling can occur, so that the rule of thumb (1.3) for localization is violated. As a consequence, the eigenvectors are delocalized across many resonant localization centres. We remark that this analysis was made possible by the absence of cycles on the Bethe lattice. In contrast, the global geometry of the critical Erdős–Rényi graph is fundamentally different from that of the Bethe lattice (through the existence of a very large number of long cycles), which has a defining impact on the nature of the delocalization-semilocalization transition summarized in Fig. 1.

Transitions in the localization behaviour of eigenvectors have also been analysed in several mean-field type models. In [45, 46] the authors considered the sum of a Wigner matrix and a diagonal matrix with independent random entries with a large enough variance. They showed that the eigenvectors in the bulk are delocalized while near the edge they are partially localized at a single site. Their partially localized phase can be understood heuristically as a rigorous (and highly nontrivial) verification of the rule of thumb for localization, where the perturbation takes place around the diagonal matrix. Heavy-tailed Wigner matrices, or Lévy matrices, whose entries have \(\alpha \)-stable laws for \(0< \alpha < 2\), were proposed in [24] as a simple model that exhibits a transition in the localization of its eigenvectors; we refer to [3] for a summary of the predictions from [24, 53]. In [18, 19] it was proved that for energies in a compact interval around the origin, eigenvectors are weakly delocalized, and for \(0< \alpha < 2/3\) for energies far enough from the origin, eigenvectors are weakly localized. In [3], full delocalization was proved in a compact interval around the origin, and the authors even established GOE local eigenvalue statistics in the same spectral region. In [2], the law of the eigenvector components of Lévy matrices was computed.

Conventions Throughout the following, every quantity that is not explicitly constant depends on the fundamental parameter N. We almost always omit this dependence from our notation. We use C to denote a generic positive universal constant, and write \(X = O(Y)\) to mean \(|X | \leqslant C Y\). For \(X,Y > 0\) we write \(X \asymp Y\) if \(X = O(Y)\) and \(Y = O(X)\). We write \(X \ll Y\) or \(X = o(Y)\) to mean \(\lim _{N \rightarrow \infty } X/Y = 0\). A vector is normalized if its \(\ell ^2\)-norm is one.

1.2 Results—the semilocalized phase

Let \({\mathbb {G}} = {\mathbb {G}}(N,d/N)\) be the Erdős–Rényi graph with vertex set \([N] :=\{1, \ldots , N\}\) and edge probability d/N for \(0 \leqslant d \leqslant N\). Let \(A = (A_{xy})_{x,y \in [N]} \in \{0,1\}^{N\times N}\) be the adjacency matrix of \({\mathbb {G}}\). Thus, \(A =A^*\), \(A_{xx}=0\) for all \(x \in [N]\), and \(( A_{xy} :x < y)\) are independent \({\text {Bernoulli}}(d/N)\) random variables.

The entrywise nonnegative matrix \(A/\sqrt{d}\) has a trivial Perron–Frobenius eigenvalue, which is its largest eigenvalue. In the following we only consider the other eigenvalues, which we call nontrivial. In the regime \(d \gg \sqrt{\log N/\log \log N}\), which we always assume in this paper, the trivial eigenvalue is located at \(\sqrt{d} (1 + o(1))\), and it is separated from the nontrivial ones with high probability; see [14]. Moreover, without loss of generality in this subsection we always assume that \(d \leqslant 3 \log N\), for otherwise the semilocalized phase does not exist (see Sect. 1.1).

For \(x \in [N]\) we define the normalized degree of x as

$$\begin{aligned} \alpha _x :=\frac{1}{d} \sum _{y \in [N]} A_{xy}\,. \end{aligned}$$
(1.5)

In Theorem 1.7 below we show that the nontrivial eigenvalues of \(A / \sqrt{d}\) outside the interval \([-2,2]\) are in two-to-one correspondence with vertices with normalized degree greater than 2: each vertex x with \(\alpha _x > 2\) gives rise to two eigenvalues of \(A / \sqrt{d}\) located with high probability near \(\pm \Lambda (\alpha _x)\), where we defined the bijective function \(\Lambda :[2,\infty ) \rightarrow [2,\infty )\) through

$$\begin{aligned} \Lambda (\alpha ) :=\frac{\alpha }{\sqrt{\alpha -1}}. \end{aligned}$$
(1.6)
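As an informal sanity check of this two-to-one correspondence, the following simulation sketch (Python with numpy; N is far below the asymptotic regime, so the agreement is only rough, with errors of order \(\xi \)) compares the largest nontrivial eigenvalues of \(A/\sqrt{d}\) with the predicted locations \(\Lambda (\alpha _x)\).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3000
d = np.log(N)                                  # critical scale, b = 1 < b_*
U = np.triu(rng.random((N, N)) < d / N, k=1)
A = (U + U.T).astype(float)                    # adjacency matrix of G(N, d/N)

alpha = A.sum(axis=1) / d                      # normalized degrees (1.5)
Lam = lambda a: a / np.sqrt(a - 1)             # the bijection Lambda from (1.6)

eigs = np.sort(np.linalg.eigvalsh(A / np.sqrt(d)))[::-1]
pred = np.sort(Lam(alpha[alpha > 2]))[::-1]    # predicted locations Lambda(alpha_x)
k = min(5, len(pred))
print(eigs[1:k + 1])    # top nontrivial eigenvalues (eigs[0] is the trivial one)
print(pred[:k])         # rough agreement at this small N
```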

Our main result in the semilocalized phase is about the eigenvectors associated with these eigenvalues. To state it, we need the following notions.

Definition 1.1

Let \(\lambda >2\) and \(0 < \delta \leqslant \lambda - 2\). We define the set of resonant vertices at energy \(\lambda \) through

$$\begin{aligned} {{\mathcal {W}}}_{\lambda ,\delta } :=\bigl \{x :\alpha _x \geqslant 2, |\Lambda (\alpha _x) - \lambda | \leqslant \delta \bigr \}\,. \end{aligned}$$
(1.7)

We denote by \(B_r(x)\) the ball around the vertex x of radius r for the graph distance in \({\mathbb {G}}\). Define

$$\begin{aligned} r_\star = \big \lfloor c \sqrt{\log N} \big \rfloor \,; \end{aligned}$$
(1.8)

all of our results will hold provided \(c > 0\) is chosen to be a small enough universal constant. The quantity \(r_\star \) will play the role of a maximal radius for balls around localization centres.

We introduce the basic control parameters

$$\begin{aligned} \xi :=\frac{\sqrt{\log N}}{d} \log d\,, \qquad \xi _u :=\frac{\sqrt{\log N}}{d} \frac{1}{u}\,, \end{aligned}$$
(1.9)

which under our assumptions will always be small (see Remark 1.5 below). We now state our main result in the semilocalized phase.

Theorem 1.2

(Semilocalized phase). For any \(\nu > 0\) there exists a constant \({{\mathcal {C}}}\) such that the following holds. Suppose that

$$\begin{aligned} {{\mathcal {C}}} \sqrt{\log N} \log \log N \leqslant d \leqslant 3\log N\,. \end{aligned}$$
(1.10)

Let \(\varvec{\mathrm {w}}\) be a normalized eigenvector of \(A/\sqrt{d}\) with nontrivial eigenvalue \(\lambda \geqslant 2+{{\mathcal {C}}} \xi ^{1/2}\). Let \(0<\delta \leqslant (\lambda -2)/2\). Then for each \(x \in {{\mathcal {W}}}_{\lambda , \delta }\) there exists a normalized vector \(\varvec{\mathrm {v}}(x)\), supported in \(B_{r_\star }(x)\), such that the supports of \(\varvec{\mathrm {v}}(x)\) and \(\varvec{\mathrm {v}}(y)\) are disjoint for \(x \ne y\), and

$$\begin{aligned} \biggl \Vert \varvec{\mathrm {w}} - \sum _{x \in {{\mathcal {W}}}_{\lambda ,\delta }} \langle \varvec{\mathrm {v}}(x) , \varvec{\mathrm {w}} \rangle \, \varvec{\mathrm {v}}(x) \biggr \Vert ^2 \leqslant \frac{{{\mathcal {C}}} (\xi + \xi _{\lambda -2})}{\delta ^2} \end{aligned}$$

with probability at least \(1 - {{\mathcal {C}}} N^{-\nu }\). Moreover, \(\varvec{\mathrm {v}}(x)\) decays exponentially around x in the sense that for any \(r \geqslant 0\) we have

$$\begin{aligned} \sum _{y \notin B_r(x)} (\varvec{\mathrm {v}}(x))_y^2 \leqslant \frac{1}{(\alpha _x - 1)^{r+1}}\,. \end{aligned}$$

Remark 1.3

An analogous result holds for negative eigenvalues \(-\lambda \leqslant -2 - {{\mathcal {C}}} \xi ^{1/2}\), with a different vector \(\varvec{\mathrm {v}}(x)\). See Theorem 3.4 and Remark 3.5 below for a precise statement.

Remark 1.4

The upper bound \(d \leqslant 3 \log N\) in (1.10) is made for convenience and without loss of generality, because if \(d > 3 \log N\) then, as explained in Sect. 1.1, with high probability the semilocalized phase does not exist, i.e. eigenvalues satisfying the conditions of Theorem 1.2 do not exist.

Theorem 1.2 implies that \(\varvec{\mathrm {w}}\) is almost entirely concentrated in the balls around the resonant vertices, and in each such ball \(B_{r_\star }(x)\), \(x \in {{\mathcal {W}}}_{\lambda ,\delta }\), the vector \(\varvec{\mathrm {w}}\) is almost collinear to the vector \(\varvec{\mathrm {v}}(x)\). Thus, \(\varvec{\mathrm {v}}(x)\) has the interpretation of the localization profile around the localization centre x. Since it has exponential decay, we deduce immediately from Theorem 1.2 that the radius \(r_\star \) can be made smaller at the expense of worse error terms. In fact, in Definition 3.2 and Theorem 3.4 below, we give an explicit definition of \(\varvec{\mathrm {v}}(x)\), which shows that it is radial in the sense that its value at a vertex y depends only on the distance between x and y, in which it is an exponentially decaying function. To ensure that the supports of the vectors \(\varvec{\mathrm {v}}(x)\) for different x do not overlap, \(\varvec{\mathrm {v}}(x)\) is in fact defined as the restriction of a radial function around x to a subgraph of \({\mathbb {G}}\), the pruned graph, which differs from \({\mathbb {G}}\) by only a small number of edges and whose balls of radius \(r_\star \) around the vertices of \({{\mathcal {W}}}_{\lambda ,\delta }\) are disjoint (see Proposition 3.1 below). For positive eigenvalues, the entries of \(\varvec{\mathrm {v}}(x)\) are nonnegative, while for negative eigenvalues its entries carry a sign that alternates in the distance to x. The set of resonant vertices \(\mathcal W_{\lambda ,\delta }\) is a small fraction of the whole vertex set [N]; its size is analysed in Lemma A.12 below.

Remark 1.5

Note that, by the lower bounds imposed on d and \(\lambda \) in Theorem 1.2, we always have \(\xi , \xi _{\lambda - 2} \leqslant 1/ {{\mathcal {C}}}\).

Using the exponential decay of the localization profiles, it is easy to deduce from Theorem 1.2 that a positive proportion of the eigenvector mass concentrates at the resonant vertices.

Corollary 1.6

Under the assumptions of Theorem 1.2 we have

$$\begin{aligned} \sum _{y \in {{\mathcal {W}}}_{\lambda ,\delta }} w_y^2 = \frac{\sqrt{\lambda ^2-4}}{ \lambda + \sqrt{\lambda ^2-4}} +O \bigg (\frac{{{\mathcal {C}}}(\xi +\xi _{\lambda -2})}{\delta }+\frac{{{\mathcal {C}}} \delta }{\lambda ^{5/2} \sqrt{\lambda -2}}\bigg ) \end{aligned}$$

with probability at least \(1 - {{\mathcal {C}}} N^{-\nu }\).

Next, we state a rigidity result on the eigenvalue locations in the semilocalized phase. It generalizes [10, Corollary 2.3] by improving the error bound and extending it to the full regime (1.4) of d, below which it must fail (see Sect. 1.5 below). Its proof is a byproduct of the proof of our main result in the semilocalized phase, Theorem 1.2. We denote the ordered eigenvalues of a Hermitian matrix \(M\in {\mathbb {C}}^{N\times N}\) by \(\lambda _1(M) \geqslant \lambda _2(M) \geqslant \cdots \geqslant \lambda _N(M)\). We only consider the nontrivial eigenvalues of \(A / \sqrt{d}\), i.e. \(\lambda _i(A / \sqrt{d})\) with \(2 \leqslant i \leqslant N\). For the following statements we order the normalized degrees by choosing a (random) permutation \(\sigma \in S_N\) such that \(i \mapsto \alpha _{\sigma (i)}\) is nonincreasing.

Theorem 1.7

(Eigenvalue locations in semilocalized phase). For any \(\nu > 0\) there exists a constant \({{\mathcal {C}}}\) such that the following holds. Suppose that (1.10) holds. Let

$$\begin{aligned} {{\mathcal {U}}} :=\{x \in [N] :\Lambda (\alpha _x) \geqslant 2 + \xi ^{1/2}\}\,. \end{aligned}$$

Then with probability at least \(1 - {{\mathcal {C}}} N^{-\nu }\), for all \(1\leqslant i\leqslant |{{\mathcal {U}}}|\) we have

$$\begin{aligned} |\lambda _{i + 1}(A/\sqrt{d})-\Lambda (\alpha _{\sigma (i)})| + |\lambda _{N-i+1}(A/\sqrt{d})+\Lambda (\alpha _{\sigma (i)})| \leqslant {{\mathcal {C}}} (\xi +\xi _{\Lambda (\alpha _{\sigma (i)})-2}) \end{aligned}$$
(1.11)

and for all \(|{{\mathcal {U}}} | + 2 \leqslant i \leqslant N - |{{\mathcal {U}}} |\) we have

$$\begin{aligned} |\lambda _i(A/\sqrt{d}) | \leqslant 2 + \xi ^{1/2}\,. \end{aligned}$$
(1.12)

We remark that the upper bound on d from (1.10), which is necessary for the existence of a semilocalized phase, can be relaxed in Theorem 1.7 to obtain an estimate on \(\max _{2 \leqslant i \leqslant N} |\lambda _i(A / \sqrt{d}) |\) in the supercritical regime \(d \geqslant 3 \log N\), which is sharper than the one in [10]. The proof is the same and we do not pursue this direction here.

We conclude this subsection with a discussion on the counting function of the normalized degrees, which we use to give estimates on the number of resonant vertices (1.7). For \(b \geqslant 0\) and \(\alpha \geqslant 2\) define the exponent

$$\begin{aligned} \theta _b(\alpha ) :=[1 - b (\alpha \log \alpha - \alpha + 1)]_+\,. \end{aligned}$$
(1.13)

Define \(\alpha _{\max }(b) :=\inf \{\alpha \geqslant 2 :\theta _b(\alpha ) = 0\}\). Thus, \(\theta _b\) is a nonincreasing function that is nonzero on \([2, \alpha _{\max }(b))\). Moreover, \(\theta _b(2) = [1 - b/b_*]_+\), so that \(\alpha _{\max }(b) > 2\) if and only if \(b < b_*\). From Lemma A.9 below it is easy to deduce that if \(d \gg 1\) then \(\alpha _{\sigma (1)} = \alpha _{\max }(d/\log N) + O(\zeta / d)\) with probability \(1 - o(1)\) for any \(\zeta \gg 1\). Thus, \(\alpha _{\max }(d/\log N)\) has the interpretation of the deterministic location of the largest normalized degree. See Fig. 4 for a plot of \(\theta _b\).
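Both \(\theta _b\) and \(\alpha _{\max }(b)\) are elementary to evaluate numerically. A minimal sketch (Python; the bisection bracket \([2,50]\) is an arbitrary choice, and hi must satisfy \(\theta _b(\mathrm {hi}) = 0\)): for \(b = 1\) one finds \(\alpha _{\max }(1) = \mathrm {e}\), and hence \(\lambda _{\max }(1) = \Lambda (\mathrm {e}) \approx 2.0737\), matching the value quoted in Fig. 2.

```python
import numpy as np

b_star = 1 / (2 * np.log(2) - 1)                     # (1.2), approximately 2.59

def theta(b, alpha):
    """The exponent theta_b(alpha) from (1.13), for alpha >= 2."""
    return max(0.0, 1 - b * (alpha * np.log(alpha) - alpha + 1))

def alpha_max(b, hi=50.0, tol=1e-12):
    """alpha_max(b) = inf{alpha >= 2 : theta_b(alpha) = 0}, by bisection."""
    lo = 2.0
    if theta(b, lo) == 0.0:                          # b >= b_*: no semilocalized phase
        return lo
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if theta(b, mid) > 0 else (lo, mid)
    return lo

print(theta(1.0, 2.0), 1 - 1.0 / b_star)             # theta_b(2) = 1 - b/b_*
print(alpha_max(1.0), np.e)                          # alpha_max(1) = e
print(alpha_max(1.0) / np.sqrt(alpha_max(1.0) - 1))  # Lambda(alpha_max) ~ 2.0737
```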

Fig. 4

A plot of the exponent \(\theta _b(\alpha )\) as a function of \(\alpha \geqslant 2\) for the values \(b = 0.3\) (blue), \(b = 1.3\) (red), and \(b = 2.3\) (green). The graph hits the value 0 at \(\alpha _{\max }(b)\)

In Appendix A.4 below, we obtain estimates on the density of the normalized degrees \((\alpha _x)_{x \in [N]}\) and combine them with Theorem 1.2 to deduce a lower bound on the \(\ell ^p\)-norm of eigenvectors in the semilocalized phase. The precise statements are given in Lemma A.12 and Corollary A.13, which provide quantitative error bounds throughout the regime (1.10). Here, we summarize them, for simplicity, in qualitative versions in the critical regime \(d \asymp \log N\). For \(b < b_*\) we abbreviate

$$\begin{aligned} \lambda _{\max }(b) :=\Lambda (\alpha _{\max }(b))\,, \qquad \rho _b(\lambda ) :={\left\{ \begin{array}{ll} \theta _b(\Lambda ^{-1}(\lambda )) & \text {if } |\lambda | \geqslant 2\\ 1 & \text {if } |\lambda | < 2\,, \end{array}\right. } \end{aligned}$$
(1.14)

where \(\Lambda ^{-1}(\lambda ) = \frac{\lambda ^2}{2}(1 + \sqrt{1 - 4/\lambda ^2})\) for \(|\lambda | \geqslant 2\). Let \(d = b \log N\) with some constant \(b < b_*\), and suppose that \(2 + \kappa \leqslant \lambda \leqslant \lambda _{\max }(b) - \kappa \) for some constant \(\kappa > 0\). Then Lemma A.12 (ii) implies (choosing \(1/d \ll \delta \ll 1\))

$$\begin{aligned} |{{\mathcal {W}}}_{\lambda , \delta } | = N^{\rho _b(\lambda ) + o(1)} \end{aligned}$$
(1.15)

with probability \(1 - o(1)\). From (1.15) and Theorem 1.2 we obtain, for any \(2 \leqslant p \leqslant \infty \),

$$\begin{aligned} \Vert \varvec{\mathrm {w}} \Vert _p^{2} \geqslant N^{(2/p - 1) \rho _b(\lambda ) + o(1)} \end{aligned}$$
(1.16)

with probability \(1 - o(1)\) (see Corollary A.13 below). In other words, the localization exponent \(\gamma (\varvec{\mathrm {w}})\) from (1.1) satisfies \(\gamma (\varvec{\mathrm {w}}) \leqslant \rho _b(\lambda ) + o(1)\). See Fig. 2 for an illustration of the bound (1.16) for \(p = \infty \). We remark that the exponent \(\rho _b(\lambda )\) also describes the density of states at energy \(\lambda \): under the above assumptions on b and \(\lambda \), for any interval I containing \(\lambda \) and satisfying \(\xi \ll |I | \ll 1\), the number of eigenvalues in I is equal to \(N^{\rho _b(\lambda ) + o(1)} |I |\) with probability \(1 - o(1)\), as can be seen from Lemma A.12 (i) and Theorem 1.7.
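For completeness, here is a small numerical sketch (Python; self-contained, repeating the definitions (1.13) and (1.14)) of the exponent \(\rho _b\), exhibiting the discontinuity at the threshold energy 2.

```python
import numpy as np

b_star = 1 / (2 * np.log(2) - 1)

def theta(b, alpha):                                  # exponent (1.13)
    return max(0.0, 1 - b * (alpha * np.log(alpha) - alpha + 1))

def Lambda_inv(lam):                                  # inverse of (1.6) on [2, infinity)
    return (lam**2 / 2) * (1 + np.sqrt(1 - 4 / lam**2))

def rho(b, lam):                                      # the exponent rho_b from (1.14)
    return theta(b, Lambda_inv(abs(lam))) if abs(lam) >= 2 else 1.0

b = 1.0
print(rho(b, 1.9))    # 1.0: inside the delocalized bulk
print(rho(b, 2.0))    # 1 - b/b_* = 2 - 2 log 2 ~ 0.6137: the jump at the edge
print(rho(b, 2.05))   # ~0.15, decreasing to 0 at lambda_max(1) ~ 2.0737
```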

1.3 Results—the delocalized phase

Let A be the adjacency matrix of \({\mathbb {G}}(N,d/N)\), as in Sect. 1.2. For \(0< \kappa < 1/2\) define the spectral region

$$\begin{aligned} {{\mathcal {S}}}_\kappa :=[-2 + \kappa , -\kappa ] \cup [\kappa , 2 - \kappa ]\,. \end{aligned}$$
(1.17)

Theorem 1.8

(Delocalized phase). For any \(\nu >0\) and \(\kappa >0\) there exists a constant \({{\mathcal {C}}} > 0\) such that the following holds. Suppose that

$$\begin{aligned} {{\mathcal {C}}} \sqrt{\log N} \leqslant d \leqslant (\log N)^{3/2}\,. \end{aligned}$$
(1.18)

Let \(\varvec{\mathrm {w}}\) be a normalized eigenvector of \(A / \sqrt{d}\) with eigenvalue \(\lambda \in {{\mathcal {S}}}_\kappa \). Then

$$\begin{aligned} \Vert \varvec{\mathrm {w}} \Vert _\infty ^2 \leqslant N^{-1 + \kappa } \end{aligned}$$
(1.19)

with probability at least \(1 - \mathcal CN^{-\nu }\).

In the delocalized phase, i.e. in \({{\mathcal {S}}}_\kappa \), we also show that the spectral measure of \(A / \sqrt{d}\) at any vertex x is well approximated by the spectral measure at the root of \({\mathbb {T}}_{d\alpha _x,d}\), the infinite rooted \((d\alpha _x,d)\)-regular tree, whose root has \(d \alpha _x\) children and all other vertices have d children. This approximation is a local law, valid for intervals containing down to \(N^\kappa \) eigenvalues. See Remark 4.4 as well as Remark 4.3 and Appendix A.2 below for details.

Remark 1.9

In [42] it is shown that (1.19) holds with probability at least \(1 - \mathcal CN^{-\nu }\) for all eigenvectors provided that

$$\begin{aligned} {{\mathcal {C}}} \log N \leqslant d \leqslant N/2\,. \end{aligned}$$
(1.20)

This shows that the upper bound in (1.18) is in fact not restrictive.

Remark 1.10

(Optimality of (1.18) and (1.20)). Both lower bounds in (1.18) and (1.20) are optimal (up to the value of \({{\mathcal {C}}}\)), in the sense that delocalization fails in each case if these lower bounds are relaxed. See Sect. 1.5 below.

We note that the domain \({{\mathcal {S}}}_\kappa \) is optimal, up to the choice of \(\kappa > 0\). Indeed, as explained in Sect. 1.5 below, delocalization fails in the neighbourhood of the origin, owing to a proliferation of highly localized tuning fork states. Similarly, we expect delocalization to fail in the neighbourhoods of \(\pm 2\), where the masses of the eigenvectors become concentrated on vertices x with normalized degrees \(\alpha _x\) close to 2. The neighbourhoods of \(0, \pm 2\) are also singled out as the regions where the self-consistent equation used to prove Theorem 1.8 (see Lemma 4.16) becomes unstable. This instability is directly related to the appearance of singularities in the spectral measure of the tree \({\mathbb {T}}_{d \alpha _x,d}\) (see (4.11) and Fig. 8 for an illustration). The singularity near 0 occurs when \(\alpha _x\) is close to 0, and the singularities near \(\pm 2\) when \(\alpha _x\) is close to 2. See Fig. 10 for a simulation that demonstrates numerically the failure of delocalization outside of \({{\mathcal {S}}}_\kappa \).

1.4 Extension to general sparse random matrices

Our results, Theorems 1.2, 1.7, and 1.8, hold also for the following family of sparse Wigner matrices. Let \(A = (A_{xy})\) be the adjacency matrix of \({\mathbb {G}}(N,d/N)\) as above and \(W=(W_{xy})\) be an independent Wigner matrix with bounded entries. That is, W is Hermitian and its upper triangular entries \((W_{xy} :x \leqslant y)\) are independent complex-valued random variables with mean zero and variance one, \({\mathbb {E}}|W_{xy} |^2 = 1\), and \(|W_{xy} | \leqslant K\) almost surely for some constant K. Then we define the sparse Wigner matrix \(M = (M_{xy})\) as the Hadamard product of A and W, with entries \(M_{xy} :=A_{xy} W_{xy}\). Since the entries of \(M / \sqrt{d}\) are centred, it does not have a trivial eigenvalue like \(A / \sqrt{d}\).
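A minimal construction sketch (Python with numpy; Rademacher signs \(\pm K\) with \(K = 1\) are one admissible choice of bounded Wigner entries with mean zero and variance one):

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, K = 1000, 7.0, 1.0

U = np.triu(rng.random((N, N)) < d / N, k=1)
A = (U + U.T).astype(float)                    # adjacency matrix of G(N, d/N)

# Rademacher signs: mean zero, variance one, |W_xy| <= K with K = 1
S = np.triu(rng.choice([-K, K], size=(N, N)), k=1)
W = S + S.T                                    # real symmetric Wigner matrix

M = A * W                                      # Hadamard product: sparse Wigner matrix
alpha = (np.abs(M) ** 2).sum(axis=1) / d       # normalized degrees (1.21)
print(np.sort(alpha)[-5:])                     # a few vertices of large normalized degree
```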

Theorem 1.11

Let \(M = (M_{xy})_{x,y \in [N]}\) be a sparse Wigner matrix. Define

$$\begin{aligned} \alpha _x = \frac{1}{d} \sum _{y \in [N]} |M_{xy} |^2. \end{aligned}$$
(1.21)

Theorems 1.2 and 1.8 hold with (1.21) if A is replaced with M, and Theorem 1.7 holds with (1.21) if \(\lambda _{i + 1}(A/\sqrt{d})\), \(\lambda _{N-i+1}(A/\sqrt{d})\), and \(\lambda _i(A/\sqrt{d})\) are replaced with \(\lambda _{i}(M / \sqrt{d})\), \(\lambda _{N-i+1}(M / \sqrt{d})\), and \(\lambda _i(M / \sqrt{d})\), respectively. Here, the constants \({{\mathcal {C}}}\) depend on K in addition to \(\nu \) and \(\kappa \).

The modifications to the proofs of Theorems 1.2 and 1.7 required to establish Theorem 1.11 are minor and follow along the lines of [10, Section 10]. The modification to the proof of Theorem 1.8 is trivial, since the assumptions of the general Theorem 4.2 below include the sparse Wigner matrix M. We also remark that, with some extra work, one can relax the boundedness assumption on the entries of W, which we shall however not do here.

1.5 The limits of sparseness and the scale \(d \asymp \sqrt{\log N}\)

We conclude this section with a discussion on how sparse \({\mathbb {G}}\) can be for our results to remain valid. We show that all of our results—Theorems 1.2, 1.7, and 1.8—are wrong below the regime (1.4), i.e. if d is smaller than order \(\sqrt{\log N}\). Thus, our sparseness assumptions—the lower bounds on d from (1.10) and (1.18)—are optimal (up to the factor \(\log \log N\) in (1.10) and the factor \({{\mathcal {C}}}\) in (1.18)). The fundamental reason for this change of behaviour will turn out to be that the ratio \(|S_2(x) | / |S_1(x) |\) concentrates if and only if \(d \gg \sqrt{\log N}\), where \(S_i(x)\) denotes the sphere in \({\mathbb {G}}\) of radius i around x. This can be easily made precise with a well-known tuning fork construction, detailed below.

In the critical and subcritical regime \(1 \ll d = O(\log N)\), the graph \({\mathbb {G}}\) is in general not connected, but with probability \(1 - o(1)\) it has a unique giant component \({\mathbb {G}}_{\mathrm {giant}}\) with at least \(N (1 - \mathrm {e}^{- d/4})\) vertices (see Corollary A.15 below). Moreover, the spectrum of \(A / \sqrt{d}\) restricted to the complement of the giant component is contained in the \(O\bigl (\frac{\sqrt{\log N}}{d}\bigr )\)-neighbourhood of the origin (see Corollary A.16 below). Since we always assume \(d \geqslant {{\mathcal {C}}} \sqrt{\log N}\) and we only consider eigenvalues in \({\mathbb {R}}\setminus [-\kappa ,\kappa ]\), we conclude that all of our results listed above only pertain to the eigenvalues and eigenvectors of the giant component.

For \(D = 0,1,2,\dots \) we introduce a star tuning fork of degree D rooted in \({\mathbb {G}}_{\mathrm {giant}}\), or D-tuning fork for short, which is obtained by taking two stars with central degree D and connecting their hubs to a common base vertex in \({\mathbb {G}}_{\mathrm {giant}}\). We refer to Fig. 5 for an illustration and Definition A.17 below for a precise definition.

Fig. 5

A star tuning fork of degree 12 rooted in a graph. The tuning fork is highlighted in blue. Its base is filled with red and its two hubs are filled with blue

It is not hard to see that every D-tuning fork gives rise to two eigenvalues \(\pm \sqrt{D/d}\) of \(A / \sqrt{d}\) restricted to \({\mathbb {G}}_{\mathrm {giant}}\), whose associated eigenvectors are supported on the stars (see Lemma A.18 below). We denote by \(\Sigma :=\{\pm \sqrt{D/d} :\text {a D-tuning fork exists}\}\) the spectrum of \(A / \sqrt{d}\) restricted to \({\mathbb {G}}_{\mathrm {giant}}\) generated by the tuning forks. Any eigenvector associated with an eigenvalue \(\pm \sqrt{D/d} \in \Sigma \) is localized on precisely \(2D + 2\) vertices. Thus, D-tuning forks provide a simple way of constructing localized states. Note that this is a very basic form of concentration of mass, supported at the periphery of the graph on special graph structures, and is unrelated to the much more subtle concentration in the semilocalized phase described in Sect. 1.2.
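The following sketch (Python; the fork is built in isolation, which suffices because the eigenvector below vanishes at the base vertex, so it is unaffected by how the base is attached to \({\mathbb {G}}_{\mathrm {giant}}\)) verifies the eigenvalue \(\sqrt{D/d}\) and its eigenvector supported on \(2D + 2\) vertices.

```python
import numpy as np

D, d = 12, 8.0
n = 2 * D + 3                            # vertex 0 = base, 1 and 2 = hubs, then 2D leaves
A = np.zeros((n, n))
for hub, first_leaf in [(1, 3), (2, 3 + D)]:
    A[0, hub] = A[hub, 0] = 1.0          # hub attached to the common base vertex
    A[hub, first_leaf:first_leaf + D] = 1.0
    A[first_leaf:first_leaf + D, hub] = 1.0

# antisymmetric vector: +-sqrt(D) on the hubs, +-1 on the leaves, 0 at the base
v = np.zeros(n)
v[1], v[2] = np.sqrt(D), -np.sqrt(D)
v[3:3 + D], v[3 + D:3 + 2 * D] = 1.0, -1.0
v /= np.linalg.norm(v)

# exact eigenvector of A/sqrt(d) with eigenvalue sqrt(D/d); since v vanishes at
# the base, attaching the base to a larger graph does not affect this relation
print(np.linalg.norm(A / np.sqrt(d) @ v - np.sqrt(D / d) * v))   # ~1e-15
print(np.count_nonzero(v))                                       # 2D + 2 = 26
```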

For \(d > 0\) and \(D \in {\mathbb {N}}\) we now estimate the number of D-tuning forks in \({\mathbb {G}}(N,d/N)\), which we denote by F(dD). The following result is proved in Appendix A.6.

Lemma 1.12

(Number of D-tuning forks). Suppose that \(1 \ll d = b \log N = O(\log N)\) and \(0 \leqslant D \ll \log N / \log \log N\). Then \(F(d,D) = N^{1 - 2b - 2b D + o(1)}\) with probability \(1 - o(1)\).

Defining \(D_* :=\frac{\log N}{2d} - 1\), we immediately deduce the following result.

Corollary 1.13

For any constant \(\varepsilon > 0\) with probability \(1 - o(1)\) the following holds. If \(D_* \leqslant -\varepsilon \) then \(\Sigma = \emptyset \). If \(D_* \geqslant \varepsilon \) then \(\Sigma = \{\pm \sqrt{D/d} :D \in {\mathbb {N}}, D \leqslant D_* (1 + o(1))\}\).

We deduce that if \(d \leqslant (1/2 - \varepsilon ) \log N\) then \(\Sigma \ne \emptyset \) and hence the delocalization for all eigenvectors from Remark 1.9 fails. Hence, the lower bound (1.20) is optimal up to the value of \({{\mathcal {C}}}\).

Similarly, for \(d \gg \sqrt{\log N}\) the set \(\Sigma \) is in general nonempty, but we always have \(\Sigma \subset [-\kappa , \kappa ]\) for any fixed \(\kappa > 0\), so that eigenvalues from \(\Sigma \) do not interfere with the statements of Theorems 1.2, 1.7, and 1.8. On the other hand, if \(d = \sqrt{\log N} / t\) for a constant t, we find that \(\Sigma \) is asymptotically dense in the interval \([-t/\sqrt{2}, t / \sqrt{2}]\). Since the conclusions of Theorems 1.2, 1.7, and 1.8 are obviously wrong for any eigenvalue from \(\Sigma \), they must all be wrong for large enough t. This shows that the lower bounds on d from (1.10) and (1.18) are optimal (up to the factor \(\log \log N\) in (1.10) and the factor \({{\mathcal {C}}}\) in (1.18)).

In fact, the emergence of the tuning fork eigenvalues of order one and the failure of all of our proofs have the same underlying root cause, which singles out the scale \(d \asymp \sqrt{\log N}\) as the scale below which the concentration of the ratio

$$\begin{aligned} |S_2(x) | / |S_1(x) | = d (1 + o(1)) \end{aligned}$$
(1.22)

fails for vertices x satisfying \(D_x \asymp d\). Clearly, to have a D-tuning fork with \(D \asymp d\), (1.22) has to fail at the hubs of the stars. Moreover, (1.22) enters our proofs of both the semilocalized and the delocalized phase in a crucial way. For the former, it is linked to the validity of the local approximation by the \((D_x,d)\)-regular tree from Appendix A.2, which underlies also the construction of the localization profile vectors (see e.g. (3.35) below). For the latter, in the language of Definition 4.6 below, it is linked to the property that most neighbours of any vertex are typical (see Proposition 4.8 (ii) below).

2 Basic Definitions and Overview of Proofs

In this preliminary section we introduce some basic notations and definitions that are used throughout the paper, and give an overview of the proofs of Theorems 1.2 (semilocalized phase) and 1.8 (delocalized phase). These proofs are unrelated and, thus, explained separately. For simplicity, in this overview we only consider qualitative error terms of the form o(1), although all of our estimates are in fact quantitative.

2.1 Basic definitions

We write \({\mathbb {N}}= \{0,1,2,\dots \}\). We set \([n] :=\{1, \ldots , n\}\) for any \(n \in {\mathbb {N}}^*\) and \([0] :=\emptyset \). We write \(|X |\) for the cardinality of a finite set X. We use \(\mathbb {1}_{\Omega }\) as symbol for the indicator function of the event \(\Omega \).

Vectors in \({\mathbb {R}}^N\) are denoted by boldface lowercase Latin letters like \(\varvec{\mathrm {u}}\), \(\varvec{\mathrm {v}}\) and \(\varvec{\mathrm {w}}\). We use the notation \(\varvec{\mathrm {v}} = (v_x)_{x \in [N]} \in {\mathbb {R}}^N\) for the entries of a vector. We denote by \({{\,\mathrm{supp}\,}}\varvec{\mathrm {v}} :=\{x \in [N] :v_x \ne 0\}\) the support of a vector \(\varvec{\mathrm {v}}\). We denote by \(\langle \cdot \,, \cdot \rangle \) the Euclidean scalar product on \({\mathbb {R}}^N\) and by \(\Vert \cdot \Vert \) the induced Euclidean norm. For a matrix \(M \in {\mathbb {R}}^{N \times N}\), \(\Vert M \Vert \) is its operator norm induced by the Euclidean norm on \({\mathbb {R}}^N\). For any \(x \in [N]\), we define the standard basis vector \(\varvec{\mathrm {1}}_x :=(\delta _{xy})_{y \in [N]} \in {\mathbb {R}}^N\). To any subset \(S \subset [N]\) we assign the vector \(\varvec{\mathrm {1}}_S\in {\mathbb {R}}^N\) given by \(\varvec{\mathrm {1}}_S :=\sum _{x \in S} \varvec{\mathrm {1}}_x\). In particular, \(\varvec{\mathrm {1}}_{\{ x\}} = \varvec{\mathrm {1}}_x\).

We use blackboard bold letters to denote graphs. Let \({\mathbb {H}} = (V({\mathbb {H}}), E({\mathbb {H}}))\) be a (simple, undirected) graph on the vertex set \(V({\mathbb {H}}) = [N]\). We often identify a graph \({\mathbb {H}}\) with its set of edges \(E({\mathbb {H}})\). We denote by \(A^{{\mathbb {H}}} \in \{0,1\}^{N \times N}\) the adjacency matrix of \({\mathbb {H}}\). For \(r \in {\mathbb {N}}\) and \(x \in [N]\), we denote by \(B_r^{{\mathbb {H}}}(x)\) the closed ball of radius r around x in the graph \({\mathbb {H}}\), i.e. the set of vertices at distance (with respect to \({\mathbb {H}}\)) at most r from the vertex x. We denote the sphere of radius r around the vertex x by \(S_r^{{\mathbb {H}}}(x) :=B_r^{{\mathbb {H}}}(x) \setminus B_{r - 1}^{{\mathbb {H}}}(x)\). We denote by \(D_x^{{\mathbb {H}}}\) the degree of the vertex x in the graph \({\mathbb {H}}\). For any subset \(V \subset [N]\), we denote by \({\mathbb {H}} \vert _V\) the subgraph induced by \({\mathbb {H}}\) on V. If \({\mathbb {H}}\) is a subgraph of \({\mathbb {G}}\) then we denote by \({\mathbb {G}} \setminus {\mathbb {H}}\) the graph on [N] with edge set \(E({\mathbb {G}}) \setminus E({\mathbb {H}})\). In the above definitions, if the graph \({\mathbb {H}}\) is the Erdős–Rényi graph \({\mathbb {G}}\), we systematically omit the superscript \({\mathbb {G}}\).

The following notion of very high probability is a convenient shorthand used throughout the paper. It simplifies considerably the probabilistic statements of the kind that appear in Theorems 1.2, 1.7, and 1.8. It also introduces two special symbols, \(\nu \) and \({{\mathcal {C}}}\), which appear throughout the rest of the paper.

Definition 2.1

Let \(\Xi \equiv \Xi _{N,\nu }\) be a family of events parametrized by \(N \in {\mathbb {N}}\) and \(\nu > 0\). We say that \(\Xi \) holds with very high probability if for every \(\nu > 0\) there exists \({\mathcal {C}}\equiv {\mathcal {C}}_\nu \) such that

$$\begin{aligned} {\mathbb {P}}(\Xi _{N,\nu }) \geqslant 1 - {\mathcal {C}}_\nu N^{-\nu } \end{aligned}$$

for all \(N \in {\mathbb {N}}\).

Convention 2.2

In statements that hold with very high probability, we use the special symbol \({{\mathcal {C}}} \equiv {{\mathcal {C}}}_\nu \) to denote a generic positive constant depending on \(\nu \) such that the statement holds with probability at least \(1 - {{\mathcal {C}}}_\nu N^{-\nu }\) provided \(\mathcal C_\nu \) is chosen large enough. Thus, the bound \(|X | \leqslant {\mathcal {C}}Y\) with very high probability means that, for each \(\nu >0\), there is a constant \({\mathcal {C}}_\nu >0\), depending on \(\nu \), such that

$$\begin{aligned} {\mathbb {P}}\big ( |X | \leqslant {\mathcal {C}}_\nu Y \big ) \geqslant 1 - {{\mathcal {C}}}_\nu N^{-\nu } \end{aligned}$$

for all \(N \in {\mathbb {N}}\). Here, X and Y are allowed to depend on N. We also write \(X = {{\mathcal {O}}}(Y)\) to mean \(|X | \leqslant {{\mathcal {C}}} Y\).

We remark that the notion of very high probability from Definition 2.1 survives a union bound involving \(N^{O(1)}\) events. We shall tacitly use this fact throughout the paper. Moreover, throughout the paper, the constant \({{\mathcal {C}}} \equiv {{\mathcal {C}}}_\nu \) in the assumptions (1.10) and (1.18) is always assumed to be large enough.

2.2 Overview of proof in semilocalized phase

The starting point of the proof of Theorem 1.2 is the following simple observation. Suppose that M is a Hermitian matrix with eigenvalue \(\lambda \) and associated eigenvector \(\varvec{\mathrm {w}}\). Let \(\Pi \) be an orthogonal projection and write \(\overline{\Pi } \!\,:=I - \Pi \). If \(\lambda \) is not an eigenvalue of \(\overline{\Pi } \!\,M \overline{\Pi } \!\,\) then from \((M - \lambda ) \varvec{\mathrm {w}} = 0\) we deduce

$$\begin{aligned} \overline{\Pi } \!\,\varvec{\mathrm {w}} = - (\overline{\Pi } \!\,M \overline{\Pi } \!\,- \lambda )^{-1} \overline{\Pi } \!\,M \Pi \varvec{\mathrm {w}}\,. \end{aligned}$$
(2.1)

If \(\Pi \) is an eigenprojection of M whose range contains the eigenspace of \(\lambda \) (for instance \(\Pi = \varvec{\mathrm {w}} \varvec{\mathrm {w}}^*\) if \(\lambda \) is simple) then clearly both sides of (2.1) vanish. The basic idea of our proof is to apply an approximate version of this observation to \(M = A / \sqrt{d}\), by choosing \(\Pi \) appropriately, and showing that the left-hand side of (2.1) is small by estimating the right-hand side.
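The identity (2.1) is easy to verify numerically. A minimal sketch (Python; a random symmetric M and a random rank-k projection \(\Pi \), so that generically \(\lambda \) is not an eigenvalue of \(\overline{\Pi } \!\,M \overline{\Pi } \!\,\)):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 5

B = rng.standard_normal((n, n))
M = (B + B.T) / 2                              # Hermitian (real symmetric) test matrix
lam, vecs = np.linalg.eigh(M)
lam0, w = lam[10], vecs[:, 10]                 # an eigenpair of M

Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
P = Q @ Q.T                                    # a rank-k orthogonal projection
Pbar = np.eye(n) - P

# (2.1): Pbar w = -(Pbar M Pbar - lam0)^{-1} Pbar M P w, valid because lam0 is
# (generically) not an eigenvalue of Pbar M Pbar
rhs = -np.linalg.solve(Pbar @ M @ Pbar - lam0 * np.eye(n), Pbar @ M @ P @ w)
print(np.linalg.norm(Pbar @ w - rhs))          # ~1e-14
```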

In fact, we choose

$$\begin{aligned} \Pi :=\sum _{x \in {{\mathcal {W}}}_{\lambda ,\delta }} \varvec{\mathrm {v}}(x) \, \varvec{\mathrm {v}}(x)^*\,, \end{aligned}$$
(2.2)

where \({{\mathcal {W}}}_{\lambda ,\delta }\) is the set (1.7) of resonant vertices at energy \(\lambda \), and \(\varvec{\mathrm {v}}(x)\) is the exponentially decaying localization profile from Theorem 1.2. The proof then consists of two main ingredients:

  (a) \(\Vert \overline{\Pi } \!\,M \Pi \Vert = o(1)\);

  (b) \(\overline{\Pi } \!\,M \overline{\Pi } \!\,\) has a spectral gap around \(\lambda \).

Informally, (a) states that \(\Pi \) is close to a spectral projection of M, as \(\overline{\Pi } \!\,M \Pi = [M,\Pi ] \Pi \) quantifies the noncommutativity of M and \(\Pi \) on the range of \(\Pi \). Similarly, (b) states that \(\Pi \) projects roughly onto an eigenspace of M of energies near \(\lambda \). Plugging (a) and (b) into (2.1) yields an estimate on \(\Vert \overline{\Pi } \!\,\varvec{\mathrm {w}} \Vert \) from which Theorem 1.2 follows easily. Thus, the main work of the proof is to establish the properties (a) and (b) for the specific choice of \(\Pi \) from (2.2).

The construction of the localization profile \(\varvec{\mathrm {v}}(x)\) uses the pruned graph \({\mathbb {G}}_\tau \) from [10], a subgraph of \({\mathbb {G}}\) depending on a threshold \(\tau > 1\), which differs from \({\mathbb {G}}\) by only a small number of edges and whose balls of radius \(r_\star \) around the vertices of \({{\mathcal {V}}}_\tau :=\{x :\alpha _x \geqslant \tau \}\) are disjoint (see Proposition 3.1 below). Now we define the vector \(\varvec{\mathrm {v}}(x) :=\varvec{\mathrm {v}}^\tau _+(x)\), where, for \(\sigma = \pm \) and \(\tau > 1\),

$$\begin{aligned} \varvec{\mathrm {v}}^\tau _\sigma (x) :=\sum _{i = 0}^{r_\star } \sigma ^i u_i(x) \varvec{\mathrm {1}}_{S_i^{{\mathbb {G}}_\tau }(x)} / \Vert \varvec{\mathrm {1}}_{S_i^{{\mathbb {G}}_\tau }(x)} \Vert \,, \qquad u_i(x) :=\frac{\sqrt{\alpha _x}}{(\alpha _x - 1)^{i/2}} \, u_0 \quad (1 \leqslant i \leqslant r_\star )\,. \end{aligned}$$
(2.3)

The motivation behind this choice is explained in Appendix A.2: with high probability, the \(r_\star \)-neighbourhood of x in \({\mathbb {G}}_\tau \) looks roughly like that of the root of the infinite regular tree \({\mathbb {T}}_{D_x, d}\) whose root has \(D_x\) children and all other vertices d children. The adjacency matrix of \({\mathbb {T}}_{D_x, d}\) has the exact eigenvalues \(\pm \sqrt{d} \Lambda (\alpha _x)\) with the corresponding eigenvectors given by (2.3) with \({\mathbb {G}}_\tau \) replaced with \({\mathbb {T}}_{D_x, d}\).
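This tree picture is easy to check directly. The sketch below (Python; the parameters d, \(\alpha \), and the truncation depth r are arbitrary, with \(\alpha > 2\) needed for square-summability of the profile) tridiagonalizes the adjacency matrix of \({\mathbb {T}}_{D, d}\) in the basis of normalized spheres around the root and compares its top eigenvalue and eigenvector with \(\sqrt{d} \, \Lambda (\alpha )\) and the radial profile from (2.3).

```python
import numpy as np

d, alpha, r = 8.0, 3.0, 40                    # T_{D,d} with D = d*alpha, depth r
D = d * alpha

# tridiagonalization of the adjacency matrix of T_{D,d} in the basis of normalized
# spheres around the root: off-diagonal entries sqrt(D), sqrt(d), sqrt(d), ...
off = np.sqrt(np.array([D] + [d] * (r - 1)))
Z = (np.diag(off, 1) + np.diag(off, -1)) / np.sqrt(d)

Lam = alpha / np.sqrt(alpha - 1)              # Lambda(alpha) from (1.6)
vals, vecs = np.linalg.eigh(Z)
print(vals[-1], Lam)                          # agree up to an exponentially small error

# the top eigenvector matches the radial profile u_i of (2.3) with u_0 = 1
u = np.array([1.0] + [np.sqrt(alpha) * (alpha - 1) ** (-i / 2) for i in range(1, r + 1)])
u /= np.linalg.norm(u)
print(abs(vecs[:, -1] @ u))                   # ~1 (overlap)
```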

The central idea of our proof is the introduction of a block diagonal approximation of the pruned graph. Define the orthogonal projections

$$\begin{aligned} \Pi ^\tau :=\sum _{x \in {{\mathcal {V}}}_{2 + o(1)}} \sum _{\sigma = \pm } \varvec{\mathrm {v}}^\tau _\sigma (x) \varvec{\mathrm {v}}^\tau _\sigma (x)^*\,, \qquad \overline{\Pi } \!\,^\tau :=I - \Pi ^\tau \,. \end{aligned}$$

The range of \(\Pi \) from (2.2) is a subspace of the range of \(\Pi ^\tau \), i.e. \(\Pi \Pi ^\tau = \Pi \). The interpretation of \(\Pi ^\tau \) is the orthogonal projection onto all localization profiles around vertices x with normalized degree at least \(2 + o(1)\), which is precisely the set of vertices around which one can define an exponentially decaying localization profile. Now we define the block diagonal approximation of the pruned graph as

$$\begin{aligned} \widehat{H}^\tau :=\sum _{x \in {{\mathcal {V}}}_{2 + o(1)}} \sum _{\sigma = \pm } \sigma \Lambda (\alpha _x) \varvec{\mathrm {v}}^\tau _\sigma (x) \varvec{\mathrm {v}}^\tau _\sigma (x)^* + \overline{\Pi } \!\,^\tau H^\tau \overline{\Pi } \!\,^\tau \,; \end{aligned}$$
(2.4)

here we defined the centred and scaled adjacency matrix \(H^\tau :=A^{{\mathbb {G}}_\tau } / \sqrt{d} - E^\tau \), where \(E^\tau \) is a suitably chosen matrix that is close to \({\mathbb {E}}A^{{\mathbb {G}}} / \sqrt{d}\) and preserves the locality of \(A^{{\mathbb {G}}_\tau }\) in balls around the vertices of \({{\mathcal {V}}}_\tau \). In the subspace spanned by the localization profiles \(\{\varvec{\mathrm {v}}^\tau _\sigma (x) :\sigma = \pm , x \in {{\mathcal {V}}}_{2 + o(1)}\}\), \(\widehat{H}^\tau \) is diagonal with eigenvalues \(\sigma \Lambda (\alpha _x)\). In the orthogonal complement, it is equal to \(H^\tau \). The off-diagonal blocks are zero. The main work of our proof consists in an analysis of \(\widehat{H}^\tau \).

In terms of \(\widehat{H}^\tau \), abbreviating \(H :=(A^{{\mathbb {G}}} - {\mathbb {E}}A^{{\mathbb {G}}}) / \sqrt{d}\), the problem of showing (a) and (b) reduces to showing

  (c) \(\Vert H - \widehat{H}^\tau \Vert = o(1)\),

  (d) \(\Vert \overline{\Pi } \!\,^\tau H^\tau \overline{\Pi } \!\,^\tau \Vert \leqslant 2 + o(1)\).

Indeed, ignoring minor issues pertaining to the centring \({\mathbb {E}}A^{{\mathbb {G}}}\), we replace \(M = A^{{\mathbb {G}}} / \sqrt{d}\) with H in (a) and (b). Then (a) follows immediately from (c), since \(\Vert \overline{\Pi } \!\,H \Pi \Vert = \Vert \overline{\Pi } \!\,\widehat{H}^\tau \Pi \Vert + o(1) = o(1)\), as \(\overline{\Pi } \!\,\widehat{H}^\tau \Pi = 0\) by the block structure of \(\widehat{H}^\tau \) and the relation \(\Pi ^\tau \Pi = \Pi \). To show (b), we note that the \(\Pi ^\tau \)-block of \(\widehat{H}^\tau \), \(\Pi ^\tau \widehat{H}^\tau \Pi ^\tau = \sum _{x \in {{\mathcal {V}}}_{2 + o(1)}} \sum _{\sigma = \pm } \sigma \Lambda (\alpha _x) \varvec{\mathrm {v}}^\tau _\sigma (x) \varvec{\mathrm {v}}^\tau _\sigma (x)^*\), trivially has a spectral gap: \(\overline{\Pi } \!\,\Pi ^\tau \widehat{H}^\tau \Pi ^\tau \overline{\Pi } \!\,\) has no eigenvalues in the \(\delta \)-neighbourhood of \(\lambda \), simply because the projection \(\overline{\Pi } \!\,\) removes the projections \(\varvec{\mathrm {v}}^\tau _\sigma (x) \varvec{\mathrm {v}}^\tau _\sigma (x)^*\) with eigenvalues \(\sigma \Lambda (\alpha _x)\) in the \(\delta \)-neighbourhood of \(\lambda \). Moreover, the \(\overline{\Pi } \!\,^\tau \)-block also has such a spectral gap by (d) and \(\lambda > 2 + o(1)\). Hence, by (c), we deduce the desired spectral gap (b).

Thus, what remains is the proof of (c) and (d). To prove (c), we prove \(\Vert H - H^\tau \Vert = o(1)\) and \(\Vert H^\tau - \widehat{H}^\tau \Vert = o(1)\). The bound \(\Vert H - H^\tau \Vert = o(1)\) follows from a detailed analysis of the graph \({\mathbb {G}} \setminus {\mathbb {G}}_\tau \) removed from \({\mathbb {G}}\) to obtain the pruned graph \({\mathbb {G}}_\tau \), which we decompose as a union of a graph of small maximal degree and a forest, to which standard estimates of adjacency matrices of graphs can be applied (see Lemma 3.8 below). To prove \(\Vert H^\tau - \widehat{H}^\tau \Vert = o(1)\), we first prove that \(\varvec{\mathrm {v}}^\tau _\sigma (x)\) is an approximate eigenvector of \(H^\tau \) with approximate eigenvalue \(\sigma \Lambda (\alpha _x)\) (see Proposition 3.9 below). Then we deduce \(\Vert H^\tau - \widehat{H}^\tau \Vert = o(1)\) using that the balls \(B_{2r_\star }(x)\), \(x \in {{\mathcal {V}}}_{2 + o(1)}\), are disjoint and the locality of the operator \(H^\tau \) (see Lemma 3.11 below). Thus we obtain (c).

Finally, we sketch the proof of (d). The starting point is an observation going back to [10, 15]: from an estimate on the spectral radius of the nonbacktracking matrix associated with H from [15] and an Ihara–Bass-type formula relating the spectra of H and its nonbacktracking matrix from [15], we obtain the quadratic form inequality \(|H | \leqslant I + Q + o(1)\) with very high probability, where \(Q = {{\,\mathrm{diag}\,}}(\alpha _x :x \in [N])\), \(|H |\) is the absolute value of the Hermitian matrix H, and o(1) is in the sense of operator norm (see Proposition 3.13 below). Using (c), we deduce the inequality

$$\begin{aligned} |\widehat{H}^\tau | \leqslant I + Q + o(1)\,. \end{aligned}$$
(2.5)

To estimate \(\Vert \overline{\Pi } \!\,^\tau H^\tau \overline{\Pi } \!\,^\tau \Vert \), we take a normalized eigenvector \(\varvec{\mathrm {w}}\) of \(\overline{\Pi } \!\,^\tau H^\tau \overline{\Pi } \!\,^\tau \) with maximal eigenvalue \(\lambda > 0\). Thus, \(\varvec{\mathrm {w}} \perp \varvec{\mathrm {v}}^\tau _\pm (x)\) for all \(x \in {{\mathcal {V}}}_{2 + o(1)}\). We estimate \(\overline{\Pi } \!\,^\tau H^\tau \overline{\Pi } \!\,^\tau \) from above (an analogous argument yields an estimate from below) using (2.5) to get

$$\begin{aligned} \lambda \leqslant 1 + o(1) + \sum _x \alpha _x w_x^2 \leqslant 1 + \tau + o(1) + \max _x \alpha _x \sum _{x \in {{\mathcal {V}}}_\tau } w_x^2\,. \end{aligned}$$
(2.6)

Choosing \(\tau = 1 + o(1)\), we see that (d) follows provided that we can show that

$$\begin{aligned} \sum _{x\in {{\mathcal {V}}}_\tau }w_{x}^{2} = o(1 / \log N)\,, \end{aligned}$$
(2.7)

since \(\max _x \alpha _x \leqslant {{\mathcal {C}}} \log N\) with very high probability.

The estimate (2.7) is a delocalization bound, in the vertex set \({{\mathcal {V}}}_\tau \), for any eigenvector \(\varvec{\mathrm {w}}\) of \(\widehat{H}^\tau \) that is orthogonal to \(\varvec{\mathrm {v}}_\pm ^\tau (x)\) for all \(x \in \mathcal V_{2 + o(1)}\) and whose associated eigenvalue is larger than \(2 \tau + o(1)\). It crucially relies on the assumption that \(\varvec{\mathrm {w}} \perp \varvec{\mathrm {v}}_\pm ^\tau (x)\) for all \(x \in {{\mathcal {V}}}_{2 + o(1)}\), without which it is false (see Proposition 3.14 below). The underlying principle behind its proof is the same as that of the Combes–Thomas estimate [25]: the Green function \(((\lambda - Z)^{-1})_{ij}\) of a local operator Z at a spectral parameter \(\lambda \) separated from the spectrum of Z decays exponentially in the distance between i and j, at a rate that grows with the distance from \(\lambda \) to the spectrum of Z. We in fact use a radial form of a Combes–Thomas estimate, where Z is the tridiagonalization of a local restriction of \(\widehat{H}^\tau \) around a vertex \(x \in {{\mathcal {V}}}_\tau \) (see Appendix A.2) and i, j index radii of concentric spheres. The key observation is that, by the orthogonality assumption on \(\varvec{\mathrm {w}}\), the Green function \(((\lambda - Z)^{-1})_{i r_\star }\), \(0 \leqslant i < r_\star \), and the eigenvector components in the radial basis \(u_i\), \(0 \leqslant i < r_\star \), satisfy the same linear difference equation. Thus we obtain exponential decay for the components \(u_i\), which yields \(u_0^2 \leqslant o(1/\log N) \sum _{i = 0}^{r_\star } u_i^2\). Going back to the original vertex basis, this implies that \(w_x^2 \leqslant o(1/\log N) \Vert \varvec{\mathrm {w}}|_{B_{2r_\star }^{{\mathbb {G}}_\tau }(x)}\Vert ^2\) for all \(x \in \mathcal V_\tau \), from which (2.7) follows since the balls \(B_{2r_\star }^{{\mathbb {G}}_\tau }(x)\), \(x \in {{\mathcal {V}}}_\tau \), are disjoint.
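
As a purely numerical illustration of this mechanism (not of the actual proof), the following sketch takes for Z a toy Jacobi matrix standing in for the radial tridiagonalization; the matrix, its size, and the spectral parameter are illustrative stand-ins. It exhibits the exponential off-diagonal decay of the Green function at a spectral parameter separated from the spectrum.

```python
import numpy as np

# Combes-Thomas mechanism in a toy model: for a tridiagonal (hence local)
# operator Z and a spectral parameter lam outside spec(Z), the Green
# function entries ((lam - Z)^{-1})_{ij} decay exponentially in |i - j|.
# This Z is only a stand-in for the radial tridiagonalization of the proof.
n = 40
off = np.ones(n - 1)                    # hopping between consecutive spheres
Z = np.diag(off, 1) + np.diag(off, -1)  # spec(Z) lies in (-2, 2)

lam = 2.5                               # separated from spec(Z)
G = np.linalg.inv(lam * np.eye(n) - Z)

decay = np.abs(G[0, :])
rate = np.log(decay[:-1] / decay[1:])   # local decay rate per step
print(rate[5:15].mean())                # ~ log 2, since lam = 2 cosh(rate)
```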

2.3 Overview of proof in delocalized phase

The delocalization result of Theorem 1.8 is an immediate consequence of a local law for the matrix \(A / \sqrt{d}\), which controls the entries of the Green function

$$\begin{aligned} G \equiv G(z) :=\bigl (A / \sqrt{d} - z\bigr )^{-1} \end{aligned}$$

in the form of high-probability estimates, for spectral scales \({{\,\mathrm{Im}\,}}z\) down to the optimal scale 1/N, which is the typical eigenvalue spacing. Such a local law was first established for \(d \gg (\log N)^6\) in [35] and extended down to \(d \geqslant {{\mathcal {C}}} \log N\) in [42]. In both of these works, the diagonal entries of G are close to the Stieltjes transform of the semicircle law. In contrast, in the regime (1.4) the diagonal entry \(G_{xx}\) is close to the Stieltjes transform of the spectral measure at the root of an infinite \((D_x,d)\)-regular tree. Hence, \(G_{xx}\) does not concentrate around a deterministic quantity.

The basic approach of the proof is the same as for any local law: derive an approximate self-consistent equation with very high probability, solve it using a stability analysis, and perform a bootstrapping from large to small values of \({{\,\mathrm{Im}\,}}z\). For a set \(T \subset [N]\) denote by \(A^{(T)}\) the adjacency matrix of the graph \({\mathbb {G}}\) where the vertices of T (and all incident edges) have been removed, and denote by \(G^{(T)} = \bigl (A^{(T)} / \sqrt{d} - z\bigr )^{-1}\) the associated Green function. In order to understand the emergence of the self-consistent equation, it is instructive to consider the toy situation where, for a given vertex x, all neighbours \(S_1(x)\) are in different connected components of \(A^{(x)}\). This is for instance the case if \({\mathbb {G}}\) is a tree. On the global scale, where \({{\,\mathrm{Im}\,}}z\) is large enough, this assumption is in fact valid to a good approximation, since the neighbourhood of x is with high probability a tree. Then a simple application of Schur’s complement formula and the resolvent identity yields

$$\begin{aligned} \frac{1}{G_{xx}} = -z - \frac{1}{d} \sum _{y \in S_1(x)} G_{yy}^{(x)} \,, \qquad G_{yy}^{(x)} - G_{yy} = (G_{yy}^{(x)})^2 \frac{1}{d} G_{xx}\,. \end{aligned}$$
(2.8)

Thus, on the global scale, using that G is bounded, we obtain the self-consistent equation

$$\begin{aligned} \frac{1}{G_{xx}} = -z - \frac{1}{d} \sum _{y \in S_1(x)} G_{yy} + o(1) \end{aligned}$$
(2.9)

with very high probability.
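
As a sanity check of this toy situation, the first identity in (2.8) can be verified numerically on a tree, where it holds exactly; the random recursive tree below and the values of N, d, and z are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Check of the first identity in (2.8) on a tree, where the neighbours of
# x lie in different components of A^{(x)} and the identity is exact.
N, d = 50, 3.0
A = np.zeros((N, N))
for t in range(1, N):                  # vertex t attaches to an earlier one
    s = rng.integers(0, t)
    A[t, s] = A[s, t] = 1.0

z = 0.3 + 0.1j
G = np.linalg.inv(A / np.sqrt(d) - z * np.eye(N))

x = 0
keep = np.arange(N) != x               # remove the vertex x
Gx = np.linalg.inv(A[np.ix_(keep, keep)] / np.sqrt(d) - z * np.eye(N - 1))

nbrs = np.flatnonzero(A[x])            # S_1(x); reduced index of y is y - 1
lhs = 1.0 / G[x, x]
rhs = -z - sum(Gx[y - 1, y - 1] for y in nbrs) / d
print(abs(lhs - rhs))                  # ~ 1e-15: the identity is exact here
```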

It is instructive to solve the self-consistent equation (2.9) in the family \((G_{xx})_{x \in [N]}\) on the global scale. To that end, we introduce the notion of typical vertices, which is roughly the set \({{\mathcal {T}}} = \{x \in [N] :\alpha _x = 1 + o(1)\}\). (In fact, as explained below, the actual definition for local scales has to be different; see (2.12) below.) A simple argument shows that with very high probability most neighbours of any vertex are typical. With this definition, we can try to solve (2.9) on the global scale as follows. From the boundedness of G we obtain a self-consistent equation for the vector \((G_{xx})_{x \in {{\mathcal {T}}}}\) that reads

$$\begin{aligned} \frac{1}{G_{xx}} = -z - \sum _{y \in {{\mathcal {T}}}} \frac{1}{d} A_{xy} G_{yy} + \zeta _x\,, \qquad \zeta _x = o(1)\,. \end{aligned}$$
(2.10)

It is not hard to see that the equation (2.10) has a unique solution, which satisfies \(G_{xx} = m + o(1)\) for all \(x \in {{\mathcal {T}}}\). Here m is the Stieltjes transform of the semicircle law, which satisfies \(m = \frac{1}{-z - m}\). Plugging this solution back into (2.9) and using that most neighbours of any vertex are typical shows that for \(x \notin {{\mathcal {T}}}\) we have \(G_{xx} = m_{\alpha _x} + o(1)\), where \(m_\alpha :=\frac{1}{-z - \alpha m}\). One readily finds (see Appendix A.2 below) that \(m_{\alpha _x}\) is the Stieltjes transform of the spectral measure of the infinite \((D_x,d)\)-regular tree at the root.
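
The following sketch makes this solution concrete: it solves the fixed-point equation for m by iteration and then evaluates \(m_\alpha \); the spectral parameter z and the values of \(\alpha \) are arbitrary illustration choices.

```python
import numpy as np

# Solve m = 1/(-z - m) by fixed-point iteration (a contraction for
# Im z > 0) and evaluate m_alpha = 1/(-z - alpha*m), the Stieltjes
# transform at the root of the infinite (alpha*d, d)-regular tree.
z = 1.2 + 0.05j
m = 0j
for _ in range(10_000):
    m = 1.0 / (-z - m)

m_exact = (-z + np.sqrt(z * z - 4)) / 2   # closed form, branch Im m > 0
if m_exact.imag < 0:
    m_exact = (-z - np.sqrt(z * z - 4)) / 2
print(abs(m - m_exact))                   # ~ 1e-15

for alpha in (1.0, 1.5, 3.0):
    print(alpha, 1.0 / (-z - alpha * m))  # alpha = 1 recovers m itself
```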

The first main difficulty of the proof is to provide a derivation of identities of the form (2.8) (and hence a self-consistent equation of the form (2.9)) on the local scale \({{\,\mathrm{Im}\,}}z \ll 1\). We emphasize that the above derivation of (2.8) is completely wrong on the local scale. Unlike on the global scale, on the local scale the behaviour of the Green function is not governed by the local geometry of the graph, and long cycles contribute to G in an essential way. In particular, eigenvector delocalization, which follows from the local law, is a global property of the graph and cannot be addressed using local arguments; it is in fact wrong outside of the region \(\mathcal S_\kappa \), although the above derivation is insensitive to the real part of z.

We address this difficulty by replacing the identities (2.8) with the following argument, which ultimately provides an a posteriori justification of approximate versions of (2.8) with very high probability, provided we are in the region \({{\mathcal {S}}}_\kappa \). We make an a priori assumption that the entries of G are bounded with very high probability; we propagate this assumption from large to small scales using a standard bootstrapping argument and the uniform boundedness of the density of the spectral measure associated with \(m_\alpha \). It is precisely this uniform boundedness requirement that imposes the restriction to \({{\mathcal {S}}}_\kappa \) in our local law (as explained in Remark 1.10, this restriction is necessary). The key tool that replaces the simpleminded approximation (2.8) is a series of large deviation estimates for sparse random vectors proved in [42], which, as it turns out, are effective for the full optimal regime (1.4). Thus, under the bootstrapping assumption that the entries of G are bounded, we obtain (2.8) (and hence also (2.9)), with some additional error terms, with very high probability.

The second main difficulty of the proof is that, on the local scale and for sparse graphs, the self-consistent equation (2.10), which can be derived from (2.9) as explained above, is not stable enough to be solved in \((G_{xx})_{x \in {{\mathcal {T}}}}\). This problem stems from the sparseness of the graphs that we are considering, and does not appear in random matrix theory for denser (or even heavy-tailed) matrices. Indeed, the stability estimates for (2.10) carry a logarithmic factor, which is usually of no concern in random matrix theory but is fatal in the sparse regime of this paper; it ultimately dooms the self-consistent equation (2.10). To explain the issue, write the sum in (2.10) as \(\sum _y S_{xy} G_{yy}\), where S is the \({{\mathcal {T}}} \times {{\mathcal {T}}}\) matrix \(S_{xy} = \frac{1}{d} A_{xy}\). Writing \(G_{xx} = m + \varepsilon _x\), plugging it into (2.10), and expanding to first order in \(\varepsilon _x\), we obtain, using the definition of m, that \(\varepsilon _x = -m^2 ((I - m^2 S)^{-1} \zeta )_x\). Thus, in order to deduce smallness of \(\varepsilon _x\) from the smallness of \(\zeta _x\), we need an estimate on the norm \(\Vert (I - m^2 S)^{-1} \Vert _{\infty \rightarrow \infty }\). In Appendix A.10 below we show that for typical S, \({{\,\mathrm{Re}\,}}z \in {{\mathcal {S}}}_\kappa \), and small enough \({{\,\mathrm{Im}\,}}z\), we have

$$\begin{aligned} \frac{\log N}{C (\log \log N)^2} \leqslant \Vert (I - m^2 S)^{-1} \Vert _{\infty \rightarrow \infty } \leqslant C_\kappa \log N \end{aligned}$$
(2.11)

for some universal constant C and some constant \(C_\kappa \) depending on \(\kappa \). In our context, where \(\zeta _x\) is small but much larger than the reciprocal of the lower bound of (2.11), such a logarithmic factor is not affordable.
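
The lower bound in (2.11) is easy to observe numerically. The following sketch estimates \(\Vert (I - m^2 S)^{-1} \Vert _{\infty \rightarrow \infty }\) (the maximal absolute row sum of the inverse) for \(S = A/d\) built from a full Erdős–Rényi adjacency matrix, without the restriction to typical vertices; the sizes and the spectral parameter are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate ||(I - m^2 S)^{-1}||_{inf->inf} for S = A/d with A an
# Erdos-Renyi adjacency matrix at the critical scale d ~ log N. This is
# only an illustration of the instability: no restriction to typical
# vertices is made, and the spectral parameter is an arbitrary choice.
N = 1000
d = np.log(N)
A = (rng.random((N, N)) < d / N).astype(float)
A = np.triu(A, 1)
A = A + A.T
S = A / d

z = 0.5 + 0.01j
m = (-z + np.sqrt(z * z - 4)) / 2          # semicircle Stieltjes transform
if m.imag < 0:                             # fix the branch with Im m > 0
    m = (-z - np.sqrt(z * z - 4)) / 2

M = np.linalg.inv(np.eye(N) - m * m * S)
norm_inf = np.abs(M).sum(axis=1).max()
print(norm_inf, np.log(N))                 # compare with the log N scale of (2.11)
```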

To address this difficulty, we bypass the form (2.10) altogether, as it is doomed by (2.11). The underlying cause for the instability of (2.10) is the inhomogeneous local structure of the matrix S, which is a multiple of the adjacency matrix of a sparse graph. Thus, the solution is to derive a self-consistent equation of the form (2.10) but with an unstructured S, with constant entries. The basic intuition is to replace the local average \(\frac{1}{d} \sum _{y \in S_1(x)} G_{yy}^{(x)}\) in the first identity of (2.8) with the global average \(\frac{1}{N} \sum _{y \ne x} G_{yy}^{(x)}\). Of course, in general these two are not close, but we can include their closeness into the definition of a typical vertex. Thus, we define the set of typical vertices as

$$\begin{aligned} {{\mathcal {T}}} :=\biggl \{x \in [N] :\alpha _x = 1 + o(1) \,,\, \frac{1}{d} \sum _{y \in S_1(x)} G_{yy}^{(x)} = \frac{1}{N} \sum _{y \ne x} G_{yy}^{(x)} + o(1)\biggr \}\,. \end{aligned}$$
(2.12)

The main work of the proof is then to prove the following facts with very high probability.

  1. (a)

    Most vertices are typical.

  2. (b)

    Most neighbours of any vertex are typical.

With (a) and (b) at hand, we explain how to conclude the proof. Using (a) and the approximate version of (2.8) established above, we deduce the self-consistent equation for typical vertices,

$$\begin{aligned} \frac{1}{G_{xx}} = -z - \frac{1}{|{{\mathcal {T}}} |} \sum _{y \in {{\mathcal {T}}}} G_{yy} + o(1)\,, \qquad x \in {{\mathcal {T}}}\,, \end{aligned}$$

which, unlike (2.10), is stable (see Lemma 4.19 below) and can be easily solved to show that \(G_{xx} = m + o(1) = m_{\alpha _x} + o(1)\) for all \(x \in {{\mathcal {T}}}\). Moreover, if \(x \notin {{\mathcal {T}}}\) then we obtain from (2.8) and (b) that

$$\begin{aligned} \frac{1}{G_{xx}} = -z - \frac{1}{d} \sum _{y \in S_1(x) \cap {{\mathcal {T}}}} G_{yy}^{(x)} + o(1) = -z - \alpha _x m + o(1)\,, \end{aligned}$$

where we used that \(G_{yy} = m + o(1)\) for \(y \in {{\mathcal {T}}}\). This shows that \(G_{xx} = m_{\alpha _x} + o(1)\) for all \(x \in [N]\) with very high probability, and hence concludes the proof.

What remains, therefore, is the proof of (a) and (b); see Proposition 4.8 below for a precise statement. Using the bootstrapping assumption of boundedness of the entries of G, it is not hard to estimate the probability \({\mathbb {P}}(x \in {{\mathcal {T}}})\), which we prove to be \(1 - o(1)\), although \(\{x \in {{\mathcal {T}}}\}\) does not hold with very high probability (this characterizes the critical and subcritical regimes). Now if the events \(\{x \in {{\mathcal {T}}}\}\), \(x \in [N]\), were all independent, it would then be a simple matter to deduce (a) and (b).

The most troublesome source of dependence among the events \(\{x \in {{\mathcal {T}}}\}\), \(x \in [N]\), is the Green function \(G_{yy}^{(x)}\) in the definition of \({{\mathcal {T}}}\). Thus, the main difficulty of the proof is a decoupling argument that allows us to obtain good decay for the probability \({\mathbb {P}}(T \subset {{\mathcal {T}}})\) in the size of T. This decay can only work up to a threshold in the size of T, beyond which the correlations among the different events kick in. In fact, we essentially prove that

$$\begin{aligned} {\mathbb {P}}(T \subset {{\mathcal {T}}}) \leqslant \mathrm {e}^{- o(1) d |T |} + {{\mathcal {C}}} N^{-\nu } \qquad \text {for} \quad |T | = o(d)\,; \end{aligned}$$
(2.13)

see Lemma 4.12. Choosing T as large as possible, \(|T | = o(d)\), we find that the first term on the right-hand side of (2.13) is bounded by \(N^{-\nu }\) provided that \(o(1) d^2 \geqslant \nu \log N\), which corresponds precisely to the optimal lower bound in (1.18). Using (2.13), we may deduce (a) and (b).

To prove (2.13), we need to decouple the events \(\{x \in {{\mathcal {T}}}\}\), \(x \in T\). We do so by replacing the Green functions \(G^{(x)}\) in the definition of \({{\mathcal {T}}}\) by \(G^{(T)}\), after which the corresponding events are essentially independent. The error that we incur depends on the difference \(G^{(T)}_{yy} - G_{yy}\), which we have to show is small with very high probability under the bootstrapping assumption that the entries of G are bounded. For T of fixed size, this follows easily from standard resolvent identities. However, for our purposes it is crucial that T can have size up to o(d), which requires a more careful quantitative analysis. As it turns out, \(G^{(T)}_{yy} - G_{yy}\) is small only up to \(|T | = o(d)\), which is precisely what we need to reach the optimal scale \(d \gg \sqrt{\log N}\) from (1.4).

3 The Semilocalized Phase

In this section we prove the results of Sect. 1.2, namely Theorems 1.2 and 1.7.

3.1 The pruned graph and proof of Theorem 1.2

The balls \((B_r(x))_{x \in {{\mathcal {W}}}_{\lambda , \delta }}\) in Theorem 1.2 are in general not disjoint. For its proof, and in order to give a precise definition of the vector \(\varvec{\mathrm {v}}(x)\) in Theorem 1.2, we need to make these balls disjoint by pruning the graph \({\mathbb {G}}\). This is an important ingredient of the proof, and will also allow us to state a more precise version of Theorem 1.2, which is Theorem 3.4 below. This pruning was previously introduced in [10]; it is performed by cutting edges of \({\mathbb {G}}\) so that the balls \((B_r(x))_{x \in {{\mathcal {W}}}_{\lambda , \delta }}\) become disjoint for the radius \(r = 2 r_\star \). Cutting carefully in the right places keeps the number of removed edges small, which ensures that the pruned graph is close to the original graph in an appropriate sense. The pruned graph, \({\mathbb {G}}_\tau \), depends on a parameter \(\tau > 1\), and its construction is the subject of the following proposition.

To state it, we introduce the following notations. For a subgraph \({\mathbb {G}}_\tau \) of \({\mathbb {G}}\) we abbreviate

$$\begin{aligned} B^\tau _i(x) :=B^{{\mathbb {G}}_\tau }_i(x)\,, \qquad S^\tau _i(x) :=S^{{\mathbb {G}}_\tau }_i(x)\,. \end{aligned}$$

Moreover, we define the set of vertices with large degrees

$$\begin{aligned} {{\mathcal {V}}}_\tau :=\{x \in [N] :\alpha _x \geqslant \tau \}\,. \end{aligned}$$

Proposition 3.1

(Existence of pruned graph). Let \(1 + \xi ^{1/2} \leqslant \tau \leqslant 2\) and \(d \leqslant 3 \log N\). There exists a subgraph \({\mathbb {G}}_\tau \) of \({\mathbb {G}}\) with the following properties.

  1. (i)

    Any path in \({\mathbb {G}}_\tau \) connecting two different vertices in \({{\mathcal {V}}}_\tau \) has length at least \(4 r_{\star } +1\). In particular, the balls \((B_{2 r_{\star }}^{\tau }(x))_{x \in {{\mathcal {V}}}_\tau }\) are disjoint.

  2. (ii)

    The induced subgraph \({\mathbb {G}}_\tau |_{B_{2 r_{\star }}^{\tau }(x)}\) is a tree for each \(x \in {{\mathcal {V}}}_\tau \).

  3. (iii)

    For each edge in \({\mathbb {G}}\setminus {\mathbb {G}}_\tau \), there is at least one vertex in \({{\mathcal {V}}}_\tau \) incident to it.

  4. (iv)

    For each \(x \in {{\mathcal {V}}}_\tau \) and each \(i \in {\mathbb {N}}\) satisfying \(1 \leqslant i \leqslant 2 r_{\star }\) we have \(S_i^{\tau }(x) \subset S_i(x)\).

  5. (v)

    The degrees induced on [N] by \({\mathbb {G}}\setminus {\mathbb {G}}_\tau \) are bounded according to

    $$\begin{aligned} \max _{x \in [N]} D_x^{{\mathbb {G}} \setminus {\mathbb {G}}_\tau } \leqslant {{\mathcal {C}}} \frac{\log N}{(\tau -1)^2d} \end{aligned}$$
    (3.1)

    with very high probability.

  6. (vi)

    Suppose that \(\sqrt{\log N} \leqslant d\). For each \(x \in {{\mathcal {V}}}_\tau \) and all \(2 \leqslant i \leqslant 2 r_\star \), the bound

    $$\begin{aligned} |S_{i}(x)\setminus S_{i}^{\tau }(x)|\leqslant {\mathcal {C}}\frac{\log N}{(\tau -1)^2}d^{i-2} \end{aligned}$$
    (3.2)

    holds with very high probability.

The proof of Proposition 3.1 is postponed to the end of this section, in Sect. 3.5 below. It is essentially [10, Lemma 7.2], the main difference being that (vi) is considerably sharper than its counterpart, [10, Lemma 7.2 (vii)]; this stronger bound is essential to cover the full optimal regime (1.4) (see Sect. 1.5). As a guide for the reader’s intuition, we recall the main idea of the pruning. First, for every \(x \in {{\mathcal {V}}}_\tau \), we make the \(2 r_\star \)-neighbourhood of x a tree by removing appropriate edges incident to x. Second, we take all paths of length less than \(4 r_\star + 1\) connecting different vertices in \({{\mathcal {V}}}_\tau \), and remove all of their edges incident to any vertex in \({{\mathcal {V}}}_\tau \). Note that only edges incident to vertices in \({{\mathcal {V}}}_\tau \) are removed. This informal description already explains properties (i)–(iv). Properties (v) and (vi) are probabilistic in nature, and express that with very high probability the pruning has a small impact on the graph. See also Lemma 3.8 below for a statement in terms of operator norms of the adjacency matrices. For the detailed algorithm, we refer to the proof of [10, Lemma 7.2].
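
To make the two-step description concrete, here is a greatly simplified sketch of such a pruning on an adjacency-set representation of the graph. The function names and the data layout are ours, and the actual algorithm of [10, Lemma 7.2] chooses the cut edges much more carefully (in particular, it only ever cuts edges incident to \({{\mathcal {V}}}_\tau \), cf. property (iii)); that careful choice is what yields properties (iv)–(vi), which this sketch makes no attempt to reproduce.

```python
from collections import deque

def bfs_tree(adj, x, radius):
    """Breadth-first search from x up to the given radius; returns the
    distance and parent maps of the BFS tree."""
    dist, parent = {x: 0}, {x: None}
    queue = deque([x])
    while queue:
        u = queue.popleft()
        if dist[u] == radius:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v], parent[v] = dist[u] + 1, u
                queue.append(v)
    return dist, parent

def prune(adj, heavy, r_star):
    """Simplified pruning sketch. adj maps each vertex to the set of its
    neighbours and is modified in place; heavy plays the role of V_tau."""
    # Step 1: make the 2*r_star-neighbourhood of every heavy vertex a
    # tree, by cutting each explored edge that is not a BFS tree edge.
    for x in heavy:
        dist, parent = bfs_tree(adj, x, 2 * r_star)
        for u in list(dist):
            for v in list(adj[u]):
                if v in dist and parent.get(u) != v and parent.get(v) != u:
                    adj[u].discard(v)
                    adj[v].discard(u)
    # Step 2: if another heavy vertex y lies within distance 4*r_star of
    # x, cut the edge of the connecting BFS path that is incident to x.
    for x in heavy:
        dist, parent = bfs_tree(adj, x, 4 * r_star)
        for y in heavy:
            if y != x and y in dist:
                u = y
                while parent[u] != x:
                    u = parent[u]
                adj[x].discard(u)
                adj[u].discard(x)
```

Run on a dictionary mapping each vertex to the set of its neighbours, this enforces rough analogues of properties (i)–(iii); that the pruning removes very little, as expressed by (v) and (vi), is a probabilistic statement that depends on the more careful cutting of [10].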

Using the pruned graph \({\mathbb {G}}_\tau \), we can give a more precise formulation of Theorem 1.2, where the localization profile vector \(\varvec{\mathrm {v}}(x)\) from Theorem 1.2 is explicit. For its statement, we introduce the set of vertices

$$\begin{aligned} {{\mathcal {V}}} :={{\mathcal {V}}}_{2 + \xi ^{1/4}} \end{aligned}$$
(3.3)

around which a localization profile can be defined.

Definition 3.2

(Localization profile). Let \(1 + \xi ^{1/2} \leqslant \tau \leqslant 2\) and \({\mathbb {G}}_\tau \) be the pruned graph from Proposition 3.1. For \(x \in {{\mathcal {V}}}\) we introduce positive weights \(u_0(x), u_1(x), \dots , u_{r_\star }(x)\) as follows. Set \(u_0(x) > 0\) and define, for \(i = 1, \dots , r_\star - 1\),

$$\begin{aligned} u_i(x) :=\frac{\sqrt{\alpha _x}}{(\alpha _x - 1)^{i/2}} \, u_0(x)\,, \qquad u_{r_\star }(x) :=\frac{1}{(\alpha _x - 1)^{(r_\star - 1)/2}} \, u_0(x)\,. \end{aligned}$$
(3.4)

For \(\sigma = \pm \) we define the radial vector

$$\begin{aligned} \varvec{\mathrm {v}}^\tau _\sigma (x) :=\sum _{i = 0}^{r_\star } \sigma ^i u_i(x) \frac{\varvec{\mathrm {1}}_{S_i^\tau (x)}}{\Vert \varvec{\mathrm {1}}_{S_i^\tau (x)} \Vert }\,, \end{aligned}$$
(3.5)

and choose \(u_0(x) > 0\) such that \(\varvec{\mathrm {v}}^\tau _\sigma (x)\) is normalized.

Remark 3.3

The family \((\varvec{\mathrm {v}}_\sigma ^\tau (x) :x \in {{\mathcal {V}}}, \,\sigma = \pm )\) is orthonormal. Indeed, if \(x,y \in {{\mathcal {V}}}\) are distinct, then by Proposition 3.1 (i) the vectors \(\varvec{\mathrm {v}}^\tau _{\sigma }(x)\) and \(\varvec{\mathrm {v}}^\tau _{{\tilde{\sigma }}}(y)\) are orthogonal for any \(\sigma , {\tilde{\sigma }} = \pm \) because they are supported on disjoint sets of vertices. Moreover, \(\varvec{\mathrm {v}}^\tau _+(x)\) and \(\varvec{\mathrm {v}}^\tau _-(x)\) are orthogonal by the choice of \(u_{r_\star }(x)\) from (3.4), as can be seen by a simple computation.
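
The simple computation mentioned in Remark 3.3 can also be checked numerically in the radial basis, where \(\langle \varvec{\mathrm {v}}^\tau _+(x), \varvec{\mathrm {v}}^\tau _-(x)\rangle \) reduces to the alternating sum \(\sum _i (-1)^i u_i(x)^2\); the values of \(\alpha \) and \(r_\star \) below are arbitrary illustration choices.

```python
import numpy as np

# Check, in the radial basis, that the weights (3.4) make v_+ and v_-
# orthogonal: <v_+, v_-> = sum_i (-1)^i u_i^2 vanishes thanks to the
# special last weight u_{r_star}.
alpha, r_star = 3.0, 12
u = np.empty(r_star + 1)
u[0] = 1.0
for i in range(1, r_star):
    u[i] = np.sqrt(alpha) / (alpha - 1) ** (i / 2)
u[r_star] = 1.0 / (alpha - 1) ** ((r_star - 1) / 2)
u /= np.linalg.norm(u)                 # fixes u_0(x) so that v_+ is normalized

signs = (-1.0) ** np.arange(r_star + 1)
print(np.dot(u, u))                    # 1.0: v_+ is normalized
print(np.dot(u, signs * u))            # ~ 0: v_+ and v_- are orthogonal
```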

The following result restates Theorem 1.2 by identifying \(\varvec{\mathrm {v}}(x)\) there as \(\varvec{\mathrm {v}}_+^\tau (x)\) given in (3.5). It easily implies Theorem 1.2, and the rest of this section is devoted to its proof.

Theorem 3.4

The following holds with very high probability. Suppose that d satisfies (1.10). Let \(\varvec{\mathrm {w}}\) be a normalized eigenvector of \(A/\sqrt{d}\) with nontrivial eigenvalue \(\lambda \geqslant 2+ {{\mathcal {C}}} \xi ^{1/2}\). Choose \(0<\delta \leqslant (\lambda -2)/2\) and set \(\tau :=1 + \bigl ((\lambda -2)/8\wedge 1\bigr )\). Then

$$\begin{aligned} \biggl \Vert \varvec{\mathrm {w}} - \sum _{x \in {{\mathcal {W}}}_{\lambda ,\delta }} \langle \varvec{\mathrm {v}}^\tau _+(x) \,, \varvec{\mathrm {w}} \rangle \, \varvec{\mathrm {v}}^\tau _+(x) \biggr \Vert \leqslant \frac{{{\mathcal {C}}} (\xi +\xi _{\tau -1})}{\delta }\,. \end{aligned}$$
(3.6)

Remark 3.5

An analogous result holds for negative eigenvalues \(-\lambda \), where \(\lambda \) is as in Theorem 3.4 and \(\varvec{\mathrm {v}}_+^\tau (x)\) in (3.6) is replaced with \(\varvec{\mathrm {v}}_-^\tau (x)\).

For the motivation behind Definition 3.2, we refer to the discussion in Sect. 2.2 and Appendix A.2. As explained there, if \({\mathbb {G}}_\tau \) is sufficiently close to the infinite tree \({\mathbb {T}}_{D_x, d}\) in a ball of radius \(r_\star \) around x, and if \(r_\star \) is large enough for \(u_{r_\star }(x)\) to be very small, we expect (3.5) to be an approximate eigenvector of \(H^\tau \). This will in fact turn out to be true; see Proposition 3.9 below. That \(r_\star \) is in fact large enough is easy to see: the definition of \(r_\star \) in (1.8) and the bound \(\xi \geqslant 1/d\) imply that, for \(\alpha _x\geqslant 2+ C (\log d)^2 / \sqrt{\log N}\), we have

$$\begin{aligned} (\alpha _x - 1)^{-(r_\star -2)/2} \leqslant \xi \,. \end{aligned}$$
(3.7)

This means that the last element of the sequence \((u_i(x))_{i=0}^{r_\star }\) is bounded by \(\xi \). Note that the lower bound on \(\alpha _x\) imposed above always holds for \(x \in {{\mathcal {V}}}\), since, by (1.10),

$$\begin{aligned} \frac{C (\log d)^2}{\sqrt{\log N}} \leqslant \xi ^{1/4}\,. \end{aligned}$$
(3.8)
Fig. 6. An illustration of the three sets of vertices of increasing size that enter into the proof of Theorem 3.4. Each vertex x is plotted as a dot at its normalized degree \(\alpha _x\). The largest set is \({{\mathcal {V}}}_{\tau }\) from Proposition 3.1, where \(1 + \xi ^{1/2} \leqslant \tau \leqslant 2\); it is used to define the pruned graph \({\mathbb {G}}_\tau \). The intermediate set is \({{\mathcal {V}}} \equiv {{\mathcal {V}}}_{2 + \xi ^{1/4}}\) from (3.3); it is the set of vertices for which we can define the localization profile vector \(\varvec{\mathrm {v}}(x)\) that decays exponentially around x. The smallest set \(\mathcal W_{\lambda ,\delta } = \Lambda ^{-1}([\lambda - \delta , \lambda + \delta ])\) is the set of resonant vertices at energy \(\lambda \).

As a guide to the reader, in Fig. 6, we summarize the three main sets of vertices that are used in the proof of Theorem 3.4. We conclude this subsection by proving Theorem 1.2 and Corollary 1.6 using Theorem 3.4.

Proof of Theorem 1.2

The first claim follows immediately from Theorem 3.4, with \(\varvec{\mathrm {v}}(x) = \varvec{\mathrm {v}}^\tau _+(x)\). To verify the claim about the exponential decay of \(\varvec{\mathrm {v}}\), we note that the graph distance in \({\mathbb {G}}\) is bounded by the graph distance in \({\mathbb {G}}_\tau \), which implies

$$\begin{aligned} \sum _{y \in B_r(x)^c} (\varvec{\mathrm {v}}^\tau _+(x))_y^2 \leqslant \sum _{y \in B_r^\tau (x)^c} (\varvec{\mathrm {v}}^\tau _+(x))_y^2 = \sum _{i = r+1}^{r_\star } u_i(x)^2\,, \end{aligned}$$

from which the claim easily follows using the definition (3.4). \(\quad \square \)

Proof of Corollary 1.6

We decompose \(\varvec{\mathrm {w}} = \sum _{x \in {{\mathcal {W}}}_{\lambda ,\delta }} \gamma _x \varvec{\mathrm {v}}_+^\tau (x) + \varvec{\mathrm {e}}\), where \(\gamma _x :=\langle \varvec{\mathrm {v}}_+^\tau (x) \,, \varvec{\mathrm {w}} \rangle \) and \(\varvec{\mathrm {e}}\) is orthogonal to \({{\,\mathrm{Span}\,}}\{\varvec{\mathrm {v}}_+^\tau (x) :x \in {{\mathcal {W}}}_{\lambda , \delta }\}\). By Theorem 3.4 we have \(\Vert \varvec{\mathrm {e}} \Vert \leqslant \frac{{{\mathcal {C}}} (\xi +\xi _{\tau -1})}{\delta }\) and

$$\begin{aligned} \sum _{x \in {{\mathcal {W}}}_{\lambda ,\delta }} \gamma _x^2\geqslant 1-\frac{{{\mathcal {C}}} (\xi +\xi _{\tau -1})}{\delta }\,. \end{aligned}$$
(3.9)

Moreover, since \(\lambda - \delta \geqslant 2 \geqslant \tau \), we have \(\mathcal W_{\lambda ,\delta } \subset {{\mathcal {V}}}_\tau \), so that Proposition 3.1 (i) implies \((\varvec{\mathrm {v}}^\tau _+(x))_y = \delta _{xy} u_0(x)\) for \(x,y \in {{\mathcal {W}}}_{\lambda , \delta }\). Thus we have

$$\begin{aligned}&\sum _{y \in {{\mathcal {W}}}_{\lambda ,\delta }} w_y^2 = \Vert \varvec{\mathrm {w}}|_{\mathcal W_{\lambda ,\delta }}\Vert ^2 = \Biggl \Vert \sum _{x \in \mathcal W_{\lambda ,\delta }} \gamma _x \varvec{\mathrm {v}}^\tau _+(x) \vert _{\mathcal W_{\lambda ,\delta }} \Biggr \Vert ^2 + O(\Vert \varvec{\mathrm {e}} \Vert )\nonumber \\&= \sum _{y \in \mathcal W_{\lambda ,\delta }} \gamma _y^2 u_0(y)^2+{\mathcal {O}}\left( \frac{\xi +\xi _{\tau -1}}{\delta }\right) \,. \end{aligned}$$
(3.10)

Since \(u_0(y)\) was chosen such that \(\varvec{\mathrm {v}}_+^\tau (y) \) is normalized, we find

$$\begin{aligned} u_0(y)^2=\left( 1 + \sum _{i=1}^{r_\star -1} \frac{\alpha _y}{(\alpha _y-1)^i} +\frac{1}{(\alpha _y-1)^{r_\star - 1}}\right) ^{-1} = \frac{\alpha _y-2}{2(\alpha _y-1)} + O \biggl (\frac{1}{(\alpha _y - 1)^{r_\star - 1}}\biggr )\,. \end{aligned}$$

Define \(\alpha :=\Lambda ^{-1}(\lambda )\), where \(\Lambda ^{-1}\) denotes the inverse of \(\Lambda \) on \([2,\infty )\). Since \(|\Lambda (\alpha _y)-\lambda |\leqslant \delta \) for \(y\in \mathcal W_{\lambda , \delta }\), we obtain

$$\begin{aligned}&|\alpha _y - \alpha | \leqslant \delta \max _{t\in [\lambda -\delta ,\lambda +\delta ]}( \Lambda ^{-1})'(t)\\&= O\left( \delta \lambda ^{3/2} (\lambda -2)^{-1/2}\right) \,, \end{aligned}$$

where we used that \(\lambda \pm \delta - 2 \asymp \lambda - 2\). Since \(\frac{\mathrm {d}}{\mathrm {d}\alpha } \frac{\alpha - 2}{2 (\alpha - 1)} = \frac{1}{2(\alpha - 1)^2} \asymp \lambda ^{-4}\), we find

$$\begin{aligned} u_0(y)^2 = \frac{\alpha - 2}{2 (\alpha - 1)} + O\left( \frac{\delta }{\lambda ^{5/2} \sqrt{\lambda -2}} + \frac{1}{(\alpha _y - 1)^{r_\star - 1}}\right) = \frac{\alpha - 2}{2 (\alpha - 1)} + O \left( \frac{\delta }{\lambda ^{5/2} \sqrt{\lambda -2}} + \frac{\xi }{\delta } \right) \,, \end{aligned}$$
(3.11)

where we used (3.7) and the upper bound on \(\delta \) in the last step. By an elementary computation,

$$\begin{aligned} \frac{\alpha - 2}{2 (\alpha - 1)} = \frac{\sqrt{\lambda ^2 - 4}}{\lambda + \sqrt{\lambda ^2 - 4}}\,, \end{aligned}$$

and the claim hence follows by recalling (3.7) and plugging (3.9) and (3.11) into (3.10). \(\quad \square \)
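
For the reader's convenience, the elementary computation invoked above reads as follows, recalling that \(\lambda = \Lambda (\alpha ) = \alpha / \sqrt{\alpha - 1}\):

$$\begin{aligned} \lambda ^2 - 4 = \frac{\alpha ^2 - 4(\alpha - 1)}{\alpha - 1} = \frac{(\alpha - 2)^2}{\alpha - 1}\,, \qquad \sqrt{\lambda ^2 - 4} = \frac{\alpha - 2}{\sqrt{\alpha - 1}}\,, \end{aligned}$$

so that, multiplying numerator and denominator by \(\sqrt{\alpha - 1}\),

$$\begin{aligned} \frac{\sqrt{\lambda ^2 - 4}}{\lambda + \sqrt{\lambda ^2 - 4}} = \frac{\alpha - 2}{\alpha + (\alpha - 2)} = \frac{\alpha - 2}{2 (\alpha - 1)}\,. \end{aligned}$$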

3.2 Block diagonal approximation of pruned graph and proof of Theorems 3.4 and 1.7

We now introduce the adjacency matrix of \({\mathbb {G}}_\tau \) and a suitably defined centred version. Then we define a block diagonal approximation of this matrix, called \(\widehat{H}^\tau \) in (3.16) below, which is the central construction of our proof.

Definition 3.6

Let \(A^\tau \) be the adjacency matrix of \({\mathbb {G}}_\tau \). Let \(H :=\underline{A} \!\, / \sqrt{d}\) and \(H^\tau :=\underline{A} \!\,^\tau / \sqrt{d}\), where

$$\begin{aligned} \underline{A} \!\, :=A - {\mathbb {E}}A \,, \qquad {\underline{A}}^\tau :=A^\tau - \chi ^\tau ({\mathbb {E}}A)\chi ^\tau \end{aligned}$$
(3.12)

and \(\chi ^\tau \) is the orthogonal projection onto \({{\,\mathrm{Span}\,}}\{ \varvec{\mathrm {1}}_y :y \notin \bigcup _{x \in {{\mathcal {V}}}_\tau } B_{2 r_\star }^\tau (x)\}\).

The definition of \(\underline{A} \!\,^\tau \) is chosen so that (i) \(\underline{A} \!\,^\tau \) is close to \(\underline{A} \!\,\) provided that \(A^\tau \) is close to A, since the kernel of \(\chi ^\tau \) has a relatively low dimension, and (ii) when restricted to vertices at distance at most \(2 r_\star \) from \(\mathcal V_\tau \), the matrix \(\underline{A} \!\,^\tau \) coincides with \(A^\tau \). In fact, property (i) is made precise by the simple estimate

$$\begin{aligned} \Vert {\mathbb {E}}A - \chi ^\tau ({\mathbb {E}}A) \chi ^\tau \Vert \leqslant 2 \end{aligned}$$
(3.13)

with very high probability (see [10, Eq. (8.17)] for details). Property (ii) means that \(\underline{A} \!\,^\tau \) inherits the locality of the matrix A, meaning that applying \(\underline{A} \!\,^\tau \) to a vector localized in space to a small enough neighbourhood of \(\mathcal V_\tau \) yields again a vector localized in space. This property will play a crucial role in the proof, and it can be formalized as follows.

Remark 3.7

Let \(i + j \leqslant 2 r_\star \). Then for any \(x \in {{\mathcal {V}}}_\tau \) and vector \(\varvec{\mathrm {v}}\) we have

$$\begin{aligned} {{\,\mathrm{supp}\,}}\varvec{\mathrm {v}} \subset B_i^\tau (x) \quad \Longrightarrow \quad {{\,\mathrm{supp}\,}}\bigl [(H^\tau )^j \varvec{\mathrm {v}}\bigr ] \subset B_{i+j}^\tau (x)\,. \end{aligned}$$

The next result states that \(H^\tau \) is a small perturbation of H.

Lemma 3.8

Suppose that \(d \leqslant 3 \log N\). For any \(1 + \xi ^{1/2} \leqslant \tau \leqslant 2\) we have \(\Vert H - H^\tau \Vert \leqslant {{\mathcal {C}}} \xi _{\tau -1}\) with very high probability.

The next result states that \(\varvec{\mathrm {v}}_\sigma ^\tau (x)\) is an approximate eigenvector of \(H^\tau \).

Proposition 3.9

Let d satisfy (1.10). Let \(x \in [N]\) and suppose that \(1 + \xi ^{1/2} \leqslant \tau \leqslant 2\). If \(\alpha _x\geqslant 2+ C (\log d)^2 / \sqrt{\log N}\) then for \(\sigma = \pm \) we have

$$\begin{aligned} \Vert (H^\tau - \sigma \Lambda (\alpha _x)) \varvec{\mathrm {v}}^{\tau }_\sigma (x) \Vert \leqslant {\mathcal {C}} \xi \end{aligned}$$
(3.14)

with very high probability.

The proofs of Lemma 3.8 and Proposition 3.9 are deferred to Sect. 3.3. The following object is the central construction in our proof.

Definition 3.10

(Block diagonal approximation of pruned graph) Define the orthogonal projections

$$\begin{aligned} \Pi ^\tau :=\sum _{x \in {{\mathcal {V}}}} \sum _{\sigma = \pm } \varvec{\mathrm {v}}^\tau _\sigma (x) \varvec{\mathrm {v}}^\tau _\sigma (x)^*\,, \qquad \overline{\Pi } \!\,^\tau :=I - \Pi ^\tau \,, \end{aligned}$$
(3.15)

and the matrix

$$\begin{aligned} \widehat{H}^\tau :=\sum _{x \in {{\mathcal {V}}}} \sum _{\sigma = \pm } \sigma \Lambda (\alpha _x) \varvec{\mathrm {v}}^\tau _\sigma (x) \varvec{\mathrm {v}}^\tau _\sigma (x)^* + \overline{\Pi } \!\,^\tau H^\tau \overline{\Pi } \!\,^\tau \,. \end{aligned}$$
(3.16)

That \(\Pi ^\tau \) and \(\overline{\Pi } \!\,^\tau \) are indeed orthogonal projections follows from Remark 3.3. Note that \(\widehat{H}^\tau \) may be interpreted as a block diagonal approximation of \(H^\tau \). Indeed, completing the orthonormal family \((\varvec{\mathrm {v}}^\tau _\sigma (x))_{x \in {{\mathcal {V}}}, \sigma = \pm }\) to an orthonormal basis of \({\mathbb {R}}^N\), which we write as the columns of the orthogonal matrix R, we have

$$\begin{aligned} R^* \widehat{H}^\tau R = \begin{bmatrix} {{\,\mathrm{diag}\,}}(\sigma \Lambda (\alpha _x))_{x \in {{\mathcal {V}}}, \sigma = \pm } & 0 \\ 0 & * \end{bmatrix}\,. \end{aligned}$$
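
The block structure can be made concrete in a toy computation, in which a generic symmetric matrix H and random orthonormal columns V stand in for \(H^\tau \) and the profiles \(\varvec{\mathrm {v}}^\tau _\sigma (x)\); all sizes and prescribed eigenvalues below are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy version of (3.15)-(3.16): H is a generic Wigner-type matrix with
# spectrum close to [-2, 2], the columns of V stand in for the orthonormal
# profiles, and Lam for the prescribed eigenvalues sigma*Lambda(alpha_x).
n, k = 200, 4
H = rng.standard_normal((n, n))
H = (H + H.T) / np.sqrt(2 * n)
V = np.linalg.qr(rng.standard_normal((n, k)))[0]   # orthonormal columns
Lam = np.array([2.5, 2.8, -2.5, -2.8])

Pi = V @ V.T
Pibar = np.eye(n) - Pi
Hhat = V @ np.diag(Lam) @ V.T + Pibar @ H @ Pibar

# The off-diagonal blocks vanish ...
print(np.linalg.norm(Pibar @ Hhat @ Pi))           # ~ 1e-15
# ... so spec(Hhat) consists of Lam together with the spectrum of the
# complementary block, which stays close to [-2, 2].
print(np.sort(np.linalg.eigvalsh(Hhat))[-3:])      # top: ~2, 2.5, 2.8
```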

The following estimate states that \(\widehat{H}^\tau \) is a small perturbation of \(H^\tau \).

Lemma 3.11

Let d satisfy (1.10). If \(1 + \xi ^{1/2} \leqslant \tau \leqslant 2\) then \(\Vert H^\tau - \widehat{H}^\tau \Vert \leqslant {\mathcal {C}}\xi \) with very high probability.

The proof of Lemma 3.11 is deferred to Sect. 3.3. The following result is the key estimate of our proof; it states that on the range of \(\overline{\Pi } \!\,^\tau \) the matrix \(H^\tau \) is bounded by \(2\tau + o(1)\).

Proposition 3.12

Let d satisfy (1.10). If \(1 + \xi ^{1/2} \leqslant \tau \leqslant 2\) then \(\Vert \overline{\Pi } \!\,^\tau H^\tau \overline{\Pi } \!\,^\tau \Vert \leqslant 2\tau + {{\mathcal {C}}} (\xi +\xi _{\tau -1})\) with very high probability.

The proof of Proposition 3.12 is deferred to Sect. 3.4. We now use Lemma 3.11 and Proposition 3.12 to conclude Theorems 3.4 and 1.7.

Proof of Theorem 3.4

Define the orthogonal projections

$$\begin{aligned} \Pi ^\tau _{\lambda ,\delta } :=\sum _{x \in {{\mathcal {W}}}_{\lambda ,\delta }} \varvec{\mathrm {v}}^\tau _+(x) \, \varvec{\mathrm {v}}^\tau _+(x)^* \,, \qquad \qquad \overline{\Pi } \!\,^\tau _{\lambda ,\delta } :=I - \Pi ^\tau _{\lambda ,\delta } \,. \end{aligned}$$

By definition, the orthogonal projections \(\Pi ^\tau \) and \(\Pi ^\tau _{\lambda ,\delta }\) commute. Moreover, under the assumptions of Theorem 3.4 we have the inclusion property

$$\begin{aligned} \Pi ^\tau \Pi ^\tau _{\lambda ,\delta } = \Pi ^\tau _{\lambda ,\delta }\,. \end{aligned}$$
(3.17)

See also Fig. 6. To show (3.17), we note that the condition on \(\delta \) and the lower bound on \(\lambda \) in Theorem 3.4 imply \(\lambda - \delta \geqslant 2 + {{\mathcal {C}}} \xi ^{1/2}\). Using \(\Lambda (2 + x) - 2 \asymp x^2 \wedge x^{1/2}\) for \(x \geqslant 0\) we conclude that for any \(\alpha \geqslant 2\) we have the implication \(\Lambda (\alpha ) \geqslant \lambda - \delta \; \Rightarrow \; \alpha \geqslant 2 + \xi ^{1/4}\), which implies (3.17).

Next, we abbreviate \(E^\tau :=\chi ^\tau ({\mathbb {E}}A / \sqrt{d}) \chi ^\tau \) and note that \(\Pi ^\tau E^\tau = 0\) because \(\Pi ^\tau \chi ^\tau = 0\) by construction of \(\varvec{\mathrm {v}}_\sigma ^\tau (x)\). From (3.17) we obtain \(\overline{\Pi } \!\,^\tau _{\lambda ,\delta } = \overline{\Pi } \!\,^\tau _{\lambda ,\delta } \Pi ^\tau + \overline{\Pi } \!\,^\tau \), which yields

$$\begin{aligned} \overline{\Pi } \!\,^\tau _{\lambda ,\delta } (\widehat{H}^\tau + E^\tau ) \overline{\Pi } \!\,^\tau _{\lambda ,\delta } = \overline{\Pi } \!\,^\tau _{\lambda ,\delta } \Pi ^\tau \widehat{H}^\tau \Pi ^\tau \overline{\Pi } \!\,^\tau _{\lambda ,\delta } + \bigl (\overline{\Pi } \!\,^\tau \widehat{H}^\tau \overline{\Pi } \!\,^\tau + E^\tau \bigr )\,, \end{aligned}$$
(3.18)

where we used that the cross terms vanish because of the block diagonal structure of \(\widehat{H}^\tau \).

The core of our proof is the spectral gap

$$\begin{aligned} {{\,\mathrm{spec}\,}}\Bigl (\overline{\Pi } \!\,^\tau _{\lambda ,\delta } (\widehat{H}^\tau + E^\tau ) \overline{\Pi } \!\,^\tau _{\lambda ,\delta }\Bigr ) \subset {\mathbb {R}}\setminus [\lambda - \delta , \lambda + \delta ]\,. \end{aligned}$$
(3.19)

To establish (3.19), it suffices to establish the same spectral gap for each term on the right-hand side of (3.18) separately, since the right-hand side of (3.18) is a block decomposition of its left-hand side. The first term on the right-hand side of (3.18) is explicit:

$$\begin{aligned} \overline{\Pi } \!\,^\tau _{\lambda ,\delta } \Pi ^\tau \widehat{H}^\tau \Pi ^\tau \overline{\Pi } \!\,^\tau _{\lambda ,\delta } = \sum _{x \in {{\mathcal {V}}}} \sum _{\sigma = \pm } \sigma \Lambda (\alpha _x) \, \mathbb {1}_{|\sigma \Lambda (\alpha _x) - \lambda | > \delta } \, \varvec{\mathrm {v}}^\tau _\sigma (x) \varvec{\mathrm {v}}^\tau _\sigma (x)^*\,, \end{aligned}$$

which trivially has no eigenvalues in \([\lambda - \delta , \lambda + \delta ]\).

In order to establish the spectral gap for the second term of (3.18), we begin by remarking that \(E^\tau \) has rank one and, by (3.13), its unique nonzero eigenvalue is \(\sqrt{d} + O(1/\sqrt{d})\). Hence, by rank-one interlacing and Proposition 3.12, we find

$$\begin{aligned} {{\,\mathrm{spec}\,}}\bigl (\overline{\Pi } \!\,^\tau (H^\tau + E^\tau ) \overline{\Pi } \!\,^\tau \bigr ) \subset \bigl [-2\tau -{{\mathcal {C}}} (\xi +\xi _{\tau -1}) \,, 2\tau +{{\mathcal {C}}} (\xi +\xi _{\tau -1})\bigr ] \cup \bigl \{\mu \bigr \} \end{aligned}$$
(3.20)

for some simple eigenvalue \(\mu = \sqrt{d} + O(1)\). Thus, to conclude the proof of the spectral gap for the second term of (3.18), it suffices to show that

$$\begin{aligned} \lambda - \delta&> 2\tau +{{\mathcal {C}}} (\xi +\xi _{\tau -1}) \end{aligned}$$
(3.21)
$$\begin{aligned} \lambda + \delta&< \mu \,. \end{aligned}$$
(3.22)

To prove (3.21), we suppose that \(\lambda \geqslant 2 + 8 {{\mathcal {C}}} \xi ^{1/2}\) and, recalling the condition on \(\delta \) and the choice of \(\tau \) in Theorem 3.4, obtain

$$\begin{aligned} \lambda - \delta \geqslant 2 + \frac{\lambda - 2}{2} \geqslant 2\tau + 2 \mathcal C \xi ^{1/2} > 2\tau + {{\mathcal {C}}}(\xi + \xi _{\tau -1})\,, \end{aligned}$$
(3.23)

where in the last step we used that \(\xi _{\tau -1} < \xi ^{1/2}\) by our choice of \(\tau \) and the lower bound on \(\lambda \). This is (3.21).

For the following arguments, we compare \(A / \sqrt{d}\) with \(\widehat{H}^\tau + E^\tau \) using the estimate

$$\begin{aligned} \Vert A / \sqrt{d} - (\widehat{H}^\tau + E^\tau ) \Vert \leqslant \Vert (H^\tau - \widehat{H}^\tau ) + (H - H^\tau ) + ({\mathbb {E}}A / \sqrt{d} - E^\tau )\Vert \leqslant {{\mathcal {C}}} (\xi +\xi _{\tau -1}) \end{aligned}$$
(3.24)

with very high probability, which follows from Lemma 3.8, Lemma 3.11, (3.13) and \(d^{-1/2} \leqslant {{\mathcal {C}}} \xi \).

Next, we use (3.24) to conclude the proof of (3.22). The only nonzero eigenvalue of \(E^\tau \) is \(\sqrt{d}(1 + O(1/d))\), and from Proposition 3.12 and Remark 1.5 we have \(\Vert \widehat{H}^\tau \Vert \leqslant \Lambda (\max _{x \in {{\mathcal {V}}}} \alpha _x) + O(1)\) with very high probability, so that Lemma A.7 and the assumption (1.10) yield \(\Vert \widehat{H}^\tau \Vert \leqslant {{\mathcal {C}}} \sqrt{\frac{\log N}{d}}\) with very high probability. Hence, by first order perturbation theory (e.g. Weyl’s inequality), (1.10) and (3.24) imply that \(A/\sqrt{d}\) has one eigenvalue bigger than \(\sqrt{d} - O(1)\) and all other eigenvalues are at most \({{\mathcal {C}}} \sqrt{\frac{\log N}{d}}\). Since \(\lambda \) is nontrivial, we conclude that \(\lambda \leqslant {{\mathcal {C}}} \sqrt{\frac{\log N}{d}}\). By the upper bound \(\delta \leqslant (\lambda - 2)/2\) and the lower bound on d in (1.10), this concludes the proof of (3.22) and, thus, the one of the spectral gap (3.19).

Next, from (3.19), and (3.24), we conclude the spectral gap for the full adjacency matrix

$$\begin{aligned} {{\,\mathrm{spec}\,}}\Bigl (\overline{\Pi } \!\,^\tau _{\lambda ,\delta } (A / \sqrt{d}) \overline{\Pi } \!\,^\tau _{\lambda ,\delta }\Bigr ) \subset {\mathbb {R}}\setminus \bigl [\lambda -\delta +{{\mathcal {C}}} (\xi +\xi _{\tau -1}),\lambda +\delta -{{\mathcal {C}}} (\xi +\xi _{\tau -1})\bigr ].\quad \end{aligned}$$
(3.25)

Using (3.25) we may conclude the proof. The eigenvalue-eigenvector equation \((A/ \sqrt{d} - \lambda ) \varvec{\mathrm {w}} = 0\) yields

$$\begin{aligned} \overline{\Pi } \!\,_{\lambda ,\delta }^\tau \varvec{\mathrm {w}} = - \Bigl (\overline{\Pi } \!\,_{\lambda ,\delta }^\tau (A/\sqrt{d}) \overline{\Pi } \!\,_{\lambda ,\delta }^\tau -\lambda \Bigr )^{-1} \overline{\Pi } \!\,_{\lambda ,\delta }^\tau (A/\sqrt{d}) \Pi _{\lambda ,\delta }^\tau \varvec{\mathrm {w}}\,. \end{aligned}$$
(3.26)

Assuming that \(\delta > {{\mathcal {C}}} (\xi + \xi _{\tau - 1})\), from (3.25) we get

$$\begin{aligned} \Bigl \Vert \Bigl (\overline{\Pi } \!\,_{\lambda ,\delta }^\tau (A/\sqrt{d}) \overline{\Pi } \!\,_{\lambda ,\delta }^\tau -\lambda \Bigr )^{-1} \Bigr \Vert \leqslant \frac{1}{\delta -\mathcal C (\xi +\xi _{\tau -1})}\,. \end{aligned}$$
(3.27)

Moreover, since \(\overline{\Pi } \!\,_{\lambda ,\delta }^\tau \widehat{H}^\tau \Pi _{\lambda ,\delta }^\tau = 0\) and \(E^\tau \Pi _{\lambda ,\delta }^\tau =0\), we deduce from (3.24) that

$$\begin{aligned} \Vert \overline{\Pi } \!\,_{\lambda ,\delta }^\tau (A/\sqrt{d}) \Pi _{\lambda ,\delta }^\tau \Vert \leqslant {{\mathcal {C}}} (\xi + \xi _{\tau -1})\,. \end{aligned}$$
(3.28)

Plugging (3.27) and (3.28) into (3.26) yields

$$\begin{aligned} \Vert \overline{\Pi } \!\,_{\lambda ,\delta }^\tau \varvec{\mathrm {w}} \Vert \leqslant \frac{{{\mathcal {C}}} (\xi +\xi _{\tau -1})}{\delta - {{\mathcal {C}}}(\xi + \xi _{\tau -1})}\wedge 1 \leqslant \frac{2{{\mathcal {C}}} (\xi +\xi _{\tau -1})}{\delta }\,, \end{aligned}$$

since \(\varvec{\mathrm {w}}\) is normalized. This concludes the proof if \(\delta > {{\mathcal {C}}} (\xi + \xi _{\tau - 1})\) (after a renaming of the constant \({{\mathcal {C}}}\)), and otherwise the claim is trivial. \(\quad \square \)

Proposition 3.12 is also the main tool to prove Theorem 1.7.

Proof of Theorem 1.7

The proof uses Proposition 3.12, Lemma 3.8, and Lemma 3.11 for \(\tau \in [1 + \xi ^{1/2}/3,2]\). Note that the lower bound \(1 + \xi ^{1/2}/3\) is smaller than the lower bound \(1 + \xi ^{1/2}\) imposed in these results, but their proofs hold verbatim also in this regime of \(\tau \).

We set \(E^\tau :=\chi ^\tau ({\mathbb {E}}A/\sqrt{d})\chi ^\tau \) with \(\chi ^\tau \) from Definition 3.6. We now compare \(A/\sqrt{d}\) and \(\widehat{H}^\tau + E^\tau \), as in the proof of Theorem 3.4, and use some estimates from its proof. For any \( \tau \in [1 + \xi ^{1/2}/3,2]\), we have

$$\begin{aligned} {{\,\mathrm{spec}\,}}(\widehat{H}^\tau + E^\tau ) = \{\pm \Lambda (\alpha _x):x \in \mathcal U \} \cup {{\,\mathrm{spec}\,}}\big (\overline{\Pi } \!\,^\tau ( H^\tau + E^\tau ) \overline{\Pi } \!\,^\tau \big )\,, \end{aligned}$$
(3.29)

since \(\Pi ^\tau \chi ^\tau =0\). By first order perturbation theory and the choice \(\tau =2\), we get from (3.29), (3.20) and (3.24) that \(\lambda _1(A/\sqrt{d}) = \mu + O(\xi ) = \sqrt{d} + O(1)\) and \(\lambda _1(A/\sqrt{d})\) is well separated from the other eigenvalues of \(A/\sqrt{d}\) (see the proof of Theorem 3.4). Combining (3.29), (3.20), and (3.24), choosing \(\tau = 1 + \xi ^{1/2}/3\), and using \({{\mathcal {C}}}(\xi + \xi _{\tau - 1}) \leqslant \xi ^{1/2}/3\) for this choice of \(\tau \) implies (1.12).

Moreover, we apply first order perturbation theory to (3.29) using (3.20) and (3.24), and obtain

$$\begin{aligned} |\lambda _{i + 1}(A/\sqrt{d}) - \Lambda (\alpha _{\sigma (i)})| + |\lambda _{N-i+1}(A/\sqrt{d}) + \Lambda (\alpha _{\sigma (i)})| \leqslant {\mathcal {C}}(\xi + \xi _{\tau - 1}) \end{aligned}$$
(3.30)

with very high probability for all \(\tau \in [1+\xi ^{1/2}/3,2]\) and all \(i \in [|{{\mathcal {U}}} |]\) satisfying

$$\begin{aligned} 2 (\tau -1) + {{\mathcal {C}}} ( \xi + \xi _{\tau - 1}) < \Lambda (\alpha _{\sigma (i)})-2. \end{aligned}$$
(3.31)

What remains is choosing \(\tau \equiv \tau _i\), depending on \(i \in [|{{\mathcal {U}}} |]\), such that the condition (3.31) is satisfied and the error estimate from (3.30) transforms into the form of (1.11). Both are achieved by setting

$$\begin{aligned} \tau = 1 + \frac{1}{3}\big [ (\Lambda (\alpha _{\sigma (i)}) -2 ) \wedge 3 \big ]. \end{aligned}$$
(3.32)

Note that \(\tau \in [1 + \xi ^{1/2}/3,2]\) as \(\sigma (i) \in {{\mathcal {U}}}\). From \(\Lambda (\alpha _{\sigma (i)})-2 \geqslant 3(\tau -1) \) due to (3.32) and \(\Lambda (\alpha _{\sigma (i)})-2 \geqslant \xi ^{1/2}\) by the definition of \({{\mathcal {U}}}\), we conclude that

$$\begin{aligned} \Lambda (\alpha _{\sigma (i)})-2 \geqslant \frac{5}{2} (\tau - 1) + \frac{1}{6} \xi ^{1/2} \geqslant 2 (\tau - 1) + {{\mathcal {C}}} (\xi _{\tau - 1} + \xi ), \end{aligned}$$

where we used \(\tau - 1 \geqslant 3 \xi _{\tau - 1} \log d\) as \(\tau -1 \geqslant \xi ^{1/2}/3\). This proves (3.31) and, thus, (3.30) for any \(\sigma (i) \in {{\mathcal {U}}}\) with the choice of \(\tau \) from (3.32).

In order to show that the right-hand side of (3.30) is controlled by the one in (1.11), we now distinguish the two cases, \(\Lambda (\alpha _{\sigma (i)}) -2 \leqslant 3\) and \(\Lambda (\alpha _{\sigma (i)}) -2 > 3\). In the latter case, \(\tau = 2\) by (3.32) and (1.11) follows immediately from (3.30) as \(\xi _1 \leqslant \xi \). If \(\Lambda (\alpha _{\sigma (i)}) -2 \leqslant 3\) then \(\tau - 1 = (\Lambda (\alpha _{\sigma (i)}) -2)/3\) and, thus, \(\xi _{\tau - 1} = 3 \xi _{\Lambda (\alpha _{\sigma (i)}) -2}\). Hence, (3.30) implies (1.11). This concludes the proof of Theorem 1.7. \(\quad \square \)

3.3 Proof of Lemma 3.8, Proposition 3.9, and Lemma 3.11

Proof of Lemma 3.8

To begin with, we reduce the problem to the adjacency matrices by using the estimate (3.13). Hence, with very high probability,

$$\begin{aligned} \sqrt{d} \Vert H - H^\tau \Vert \leqslant \Vert {\mathbb {E}}A - \chi ^\tau ({\mathbb {E}}A) \chi ^\tau \Vert + \Vert A - A^\tau \Vert \leqslant 2 + \Vert A^{{\mathbb {D}}_\tau } \Vert \,, \end{aligned}$$

where \(A^{{\mathbb {D}}_\tau }\) is the adjacency matrix of the graph \({\mathbb {D}}_\tau :={\mathbb {G}} \setminus {\mathbb {G}}_\tau \). Hence, since \(d^{-1/2} \leqslant C \xi _{\tau -1} \) by \(d \leqslant 3 \log N\) and the definition (1.9), it suffices to show that \(\Vert A^{{\mathbb {D}}_\tau } \Vert \leqslant {{\mathcal {C}}} \xi _{\tau -1} \sqrt{d}\).

We know from Proposition 3.1 (iii) and (v) that with very high probability \({\mathbb {D}}_\tau \) consists of (possibly overlapping) stars around vertices \(x \in {{\mathcal {V}}}_\tau \) of central degree \(D_x^{{\mathbb {D}}_\tau } \leqslant {{\mathcal {C}}} d \xi _{\tau -1}^2\). Moreover, with very high probability,

  1. (i)

    any ball \(B_{2 r_\star }(x)\) around \(x \in {{\mathcal {V}}}_\tau \) has at most \({{\mathcal {C}}}\) cycles;

  2. (ii)

    any ball \(B_{2 r_\star }(x)\) around \(x \in {{\mathcal {V}}}_\tau \) contains at most \({{\mathcal {C}}} d \xi _{\tau -1}^2\) vertices in \({{\mathcal {V}}}_\tau \).

Claim (i) follows from [10, Corollary 5.6], the definition (1.8), and Lemma A.7. Claim (ii) follows from [10, Lemma 7.3] and \(h((\tau -1)/2) \asymp (\tau -1)^2\) for \(1 \leqslant \tau \leqslant 2\).

Let \(x \in {{\mathcal {V}}}_\tau \). We claim that we can remove at most \(\mathcal C\) edges of \({\mathbb {D}}_\tau \) incident to x so that no cycle passes through x. Indeed, if there were more than \({{\mathcal {C}}}\) cycles in \({\mathbb {D}}_\tau \) passing through x, then at least one such cycle would have to leave \(B_{2 r_\star }(x)\) (by (i)), which would imply that \(B_{2 r_\star }(x)\) has at least \(r_\star \) vertices in \({{\mathcal {V}}}_\tau \), which, by (ii), is impossible since \(r_\star \geqslant 2 {{\mathcal {C}}} d \xi _{\tau -1}^2\) by \(\tau \geqslant 1 + \xi ^{1/2}\). See Fig. 7 for an illustration of \({\mathbb {D}}_\tau \).

Thus, we can remove a graph \({\mathbb {U}}_\tau \) from \({\mathbb {D}}_\tau \) such that \({\mathbb {U}}_\tau \) has maximal degree \({{\mathcal {C}}}\) and \({\mathbb {D}}_\tau \setminus {\mathbb {U}}_\tau \) is a forest of maximal degree \(\mathcal Cd \xi _{\tau -1}^2\) (by (ii)). The claim now follows from Lemma A.4. \(\quad \square \)
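
The two norm inputs used at the end of the proof are standard: a star with k leaves has adjacency norm \(\sqrt{k}\), and a forest of maximal degree D has adjacency norm at most \(2\sqrt{D-1}\). The following sketch illustrates these two facts numerically; it illustrates only these inputs, not the precise statement of Lemma A.4.

```python
import numpy as np

# Standard norm facts for the graphs appearing in the decomposition of
# D_tau: stars and forests of bounded degree.
def star(k):
    """Adjacency matrix of a star with k leaves; its norm is sqrt(k)."""
    A = np.zeros((k + 1, k + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    return A

def complete_tree(D, depth):
    """Rooted tree in which every internal vertex has D - 1 children
    (so maximal degree D); vertices are numbered breadth-first."""
    c = D - 1
    n = sum(c ** i for i in range(depth + 1))
    A = np.zeros((n, n))
    for v in range(1, n):
        A[v, (v - 1) // c] = A[(v - 1) // c, v] = 1.0
    return A

k, D = 9, 4
print(np.linalg.norm(star(k), 2), np.sqrt(k))        # equal
T = complete_tree(D, depth=6)
print(np.linalg.norm(T, 2), 2 * np.sqrt(D - 1))      # first below second
```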

Fig. 7. An illustration of a connected component of \({\mathbb {D}}_\tau \). Vertices of \({{\mathcal {V}}}_\tau \) are drawn in white and the other vertices in black. The ball \(B_{2 r_\star }(x)\) around a chosen white vertex x is drawn in grey, where \(2 r_\star = 4\). The illustrated component of \({\mathbb {D}}_\tau \) has three cycles, two of which are in \(B_{2 r_\star }(x)\). The blue and red cycles pass through x. The purple edge is removed from the blue cycle, i.e. it is put into the graph \({\mathbb {U}}_\tau \). With very high probability, the red cycle cannot appear, because it leaves the ball \(B_{2 r_\star }(x)\) and therefore contains more white vertices in \(B_{2 r_\star }(x)\) than allowed by property (ii).

Proof of Proposition 3.9

We focus on the case \(\sigma = +\); trivial modifications yield (3.14) for \(\sigma = -\). The basic strategy is to decompose \((H^\tau - \Lambda (\alpha _x))\varvec{\mathrm {v}}_+^\tau (x)\) into several error terms that are estimated separately. A similar argument was applied in [10, Proposition 5.1] to the original graph \({\mathbb {G}}\) instead of \({\mathbb {G}}_\tau \), which however does not yield sharp enough estimates to reach the optimal scale \(d \gg \sqrt{\log N}\) (see Sect. 1.5).

We omit x from the notation in this proof and write \(u_i\), \(\varvec{\mathrm {v}}_+^\tau \) and \(S_i^\tau \) instead of \(u_i(x)\), \(\varvec{\mathrm {v}}^\tau _+(x)\) and \(S_i^\tau (x)\). We define

$$\begin{aligned} \varvec{\mathrm {s}}^\tau _i :=\frac{\varvec{\mathrm {1}}_{S_i^\tau }}{\Vert \varvec{\mathrm {1}}_{S_i^\tau } \Vert }, \qquad \qquad N_i^\tau (y) :=|S_1^\tau (y) \cap S_i^\tau |\,. \end{aligned}$$

Note that \((\varvec{\mathrm {s}}^\tau _i)_{i=0}^{2r_\star }\) form an orthonormal system. Defining the vectors

$$\begin{aligned} \varvec{\mathrm {w}}_2&:=\sum _{i=2}^{{r_\star }} \frac{u_i}{\sqrt{d |S^\tau _i |}} \sum _{y \in S^\tau _{i-1}} \bigg ( N_i^\tau (y) - \frac{|S^\tau _i |}{|S^\tau _{i-1} |} \bigg ) \varvec{\mathrm {1}}_y, \nonumber \\ \varvec{\mathrm {w}}_3&:=u_2 \left( \frac{\sqrt{|S^\tau _2 |}}{\sqrt{d|S^\tau _1 |}} - 1 \right) \varvec{\mathrm {s}}^\tau _1 + \sum _{i=2}^{{r_\star } - 1} \left[ u_{i+1} \left( \frac{\sqrt{|S^\tau _{i+1} |}}{\sqrt{d|S^\tau _i |}} -1 \right) + u_{i-1} \left( \frac{\sqrt{|S^\tau _i |}}{\sqrt{d |S^\tau _{i-1} |}} - 1 \right) \right] \varvec{\mathrm {s}}^\tau _i , \nonumber \\ \varvec{\mathrm {w}}_4&:=u_{r_\star } \bigg (1 - \frac{1}{\sqrt{\alpha _x}} \bigg ) \varvec{\mathrm {s}}_{r_\star -1}^\tau + u_{{r_\star }-1} \Bigg ( \frac{\sqrt{|S^\tau _{r_\star } |}}{\sqrt{d |S^\tau _{{r_\star }-1} |}} - \frac{1}{\sqrt{\alpha _x - 1}} \Bigg ) \varvec{\mathrm {s}}^\tau _{r_\star } + u_{r_\star } \frac{\sqrt{|S^\tau _{{r_\star }+1} |}}{\sqrt{d|S^\tau _{r_\star } |}} \, \varvec{\mathrm {s}}^\tau _{{r_\star }+1}\,, \end{aligned}$$
(3.33)

a straightforward computation using the definition of \(\varvec{\mathrm {v}}^\tau _+\) yields

$$\begin{aligned} (H^\tau - \Lambda (\alpha _x)) \varvec{\mathrm {v}}_+^\tau = \varvec{\mathrm {w}}_2 + \varvec{\mathrm {w}}_3 + \varvec{\mathrm {w}}_4. \end{aligned}$$
(3.34)

For a detailed proof of (3.34) in a similar setup, we refer the reader to [10, Lemma 5.2] (note that in the analogous calculation of [10] the left-hand side of (3.34) is multiplied by \(\sqrt{d}\)). The terms in (3.34) analogous to \(\varvec{\mathrm {w}}_0\) and \(\varvec{\mathrm {w}}_1\) in [10] vanish, respectively, because the projection \(\chi ^\tau \) is included in (3.12) and because \({\mathbb {G}}_\tau |_{B_{2r_\star }^\tau }\) is a tree by Proposition 3.1 (ii). The vector \(\varvec{\mathrm {w}}_4\) from (3.33) differs from the one in [10] due to the special choice of \(u_{r_\star }\) in (3.4).

We now complete the proof of (3.14) by showing that each term on the right-hand side of (3.34) is bounded in norm by \({\mathcal {C}} \xi \) with very high probability. We start with \(\varvec{\mathrm {w}}_3\) by first proving the concentration bound

$$\begin{aligned} \left| \frac{|S_{i+1}^\tau |}{d|S_i^\tau |}-1\right| ={\mathcal {O}}\left( {\frac{\sqrt{\log N}}{d}}\right) \end{aligned}$$
(3.35)

with very high probability, for \(i = 1, \ldots , r_\star \). To prove this, we use Proposition 3.1 (iv) and (vi), as well as [10, Lemma 5.4], to obtain

$$\begin{aligned} \frac{|S_i^\tau |}{|S_i|}=1-\frac{|S_{i}\setminus S_{i}^\tau |}{|S_i|}\geqslant 1-{\mathcal {C}}\frac{\log N}{(\tau - 1)^2 d^2} \end{aligned}$$
(3.36)

with very high probability, where we used that \(\alpha _x \geqslant 1\), and the assumption [10, Eq. (5.13)] is satisfied by the definition (1.8). Therefore, invoking [10, Lemma 5.4] in the following expansion yields

$$\begin{aligned} \frac{|S_{i+1}^\tau |}{d |S_i^\tau |} = \frac{|S_{i+1} |}{d |S_i |}\, \frac{|S_{i} |}{|S_i^\tau |}\, \frac{|S_{i+1}^\tau |}{|S_{i+1} |} = \biggl (1 + {{\mathcal {O}}} \biggl ({\frac{\sqrt{\log N}}{d}}\biggr )\biggr ) \biggl (1 + {{\mathcal {O}}} \biggl (\frac{\log N}{d^2 (\tau - 1)^2}\biggr )\biggr )\qquad \end{aligned}$$
(3.37)

with very high probability. Hence, recalling the lower bound \(\tau \geqslant 1 + \xi ^{1/2}\), we obtain (3.35).
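
The concentration (3.35) can be illustrated by a quick Galton–Watson sanity check, approximating the growth of the spheres in the graph by a Poisson(d) branching process; the value of d and the number of generations are arbitrary illustration choices, and the fluctuation scale in the graph is the one stated in (3.35).

```python
import numpy as np

rng = np.random.default_rng(3)

# Sphere growth |S_{i+1}| / (d |S_i|) ~ 1 in a Poisson(d) branching
# process, the local approximation of G(N, d/N) around a vertex.
d, generations = 12.0, 8
sizes = [1]
for _ in range(generations):
    # each vertex of the current sphere has ~ Poisson(d) children
    sizes.append(rng.poisson(d, size=sizes[-1]).sum())

ratios = [sizes[i + 1] / (d * sizes[i]) for i in range(generations)]
print(ratios)   # all close to 1, with shrinking relative fluctuations
```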

We take the norm in the definition of \(\varvec{\mathrm {w}}_3\), use the orthonormality of \((\varvec{\mathrm {s}}_i^\tau )_{i=0}^{r_\star }\), and end up with

$$\begin{aligned} \Vert \varvec{\mathrm {w}}_3 \Vert ^2 \leqslant \left[ \left( \frac{\sqrt{|S^\tau _2 |}}{\sqrt{d |S^\tau _1 |}} -1 \right) ^2 u_2^2 + 2 \sum _{i=2}^{{r_\star }-1} \left( \left( \frac{\sqrt{|S^\tau _{i+1} |}}{\sqrt{d |S^\tau _i |}} - 1\right) ^2 u_{i+1}^2 + \left( \frac{\sqrt{|S^\tau _i |}}{ \sqrt{d |S^\tau _{i-1} |}} - 1 \right) ^2 u_{i-1}^2 \right) \right] . \end{aligned}$$

Consequently, (3.35) and \(\sum _{i=0}^{r_\star } u_i^2 =1\) yield the desired bound on \(\Vert \varvec{\mathrm {w}}_3 \Vert \).

In order to estimate \(\Vert \varvec{\mathrm {w}}_2 \Vert \), we use the definitions

$$\begin{aligned} N_i(y) :=|S_1(y) \cap S_i |, \qquad Y_i :=\frac{1}{|S_{i-1}^{\tau } |}\sum _{y \in S_{i-1}^\tau } \big ( N_i(y) - {\mathbb {E}}[N_i(y) | B_{i-1}] \big )^2\end{aligned}$$

and the Pythagorean theorem to obtain

$$\begin{aligned} \Vert \varvec{\mathrm {w}}_2\Vert ^2&= \sum _{i=2}^{{r_\star }} \frac{u_i^2}{d |S^\tau _i |} \sum _{y \in S^\tau _{i-1}} \bigg ( N^\tau _i(y) - \frac{|S^\tau _i |}{|S^\tau _{i-1} |} \bigg )^2 \nonumber \\&\leqslant 4 \sum _{i=2}^{{r_\star }} \frac{u_i^2}{d |S^\tau _i |} \sum _{y \in S^\tau _{i-1}} \bigg [ \big ( N_i(y) - {\mathbb {E}}[N_i(y) | B_{i-1}]\big )^2 + \big ({\mathbb {E}}[N_i(y) | B_{i-1}] - d \big )^2 \nonumber \\&\qquad \qquad \qquad \qquad +\bigg (d- \frac{|S^\tau _i |}{|S^\tau _{i-1} |} \bigg )^2+(N_i^{\tau }(y)-N_i(y))^2 \bigg ] \nonumber \\&\leqslant 4 \max _{2\leqslant i\leqslant r_\star } \frac{|S^{\tau }_{i-1} |}{d |S^\tau _i |} \Big [ Y_i + {{\mathcal {C}}} \log N+\big (\max _y D_y^{{\mathbb {G}} \setminus {\mathbb {G}}_\tau }\big )^2 \Big ] \end{aligned}$$
(3.38)

with very high probability. Here, in the last step, we used (3.35), \(\sum _{i=0}^{r_\star } u_i^2 = 1\) and \(|d - {\mathbb {E}}[N_i(y) | B_{i-1}] | = d |B_{i-1} |/N \leqslant {\mathcal {C}}\) with very high probability due to [10, Eq. (5.12b)] and Lemma A.7.

Next, we claim that

$$\begin{aligned} Y_i \leqslant {{\mathcal {C}}} \log N \log d \end{aligned}$$
(3.39)

with very high probability, for \(i = 2, \ldots , r_\star \). The proof of (3.39) is based on a dyadic decomposition analogous to the one used in the proof of [10, Eq. (5.26)]. We distinguish two regimes and estimate

$$\begin{aligned} Y_i&\leqslant d + \frac{1}{|S_{i-1}^{\tau } |}\sum _{y \in S_{i-1}^\tau } \mathbb {1}_{|N_i(y) - {\mathbb {E}}[N_i(y) | B_{i-1}] |>d^{1/2}} \left( N_i(y) - {\mathbb {E}}[N_i(y) | B_{i-1}] \right) ^2 \nonumber \\&\leqslant d + \frac{1}{|S_{i-1}^{\tau } |} \sum _{k=k_{\min }}^{0} d^2 \mathrm {e}^{k+1} |{{\mathcal {N}}}^\tau _{i,k} | \end{aligned}$$
(3.40)

with very high probability, where we introduced

$$\begin{aligned}&k_{\min }:=\lfloor -\log d \rfloor \,,\\&\qquad {{\mathcal {N}}}^\tau _{i,k} :=\Big \{y\in S_{i-1}^\tau :d^2 \mathrm {e}^k< \bigl (N_i(y)-{\mathbb {E}}[N_i(y) | B_{i-1}]\bigr )^2 \leqslant d^2 \mathrm {e}^{k+1}\Big \}\,. \end{aligned}$$

In (3.40), we used that, with very high probability, \(\bigl (N_i(y)-{\mathbb {E}}[N_i(y) | B_{i-1}]\bigr )^2 \leqslant d^2 \bigl ((\tau - 1/2)^2 \vee 1\bigr ) \leqslant d^2 \mathrm {e}\), because \(y\in S_{i-1}^\tau \) implies the conditions \(0\leqslant N_i(y)\leqslant D_y\leqslant \tau d \) due to Proposition 3.1 (i) and \(d/2 \leqslant {\mathbb {E}}[N_i(y) |B_{i-1}] \leqslant d\) with very high probability. By Proposition 3.1 (iv), we have \({{\mathcal {N}}}_{i,k}^\tau \subset {{\mathcal {N}}}^{i-1}_k\), where \({{\mathcal {N}}}^{i-1}_k\) is defined as in the proof of [10, Eq. (5.26)]. (Note that, in the notation of [10], there is a one-to-one mapping between \(A_{( B_{i-1})}\) and \(B_i\).) In this proof it is shown that, with very high probability,

$$\begin{aligned}|{{\mathcal {N}}}^{i-1}_k| \leqslant \ell _k, \qquad \qquad \ell _k :=\frac{{{\mathcal {C}}}}{d}(|S_{i-1}|+\log N) \mathrm {e}^{-k}. \end{aligned}$$

Using (3.36) and (3.37), and then plugging the resulting bound into (3.40) concludes the proof of (3.39).
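In more detail, since the factor \(\mathrm {e}^{k+1}\) cancels the factor \(\mathrm {e}^{-k}\) in \(\ell _k\) and the number of dyadic scales is \(|k_{\min } | + 1 \leqslant \log d + 1\), the last step amounts to the following sketch, in which we use (3.36) and (3.37) in the form \(|S_{i-1} | \leqslant {{\mathcal {C}}} |S_{i-1}^\tau |\) and \(|S_{i-1}^\tau | \geqslant d/{{\mathcal {C}}}\), together with \(d \leqslant {{\mathcal {C}}} \log N\):

$$\begin{aligned} \frac{1}{|S_{i-1}^{\tau } |} \sum _{k=k_{\min }}^{0} d^2 \mathrm {e}^{k+1} |{{\mathcal {N}}}^\tau _{i,k} | \leqslant {{\mathcal {C}}} d \, \frac{|S_{i-1} | + \log N}{|S_{i-1}^{\tau } |} \bigl ( |k_{\min } | + 1 \bigr ) \leqslant {{\mathcal {C}}} \log N \log d\,. \end{aligned}$$

Together with the first term d of (3.40), this yields (3.39).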

Thus, we obtain \(\Vert \varvec{\mathrm {w}}_2 \Vert \leqslant {{\mathcal {C}}} \xi \) with very high probability, by starting from (3.38) and using (3.35), (3.39) and Proposition 3.1 (v) as well as the assumption \(1 + \xi ^{1/2} \leqslant \tau \leqslant 2\).

Finally, we estimate \(\varvec{\mathrm {w}}_4\). Since \(\alpha _x \geqslant 2\) and \(u_0 \leqslant 1\) we have that \(u_{r_\star } + u_{r_\star -1} \leqslant 3 (\alpha _x - 1)^{-(r_\star -2)/2}\). The other coefficients of \(\varvec{\mathrm {s}}_{r_\star -1}^\tau \), \(\varvec{\mathrm {s}}_{r_\star }^\tau \) and \(\varvec{\mathrm {s}}_{r_\star +1}^\tau \) are bounded by \({{\mathcal {C}}}\) with very high probability, due to \(\alpha _x \geqslant 2\) and (3.35), respectively. Therefore, (3.7) implies \(\Vert \varvec{\mathrm {w}}_4 \Vert \leqslant {{\mathcal {C}}} \xi \). This concludes the proof of Proposition 3.9. \(\quad \square \)

Proof of Lemma 3.11

We have to estimate the norm of

$$\begin{aligned}&H^\tau -\widehat{H}^\tau = \Pi ^\tau H^\tau \Pi ^\tau \nonumber \\&-\sum _{x \in {{\mathcal {V}}}} \sum _{\sigma = \pm } \sigma \Lambda (\alpha _x) \varvec{\mathrm {v}}^\tau _\sigma (x) \varvec{\mathrm {v}}^\tau _\sigma (x)^* +\overline{\Pi } \!\,^\tau H^\tau \Pi ^\tau + (\overline{\Pi } \!\,^\tau H^\tau \Pi ^\tau )^*. \end{aligned}$$
(3.41)

Each \(x \in {{\mathcal {V}}}\) satisfies the condition of Proposition 3.9 since \(\xi ^{1/4} \geqslant C (\log d)^2 / \sqrt{\log N}\) (see (3.8)). Hence, for any \(x \in {{\mathcal {V}}}\) and \(\sigma = \pm \), Proposition 3.9 yields

$$\begin{aligned} H^\tau \varvec{\mathrm {v}}_\sigma ^\tau (x)= \sigma \Lambda (\alpha _x) \varvec{\mathrm {v}}_\sigma ^\tau (x) + \varvec{\mathrm {e}}_\sigma ^\tau (x)\,, \qquad {{\,\mathrm{supp}\,}}\varvec{\mathrm {e}}_\sigma ^\tau (x) \subset B_{r_{\star }+1}^{\tau }(x) \,, \qquad \Vert \varvec{\mathrm {e}}_\sigma ^\tau (x) \Vert \leqslant {{\mathcal {C}}} \xi \end{aligned}$$

with very high probability, where the second statement follows from the first together with the definition (3.5) of \(\varvec{\mathrm {v}}_\sigma ^\tau (x)\) and Remark 3.7. By Proposition 3.1 (i), the balls \(B_{2r_\star }^\tau (x)\) and \(B_{2r_\star }^\tau (y)\) are disjoint for \(x, y \in {{\mathcal {V}}}_\tau \) with \(x \ne y\). Hence, in this case, \(\varvec{\mathrm {v}}_\sigma ^\tau (x),\varvec{\mathrm {e}}_\sigma ^\tau (x) \perp \varvec{\mathrm {v}}_{\sigma '}^\tau (y),\varvec{\mathrm {e}}_{\sigma '}^\tau (y)\). For any \(\varvec{\mathrm {a}} = \sum _{x\in {{\mathcal {V}}}} \sum _{\sigma = \pm } a_{x,\sigma } \varvec{\mathrm {v}}_\sigma ^\tau (x)\), we obtain

$$\begin{aligned} \overline{\Pi } \!\,^\tau H^\tau \Pi ^\tau \varvec{\mathrm {a}} = \sum _{x \in {{\mathcal {V}}}} \sum _{\sigma = \pm } a_{x,\sigma } \overline{\Pi } \!\,^\tau H^\tau \varvec{\mathrm {v}}_\sigma ^\tau (x) = \overline{\Pi } \!\,^\tau \sum _{x \in {{\mathcal {V}}}} \sum _{\sigma = \pm } a_{x,\sigma } \varvec{\mathrm {e}}_\sigma ^\tau (x)\,. \end{aligned}$$

Thus, with very high probability, \(\Vert \overline{\Pi } \!\,^\tau H^\tau \Pi ^\tau \varvec{\mathrm {a}} \Vert ^2 \leqslant \sum _{x \in {{\mathcal {V}}}} \Vert \sum _{\sigma = \pm } a_{x,\sigma } \varvec{\mathrm {e}}_\sigma ^\tau (x)\Vert ^2 \leqslant 4 {{\mathcal {C}}}^2\sum _{x\in \mathcal V} \sum _{\sigma = \pm } a_{x,\sigma }^2 \xi ^2 = 4 {{\mathcal {C}}}^2 \xi ^2 \Vert \varvec{\mathrm {a}} \Vert ^2\) by orthogonality. Therefore, \(\Vert \overline{\Pi } \!\,^\tau H^\tau \Pi ^\tau \Vert \leqslant {{\mathcal {C}}} \xi \) with very high probability. Similarly, the representation

$$\begin{aligned}\left( \Pi ^\tau H^\tau \Pi ^\tau -\sum _{x \in {{\mathcal {V}}}} \sum _{\sigma = \pm } \sigma \Lambda (\alpha _{x}) \varvec{\mathrm {v}}^\tau _\sigma (x) \varvec{\mathrm {v}}^\tau _\sigma (x)^*\right) \varvec{\mathrm {a}} = \Pi ^\tau \sum _{x \in {{\mathcal {V}}}} \sum _{\sigma = \pm } a_{x,\sigma } \varvec{\mathrm {e}}_\sigma ^\tau (x)\end{aligned}$$

yields the desired estimate on the sum of the first two terms on the right-hand side of (3.41). \(\quad \square \)

3.4 Proof of Proposition 3.12

In this section we prove Proposition 3.12. Its proof relies on two fundamental tools.

The first tool is a quadratic form estimate, which estimates H in terms of the diagonal matrix of the vertex degrees. It is an improvement of [10, Proposition 6.1]. To state it, for two Hermitian matrices X and Y we use the notation \(X \leqslant Y\) to mean that \(Y - X\) is a nonnegative matrix, and \(|X |\) is the absolute value function applied to the matrix X.

Proposition 3.13

Let \(4 \leqslant d \leqslant 3 \log N\). Then, with very high probability, we have

$$\begin{aligned} |H |\leqslant I+(1+ 2d^{-1/2}) Q +{\mathcal {C}}\frac{\log N}{d^{2}}\vee d^{-1/2}, \end{aligned}$$

where Q is the diagonal matrix with diagonal \((\alpha _x)_{x \in [N]}\).
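As a purely numerical illustration of this quadratic form inequality (ours, not part of the proof; the sizes and the stand-in constant 5 for the generic constant \({{\mathcal {C}}}\) are ad hoc choices), one can test it on a sample of the centred, rescaled adjacency matrix:

```python
import numpy as np

# Illustrative sanity check of Proposition 3.13 on one sample of G(N, d/N).
# N, d and the constant 5 (standing in for the generic constant C) are ad hoc.
rng = np.random.default_rng(2)
N, d = 1500, 12                                # satisfies 4 <= d <= 3 log N
A = (rng.random((N, N)) < d / N).astype(float)
A = np.triu(A, 1); A = A + A.T                 # adjacency matrix of G(N, d/N)
H = (A - d / N) / np.sqrt(d)                   # centred, rescaled adjacency matrix

evals, evecs = np.linalg.eigh(H)
absH = (evecs * np.abs(evals)) @ evecs.T       # |H|: absolute value applied to H
Q = np.diag(A.sum(axis=1) / d)                 # diagonal of normalized degrees alpha_x

err = max(np.log(N) / d**2, 1 / np.sqrt(d))    # error scale (log N)/d^2 v d^{-1/2}
rhs = np.eye(N) + (1 + 2 / np.sqrt(d)) * Q + 5 * err * np.eye(N)
print(np.linalg.eigvalsh(rhs - absH).min())    # expected to be nonnegative typically
```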

The second tool is a delocalization estimate for an eigenvector \(\varvec{\mathrm {w}}\) of \(\widehat{H}^\tau \) associated with an eigenvalue \(\lambda > 2\). Essentially, it says that \(w_x\) is small at any \(x \in {{\mathcal {V}}}_\tau \) unless \(\varvec{\mathrm {w}}\) happens to be the specific eigenvector \(\varvec{\mathrm {v}}^\tau _\pm (x)\) of \(\widehat{H}^\tau \), which is by definition localized around x. Thus, in any ball \(B_{2 r_\star }^\tau (x)\) around \(x \in {{\mathcal {V}}}_\tau \), all eigenvectors except \(\varvec{\mathrm {v}}^\tau _\pm (x)\) are locally delocalized in the sense that their magnitudes at x are small. Using that the balls \((B_{2 r_\star }^\tau (x))_{x \in \mathcal V_\tau }\) are disjoint, this implies that eigenvectors of \(\overline{\Pi } \!\,^\tau H^\tau \overline{\Pi } \!\,^\tau \) have negligible mass on the set \(\mathcal V\).

Proposition 3.14

Let d satisfy (1.10). If \(1 + \xi ^{1/2} \leqslant \tau \leqslant 2\) then the following holds with very high probability. Let \(\lambda \) be an eigenvalue of \(\widehat{H}^{\tau }\) with \(\lambda >2\tau +{{\mathcal {C}}} \xi \) and \(\varvec{\mathrm {w}}=(w_x)_{x \in [N]}\) its corresponding eigenvector.

  1. (i)

    If \(x \in {{\mathcal {V}}}\) and \(\varvec{\mathrm {v}}_\pm ^{\tau }(x)\perp \varvec{\mathrm {w}}\) or if \(x \in {{\mathcal {V}}}_\tau \setminus {{\mathcal {V}}}\) then

    $$\begin{aligned} \frac{|w_{x}|}{\Vert \varvec{\mathrm {w}}|_{B_{2r_\star }^\tau (x)}\Vert }\leqslant \frac{\lambda ^2}{(\lambda -2 \tau - {{\mathcal {C}}}\xi )^{2}}\bigg (\frac{2 \tau +{{\mathcal {C}}} \xi }{\lambda }\bigg )^{r_{\star }}\,. \end{aligned}$$
  2. (ii)

    Let \(\varvec{\mathrm {w}}\) be normalized. If \(\varvec{\mathrm {v}}_\pm ^{\tau }(x)\perp \varvec{\mathrm {w}}\) for all \(x \in {{\mathcal {V}}}\) then

    $$\begin{aligned} \sum _{x\in {{\mathcal {V}}}_\tau }w_{x}^{2}\leqslant \frac{\lambda ^4}{(\lambda -2 \tau -{{\mathcal {C}}} \xi )^{4}}\bigg (\frac{2 \tau +{{\mathcal {C}}} \xi }{\lambda }\bigg )^{2r_{\star }}\,. \end{aligned}$$

Analogous results hold for \(\lambda < -2 \tau - {{\mathcal {C}}} \xi \).

We may now conclude the proof of Proposition 3.12.

Proof of Proposition 3.12

By Proposition 3.13, Lemma 3.11, and Lemma 3.8 we have

$$\begin{aligned} \begin{aligned} \widehat{H}^\tau&\leqslant I+(1+2 d^{-1/2}) Q +{\mathcal {C}}\frac{\log N}{d^{2}}\vee d^{-1/2}+\Vert H-H^\tau \Vert +\Vert H^\tau - \widehat{H}^\tau \Vert \\&\leqslant I+(1+2 d^{-1/2}) Q +{{\mathcal {C}}} (\xi +\xi _{\tau -1}) \end{aligned} \end{aligned}$$
(3.42)

with very high probability, where we used \(\frac{\log N}{d^{2}}\vee d^{-1/2} \leqslant (\xi +\xi _{\tau -1})\).

Arguing by contradiction, we assume that there exists an eigenvalue \(\lambda > 2\tau + {{\mathcal {C}}}' (\xi +\xi _{\tau -1})\) of \(\overline{\Pi } \!\,^\tau H^\tau \overline{\Pi } \!\,^\tau \) for some \({{\mathcal {C}}}' \geqslant 2 {{\mathcal {C}}}\) to be chosen later. By the lower bound in (1.10), we may assume that \({{\mathcal {C}}}' \xi \leqslant 1\). Thus, by the definition of \(\widehat{H}^\tau \), there is an eigenvector \(\varvec{\mathrm {w}}\) of \(\widehat{H}^\tau \) corresponding to \(\lambda \), which is orthogonal to \(\varvec{\mathrm {v}}_\pm ^\tau (x)\) for all \(x \in {{\mathcal {V}}}\). From (3.42), we conclude

$$\begin{aligned}&\lambda = \langle \varvec{\mathrm {w}}, \widehat{H}^\tau \varvec{\mathrm {w}} \rangle \leqslant 1+(1+2 d^{-1/2}) \sum _{x \notin {{\mathcal {V}}}_\tau }w_x^{2} \tau \nonumber \\&+(1+2 d^{-1/2}) \sum _{x \in {{\mathcal {V}}}_\tau }w_x^{2} \max _{y\in [N]} \alpha _y +{\mathcal {C}} (\xi +\xi _{\tau -1}). \end{aligned}$$
(3.43)

It remains to estimate the two sums on the right-hand side of (3.43).

Since \(\varvec{\mathrm {w}} \perp \varvec{\mathrm {v}}^\tau _\pm (x)\) for all \(x \in {{\mathcal {V}}}\), we can apply Proposition 3.14 (ii). We find

$$\begin{aligned} 2r_\star \log \left( \frac{2 \tau +{{\mathcal {C}}} \xi }{\lambda }\right)\leqslant & {} 2r_\star \log \left( \frac{2\tau +{{\mathcal {C}}} \xi }{2\tau +{{\mathcal {C}}}' \xi }\right) \leqslant - 2 r_\star \frac{({{\mathcal {C}}}' - {{\mathcal {C}}}) \xi }{2\tau + {{\mathcal {C}}}' \xi }\nonumber \\\leqslant & {} - \frac{c ({{\mathcal {C}}}' - {{\mathcal {C}}})}{3} \sqrt{\log N} \, \xi \,, \end{aligned}$$
(3.44)

where in the last step we recalled the definition (1.8) and used that \(\tau \leqslant 2\) and \({{\mathcal {C}}}' \xi \leqslant 1\). Using the estimate

$$\begin{aligned} \frac{\lambda ^4}{(\lambda -2\tau -{{\mathcal {C}}} \xi )^{4}} \leqslant \frac{C}{(\mathcal C' - {{\mathcal {C}}})^4 \xi ^4}\,, \end{aligned}$$

combined with Proposition 3.14 (ii), (3.44) and Lemma A.7, yields

$$\begin{aligned} \frac{1}{\xi } \sum _{x \in {{\mathcal {V}}}_\tau }w_x^{2} \max _{y\in [N]} \alpha _y&\leqslant \frac{C \log N}{({{\mathcal {C}}}' - {{\mathcal {C}}})^4 \xi ^5} \exp \biggl (- \frac{c ({{\mathcal {C}}}' - {{\mathcal {C}}})}{3} \sqrt{\log N} \, \xi \biggr ) \\&\leqslant \frac{C d^5 \log N}{({{\mathcal {C}}}' - {{\mathcal {C}}})^4} \exp \biggl (- \frac{c ({{\mathcal {C}}}' - {{\mathcal {C}}})}{3} \frac{\log N}{d} \log d\biggr ) \\&\leqslant \frac{C d^5 \log N}{({{\mathcal {C}}}' - {{\mathcal {C}}})^4} \frac{1}{d^8} \leqslant 1\,, \end{aligned}$$

where the third step follows by choosing \({{\mathcal {C}}}'\) large enough, depending on \({{\mathcal {C}}}\).

Plugging this estimate into (3.43) and using \(\sum _x w_x^2\leqslant 1\) to estimate the first sum in (3.43), we obtain \(\lambda \leqslant 2\tau + 2 {\mathcal {C}} (\xi +\xi _{\tau -1})\). This is a contradiction to the assumption \(\lambda > 2\tau + {{\mathcal {C}}}' (\xi +\xi _{\tau -1})\). The proof of Proposition 3.12 is therefore complete. \(\quad \square \)

Proof of Proposition 3.13

We only establish an upper bound on H. The proof of the same upper bound on \(-H\) is identical and, therefore, omitted.

We introduce the matrices \(H(t) = (H_{xy}(t))_{x, y \in [N]}\) and \(M(t) = (\delta _{xy} m_x(t))_{x,y \in [N]}\) with entries

$$\begin{aligned} H_{xy}(t) :=\frac{tH_{xy}}{t^2 -H_{xy}^2},\quad m_x(t):=1+\sum _y \frac{H_{xy}^2}{t^2-H_{xy}^2}\,. \end{aligned}$$

By the estimate on the spectral radius of the nonbacktracking matrix associated with H in [15, Theorem 2.5] and the Ihara–Bass-type formula in [15, Lemma 4.1] we have, with very high probability, \(\det (M(t)-H(t))\ne 0\) for all \(t\geqslant 1+{{\mathcal {C}}} d^{-1/2}\). Because \((M(t)-H(t))\rightarrow I\) as \(t\rightarrow \infty \), the matrix \(M(t)-H(t)\) is positive definite for large enough t. By continuity of the eigenvalues, we conclude that all eigenvalues of \(M(t)-H(t)\) stay positive for \(t\geqslant 1+\mathcal C d^{-1/2}\), and hence

$$\begin{aligned} H(t)\leqslant M(t) \end{aligned}$$
(3.45)

for all \(t\geqslant 1+{{\mathcal {C}}} d^{-1/2}\) with very high probability. We now define the matrix \(\Delta = (\Delta _{xy})_{x,y \in [N]}\) with

$$\begin{aligned} \Delta _{xy}:={\left\{ \begin{array}{ll} H_{xy}(t)-t^{-1}H_{xy} &{} \text { if }x\ne y \\ \sum _{y'} |H_{xy'}(t)-t^{-1}H_{xy'}| &{} \text { if }x=y\,. \end{array}\right. }\end{aligned}$$

It is easy to check that \(\Delta \) is a nonnegative matrix: it is symmetric with nonnegative diagonal entries, and by construction each diagonal entry dominates the sum of the absolute values of the off-diagonal entries in its row, so that \(\Delta \geqslant 0\) follows from the Gershgorin circle theorem. We also have

$$\begin{aligned}\sum _{y'} |H_{xy'}(t)-t^{-1}H_{xy'}|\leqslant \sum _{y'} \frac{|H_{xy'}|^3}{t(t^2 -H_{xy'}^2)}\leqslant \frac{2}{t^3d^{1/2}}\bigg (\alpha _x+\frac{1}{d} \bigg )\,, \end{aligned}$$

where in the first step we used the identity \(H_{xy'}(t)-t^{-1}H_{xy'} = H_{xy'}^3/\bigl (t(t^2 - H_{xy'}^2)\bigr )\), and in the second step that \(|H_{xy} | \leqslant d^{-1/2}\), that \(\sum _{y'} H_{x y'}^2 \leqslant \alpha _x + \frac{d}{N}\) by definition of H, and that \(d/N \leqslant 1/d\). We use this to estimate the diagonal entries of \(\Delta \) and obtain

$$\begin{aligned} 0\leqslant \Delta \leqslant H(t)-t^{-1}H+\frac{2}{t^3\sqrt{d}}Q+\frac{2}{t^3 d^{3/2}}. \end{aligned}$$
(3.46)

On the other hand, for the diagonal matrix M(t), we have the trivial upper bound

$$\begin{aligned} M(t)\leqslant I +t^{-2}Q+{\mathcal {C}}\frac{\log N}{d^2} \end{aligned}$$
(3.47)

since \(\alpha _x \leqslant {\mathcal {C}} (\log N)/d\) with very high probability due to Lemma A.7. Finally, combining (3.45), (3.46) and (3.47) yields

$$\begin{aligned} t^{-1} H\leqslant I +\biggl (t^{-2}+\frac{2}{t^3\sqrt{d}}\biggr )Q+{\mathcal {C}}\frac{\log N}{d^2} \end{aligned}$$

and Proposition 3.13 follows by choosing \(t= 1+{{\mathcal {C}}} d^{-1/2}\). \(\quad \square \)

What remains is the proof of Proposition 3.14. The underlying principle behind the proof is the same as that of the Combes–Thomas estimate [25]: the Green function \(((\lambda - Z)^{-1})_{ij}\) of a local operator Z at a spectral parameter \(\lambda \) separated from the spectrum of Z decays exponentially in the distance between i and j, at a rate proportional to the distance from \(\lambda \) to the spectrum of Z (cf. (3.57)). Here local means that \(Z_{ij}\) vanishes if the distance between i and j is larger than 1. Since a graph is equipped with a natural notion of distance and the adjacency matrix is a local operator, a Combes–Thomas estimate would be applicable directly on the level of the graph, at least for the matrix \(H^\tau \). For our purposes, however, we need a radial version of a Combes–Thomas estimate, obtained by first tridiagonalizing (a modification of) \(\widehat{H}^\tau \) around a vertex \(x \in {{\mathcal {V}}}_\tau \) (see Appendix A.2). In this formulation, the indices i and j have the interpretation of radii around the vertex x, and the notion of distance is simply that of \({\mathbb {N}}\) on the set of radii. Since Z is tridiagonal, the locality of Z is trivial, although the matrix \(\widehat{H}^\tau \) (or its appropriate modification) is not a local operator on the graph \({\mathbb {G}}_\tau \).
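The following sketch (ours, purely illustrative; the matrix size and the entry distributions are arbitrary choices) exhibits this exponential off-diagonal decay of the resolvent of a tridiagonal matrix numerically:

```python
import numpy as np

# Illustration of the Combes-Thomas mechanism behind Proposition 3.14: the
# resolvent of a tridiagonal matrix Z at a spectral parameter lam > ||Z||
# decays exponentially in the index distance. All parameters are ad hoc.
rng = np.random.default_rng(0)
r = 40
diag = rng.uniform(-0.5, 0.5, size=r)       # diagonal entries of Z
off = rng.uniform(0.5, 1.0, size=r - 1)     # nearest-neighbour couplings
Z = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

normZ = np.linalg.norm(Z, 2)
lam = normZ + 0.5                           # lam separated from the spectrum of Z
G = np.linalg.inv(lam * np.eye(r) - Z)      # G(lam) = (lam - Z)^{-1}

# The Neumann series argument of (3.57) predicts |G_{0j}| <= (normZ/lam)^j / (lam - normZ).
slope = np.polyfit(np.arange(r), np.log(np.abs(G[0, :])), 1)[0]
print(f"empirical log-slope: {slope:.3f}, Combes-Thomas rate: {np.log(normZ/lam):.3f}")
```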

To ensure the separation of \(\lambda > 2\tau + o(1)\) and the spectrum of Z, we cannot choose Z to be the tridiagonalization of \(\widehat{H}^\tau \), since \(\lambda \) is an eigenvalue of \(\widehat{H}^\tau \). In fact, Z is the tridiagonalization of a new matrix \(\widehat{H}^{\tau ,x}\), obtained by restricting \(\widehat{H}^\tau \) to the ball \(B^\tau _{2 r_\star }(x)\) and possibly subtracting a suitably chosen rank-two matrix, which allows us to show \(\Vert \widehat{H}^{\tau , x} \Vert \leqslant 2 \tau + o(1)\). By the orthogonality assumption on \(\varvec{\mathrm {w}}\), we then find that the Green function \(((\lambda - Z)^{-1})_{i r_\star }\), \(0 \leqslant i < r_\star \), and the eigenvector components in the radial basis \(u_i\), \(0 \leqslant i < r_\star \), satisfy the same linear difference equation. The exponential decay of \(((\lambda - Z)^{-1})_{i r_\star }\) in \(r_\star - i\) then implies that, for each \(x \in {{\mathcal {V}}}_\tau \), \(u_0^2 \leqslant o(1/\log N) \sum _{i = 0}^{r_\star } u_i^2\). Going back to the original vertex basis, this implies that \(w_x^2 \leqslant o(1/\log N) \Vert \varvec{\mathrm {w}}|_{B_{2r_\star }^\tau (x)}\Vert ^2\) for all \(x \in {{\mathcal {V}}}_\tau \), from which Proposition 3.14 follows since the balls \(B_{2r_\star }^\tau (x)\), \(x \in {{\mathcal {V}}}_\tau \), are disjoint.

Proof of Proposition 3.14

For a matrix \(M \in {\mathbb {R}}^{N \times N}\) and a set \(V \subset [N]\), we use the notation \((M \vert _V)_{xy} :=\mathbb {1}_{x,y \in V} M_{xy}\).

We begin with part (i). We first treat the case \(x\in {{\mathcal {V}}}\). To that end, we introduce the matrix

$$\begin{aligned} \widehat{H}^{\tau ,x} :=\widehat{H}^{\tau }|_{B_{2r_{\star }}^{\tau }(x)}-\Lambda (\alpha _{x}) \varvec{\mathrm {v}}_+^{\tau }(x)\varvec{\mathrm {v}}_+^{\tau }(x)^{*}+\Lambda (\alpha _{x}) \varvec{\mathrm {v}}_{-}^{\tau }(x)\varvec{\mathrm {v}}_{-}^{\tau }(x)^{*}\,. \end{aligned}$$
(3.48)

We claim that, with very high probability,

$$\begin{aligned} \Vert \widehat{H}^{\tau ,x} \Vert \leqslant 2 \tau +{{\mathcal {C}}}\xi \,. \end{aligned}$$
(3.49)

To show (3.49), we begin by noting that, by Proposition 3.1 (i) and (ii), \({\mathbb {G}}_{\tau }\) restricted to \(B_{2r_{\star }}^{\tau }(x)\) is a tree whose root x has \(\alpha _x d\) children and all other vertices have at most \(\tau d\) children. Hence, Lemma A.5 yields \(\bigl \Vert H^{\tau }|_{B_{2r_{\star }}^{\tau }(x)} \bigr \Vert \leqslant \sqrt{\tau } \Lambda (\alpha _x /\tau \vee 2)\). Using Lemma 3.11 we find

$$\begin{aligned} \Vert \widehat{H}^{\tau }|_{B_{2r_{\star }}^\tau (x)}-H^{\tau }|_{B_{2r_{\star }}^\tau (x)}\Vert \leqslant \mathcal C \xi \end{aligned}$$
(3.50)

with very high probability, and since \(\varvec{\mathrm {v}}_\pm ^\tau (x)\) is an eigenvector of \(\widehat{H}^\tau |_{B_{2r_{\star }}^{\tau }(x)}\) with eigenvalue \(\pm \Lambda (\alpha _x)\), we conclude

$$\begin{aligned} \Vert \widehat{H}^{\tau ,x} \Vert \leqslant \sqrt{\tau } \Lambda (\alpha _x /\tau \vee 2) + {{\mathcal {C}}} \xi \end{aligned}$$
(3.51)

with very high probability. The estimate (3.51) is rough in the sense that the subtraction of the two last terms of (3.48) is not needed for its validity (since \(\Lambda (\alpha _x) \leqslant \sqrt{\tau } \Lambda (\alpha _x/\tau \vee 2)\)). Nevertheless, it is sufficient to establish (3.49) in the following cases, which may be considered degenerate.

If \(\alpha _x \leqslant 2 \tau \) then (3.51) immediately implies (3.49), since \(\sqrt{\tau } \leqslant \tau \). Moreover, if \(\alpha _x > 2 \tau \) and \(\Lambda (\alpha _x) \leqslant 2 \sqrt{\tau } + {{\mathcal {C}}} \xi \), then (3.51) implies

$$\begin{aligned} \Vert \widehat{H}^{\tau ,x} \Vert \leqslant \sqrt{\tau } \Lambda (\alpha _x /\tau ) + {{\mathcal {C}}} \xi \leqslant \sqrt{\tau } \Lambda (\alpha _x) + {{\mathcal {C}}} \xi \leqslant 2 \tau + 3 {{\mathcal {C}}} \xi \,, \end{aligned}$$

which is (3.49) after renaming the constant \({{\mathcal {C}}}\).

Hence, to prove (3.49), it suffices to consider the case \(\Lambda (\alpha _x) > 2 \sqrt{\tau } + {{\mathcal {C}}} \xi \). By Proposition 3.1 (i) and (ii), \({\mathbb {G}}_{\tau }\) restricted to \(B_{2r_{\star }}^{\tau }(x) \setminus \{x\}\) is a forest of maximal degree at most \(\tau d\). Lemma A.4 therefore yields \(\Vert H^{\tau }|_{B_{2r_{\star }}^\tau (x) \setminus \{x\}}\Vert \leqslant 2\sqrt{\tau }\). Moreover, the adjacency matrix of the star graph consisting of all edges of \({\mathbb {G}}_\tau \) incident to x has precisely two nonzero eigenvalues, \(\pm \sqrt{d \alpha _x}\). By first order perturbation theory, we therefore conclude that \(H^{\tau }|_{B_{2r_{\star }}^\tau (x)}\) has at most one eigenvalue strictly larger than \(2\sqrt{\tau }\) and at most one strictly smaller than \(-2\sqrt{\tau }\). Using (3.50) we conclude that \(\widehat{H}^{\tau }|_{B_{2r_{\star }}^\tau (x)}\) has at most one eigenvalue strictly larger than \(2\sqrt{\tau }+{{\mathcal {C}}} \xi \) and at most one strictly smaller than \(-2\sqrt{\tau }-{{\mathcal {C}}} \xi \). Since \(\varvec{\mathrm {v}}_+^{\tau }(x)\) (respectively \(\varvec{\mathrm {v}}_-^{\tau }(x)\)) is an eigenvector of \(\widehat{H}^{\tau }|_{B_{2r_{\star }}^\tau (x)}\) with eigenvalue \(\Lambda (\alpha _{x})\) (respectively \(-\Lambda (\alpha _{x})\)), and since \(\Lambda (\alpha _x) > 2 \sqrt{\tau } + {{\mathcal {C}}} \xi \), we conclude (3.49).

Next, let \((\varvec{\mathrm {g}}_i)_{i=0}^{r_\star }\) be the Gram–Schmidt orthonormalization of the vectors \(((\widehat{H}^{\tau ,x})^i \varvec{\mathrm {1}}_x)_{i=0}^{r_\star }\). We claim that

$$\begin{aligned} {{\,\mathrm{supp}\,}}\varvec{\mathrm {g}}_i \subset B_{r_\star +i}^\tau (x) \end{aligned}$$
(3.52)

for \(i = 0, \ldots , r_\star \). The proof proceeds by induction. The base case for \(i =0\) holds trivially. For the induction step, it suffices to prove for \(0 \leqslant i < r_\star \) that if \({{\,\mathrm{supp}\,}}\varvec{\mathrm {g}}_i \subset B_{r_\star +i}^\tau (x)\) then

$$\begin{aligned} {{\,\mathrm{supp}\,}}(\widehat{H}^{\tau ,x} \varvec{\mathrm {g}}_i) \subset B_{r_\star +i+1}^\tau (x)\,. \end{aligned}$$
(3.53)

To that end, we note that by Proposition 3.1 (i) we have \(\widehat{H}^{\tau ,x} = \bigl (\overline{\Pi } \!\,^{\tau } H^\tau \overline{\Pi } \!\,^\tau \bigr ) |_{B_{2r_{\star }}^{\tau }(x)}\). Hence, by the induction assumption, Proposition 3.1 (i), and Remark 3.7,

$$\begin{aligned} \widehat{H}^{\tau ,x} \varvec{\mathrm {g}}_i = \biggl (I - \sum _{\sigma = \pm } \varvec{\mathrm {v}}^\tau _\sigma (x) \varvec{\mathrm {v}}^\tau _\sigma (x)^*\biggr ) H^\tau \biggl (I - \sum _{\sigma = \pm } \varvec{\mathrm {v}}^\tau _\sigma (x) \varvec{\mathrm {v}}^\tau _\sigma (x)^*\biggr ) \varvec{\mathrm {g}}_i\,, \end{aligned}$$

and we conclude (3.53), as \({{\,\mathrm{supp}\,}}\varvec{\mathrm {v}}_\sigma ^\tau (x) \subset B_{r_\star }^\tau (x)\).

Let \(Z :=\bigl (\langle \varvec{\mathrm {g}}_i, \widehat{H}^{\tau ,x} \varvec{\mathrm {g}}_j \rangle \bigr )_{i,j=0}^{r_\star }\) be the tridiagonal representation of \(\widehat{H}^{\tau ,x}\) up to radius \(r_\star \) (see Appendix A.2 below); it is indeed tridiagonal, since \(\widehat{H}^{\tau ,x}\) maps \({{\,\mathrm{span}\,}}(\varvec{\mathrm {g}}_0, \dots , \varvec{\mathrm {g}}_j)\) into \({{\,\mathrm{span}\,}}(\varvec{\mathrm {g}}_0, \dots , \varvec{\mathrm {g}}_{j+1})\) by the construction of the Gram–Schmidt basis. Owing to (3.49), we have

$$\begin{aligned} \Vert Z\Vert \leqslant 2 \tau +{{\mathcal {C}}} \xi . \end{aligned}$$
(3.54)

We set \(u_{i}:=\langle \varvec{\mathrm {g}}_{i},\varvec{\mathrm {w}}\rangle \) for any \(0 \leqslant i \leqslant r_\star \). Because \(\varvec{\mathrm {w}}\) is an eigenvector of \(\widehat{H}^{\tau }\) that is orthogonal to \(\varvec{\mathrm {v}}_\pm ^\tau (x)\), for any \(i<r_\star \), (3.52) implies

$$\begin{aligned} \begin{aligned} \lambda u_i&= \left\langle \varvec{\mathrm {g}}_i,\left( \widehat{H}^\tau -\Lambda (\alpha _{x}) \varvec{\mathrm {v}}_+^{\tau }(x)\varvec{\mathrm {v}}_+^{\tau }(x)^{*}+\Lambda (\alpha _{x}) \varvec{\mathrm {v}}_{-}^{\tau }(x)\varvec{\mathrm {v}}_{-}^{\tau }(x)^{*}\right) \varvec{\mathrm {w}}\right\rangle \\&=\left\langle \widehat{H}^{\tau ,x}\varvec{\mathrm {g}}_i, \varvec{\mathrm {w}} \right\rangle \\&= \langle Z_{ii}\varvec{\mathrm {g}}_{i}+Z_{i\,i+1}\varvec{\mathrm {g}}_{i+1}+Z_{i\,i-1}\varvec{\mathrm {g}}_{i-1}, \varvec{\mathrm {w}}\rangle \\&=Z_{ii}u_{i}+Z_{i\,i+1}u_{i+1}+Z_{i\,i-1}u_{i-1}\, \end{aligned} \end{aligned}$$
(3.55)

with the conventions \(u_{-1}=0\) and \(Z_{0,-1}=0\). Let \(G(\lambda ) :=(\lambda - Z)^{-1}\) be the resolvent of Z at \(\lambda \). Note that \(\lambda -Z\) is invertible since \(\lambda > \Vert Z \Vert \) by assumption and (3.54). Since \(\bigl ((\lambda - Z) G(\lambda )\bigr )_{i \, r_{\star }} = 0\) for \(i<r_\star \), we find

$$\begin{aligned} \lambda G_{i r_{\star }}(\lambda )=Z _{ii}G_{i r_{\star }}(\lambda )+Z _{i\,i+1}G_{i +1 \, r_{\star }}(\lambda )+Z _{i\,i-1}G_{i -1\, r_{\star }}(\lambda ). \end{aligned}$$

Therefore \((G_{i r_{\star }}(\lambda ))_{i\leqslant r_\star }\) and \(\left( u_{i}\right) _{i\leqslant r_\star }\) satisfy the same linear recursive equation (cf. (3.55)); solving them recursively from \(i = 0\) to \(i = r_\star \) yields

$$\begin{aligned} \frac{G_{i r_{\star }}(\lambda )}{G_{r_\star r_{\star }}(\lambda )}=\frac{u_{i}}{u_{r_{\star }}} \end{aligned}$$
(3.56)

for all \(i\leqslant r_{\star }\). Moreover, as \(\lambda >\Vert Z\Vert \) by assumption and (3.54), we have the convergent Neumann series \(G(\lambda )= \frac{1}{\lambda }\sum _{k\geqslant 0}(Z / \lambda )^{k}\). Thus, the offdiagonal entries of the resolvent satisfy

$$\begin{aligned} G_{0 r_{\star }}(\lambda ) =\frac{1}{\lambda }\sum _{k \geqslant 0} \bigl ((Z / \lambda )^{k}\bigr )_{0 r_\star }\,. \end{aligned}$$

Since Z is tridiagonal, we deduce that \(\bigl ((Z / \lambda )^k\bigr )_{0 r_\star } = 0\) if \(k < r_\star \), so that, by (3.54),

$$\begin{aligned} |G_{0 r_{\star }}(\lambda )| \leqslant \bigg (\frac{2 \tau +{{\mathcal {C}}} \xi }{\lambda }\bigg )^{r_{\star }}\frac{1}{\lambda -2\tau -{{\mathcal {C}}} \xi }\,. \end{aligned}$$
(3.57)

On the other hand, for the diagonal entries of the resolvent, we get, by splitting the summation over k into even and odd values,

$$\begin{aligned}&G_{r_\star r_\star }(\lambda ) = \frac{1}{\lambda } \sum _{k \geqslant 0} \bigl ((Z/\lambda )^k\bigr )_{r_\star r_\star } = \frac{1}{\lambda } \sum _{k \geqslant 0} \Bigl ((Z/\lambda )^{k} (I + Z/\lambda ) (Z/\lambda )^{k}\Bigr )_{r_\star r_\star } \nonumber \\&\quad \geqslant \frac{1}{\lambda } (I + Z/\lambda )_{r_\star r_\star } \geqslant \frac{1}{\lambda }\bigg (1-\frac{2\tau +{{\mathcal {C}}} \xi }{\lambda }\bigg )\,, \end{aligned}$$
(3.58)

where in the third step we discarded the terms \(k > 0\) to obtain a lower bound using that \(I + Z/\lambda \geqslant 0\) by (3.54), and in the last step we used (3.54). Hence, the definition of \(u_i\) and (3.52) imply

$$\begin{aligned} \frac{|w_{x}|}{\Vert \varvec{\mathrm {w}}|_{B^\tau _{2r_{\star }}(x)}\Vert }\leqslant \frac{|u_{0}|}{\left( \sum _{i=0}^{r_\star } u_{i}^2\right) ^{1/2}}\leqslant \frac{|u_{0}|}{|u_{r_{\star }}|}=\frac{|G_{0r_{\star }}(\lambda ) |}{G_{r_{\star } r_{\star }}(\lambda )}\leqslant \frac{\lambda ^2}{(\lambda -2\tau -{{\mathcal {C}}} \xi )^{2}}\bigg (\frac{2\tau +{{\mathcal {C}}} \xi }{\lambda }\bigg )^{r_{\star }}. \end{aligned}$$

Here, we used (3.56) in the third step and (3.57) as well as (3.58) in the last step. This concludes the proof of (i) for \(x \in {{\mathcal {V}}}\).

In the case \(x\in {{\mathcal {V}}}_\tau \setminus {{\mathcal {V}}}\), we set \(\widehat{H}^{\tau ,x} :=\widehat{H}^{\tau } |_{B_{2r_{\star }}^{\tau }(x)}\). We claim that (3.49) holds. To see that, we use Proposition 3.1 (i) and (ii) as well as Lemma A.5 with \(p = d(2 + \xi ^{1/4})\) and \(q = d \tau \) to obtain

$$\begin{aligned} \Vert H^\tau |_{B_{2r_{\star }}^{\tau }(x)} \Vert \leqslant \sqrt{\tau } \Lambda ((2 + \xi ^{1/4})/\tau \vee 2) \leqslant 2 \tau . \end{aligned}$$

Here, the last step is trivial if \(\tau \geqslant 1 + \xi ^{1/4}/2\) and, if \(\tau \in [1 + \xi ^{1/2}, 1 + \xi ^{1/4}/2]\), we used that \(f(\tau ) :=\sqrt{\tau }\Lambda ((2 + \xi ^{1/4}) / \tau )/(2\tau )\) is monotonically decreasing on this interval and \(f(1 + \xi ^{1/2}) \leqslant 1\), as can be seen by an explicit analysis of the function f. The previous argument now applies verbatim and proves (i) for \(x \in {{\mathcal {V}}}_\tau \setminus {{\mathcal {V}}}\).

Finally, we prove (ii). By (i) we have

$$\begin{aligned}&\sum _{x\in {{\mathcal {V}}}_\tau }w_{x}^2 \leqslant \sum _{x \in {{\mathcal {V}}}_\tau } \Vert \varvec{\mathrm {w}}|_{B^\tau _{2r_{\star }}(x)}\Vert ^2 \frac{\lambda ^4}{(\lambda -2\tau -\mathcal C \xi )^{4}}\bigg (\frac{2\tau +{{\mathcal {C}}} \xi }{\lambda }\bigg )^{2 r_{\star }}\\&\leqslant \frac{\lambda ^4}{(\lambda -2\tau -{{\mathcal {C}}} \xi )^{4}}\bigg (\frac{2\tau +{{\mathcal {C}}} \xi }{\lambda }\bigg )^{2 r_{\star }}\,, \end{aligned}$$

where we used that the balls \(\{B^\tau _{2r_\star }(x) :x \in {{\mathcal {V}}}_\tau \}\) are disjoint, which implies \(1=\Vert \varvec{\mathrm {w}}\Vert ^2 \geqslant \sum _{x\in {{\mathcal {V}}}_\tau } \Vert \varvec{\mathrm {w}}|_{B^\tau _{2r_{\star }}(x)}\Vert ^2\). \(\quad \square \)

3.5 Proof of Proposition 3.1

We conclude this section with the proof of Proposition 3.1.

Proof of Proposition 3.1

Parts (i)–(v) follow immediately from parts (i)–(iv) and (vi) of [10, Lemma 7.2]. To see this, we remark that the function h from [10] satisfies \(h((\tau - 1)/2) \asymp (\tau - 1)^2\) for \(1 < \tau \leqslant 2\). Moreover, by Lemma A.7 and the upper bound on d, we have \(\max _x D_x \leqslant {{\mathcal {C}}} \log N\) with very high probability. Hence, choosing the universal constant c small enough in (1.8) and recalling the lower bound on \(\tau - 1\), in the notation of [10, Equations (5.1) and (7.2)] we obtain for any \(x \in {{\mathcal {V}}}_\tau \) the inequality \(2 r_\star \leqslant (\frac{1}{4} r_x ) \wedge (\frac{1}{2} r(\tau ))\) with very high probability. This yields parts (i)–(v).

It remains to prove (vi), which is the content of the rest of this proof. From now on we systematically omit the argument x from our notation. Part (v) already implies the bound

$$\begin{aligned} |S_{1} \setminus S_{1}^{\tau }|=D_x^{{\mathbb {G}} \setminus {\mathbb {G}}_\tau }\leqslant {\mathcal {C}}\frac{\log N}{(\tau -1)^2 d} \end{aligned}$$
(3.59)

with very high probability, which is (3.2) for \(i=1\).

From [10, Eq. (7.13)] we find

$$\begin{aligned} |S_i \setminus S_i^\tau | \leqslant \sum _{y \in S_1 \setminus S_1^\tau } |S_{i-1}(y) |\,. \end{aligned}$$

(As a guide to the reader, this estimate follows from the construction of \({\mathbb {G}}_\tau \) given in [10, Proof of Lemma 7.2], which ensures that if a vertex \(z \in S_i\) is not in \(S_i^\tau \) then any path in \({\mathbb {G}}\) of length i connecting z to x is cut in \({\mathbb {G}}_\tau \) at its edge incident to x.) Hence, in order to show (vi) for \(i \geqslant 2\), it suffices to prove

$$\begin{aligned} \sum _{y \in S_1 \setminus S_1^\tau } |S_{i-1}(y) | \leqslant {\mathcal {C}}\frac{\log N}{(\tau -1)^2}d^{i-2} \end{aligned}$$
(3.60)

with very high probability, for all \(2 \leqslant i \leqslant 2 r_\star \).

We start with the case \(i=2\). We shall use the relation

$$\begin{aligned} \sum _{y \in S_1 \setminus S_1^\tau } |S_{1}(y) | = \sum _{y \in S_1 \setminus S_1^\tau } N_2(y) + \sum _{y \in S_1\setminus S_1^\tau } |S_1(y) \cap S_1 | + |S_1 \setminus S_1^\tau |\,, \end{aligned}$$
(3.61)

where, for \(y \in S_1\), we introduced \(N_2(y) :=|S_{1}(y) \cap S_2 |\). Note that \(N_2(y)\) is the number of vertices in \(S_2\) connected to x via a path of minimal length passing through y. The identity (3.61) is a direct consequence of \(|S_1(y) | = |S_1(y) \cap S_2 | + |S_1(y) \cap S_1 | + |S_1(y) \cap S_0 |\) using the definition of \(N_2\) and \(|S_1(y) \cap S_0 | = |S_1(y) \cap \{x \} | = 1\).

The second and third terms of (3.61) are smaller than the right-hand side of (3.60) for \(i=2\) due to [10, Eq. (5.23)] and (3.59), respectively. Hence, it remains to estimate the first term on the right-hand side of (3.61) in order to prove (3.60) for \(i =2\).

To that end, we condition on the ball \(B_1\) and abbreviate \({\mathbb {P}}_{B_1}(\cdot ) :={\mathbb {P}}(\, \cdot \mid B_1)\). Since

$$\begin{aligned} N_2(y) = \sum _{z \in [N] \setminus B_1} A_{yz}\,, \end{aligned}$$
(3.62)

we find that conditioned on \(B_1\) the random variables \((N_2(y))_{y \in S_1}\) are independent \({\text {Binom}}(N - |B_1 |, d/N)\) random variables. We abbreviate \(\Gamma :=\frac{\log N}{(\tau -1)^2}\). For given \({{\mathcal {C}}}, {{\mathcal {C}}}'\), we set \({{\mathcal {C}}}'' :={{\mathcal {C}}}' + 2 \mathcal C\) and estimate

$$\begin{aligned}&{\mathbb {P}}_{B_1}\Biggl (\sum _{y\in S_1\setminus S_1^\tau } N_2(y) \geqslant {{\mathcal {C}}}'' \Gamma \Biggr ) \nonumber \\&\leqslant {\mathbb {P}}_{B_1}\Biggl (\sum _{y\in S_1\setminus S_1^\tau } \mathbb {1}_{N_2(y) \geqslant 2 d}N_2(y)\geqslant ({{\mathcal {C}}}'' - 2 {{\mathcal {C}}} )\Gamma \Biggr )\nonumber \\&\quad +{\mathbb {P}}_{B_1}\Biggl (\sum _{y\in S_1\setminus S_1^\tau } \mathbb {1}_{N_2(y)< 2 d}N_2(y) \geqslant 2 {{\mathcal {C}}} \Gamma \Biggr ) \nonumber \\&\leqslant {\mathbb {P}}_{B_1}\Biggl (\sum _{y\in S_1} \mathbb {1}_{2d \leqslant N_2(y) \leqslant N^{1/4}}N_2(y) \geqslant {{\mathcal {C}}}'\Gamma \Biggr )\nonumber \\&\quad + \sum _{y \in S_1} {\mathbb {P}}_{B_1}\bigl (N_2(y) \geqslant N^{1/4}\bigr ) +{\mathbb {P}}_{B_1}\bigl ( |S_1\setminus S_1^\tau | \geqslant {{\mathcal {C}}} \Gamma d^{-1}\bigr ). \end{aligned}$$
(3.63)

In order to estimate the first term on the right-hand side of (3.63), we shall prove that if \(|B_1 | \leqslant N^{1/4}\) then

$$\begin{aligned} {\mathbb {E}}_{B_1} \Bigl [\exp \Bigl (\mathbb {1}_{2d \leqslant N_2(y) \leqslant N^{1/4}} N_2(y)t\Bigr )\Bigr ] \leqslant 2 \end{aligned}$$
(3.64)

for all \(y\in S_1\) and \(t \leqslant 1/8\). To that end, we estimate

$$\begin{aligned} {\mathbb {E}}_{B_1} \Bigl [\exp \Bigl (\mathbb {1}_{2d \leqslant N_2(y) \leqslant N^{1/4}} N_2(y)t\Bigr )\Bigr ] \leqslant 1 + {\mathbb {E}}_{B_1} \Bigl [\mathbb {1}_{2d \leqslant N_2(y) \leqslant N^{1/4}} \mathrm {e}^{N_2(y)t}\Bigr ]\,. \end{aligned}$$

Using the Poisson approximation (Lemma A.6 below), we obtain (assuming that 2d is an integer to simplify notation)

$$\begin{aligned} {\mathbb {E}}_{B_1} \Bigl [\mathbb {1}_{2d \leqslant N_2(y) \leqslant N^{1/4}} \mathrm {e}^{N_2(y)t}\Bigr ]&= \sum _{2d \leqslant k \leqslant N^{1/4}}\frac{(d - d |B_1 |/N)^k \mathrm {e}^{tk}}{k!}\mathrm {e}^{-d + d |B_1 |/N} \bigl (1+O(N^{-1/2})\bigr )\\&\quad \leqslant \sum _{k\geqslant 2 d}\frac{d^k \mathrm {e}^{tk}}{k!}\mathrm {e}^{-d} \bigl (1+O(N^{-1/2})\bigr )\\&\quad =\frac{d^{2d} \mathrm {e}^{2td}}{(2d)!}\mathrm {e}^{-d}\sum _{i\geqslant 0}\frac{d^i \mathrm {e}^{t i}}{\prod _{j=2 d+1}^{2d + i}j} \bigl (1+O(N^{-1/2})\bigr ) \\&\quad \leqslant \frac{d^{2d} \mathrm {e}^{2td}}{(2d)!}\mathrm {e}^{-d} \sum _{i\geqslant 0}\frac{2d^i \mathrm {e}^{t i}}{(2d)^i} = \frac{d^{2d} \mathrm {e}^{2td}}{(2d)!}\mathrm {e}^{-d}\frac{2}{1-\mathrm {e}^{t}/2}. \end{aligned}$$

By Stirling’s approximation we get

$$\begin{aligned} \log \left( \frac{d^{2d} \mathrm {e}^{2td}}{(2d)!}\mathrm {e}^{-d}\right)&= d\left( 2t- 2 \log 2+1 \right) -\frac{1}{2}\log (4 \pi d)+ o(1). \end{aligned}$$

The term in the parentheses on the right-hand side is negative for \(t \leqslant 1/8\) (at \(t = 1/8\) it equals \(5/4 - 2\log 2 \approx -0.14\)), and hence

$$\begin{aligned} {\mathbb {E}}_{B_1} \Bigl [\mathbb {1}_{2d \leqslant N_2(y) \leqslant N^{1/4}} \mathrm {e}^{N_2(y)t}\Bigr ] \leqslant 1 \end{aligned}$$

for large enough d, which gives (3.64). Since the family \((N_2(y))_{y \in S_1}\) is independent conditioned on \(B_1\), we can now use Chebyshev’s inequality to obtain, for \(0 \leqslant t \leqslant 1/8\),

$$\begin{aligned} {\mathbb {P}}_{B_1}\Biggl (\sum _{y\in S_1} \mathbb {1}_{2d \leqslant N_2(y) \leqslant N^{1/4}} N_2(y) \geqslant {{\mathcal {C}}}'\Gamma \Biggr )&\leqslant \frac{\max _{y \in S_1}\left( {\mathbb {E}}_{B_1} \exp \Bigl (\mathbb {1}_{2d \leqslant N_2(y) \leqslant N^{1/4}} N_2(y)t\Bigr )\right) ^{|S_1|}}{\mathrm {e}^{t {{\mathcal {C}}}'\Gamma }} \\&\leqslant \exp \left( |S_1 | \log 2 - {{\mathcal {C}}}'\frac{t}{(\tau -1)^2}\log N \right) \,. \end{aligned}$$

Now we set \(t = 1/8\), recall the bound \(\tau \leqslant 2\), plug this estimate back into (3.63), and take the expectation. We use Lemma A.7 to estimate \(|S_1 |\), which in particular implies that \(|B_1 | \leqslant N^{1/4}\) with very high probability; this concludes the estimate of the expectation of the first term of (3.63) by choosing \({{\mathcal {C}}}'\) large enough. Next, the expectation of the second term is easily estimated by Lemma A.7 since \(N_2(y)\) has law \({\text {Binom}}(N - |B_1 |, d/N)\) when conditioned on \(B_1\). Finally, the expectation of the last term of (3.63) is estimated by (3.59) by choosing \({{\mathcal {C}}}\) large enough. This concludes the proof of (3.60) for \(i = 2\).

We now prove (3.60) for \(i + 1\) with \(i\geqslant 2\) by induction. Using [10, Lemma 5.4 (ii)] combined with Lemma A.7, we deduce that

$$\begin{aligned}|S_{i}(y) |\leqslant d |S_{i-1}(y) |+{{\mathcal {C}}} \sqrt{d |S_{i-1}(y) |\log N}\end{aligned}$$

with very high probability for all \(y\in S_1\setminus S_1^{\tau }\) and all \(i \leqslant r_\star \). Therefore, using the induction assumption, i.e. (3.60) for i, we obtain

$$\begin{aligned} \sum _{y\in S_1\setminus S_1^\tau } |S_{i}(y) |&\leqslant {\mathcal {C}}\frac{\log N}{(\tau -1)^2}d^{i-1}+{{\mathcal {C}}} \sqrt{d \log N}\sum _{y\in S_1\setminus S_1^\tau }\sqrt{|S_{i-1}(y) |} \\&\leqslant {\mathcal {C}}\frac{\log N}{(\tau -1)^2}d^{i-1}+{{\mathcal {C}}} \sqrt{d \log N} |S_1\setminus S_1^\tau | \Biggl ( \sum _{y\in S_1\setminus S_1^\tau }\frac{|S_{i-1}(y) |}{|S_1\setminus S_1^\tau |}\Biggr )^{1/2} \\&\leqslant {\mathcal {C}}\frac{\log N}{(\tau -1)^2}d^{i-1}+{{\mathcal {C}}} \sqrt{d \log N} \frac{\log N}{d (\tau -1)^2} \sqrt{d^{i-1}} \end{aligned}$$

with very high probability, where we used the concavity of \(\sqrt{\,\cdot \,}\) in the second step, (3.59) and (3.60) for i in the last step. Since \(\sqrt{d^i \log N}\leqslant d^{i/2+1}\leqslant d^i\) for \(i\geqslant 2 \) and the sequence \((d^{1-i/2})_{i \in {\mathbb {N}}}\) is summable, this proves (3.60) for \(i+1\) with a constant \({{\mathcal {C}}}\) independent of i. This concludes the proof of Proposition 3.1. \(\quad \square \)

4 The Delocalized Phase

In this section we prove Theorem 1.8. In fact, we state and prove a more general result, Theorem 4.2 below, which immediately implies Theorem 1.8.

4.1 Local law

Theorem 4.2 is a local law for a general class of sparse random matrices of the form

$$\begin{aligned} M = H + f \varvec{\mathrm {e}} \varvec{\mathrm {e}}^*\,, \end{aligned}$$
(4.1)

where \(f \geqslant 0\) and \(\varvec{\mathrm {e}} :=N^{-1/2}(1,1,\dots ,1)^*\). Here H is a Hermitian random matrix satisfying the following definition.

Definition 4.1

Let \(0< d < N\). A sparse matrix is a complex Hermitian \(N\times N\) matrix \(H=H^* \in {\mathbb {C}}^{N \times N}\) whose entries \(H_{ij}\) satisfy the following conditions.

  1. (i)

    The upper-triangular entries (\(H_{ij}:1 \leqslant i \leqslant j\leqslant N\)) are independent.

  2. (ii)

    We have \({\mathbb {E}} H_{ij}=0\) and \( {\mathbb {E}} |H_{ij} |^2=(1 + O(\delta _{ij}))/N\) for all ij.

  3. (iii)

    Almost surely, \(|H_{ij} | \leqslant K d^{-1/2}\) for all ij and some constant K.

It is easy to check that the set of matrices M defined as in (4.1) and Definition 4.1 contains those from Theorem 1.8 (see the proof of Theorem 1.8 below). From now on we suppose that \(K = 1\) to simplify notation.

The local law for the matrix M established in Theorem 4.2 below provides control of the entries of the Green function

$$\begin{aligned} G(z) :=\big ( M - z\big )^{-1} \end{aligned}$$
(4.2)

for z in the spectral domain

$$\begin{aligned} {\mathbf {S}} \equiv \varvec{\mathrm {S}}_{\kappa , L, N} = {{\mathcal {S}}}_\kappa \times [N^{-1 + \kappa }, L] \end{aligned}$$
(4.3)

for some constant \(L \geqslant 1\). We also define the Stieltjes transform g of the empirical spectral measure of M given by

$$\begin{aligned} g(z) :=\frac{1}{N} \sum _{i =1}^N \frac{1}{\lambda _i(M) - z} = \frac{1}{N} {{\,\mathrm{Tr}\,}}G(z)\,. \end{aligned}$$
(4.4)

The limiting behaviour of G and g is governed by the following deterministic quantities. Denote by \({\mathbb {C}}_+ :=\{z \in {\mathbb {C}}:{{\,\mathrm{Im}\,}}z > 0\}\) the complex upper half-plane. For \(z \in {\mathbb {C}}_+\) we define m(z) as the Stieltjes transform of the semicircle law \(\mu _1\),

$$\begin{aligned} m(z) :=\int \frac{\mu _1(\mathrm {d}u)}{u - z} \,, \qquad \mu _1(\mathrm {d}u) :=\frac{1}{2 \pi } \sqrt{(4 - u^2)_+} \, \mathrm {d}u\,. \end{aligned}$$
(4.5)

An elementary argument shows that m(z) can be characterized as the unique solution m in \({\mathbb {C}}_+\) of the equation

$$\begin{aligned} \frac{1}{m(z)} = -z - m(z)\,. \end{aligned}$$
(4.6)
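Solving this quadratic equation explicitly yields

$$\begin{aligned} m(z) = \frac{-z + \sqrt{z^2 - 4}}{2}\,, \end{aligned}$$

where the branch of the square root is chosen so that \(m(z) \in {\mathbb {C}}_+\); this is the unique solution of (4.6) in \({\mathbb {C}}_+\).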

For \(\alpha \geqslant 0\) and \(z \in {\mathbb {C}}_+\) we define

$$\begin{aligned} m_\alpha (z) :=- \frac{1}{ z + \alpha m(z)}\,, \end{aligned}$$
(4.7)

so that \(m_1 = m\) by (4.6). In Lemma A.3 below we show that \(m_\alpha \) is bounded in the domain \(\varvec{\mathrm {S}}\), with a bound depending only on \(\kappa \).

For \(x \in [N]\) we denote the square Euclidean norm of the xth row of H by

$$\begin{aligned} \beta _x :=\sum _{y} |H_{xy} |^2\,, \end{aligned}$$
(4.8)

which should be thought of as the normalized degree of x; see Remark 4.3 below.

Theorem 4.2

(Local law for M). Fix \(0 < \kappa \leqslant 1/2\) and \(L \geqslant 1\). Let H be a sparse matrix as in Definition 4.1, define M as in (4.1) for some \(0 \leqslant f \leqslant N^{\kappa /6}\), and define G and g as in (4.2) and (4.4) respectively. Then with very high probability, for d satisfying (1.18), for all \(z \in \varvec{\mathrm {S}}\) we have

$$\begin{aligned} \max _{x,y \in [N]} \bigl |G_{xy}(z) - \delta _{xy} m_{\beta _x}(z) \bigr |&\leqslant {\mathcal {C}}\bigg ( \frac{\log N}{d^2} \bigg )^{1/3}\,, \end{aligned}$$
(4.9)
$$\begin{aligned} \bigl |g(z) - m(z) \bigr |&\leqslant {\mathcal {C}}\bigg ( \frac{\log N}{d^2} \bigg )^{1/3}\,. \end{aligned}$$
(4.10)

Proof of Theorem 1.8

Under the assumptions of Theorem 1.8 we find that \(M :=A / \sqrt{d}\) is of the form (4.1) for some H and f satisfying the assumptions of Theorem 4.2. Now Theorem 1.8 is a well-known consequence of Theorem 4.2 and the boundedness of \(m_{\alpha }(z)\) in (A.4) below. For the reader’s convenience, we give the short proof. Denoting the eigenvalues of M by \((\lambda _i(M))_{i \in [N]}\) and the associated eigenvectors by \((\varvec{\mathrm {w}}_i(M))_{i \in [N]}\), setting \(z = \lambda + \mathrm {i}\eta \) with \(\eta = N^{-1 + \kappa }\), by (4.9) and (A.4) we have with very high probability

$$\begin{aligned} {{\mathcal {C}}} \geqslant {{\,\mathrm{Im}\,}}G_{xx}(z) = \sum _{i \in [N]} \frac{\eta \, \langle \varvec{\mathrm {1}}_x, \varvec{\mathrm {w}}_i(M) \rangle ^2}{(\lambda _i(M) - \lambda )^2 + \eta ^2} \geqslant \frac{1}{\eta } \sum _{i :\lambda _i(M) = \lambda } \langle \varvec{\mathrm {1}}_x, \varvec{\mathrm {w}}_i(M) \rangle ^2\,, \end{aligned}$$

where in the last step we omitted all terms except i satisfying \(\lambda _i(M) = \lambda \). The claim follows by renaming \(\kappa \rightarrow \kappa / 2\). (Here we used that Theorem 4.2 holds also for random \(z \in \varvec{\mathrm {S}}\), as follows from a standard net argument; see e.g. [16, Remark 2.7].) \(\quad \square \)
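To make (4.10) concrete, here is a small numerical illustration (ours, not from the paper; the parameters N, d and the spectral parameter z are arbitrary choices in the regime \(d \gg \sqrt{\log N}\)):

```python
import numpy as np

# Illustration of the averaged local law (4.10): for M = A/sqrt(d) with A the
# adjacency matrix of G(N, d/N), the empirical Stieltjes transform g(z) should
# be close to the semicircle transform m(z). N, d, z are ad hoc choices.
rng = np.random.default_rng(1)
N, d = 2000, 25
A = (rng.random((N, N)) < d / N).astype(float)
A = np.triu(A, 1); A = A + A.T                   # adjacency matrix of G(N, d/N)
M = A / np.sqrt(d)

z = 0.8 + 0.05j
g = np.mean(1.0 / (np.linalg.eigvalsh(M) - z))   # g(z) = N^{-1} Tr (M - z)^{-1}
s = np.sqrt(z**2 - 4)
m = (-z + s) / 2 if ((-z + s) / 2).imag > 0 else (-z - s) / 2  # root of m^2+zm+1=0 in C_+
print(f"g(z) = {g:.4f}, m(z) = {m:.4f}")
```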

Remark 4.3

(Relation between \(\alpha _x\) and \(\beta _x\)). In the special case \(M = d^{-1/2} A\) with A the adjacency matrix of \({\mathbb {G}}(N,d/N)\), we have

$$\begin{aligned} \beta _x = \frac{1}{d} \sum _{y} \bigg (A_{xy} - \frac{d}{N}\bigg )^2 = \alpha _x + O \biggl (\frac{d (1 + \alpha _x)}{N}\biggr ) = \alpha _x + {{\mathcal {O}}} \biggl (\frac{d + \log N}{N}\biggr ) \end{aligned}$$

with very high probability, by Lemma A.7.

By definition, \(m_\alpha (z) \in {\mathbb {C}}_+\) for \(z \in {\mathbb {C}}_+\), i.e. \(m_\alpha \) is a Nevanlinna function, and \(\lim _{z \rightarrow \infty } z m_\alpha (z) = -1\). By the integral representation theorem for Nevanlinna functions, we conclude that \(m_\alpha \) is the Stieltjes transform of a Borel probability measure \(\mu _\alpha \) on \({\mathbb {R}}\),

$$\begin{aligned} m_\alpha (z) = \int \frac{\mu _\alpha (\mathrm {d}u)}{u - z}\,. \end{aligned}$$
(4.11)

Theorem 4.2 implies that the spectral measure of M at a vertex x is approximately \(\mu _{\beta _x}\) with very high probability.

Inverting the Stieltjes transform (4.11) and using the definitions (4.5) and (4.7), we find after a short calculation

$$\begin{aligned} \mu _\alpha (\mathrm {d}u) = g_\alpha (u) \, \mathrm {d}u + h_\alpha \delta _{s_\alpha }(\mathrm {d}u)+ h_\alpha \delta _{-s_\alpha }(\mathrm {d}u)\,, \end{aligned}$$
(4.12)

where

$$\begin{aligned} g_\alpha (u) :=\frac{\alpha \mathbb {1}_{|u | < 2}}{2\pi } \frac{\sqrt{4-u^2}}{(1-\alpha )u^2 + \alpha ^2}\,, \quad h_\alpha :=\mathbb {1}_{\alpha> 2} \frac{\alpha - 2}{2 \alpha - 2} + \frac{\mathbb {1}_{\alpha = 0}}{2}\,, \quad s_\alpha :=\mathbb {1}_{\alpha > 2} \Lambda (\alpha )\,. \end{aligned}$$

The family \((\mu _\alpha )_{\alpha \geqslant 0}\) contains the semicircle law (\(\alpha = 1\)), the Kesten-McKay law of parameter d (\(\alpha = d / (d - 1)\)), and the arcsine law (\(\alpha = 2\)). For rational \(\alpha = p/q\), the measure \(\mu _{p/q}\) can be interpreted as the spectral measure at the root of the infinite rooted (p, q)-regular tree, whose root has p children and all other vertices have q children. We refer to Appendix A.2 for more details. See Fig. 8 for an illustration of the measure \(\mu _\alpha \).
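As a quick sanity check of (4.12) (our own computation, using the substitution \(u = 2 \sin \theta \) to remove the square-root singularities at \(\pm 2\)), one can verify numerically that \(\mu _\alpha \) is a probability measure:

```python
import numpy as np
from scipy.integrate import quad

# Sanity check of (4.12): the total mass of mu_alpha should equal 1.
# The substitution u = 2 sin(theta) makes the integrand smooth on [-pi/2, pi/2].
def ac_mass(alpha):
    integrand = lambda t: (alpha / (2 * np.pi)) * (2 * np.cos(t))**2 / (
        (1 - alpha) * (2 * np.sin(t))**2 + alpha**2)
    return quad(integrand, -np.pi / 2, np.pi / 2)[0]

for alpha in (0.5, 1.0, 2.0, 3.0, 5.0):
    h = (alpha - 2) / (2 * alpha - 2) if alpha > 2 else 0.0  # atom h_alpha (alpha > 0)
    print(f"alpha = {alpha}: total mass = {ac_mass(alpha) + 2 * h:.6f}")
```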

Remark 4.4

Using a standard application of the Helffer-Sjöstrand formula (see e.g. [16, Section 8 and Appendix C]), we deduce from Theorem 4.2 the following local law for the spectral measure. Denote by \(\varrho _x\) the spectral measure of M at vertex x. Under the assumptions of Theorem 4.2, with very high probability, for any interval \(I \subset {{\mathcal {S}}}_\kappa \), we have

$$\begin{aligned} \varrho _x(I) = \mu _{\beta _x}(I) + {{\mathcal {O}}} \biggl (|I | \bigg ( \frac{\log N}{d^2} \bigg )^{1/3} + N^{\kappa - 1}\biggr )\,. \end{aligned}$$

The error is smaller than the left-hand side provided that \(|I | \geqslant {{\mathcal {C}}} N^{\kappa - 1}\).

Fig. 8

An illustration of the probability measure \(\mu _\alpha \) for various values of \(\alpha \). For \(\alpha > 2\), \(\mu _\alpha \) has two atoms which we draw using vertical lines. The measure \(\mu _\alpha \) is the semicircle law for \(\alpha = 1\), the arcsine law for \(\alpha = 2\), and the Kesten-McKay law with \(d = \frac{\alpha }{\alpha - 1}\) for \(1< \alpha < 2\). Note that the density of \(\mu _\alpha \) is bounded in \({{\mathcal {S}}}_\kappa \), uniformly in \(\alpha \). The divergence of the density near 0 is caused by values of \(\alpha \) close to 0, and the divergence of the density near \(\pm 2\) by values of \(\alpha \) close to 2

The remainder of this section is devoted to the proof of Theorem 4.2. For the rest of this section, we assume that M is as in Theorem 4.2. To simplify notation, we consistently omit the z-dependence from our notation in quantities that depend on \(z \in \varvec{\mathrm {S}}\). Unless mentioned otherwise, from now on all statements are uniform in \(z \in \varvec{\mathrm {S}}\).

For the proof of Theorem 4.2, it will be convenient to single out the generic constant \({{\mathcal {C}}}\) from (1.18) by introducing a new constant \({{\mathcal {D}}}\) and replacing (1.18) with

$$\begin{aligned} {{\mathcal {D}}} \sqrt{\log N} \leqslant d \leqslant (\log N)^{3/2}\,. \end{aligned}$$
(4.13)

Our proof will always assume that \({{\mathcal {C}}} \equiv {{\mathcal {C}}}_\nu \) and \({{\mathcal {D}}} \equiv {{\mathcal {D}}}_\nu \) are large enough, and the constant \(\mathcal C\) in (1.18) can be taken to be \({{\mathcal {C}}} \vee \mathcal D\). For the rest of this section we assume that d satisfies (4.13) for some large enough \({{\mathcal {D}}}\), depending on \(\kappa \) and \(\nu \). To guide the reader through the proof, in Fig. 9 we include a diagram of the dependencies of the various quantities appearing throughout this section.

Fig. 9

The dependency graph of the various quantities appearing in the proof of Theorem 4.2. An arrow from x to y means that y is chosen as a function of x. The independent parameters, \(\kappa \) and \(\nu \), are highlighted in blue

4.2 Typical vertices

We start by introducing the key tool in the proof of Theorem 4.2, a decomposition of vertices into typical vertices and the complementary atypical vertices. Heuristically, a typical vertex x has close to d neighbours and the spectral measure of M at x is well approximated by the semicircle law. In fact, in order to be applicable to the proof of Proposition 4.18 below, the notion of a typical vertex is somewhat more involved: when counting the neighbours of a vertex x, we also weight them with diagonal entries of a Green function. Consequently, the notion of a typical vertex depends on the spectral parameter z, which in this subsection we allow to be any complex number with \({{\,\mathrm{Im}\,}}z \geqslant N^{-1 + \kappa }\). This notion is defined precisely using the parameters \(\Phi _x\) and \(\Psi _x\) from (4.18) below. The main result of this subsection is Proposition 4.8 below, which states, in the language of graphs when \(M = d^{-1/2} A\) with A the adjacency matrix of \({\mathbb {G}}(N,d/N)\), that most vertices are typical and most neighbours of any vertex are typical. To state it, we introduce some notation.

Definition 4.5

For any subset \(T \subset [N]\), we define the minor \(M^{(T)}\) with indices in T as the \((N-|T |) \times (N-|T |)\)-matrix

$$\begin{aligned} M^{(T)} :=(M_{xy})_{x,y \in [N] \setminus T}. \end{aligned}$$
(4.14)

If T consists only of one or two elements, \(T = \{x\}\) or \(T=\{x,y\}\), then we abbreviate \(M^{(x)}\) and \(M^{(xy)}\) for \(M^{(\{x\})}\) and \(M^{(\{x,y\})}\). We also abbreviate \(M^{(Tx)}\) for \(M^{(T \cup \{ x\})}\). The Green function of \(M^{(T)}\) is denoted by

$$\begin{aligned} G^{(T)}(z) :=(M^{(T)} - z)^{-1}. \end{aligned}$$
(4.15)

We use the notation

$$\begin{aligned} \sum _{x}^{(T)} :=\sum _{x \in [N]\setminus T} . \end{aligned}$$
(4.16)

Definition 4.6

(Typical vertices). Let \(\mathfrak a> 0\) be a constant, and define the set of typical vertices

$$\begin{aligned} {{\mathcal {T}}}_\mathfrak a:=\{x \in [N] :|\Phi _x | \vee |\Psi _x | \leqslant \varphi _\mathfrak a\}\,, \qquad \varphi _\mathfrak a:=\mathfrak a\bigg ( \frac{\log N}{d^2} \bigg )^{1/3}\,, \end{aligned}$$
(4.17)

where

$$\begin{aligned} \Phi _x :=\sum _{y}^{(x)} \biggl (|H_{xy} |^2 - \frac{1}{N}\biggr )\,, \qquad \Psi _x :=\sum _{y}^{(x)} \biggl (|H_{xy} |^2 - \frac{1}{N}\biggr ) G_{yy}^{(x)}\,. \end{aligned}$$
(4.18)

Note that this notion depends on the spectral parameter z, i.e. \({{\mathcal {T}}}_\mathfrak a\equiv {{\mathcal {T}}}_\mathfrak a(z)\). The constant \(\mathfrak a\) will depend only on \(\nu \) and \(\kappa \). It will be fixed in (4.23) below. The constant \({{\mathcal {D}}} \geqslant \mathfrak a^{3/2}\) from (4.13) is always chosen large enough so that \(\varphi _{\mathfrak a} \leqslant 1\).
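Explicitly, the lower bound in (4.13) gives \(\log N \leqslant d^2/{{\mathcal {D}}}^2\), so that

$$\begin{aligned} \varphi _\mathfrak a= \mathfrak a\bigg ( \frac{\log N}{d^2} \bigg )^{1/3} \leqslant \mathfrak a\, {{\mathcal {D}}}^{-2/3} \leqslant 1 \qquad \text {whenever } {{\mathcal {D}}} \geqslant \mathfrak a^{3/2}\,. \end{aligned}$$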

The following proposition holds on the event \(\{\theta = 1\}\), where we introduce the indicator function

$$\begin{aligned} \theta :=\mathbb {1}_{\max _{x,y} |G_{xy} | \leqslant \Gamma } \end{aligned}$$
(4.19)

depending on some deterministic constant \(\Gamma \geqslant 1\). In (4.40) below, we shall choose a constant \(\Gamma \equiv \Gamma _\kappa \), depending only on \(\kappa \), such that the condition \(\theta = 1\) can be justified by a bootstrapping argument along the proof of Theorem 4.2 in Sect. 4.3 below.

Throughout the sequel we use the following generalization of Definition 2.1.

Definition 4.7

An event \(\Xi \) holds with very high probability on an event \(\Omega \) if for all \(\nu > 0\) there exists \({{\mathcal {C}}} > 0\) such that \({\mathbb {P}}(\Xi \cap \Omega ) \geqslant {\mathbb {P}}(\Omega ) - {{\mathcal {C}}} N^{-\nu }\) for all \(N \in {\mathbb {N}}\).

We now state the main result of this subsection.

Proposition 4.8

There are constants \(0 < q \leqslant 1\), depending only on \(\Gamma \), and \(\mathfrak a> 0\), depending only on \(\nu \) and q, such that, on the event \(\{\theta = 1\}\), the following holds with very high probability.

  1. (i)

    Most vertices are typical:

    $$\begin{aligned} |{{\mathcal {T}}}_\mathfrak a^c | \leqslant \exp ( q \varphi _\mathfrak a^2 d ) + N \exp ( - 2 q \varphi _\mathfrak a^2 d). \end{aligned}$$
  2. (ii)

    Most neighbours of any vertex are typical:

    $$\begin{aligned} \sum _{y \in {{\mathcal {T}}}_\mathfrak a^c}^{(x)}|H_{xy} |^2 \leqslant {{\mathcal {C}}} \varphi _\mathfrak a+ {\mathcal {C}}d^4 \exp (- q \varphi _\mathfrak a^2 d ) \end{aligned}$$

    uniformly for \(x \in [N]\).

For the interpretation of Proposition 4.8 (ii), one should think of the motivating example \(M = d^{-1/2} A\), for which \(d \sum _{y \in \mathcal T^c_\mathfrak a}^{(x)}|H_{xy} |^2\) is the number of atypical neighbours of x, up to an error term \({{\mathcal {O}}}\bigl (\frac{d^2 + d \log N}{N}\bigr )\) by Remark 4.3.
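In this example the identification is a one-line computation: since \(H_{xy} = (A_{xy} - d/N)/\sqrt{d}\) for \(x \ne y\) (cf. (4.1) and Remark 4.3),

$$\begin{aligned} d \sum ^{(x)}_{y \in {{\mathcal {T}}}^c_\mathfrak a} |H_{xy} |^2 = \sum ^{(x)}_{y \in {{\mathcal {T}}}^c_\mathfrak a} \bigg (A_{xy} - \frac{d}{N}\bigg )^2 = \sum ^{(x)}_{y \in {{\mathcal {T}}}^c_\mathfrak a} A_{xy} + {{\mathcal {O}}} \bigg (\frac{d D_x + d^2}{N}\bigg )\,, \end{aligned}$$

and the first sum on the right-hand side is precisely the number of atypical neighbours of x.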

The remainder of Sect. 4.2 is devoted to the proof of Proposition 4.8. We need the following version of \({{\mathcal {T}}}_\mathfrak a\) defined in terms of \(H^{(T)}\) instead of H.

Definition 4.9

For any \(x \in [N]\) and \(T \subset [N]\), we define

$$\begin{aligned} \Phi _x^{(T)} :=\sum _{y}^{(Tx)} \biggl (|H_{xy} |^2 - \frac{1}{N}\biggr )\,, \qquad \Psi _x^{(T)} :=\sum _{y}^{(Tx)} \biggl (|H_{xy} |^2 - \frac{1}{N}\biggr ) G_{yy}^{(Tx)}\, \end{aligned}$$

and

$$\begin{aligned} {{\mathcal {T}}}^{(T)} _\mathfrak a:=\{x \in [N] \setminus T :|\Phi _x^{(T)} | \vee |\Psi _x^{(T)} | \leqslant \varphi _\mathfrak a\}\,. \end{aligned}$$

Note that \(\Phi _x^{(\emptyset )} = \Phi _x\) and \(\Psi _x^{(\emptyset )} = \Psi _x\) with the definitions from (4.18), and hence \({{\mathcal {T}}}_\mathfrak a^{(\emptyset )} = {{\mathcal {T}}}_\mathfrak a\). The proof of Proposition 4.8 relies on the two following lemmas.

Lemma 4.10

There are constants \(0 < q \leqslant 1\), depending only on \(\Gamma \), and \(\mathfrak a> 0\), depending only on \(\nu \) and q, such that, for any deterministic \(X \subset [N]\), the following holds with very high probability on the event \(\{\theta = 1\}\).

  1. (i)

    \(|X \cap {{\mathcal {T}}}_{\mathfrak a/2}^c | \leqslant \exp ( q \varphi _\mathfrak a^2 d) + |X | \exp (- 2 q \varphi _\mathfrak a^2 d)\).

  2. (ii)

If \(|X | \leqslant \exp ( 2 q \varphi _\mathfrak a^2 d)\) then \(|X \cap {{\mathcal {T}}}_{\mathfrak a/2}^c | \leqslant \varphi _\mathfrak a d\).

For any deterministic \(x \in [N]\), the same estimates hold for \(\big ({{\mathcal {T}}}^{(x)}_{\mathfrak a/ 2}\big )^c\) instead of \({{\mathcal {T}}}^c_{\mathfrak a/2}\) and a random set \(X \subset [N] \setminus \{x\}\) that is independent of \(H^{(x)}\).

Lemma 4.11

With very high probability, for any constant \(\mathfrak a> 0\) we have

$$\begin{aligned} \theta |\Phi _y - \Phi _y^{(x)} | \leqslant \varphi _{\mathfrak a/2}, \qquad \theta |\Psi _y - \Psi _y^{(x)} | \leqslant \varphi _{\mathfrak a/ 2} \end{aligned}$$

for all \(x,y \in [N]\).

Before proving Lemmas 4.10 and 4.11, we use them to establish Proposition 4.8.

Proof of Proposition 4.8

For (i), we choose \(X = [N]\) in Lemma 4.10 (i), using that \({{\mathcal {T}}}_{\mathfrak a/2} \subset \mathcal T_\mathfrak a\).

We now turn to the proof of (ii). By Lemma 4.11, on the event \(\{\theta = 1\}\) we have \({{\mathcal {T}}}^c_\mathfrak a\subset \big ({{\mathcal {T}}}^{(x)}_{\mathfrak a/2}\big )^c\) with very high probability and hence

$$\begin{aligned} \theta \sum ^{(x)}_{y \in {{\mathcal {T}}}_\mathfrak a^c} |H_{xy} |^2 \leqslant \theta \sum ^{(x)}_{y \in ({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)})^c} |H_{xy} |^2 \end{aligned}$$

with very high probability. Since \(|H_{xy} |^2 \leqslant 1 / d\) almost surely, we obtain the decomposition

$$\begin{aligned} \begin{aligned} \sum ^{(x)}_{y \in ({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)})^c} |H_{xy} |^2&\leqslant \sum _{k = 0}^{\log N} \sum ^{(x)}_{y \in ({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)})^c} |H_{xy} |^2 \mathbb {1}_{d^{-k-2} \leqslant |H_{xy} |^2 \leqslant d^{-k - 1}} + \frac{1}{N} \\&\leqslant \sum _{k = 0}^{\log N} \sum ^{(x)}_{y \in ({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)})^c} d^{-k - 1} \mathbb {1}_{|H_{xy} |^2 \geqslant d^{-k-2}} + \frac{1}{N} \\&= \sum _{k = 0}^{\log N} d^{-k - 1} |X_k \cap ({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)})^c | + \frac{1}{N}\,, \end{aligned} \end{aligned}$$
(4.20)

where we defined

$$\begin{aligned} X_k :=\bigl \{y \ne x :|H_{xy} |^2 \geqslant d^{-k - 2}\bigr \}\,. \end{aligned}$$

Since \(\sum ^{(x)}_y |H_{xy} |^2 \leqslant {{\mathcal {C}}} d\) with very high probability by Definition 4.1 and Bennett’s inequality, we conclude that

$$\begin{aligned} |X_k | \leqslant {{\mathcal {C}}} d^{k + 3} \end{aligned}$$
(4.21)

with very high probability.

We shall apply Lemma 4.10 to the sets \(X = X_k\) and \(({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)})^c\). This is legitimate since \(X_k \subset [N] \setminus \{x\}\) is a measurable function of the family \((H_{xy})_{y \in [N]}\), and hence independent of \(H^{(x)}\).

We define \(K :=\max \bigl \{k \geqslant 0 :{{\mathcal {C}}} d^{k + 3} \leqslant \mathrm {e}^{ 2q\varphi _\mathfrak a^2 d}\bigr \}\) and decompose the sum on the right-hand side of (4.20) into

$$\begin{aligned} \sum _{k = 0}^{\log N} d^{-k - 1} |X_k \cap \big ({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)}\big )^c |&= \sum _{k = 0}^{K} d^{-k - 1} |X_k \cap \big ({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)}\big )^c | + \sum _{k = K+1}^{\log N} d^{-k - 1} |X_k \cap \big ({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)}\big )^c | \\&\leqslant \sum _{k = 0}^{K} d^{-k - 1} \varphi _\mathfrak ad + \sum _{k = K+1}^{\log N} d^{-k - 1} \bigl (\mathrm {e}^{q \varphi _\mathfrak a^2 d} + {{\mathcal {C}}} d^{k+3} \mathrm {e}^{- 2q \varphi _\mathfrak a^2 d}\bigr ) \\&\leqslant 2 \varphi _\mathfrak a+ {{\mathcal {C}}} d^2 \mathrm {e}^{-q \varphi _\mathfrak a^2 d}\log N\, \end{aligned}$$

with very high probability. Here, we used Lemma 4.10 (ii) to estimate the summands if \(k \leqslant K\) and Lemma 4.10 (i) and (4.21) for the other summands. Since \(\log N \leqslant d^2\), this concludes the proof of (ii). \(\quad \square \)

The rest of this subsection is devoted to the proofs of Lemmas 4.10 and 4.11. Let \(\theta \) be defined as in (4.19) for some constant \(\Gamma \geqslant 1\). For any subset \(T \subset [N]\), we define the indicator function

$$\begin{aligned} \theta ^{(T)} :=\mathbb {1}_{\max _{a,b \notin T} |G_{ab}^{(T)} | \leqslant 2 \Gamma }\,. \end{aligned}$$

Lemma 4.10 is a direct consequence of the following two lemmas.

The first one, Lemma 4.12, is mainly a decoupling argument for the random variables \((\Psi _x)_{x \in [N]}\). Indeed, the probability that any fixed vertex x is atypical is only small, o(1), and not very small, \(N^{-\nu }\); see (4.31) below. If the events of different vertices being atypical were independent, we could deduce that the probability that a sufficiently large set of vertices are atypical is very small. However, these events are not independent. The most serious breach of independence arises from the Green function \(G^{(x)}_{yy}\) in the definition of \(\Psi _x\). In order to make this argument work, we have to replace the parameters \(\Phi _x\) and \(\Psi _x\) with their decoupled versions \(\Phi _x^{(T)}\) and \(\Psi _x^{(T)}\) from Definition 4.9. To that end, we have to estimate the error involved, \(|\Phi _x - \Phi _x^{(T)} |\) and \(|\Psi _x - \Psi _x^{(T)} |\). Unfortunately the error bound on the latter is proportional to \(\beta _x\) (see (4.32)), which is not affordable for vertices of large degree. The solution to this issue involves the observation that if \(\beta _x\) is too large then the vertex is atypical by the condition on \(\Phi _x\), which allows us to disregard the size of \(\Psi _x\). The details are given in the proof of Lemma 4.12 below.

The second one, Lemma 4.13, gives a priori bounds on the entries of the Green function \(G^{(T)}\), showing that if the entries of G are bounded then so are those of \(G^{(T)}\) for \(|T | = o(d)\). For T of fixed size, this fact is a standard application of the resolvent identities from Lemma A.24. For our purposes, it is crucial that T can have size up to o(d), and such a quantitative estimate requires slightly more care.

Lemma 4.12

There is a constant \(0 < q \leqslant 1\), depending only on \(\Gamma \), such that, for any \(\nu >0\), there is \({\mathcal {C}}>0\) such that the following holds for any fixed \(\mathfrak a> 0\). If \(x \notin T \subset [N]\) are deterministic with \(|T | \leqslant \varphi _\mathfrak ad /{\mathcal {C}}\) then

$$\begin{aligned} {\mathbb {P}}\big ( T \subset {{\mathcal {T}}}_{\mathfrak a/ 2}^c,\, \theta = 1 \big )&\leqslant \mathrm {e}^{- 4 q \varphi _\mathfrak a^2 d |T |} + {\mathcal {C}}N^{-\nu } , \end{aligned}$$
(4.22a)
$$\begin{aligned} {\mathbb {P}}\big ( T \subset \big ({{\mathcal {T}}}_{\mathfrak a/2}^{(x)}\big )^c, \theta ^{(x)} =1 \big )&\leqslant \mathrm {e}^{-4 q \varphi _\mathfrak a^2 d |T |} + {\mathcal {C}}N^{-\nu }\,. \end{aligned}$$
(4.22b)

Lemma 4.13

For any subset \(T \subset [N]\) satisfying \(|T | \leqslant \frac{d}{{{\mathcal {C}}} \Gamma ^2}\) we have \(\theta \leqslant \theta ^{(T)}\) with very high probability.

Before proving Lemma 4.12 and Lemma 4.13, we use them to show Lemma 4.10.

Proof of Lemma 4.10

Throughout the proof we abbreviate \({\mathbb {P}}_\theta (\Xi ) :={\mathbb {P}}(\Xi \cap \{ \theta = 1\})\). Let \({{\mathcal {C}}}\) be the constant from Lemma 4.12, and set

$$\begin{aligned} \mathfrak a:=\biggl (\frac{{{\mathcal {C}}} \nu }{4 q}\biggr )^{1/3}\,. \end{aligned}$$
(4.23)

For the proof of (ii), we choose \(k = \varphi _\mathfrak ad /{\mathcal {C}}\) and estimate

$$\begin{aligned}&{\mathbb {P}}_\theta (|X \cap {{\mathcal {T}}}_{\mathfrak a/ 2}^c | \geqslant k) \leqslant \sum _{Y \subset X : |Y | = k} {\mathbb {P}}_\theta (Y \subset {{\mathcal {T}}}_{\mathfrak a/2}^c) \leqslant \left( {\begin{array}{c}|X |\\ k\end{array}}\right) \Big ( \mathrm {e}^{-4 q \varphi _\mathfrak a^2 d k} + {\mathcal {C}}N^{-\nu } \Big ) \\&\quad \leqslant \big (|X | \mathrm {e}^{- 4 q \varphi _\mathfrak a^2 d}\big )^k + {\mathcal {C}}|X |^k N^{-\nu } \leqslant \mathrm {e}^{- 2 q \varphi _\mathfrak a^2 d k} + {\mathcal {C}}\mathrm {e}^{2 q \varphi _\mathfrak a^2 d k} N^{-\nu }\\&\quad = N^{-2q\mathfrak a^3/{{\mathcal {C}}}} + {{\mathcal {C}}} N^{2q\mathfrak a^3/{{\mathcal {C}}} - \nu }\,, \end{aligned}$$

where in the second step we used (4.22a). Thus, by our choice of \(\mathfrak a\), we have \({\mathbb {P}}_\theta (|X \cap {{\mathcal {T}}}_{\mathfrak a/ 2}^c | \geqslant k) \leqslant ({{\mathcal {C}}} + 1) N^{-\nu /2}\), from which (ii) follows after renaming \(\nu \) and \({{\mathcal {C}}}\).
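For the reader's convenience, we spell out the exponent computation behind this choice of \(\mathfrak a\): recalling that \(\varphi _\mathfrak a^2 d = \mathfrak a^2 (\log N)^{2/3} d^{-1/3}\) (equivalently, \(\varphi _\mathfrak a= \mathfrak a(\log N)^{1/3} d^{-2/3}\)), the choice \(k = \varphi _\mathfrak ad / {{\mathcal {C}}}\) gives

$$\begin{aligned} 2 q \varphi _\mathfrak a^2 d \, k = \frac{2 q \varphi _\mathfrak a^3 d^2}{{\mathcal {C}}} = \frac{2 q \mathfrak a^3}{{\mathcal {C}}} \log N = \frac{\nu }{2} \log N\,, \end{aligned}$$

where the last equality uses (4.23); this is precisely why the two terms in the last line of the display above equal \(N^{-\nu /2}\) and \({{\mathcal {C}}} N^{\nu /2 - \nu }\), respectively.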

To prove (i) we estimate, for \(t>0\) and \(l \in {\mathbb {N}}\),

$$\begin{aligned} {\mathbb {P}}_\theta (|X \cap {{\mathcal {T}}}_{\mathfrak a/ 2}^c | \geqslant t) \leqslant \frac{1}{t^l} {\mathbb {E}}\Biggl (\sum _{x \in X} \mathbb {1}_{x \in {{\mathcal {T}}}_{\mathfrak a/ 2}^c}\theta \Biggr )^l = \frac{1}{t^l} \sum _{x_1, \dots , x_l \in X} {\mathbb {P}}_\theta (x_1 \in \mathcal T_{\mathfrak a/ 2}^c, \dots , x_l \in {{\mathcal {T}}}_{\mathfrak a/ 2}^c)\,. \end{aligned}$$

Choosing \(l = \varphi _\mathfrak ad/{\mathcal {C}}\), regrouping the summation according to the partition of coincidences, and using Lemma 4.12 yield

$$\begin{aligned}&{\mathbb {P}}_\theta (|X \cap {{\mathcal {T}}}_{\mathfrak a/ 2}^c | \geqslant t) \leqslant \frac{1}{t^l} \sum _{\pi \in \mathfrak P_l} |X |^{|\pi |} \big ( \mathrm {e}^{- 4 q \varphi _\mathfrak a^2 d |\pi |} + {\mathcal {C}}N^{- \nu } \big ) \\&\quad \leqslant \frac{1}{t^l} \sum _{k = 0}^l \left( {\begin{array}{c}l\\ k\end{array}}\right) l^{l - k} |X |^k \big ( \mathrm {e}^{-4 q \varphi _\mathfrak a^2 dk} + {\mathcal {C}}N^{- \nu } \big ) = \frac{(l + |X | \mathrm {e}^{- 4 q \varphi _\mathfrak a^2 d})^l + {\mathcal {C}}N^{-\nu } (l + |X |)^l}{t^l}\,. \end{aligned}$$

Here, \(\mathfrak P_l\) denotes the set of partitions of [l], and we denote by \(k = |\pi |\) the number of blocks in the partition \(\pi \in \mathfrak P_l\). We also used that the number of partitions of l elements into k blocks is bounded by \(\left( {\begin{array}{c}l\\ k\end{array}}\right) l^{l - k}\): choose the k block representatives and then assign each of the remaining \(l - k\) elements to one of at most l blocks. The last step follows from the binomial theorem. Therefore, using \(l = \varphi _\mathfrak ad/{{\mathcal {C}}}\), choosing \(t = \mathrm {e}^{q \varphi _\mathfrak a^2 d} + |X | \mathrm {e}^{- 2 q \varphi _\mathfrak a^2 d}\), and taking \({\mathcal {C}}\) and \(\nu \) sufficiently large yields the bound in Lemma 4.10 (i) with very high probability, after renaming \({\mathcal {C}}\) and \(\nu \). Here we used (4.13).

To obtain the same statements for \({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)}\) instead of \({{\mathcal {T}}}_{\mathfrak a/ 2}\), we estimate

$$\begin{aligned} {\mathbb {P}}_\theta \Big ( |X \cap ({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)})^c | \geqslant t\Big ) \leqslant {\mathbb {E}}\Big [ {\mathbb {P}}\Bigl ( |X \cap ({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)})^c | \geqslant t, \theta ^{(x)} = 1 \Big \vert X \Bigr ) \Big ] + {\mathbb {P}}\big ( \theta ^{(x)} = 0 , \theta = 1\big ). \end{aligned}$$

For both parts, (i) and (ii), the conditional probability \({\mathbb {P}}\bigl ( |X \cap ({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)})^c | \geqslant t, \theta ^{(x)} = 1 \big \vert X \bigr )\) can be bounded as before using (4.22b) instead of (4.22a) since, by assumption on X, the set \({{\mathcal {T}}}_{\mathfrak a/ 2}^{(x)}\) and the indicator function \(\theta ^{(x)}\) are independent of X. The smallness of \({\mathbb {P}}(\theta ^{(x)} = 0, \theta = 1) \leqslant {\mathbb {P}}(\theta ^{(x)} < \theta )\) is a consequence of Lemma 4.13. This concludes the proof of Lemma 4.10. \(\quad \square \)

The rest of this subsection is devoted to the proofs of Lemmas 4.11, 4.12, and 4.13.

Lemma 4.14

There is \(\mathfrak c\equiv \mathfrak c_\nu >0\), depending on \(\nu \) and \(\kappa \), such that for any deterministic \(T \subset [N]\) satisfying \(|T | \leqslant \mathfrak cd / \Gamma ^2\) we have with very high probability

$$\begin{aligned} \theta \max _{x,y \notin T} \bigl |G_{xy}^{(T)} \bigr | \leqslant 2 \Gamma \,. \end{aligned}$$
(4.24)

Moreover, under the same assumptions on T and for any \(u \in [N] \setminus T\), we have

$$\begin{aligned} \theta \max _{x,y \notin T \cup \{u\}} \bigl |G_{xy}^{(Tu)} - G_{xy}^{(T)} \bigr | \leqslant {\mathcal {C}}d^{-1} \end{aligned}$$
(4.25)

with very high probability.

Before proving Lemma 4.14, we use it to conclude the proof of Lemma 4.13.

Proof of Lemma 4.13

The bound in (4.24) of Lemma 4.14 implies that \(\theta = \theta \theta ^{(T)}\) with very high probability. Since \(\theta \leqslant 1\), the proof is complete. \(\quad \square \)

Proof of Lemma 4.14

Throughout the proof we work on the event \(\{\theta = 1\}\) exclusively. After a relabelling of the vertices [N], we can suppose that \(T = [k]\) with \(k \leqslant \mathfrak cd/\Gamma ^2\). For \(k \in [N]\), we set

$$\begin{aligned} \Gamma _k :=1 \vee \max _{x,y \notin [k]} |G_{xy}^{([k])} |\,.\end{aligned}$$

Note that \(\Gamma _0 \leqslant \Gamma \) by definition of \(\theta \).

We now show by induction on k that there is \({\mathcal {C}}>0\) such that

$$\begin{aligned} \Gamma _k \leqslant \Gamma _0 \bigg (1 + \frac{16 {\mathcal {C}}\Gamma ^2}{d} \bigg )^k \end{aligned}$$
(4.26)

for all \(k \in {\mathbb {N}}\) satisfying \(k \leqslant \frac{d}{32 \, {\mathcal {C}}\Gamma ^2}\). Since \(1 + x \leqslant \mathrm {e}^x\), (4.26) implies that \(\Gamma _k \leqslant \mathrm {e}^{1/2} \Gamma _0 \leqslant 2 \Gamma \). This directly implies (4.24) by the definition of \(\theta \).

The initial step \(k = 0\) holds trivially. For the induction step \(k \rightarrow k+1\), we set \(T = [k]\) and \(u = k + 1\). The algebraic starting point for the induction step consists of the identities (A.32a) and (A.32b). We shall need the following two estimates. First, from Lemma A.23 and Cauchy–Schwarz, we get

$$\begin{aligned} \frac{f}{N}\biggl |G_{uy}^{(T)} \sum _a^{(Tu)} G_{xa}^{(Tu)} \biggr | \leqslant \frac{f}{N} \Gamma _k \sqrt{\frac{N}{{{\,\mathrm{Im}\,}}z}} \Gamma _{k+1} \leqslant N^{-\kappa /3} \Gamma _k \Gamma _{k+1}\,, \end{aligned}$$
(4.27)

where we used that \(\Gamma _{k+1} \geqslant 1\), \(f \leqslant N^{\kappa /6}\), and \({{\,\mathrm{Im}\,}}z\geqslant N^{-1 + \kappa }\). Second, the first estimate of (A.28) in Corollary A.21 with \(\psi = \Gamma _{k+1}/\sqrt{d}\) and \(\gamma = \sqrt{\Gamma _{k+1}/(N{{\,\mathrm{Im}\,}}z)}\), Lemma A.23, and \(\Gamma _{k+1} \geqslant 1\) imply

$$\begin{aligned} \biggl | \sum _{a}^{(Tu)} G_{xa}^{(Tu)} H_{au} \biggr | \leqslant \frac{{\mathcal {C}}}{\sqrt{d}} \Gamma _{k+1} \end{aligned}$$
(4.28)

with very high probability.

Hence, owing to (A.32a) and (A.32b) with \(T = [k]\) and \(u = k + 1\), we get, respectively,

$$\begin{aligned} \Gamma _{k+1} \leqslant \Gamma _k + \frac{{\mathcal {C}}}{\sqrt{d}} \Gamma _k \Gamma _{k+1}, \qquad \qquad \Gamma _{k+1} \leqslant \Gamma _k + \frac{{\mathcal {C}}}{d} \Gamma _k \Gamma _{k+1}^2 \end{aligned}$$
(4.29)

with very high probability.

By the induction assumption (4.26) we have \({{\mathcal {C}}} \Gamma _k / \sqrt{d} \leqslant 2 {{\mathcal {C}}} \Gamma / \sqrt{d} \leqslant 1/2\), so that the first inequality in (4.29) yields \(\Gamma _{k+1} \leqslant \Gamma _k + \frac{1}{2} \Gamma _{k+1}\); absorbing the last term into the left-hand side gives the rough a priori bound

$$\begin{aligned} \Gamma _{k+1} \leqslant 2 \Gamma _k \end{aligned}$$
(4.30)

with very high probability. From the second inequality in (4.29) and (4.30), we deduce that

$$\begin{aligned} \Gamma _{k+1} \leqslant \Gamma _k \biggl (1 + \frac{4 {{\mathcal {C}}}}{d} \Gamma _k^2\biggr ) \leqslant \Gamma _k \biggl (1 + \frac{16 {{\mathcal {C}}} \Gamma ^2}{d}\biggr )\,, \end{aligned}$$

where in the second step we used \(\Gamma _k \leqslant 2 \Gamma \), by the induction assumption (4.26). This concludes the proof of (4.26), and, hence, of (4.24).

For the proof of (4.25), we start from (A.32b) and use (4.27), (4.28) as well as (4.24). This concludes the proof of Lemma 4.14. \(\quad \square \)

The next result provides concentration estimates for the parameters \(\Phi _x\) and \(\Psi _x\).

Lemma 4.15

There is a constant \(0 < q \leqslant 1\), depending only on \(\Gamma \), such that the following holds. Let \(\mathfrak c>0\) be as in Lemma 4.14, and let \(x \in [N]\) and \(T \subset [N]\) be deterministic and satisfy \(|T | \leqslant \mathfrak cd / \Gamma ^2\). Then for any \(0 < \varepsilon \leqslant 1\) we have

$$\begin{aligned} \theta ^{(T)} {\mathbb {P}}\big ( |\Phi _x^{(T)} |> \varepsilon \bigm \vert H^{(T)} \big ) \leqslant \mathrm {e}^{- 32 q \varepsilon ^2 d}\,, \qquad \theta ^{(T)} {\mathbb {P}}\big (|\Psi _x^{(T)} | > \varepsilon \bigm \vert H^{(T)} \big ) \leqslant \mathrm {e}^{ - 32 q \varepsilon ^2 d}\,, \end{aligned}$$
(4.31)

and, for any \(u \notin T\),

$$\begin{aligned} \Phi _x^{(Tu)} - \Phi _x^{(T)} = O\biggl (\frac{1}{d}\biggr )\,, \qquad \theta ^{(T)} \bigl (\Psi _x^{(Tu)} - \Psi _x^{(T)}\bigr ) = {{\mathcal {O}}}\biggl (\frac{1 + \beta _x}{d}\biggr ) \end{aligned}$$
(4.32)

with very high probability.

Before proving Lemma 4.15, we use it to conclude the proof of Lemma 4.11.

Proof of Lemma 4.11

Using (A.27b), we find that \(\beta _x \leqslant {{\mathcal {C}}} (1 + \frac{\log N}{d})\) with very high probability. The claim now follows from (4.32) with \(T = \emptyset \) and the definition of \(\varphi _{\mathfrak a}\), choosing the constant \({{\mathcal {D}}}\) in (4.13) large enough. \(\quad \square \)
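Concretely, applying (4.32) with \(T = \emptyset \) and \(u = x\) (and with x replaced by y), and using \(\theta \leqslant \theta ^{(\emptyset )}\), this degree bound gives, with very high probability,

$$\begin{aligned} \theta \, \bigl |\Psi _y - \Psi _y^{(x)} \bigr | \leqslant \frac{{{\mathcal {C}}} (1 + \beta _y)}{d} \leqslant \frac{{{\mathcal {C}}}}{d} \biggl (2 + \frac{{{\mathcal {C}}} \log N}{d}\biggr )\,, \end{aligned}$$

and the right-hand side is at most \(\varphi _{\mathfrak a/2}\) once the constant \({{\mathcal {D}}}\) in (4.13) is chosen large enough; the estimate for \(\Phi _y - \Phi _y^{(x)}\) is analogous and simpler.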

Proof of Lemma 4.15

Set \(q :=\frac{1}{2^{11}(\mathrm {e}\Gamma )^2}\). We get, using (A.27b) with \(r :=32 q \varepsilon ^2 d \leqslant d\), \({\mathbb {E}}|H_{xy} |^2 = 1/N\), and Chebyshev’s inequality,

$$\begin{aligned}&\theta ^{(T)} {\mathbb {P}}\Big ( |\Psi _x^{(T)} |> \varepsilon \Bigm \vert H^{(T)} \Big ) = {\mathbb {P}}\Bigg (\theta ^{(T)} \Biggl |\sum _y^{(Tx)} (|H_{xy} |^2 - {\mathbb {E}}|H_{xy} |^2) G_{yy}^{(T)} \Biggr | > \varepsilon \biggm \vert H^{(T)} \Bigg ) \\&\quad \leqslant \biggl (\frac{8 \Gamma }{\varepsilon } \sqrt{\frac{r}{d}}\biggr )^r = \mathrm {e}^{ -32 q \varepsilon ^2 d} \end{aligned}$$

with very high probability for any \(0 < \varepsilon \leqslant 1\). This proves the estimate on \(\Psi _x^{(T)}\) in (4.31), and the estimate for \(\Phi _x^{(T)}\) is proved similarly.
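The final equality above is a direct consequence of the choice of q: since \(32 q = 2^{-6} (\mathrm {e}\Gamma )^{-2}\), we have

$$\begin{aligned} \frac{8 \Gamma }{\varepsilon } \sqrt{\frac{r}{d}} = \frac{8 \Gamma }{\varepsilon } \sqrt{32 q} \, \varepsilon = \frac{8 \Gamma }{8 \mathrm {e}\Gamma } = \frac{1}{\mathrm {e}}\,, \qquad \text {so that} \qquad \biggl (\frac{8 \Gamma }{\varepsilon } \sqrt{\frac{r}{d}}\biggr )^{r} = \mathrm {e}^{-r} = \mathrm {e}^{-32 q \varepsilon ^2 d}\,. \end{aligned}$$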

We now turn to the proof of (4.32). If \(x = u\) then the statement is trivial. Thus, we assume \(x \ne u\). In this case we have

$$\begin{aligned} \Phi _x^{(Tu)} - \Phi _x^{(T)} = - \bigg (|H_{xu} |^2 - \frac{1}{N} \bigg ) \end{aligned}$$
(4.33)

and the claim for \(\Phi \) follows by Definition 4.1. Next,

$$\begin{aligned} \Psi _x^{(Tu)} - \Psi _x^{(T)} = \sum _{y}^{(Tux)} \bigg ( |H_{xy} |^2 - \frac{1}{N} \bigg ) \Big ( G_{yy}^{(Tux)} - G_{yy}^{(Tx)} \Big ) - \bigg (|H_{xu} |^2 - \frac{1}{N} \bigg ) G_{uu}^{(Tx)}\,. \end{aligned}$$

The last term multiplied by \(\theta ^{(T)}\) is estimated by \(O(\Gamma / d)\) since \(\theta ^{(T)} |G_{uu}^{(Tx)} | \leqslant 4 \Gamma \) by (4.30). We estimate the first term using (4.25) in Lemma 4.14, which yields

$$\begin{aligned} \theta ^{(T)} \bigl |\Psi _x^{(Tu)} - \Psi _x^{(T)} \bigr | \leqslant \sum _{y}^{(Tux)} |H_{xy} |^2 \frac{{{\mathcal {C}}}}{d} + \frac{1}{N} \sum _{y}^{(Tux)} \frac{{{\mathcal {C}}}}{d} + O \biggl ( \frac{\Gamma }{d}\biggr ) = \mathcal O \biggl (\frac{1 + \beta _x}{d}\biggr ) \end{aligned}$$

with very high probability. This concludes the proof of Lemma 4.15. \(\quad \square \)

Proof of Lemma 4.12

As in the proof of Lemma 4.10, we abbreviate \({\mathbb {P}}_\theta (\Xi ) :={\mathbb {P}}(\Xi \cap \{ \theta = 1\})\). We have

$$\begin{aligned} {\mathbb {P}}\big ( T \subset {{\mathcal {T}}}_{\mathfrak a/ 2}^c,\, \theta = 1 \big ) = {\mathbb {P}}_\theta \Biggl (\bigcap _{x \in T}\Omega _x\Biggr )\,, \end{aligned}$$

where we defined the event

$$\begin{aligned} \Omega _x :=\bigl \{|\Phi _x |> \varphi _{\mathfrak a/ 2}\bigr \} \cup \bigl \{|\Psi _x |> \varphi _{\mathfrak a/ 2}\bigr \} = \bigl \{|\Phi _x |> \varphi _{\mathfrak a/ 2}\bigr \} \cup \bigl \{|\Phi _x | \leqslant \varphi _{\mathfrak a/ 2}, |\Psi _x | > \varphi _{\mathfrak a/ 2}\bigr \}\,. \end{aligned}$$

We have the inclusions

$$\begin{aligned} \bigl \{|\Phi _x |> \varphi _{\mathfrak a/ 2}\bigr \}&\subset \bigl \{|\Phi _x^{(T)} |> \varphi _{\mathfrak a/ 4}\bigr \} \cup \bigl \{|\Phi _x - \Phi _x^{(T)} |> \varphi _{\mathfrak a/ 4}\bigr \}\,, \\ \bigl \{|\Phi _x | \leqslant \varphi _{\mathfrak a/ 2}, |\Psi _x |> \varphi _{\mathfrak a/ 2}\bigr \}&\subset \bigl \{|\Psi _x^{(T)} |> \varphi _{\mathfrak a/ 4}\bigr \} \cup \bigl \{|\Phi _x | \leqslant \varphi _{\mathfrak a/ 2}, |\Psi _x - \Psi _x^{(T)} | > \varphi _{\mathfrak a/ 4}\bigr \}\,. \end{aligned}$$

Defining the event

$$\begin{aligned} \Omega _x^{(T)} :=\bigl \{|\Phi _x^{(T)} |> \varphi _{\mathfrak a/ 4}\bigr \} \cup \bigl \{|\Psi _x^{(T)} | > \varphi _{\mathfrak a/ 4}\bigr \}\,, \end{aligned}$$

we therefore deduce by a union bound that

$$\begin{aligned}&{\mathbb {P}}_\theta \Biggl (\bigcap _{x \in T}\Omega _x\Biggr ) \leqslant {\mathbb {P}}_\theta \Biggl (\bigcap _{x \in T}\Omega _x^{(T)}\Biggr ) + \sum _{x \in T} {\mathbb {P}}_\theta \bigl (|\Phi _x - \Phi _x^{(T)} |> \varphi _{\mathfrak a/ 4}\bigr ) \nonumber \\&\quad + \sum _{x \in T} {\mathbb {P}}_\theta \bigl (|\Phi _x | \leqslant \varphi _{\mathfrak a/ 2}, |\Psi _x - \Psi _x^{(T)} | > \varphi _{\mathfrak a/ 4}\bigr )\,. \end{aligned}$$
(4.34)

We begin by estimating the first term of (4.34). To that end, we observe that, conditioned on \(H^{(T)}\), the family \((\Omega _x^{(T)})_{x \in T}\) is independent. Using Lemma 4.13 we therefore get

$$\begin{aligned} {\mathbb {P}}_\theta \Biggl (\bigcap _{x \in T}\Omega _x^{(T)}\Biggr ) \leqslant {\mathbb {E}}\Biggl [ \theta ^{(T)} {\mathbb {P}}\Biggl (\bigcap _{x \in T}\Omega _x^{(T)} \biggm | H^{(T)}\Biggr )\Biggr ] + {{\mathcal {C}}} N^{-\nu } = {\mathbb {E}}\biggl [\theta ^{(T)} \prod _{x \in T} {\mathbb {P}}(\Omega _x^{(T)} | H^{(T)})\biggr ] + {{\mathcal {C}}} N^{-\nu }\,, \end{aligned}$$

and we estimate each factor using (4.31) from Lemma 4.15 as

$$\begin{aligned}&\theta ^{(T)} {\mathbb {P}}(\Omega _x^{(T)} | H^{(T)}) \leqslant \theta ^{(T)} {\mathbb {P}}\big ( |\Phi _x^{(T)} |> \varphi _{\mathfrak a/4} \bigm \vert H^{(T)} \big ) + \theta ^{(T)} {\mathbb {P}}\big ( |\Psi _x^{(T)} | > \varphi _{\mathfrak a/ 4} \bigm \vert H^{(T)} \big ) \\&\quad \leqslant 2 \mathrm {e}^{-8 q \varphi _{\mathfrak a}^2 d} \leqslant \mathrm {e}^{-4 q \varphi _{\mathfrak a}^2 d}\,, \end{aligned}$$

where in the last step we used that \(\mathrm {e}^{-4 q \varphi _{\mathfrak a}^2 d} \leqslant 1/2\). We conclude that

$$\begin{aligned} {\mathbb {P}}_\theta \Biggl (\bigcap _{x \in T}\Omega _x^{(T)}\Biggr ) \leqslant \mathrm {e}^{-4 q \varphi _{\mathfrak a}^2 d |T |} + {{\mathcal {C}}} N^{-\nu }\,. \end{aligned}$$

Next, we estimate the second term of (4.34). After renaming the vertices, we may assume that \(T = [k]\) with \(k \leqslant \varphi _\mathfrak ad / {{\mathcal {C}}}\). Since \(\varphi _\mathfrak ad / {{\mathcal {C}}} \leqslant \mathfrak cd / \Gamma ^2\) provided that \({{\mathcal {D}}}\) in (4.13) is chosen large enough (depending on \(\mathfrak a\)), we get from (4.32) in Lemma 4.15, by telescoping and recalling Lemma 4.13,

$$\begin{aligned} |\Phi _x - \Phi _x^{(T)} | \leqslant \sum _{i = 0}^{k-1} \bigl |\Phi _x^{([i])} - \Phi _x^{([i+1])} \bigr | \leqslant O \biggl (\frac{k}{d}\biggr ) \leqslant \varphi _{\mathfrak a/ 4} \end{aligned}$$
(4.35)

with very high probability on the event \(\{\theta = 1\}\), if the constant \({{\mathcal {C}}}\) in the upper bound \(\varphi _\mathfrak ad / {{\mathcal {C}}}\) on k is large enough.

The last term of (4.34) is estimated analogously, with the additional observation that, by definition of \(\Phi _x\) and since \(\varphi _{\mathfrak a/2} \leqslant 1/2\), on the event \(\{|\Phi _x | \leqslant \varphi _{\mathfrak a/ 2}\}\) we have \(\beta _x \leqslant 2\). Thus, on the event \(\{\theta = 1\} \cap \{|\Phi _x | \leqslant \varphi _{\mathfrak a/ 2}\}\) we have, by Lemma 4.13,

$$\begin{aligned} |\Psi _x - \Psi _x^{(T)} | \leqslant \sum _{i = 0}^{k-1} \bigl |\Psi _x^{([i])} - \Psi _x^{([i+1])} \bigr | \leqslant {{\mathcal {O}}} \biggl (\frac{k(1 + \beta _x)}{d}\biggr ) \leqslant \varphi _{\mathfrak a/ 4} \end{aligned}$$
(4.36)

with very high probability, for large enough \({{\mathcal {C}}}\) in the upper bound on k. We conclude that the two last terms of (4.34) are bounded by \({{\mathcal {C}}} N^{-\nu }\), and the proof of (4.22a) is therefore complete.

The proof of (4.22b) is identical, replacing the matrix M with the matrix \(M^{(x)}\). \(\quad \square \)

4.3 Self-consistent equation and proof of Theorem 4.2

In this subsection, we derive an approximate self-consistent equation for the Green function G, and use it to prove Theorem 4.2. The key ingredient is Proposition 4.18 below, which provides a bootstrapping bound stating that if \(\max _x |G_{xx} - m_{\beta _x} |\) is smaller than some constant then it is in fact bounded by \({{\mathcal {C}}} \varphi _\mathfrak a\) with very high probability. It is proved by first deriving and solving a self-consistent equation for the entries \(G_{xx}\) indexed by typical vertices \(x \in {{\mathcal {T}}}_\mathfrak a\), and then using the resulting bounds to analyse \(G_{xx}\) for atypical vertices \(x \in {{\mathcal {T}}}^c_\mathfrak a\).

We begin with a simple algebraic observation.

Lemma 4.16

(Approximate self-consistent equation). For any \(x \in [N]\) and \(z \in {\mathbb {C}}_+\), we have

$$\begin{aligned} \frac{1}{G_{xx}} = - z - \sum _{y}^{(x)} |H_{xy} |^2 G_{yy}^{(x)} + Y_x\,, \end{aligned}$$

where we introduced the error term

$$\begin{aligned} Y_x :=H_{xx} + \frac{f}{N} - \sum _{a \ne b}^{(x)} H_{xa} G_{ab}^{(x)} H_{bx} - \sum _{a,b}^{(x)} \bigg ( \frac{f}{N} \Big ( H_{xa}G_{ab}^{(x)} + G_{ab}^{(x)} H_{bx} \Big ) + \frac{f^2}{N^2} G_{ab}^{(x)}\bigg )\,. \end{aligned}$$
(4.37)

Proof

The lemma follows directly from (A.31) and the definition (4.1). \(\quad \square \)
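In more detail: assuming that (A.31) is the standard Schur complement formula \(1/G_{xx} = M_{xx} - z - \sum _{a,b}^{(x)} M_{xa} G_{ab}^{(x)} M_{bx}\), substituting \(M_{xa} = H_{xa} + f/N\) from (4.1) and separating the diagonal terms \(a = b = y\) of the double sum gives

$$\begin{aligned} \frac{1}{G_{xx}} = H_{xx} + \frac{f}{N} - z - \sum _{a,b}^{(x)} \Bigl (H_{xa} + \frac{f}{N}\Bigr ) G_{ab}^{(x)} \Bigl (H_{bx} + \frac{f}{N}\Bigr ) = - z - \sum _{y}^{(x)} |H_{xy} |^2 G_{yy}^{(x)} + Y_x\,, \end{aligned}$$

with \(Y_x\) collecting the remaining terms exactly as in (4.37).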

Let \(\theta \) be defined as in (4.19) with some \(\Gamma \geqslant 1\). The following lemma provides a priori bounds on the error terms appearing in the self-consistent equation.

Lemma 4.17

For all \(z \in {\mathbb {C}}\) with \({{\,\mathrm{Im}\,}}z \geqslant N^{-1 + \kappa }\), with very high probability,

$$\begin{aligned} \theta \max _x |Y_x |&\leqslant {\mathcal {C}}d^{-1/2}, \end{aligned}$$
(4.38a)
$$\begin{aligned} \theta \max _{x \ne y} |G_{xy} |&\leqslant {\mathcal {C}}d^{-1/2}, \end{aligned}$$
(4.38b)
$$\begin{aligned} \theta \max _{x \ne a \ne y} |G_{xy} - G_{xy}^{(a)} |&\leqslant {\mathcal {C}}d^{-1}. \end{aligned}$$
(4.38c)

Proof

We first estimate \(Y_x\). From Definition 4.1, the upper bound on f, and (4.13), we conclude that \(|H_{xx} | + f / N = O(d^{-1/2})\) almost surely. Moreover, the Cauchy–Schwarz inequality, Lemma A.23, (4.24) and the upper bound on f imply

$$\begin{aligned} \theta \frac{f^2}{N^2} \biggl |\sum _{a,b}^{(x)} G_{ab}^{(x)} \biggr | \leqslant C_\kappa \frac{f^2}{\sqrt{N{{\,\mathrm{Im}\,}}z}} \leqslant C_\kappa N^{-\kappa /6} \leqslant \frac{{\mathcal {C}}}{\sqrt{d}}\,, \end{aligned}$$

for some constant \(C_\kappa \) depending only on \(\kappa \). Next, we use the first estimate of (A.28), Lemma A.23, and the upper bound on f to conclude that

$$\begin{aligned} \frac{f}{N} \theta \biggl |\sum _{a,b}^{(x)} H_{xa} G_{ab}^{(x)} \biggr | + \frac{f}{N} \theta \biggl |\sum _{a,b}^{(x)} G_{ab}^{(x)} H_{bx} \biggr | \leqslant \frac{{\mathcal {C}}}{\sqrt{d}} \frac{f}{\sqrt{N{{\,\mathrm{Im}\,}}z}} \leqslant \frac{{\mathcal {C}}}{\sqrt{d}} N^{-\kappa /3} \leqslant \frac{{\mathcal {C}}}{\sqrt{d}} \end{aligned}$$

with very high probability (compare the proof of (4.28)). Moreover, from Lemma A.23 and the second estimate of (A.28) we deduce that the remaining term in (4.37) is \({{\mathcal {O}}}(d^{-1}) = {{\mathcal {O}}}(d^{-1/2})\). This concludes the proof of (4.38a).

For the proof of (4.38b), we start from (A.29) and use \(M_{xa} = H_{xa} + f/ N \) to obtain

$$\begin{aligned} G_{xy} = - G_{xx} \sum _{a}^{(x)} H_{xa} G_{ay}^{(x)} - G_{xx} H_{xy} G_{yy}^{(x)} - \frac{f}{N} G_{xx} \sum _a^{(x)} G_{ay}^{(x)}. \end{aligned}$$

Arguments similar to those for (4.28) and (4.27) show that the first and the third term, respectively, are bounded by \({\mathcal {C}}d^{-1/2}\) with very high probability. The same bound for the second term follows from Definition 4.1 and (4.24) in Lemma 4.14. This proves (4.38b).

Finally, (4.38c) follows directly from (4.25). \(\quad \square \)

Proposition 4.18 below is the main tool behind the proof of Theorem 4.2. To formulate it, we introduce the z-dependent random control parameters

$$\begin{aligned} \Lambda _{\mathrm d} :=\max _{x} |G_{xx} - m_{\beta _x} |\,, \qquad \Lambda _{\mathrm o} :=\max _{x \ne y} |G_{xy} |\,, \qquad \Lambda :=\Lambda _{\mathrm d} \vee \Lambda _{\mathrm o}\,, \end{aligned}$$

and, for some constant \(\lambda \leqslant 1\), the indicator function

$$\begin{aligned} \phi :=\mathbb {1}_{\Lambda \leqslant \lambda }\,. \end{aligned}$$
(4.39)

Proposition 4.18 below provides a strong bound on \(\Lambda \) provided the a priori condition \(\phi = 1\) is satisfied. Each step of its proof is valid provided \(\lambda \) is chosen small enough depending on \(\kappa \). Note that, owing to (A.4), there is a deterministic constant \(\Gamma \), depending only on \(\kappa \), such that, for all \(z \in {\mathbf {S}}\), we have

$$\begin{aligned} \phi \max _{x,y} |G_{xy} | \leqslant \Gamma \,. \end{aligned}$$
(4.40)

In particular, if \(\Gamma \) in the definition (4.19) of \(\theta \) is chosen as in (4.40) then

$$\begin{aligned} \phi \leqslant \theta \,. \end{aligned}$$
(4.41)

Proposition 4.18

There exists \(\lambda > 0\), depending only on \(\kappa \), such that, for all \(z \in {\mathbf {S}}\), with very high probability,

$$\begin{aligned} \phi \Lambda \leqslant {\mathcal {C}}\varphi _\mathfrak a\,. \end{aligned}$$

For the proof of Proposition 4.18, we employ the results of the previous subsections to show that the diagonal entries \((G_{xx})_{x \in {{\mathcal {T}}}_\mathfrak a}\) of the Green function of M at the typical vertices satisfy the approximate self-consistent equation (4.42) below. This is a perturbed version of the relation (4.6) for the Stieltjes transform m of the semicircle law, which holds for all \(z \in {\mathbb {C}}_+\). The stability estimate, (4.43) below, then implies that \(G_{xx}\) and m are close for all \(x \in \mathcal T_\mathfrak a\). From this we shall, in a second step, deduce that \(G_{xx}\) is close to \(m_{\beta _x}\) for all x; this step also covers the atypical vertices.

The next lemma is a relatively standard stability estimate of self-consistent equations in random matrix theory (compare e.g. to [27, Lemma 3.5]). It is proved in Appendix A.9.

Lemma 4.19

(Stability of the self-consistent equation for m). Let \({{\mathcal {X}}}\) be a finite set, \(\kappa >0\), and \(z \in {\mathbb {C}}_+\) satisfy \(|\mathrm {Re}\,z | \leqslant 2- \kappa \). We assume that, for two vectors \((g_x)_{x \in {{\mathcal {X}}}}\), \((\varepsilon _x)_{x \in {{\mathcal {X}}}} \in {\mathbb {C}}^{{{\mathcal {X}}}}\), the identities

$$\begin{aligned} \frac{1}{g_x} = -z - \frac{1}{|{{\mathcal {X}}} |}\sum _{y \in {{\mathcal {X}}}} g_y + \varepsilon _x \end{aligned}$$
(4.42)

hold for all \(x \in {{\mathcal {X}}}\). Then there are constants \(b, C \in (0,\infty )\), depending only on \(\kappa \), such that if \(\max _{x \in {{\mathcal {X}}}} |g_x -m(z) | \leqslant b\) then

$$\begin{aligned} \max _{x \in {{\mathcal {X}}}} |g_x - m(z) | \leqslant C \max _{x \in {{\mathcal {X}}}} |\varepsilon _x |, \end{aligned}$$
(4.43)

where m(z) satisfies (4.6).

Proof of Proposition 4.18

Throughout the proof, we work on the event \(\{\phi = 1\}\), which, by (4.41), is contained in the event \(\{\theta = 1\}\). Fix \(\mathfrak a\) as in Proposition 4.8. Throughout the proof we use that \(d^{-1/2} \leqslant \varphi _\mathfrak a\) by the upper bound in (4.13). Owing to (4.38b), it suffices to estimate \(\Lambda _{\mathrm d}\). Let b be chosen as in Lemma 4.19, and set \(\lambda :=b/2\) in the definition (4.39) of \(\phi \).

For the analysis of \(G_{xx}\) we distinguish the two cases \(x \in {{\mathcal {T}}}_\mathfrak a\) and \(x \notin {{\mathcal {T}}}_\mathfrak a\).

If \(x \in {{\mathcal {T}}}_\mathfrak a\) then, using Lemma 4.16 and the definition (4.18) of \(\Psi _x\), we write

$$\begin{aligned}&\frac{1}{G_{xx}} = -z - \sum _y^{(x)} |H_{xy} |^2 G_{yy}^{(x)} + Y_x = -z - \frac{1}{N} \sum _y^{(x)} G_{yy}^{(x)} + Y_x - \Psi _x\\&= -z - \frac{1}{|{{\mathcal {T}}}_\mathfrak a |} \sum _{y \in {{\mathcal {T}}}_\mathfrak a} G_{yy} + \varepsilon _x\,, \end{aligned}$$

where the error term \(\varepsilon _x\) satisfies

$$\begin{aligned} |\varepsilon _x | = {{\mathcal {O}}} \biggl (d^{-1/2} + \frac{1}{N} \exp ( q \varphi _\mathfrak a^2 d) + \exp (-2 q \varphi _\mathfrak a^2 d ) + \varphi _\mathfrak a\biggr ) = \mathcal O(\varphi _\mathfrak a) \end{aligned}$$
(4.44)

with very high probability. Here, in the first step of (4.44) we used (4.38a), (4.38c), Proposition 4.8 (i), and the bound on \(\Psi _x\) in the definition (4.17) of \({{\mathcal {T}}}_\mathfrak a\), and in the second step of (4.44) we used that \(\varphi _\mathfrak a^2 d = \mathfrak a^2 (\log N)^{2/3} d^{-1/3}\) and (4.13) imply \((\log N)^{1/6} / {{\mathcal {C}}} \leqslant \varphi ^2_\mathfrak ad \leqslant {{\mathcal {C}}} (\log N)^{1/2}\), which yields

$$\begin{aligned} \frac{1}{N} \exp ( q \varphi _\mathfrak a^2 d) + \exp (-2 q \varphi _\mathfrak a^2 d ) \leqslant {{\mathcal {C}}} d^{-10} \leqslant \varphi _\mathfrak a\,. \end{aligned}$$
(4.45)
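Both estimates in (4.45) are quick to verify from this two-sided bound on \(\varphi ^2_\mathfrak ad\):

$$\begin{aligned} \frac{1}{N} \exp ( q \varphi _\mathfrak a^2 d) \leqslant \mathrm {e}^{q {{\mathcal {C}}} (\log N)^{1/2} - \log N} \leqslant N^{-1/2}\,, \qquad \exp (-2 q \varphi _\mathfrak a^2 d ) \leqslant \exp \bigl (-2 q (\log N)^{1/6}/{{\mathcal {C}}}\bigr )\,, \end{aligned}$$

and both right-hand sides decay faster than any fixed power of \(\log N\). Since the lower bound \((\log N)^{1/6}/{{\mathcal {C}}} \leqslant \varphi ^2_\mathfrak ad\) forces \(d \leqslant {{\mathcal {C}}} (\log N)^{3/2}\), both terms are in particular \({{\mathcal {O}}}(d^{-10})\), while \(d^{-10} \leqslant \varphi _\mathfrak a\) for large N.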

Thus, for \((G_{xx})_{x \in {{\mathcal {T}}}_\mathfrak a}\) we get the self-consistent equation in (4.42) with \(g_x = G_{xx}\) and \({{\mathcal {X}}} = {{\mathcal {T}}}_\mathfrak a\). Moreover, by the bound on \(\Phi _x\) in the definition (4.17) of \({{\mathcal {T}}}_\mathfrak a\), we have \(\beta _x = 1 + {{\mathcal {O}}}(\varphi _\mathfrak a)\). Hence, by (A.5), the assumption \(\phi = 1\) and \(d \geqslant {\mathcal {C}}\sqrt{\log N}\), we find that

$$\begin{aligned} |G_{xx} - m | \leqslant |G_{xx} - m_{\beta _x} | + |m_{\beta _x} - m | \leqslant b\,, \end{aligned}$$

choosing the constant \({{\mathcal {D}}}\) in (4.13) large enough that the right-hand side of (A.5), i.e. \(C |\beta _x - 1 |\), is bounded by b/2. Hence Lemma 4.19 is applicable and yields \(|G_{xx} - m | = O(\max _{y \in {{\mathcal {T}}}_\mathfrak a} |\varepsilon _y |)\). Therefore,

$$\begin{aligned} |G_{xx} - m_{\beta _x} | \leqslant |G_{xx} - m | + |m - m_{\beta _x} | \leqslant {\mathcal {C}}\varphi _\mathfrak a\end{aligned}$$
(4.46)

with very high probability. This concludes the proof in the case \(x \in {{\mathcal {T}}}_\mathfrak a\).

What remains is the case \(x \notin {{\mathcal {T}}}_\mathfrak a\). In that case, we obtain from Lemma 4.16 that

$$\begin{aligned} \frac{1}{G_{xx}} = -z - \sum _{y \in {{\mathcal {T}}}_\mathfrak a}^{(x)} |H_{xy} |^2 G_{yy}^{(x)} - \sum _{y \in {{\mathcal {T}}}_\mathfrak a^c}^{(x)} |H_{xy} |^2 G_{yy}^{(x)} + Y_x = -z - \beta _x m + \varepsilon _x\,, \end{aligned}$$
(4.47)

where the error term \(\varepsilon _x\) satisfies \(\varepsilon _x = {{\mathcal {O}}} ((1 + \beta _x) \varphi _\mathfrak a)\) with very high probability. Here we used (4.38a) as well as (4.38c), (4.45), (4.46) and Proposition 4.8 (ii) twice to conclude that

$$\begin{aligned}&\sum _{y \in {{\mathcal {T}}}_\mathfrak a}^{(x)} |H_{xy} |^2 G_{yy}^{(x)} = \beta _x m + {{\mathcal {O}}} (\beta _x \varphi _\mathfrak a) \, ,\\&\qquad \sum _{y \in \mathcal T_\mathfrak a^c}^{(x)} |H_{xy} |^2 G_{yy}^{(x)} = {{\mathcal {O}}}\big (\varphi _\mathfrak a+ d^4 \exp (-q \varphi _\mathfrak a^2 d )\big ) = {{\mathcal {O}}}(\varphi _\mathfrak a) \end{aligned}$$

with very high probability. From (4.7) and (4.47) we therefore get

$$\begin{aligned} G_{xx} - m_{\beta _x} = - m_{\beta _x} \, \frac{1}{-z - \beta _x m + \varepsilon _x} \, \varepsilon _x\,. \end{aligned}$$
(4.48)
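Indeed, (4.48) is the standard resolvent-difference identity: by (4.7) we have \(m_{\beta _x} = (-z - \beta _x m)^{-1}\), and (4.47) gives \(G_{xx} = (-z - \beta _x m + \varepsilon _x)^{-1}\), so that

$$\begin{aligned} G_{xx} - m_{\beta _x} = \frac{1}{-z - \beta _x m + \varepsilon _x} - \frac{1}{-z - \beta _x m} = - m_{\beta _x} \, \frac{1}{-z - \beta _x m + \varepsilon _x} \, \varepsilon _x\,. \end{aligned}$$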

To estimate the right-hand side of (4.48), we consider the cases \(\beta _x \leqslant 1\) and \(\beta _x > 1\) separately.

If \(\beta _x \leqslant 1\) then, by (A.4), the first factor of (4.48) is bounded by C. Moreover, by (4.7), the second factor is bounded by 2C provided that \(|\varepsilon _x | \leqslant 1/(2C)\), which is ensured by choosing \({{\mathcal {D}}}\) in (4.13) large enough, and the third factor is bounded by \({{\mathcal {C}}} \varphi _\mathfrak a\). This yields the claim.

If \(\beta _x > 1\), we use that \({{\,\mathrm{Im}\,}}m \geqslant c\) for some constant \(c > 0\) depending only on \(\kappa \) and L. Thus, the right-hand side of (4.48) is bounded in absolute value, again using (A.4), by \(C \frac{1}{\beta _x c/2} {{\mathcal {C}}} \beta _x \varphi _\mathfrak a\), provided that \({{\mathcal {D}}}\) in (4.13) is chosen large enough. This yields the claim. \(\quad \square \)

Proof of Theorem 4.2

After possibly increasing L, we can assume that L in the definition of \({\mathbf {S}}\) in (4.3) satisfies \(L \geqslant 2/\lambda + 1\), where \(\lambda \) is chosen as in Proposition 4.18.

We first show that (4.10) follows from (4.9). Indeed, averaging the estimate on \(|G_{xx} - m_{\beta _x} |\) in (4.9) over \(x \in [N]\), using that \(m_{\beta _x} = m + O(\varphi _\mathfrak a)\) for \(x \in {{\mathcal {T}}}_\mathfrak a\) by (A.5), and estimating the summands in \({{\mathcal {T}}}_\mathfrak a^c\) by Proposition 4.8 (i) and (A.4) yields (4.10), owing to (4.45).

What remains is the proof of (4.9). Let \(z_0 \in {\mathbf {S}}\), set \(J :=\min \{ j \in {\mathbb {N}}_0 :\mathrm {Im}\,z_0 + j N^{-3} \geqslant 2 / \lambda \}\), and define \(z_j :=z_0 + \mathrm {i}j N^{-3}\) for \(j \in [J]\). We shall prove the bound in (4.9) at \(z = z_j\) by induction on j, starting from \(j = J\) and going down to \(j = 0\). Since \(|G_{xy}(z) | \leqslant (\mathrm {Im}\,z)^{-1}\) and \(|m_{\beta _x}(z) | \leqslant (\mathrm {Im}\,z)^{-1}\) for all \(x,y \in [N]\), we have \(\max _x |G_{xx}(z_J) - m_{\beta _x}(z_J) | \leqslant \lambda \) and \(\phi (z_J) = 1\).

For the induction step \(j \rightarrow j - 1\), suppose that \(\phi (z_j) = 1\) with very high probability. Then, by Proposition 4.18, we deduce that \(\Lambda (z_j) \leqslant \mathcal C \varphi _\mathfrak a\) with very high probability. Since \(G_{xy}\) and \(m_{\beta _x}\) are Lipschitz-continuous on \({\mathbf {S}}\) with constant \(N^2\), we conclude that \(\Lambda (z_{j-1}) \leqslant {\mathcal {C}}\varphi _\mathfrak a+ N^{-1}\) with very high probability. If N is sufficiently large and \(\varphi _\mathfrak a\) is sufficiently small, which is ensured by choosing \({{\mathcal {D}}}\) in (4.13) large enough, then we deduce that \(\Lambda (z_{j-1}) \leqslant \lambda \) with very high probability and hence \(\phi (z_{j - 1}) = 1\) with very high probability. Using Proposition 4.18, this concludes the induction step, and hence establishes \(\Lambda (z_0) \leqslant {{\mathcal {C}}} \varphi _\mathfrak a\) with very high probability. Here we used that the intersection of J events of very high probability is again an event of very high probability, since \(J \leqslant C N^3\), where C depends on \(\kappa \). \(\quad \square \)