Delocalization Transition for Critical Erdős–Rényi Graphs

We analyse the eigenvectors of the adjacency matrix of a critical Erdős–Rényi graph $\mathbb{G}(N, d/N)$, where $d$ is of order $\log N$. We show that its spectrum splits into two phases: a delocalized phase in the middle of the spectrum, where the eigenvectors are completely delocalized, and a semilocalized phase near the edges of the spectrum, where the eigenvectors are essentially localized on a small number of vertices. In the semilocalized phase the mass of an eigenvector is concentrated in a small number of disjoint balls centred around resonant vertices, in each of which it is a radial exponentially decaying function.

The transition between the phases is sharp and is manifested in a discontinuity in the localization exponent $\gamma(\mathbf{w})$ of an eigenvector $\mathbf{w}$, defined through $\|\mathbf{w}\|_\infty / \|\mathbf{w}\|_2 = N^{-\gamma(\mathbf{w})/2}$. Our results remain valid throughout the optimal regime $\sqrt{\log N} \ll d \leqslant O(\log N)$.


Overview. Let $A$ be the adjacency matrix of a graph with vertex set $[N] = \{1, \ldots, N\}$. We are interested in the geometric structure of the eigenvectors of $A$, in particular their spatial localization. An $\ell^2$-normalized eigenvector $\mathbf{w} = (w_x)_{x \in [N]}$ gives rise to a probability measure $\sum_{x \in [N]} w_x^2 \, \delta_x$ on the set of vertices. Informally, $\mathbf{w}$ is delocalized if its mass is approximately uniformly distributed throughout $[N]$, and localized if its mass is essentially concentrated on a small number of vertices.

There are several ways of quantifying spatial localization. One is the notion of concentration of mass, sometimes referred to as scarring [49], stating that there is some set $B \subset [N]$ of small cardinality and a small $\varepsilon > 0$ such that $\sum_{x \in B} w_x^2 \geqslant 1 - \varepsilon$. In this case, it is also of interest to characterize the geometric structure of the vertex set $B$ and of the eigenvector $\mathbf{w}$ restricted to $B$. Another convenient quantifier of spatial localization is the $\ell^p$-norm $\|\mathbf{w}\|_p$ for $2 \leqslant p \leqslant \infty$. It has the following interpretation: if the mass of $\mathbf{w}$ is uniformly distributed over some set $B \subset [N]$ then $\|\mathbf{w}\|_p^2 = |B|^{-1+2/p}$. Focusing on the $\ell^\infty$-norm for definiteness, we define the localization exponent $\gamma(\mathbf{w})$ through $\|\mathbf{w}\|_\infty = N^{-\gamma(\mathbf{w})/2}$. Thus $0 \leqslant \gamma(\mathbf{w}) \leqslant 1$, and $\gamma(\mathbf{w}) = 0$ corresponds to localization at a single vertex while $\gamma(\mathbf{w}) = 1$ to complete delocalization.

In this paper we address the question of spatial localization for the random Erdős–Rényi graph $\mathbb{G}(N, d/N)$. We consider the limit $N \to \infty$ with $d \equiv d_N$. It is well known that $\mathbb{G}(N, d/N)$ undergoes a dramatic change in behaviour at the critical scale $d \asymp \log N$, which is the scale at and below which the vertex degrees do not concentrate. Thus, for $d \gg \log N$, with high probability all degrees are approximately equal and the graph is homogeneous. On the other hand, for $d \lesssim \log N$, the degrees do not concentrate and the graph becomes highly inhomogeneous: it contains for instance hubs of exceptionally large degree, leaves, and isolated vertices. As long as $d > 1$, the graph has with high probability a unique giant component, and we shall always restrict our attention to it.
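These notions are easy to probe numerically. The following sketch (Python with NumPy; the sizes $N$ and $|B|$ are arbitrary illustrative choices) checks the identity $\|\mathbf{w}\|_p^2 = |B|^{-1+2/p}$ for a vector with uniform mass on a set $B$, and evaluates the localization exponent in the two extreme cases.

```python
import numpy as np

# Localization exponent gamma(w), with the convention (used in this sketch)
# that gamma = 0 means localization at a single vertex and gamma = 1 complete
# delocalization.  N and the set size B are arbitrary choices.

N = 4096

def gamma(w):
    """gamma(w) defined through ||w||_inf = N^{-gamma(w)/2} for unit w."""
    w = w / np.linalg.norm(w)
    return -2.0 * np.log(np.max(np.abs(w))) / np.log(N)

# A vector whose mass is uniform on a set B satisfies ||w||_p^2 = |B|^{-1+2/p}.
B = 64
w = np.zeros(N)
w[:B] = B ** -0.5                       # l2-normalized, uniform on B
for p in (2.0, 3.0, 6.0):
    lp = np.sum(np.abs(w) ** p) ** (1.0 / p)
    assert np.isclose(lp ** 2, B ** (-1.0 + 2.0 / p))

delta = np.zeros(N); delta[0] = 1.0     # localized at a single vertex
flat = np.full(N, N ** -0.5)            # completely delocalized
assert abs(gamma(delta)) < 1e-12        # gamma = 0
assert abs(gamma(flat) - 1.0) < 1e-12   # gamma = 1
```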
Here we propose the Erdős-Rényi graph at criticality as a simple and natural model on which to address the question of spatial localization of eigenvectors. It has the following attributes.
(i) Its graph structure provides an intrinsic and nontrivial notion of distance.
(ii) Its spectrum splits into a delocalized phase and a semilocalized phase. The transition between the phases is sharp, in the sense of a discontinuity in the localization exponent.
(iii) Both phases are amenable to rigorous analysis.
Our results are summarized in the phase diagram of Fig. 1, which is expressed in terms of the parameter $b$ parametrizing $d = b \log N$ on the critical scale and the eigenvalue $\lambda$ of $A/\sqrt{d}$ associated with the eigenvector $\mathbf{w}$. To the best of our knowledge, the phase coexistence for the critical Erdős–Rényi graph established in this paper had previously not been analysed even in the physics literature.
Throughout the following, we always exclude the largest eigenvalue of $A$, its Perron–Frobenius eigenvalue, which is an outlier separated from the rest of the spectrum. The delocalized phase is characterized by a localization exponent asymptotically equal to 1. It exists for all fixed $b > 0$ and consists asymptotically of energies in $(-2, 0) \cup (0, 2)$. The semilocalized phase is characterized by a localization exponent asymptotically less than 1. It exists only when $b < b_*$, for a critical value $b_* := \frac{1}{\log 4 - 1} \approx 2.59$, and consists asymptotically of the energies $|\lambda| \geqslant 2$ near the edges of the spectrum. Moreover, in the semilocalized phase scarring occurs, in the sense that a fraction $1 - o(1)$ of the mass of the eigenvectors is supported in a set of at most $N^{\rho_b(\lambda) + o(1)}$ vertices.
The eigenvalues in the semilocalized phase were analysed in [10], where it was proved that they arise precisely from vertices $x$ of abnormally large degree, $D_x \geqslant 2d$. More precisely, it was proved in [10] that each vertex $x$ with $D_x \geqslant 2d$ gives rise to two eigenvalues of $A/\sqrt{d}$, located near $\pm\Lambda(D_x/d)$ for the explicit function $\Lambda$ introduced below. The same result for the $O(1)$ largest degree vertices was independently proved in [54] by a different method. We refer also to [14,15] for an analysis in the supercritical and subcritical phases.
In the current paper, we prove that the eigenvector $\mathbf{w}$ associated with an eigenvalue $\lambda$ in the semilocalized phase is highly concentrated around resonant vertices at energy $\lambda$, which are defined as the vertices $x$ such that $\Lambda(D_x/d)$ is close to $\lambda$. For this reason, we also call the resonant vertices localization centres. With high probability, and after a small pruning of the graph, all balls $B_r(x)$ of a certain radius $r \gg 1$ around the resonant vertices are disjoint, and within any such ball $B_r(x)$ the eigenvector $\mathbf{w}$ is an approximately radial, exponentially decaying function. The number of resonant vertices at energy $\lambda$ is comparable to the density of states, $N^{\rho_b(\lambda) + o(1)}$, which is much less than $N$. See Fig. 3 for a schematic illustration of the mass distribution of $\mathbf{w}$.

The behaviour of the critical Erdős–Rényi graph described above has some similarities with, but also differences from, that of the Anderson model [11]. The Anderson model on $\mathbb{Z}^n$ with $n \geqslant 3$ is conjectured to exhibit a metal-insulator, or delocalization-localization, transition: for weak enough disorder, the spectrum splits into a delocalized phase in the middle of the spectrum and a localized phase near the spectral edges. See e.g. [8, Figure 1.2] for a phase diagram of its conjectured behaviour. So far, only the localized phase of the Anderson model has been understood rigorously, in the landmark works [4,39] as well as many subsequent developments. The phase diagram for the Anderson model bears some similarity to that of Fig. 1, in which one can interpret $1/b$ as the disorder strength, since smaller values of $b$ lead to stronger inhomogeneities in the graph.
As is apparent from the proofs in [4,39], in the localized phase the local structure of an eigenvector of the Anderson model is similar to that of the critical Erdős–Rényi graph described above: exponentially decaying around well-separated localization centres associated with resonances near the energy $\lambda$ of the eigenvector. The localization centres arise from exceptionally large local averages of the potential. The phenomenon of localization can be heuristically understood using the following well-known rule of thumb: one expects localization around a single localization centre if the level spacing is much larger than the tunnelling amplitude between localization centres. It arises from perturbation theory around the block diagonal model where the complement of the balls $B_r(x)$ around localization centres is set to zero. On a very elementary level, this rule is illustrated by the matrix $H(t) = \begin{pmatrix} 0 & t \\ t & 1 \end{pmatrix}$, whose eigenvectors are localized for $t = 0$, remain essentially localized for $t \ll 1$, where perturbation theory around $H(0)$ is valid, and become delocalized for $t \gg 1$, where perturbation theory around $H(0)$ fails.
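The two regimes of this two-by-two toy model are easy to verify numerically. A minimal sketch (Python with NumPy; the sample values of $t$ are arbitrary):

```python
import numpy as np

# The 2x2 toy model H(t) = [[0, t], [t, 1]]: the eigenvector branch whose
# eigenvalue tends to 0 as t -> 0 stays localized for t << 1 and becomes
# delocalized for t >> 1.

def H(t):
    return np.array([[0.0, t], [t, 1.0]])

def top_component(t):
    """Largest |entry| of the eigenvector branch with eigenvalue -> 0 as t -> 0."""
    vals, vecs = np.linalg.eigh(H(t))
    v = vecs[:, np.argmin(np.abs(vals))]
    return np.max(np.abs(v))

assert top_component(1e-3) > 0.999                  # localized: ~ e_1
assert abs(top_component(1e3) - 2 ** -0.5) < 1e-3   # delocalized: ~ (e_1 - e_2)/sqrt(2)
```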
More precisely, it is a general heuristic that the tunnelling amplitude decays exponentially in the distance between the localization centres [25]. Denoting by $\beta(\lambda) > 1$ the rate of exponential decay at energy $\lambda$, the rule of thumb hence reads
$$\beta(\lambda)^{-L} \ll \varepsilon(\lambda), \qquad (1.3)$$
where $L$ is the distance between the localization centres and $\varepsilon(\lambda)$ the level spacing at energy $\lambda$. For the Anderson model restricted to a finite cube of $\mathbb{Z}^n$ with side length $N^{1/n}$, the level spacing $\varepsilon(\lambda)$ is of order $N^{-1}$ (see [57] and [8, Chapter 4]) whereas the diameter of the graph is of order $N^{1/n}$. Hence, the rule of thumb (1.3) becomes $\beta(\lambda)^{-N^{1/n}} \ll N^{-1}$, which is satisfied, and one therefore expects localization. For the critical Erdős–Rényi graph, the level spacing $\varepsilon(\lambda)$ is $N^{-\rho(\lambda) + o(1)}$ but the diameter of the giant component is only of order $\frac{\log N}{\log d}$. Hence, the rule of thumb (1.3) becomes $\beta(\lambda)^{-\log N/\log d} \ll N^{-\rho(\lambda) + o(1)}$, which is never satisfied, because $\beta(\lambda)^{-\log N/\log d} = N^{-\log \beta(\lambda)/\log d}$ and $\frac{\log \beta(\lambda)}{\log d} \to 0$ as $N \to \infty$. Thus, the rule of thumb (1.3) is satisfied in the localized phase of the Anderson model but not in the semilocalized phase of the critical Erdős–Rényi graph. The underlying reason for this difference is that the diameter of the Anderson model is polynomial in $N$, while the diameter of the critical Erdős–Rényi graph is logarithmic in $N$. Thus, the critical Erdős–Rényi graph is far more connected than the Anderson model; this property tends to push it towards the delocalized behaviour of mean-field systems. As noted above, another important difference between the localized phase of the Anderson model and the semilocalized phase of the critical Erdős–Rényi graph is that the density of states is of order $N$ in the former and a fractional power of $N$ in the latter.
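The scale comparison in the rule of thumb (1.3) can be made concrete with a few lines of arithmetic. In the sketch below the decay rate, the exponent of the level spacing, and the dimension ($\beta$, $\rho$, $n$) are arbitrary illustrative values; only the opposite trends in $N$ of the ratio of tunnelling amplitude to level spacing matter.

```python
import math

# Tunnelling amplitude beta^{-L} versus level spacing eps(lambda) in the two
# geometries.  beta, rho and n below are arbitrary illustrative values.

beta, rho, n = 1.5, 0.3, 3

def anderson_ratio(N):
    """beta^{-L} / eps for a cube of Z^n with N sites: L ~ N^{1/n}, eps ~ 1/N."""
    return beta ** (-N ** (1.0 / n)) / (1.0 / N)

def er_ratio(N):
    """Same ratio for the critical graph: L ~ log N / log d, eps ~ N^{-rho}."""
    d = math.log(N)
    return beta ** (-math.log(N) / math.log(d)) / N ** (-rho)

# Anderson: ratio -> 0 (rule of thumb satisfied, localization expected);
# Erdos-Renyi: ratio diverges (rule of thumb violated).
assert anderson_ratio(10 ** 8) < anderson_ratio(10 ** 6) < anderson_ratio(10 ** 4)
assert er_ratio(10 ** 8) > er_ratio(10 ** 6) > er_ratio(10 ** 4)
```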
Up to now we have focused on the Erdős–Rényi graph on the critical scale $d \asymp \log N$. It is natural to ask whether this assumption can be relaxed without changing the graph's behaviour. The question of the upper bound on $d$ is simple: as explained above, there is no semilocalized phase for $d > b_* \log N$, and the delocalized phase is completely understood up to $d \leqslant N/2$, thanks to Theorem 1.8 below and [35,42]. The lower bound is more subtle. In fact, it turns out that all of our results remain valid throughout the regime
$$\sqrt{\log N} \ll d \leqslant O(\log N). \qquad (1.4)$$
The lower bound $\sqrt{\log N}$ is optimal in the sense that below it both phases are disrupted and the phase diagram from Fig. 1 no longer holds. Indeed, for $d \ll \sqrt{\log N}$ a new family of localized states, associated with so-called tuning forks at the periphery of the graph, appears throughout the delocalized and semilocalized phases. We refer to Sect. 1.5 below for more details.
Although a rigorous understanding of the metal-insulator transition for the Anderson model is still elusive, some progress has been made for random band matrices. Random band matrices [23,40,47,58] constitute an attractive model interpolating between the Anderson model and mean-field Wigner matrices. They retain the $n$-dimensional structure of the Anderson model but have proved somewhat more amenable to rigorous analysis. They are conjectured [40] to have a phase diagram similar to that of the Anderson model in dimensions $n \geqslant 3$. As for the Anderson model, dimensions $n > 1$ have so far seen little progress, but for $n = 1$ much has been understood, both in the localized [48,50] and the delocalized [20-22,28-33,43,51,52,59] phases. A simplification of band matrices is the ultrametric ensemble [41], where the Euclidean metric of $\mathbb{Z}^n$ is replaced with an ultrametric arising from a tree structure. For this model, a phase transition was rigorously established in [56].
Another modification of the n-dimensional Anderson model is the Anderson model on the Bethe lattice, an infinite regular tree corresponding to the case n = ∞. For it, the existence of a delocalized phase was shown in [5,38,44]. In [6,7] it was shown that for unbounded random potentials the delocalized phase exists for arbitrarily weak disorder. It extends beyond the spectrum of the unperturbed adjacency matrix into the so-called Lifschitz tails, where the density of states is very small. The authors showed that, through the mechanism of resonant delocalization, the exponentially decaying tunnelling amplitudes between localization centres are counterbalanced by an exponentially large number of possible channels through which tunnelling can occur, so that the rule of thumb (1.3) for localization is violated. As a consequence, the eigenvectors are delocalized across many resonant localization centres. We remark that this analysis was made possible by the absence of cycles on the Bethe lattice. In contrast, the global geometry of the critical Erdős-Rényi graph is fundamentally different from that of the Bethe lattice (through the existence of a very large number of long cycles), which has a defining impact on the nature of the delocalization-semilocalization transition summarized in Fig. 1.
Transitions in the localization behaviour of eigenvectors have also been analysed in several mean-field type models. In [45,46] the authors considered the sum of a Wigner matrix and a diagonal matrix with independent random entries with a large enough variance. They showed that the eigenvectors in the bulk are delocalized while near the edge they are partially localized at a single site. Their partially localized phase can be understood heuristically as a rigorous (and highly nontrivial) verification of the rule of thumb for localization, where the perturbation takes place around the diagonal matrix. Heavy-tailed Wigner matrices, or Lévy matrices, whose entries have $\alpha$-stable laws for $0 < \alpha < 2$, were proposed in [24] as a simple model that exhibits a transition in the localization of its eigenvectors; we refer to [3] for a summary of the predictions from [24,53]. In [18,19] it was proved that for energies in a compact interval around the origin, eigenvectors are weakly delocalized, and that for $0 < \alpha < 2/3$ and energies far enough from the origin, eigenvectors are weakly localized. In [3], full delocalization was proved in a compact interval around the origin, and the authors even established GOE local eigenvalue statistics in the same spectral region. In [2], the law of the eigenvector components of Lévy matrices was computed.

Conventions. Throughout the following, every quantity that is not explicitly constant depends on the fundamental parameter $N$. We almost always omit this dependence from our notation. We use $C$ to denote a generic positive universal constant. The entrywise nonnegative matrix $A/\sqrt{d}$ has a trivial Perron–Frobenius eigenvalue, which is its largest eigenvalue. In the following we only consider the other eigenvalues, which we call nontrivial. In the regime $d \gg \sqrt{\log N}/\log\log N$, which we always assume in this paper, the trivial eigenvalue is located at $\sqrt{d}\,(1 + o(1))$, and it is separated from the nontrivial ones with high probability; see [14].
Moreover, without loss of generality, in this subsection we always assume that $d \leqslant 3 \log N$, for otherwise the semilocalized phase does not exist (see Sect. 1.1).
In Theorem 1.7 below we show that the nontrivial eigenvalues of $A/\sqrt{d}$ outside the interval $[-2, 2]$ are in two-to-one correspondence with the vertices of normalized degree greater than 2: each vertex $x$ with $\alpha_x > 2$ gives rise to two eigenvalues of $A/\sqrt{d}$ located with high probability near $\pm\Lambda(\alpha_x)$, where we defined the bijective function $\Lambda : [2, \infty) \to [2, \infty)$ through
$$\Lambda(\alpha) := \frac{\alpha}{\sqrt{\alpha - 1}}.$$
Our main result in the semilocalized phase is about the eigenvectors associated with these eigenvalues. To state it, we need the following notions.

Definition 1.1. Let $\lambda > 2$ and $0 < \delta \leqslant \lambda - 2$. We define the set of resonant vertices at energy $\lambda$ through
$$W_{\lambda,\delta} := \{x : |\Lambda(\alpha_x) - \lambda| \leqslant \delta\}. \qquad (1.7)$$

We denote by $B_r(x)$ the ball around the vertex $x$ of radius $r$ for the graph distance in $\mathbb{G}$. Define
$$r := c \, \frac{\log N}{\log d}; \qquad (1.8)$$
all of our results will hold provided $c > 0$ is chosen to be a small enough universal constant. The quantity $r$ will play the role of a maximal radius for balls around localization centres. We also introduce basic control parameters $\xi$ and $\xi_{\lambda-2}$, which under our assumptions will always be small (see Remark 1.5 below).

We now state our main result in the semilocalized phase (Theorem 1.2). Let $\mathbf{w}$ be a normalized eigenvector of $A/\sqrt{d}$ with nontrivial eigenvalue $\lambda \geqslant 2 + C\xi^{1/2}$, and let $0 < \delta \leqslant (\lambda - 2)/2$. Then for each $x \in W_{\lambda,\delta}$ there exists a normalized vector $\mathbf{v}(x)$, supported in $B_r(x)$, such that the supports of $\mathbf{v}(x)$ and $\mathbf{v}(y)$ are disjoint for $x \neq y$, and the projection of $\mathbf{w}$ onto the span of the vectors $\mathbf{v}(x)$, $x \in W_{\lambda,\delta}$, captures all but a small fraction of the mass of $\mathbf{w}$. Moreover, $\mathbf{v}(x)$ decays exponentially around $x$, in the sense that for any $\rho \geqslant 0$ the mass of $\mathbf{v}(x)$ outside the ball $B_\rho(x)$ is exponentially small in $\rho$.

Theorem 1.2 implies that $\mathbf{w}$ is almost entirely concentrated in the balls around the resonant vertices, and in each such ball $B_r(x)$, $x \in W_{\lambda,\delta}$, the vector $\mathbf{w}$ is almost collinear to the vector $\mathbf{v}(x)$. Thus, $\mathbf{v}(x)$ has the interpretation of the localization profile around the localization centre $x$. Since it has exponential decay, we deduce immediately from Theorem 1.2 that the radius $r$ can be made smaller at the expense of worse error terms.
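The correspondence between large-degree vertices and eigenvalues $\pm\Lambda(\alpha_x)$ can be checked against the tree approximation. A minimal sketch (Python with NumPy) uses the radial (tridiagonal) reduction of the adjacency matrix of a rooted tree whose root has $D$ children and whose other vertices have $d$ children; the closed formula $\Lambda(\alpha) = \alpha/\sqrt{\alpha - 1}$ and the parameter values are assumptions of this illustration.

```python
import numpy as np

# On the infinite rooted (D, d)-regular tree, the adjacency matrix reduced
# to radial functions around the root is a Jacobi matrix with off-diagonal
# entries sqrt(D), sqrt(d), sqrt(d), ...  Its top eigenvalue, divided by
# sqrt(d), reproduces Lambda(alpha) = alpha/sqrt(alpha - 1) with alpha = D/d
# (the formula assumed in this sketch), valid for alpha >= 2.

def radial_top_eigenvalue(D, d, depth=200):
    off = np.array([np.sqrt(D)] + [np.sqrt(d)] * (depth - 1))
    Z = np.diag(off, 1) + np.diag(off, -1)     # truncated radial Jacobi matrix
    return np.linalg.eigvalsh(Z)[-1]

D, d = 9, 3                                    # alpha = 3 >= 2
alpha = D / d
Lam = alpha / np.sqrt(alpha - 1.0)
assert abs(radial_top_eigenvalue(D, d) / np.sqrt(d) - Lam) < 1e-3
assert abs(radial_top_eigenvalue(12, 3) - 4.0) < 1e-3   # 12/sqrt(12 - 3) = 4
```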
In fact, in Definition 3.2 and Theorem 3.4 below, we give an explicit definition of $\mathbf{v}(x)$, which shows that it is radial in the sense that its value at a vertex $y$ depends only on the distance between $x$ and $y$, and decays exponentially in that distance. To ensure that the supports of the vectors $\mathbf{v}(x)$ for different $x$ do not overlap, $\mathbf{v}(x)$ is in fact defined as the restriction of a radial function around $x$ to a subgraph of $\mathbb{G}$, the pruned graph, which differs from $\mathbb{G}$ by only a small number of edges and whose balls of radius $r$ around the vertices of $W_{\lambda,\delta}$ are disjoint (see Proposition 3.1 below). For positive eigenvalues, the entries of $\mathbf{v}(x)$ are nonnegative, while for negative eigenvalues its entries carry a sign that alternates in the distance to $x$. The set of resonant vertices $W_{\lambda,\delta}$ is a small fraction of the whole vertex set $[N]$; its size is analysed in Lemma A.12 below.
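The radial structure of the localization profile can be illustrated on a finite truncated tree. The following sketch (Python with NumPy; the branching numbers and the depth are arbitrary choices) verifies that the Perron eigenvector of the adjacency matrix of such a tree is constant on each sphere around the root and decays from the root to the leaves.

```python
import numpy as np

# Truncated tree: root with p children, every other internal vertex with q
# children.  By symmetry, the Perron eigenvector of its adjacency matrix is
# radial (constant on spheres around the root).

p, q, depth = 4, 2, 4

edges, levels, nxt = [], [[0]], 1
for ell in range(depth):
    children = p if ell == 0 else q
    new = []
    for v in levels[-1]:
        for _ in range(children):
            edges.append((v, nxt)); new.append(nxt); nxt += 1
    levels.append(new)

A = np.zeros((nxt, nxt))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
u = np.abs(np.linalg.eigh(A)[1][:, -1])      # Perron eigenvector

for lvl in levels:
    assert np.allclose(u[lvl], u[lvl[0]])    # radial: constant on each sphere
assert u[levels[-1][0]] < u[0]               # decays from root to leaves
```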
Remark 1.5. Note that, by the lower bounds imposed on $d$ and $\lambda$ in Theorem 1.2, we always have $\xi, \xi_{\lambda-2} \leqslant 1/C$.
Using the exponential decay of the localization profiles, it is easy to deduce from Theorem 1.2 that a positive proportion of the eigenvector mass concentrates at the resonant vertices.
Next, we state a rigidity result on the eigenvalue locations in the semilocalized phase. It generalizes [10, Corollary 2.3] by improving the error bound and extending it to the full regime (1.4) of $d$, below which it must fail (see Sect. 1.5 below). Its proof is a byproduct of the proof of our main result in the semilocalized phase, Theorem 1.2. We denote the ordered eigenvalues of a Hermitian matrix $M \in \mathbb{C}^{N \times N}$ by $\lambda_1(M) \geqslant \cdots \geqslant \lambda_N(M)$. For the following statements we order the normalized degrees by choosing a (random) permutation $\sigma \in S_N$ such that $i \mapsto \alpha_{\sigma(i)}$ is nonincreasing.

Theorem 1.7 (Eigenvalue locations in semilocalized phase). For any $\nu > 0$ there exists a constant $C$ such that the following holds. Suppose that (1.10) holds, and let $\mathcal{U}$ denote the set of vertices with $\alpha_x > 2$. Then with probability at least $1 - CN^{-\nu}$, for all $1 \leqslant i \leqslant |\mathcal{U}|$ the eigenvalue $\lambda_{i+1}(A/\sqrt{d})$ is close to $\Lambda(\alpha_{\sigma(i)})$, and for all $|\mathcal{U}| + 2 \leqslant i \leqslant N - |\mathcal{U}|$ we have $|\lambda_i(A/\sqrt{d})| \leqslant 2 + o(1)$.

We remark that the upper bound on $d$ from (1.10), which is necessary for the existence of a semilocalized phase, can be relaxed in Theorem 1.7 to obtain an estimate on $\max_{2 \leqslant i \leqslant N} |\lambda_i(A/\sqrt{d})|$ in the supercritical regime $d \geqslant 3 \log N$, which is sharper than the one in [10]. The proof is the same and we do not pursue this direction here.
We conclude this subsection with a discussion of the counting function of the normalized degrees, which we use to estimate the number of resonant vertices (1.7). For $b \geqslant 0$ and $\alpha \geqslant 2$ we define an exponent $\theta_b(\alpha)$ in (1.13), which governs the counting function of the normalized degrees; the value of $\alpha$ at which $\theta_b$ vanishes has the interpretation of the deterministic location of the largest normalized degree. See Fig. 4 for a plot of $\theta_b$.
In Appendix A.4 below, we obtain estimates on the density of the normalized degrees $(\alpha_x)_{x \in [N]}$ and combine them with Theorem 1.2 to deduce a lower bound on the $\ell^p$-norm of eigenvectors in the semilocalized phase. The precise statements are given in Lemma A.12 and Corollary A.13, which provide quantitative error bounds throughout the regime (1.10); here we summarize them, for simplicity, in qualitative form in the critical regime $d \asymp \log N$.

In the delocalized phase, i.e. in $S_\kappa$, we also show that the spectral measure of $A/\sqrt{d}$ at any vertex $x$ is well approximated by the spectral measure at the root of $T_{d\alpha_x, d}$, the infinite rooted $(d\alpha_x, d)$-regular tree, whose root has $d\alpha_x$ children and all of whose other vertices have $d$ children. This approximation is a local law, valid for intervals containing down to $N^\kappa$ eigenvalues. See Remark 4.4 as well as Remark 4.3 and Appendix A.2 below for details.

Remark 1.9. In [42] it is shown that (1.19) holds with probability at least $1 - CN^{-\nu}$ for all eigenvectors provided that (1.20) holds. We note that the domain $S_\kappa$ is optimal, up to the choice of $\kappa > 0$. Indeed, as explained in Sect. 1.5 below, delocalization fails in the neighbourhood of the origin, owing to a proliferation of highly localized tuning fork states. Similarly, we expect delocalization to fail in the neighbourhoods of $\pm 2$, where the mass of the eigenvectors becomes concentrated on vertices $x$ with normalized degrees $\alpha_x$ close to 2. The neighbourhoods of $0, \pm 2$ are also singled out as the regions where the self-consistent equation used to prove Theorem 1.8 (see Lemma 4.16) becomes unstable. This instability is directly related to the appearance of singularities in the spectral measure of the tree $T_{d\alpha_x, d}$ (see (4.11) and Fig. 8 for an illustration). The singularity near 0 occurs when $\alpha_x$ is close to 0, and the singularities near $\pm 2$ when $\alpha_x$ is close to 2. See Fig. 10 for a simulation that demonstrates numerically the failure of delocalization outside of $S_\kappa$.
respectively. Here, the constants C depend on K in addition to ν and κ.
The modifications to the proofs of Theorems 1.2 and 1.7 required to establish Theorem 1.11 are minor and follow along the lines of [10,Section 10]. The modification to the proof of Theorem 1.8 is trivial, since the assumptions of the general Theorem 4.2 below include the sparse Wigner matrix M. We also remark that, with some extra work, one can relax the boundedness assumption on the entries of W , which we shall however not do here.

The limits of sparseness and the scale $d \asymp \sqrt{\log N}$. We conclude this section with a discussion of how sparse $\mathbb{G}$ can be for our results to remain valid. We show that all of our results (Theorems 1.2, 1.7, and 1.8) are wrong below the regime (1.4), i.e. if $d$ is smaller than order $\sqrt{\log N}$. Thus, our sparseness assumptions, namely the lower bounds on $d$ from (1.10) and (1.18), are optimal (up to the factor $\log\log N$ in (1.10) and the factor $C$ in (1.18)). The fundamental reason for this change of behaviour will turn out to be that the ratio $|S_2(x)|/|S_1(x)|$ of the sizes of the second and first spheres around a vertex $x$ no longer concentrates; see (1.22) below.

For $D = 0, 1, 2, \ldots$ we introduce a star tuning fork of degree $D$ rooted in $\mathbb{G}_{\mathrm{giant}}$, or $D$-tuning fork for short, which is obtained by taking two stars with central degree $D$ and connecting their hubs to a common base vertex in $\mathbb{G}_{\mathrm{giant}}$. We refer to Fig. 5 for an illustration and Definition A.17 below for a precise definition.
It is not hard to see that every $D$-tuning fork gives rise to two eigenvalues $\pm\sqrt{D/d}$ of $A/\sqrt{d}$ restricted to $\mathbb{G}_{\mathrm{giant}}$, whose associated eigenvectors are supported on the stars (see Lemma A.18 below). We call the set $\{\pm\sqrt{D/d} : \text{a } D\text{-tuning fork exists}\}$ the spectrum of $A/\sqrt{d}$ restricted to $\mathbb{G}_{\mathrm{giant}}$ generated by the tuning forks. Any eigenvector associated with such an eigenvalue is localized on precisely $2D + 2$ vertices. Thus, $D$-tuning forks provide a simple way of constructing localized states. Note that this is a very basic form of concentration of mass, supported at the periphery of the graph on special graph structures, and is unrelated to the much more subtle concentration in the semilocalized phase described in Sect. 1.2.
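The exact tuning-fork eigenvalues are easy to verify numerically. In the sketch below (Python with NumPy) the construction follows the description above, with the convention, assumed here so that the eigenvector lives on $2D + 2$ vertices, that each hub carries $D$ leaves; the giant component is replaced by a short path, which does not affect the exactness of the antisymmetric eigenvector.

```python
import numpy as np

# A D-tuning fork: two stars, each hub with D leaves, both hubs attached to
# a common base vertex; the base is attached to the rest of the graph (here a
# short path standing in for the giant component).  The antisymmetric star
# vector is an exact eigenvector of A with eigenvalue +-sqrt(D): it vanishes
# on the base, so +-sqrt(D/d) are exact eigenvalues of A/sqrt(d).

def tuning_fork_adjacency(D, tail=5):
    # vertices: 0 = base, 1 = hub1, 2 = hub2, then D leaves per hub, then tail
    n = 3 + 2 * D + tail
    A = np.zeros((n, n))
    def edge(i, j):
        A[i, j] = A[j, i] = 1.0
    edge(0, 1); edge(0, 2)                  # base -- hubs
    for k in range(D):
        edge(1, 3 + k)                      # hub1 -- its leaves
        edge(2, 3 + D + k)                  # hub2 -- its leaves
    edge(0, 3 + 2 * D)                      # base -- "giant component"
    for k in range(tail - 1):
        edge(3 + 2 * D + k, 4 + 2 * D + k)
    return A

D = 3
vals = np.linalg.eigvalsh(tuning_fork_adjacency(D))
assert min(abs(vals - np.sqrt(D))) < 1e-9   # +sqrt(D) is an exact eigenvalue
assert min(abs(vals + np.sqrt(D))) < 1e-9   # -sqrt(D) is an exact eigenvalue
```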
For $d > 0$ and $D \in \mathbb{N}$ we now estimate the number of $D$-tuning forks in $\mathbb{G}(N, d/N)$, which we denote by $F(d, D)$. The following result is proved in Appendix A.6. We deduce that if $d \leqslant (1/2 - \varepsilon) \log N$ then the set of tuning-fork eigenvalues is nonempty, and hence the delocalization of all eigenvectors from Remark 1.9 fails. Hence, the lower bound (1.20) is optimal up to the value of $C$.
Similarly, for $d \gg \sqrt{\log N}$ the set of tuning-fork eigenvalues is in general nonempty, but it is contained in $[-\kappa, \kappa]$ for any fixed $\kappa > 0$, so that these eigenvalues do not interfere with the statements of Theorems 1.2, 1.7, and 1.8. On the other hand, if $d = \sqrt{\log N}/t$ for a constant $t$, we find that the set of tuning-fork eigenvalues is asymptotically dense in the interval $[-t/\sqrt{2}, t/\sqrt{2}]$. Since the conclusions of Theorems 1.2, 1.7, and 1.8 are obviously wrong for any such eigenvalue, they must all fail for large enough $t$. This shows that the lower bounds on $d$ from (1.10) and (1.18) are optimal (up to the factor $\log\log N$ in (1.10) and the factor $C$ in (1.18)).
In fact, the emergence of tuning fork eigenvalues of order one and the failure of all of our proofs have the same underlying root cause, which singles out the scale $d \asymp \sqrt{\log N}$ as the scale below which the concentration (1.22) of the ratio $|S_2(x)|/|S_1(x)|$ fails for vertices $x$ satisfying $D_x \asymp d$. Clearly, to have a $D$-tuning fork with $D \asymp d$, (1.22) has to fail at the hubs of the stars. Moreover, (1.22) enters our proofs of both the semilocalized and the delocalized phase in a crucial way. For the former, it is linked to the validity of the local approximation by the $(D_x, d)$-regular tree from Appendix A.2, which underlies also the construction of the localization profile vectors (see e.g. (3.35) below). For the latter, in the language of Definition 4.6 below, it is linked to the property that most neighbours of any vertex are typical (see Proposition 4.8 (ii) below).
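The sphere ratio whose concentration is at stake in (1.22) can be simulated directly. A minimal sketch (Python with NumPy; the values of $N$ and $d$ and the random seed are arbitrary choices) samples $\mathbb{G}(N, d/N)$ well above the critical scale and checks that $|S_2(x)|/|S_1(x)|$ concentrates around $d$.

```python
import numpy as np

# Sample G(N, d/N) and compare |S_2(x)|/|S_1(x)| with d.  Well above the
# critical scale this ratio concentrates; N, d and the seed are arbitrary.

rng = np.random.default_rng(1)
N, d = 2000, 15.0
upper = np.triu(rng.random((N, N)) < d / N, 1)
A = (upper | upper.T).astype(float)         # symmetric adjacency matrix

S1 = A.sum(axis=1)                          # |S_1(x)| = degree of x
A2 = A @ A
two_step = (A2 > 0) & (A == 0)              # reachable in 2 steps, not adjacent
np.fill_diagonal(two_step, False)           # exclude x itself
S2 = two_step.sum(axis=1)                   # |S_2(x)|

ratios = S2[S1 > 0] / S1[S1 > 0]
assert abs(ratios.mean() - d) / d < 0.2     # mean ratio is close to d
```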

Basic Definitions and Overview of Proofs
In this preliminary section we introduce some basic notations and definitions that are used throughout the paper, and give an overview of the proofs of Theorems 1.2 (semilocalized phase) and 1.8 (delocalized phase). These proofs are unrelated and are therefore explained separately. For simplicity, in this overview we only consider qualitative error terms of the form $o(1)$, although all of our estimates are in fact quantitative.
Vectors in $\mathbb{R}^N$ are denoted by boldface lowercase Latin letters like $\mathbf{u}$, $\mathbf{v}$ and $\mathbf{w}$. For a graph $H$ we denote by $B_r^H(x)$ the ball of radius $r$ around the vertex $x$ for the graph distance in $H$, and by $S_r^H(x)$ the corresponding sphere. If the graph $H$ is the Erdős–Rényi graph $\mathbb{G}$, we systematically omit the superscript $H$.
The following notion of very high probability is a convenient shorthand used throughout the paper. It simplifies considerably probabilistic statements of the kind that appear in Theorems 1.2, 1.7, and 1.8. It also introduces two special symbols, $\nu$ and $C$, which appear throughout the rest of the paper.

Definition 2.1. Let $\Omega \equiv \Omega_{N,\nu}$ be a family of events parametrized by $N \in \mathbb{N}$ and $\nu > 0$. We say that $\Omega$ holds with very high probability if for every $\nu > 0$ there exists $C \equiv C_\nu$ such that $\mathbb{P}(\Omega_{N,\nu}) \geqslant 1 - C N^{-\nu}$ for all $N \in \mathbb{N}$.

Convention 2.2.
In statements that hold with very high probability, we use the special symbol $C \equiv C_\nu$ to denote a generic positive constant depending on $\nu$ such that the statement holds with probability at least $1 - C_\nu N^{-\nu}$ provided $C_\nu$ is chosen large enough. Thus, the bound $|X| \leqslant CY$ with very high probability means that, for each $\nu > 0$, there is a constant $C_\nu > 0$ such that $\mathbb{P}(|X| \leqslant C_\nu Y) \geqslant 1 - C_\nu N^{-\nu}$. Here, $X$ and $Y$ are allowed to depend on $N$. We also write $X = O(Y)$ to mean $|X| \leqslant CY$ with very high probability.
We remark that the notion of very high probability from Definition 2.1 survives a union bound involving N O(1) events. We shall tacitly use this fact throughout the paper. Moreover, throughout the paper, the constant C ≡ C ν in the assumptions (1.10) and (1.18) is always assumed to be large enough.

2.2. Overview of proof in semilocalized phase. The starting point of the proof of Theorem 1.2 is the following simple observation. Suppose that $M$ is a Hermitian matrix with eigenvalue $\lambda$ and associated eigenvector $\mathbf{w}$. Let $\Pi$ be an orthogonal projection and write $\bar{\Pi} := I - \Pi$. If $\lambda$ is not an eigenvalue of the compression of $M$ to the range of $\bar{\Pi}$, then from $(M - \lambda)\mathbf{w} = 0$ we deduce
$$\bar{\Pi} \mathbf{w} = -\big(\bar{\Pi}(M - \lambda)\bar{\Pi}\big)^{-1} \, \bar{\Pi} M \Pi \, \mathbf{w}, \qquad (2.1)$$
where the inverse is taken on the range of $\bar{\Pi}$. If $\Pi$ is an eigenprojection of $M$ whose range contains the eigenspace of $\lambda$ (for instance $\Pi = \mathbf{w}\mathbf{w}^*$ if $\lambda$ is simple) then clearly both sides of (2.1) vanish. The basic idea of our proof is to apply an approximate version of this observation to $M = A/\sqrt{d}$, by choosing $\Pi$ appropriately, and showing that the left-hand side of (2.1) is small by estimating the right-hand side.
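The observation above can be tested numerically: if $(M - \lambda)\mathbf{w} = 0$ and the compression of $M$ to the range of $\bar{\Pi} = I - \Pi$ does not have $\lambda$ as an eigenvalue, then $\bar{\Pi}\mathbf{w}$ is determined linearly by $\Pi\mathbf{w}$. A minimal sketch (Python with NumPy; the matrix and the projection are arbitrary test data), implementing the inverse on the range of $\bar{\Pi}$ by a pseudoinverse:

```python
import numpy as np

# Check: Pibar w = -(Pibar (M - lam) Pibar)^+ Pibar M Pi w for an exact
# eigenpair (lam, w) of a symmetric M, with Pi an orthogonal projection.

rng = np.random.default_rng(0)
n, k = 8, 3
M = rng.standard_normal((n, n)); M = (M + M.T) / 2
vals, vecs = np.linalg.eigh(M)
lam, w = vals[0], vecs[:, 0]                 # an exact eigenpair of M

Pi = np.zeros((n, n)); Pi[:k, :k] = np.eye(k)   # project onto first k coords
Pibar = np.eye(n) - Pi

lhs = Pibar @ w
rhs = -np.linalg.pinv(Pibar @ (M - lam * np.eye(n)) @ Pibar) @ (Pibar @ M @ Pi @ w)
assert np.allclose(lhs, rhs, atol=1e-8)      # (2.1) holds numerically
```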
In fact, we choose $\Pi$ to be the orthogonal projection onto the span of the localization profiles $\mathbf{v}(x)$, $x \in W_{\lambda,\delta}$ (2.2). The construction of the localization profile $\mathbf{v}(x)$ uses the pruned graph $\mathbb{G}_\tau$ from [10], a subgraph of $\mathbb{G}$ depending on a threshold $\tau > 1$, which differs from $\mathbb{G}$ by only a small number of edges and whose balls of radius $r$ around the vertices of $\mathcal{V}_\tau := \{x : \alpha_x \geqslant \tau\}$ are disjoint (see Proposition 3.1 below). Now we define the vector $\mathbf{v}(x) := \mathbf{v}_+^\tau(x)$, where the vectors $\mathbf{v}_\sigma^\tau(x)$, for $\sigma = \pm$ and $\tau > 1$, are radial profiles supported in the ball of radius $r$ around $x$ in $\mathbb{G}_\tau$. The motivation behind this choice is explained in Appendix A.2: with high probability, the $r$-neighbourhood of $x$ in $\mathbb{G}_\tau$ looks roughly like that of the root of the infinite regular tree $T_{D_x, d}$, whose root has $D_x$ children and all of whose other vertices have $d$ children. The adjacency matrix of $T_{D_x, d}$ has the exact eigenvalues $\pm\sqrt{d}\,\Lambda(\alpha_x)$, with the corresponding eigenvectors given in (2.3). The central idea of our proof is the introduction of a block diagonal approximation of the pruned graph. Define $\Pi_\tau$ as the orthogonal projection onto the span of the profiles $\mathbf{v}_\sigma^\tau(x)$, $\sigma = \pm$, $x \in \mathcal{V}_\tau$, and $\bar{\Pi}_\tau := I - \Pi_\tau$. The range of $\Pi$ from (2.2) is a subspace of the range of $\Pi_\tau$, i.e. $\Pi_\tau \Pi = \Pi$. The interpretation of $\Pi_\tau$ is the orthogonal projection onto all localization profiles around vertices with normalized degree at least $2 + o(1)$, which is precisely the set of vertices around which one can define an exponentially decaying localization profile. Now we define the block diagonal approximation of the pruned graph as
$$\widehat{H}^\tau := \Pi_\tau H^\tau \Pi_\tau + \bar{\Pi}_\tau H^\tau \bar{\Pi}_\tau;$$
here we defined the centred and scaled adjacency matrix $H^\tau := A^{\mathbb{G}_\tau}/\sqrt{d} - E^\tau$,
where $E^\tau$ is a suitably chosen matrix that is close to $\mathbb{E} A^{\mathbb{G}}/\sqrt{d}$ and preserves the locality of $A^{\mathbb{G}_\tau}$ in balls around the vertices of $\mathcal{V}_\tau$. In the subspace spanned by the localization profiles, the block diagonal approximation acts on the profiles $\mathbf{v}_\sigma^\tau(x)$; in the orthogonal complement, it is equal to $H^\tau$. The off-diagonal blocks are zero. The main work of our proof consists in an analysis of this block diagonal approximation: we show that (c) it is close to $H^\tau$ and (d) its $\bar{\Pi}_\tau$-block has no spectrum above $2 + o(1)$, from which we deduce (b), a spectral gap around $\lambda$ off the range of $\Pi$. Indeed, ignoring minor issues pertaining to the centring $\mathbb{E} A^{\mathbb{G}}$, the $\Pi_\tau$-block trivially has a spectral gap: it has no eigenvalues in the $\delta$-neighbourhood of $\lambda$ once we remove the projections $\mathbf{v}_\sigma^\tau(x) \mathbf{v}_\sigma^\tau(x)^*$ with eigenvalues $\sigma\Lambda(\alpha_x)$ in the $\delta$-neighbourhood of $\lambda$, which is precisely what passing from $\Pi_\tau$ to $\Pi$ does. Moreover, the $\bar{\Pi}_\tau$-block also has such a spectral gap by (d) and $\lambda > 2 + o(1)$. Hence, by (c), we deduce the desired spectral gap (b).
Thus, what remains is the proof of (c) and (d). To prove (c), we show that the error made in passing from $H^\tau$ to the block diagonal approximation is small. This follows from a detailed analysis of the graph $\mathbb{G} \setminus \mathbb{G}_\tau$ removed from $\mathbb{G}$ to obtain the pruned graph $\mathbb{G}_\tau$, which we decompose as a union of a graph of small maximal degree and a forest, to which standard estimates on adjacency matrices of graphs can be applied (see Lemma 3.8 below). We then show that $\mathbf{v}_\sigma^\tau(x)$ is an approximate eigenvector of $H^\tau$ with approximate eigenvalue $\sigma\Lambda(\alpha_x)$ (see Proposition 3.9 below), and we conclude by using that the balls of radius $r$ around the vertices of $\mathcal{V}_\tau$ are disjoint, together with the locality of the operator $H^\tau$ (see Lemma 3.11 below). Thus we obtain (c).
Finally, we sketch the proof of (d). The starting point is an observation going back to [10,15]: from an estimate on the spectral radius of the nonbacktracking matrix associated with H from [15], and an Ihara-Bass-type formula relating the spectra of H and its nonbacktracking matrix from [15], we obtain a quadratic form inequality for H, in which |H| is the absolute value of the Hermitian matrix H and o(1) is in the sense of operator norm (see Proposition 3.13 below). Using (c), we deduce the inequality (2.5). To estimate Π̄^τ H^τ Π̄^τ, we take a normalized eigenvector w of Π̄^τ H^τ Π̄^τ with maximal eigenvalue λ > 0. Thus, w ⊥ v^τ_±(x) for all x ∈ V_{2+o(1)}. We estimate Π̄^τ H^τ Π̄^τ from above (an analogous argument yields an estimate from below) using (2.5), to get (2.6), since max_x α_x ≤ C(log N)/d with very high probability. The estimate (2.7) is a delocalization bound, on the vertex set V_τ, for any eigenvector that is orthogonal to all localization profiles and whose associated eigenvalue is larger than 2τ + o(1). It crucially relies on this orthogonality assumption, without which it is false (see Proposition 3.14 below). The underlying principle behind its proof is the same as that of the Combes-Thomas estimate [25]: the Green function ((λ − Z)^{-1})_{ij} of a local operator Z at a spectral parameter λ separated from the spectrum of Z decays exponentially in the distance between i and j, at a rate inversely proportional to the distance from λ to the spectrum of Z. We in fact use a radial form of the Combes-Thomas estimate, where Z is the tridiagonalization of a local restriction of H^τ around a vertex x ∈ V_τ (see Appendix A.2) and i, j index the radii of concentric spheres. The key observation is that, by the orthogonality assumption on w, the Green function ((λ − Z)^{-1})_{ir}, 0 ≤ i < r, and the eigenvector components in the radial basis, u_i, 0 ≤ i < r, satisfy the same linear difference equation.
Thus we obtain exponential decay for the components u_i, which yields a bound on the mass of w in the ball B^{G^τ}_{2r}(x) for all x ∈ V_τ, from which (2.7) follows since the balls B^{G^τ}_{2r}(x), x ∈ V_τ, are disjoint.
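The Combes-Thomas mechanism invoked above can be illustrated numerically, independently of the paper's setting (a hedged sketch with a generic tridiagonal operator): for a tridiagonal Z and a spectral parameter λ above its spectrum, the Neumann series gives |((λ − Z)^{-1})_{ir}| ≤ (‖Z‖/λ)^{r−i}/(λ − ‖Z‖), i.e. exponential decay in r − i.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
off = rng.uniform(0.5, 1.5, n - 1)      # a generic "radial" tridiagonal operator Z
Z = np.diag(off, 1) + np.diag(off, -1)
normZ = np.linalg.norm(Z, 2)
lam = normZ + 0.5                        # spectral parameter separated from spec(Z)
G = np.linalg.inv(lam * np.eye(n) - Z)

r = n - 1
# Since (Z^k)_{ir} = 0 for k < r - i (Z is tridiagonal), the Neumann series gives
# |G_{ir}| <= (||Z||/lam)^{r-i} / (lam - ||Z||): exponential decay in r - i.
bounds = [(normZ / lam) ** (r - i) / (lam - normZ) for i in range(r + 1)]
print(abs(G[0, r]), bounds[0])
```

The bound depends only on ‖Z‖ and the distance λ − ‖Z‖ to the spectrum, which is exactly the rate asserted above.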

Overview of proof in delocalized phase.
The delocalization result of Theorem 1.8 is an immediate consequence of a local law for the matrix A/√d, which controls the entries of the Green function G in the form of high-probability estimates, for spectral scales Im z down to the optimal scale 1/N, which is the typical eigenvalue spacing. Such a local law was first established for d ≥ (log N)^6 in [35] and extended down to d ≥ C log N in [42]. In both of these works, the diagonal entries of G are close to the Stieltjes transform of the semicircle law. In contrast, in the regime (1.4) the diagonal entry G_xx is close to the Stieltjes transform of the spectral measure at the root of an infinite (D_x, d)-regular tree. Hence, G_xx does not concentrate around a deterministic quantity.
The basic approach of the proof is the same as for any local law: derive an approximate self-consistent equation with very high probability, solve it using a stability analysis, and perform a bootstrapping from large to small values of Im z. For a set T ⊂ [N], denote by A^{(T)} the adjacency matrix of the graph G where the vertices of T (and all incident edges) have been removed, and denote by G^{(T)} the corresponding Green function. In order to understand the emergence of the self-consistent equation, it is instructive to consider the toy situation where, for a given vertex x, all neighbours S_1(x) are in different connected components of A^{(x)}. This is for instance the case if G is a tree. On the global scale, where Im z is large enough, this assumption is in fact valid to a good approximation, since the neighbourhood of x is with high probability a tree. Then a simple application of Schur's complement formula and the resolvent identity yields (2.8). Thus, on the global scale, using that G is bounded, we obtain the self-consistent equation (2.9) with very high probability. It is instructive to solve the self-consistent equation (2.9) on the global scale. To that end, we introduce the notion of typical vertices: roughly speaking, a vertex x is typical if the average of G^{(x)}_{yy} over the neighbours y of x is close to m. (In fact, as explained below, the actual definition for local scales has to be different; see (2.12) below.) A simple argument shows that with very high probability most neighbours of any vertex are typical. With this definition, we can try to solve (2.9) on the global scale as follows. From the boundedness of G we obtain a self-consistent equation for the vector (G_xx)_{x∈T} that reads (2.10). It is not hard to see that the equation (2.10) has a unique solution, whose entries are close to m. Here m is the Stieltjes transform of the semicircle law, which satisfies m = 1/(−z − m).
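The algebraic identity behind (2.8) can be checked directly. The following sketch (with a generic Hermitian matrix standing in for H, an assumption made purely for illustration) verifies the standard Schur complement / resolvent identity G^{(x)}_{yy} = G_{yy} − G_{yx}G_{xy}/G_{xx}:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
M = rng.standard_normal((N, N))
M = (M + M.T) / 2                       # generic Hermitian stand-in for H
z = 0.3 + 0.7j                          # spectral parameter with Im z > 0
G = np.linalg.inv(M - z * np.eye(N))

x, y = 0, 5
keep = [i for i in range(N) if i != x]  # remove vertex x and its incident entries
Gx = np.linalg.inv(M[np.ix_(keep, keep)] - z * np.eye(N - 1))
ky = keep.index(y)

# Schur complement / resolvent identity: G^{(x)}_{yy} = G_{yy} - G_{yx} G_{xy} / G_{xx}
lhs = Gx[ky, ky]
rhs = G[y, y] - G[y, x] * G[x, y] / G[x, x]
print(abs(lhs - rhs))
```

The identity is exact for any Hermitian matrix and any z with Im z > 0; the probabilistic content of (2.8)-(2.9) lies entirely in controlling the resulting terms, not in the identity itself.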
Plugging this solution back into (2.9) and using that most neighbours of any vertex are typical shows that G_xx is well approximated by m_{α_x}. One readily finds (see Appendix A.2 below) that m_{α_x} is the Stieltjes transform of the spectral measure of the infinite (D_x, d)-regular tree at the root.
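Numerically, m is conveniently obtained as the fixed point of m = 1/(−z − m) in the upper half-plane. In the sketch below, the formula m_α(z) = 1/(−z − α m(z)) for the tree Stieltjes transform is our reading of Appendix A.2 and should be treated as an assumption of this illustration:

```python
import numpy as np

z = 0.5 + 1.0j                 # spectral parameter with Im z > 0
m = 0.0j
for _ in range(400):           # contraction: fixed-point iteration of m = 1/(-z - m)
    m = 1.0 / (-z - m)

# closed form: the root of m^2 + z m + 1 = 0 with Im m > 0
r0, r1 = np.roots([1, z, 1])
m_exact = r0 if r0.imag > 0 else r1

alpha = 3.0                        # normalized degree alpha = D/d (illustrative value)
m_alpha = 1.0 / (-z - alpha * m)   # assumed form of the tree Stieltjes transform
print(m, m_exact, m_alpha)
```

For α ≠ 1 the value m_α differs from m, which is the mechanism behind the non-concentration of G_xx noted above.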
The first main difficulty of the proof is to provide a derivation of identities of the form (2.8) (and hence a self-consistent equation of the form (2.9)) on the local scale Im z ≪ 1. We emphasize that the above derivation of (2.8) is completely wrong on the local scale. Unlike on the global scale, on the local scale the behaviour of the Green function is not governed by the local geometry of the graph, and long cycles contribute to G in an essential way. In particular, eigenvector delocalization, which follows from the local law, is a global property of the graph and cannot be established using local arguments; it is in fact wrong outside of the region S_κ, although the above derivation is insensitive to the real part of z.
We address this difficulty by replacing the identities (2.8) with the following argument, which ultimately provides an a posteriori justification of approximate versions of (2.8) with very high probability, provided we are in the region S_κ. We make an a priori assumption that the entries of G are bounded with very high probability; we propagate this assumption from large to small scales using a standard bootstrapping argument and the uniform boundedness of the density of the spectral measure associated with m_α. It is precisely this uniform boundedness requirement that imposes the restriction to S_κ in our local law (as explained in Remark 1.10, this restriction is necessary). The key tool that replaces the simple-minded approximation (2.8) is a series of large deviation estimates for sparse random vectors proved in [42], which, as it turns out, are effective in the full optimal regime (1.4). Thus, under the bootstrapping assumption that the entries of G are bounded, we obtain (2.8) (and hence also (2.9)), with some additional error terms, with very high probability.
The second main difficulty of the proof is that, on the local scale and for sparse graphs, the self-consistent equation (2.10), which can be derived from (2.9) as explained above, is not stable enough to be solved in (G_xx)_{x∈T}. This problem stems from the sparseness of the graphs that we are considering, and does not appear in random matrix theory for denser (or even heavy-tailed) matrices. Indeed, the stability estimates for (2.10) carry a logarithmic factor, which is usually of no concern in random matrix theory but is deadly in the sparse regime of this paper. This is a major obstacle and in fact ultimately dooms the self-consistent equation (2.10). To explain the issue, write the sum in (2.10) in terms of the matrix S, set G_xx = m + ε_x, and denote by ζ_x the error with which (2.10) holds; expanding to first order in ε_x, we obtain, using the definition of m, that ε_x = −m²((I − m²S)^{-1}ζ)_x. Thus, in order to deduce smallness of ε_x from the smallness of ζ_x, we need an estimate on the norm ‖(I − m²S)^{-1}‖_{∞→∞}. In Appendix A.10 below we show that for typical S, Re z ∈ S_κ, and small enough Im z, we have the two-sided bound (2.11), for some universal constant C and some constant C_κ depending on κ. In our context, where ζ_x is small but much larger than the reciprocal of the lower bound of (2.11), such a logarithmic factor is not affordable.
To address this difficulty, we avoid passing by the form (2.10) altogether, as it is doomed by (2.11). The underlying cause for the instability of (2.10) is the inhomogeneous local structure of the matrix S, which is a multiple of the adjacency matrix of a sparse graph. Thus, the solution is to derive a self-consistent equation of the form (2.10) but with an unstructured S, which has constant entries. The basic intuition is to replace the local average (1/|S_1(x)|) Σ_{y∈S_1(x)} G^{(x)}_{yy} in the first identity of (2.8) with the global average (1/N) Σ_y G_{yy}. Of course, in general these two are not close, but we can include their closeness in the definition of a typical vertex. Thus, we define the set T of typical vertices as in (2.12). The main work of the proof is then to prove two facts, (a) and (b) below, with very high probability. With (a) and (b) at hand, we explain how to conclude the proof. Using (a) and the approximate version of (2.8) established above, we deduce a self-consistent equation for typical vertices, which, unlike (2.10), is stable (see Lemma 4.19 below) and can easily be solved; this yields the desired estimates on G with very high probability, and hence concludes the proof. What remains, therefore, is the proof of (a) and (b); see Proposition 4.8 below for a precise statement. Using the bootstrapping assumption of boundedness of the entries of G, it is not hard to estimate the probability P(x ∈ T), which we prove to be 1 − o(1), although {x ∈ T} does not hold with very high probability (this characterizes the critical and subcritical regimes). Now if the events {x ∈ T}, x ∈ [N], were all independent, it would be a simple matter to deduce (a) and (b).
The most troublesome source of dependence among the events {x ∈ T}, x ∈ [N], is the Green function entries G^{(x)}_{yy} appearing in the definition of T. Thus, the main difficulty of the proof is a decoupling argument that allows us to obtain good decay, in the size of T, of the probability P(T ⊂ T) that all vertices of a given set T are typical. This decay can only work up to a threshold in the size of T, beyond which the correlations among the different events kick in. In fact, we essentially prove the bound (2.13); see Lemma 4.12. Choosing the largest possible T, of size |T| = o(d), we find that the first term on the right-hand side of (2.13) is bounded by N^{−ν} provided that o(1) d² ≥ ν log N, which corresponds precisely to the optimal lower bound in (1.18). Using (2.13), we may deduce (a) and (b).
To prove (2.13), we need to decouple the events {x ∈ T}, x ∈ T. We do so by replacing the Green functions G^{(x)} in the definition of T by G^{(T)}, after which the corresponding events are essentially independent. The error that we incur depends on the difference G^{(T)}_{yy} − G_{yy}, which we have to show is small with very high probability under the bootstrapping assumption that the entries of G are bounded. For T of fixed size, this follows easily from standard resolvent identities. However, for our purposes it is crucial that T can have size up to o(d), which requires a more careful quantitative analysis. As it turns out, the resulting error remains affordable for |T| = o(d), which is precisely what we need to reach the optimal scale d ≫ √log N from (1.4).
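The set version of the resolvent identity used here is, again, exact linear algebra: G^{(T)}_{yy} = G_{yy} − G_{y,T}(G|_{T×T})^{-1}G_{T,y}. The following sketch checks it with a generic Hermitian stand-in (not the paper's H):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 30
M = rng.standard_normal((N, N))
M = (M + M.T) / 2                       # generic Hermitian stand-in
z = 0.1 + 0.5j
G = np.linalg.inv(M - z * np.eye(N))

T = [0, 1, 2, 3]                        # removed vertex set
keep = [i for i in range(N) if i not in T]
GT = np.linalg.inv(M[np.ix_(keep, keep)] - z * np.eye(len(keep)))

y = 10
ky = keep.index(y)
# G^{(T)}_{yy} = G_{yy} - G_{y,T} (G_{T,T})^{-1} G_{T,y}
corr = G[np.ix_([y], T)] @ np.linalg.solve(G[np.ix_(T, T)], G[np.ix_(T, [y])])
lhs = GT[ky, ky]
rhs = G[y, y] - corr[0, 0]
print(abs(lhs - rhs))
```

The quantitative work in the paper is to show that the correction term stays small even when |T| grows up to o(d); the identity itself holds for any size of T.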

The Semilocalized Phase
In this section we prove the results of Sect. 1.2: Theorems 1.2 and 1.7.
3.1. The pruned graph and proof of Theorem 1.2. The balls (B_r(x))_{x∈W_{λ,δ}} in Theorem 1.2 are in general not disjoint. For its proof, and in order to give a precise definition of the vector v(x) in Theorem 1.2, we need to make these balls disjoint by pruning the graph G. This is an important ingredient of the proof, and will also allow us to state a more precise version of Theorem 1.2, namely Theorem 3.4 below. This pruning was previously introduced in [10]; it is performed by cutting edges from G, carefully and in the right places so as to cut only few edges, in such a way that the balls of an appropriate radius 2r around the relevant vertices become disjoint. This ensures that the pruned graph is close to the original graph in an appropriate sense. The pruned graph, G^τ, depends on a parameter τ > 1, and its construction is the subject of the following proposition.
To state it, we introduce the following notation. For a subgraph G^τ of G, we denote the corresponding balls, spheres, and degrees with a superscript τ. Moreover, we define the set V_τ := {x : α_x ≥ τ} of vertices with large degrees. (i) Any path in G^τ connecting two different vertices in V_τ has length at least 4r + 1.
In particular, the balls (B^τ_{2r}(x))_{x∈V_τ} are disjoint with very high probability.
(vi) For each x ∈ V_τ and all 2 ≤ i ≤ 2r, the bound (3.2) holds with very high probability.
The proof of Proposition 3.1 is postponed to the end of this section, Sect. 3.5 below. It is essentially [10, Lemma 7.2], the main difference being that (vi) is considerably sharper than its counterpart, [10, Lemma 7.2 (vii)]; this stronger bound is essential to cover the full optimal regime (1.4) (see Sect. 1.5). As a guide for the reader's intuition, we recall the main idea of the pruning. First, for every x ∈ V_τ, we make the 2r-neighbourhood of x a tree by removing appropriate edges incident to x. Second, we take all paths of length less than 4r + 1 connecting different vertices in V_τ, and remove all of their edges incident to any vertex in V_τ. Note that only edges incident to vertices in V_τ are removed. This informal description already explains properties (i)-(iv). Properties (v) and (vi) are probabilistic in nature, and express that with very high probability the pruning has a small impact on the graph. See also Lemma 3.8 below for a statement in terms of operator norms of the adjacency matrices. For the detailed algorithm, we refer to the proof of [10, Lemma 7.2].
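For intuition only, here is a much-simplified pruning sketch in the spirit of the second step described above (the actual algorithm is in [10, Lemma 7.2]; the names `heavy` and `prune` below are illustrative, not the paper's): repeatedly cut, from any path of length at most 4r between two heavy vertices, the edge incident to a heavy endpoint, until no such path remains.

```python
import random
from collections import deque

def ball(adj, src, radius):
    """Vertices within the given graph distance of src (BFS)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] < radius:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
    return set(dist)

def short_path(adj, src, dst, maxlen):
    """A shortest path src -> dst of length <= maxlen, or None (BFS)."""
    parent, depth = {src: None}, {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        if depth[u] < maxlen:
            for w in adj[u]:
                if w not in parent:
                    parent[w], depth[w] = u, depth[u] + 1
                    q.append(w)
    return None

def prune(adj, heavy, r):
    """Cut, along every path of length <= 4r between two heavy vertices,
    the edge incident to a heavy endpoint, until no such path remains."""
    done = False
    while not done:
        done = True
        for i, u in enumerate(heavy):
            for v in heavy[i + 1:]:
                p = short_path(adj, u, v, 4 * r)
                if p is not None:
                    adj[p[0]].discard(p[1])
                    adj[p[1]].discard(p[0])
                    done = False

random.seed(0)
n, p, r = 150, 0.04, 1
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)
heavy = sorted(range(n), key=lambda v: -len(adj[v]))[:3]   # stand-in for V_tau
prune(adj, heavy, r)
```

By construction, after pruning any two heavy vertices are at distance at least 4r + 1, so their radius-2r balls are disjoint, mimicking Proposition 3.1 (i); unlike the real algorithm, this sketch makes no attempt to minimize the number of cut edges or to make neighbourhoods trees.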
Using the pruned graph G τ , we can give a more precise formulation of Theorem 1.2, where the localization profile vector v(x) from Theorem 1.2 is explicit. For its statement, we introduce the set of vertices around which a localization profile can be defined.
The following result restates Theorem 1.2 by identifying v(x) there as v τ + (x) given in (3.5). It easily implies Theorem 1.2, and the rest of this section is devoted to its proof.

Theorem 3.4. The following holds with very high probability. Suppose that d satisfies (1.10).
Remark 3.5. An analogous result holds for negative eigenvalues −λ, where λ is as in Theorem 3.4 and v^τ_+(x) is replaced with v^τ_−(x). For the motivation behind Definition 3.2, we refer to the discussion in Sect. 2.2 and Appendix A.2. As explained there, if G^τ is sufficiently close to the infinite tree T_{D_x,d} in a ball of radius r around x, and if r is large enough for u_r(x) to be very small, we expect (3.5) to be an approximate eigenvector of A. This will in fact turn out to be true; see Proposition 3.9 below. That r is in fact large enough is easy to see: the definition of r in (1.8) and the bound ξ ≥ 1/d imply that the last element of the sequence (u_i(x))_{i=0}^r is bounded by ξ. Note that the lower bound on α_x imposed above always holds for x ∈ V, by (1.10). As a guide to the reader, Fig. 6 summarizes the three main sets of vertices used in the proof of Theorem 3.4: each vertex x is plotted as a dot at its normalized degree α_x; the largest set is V_τ from Proposition 3.1, the intermediate set V is the set of vertices for which we can define the localization profile vector v(x) that decays exponentially around x, and the smallest set is W_{λ,δ}. We conclude this subsection by proving Theorem 1.2 and Corollary 1.6 using Theorem 3.4. To verify the claim about the exponential decay of v, we note that the graph distance in G is bounded by the graph distance in G^τ, from which the claim easily follows using the definition (3.4).
Since u_0(y) was chosen such that v^τ_+(y) is normalized, we find (3.10), where we used (3.7) and the upper bound on δ in the last step. By an elementary computation we obtain (3.11), and the claim hence follows by recalling (3.7) and plugging (3.9) and (3.11) into (3.10).

3.2. Block diagonal approximation of the pruned graph and proof of Theorems 3.4 and 1.7.
We now introduce the adjacency matrix of G^τ and a suitably defined centred version of it. Then we define a block diagonal approximation of this matrix, called Ĥ^τ and defined in (3.16) below, which is the central construction of our proof.
and χ^τ is the orthogonal projection onto Span{1_y : y ∉ ⋃_{x∈V_τ} B^τ_{2r}(x)}. The definition is chosen so that (i) H^τ is close to H provided that A^τ is close to A, since the kernel of χ^τ has a relatively low dimension, and (ii) when restricted to vertices at distance at most 2r from V_τ, the matrix H^τ coincides with A^τ/√d. In fact, property (i) is made precise by a simple estimate that holds with very high probability (see [10, Eq. (8.17)] for details). Property (ii) means that H^τ inherits the locality of the matrix A^τ, meaning that applying H^τ to a vector localized in a small enough neighbourhood of V_τ yields again a vector localized in space. This property will play a crucial role in the proof, and it can be formalized as follows.
Remark 3.7. Let i + j ≤ 2r. Then, for any x ∈ V_τ and any vector v supported in B^τ_i(x), the vector (H^τ)^j v is supported in B^τ_{i+j}(x). The next result, Lemma 3.8, states that H^τ is a small perturbation of H. The following result states that v^τ_σ(x) is an approximate eigenvector of H^τ. Proposition 3.9. Let d satisfy (1.10), let x ∈ [N], and suppose that 1 + ξ^{1/2} ≤ τ ≤ 2.
Then (3.14) holds with very high probability.
The proofs of Lemma 3.8 and Proposition 3.9 are deferred to Sect. 3.3. The following object is the central construction in our proof: the orthogonal projections Π^τ and Π̄^τ defined in (3.15), and the matrix Ĥ^τ defined in (3.16). That Π^τ and Π̄^τ are indeed orthogonal projections follows from Remark 3.3. Note that Ĥ^τ may be interpreted as a block diagonal approximation of H^τ. Indeed, completing the orthonormal family (v^τ_σ(x))_{x∈V,σ=±} to an orthonormal basis of R^N, which we write as the columns of the orthogonal matrix R, we have (3.17). The following estimate, Lemma 3.11, states that Ĥ^τ is a small perturbation of H^τ; its proof is deferred to Sect. 3.3. The following result is the key estimate of our proof; it states that on the range of Π̄^τ the matrix H^τ is bounded by 2τ + o(1). Proposition 3.12. Let d satisfy (1.10).
The proof of Proposition 3.12 is deferred to Sect. 3.4. We now use Lemma 3.11 and Proposition 3.12 to conclude Theorems 3.4 and 1.7.
where we used that the cross terms vanish because of the block diagonal structure of Ĥ^τ. The core of our proof is the spectral gap (3.19). To establish (3.19), it suffices to establish the same spectral gap for each term on the right-hand side of (3.18) separately, since the right-hand side of (3.18) is a block decomposition of its left-hand side. The first term on the right-hand side of (3.18) is explicit, and it trivially has no eigenvalues in [λ − δ, λ + δ].
In order to establish the spectral gap for the second term of (3.18), we begin by remarking that E^τ has rank one and that, by (3.13), its unique nonzero eigenvalue is √d + O(1). Hence, by rank-one interlacing and Proposition 3.12, we find (3.20) for some simple eigenvalue μ = √d + O(1). Thus, to conclude the proof of the spectral gap for the second term of (3.18), it suffices to show (3.21). To prove (3.21), we suppose that λ ≥ 2 + 8Cξ^{1/2} and, recalling the condition on δ and the choice of τ in Theorem 3.4, obtain the desired separation, where in the last step we used that ξτ^{-1} < ξ^{1/2} by our choice of τ and the lower bound on λ. This is (3.21). For the following arguments, we compare A/√d with H^τ + E^τ: the largest eigenvalue of A/√d is √d (1 + o(1)), and all other eigenvalues are at most C√((log N)/d). Since λ is nontrivial, we conclude that λ ≤ C√((log N)/d). By the upper bound δ ≤ (λ − 2)/2 and the lower bound on d in (1.10), this concludes the proof of (3.22) and, thus, that of the spectral gap (3.19).
Proposition 3.12 is also the main tool to prove Theorem 1.7.

3.3. Proof of Lemma 3.8, Proposition 3.9, and Lemma 3.11.
Proof of Lemma 3.8. To begin with, we reduce the problem to the adjacency matrices by using the estimate (3.13). Hence, with very high probability, the difference H − H^τ is controlled by ‖A^{D^τ}‖/√d, where A^{D^τ} is the adjacency matrix of the graph D^τ := G \ G^τ. Hence, since d^{-1/2} ≤ Cξτ^{-1} by d ≤ 3 log N and the definition (1.9), it suffices to show that ‖A^{D^τ}‖ ≤ Cξτ^{-1}√d. We know from Proposition 3.1 (iii) and (v) that with very high probability D^τ consists of (possibly overlapping) stars around vertices x ∈ V_τ of central degree D^{D^τ}_x ≤ Cdξ²τ^{-1}. Moreover, with very high probability, (i) any ball B_{2r}(x) around x ∈ V_τ has at most C cycles; (ii) any ball B_{2r}(x) around x ∈ V_τ contains at most Cdξ²τ^{-1} vertices in V_τ. Let x ∈ V_τ. We claim that we can remove at most C edges of D^τ incident to x so that no cycle passes through x. Indeed, if there were more than C cycles in D^τ passing through x, then at least one such cycle would have to leave B_{2r}(x) (by (i)), which would imply that B_{2r}(x) has at least r vertices in V_τ, which, by (ii), is impossible since r ≥ 2Cdξ²τ^{-1} by τ ≥ 1 + ξ^{1/2}. See Fig. 7 for an illustration of D^τ. Thus, we can remove a graph U^τ from D^τ such that U^τ has maximal degree C and D^τ \ U^τ is a forest of maximal degree Cdξ²τ^{-1} (by (ii)). The claim now follows from Lemma A.4.
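The two norm bounds used here can be illustrated numerically: the adjacency matrix of any forest of maximal degree Δ has norm at most 2√(Δ − 1) (a standard bound, used as a stand-in for Lemma A.4, which is not reproduced in this excerpt), and a star with D leaves has norm exactly √D.

```python
import numpy as np
import random

random.seed(1)
n, Delta = 200, 5

# random tree with maximal degree Delta: each new vertex attaches to a non-full one
A = np.zeros((n, n))
deg = [0] * n
for v in range(1, n):
    u = random.choice([w for w in range(v) if deg[w] < Delta])
    A[u, v] = A[v, u] = 1.0
    deg[u] += 1
    deg[v] += 1
tree_norm = np.linalg.eigvalsh(A)[-1]       # = spectral norm (bipartite, so symmetric spectrum)
tree_bound = 2 * np.sqrt(Delta - 1)         # any such tree embeds in the Delta-regular tree

# star with D leaves: nonzero eigenvalues are exactly +-sqrt(D)
D = 16
S = np.zeros((D + 1, D + 1))
S[0, 1:] = S[1:, 0] = 1.0
star_norm = np.linalg.eigvalsh(S)[-1]
print(tree_norm, tree_bound, star_norm)
```

This is why the decomposition of D^τ into a bounded-degree part, stars, and a forest immediately yields an operator norm bound of order ξτ^{-1}√d.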
Proof of Proposition 3.9. We focus on the case σ = +; trivial modifications yield (3.14) for σ = −. The basic strategy is to decompose (H^τ − Λ(α_x)) v^τ_+(x) into several error terms that are estimated separately. A similar argument was applied in [10, Proposition 5.1] to the original graph G instead of G^τ, which however does not yield sharp enough estimates to reach the optimal scale d ≫ √log N (see Sect. 1.5). We omit x from the notation in this proof. Note that (s^τ_i)_{i=0}^{2r} form an orthonormal system. Defining the vectors w_2, w_3, and w_4, a straightforward computation using the definition of v^τ_+ yields the decomposition (3.34). For a detailed proof of (3.34) in a similar setup, we refer the reader to [10, Lemma 5.2] (note that in the analogous calculation of [10] the left-hand side of (3.34) is multiplied by √d). The terms in (3.34) analogous to w_0 and w_1 in [10] vanish, respectively, because the projection χ^τ is included in (3.12) and because G^τ|_{B^τ_{2r}} is a tree by Proposition 3.1 (ii). The vector w_4 from (3.33) differs from the one in [10] due to the special choice of u_r in (3.4).
We now complete the proof of (3.14) by showing that each term on the right-hand side of (3.34) is bounded in norm by Cξ with very high probability. We start with w_3, by first proving the concentration bound (3.35) with very high probability, for i = 1, …, r. To prove this, we use Proposition 3.1 (iv) and (vi), as well as [10, Lemma 5.4], to obtain (3.36) with very high probability, where we used that α_x ≥ 1, and that the assumption [10, Eq. (5.13)] is satisfied by the definition (1.8). Therefore, invoking [10, Lemma 5.4] in the following expansion yields (3.37) with very high probability. Hence, recalling the lower bound τ ≥ 1 + ξ^{1/2}, we obtain (3.35).
We take the norm in the definition of w_3 and use the orthonormality of (s^τ_i)_{i=0}^r. Consequently, (3.35) and Σ_{i=0}^r u_i² = 1 yield the desired bound on w_3. In order to estimate w_2, we use the definitions and the Pythagorean theorem to obtain the desired bound with very high probability. Here, in the last step, we used (3.35), as well as a bound that holds with very high probability due to [10, Eq. (5.12b)] and Lemma A.7.
Next, we claim that (3.39) holds with very high probability, for i = 2, …, r. The proof of (3.39) is based on a dyadic decomposition analogous to the one used in the proof of [10, Eq. (5.26)]. We distinguish two regimes and estimate (3.40) with very high probability, where the quantity introduced in (3.40) is defined as in the proof of [10, Eq. (5.26)]. (Note that, in the notation of [10], there is a one-to-one mapping between A^{(B_{i−1})} and B_i.) In that proof it is shown that the required bounds hold with very high probability. Using (3.36) and (3.37), and then plugging the resulting bound into (3.40), concludes the proof of (3.39).
Proof of Lemma 3.11. We have to estimate the norm of H^τ − Ĥ^τ, which we decompose as in (3.41). Each x ∈ V satisfies the condition of Proposition 3.9 by (3.8). Hence, for any x ∈ V and σ = ±, Proposition 3.9 yields two estimates with very high probability, where the second follows from the first together with the definition (3.5) of v^τ_σ(x) and Remark 3.7. By Proposition 3.1 (i), the balls B^τ_{2r}(x) and B^τ_{2r}(y) are disjoint for x, y ∈ V_τ with x ≠ y. Hence, in this case, the corresponding profiles v^τ_σ(x) and v^τ_{σ'}(y) are orthogonal. Thus, with very high probability, the squared error terms can be summed over x ∈ V and σ = ± by orthogonality, and we obtain ‖Π̄^τ H^τ Π^τ‖ ≤ Cξ with very high probability. Similarly, the analogous representation yields the desired estimate on the sum of the first two terms on the right-hand side of (3.41).

3.4. Proof of Proposition 3.12.
In this section we prove Proposition 3.12. Its proof relies on two fundamental tools. The first tool is a quadratic form estimate, which estimates H in terms of the diagonal matrix of the vertex degrees. It is an improvement of [10, Proposition 6.1]. To state it, for two Hermitian matrices X and Y we use the notation X ≤ Y to mean that Y − X is a nonnegative matrix, and |X| denotes the absolute value function applied to the matrix X.
The second tool is a delocalization estimate for an eigenvector w of Ĥ^τ associated with an eigenvalue λ > 2. Essentially, it says that w_x is small at any x ∈ V_τ unless w happens to be the specific eigenvector v^τ_±(x) of Ĥ^τ, which is by definition localized around x. Thus, in any ball B^τ_{2r}(x) around x ∈ V_τ, all eigenvectors except v^τ_±(x) are locally delocalized, in the sense that their magnitudes at x are small. Using that the balls (B^τ_{2r}(x))_{x∈V_τ} are disjoint, this implies that eigenvectors of Π̄^τ H^τ Π̄^τ have negligible mass on the set V. Proposition 3.14. Let d satisfy (1.10). If 1 + ξ^{1/2} ≤ τ ≤ 2, then the following holds with very high probability. Let λ be an eigenvalue of Ĥ^τ with λ > 2τ + Cξ, and let w = (w_x)_{x∈[N]} be its corresponding eigenvector.
We may now conclude the proof of Proposition 3.12.
Proof of Proposition 3.12. By Proposition 3.13, Lemma 3.11, and Lemma 3.8, we have (3.42) with very high probability, where we used that (log N)/d² ∨ d^{-1/2} ≤ C(ξ + ξτ^{-1}). Arguing by contradiction, we assume that there exists an eigenvalue λ > 2τ + C′(ξ + ξτ^{-1}) of Π̄^τ H^τ Π̄^τ, for some constant C′ ≥ 2C to be chosen later. By the lower bound in (1.10), we may assume that C′ξ ≤ 1. Thus, by the definition of Ĥ^τ, there is an eigenvector w of Ĥ^τ corresponding to λ, which is orthogonal to v^τ_±(x) for all x ∈ V. From (3.42), we conclude (3.43). It remains to estimate the two sums on the right-hand side of (3.43).
Since w ⊥ v^τ_±(x) for all x ∈ V, we can apply Proposition 3.14 (ii). We find (3.44), a bound exponentially small in 2r log(λ/(2τ + Cξ)), where in the last step we recalled the definition (1.8) and used that τ ≤ 2 and C′ξ ≤ 1.
Using this estimate, combined with Proposition 3.14 (ii), (3.44), and Lemma A.7, yields the desired contradiction, where the third step follows by choosing C′ large enough, depending on C.
Proof of Proposition 3.13. We only establish an upper bound on H; the proof of the same upper bound on −H is identical and, therefore, omitted. The starting point is an estimate, from the nonbacktracking matrix and Ihara-Bass-type formula of [15], valid for all t ≥ 1 + Cd^{-1/2} with very high probability. We now define the matrix entries used in the argument.

We introduce the matrices H(t) = (H_{xy}(t))_{x,y∈[N]} and M(t) = (δ_{xy} m_x(t))_{x,y∈[N]}.
It is easy to check that the matrix in (3.45) is nonnegative. We also have a bound in which we used that |H_{xy}| ≤ d^{-1/2} and Σ_y H²_{xy} ≤ α_x + d/N by the definition of H; we use this to estimate the diagonal entries of the matrix in (3.45) and obtain (3.46). On the other hand, for the diagonal matrix M(t) we have the trivial upper bound (3.47), since α_x ≤ C(log N)/d with very high probability due to Lemma A.7. Finally, combining (3.45), (3.46), and (3.47) yields the claim, and Proposition 3.13 follows by choosing t = 1 + Cd^{-1/2}.
What remains is the proof of Proposition 3.14. The underlying principle behind the proof is the same as that of the Combes-Thomas estimate [25]: the Green function ((λ − Z)^{-1})_{ij} of a local operator Z at a spectral parameter λ separated from the spectrum of Z decays exponentially in the distance between i and j, at a rate inversely proportional to the distance from λ to the spectrum of Z. Here, local means that Z_{ij} vanishes if the distance between i and j is larger than 1. Since a graph is equipped with a natural notion of distance and the adjacency matrix is a local operator, a Combes-Thomas estimate would be applicable directly on the level of the graph, at least for the matrix H^τ. For our purposes, however, we need a radial version of the Combes-Thomas estimate, obtained by first tridiagonalizing (a modification of) H^τ around a vertex x ∈ V_τ (see Appendix A.2). In this formulation, the indices i and j have the interpretation of radii around the vertex x, and the notion of distance is simply that of ℕ on the set of radii. Since Z is tridiagonal, the locality of Z is trivial, even though the matrix H^τ (or its appropriate modification) is not a local operator on the graph G^τ.
To ensure the separation of λ > 2τ + o(1) from the spectrum of Z, we cannot choose Z to be the tridiagonalization of Ĥ^τ, since λ is an eigenvalue of Ĥ^τ. In fact, Z is the tridiagonalization of a new matrix H^{τ,x}, obtained by restricting H^τ to the ball B^τ_{2r}(x) and possibly subtracting a suitably chosen rank-two matrix, which allows us to show ‖H^{τ,x}‖ ≤ 2τ + o(1). By the orthogonality assumption on w, we then find that the Green function ((λ − Z)^{-1})_{ir}, 0 ≤ i < r, and the eigenvector components in the radial basis, u_i, 0 ≤ i < r, satisfy the same linear difference equation. The exponential decay of the Green function entries then yields exponential decay of the components u_i. Going back to the original vertex basis, this implies a bound on the mass of w in the ball B^τ_{2r}(x) for all x ∈ V_τ, from which Proposition 3.14 follows since the balls B^τ_{2r}(x), x ∈ V_τ, are disjoint.
Proof of Proposition 3.14. For a matrix M ∈ R^{N×N} and a set V ⊂ [N], we use the notation (M|_V)_{xy} := 1_{x∈V} 1_{y∈V} M_{xy}.
We begin with part (i). We first treat the case x ∈ V. To that end, we introduce the matrix H^{τ,x} defined in (3.48), which satisfies the norm bound (3.50) with very high probability, and, since w is orthogonal to v^τ_±(x), also the estimate (3.51) with very high probability. The estimate (3.51) is rough, in the sense that the subtraction of the two last terms of (3.48) is not needed for its validity (since Λ(α_x) ≤ √τ Λ((α_x/τ) ∨ 2)). Nevertheless, it is sufficient to establish (3.49) in the following cases, which may be considered degenerate.
If α_x ≤ 2τ, then (3.51) immediately implies (3.49) after renaming the constant C. Hence, to prove (3.49), it suffices to consider the case Λ(α_x) > 2√τ + Cξ. By Proposition 3.1 (i) and (ii), G^τ restricted to B^τ_{2r}(x) \ {x} is a forest of maximal degree at most τd. Lemma A.4 therefore yields ‖H^τ|_{B^τ_{2r}(x)\{x}}‖ ≤ 2√τ. Moreover, the adjacency matrix of the star graph consisting of all edges of G^τ incident to x has precisely two nonzero eigenvalues, ±√(dα_x). By first order perturbation theory, we therefore conclude that H^τ|_{B^τ_{2r}(x)} has at most one eigenvalue strictly larger than 2√τ and at most one strictly smaller than −2√τ. Using (3.50), we conclude that H^{τ,x} has at most one eigenvalue strictly larger than 2√τ + Cξ and at most one strictly smaller than −2√τ − Cξ.
Next, let (g_i)_{i=0}^r be the Gram-Schmidt orthonormalization of the vectors ((H^{τ,x})^i 1_x)_{i=0}^r. To verify (3.52), we note that, by Proposition 3.1 (i), H^{τ,x} coincides with a restriction of H^τ; hence (3.52) follows by the induction assumption, Proposition 3.1 (i), and Remark 3.7. We set u_i := ⟨g_i, w⟩ for 0 ≤ i ≤ r. Because w is an eigenvector of Ĥ^τ that is orthogonal to v^τ_±(x), for any i < r, (3.52) implies the three-term recursion λ u_i = Z_{i,i−1} u_{i−1} + Z_{ii} u_i + Z_{i,i+1} u_{i+1}, with the conventions u_{−1} = 0 and Z_{0,−1} = 0. Let G(λ) := (λ − Z)^{−1} be the resolvent of Z at λ. Note that λ − Z is invertible, since λ > ‖Z‖ by assumption and (3.54). Since (λ − Z)G(λ) = I, the entries (G_{ir}(λ))_{i≤r} and (u_i)_{i≤r} satisfy the same linear recursive equation (cf. (3.55)); solving it recursively from i = 0 to i = r yields (3.56) for all i ≤ r. Moreover, as λ > ‖Z‖ by assumption and (3.54), we have the convergent Neumann series G(λ) = (1/λ) Σ_{k≥0} (Z/λ)^k. Since Z is tridiagonal, we deduce that ((Z/λ)^k)_{0r} = 0 if k < r, so that, by (3.54), the off-diagonal entries of the resolvent satisfy (3.57). On the other hand, for the diagonal entries of the resolvent we get, by splitting the summation over k into even and odd values, the lower bound (3.58), where in the third step we discarded the terms k > 0 to obtain a lower bound, using that I + Z/λ ≥ 0 by (3.54), and in the last step we used (3.54). Hence, the definition of u_i and (3.52) imply the claimed bound; here, we used (3.56) in the third step and (3.57) as well as (3.58) in the last step. This concludes the proof of (i) for x ∈ V.
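The key step, that the resolvent column (G_{ir})_{i<r} and the eigenvector coefficients obey the same three-term recursion, is elementary: rows i < r of (λ − Z) G e_r = e_r have zero right-hand side. A quick numerical check with a generic tridiagonal Z (an illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
off = rng.uniform(0.5, 1.5, n - 1)
diag = rng.uniform(-0.5, 0.5, n)
Z = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

lam = np.linalg.norm(Z, 2) + 1.0            # lambda above the spectrum of Z
G = np.linalg.inv(lam * np.eye(n) - Z)
r = n - 1
u = G[:, r]                                  # the column (G_{ir})_i

# rows i < r of (lam - Z) G e_r = e_r:
#   lam*u_i = Z_{i,i-1} u_{i-1} + Z_{ii} u_i + Z_{i,i+1} u_{i+1}
residuals = [lam * u[i] - ((off[i - 1] * u[i - 1] if i > 0 else 0.0)
                           + diag[i] * u[i] + off[i] * u[i + 1]) for i in range(r)]
print(max(abs(t) for t in residuals))
```

Any other solution of the same recursion with the same boundary data must agree with this column up to normalization, which is what links u_i to G_{ir} in the proof.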

Proof of Proposition 3.1.
We conclude this section with the proof of Proposition 3.1.

Proof of Proposition 3.1. Parts (i)–(v) follow immediately from parts (i)–(iv) and (vi) of [10, Lemma 7.2]. To see this, we remark that the function h from [10] satisfies h((τ − 1)/2) ≥ (τ − 1)²/C for 1 < τ ≤ 2. Moreover, by Lemma A.7 and the upper bound on d, we have max_x D_x ≤ C log N with very high probability. Hence, choosing the universal constant c small enough in (1.8) and recalling the lower bound on τ − 1, in the notation of [10, Equations (5.1) and (7.2)] we obtain for any x ∈ V_τ the inequality 2r ≤ (¼ r_x) ∧ (½ r(τ)) with very high probability. This yields parts (i)–(v). It remains to prove (vi), which is the content of the rest of this proof. From now on we systematically omit the argument x from our notation. Part (v) already implies the bound with very high probability, which is (3.2) for i = 1. From [10, Eq. (7.13)] we find (As a guide to the reader, this estimate follows from the construction of G_τ given in [10, Proof of Lemma 7.2], which ensures that if a vertex z ∈ S_i is not in S^τ_i then any path in G of length i connecting z to x is cut in G_τ at its edge incident to x.) Hence, in order to show (vi) for i ≥ 2, it suffices to prove, with very high probability, for all 2 ≤ i ≤ 2r. We start with the case i = 2. We shall use the relation where, for y ∈ S_1, we introduced N_2(y) := |S_1(y) ∩ S_2|. Note that N_2(y) is the number of vertices in S_2 connected to x via a path of minimal length passing through y. The second and third terms of (3.61) are smaller than the right-hand side of (3.60) for i = 2 due to [10, Eq. (5.23)] and (3.59), respectively. Hence, it remains to estimate the first term on the right-hand side of (3.61) in order to prove (3.60) for i = 2.
To that end, we condition on the ball B_1 and abbreviate P_{B_1}(·) := P(· | B_1). We find that, conditioned on B_1, the random variables (N_2(y))_{y∈S_1} are independent Binom(N − |B_1|, d/N) random variables. We abbreviate the quantity log N/(τ − 1)². For given C, C′, we set C″ := C + 2C′ and estimate In order to estimate the first term on the right-hand side of (3.63), we shall prove that, for all y ∈ S_1 and t ≤ 1/8, To that end, we estimate Using the Poisson approximation, Lemma A.6 below, we obtain (assuming that 2d is an integer to simplify notation) By Stirling's approximation we get The term in the parentheses on the right-hand side is negative for t ≤ 1/8, and hence for large enough d, which gives (3.64). Since the family (N_2(y))_{y∈S_1} is independent conditioned on B_1, we can now use Chebyshev's inequality to obtain, for 0 ≤ t ≤ 1/8, Now we set t = 1/8, recall the bound τ ≤ 2, plug this estimate back into (3.63), and take the expectation. We use Lemma A.7 to estimate |S_1|, which in particular implies that |B_1| ≤ N^{1/4} with very high probability; this concludes the estimate of the expectation of the first term of (3.63), by choosing C large enough. Next, the expectation of the second term is easily estimated by Lemma A.7, since N_2(y) has law Binom(N − |B_1|, d/N) when conditioned on B_1. Finally, the expectation of the last term of (3.63) is estimated by (3.59), by choosing C large enough. This concludes the proof of (3.60) for i = 2.
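The Poisson approximation step can be illustrated numerically: for d ≪ N, the upper tail of Binom(N, d/N) is extremely close to that of Poisson(d). A small sketch with hypothetical values N = 10⁵, d = 12 and threshold 2d (both chosen only for illustration):

```python
import math

N, d = 100_000, 12
p = d / N

def binom_sf(k, n, p):
    # P(Binom(n, p) >= k), summing pmf terms until they are negligible.
    term = math.comb(n, k) * p**k * (1 - p) ** (n - k)
    total, j = 0.0, k
    while term > 1e-18 * (1 + total):
        total += term
        j += 1
        term *= (n - j + 1) / j * p / (1 - p)
    return total

def poisson_sf(k, lam):
    # P(Poisson(lam) >= k), same truncation strategy.
    term = math.exp(-lam) * lam**k / math.factorial(k)
    total, j = 0.0, k
    while term > 1e-18 * (1 + total):
        total += term
        j += 1
        term *= lam / j
    return total

tail_binom = binom_sf(2 * d, N, p)
tail_poisson = poisson_sf(2 * d, d)
```
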
We now prove (3.60) for i + 1 with i 2 by induction. Using [10, Lemma 5.4 (ii)] combined with Lemma A.7, we deduce that with very high probability for all y ∈ S 1 \ S τ 1 and all i r . Therefore, using the induction assumption, i.e. (3.60) for i, we obtain with very high probability, where we used the concavity of √ · in the second step, (3.59) and (3.60) for i in the last step. Since d i log N d i/2+1 d i for i 2 and the sequence (d 1−i/2 ) i∈N is summable, this proves (3.60) for i + 1 with a constant C independent of i. This concludes the proof of Proposition 3.1.

The Delocalized Phase
In this section we prove Theorem 1.8. In fact, we state and prove a more general result, Theorem 4.2 below, which immediately implies Theorem 1.8.

Local law. Theorem 4.2 is a local law for a general class of sparse random matrices of the form
where f ≥ 0 and e := N^{−1/2}(1, 1, . . . , 1)^*. Here H is a Hermitian random matrix satisfying the following definition. It is easy to check that the set of matrices M defined as in (4.1) and Definition 4.1 contains those from Theorem 1.8 (see the proof of Theorem 1.8 below). From now on we suppose that K = 1 to simplify notation.
The local law for the matrix M established in Theorem 4.2 below provides control of the entries of the Green function for z in the spectral domain for some constant L ≥ 1. We also define the Stieltjes transform g of the empirical spectral measure of M given by The limiting behaviour of G and g is governed by the following deterministic quantities. Denote by C_+ := {z ∈ C : Im z > 0} the complex upper half-plane. For z ∈ C_+ we define m(z) as the Stieltjes transform of the semicircle law μ_1. An elementary argument shows that m(z) can be characterized as the unique solution m in C_+ of the equation For α ≥ 0 and z ∈ C_+ we define where in the last step we omitted all terms except i satisfying λ_i(M) = λ. The claim follows by renaming κ → κ/2. (Here we used that Theorem 4.2 holds also for random z ∈ S, as follows from a standard net argument; see e.g. [16, Remark 2.7].)

Remark 4.3 (Relation between α_x and β_x). In the special case M = d^{−1/2}A with A the adjacency matrix of G(N, d/N), we have with very high probability, by Lemma A.7. By definition, m_α(z) ∈ C_+ for z ∈ C_+, i.e. m_α is a Nevanlinna function, and lim_{z→∞} z m_α(z) = −1. By the integral representation theorem for Nevanlinna functions, we conclude that m_α is the Stieltjes transform of a Borel probability measure μ_α on R. Theorem 4.2 implies that the spectral measure of M at a vertex x is approximately μ_{β_x} with very high probability. Inverting the Stieltjes transform (4.11) and using the definitions (4.5) and (4.7), we find after a short calculation The family (μ_α)_{α≥0} contains the semicircle law (α = 1), the Kesten–McKay law of parameter d (α = d/(d − 1)), and the arcsine law (α = 2). For rational α = p/q, the measure μ_{p/q} can be interpreted as the spectral measure at the root of the infinite rooted (p, q)-regular tree, whose root has p children and all other vertices have q children. We refer to Appendix A.2 for more details. See Fig. 8 for an illustration of the measure μ_α.
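These formulas can be checked numerically. The sketch below assumes the standard closed form m(z) = (−z + √(z − 2)√(z + 2))/2 for the semicircle Stieltjes transform and the self-consistent relation m_α(z) = (−z − α m(z))^{−1} suggested by (4.6)–(4.7); both are assumptions of this illustration, not quoted from the text. For α = 2 one should then recover the arcsine density 1/(π√(4 − E²)), e.g. the value 1/(2π) at E = 0:

```python
import numpy as np

def m_semicircle(z):
    # Stieltjes transform of the semicircle law; this branch has Im m > 0 on C_+.
    return (-z + np.sqrt(z - 2) * np.sqrt(z + 2)) / 2

def m_alpha(alpha, z):
    # Assumed self-consistent form of the Stieltjes transform of mu_alpha.
    return 1.0 / (-z - alpha * m_semicircle(z))

z = 0.3 + 0.05j
m = m_semicircle(z)          # satisfies m = 1/(-z - m)

eta = 1e-8                   # small spectral resolution for density recovery
rho1 = m_alpha(1.0, 0.0 + 1j * eta).imag / np.pi  # semicircle density at 0
rho2 = m_alpha(2.0, 0.0 + 1j * eta).imag / np.pi  # arcsine density at 0
```
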
The error is smaller than the left-hand side provided that |I | C N κ−1 .
The remainder of this section is devoted to the proof of Theorem 4.2. For the rest of this section, we assume that M is as in Theorem 4.2. To simplify notation, we consistently omit the z-dependence from our notation in quantities that depend on z ∈ S. Unless mentioned otherwise, from now on all statements are uniform in z ∈ S.
For the proof of Theorem 4.2, it will be convenient to single out the generic constant.

Fig. 8. For α > 2, μ_α has two atoms, which we draw using vertical lines. The measure μ_α is the semicircle law for α = 1, the arcsine law for α = 2, and the Kesten–McKay law with d = α/(α − 1) for 1 < α < 2. Note that the density of μ_α is bounded in S_κ, uniformly in α. The divergence of the density near 0 is caused by values of α close to 0, and the divergence of the density near ±2 by values of α close to 2.

Fig. 9. The dependency graph of the various quantities appearing in the proof of Theorem 4.2. An arrow from x to y means that y is chosen as a function of x. The independent parameters, κ and ν, are highlighted in blue.

Our proof will always assume that C ≡ C_ν and D ≡ D_ν are large enough, and the constant C in (1.18) can be taken to be C ∨ D. For the rest of this section we assume that d satisfies (4.13) for some large enough D, depending on κ and ν. To guide the reader through the proof, in Fig. 9 we include a diagram of the dependencies of the various quantities appearing throughout this section.

Typical vertices.
We start by introducing the key tool in the proof of Theorem 4.2, a decomposition of vertices into typical vertices and the complementary atypical vertices.
Heuristically, a typical vertex x has close to d neighbours and the spectral measure of M at x is well approximated by the semicircle law. In fact, in order to be applicable to the proof of Proposition 4.18 below, the notion of a typical vertex is somewhat more complicated, and when counting the number of neighbours of a vertex x we also need to weight the neighbours with diagonal entries of a Green function, so that the notion of typical vertex also depends on the spectral parameter z, which in this subsection we allow to be any complex number z with Im z N −1+κ . This notion is defined precisely using the parameters x and x from (4.18) below. The main result of this subsection is Proposition 4.8 below, which states, in the language of graphs when M = d −1/2 A with A the adjacency matrix of G (N , d/N ), that most vertices are typical and most neighbours of any vertex are typical. To state it, we introduce some notation.
Note that this notion depends on the spectral parameter z, i.e. T a ≡ T a (z). The constant a will depend only on ν and κ. It will be fixed in (4.23) below. The constant D a 3/2 from (4.13) is always chosen large enough so that ϕ a 1.
The following proposition holds on the event {θ = 1}, where we introduce the indicator function depending on some deterministic constant Λ ≥ 1. In (4.40) below, we shall choose a constant Λ ≡ Λ_κ, depending only on κ, such that the condition θ = 1 can be justified by a bootstrapping argument along the proof of Theorem 4.2 in Sect. 4.3 below.
Throughout the sequel we use the following generalization of Definition 2.1.

Definition 4.7. An event Ξ holds with very high probability on an event Ω if for all ν > 0 there exists C > 0 such that P(Ξ ∩ Ω) ≥ P(Ω) − CN^{−ν} for all N ∈ N.
We now state the main result of this subsection.
Note that, for any deterministic x ∈ [N], the same estimates hold for T^{(x)}_{a/2}. Before proving Lemmas 4.10 and 4.11, we use them to establish Proposition 4.8.
Proof of Proposition 4.8. For (i), we choose X = [N ] in Lemma 4.10 (i), using that T a/2 ⊂ T a .
We now turn to the proof of (ii). By Lemma 4.11, on the event {θ = 1} we have T_a^c ⊂ (T^{(x)}_{a/2})^c with very high probability, and hence with very high probability. Since |H_{xy}|² ≤ 1/d almost surely, we obtain the decomposition where we defined Since ∑_y^{(x)} |H_{xy}|² d ≤ Cd with very high probability by Definition 4.1 and Bennett's inequality, we conclude that with very high probability. We shall apply Lemma 4.10 to the sets X = X_k and (T^{(x)}_{a/2})^c. To that end, note that X_k ⊂ [N] \ {x} is a measurable function of the family (H_{xy})_{y∈[N]}, and hence independent of H^{(x)}. Thus, we may apply Lemma 4.10.
We define K . . = max k 0 . . Cd k+3 e 2qϕ 2 a d and decompose the sum on the right-hand side of (4.20) into 2ϕ a + Cd 2 e −qϕ 2 a d log N with very high probability. Here, we used Lemma 4.10 (ii) to estimate the summands if k K and Lemma 4.10 (i) and (4.21) for the other summands. Since log N d 2 , this concludes the proof of (ii).
The rest of this subsection is devoted to the proofs of Lemmas 4.10 and 4.11. Let θ be defined as in (4.19) for some constant 1. For any subset T ⊂ [N ], we define the indicator function Lemma 4.10 is a direct consequence of the following two lemmas.
The first one, Lemma 4.12, is mainly a decoupling argument for the random variables indexed by x ∈ [N]. Indeed, the probability that any fixed vertex x is atypical is only small, o(1), and not very small, N^{−ν}; see (4.31) below. If the events of different vertices being atypical were independent, we could deduce that the probability that a sufficiently large set of vertices are atypical is very small. However, these events are not independent. The most serious breach of independence arises from the Green function G^{(x)}_{yy} in the definition of the typicality parameters. In order to make this argument work, we have to replace these parameters with their decoupled versions from Definition 4.9. To that end, we have to estimate the error incurred by this replacement. Unfortunately, the error bound is proportional to β_x (see (4.32)), which is not affordable for vertices of large degree. The solution to this issue involves the observation that if β_x is too large then the vertex is atypical by the first condition in the definition of typical vertices, which allows us to disregard the size of the remaining parameter. The details are given in the proof of Lemma 4.12 below.
The second one, Lemma 4.13, gives a priori bounds on the entries of the Green function G^{(T)}, showing that if the entries of G are bounded then so are those of G^{(T)} for |T| = o(d). For T of fixed size, this fact is a standard application of the resolvent identities from Lemma A.24. For our purposes, it is crucial that T can have size up to o(d), and such a quantitative estimate requires slightly more care.

Lemma 4.12.
There is a constant 0 < q ≤ 1, depending only on Λ, such that, for any ν > 0, there is C > 0 such that the following holds for any fixed a > 0. If x ∉ T ⊂ [N] are deterministic with |T| ≤ ϕ_a d/C then For the proof of (ii), we choose k = ϕ_a d/C and estimate where in the second step we used (4.22a). Thus, by our choice of a, we have P_θ(|X ∩ T^c_{a/2}| ≥ k) ≤ (C + 1)N^{−ν/2}, from which (ii) follows after renaming ν and C.
To prove (i) we estimate, for t > 0 and l ∈ N, Choosing l = ϕ a d/C, regrouping the summation according to the partition of coincidences, and using Lemma 4.12 yield Here, P l denotes the set of partitions of [l], and we denote by k = |π | the number of blocks in the partition π ∈ P l . We also used that the number of partitions of l elements consisting of k blocks is bounded by l k l l−k . The last step follows from the binomial theorem. Therefore, using l = ϕ a d/C and choosing t = e qϕ 2 a d + |X |e −2qϕ 2 a d as well as C and ν sufficiently large imply the bound in Lemma 4.10 (i) with very high probability, after renaming C and ν. Here we used (4.13).
To obtain the same statements for T (x) a/2 instead of T a/2 , we estimate For both parts, (i) and (ii), the conditional probability P |X ∩ (T (x) a/2 ) c | t, θ (x) = 1 X can be bounded as before using (4.22b) instead of (4.22a) since, by assumption on X , the set T  with very high probability.
Before proving Lemma 4.14, we use it to conclude the proof of Lemma 4.13.
Proof of Lemma 4.13. The bound in (4.24) of Lemma 4.14 implies that θ = θθ^{(T)} with very high probability. Since θ ≤ 1, the proof is complete.
Proof of Lemma 4.14. Throughout the proof we work on the event {θ = 1} exclusively. After a relabelling of the vertices [N], we can suppose that T = [k] with k ≤ cd/Λ². For k ∈ [N], we set Note that Γ_0 ≤ Λ by the definition of θ. We now show by induction on k that there is C > 0 such that Γ_k ≤ Γ_0 (1 + C/√d)^k. The initial step with k = 0 is trivially correct. For the induction step k → k + 1, we set T = [k] and u = k + 1. The algebraic starting point for the induction step is the identities (A.32a) and (A.32b). We shall need the following two estimates. First, from Lemma A.23 and Cauchy–Schwarz, we get where we used that Γ_{k+1} ≥ 1, f ≤ N^{κ/6}, and Im z ≥ N^{−1+κ}. Second, the first estimate of (A.28) in Corollary A.21 with ψ = Γ_{k+1}/√d and γ = √(Γ_{k+1}/(N Im z)), Lemma A.23, and Γ_{k+1} ≥ 1 imply with very high probability. Hence, owing to (A.32a) and (A.32b) with T = [k] and u = k + 1, we get, respectively, with very high probability. By the induction assumption (4.26) we have CΓ_k/√d ≤ 2CΛ/√d ≤ 1/2, so that the first inequality in (4.29) implies the rough a priori bound with very high probability. From the second inequality in (4.29) and (4.30), we deduce that where in the second step we used Γ_k ≤ 2Λ, by the induction assumption (4.26). This concludes the proof of (4.26), and, hence, of (4.24).
The next result provides concentration estimates for the parameters x and x .

Lemma 4.15.
There is a constant 0 < q ≤ 1, depending only on Λ, such that the following holds. Let c > 0 be as in Lemma 4.14, and let x ∈ [N] and T ⊂ [N] be deterministic and satisfy |T| ≤ cd/Λ². Then for any 0 < ε ≤ 1 we have (4.31) and, for any u ∉ T, with very high probability.
Before proving Lemma 4.15, we use it to conclude the proof of Lemma 4.11.
Proof of Lemma 4.11. Using (A.27b), we find that β_x ≤ C(1 + (log N)/d) with very high probability. The claim now follows from (4.32) with T = ∅ and the definition of ϕ_a, choosing the constant D in (4.13) large enough.
Proof of Lemma 4.15. Set q := 2^{−11}(eΛ)^{−2}. We get, using (A.27b) with r := 32qε²d ≤ d, E|H_{xy}|² = 1/N, and Chebyshev's inequality, with very high probability for any 0 < ε ≤ 1. This proves the first estimate in (4.31), and the second is proved similarly. We now turn to the proof of (4.32). If x = u then the statement is trivial. Thus, we assume x ≠ u. In this case we have and the claim follows by Definition 4.1. Next,

The last term multiplied by θ^{(T)} is estimated by O(Λ/d), since θ^{(T)}|G^{(Tx)}_{uu}| ≤ 4 by (4.30). We estimate the first term using (4.25) in Lemma 4.14, which yields with very high probability. This concludes the proof of Lemma 4.15.
Proof of Lemma 4.12. Throughout the proof we abbreviate P_θ(·) := P(· ∩ {θ = 1}). We have where we defined the event We have the inclusions Defining the event we therefore deduce by a union bound that We begin by estimating the first term of (4.34). To that end, we observe that, conditioned on H^{(T)}, the random variables indexed by x ∈ T are independent. Using Lemma 4.13 we therefore get and we estimate each factor using (4.31) from Lemma 4.15 as where in the last step we used that e^{−4qϕ_a²d} ≤ 1/2. We conclude that Next, we estimate the second term of (4.34). After renaming the vertices, we may assume that T = [k] with k ≤ ϕ_a d/C, so that we get from (4.32) from Lemma 4.15 (using that ϕ_a d/C ≤ cd/Λ² provided that D in (4.13) is chosen large enough, depending on a), by telescoping and recalling Lemma 4.13, with very high probability, for large enough C in the upper bound on k. We conclude that the last two terms of (4.34) are bounded by CN^{−ν}, and the proof of (4.22a) is therefore complete.
The proof of (4.22b) is identical, replacing the matrix M with the matrix M (x) .

Self-consistent equation and proof of Theorem 4.2.
In this subsection, we derive an approximate self-consistent equation for the Green function G, and use it to prove Theorem 4.2. The key ingredient is Proposition 4.18 below, which provides a bootstrapping bound stating that if G x x − m β x is smaller than some constant then it is in fact bounded by ϕ a with very high probability. It is proved by first deriving and solving a self-consistent equation for the entries G x x indexed by typical vertices x ∈ T a , and using the obtained bounds to analyse G x x for atypical vertices x ∈ T c a . We begin with a simple algebraic observation.
where we introduced the error term Proof. The lemma follows directly from (A.31) and the definition (4.1).
Let θ be defined as in (4.19) with some Λ ≥ 1. The following lemma provides a priori bounds on the error terms appearing in the self-consistent equation.
for some constant C_κ depending only on κ. Next, we use the first estimate of (A.28), Lemma A.23, and the upper bound on f to conclude that with very high probability (compare the proof of (4.28)). Moreover, from Lemma A.23 and the second estimate of (A.28) we deduce that the remaining term in (4.37) is This concludes the proof of (4.38a).
For the proof of (4.38b), we start from (A.29) and use M_{xa} = H_{xa} + f/N to obtain Similar arguments as in (4.28) and (4.27) show that the first and third terms, respectively, are bounded by Cd^{−1/2} with very high probability. The same bound for the second term follows from Definition 4.1 and (4.24) in Lemma 4.14. This proves (4.38b). Finally, (4.38c) follows directly from (4.25). For the proof of Proposition 4.18, we employ the results of the previous subsections to show that the diagonal entries (G_{xx})_{x∈T_a} of the Green function of M at the typical vertices satisfy the approximate self-consistent equation (4.42) below. This is a perturbed version of the relation (4.6) for the Stieltjes transform m of the semicircle law, which holds for all z ∈ C_+. The stability estimate, (4.43) below, then implies that G_{xx} and m are close for all x ∈ T_a. From this we shall, in a second step, deduce that G_{xx} is close to m_{β_x} for all x; this step also includes the atypical vertices.
The next lemma is a relatively standard stability estimate for self-consistent equations in random matrix theory (compare e.g. [27, Lemma 3.5]). It is proved in Appendix A.9. Lemma 4.19 (Stability of the self-consistent equation for m). Let X be a finite set, κ > 0, and z ∈ C_+ satisfy |Re z| ≤ 2 − κ. We assume that, for two vectors (g_x)_{x∈X}, (ε_x)_{x∈X} ∈ C^X, the identities hold for all x ∈ X. Then there are constants b, C ∈ (0, ∞), depending only on κ, such that (4.43) holds, where m(z) satisfies (4.6). For the analysis of G_{xx} we distinguish the two cases x ∈ T_a and x ∉ T_a. If x ∈ T_a then we write, using Lemma 4.16 and the definitions in (4.18), that where the error term ε_x satisfies with very high probability. Here, in the first step of (4.44) we used (4.38a) and (4.38c). Thus, for (G_{xx})_{x∈T_a} we get the self-consistent equation in (4.42) with g_x = G_{xx} and X = T_a. Moreover, by the bounds in the definition (4.17) of T_a, we have β_x = 1 + O(ϕ_a). Hence, by (A.5), the assumption φ = 1 and d ≥ C√(log N), we find, choosing the constant D in (4.13) large enough, that the right-hand side of (A.5), i.e. C|β_x − 1|, is bounded by b/2. Hence Lemma 4.19 is applicable and we obtain |G_{xx} − m| = O(max_{y∈T_a} |ε_y|). Therefore, we obtain with very high probability. This concludes the proof in the case x ∈ T_a. What remains is the case x ∉ T_a. In that case, we obtain from Lemma 4.16 that where the error term ε_x satisfies ε_x = O((1 + β_x)ϕ_a) with very high probability. Here we used (4.38a) as well as (4.38c), (4.45), (4.46) and Proposition 4.8 (ii) twice to conclude that with very high probability. From (4.7) and (4.47) we therefore get To estimate the right-hand side of (4.48), we consider the cases β_x ≤ 1 and β_x > 1 separately.
If β_x ≤ 1 then, by (A.4), the first factor of (4.48) is bounded by C. Thus, by (4.7), the second factor is bounded by 2C provided that |ε_x| ≤ 1/(2C), which is ensured by choosing D in (4.13) large enough, and the third factor is bounded by Cϕ_a. This yields the claim.
If β_x > 1, we use that Im m ≥ c for some constant c > 0 depending only on κ and L. Thus, the right-hand side of (4.48) is bounded in absolute value, again using (A.4), by Cβ_x ϕ_a, provided that D in (4.13) is chosen large enough. This yields the claim.
Proof of Theorem 4.2. After possibly increasing L, we can assume that L in the definition of S in (4.3) satisfies L 2/λ + 1, where λ is chosen as in Proposition 4.18.
We first show that (4.10) follows from (4.9). Indeed, averaging the estimate on |G_{xx} − m_{β_x}| in (4.9) over x ∈ [N], using that m_{β_x} = m + O(ϕ_a) for x ∈ T_a by (A.5), and estimating the summands in T_a^c by Proposition 4.8 (i) and (A.4), yields (4.10) due to (4.45).
What remains is the proof of (4.9). Let z_0 ∈ S, set J := min{j ∈ N_0 : Im z_0 + jN^{−3} ≥ 2/λ}, and define z_j := z_0 + ijN^{−3} for j ∈ [J]. We shall prove the bound in (4.9) at z = z_j by induction on j, starting from j = J and going down to j = 0. Since |G_{xy}(z)| ≤ (Im z)^{−1} and |m_{β_x}(z)| ≤ (Im z)^{−1} for all x, y ∈ [N], we have For the induction step j → j − 1, suppose that φ(z_j) = 1 with very high probability. Then, by Proposition 4.18, we deduce that the estimate ≤ Cϕ_a holds at z_j with very high probability. Since G_{xy} and m_{β_x} are Lipschitz-continuous on S with constant N², we conclude that the bound Cϕ_a + N^{−1} holds at z_{j−1} with very high probability. If N is sufficiently large and ϕ_a is sufficiently small, obtained by choosing D in (4.13) large enough, then we deduce the bound λ at z_{j−1} with very high probability, and hence φ(z_{j−1}) = 1 with very high probability. Using Proposition 4.18, this concludes the induction step, and hence establishes the bound Cϕ_a at z_0 with very high probability. Here we used that the intersection of J events of very high probability is an event of very high probability, since J ≤ CN³, where C depends on κ.
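The Lipschitz continuity of G in z used in this induction is the elementary resolvent bound |G_{xy}(z) − G_{xy}(z′)| ≤ |z − z′|/(Im z · Im z′), which follows from the resolvent identity G(z) − G(z′) = (z′ − z)G(z)G(z′). A toy verification on a random Wigner-type matrix (all sizes and spectral parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
A = rng.standard_normal((n, n))
A = (A + A.T) / np.sqrt(2 * n)   # Wigner-type normalization

def G(z):
    # Resolvent (z - A)^{-1}.
    return np.linalg.inv(z * np.eye(n) - A)

z1, z2 = 0.5 + 0.01j, 0.5 + 0.011j
# Entrywise difference is bounded by |z1 - z2| / (Im z1 * Im z2).
diff = np.abs(G(z1) - G(z2)).max()
bound = abs(z1 - z2) / (z1.imag * z2.imag)
```
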
Funding Open Access funding provided by Université de Genève.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A. Appendices
In the following appendices we collect various tools and explanations used throughout the paper.
A.1. Simulation of the ℓ∞-norms of eigenvectors. In Fig. 10 we depict a simulation of the ℓ∞-norms of the eigenvectors of the adjacency matrix A/√d of the Erdős–Rényi graph G(N, d/N) restricted to its giant component. We take d = b log N with N = 10 000 and b = 0.6. The eigenvalues and eigenvectors are drawn using a scatter plot, where the horizontal coordinate is the eigenvalue and the vertical coordinate the ℓ∞-norm of the associated eigenvector. The higher a dot is located, the more localized the associated eigenvector is. Complete delocalization corresponds to a vertical coordinate ≈ 0.01, and localization at a single site to a vertical coordinate ≈ 1. Note the semilocalization near the origin and outside of [−2, 2]. The two semilocalized blips around ±0.4 are a finite-N effect and tend to 0 as N is increased. The Perron–Frobenius eigenvalue is an outlier near 2.8 with delocalized eigenvector.

In this appendix we describe the spectrum, eigenvectors, and spectral measure of the following simple graph.
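A reduced-size version of this simulation can be reproduced in a few lines (here with N = 400 instead of 10 000, and without restricting to the giant component, so the resulting picture is noisier than Fig. 10):

```python
import numpy as np

rng = np.random.default_rng(1)
N, b = 400, 0.6
d = b * np.log(N)

# Adjacency matrix of G(N, d/N): independent edges above the diagonal.
A = np.triu((rng.random((N, N)) < d / N).astype(float), 1)
A = A + A.T

vals, vecs = np.linalg.eigh(A / np.sqrt(d))
# l_infinity-norm of each l2-normalized eigenvector; a scatter plot of
# (vals, sup_norms) gives the qualitative shape of Fig. 10.
sup_norms = np.abs(vecs).max(axis=0)
```
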
Definition A.1. For p, q ∈ N * we define T p,q as the infinite rooted ( p, q)-regular tree, whose root has p children and all other vertices have q children.
A convenient way to analyse the adjacency matrix of T_{p,q} is by tridiagonalizing it around its root. To that end, we first review the tridiagonalization 5 of a general symmetric matrix X ∈ R^{N×N} around a vertex x ∈ [N]; we refer to [10, Appendices A–C] for details. Let r ∈ N and x ∈ [N]. Suppose that the vectors 1_x, X1_x, X²1_x, . . . , X^r 1_x are linearly independent, and denote by g_0, g_1, g_2, . . . , g_r the associated orthonormalized sequence. Then the tridiagonalization of X around x up to radius r is the (r + 1) × (r + 1) matrix Z = (Z_{ij})_{i,j=0}^r with Z_{ij} := ⟨g_i, Xg_j⟩. By construction, Z is tridiagonal and conjugate to X restricted to the subspace Span{g_0, g_1, . . . , g_r}.
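This construction can be carried out numerically. The sketch below uses a QR factorization of the Krylov matrix in place of explicit Gram–Schmidt (the two agree up to column signs, which leave tridiagonality intact); the test matrix is an arbitrary random symmetric matrix:

```python
import numpy as np

def tridiagonalize(X, x, r):
    """Tridiagonalization of the symmetric matrix X around vertex x up to
    radius r: orthonormalize 1_x, X 1_x, ..., X^r 1_x and return the
    (r+1) x (r+1) matrix Z with Z_ij = <g_i, X g_j>."""
    N = X.shape[0]
    K = np.empty((N, r + 1))
    v = np.zeros(N)
    v[x] = 1.0
    for i in range(r + 1):
        K[:, i] = v      # Krylov vector X^i 1_x
        v = X @ v
    G, _ = np.linalg.qr(K)   # columns orthonormalize the Krylov vectors
    return G.T @ X @ G

rng = np.random.default_rng(4)
X = rng.standard_normal((30, 30))
X = (X + X.T) / np.sqrt(2 * 30)   # symmetric, Wigner-normalized
Z = tridiagonalize(X, x=3, r=6)
```
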
Let now X = A ≡ A_{T_{p,q}} be the adjacency matrix of T_{p,q}, whose root we denote by o. Then it is easy to see that g_i = 1_{S_i(o)}/‖1_{S_i(o)}‖, and the tridiagonalization of A around the root up to radius ∞ is the infinite matrix If α > 2, a transfer matrix analysis (see [10, Appendix C]) shows that Z(α) has precisely two eigenvalues in R \ [−2, 2], which are ±Λ(α). The associated eigenvectors are ((±1)^i u_i)_{i∈N}, where u_0 > 0 and u_i := √α (α − 1)^{−i/2} u_0 for i ≥ 1. Note that the eigenvector components are exponentially decaying since α > 2, and hence u_0 can be chosen so that the eigenvectors are normalized. Going back to the original vertex basis of T_{p,q}, setting α = p/q, we conclude that the adjacency matrix A has eigenvalues ±√q Λ(α) with associated eigenvectors ∑_{i∈N} (±1)^i u_i 1_{S_i(o)}/‖1_{S_i(o)}‖. Next, we show that the measure μ_α from (4.12) is the spectral measure at the root of A_{T_{p,q}}/√d and the spectral measure at 0 of (A.1). The proof of (ii) is analogous. Denote the root of T_{p,q} by o. Again using Schur's complement formula to remove the o-th row and column of H = A_{T_{p,q}}/√q, we deduce that
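For α > 2 the two outlying eigenvalues of Z(α) can be checked on a large finite truncation. The sketch below assumes the closed form Λ(α) = α/√(α − 1) and the structure of Z(α) as the tridiagonal matrix with zero diagonal and off-diagonal entries (√α, 1, 1, …); the truncation size 500 and the value α = 3 are arbitrary:

```python
import numpy as np

alpha, n = 3.0, 500
off = np.ones(n - 1)
off[0] = np.sqrt(alpha)          # first off-diagonal entry of Z(alpha)
Z = np.diag(off, 1) + np.diag(off, -1)

# The bound state decays like (alpha-1)^{-i/2}, so the truncation error
# in the top eigenvalue is exponentially small in n.
top = np.linalg.eigvalsh(Z).max()
Lambda = alpha / np.sqrt(alpha - 1)   # assumed closed form, > 2 for alpha > 2
```
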

Lemma
where we used that T p,q from which o has been removed consists of p disconnected copies of T q,q . Setting p = q in (A.3) and comparing to (4.6) implies that the left-hand side of (A.3) is equal to m(z) if p = q, and hence (ii) for general p follows from (4.7). Finally, we remark that the equality of the spectral measures of Z ( p/q) and A T p,q / √ q can also be seen directly, by noting that Z ( p/q) is the tridiagonalization of A T p,q / √ q around the root o.
We conclude with some basic estimates for the Stieltjes transform m α of μ α used in Sect. 4.
Moreover, the operator (1 − P 0 )A(1 − P 0 ) is the adjacency matrix of a forest whose vertices have degree at most q. By Lemma A.4, we therefore obtain By (A.6) and (A.7), the proof is complete.
There is a universal constant C > 0 such that for 1 ≤ l

The following result is a slight generalization of [10, Proposition D.1], which can be established with the same proof. We note that the qualitative notion of high probability can be made stronger and quantitative with some extra effort, which we however refrain from doing here.
Lemma A.9. If d 1 and l 1 satisfies β l (d) 3/2 then with high probability, where ζ is any sequence tending to infinity with N .
The following result 6 gives bounds on the counting function of the normalized degrees.

Lemma A.10. Suppose that ζ satisfies (A.12) for some large enough universal constant C. Then for any α ≥ 2 we have, with high probability,

Proof. If d > 3 log N, then an elementary analysis using Bennett's inequality shows that |{x ∈ [N] : α_x ≥ α}| = 0 with high probability. Since Ne^{−f_d(α)} ≤ 1 for α ≥ 2, the claim follows. Thus, for the following we assume that d ≤ 3 log N. Abbreviate ϒ := 3 2 ζ d, which is an upper bound for the right-hand side of (A.11). For the following we adopt the convention that β_0(d) = ∞. Choose l ≥ 0 such that (A.14) holds, and define We shall show that, for k ≥ 1, (A.15), (A.16), and k ≤ Ne^{−f_d(3/2)}. (A.17)

(Footnote 6: The assumption d ≥ log log N in Lemma A.10 is tailored so that it covers the entire range α ≥ 2, which is what we need in this paper. The assumption on d could also be removed at the expense of introducing a nontrivial lower bound on α.)
Thus β_k(d) ≥ 3/2 and, assuming k ≥ 1, Lemma A.9 is applicable to the indices k and k. We obtain, with high probability, from which we deduce that which also holds trivially for the case k = 0. By applying the function f_d to (A.14) we obtain l ≤ Ne^{−f_d(α)} ≤ l + 1, so that (A.19) yields (A.13). Next, we verify (A.17). We consider the cases l = 0 and l ≥ 1 separately. If l = 0 then, by the definition of β_k(d), for (A.17) we require (log N)^{2ζ} + 1 ≤ Ne^{−f_d(3/2)}, which holds by the assumption d ≤ 3 log N and the upper bound on ζ. Let us therefore suppose that l ≥ 1. By (A.14), α ≥ 2, and the definition of β_l(d), we have l ≤ Ne^{−f_d(2)}, and we have to ensure that (l + 2)(log N)^{2ζ} ≤ Ne^{−f_d(3/2)}. Since l ≥ 1, this is satisfied provided that 3e^{−f_d(2)} 2α. What remains, therefore, is the proof of (A.15) and (A.16). We begin with the proof of (A.15). We get from the mean value theorem that (A.20) The right-hand side of (A.20) is bounded from below by ϒ provided that log(l/k) ≤ 2ζ log β_k(d).
Together with β_{l+1}(d) ≤ β_1(d) ≤ log N from (A.22), we deduce that the right-hand side of (A.23) is bounded from below by ϒ provided that log(k/(l + 1)) ≤ 2ζ log log N, which is true by definition of k. This concludes the proof of (A.16).
The following result follows easily from Lemma A.10. Recall the definition (1.13) of the exponent θ b (α). Corollary A.11. Suppose that ζ satisfies (A.12). Write d = b log N . Then for any α 2 we have with high probability.
Using the exponent θ_b(α) from (1.13) and α_max(b) defined below it, we may state the following estimate on the density of the normalized degrees and the number of resonant vertices.
Lemma A.12. The following holds for a large enough universal constant C. Suppose that ζ satisfies (A.12). Write d = b log N .
(ii) For δ ⩾ C ζ (log log N)/d and 2 + δ ⩽ λ ⩽ Λ(α_max(b)), with high probability we have the stated estimate.

Note that, since ξ ⩾ d^{−1/2}, if the conclusion of Theorem 1.2 is nontrivial then δ ⩾ d^{−1/2}, and hence the assumption on δ in Lemma A.12 (ii) is automatically satisfied for suitably chosen ζ.
Proof of Lemma A.12. Part (i) follows from Corollary A.11 by noting that the assumption on β implies θ_b(α) − θ_b(β) ⩽ C ζ (log log N)/(log N) by the mean value theorem. Part (ii) follows from Part (i), using that log(λ − δ) ⩾ log 2, that Λ′ is bounded on [2, ∞), and the mean value theorem.
Corollary A.13. The following holds for large enough universal constants C, C′. Suppose that (1.10) holds. Write d = b log N. Let w = (w_x)_{x∈[N]} be a normalized eigenvector of A/√d with nontrivial eigenvalue λ satisfying 2 + C ξ^{1/2} ⩽ λ ⩽ Λ(α_max(b)). Then with high probability, for any 2 ⩽ p ⩽ ∞, we have the following estimate.

Proof. We choose δ := C(ξ + ξ_{λ−2}). Then by the assumption on λ we have δ ⩽ (λ − 2)/2, and hence Theorem 1.2 yields, using that v(x) is supported in B_r(x), that ∑_{x∈W_{λ,δ}} ∑_{y∈B_r(x)} w_y^2 ⩾ 1/2 with high probability. Using that for any vector x ∈ R^n we have ‖x‖_p^2 ⩾ n^{2/p−1} ‖x‖_2^2 (by Hölder's inequality), with the choice n = ∑_{x∈W_{λ,δ}} |B_r(x)|, we get, with high probability, a bound in terms of n and log N. Next, using the mean value theorem and elementary estimates on the derivatives of θ_b and Λ^{−1}, we estimate n. Invoking Lemma A.12 (ii) with ζ := log log N, and recalling (A.25), therefore yields the claim.
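The finite-dimensional norm comparison used in the proof, ‖x‖_p^2 ⩾ n^{2/p−1} ‖x‖_2^2 for p ⩾ 2 and x ∈ R^n (with the convention 2/p = 0 at p = ∞), is easy to check numerically; a minimal sketch on random Gaussian vectors:

```python
import math
import random

def lp_norm(x, p):
    # ell^p norm on R^n, with p = inf giving the sup norm
    if math.isinf(p):
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

random.seed(0)
n = 50
for p in (2.0, 3.0, 4.0, 8.0, math.inf):
    exponent = (0.0 if math.isinf(p) else 2.0 / p) - 1.0
    for _ in range(200):
        x = [random.gauss(0.0, 1.0) for _ in range(n)]
        # Hoelder: ||x||_p^2 >= n^{2/p - 1} * ||x||_2^2 for p >= 2
        assert lp_norm(x, p) ** 2 >= n ** exponent * lp_norm(x, 2.0) ** 2 - 1e-9
print("ok")
```

In the proof the inequality is applied to the restriction of w to the n coordinates carrying at least half of its mass, which converts the ℓ^2 lower bound 1/2 into an ℓ^p lower bound with the factor n^{2/p−1}.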
A.5. Connected components of G(N, d/N). In this appendix we give some basic estimates on the sizes of connected components of G(N, d/N). These are needed for the analysis of the tuning forks in Appendix A.6 below. The arguments are standard and are tailored to work well in the regime 1 ≪ d ≲ log N that we are interested in. For smaller values of d, see e.g. [17].
Taking the expectation now easily yields the claim, using |T(X)| = |X|^{|X|−2} by Cayley's theorem, that a tree on k vertices has k − 1 edges, Stirling's approximation, and 1 − x ⩽ e^{−x}. The argument to estimate W_k is similar, noting that in addition to a spanning tree T of X, we also have to have at least one edge not in T connecting two vertices of X. The resulting bound is ⩽ C N e^{−d/3} using Lemma A.14, and the third claim follows from Chebyshev's inequality.
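Cayley's formula |T(X)| = |X|^{|X|−2}, invoked above, can be verified exactly for small sizes via Kirchhoff's matrix-tree theorem: the number of spanning trees of the complete graph K_k equals any cofactor of its Laplacian. A small sketch with exact rational arithmetic:

```python
from fractions import Fraction

def det(M):
    # Exact determinant via fraction-based Gaussian elimination with pivoting
    M = [[Fraction(v) for v in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

for k in range(2, 8):
    # Laplacian of K_k with row and column 0 deleted: (k-1) on the diagonal, -1 off
    L = [[k - 1 if i == j else -1 for j in range(1, k)] for i in range(1, k)]
    assert det(L) == k ** (k - 2)   # Cayley: k^{k-2} labelled trees on k vertices
print("ok")
```

The reduced Laplacian of K_k is kI − J on k − 1 coordinates, with eigenvalues k (multiplicity k − 2) and 1, so its determinant is indeed k^{k−2}.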
We may now estimate the adjacency matrix on the small components of G(N, d/N).

A.6. Tuning forks and proof of Lemma 1.12. In this appendix we give a precise definition of the D-tuning forks from Sect. 1.5 and prove Lemma 1.12.
Definition A.17. A star of degree D ∈ N consists of a vertex, the hub, and D leaves adjacent to the hub, the spokes. A star tuning fork of degree D is obtained by taking two disjoint stars of degree D along with an additional vertex, the base, and connecting both hubs to the base. We say that a star tuning fork is rooted in a graph H if it is a subgraph of H in which both hubs have degree D + 1 and all spokes are leaves.
Lemma A.18. If a star tuning fork of degree D is rooted in some graph H, then the adjacency matrix of H has eigenvalues ± √ D with corresponding eigenvectors supported on the stars of the tuning fork, i.e. on 2D + 2 vertices.
Proof. Suppose first that D ⩾ 1. Note that the adjacency matrix of a star of degree D has rank two and has the two nonzero eigenvalues ±√D, with associated eigenvector equal to ±√D at the hub and 1 at the spokes. Now take a star tuning fork of degree D rooted in a graph H. Define a vector on the vertex set of H by setting it to be ±√D at the hub of the first star, 1 at the spokes of the first star, ∓√D at the hub of the second star, −1 at the spokes of the second star, and 0 everywhere else. Then it is easy to check that this vector is an eigenvector of the adjacency matrix of H with eigenvalue ±√D; in particular, the equation holds at the base because the contributions ±√D and ∓√D of the two hubs cancel. If D = 0 the construction is analogous, defining the vector to be +1 at one hub and −1 at the other.
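The eigenvector computation in this proof can be verified numerically. The sketch below builds a star tuning fork of degree D rooted in a small host graph (a few extra vertices hanging off the base, an arbitrary choice for illustration) and checks that the vector defined above is an eigenvector for both signs, supported on 2D + 2 vertices:

```python
import math

def tuning_fork_adjacency(D, extra=3):
    # Vertices: 0 = base, 1 and 2 = hubs, 3..2+D = spokes of star 1,
    # 3+D..2+2D = spokes of star 2, then `extra` host vertices attached to the base.
    n = 3 + 2 * D + extra
    A = [[0] * n for _ in range(n)]
    def link(u, v):
        A[u][v] = A[v][u] = 1
    link(0, 1)
    link(0, 2)
    for s in range(D):
        link(1, 3 + s)
        link(2, 3 + D + s)
    for b in range(extra):
        link(0, 3 + 2 * D + b)   # the host graph touches only the base
    return A

def fork_eigenvector_residual(D):
    A = tuning_fork_adjacency(D)
    n = len(A)
    worst = 0.0
    for sign in (1.0, -1.0):
        lam = sign * math.sqrt(D)
        v = [0.0] * n
        if D == 0:
            v[1], v[2] = 1.0, -1.0                 # degenerate case: eigenvalue 0
        else:
            v[1], v[2] = lam, -lam                 # +-sqrt(D) at the hubs
            for s in range(D):
                v[3 + s], v[3 + D + s] = 1.0, -1.0  # +-1 at the spokes
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        worst = max(worst, max(abs(Av[i] - lam * v[i]) for i in range(n)))
        assert sum(1 for t in v if t != 0.0) <= 2 * D + 2   # support on the two stars
    return worst

assert all(fork_eigenvector_residual(D) < 1e-9 for D in range(5))
print("ok")
```

The cancellation at the base is the reason the eigenvector is blind to the rest of H: the host graph only sees the base, where the vector vanishes.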
We recall from Sect. 1.5 that F(d, D) denotes the number of star tuning forks of degree D rooted in G_giant. The claim then follows from the second moment estimate in Lemma A.19 and Chebyshev's inequality.
Proof of Lemma A.19. Let x_1, x_2 ∈ [N] be distinct vertices and R_1, R_2 ⊂ [N] \ {x_1, x_2} be disjoint subsets of size D. We abbreviate U = (x_1, x_2, R_1, R_2) and sometimes identify U with {x_1, x_2} ∪ R_1 ∪ R_2. The family U and a vertex o ∈ [N] \ U define a star tuning fork of degree D with base o, hubs x_1 and x_2, and associated spokes R_1 and R_2. The factor 1/2 corrects the overcounting from the labelling of the two stars.

A.7. Multilinear large deviation bounds for sparse random vectors.
In this appendix we collect basic large deviation bounds for multilinear functions of sparse random vectors, which are proved in [42]. The following result is proved in Propositions 3.1, 3.2, and 3.5 of [42]. We denote by ‖X‖_r := (E|X|^r)^{1/r} the L^r-norm of a random variable X.

Proof of Lemma 4.19. We introduce the vectors g := (g_x)_{x∈X} and ε := (ε_x)_{x∈X}. Moreover, with the abbreviation m := m(z) we introduce the constant vectors m = (m)_{x∈X} and e := |X|^{−1/2} (1)_{x∈X}. We regard all vectors as column vectors. A simple computation starting from the difference of (4.6) and its approximate counterpart yields (A.33). For a matrix R ∈ C^{X×X}, we write ‖R‖_{∞→∞} for the operator norm induced by the norm ‖r‖_∞ = max_{x∈X} |r_x| on C^X. It is easy to see that there is c > 0, depending only on κ, such that |1 − m(w)^2| ⩾ c for all w ∈ C_+ satisfying |Re w| ⩽ 2 − κ. Hence, owing to ‖ee*‖_{∞→∞} = 1, we obtain ‖B^{−1}‖_{∞→∞} ⩽ 1 + |1 − m^2|^{−1} ⩽ 1 + c^{−1}. Therefore, inverting B in (A.33) and choosing b, depending only on κ, sufficiently small to absorb the term quadratic in g − m into the left-hand side of the resulting bound yields (4.43) for some sufficiently large C > 0, depending only on κ. This concludes the proof of Lemma 4.19.
A.10. Instability estimate: proof of (2.11). In this appendix we prove (2.11), which shows that the self-consistent equation (2.10) is unstable by a logarithmic factor, which renders it useless for the analysis of sparse random graphs. More precisely, we show that the norm ‖(I − m^2 S)^{−1}‖_{∞→∞} is ill-behaved precisely in the situation where we need it. For simplicity, we replace m^2 with a phase α^{−1} ∈ S^1 separated from ±1, since for Re z ∈ S_κ we have |m(z)|^2 = 1 − O(Im z) and Im m(z) ≍ 1 (A.34) by [33, Lemma 3.5]. Moreover, for definiteness, recalling that with very high probability most of the d(1 + o(1)) neighbours of any vertex in T are again in T, we assume that S is the adjacency matrix of a d-regular graph on T divided by d. By the spectral theorem and because S is Hermitian, ‖(α − S)^{−1}‖_{2→2} is bounded, but, as we now show, the same does not apply to ‖(α − S)^{−1}‖_{∞→∞}. Indeed, the upper bound of (2.11) follows from [34, Proposition A.2], and the lower bound from the following result.
Lemma A.25 (Instability of (2.10)). Let S be 1/d times the adjacency matrix of a graph whose restriction to the ball of radius r ∈ N* around some distinguished vertex is a d-regular tree. Let α ∈ S^1 be an arbitrary phase. Then the corresponding lower bound holds for some universal constant c > 0.
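Since the lemma's displayed bound is not reproduced above, the following sketch only illustrates the mechanism; all numerical choices (d = 50, α = i, truncating the tree at depth r so that the leaves have degree 1, which is one admissible host graph) are assumptions of this illustration. By radial symmetry, the column g = (α − S)^{−1}δ_root depends only on the distance to the root, so it solves a small tridiagonal system; the root's row sum of |(α − S)^{−1}| then grows with r, even though, S being Hermitian with real spectrum, ‖(i − S)^{−1}‖_{2→2} ⩽ 1/dist(i, R) = 1.

```python
def root_row_sum(d, r, alpha=1j):
    # Solve (alpha - S) g = delta_root on the d-regular tree truncated at depth r,
    # with S = (adjacency matrix)/d; by symmetry g depends only on the distance k:
    #   k = 0:      alpha*g0 - g1 = 1
    #   0 < k < r:  alpha*gk - (g_{k-1} + (d-1)*g_{k+1})/d = 0
    #   k = r:      alpha*gr - g_{r-1}/d = 0   (truncated leaves have degree 1)
    n = r + 1
    sub = [0.0] + [-1.0 / d] * r                          # sub-diagonal
    diag = [alpha] * n                                    # diagonal
    sup = [-1.0] + [-(d - 1.0) / d] * (r - 1) + [0.0]     # super-diagonal
    rhs = [1.0] + [0.0] * r
    for k in range(1, n):                                 # Thomas algorithm: forward sweep
        m = sub[k] / diag[k - 1]
        diag[k] -= m * sup[k - 1]
        rhs[k] -= m * rhs[k - 1]
    g = [0j] * n
    g[r] = rhs[r] / diag[r]
    for k in range(r - 1, -1, -1):                        # back substitution
        g[k] = (rhs[k] - sup[k] * g[k + 1]) / diag[k]
    # Root row sum of |(alpha - S)^{-1}|: the distance-k sphere has d*(d-1)^{k-1}
    # vertices, each contributing |g_k|.
    sizes = [1] + [d * (d - 1) ** (k - 1) for k in range(1, n)]
    return sum(s * abs(v) for s, v in zip(sizes, g))

# Each sphere contributes roughly ((d-1)/d)^k, so the row sum grows essentially
# linearly in r for r up to order d, while the 2->2 norm stays bounded by 1.
print(root_row_sum(50, 5), root_row_sum(50, 20))
```

This is exactly the instability exploited above: with r of order log N, the ∞→∞ norm picks up an extra factor growing in r that the 2→2 norm does not see.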