Cusp Universality for Random Matrices I: Local Law and the Complex Hermitian Case

For complex Wigner-type matrices, i.e. Hermitian random matrices with independent, not necessarily identically distributed entries above the diagonal, we show that at any cusp singularity of the limiting eigenvalue distribution the local eigenvalue statistics are universal and form a Pearcey process. Since the density of states typically exhibits only square root or cubic root cusp singularities, our work complements previous results on the bulk and edge universality and thus completes the resolution of the Wigner–Dyson–Mehta universality conjecture for the last remaining universality type in the complex Hermitian class. Our analysis holds not only for exact cusps, but also for approximate cusps, where an extended Pearcey process emerges. As a main technical ingredient we prove an optimal local law at the cusp for both symmetry classes. This result is also the key input in the companion paper (Cipolloni et al. in Pure Appl Anal, 2018. arXiv:1811.04055), where the cusp universality for real symmetric Wigner-type matrices is proven. The novel cusp fluctuation mechanism is also essential for the recent results on the spectral radius of non-Hermitian random matrices (Alt et al. in Spectral radius of random matrices with independent entries, 2019. arXiv:1907.13631), and the non-Hermitian edge universality (Cipolloni et al. in Edge universality for non-Hermitian random matrices, 2019. arXiv:1908.00969).


Introduction
The celebrated Wigner–Dyson–Mehta (WDM) conjecture asserts that local eigenvalue statistics of large random matrices are universal: they depend only on the symmetry type of the matrix and are otherwise independent of the details of the distribution of the matrix ensemble. This remarkable spectral robustness was first observed by Wigner in the bulk of the spectrum. The correlation functions are determinantal and were computed in terms of the sine kernel via explicit Gaussian calculations by Dyson, Gaudin and Mehta [59]. Wigner's vision continues to hold at the spectral edges, where the correct statistics was identified by Tracy and Widom for both symmetry types in terms of the Airy kernel [70,71]. These universality results were originally formulated and proven [17,35,36,67–69] for traditional Wigner matrices, i.e. Hermitian random matrices with independent, identically distributed (i.i.d.) entries, and for their diagonal [55,57] and non-diagonal [51] deformations. More recently they have been extended to Wigner-type ensembles, where identical distribution is not required, and even to a large class of matrices with general correlated entries [7,8,11]. In different directions of generalization, sparse matrices [1,32,47,56], adjacency matrices of regular graphs [14] and band matrices [19,20,66] have also been considered. In parallel developments, bulk and edge universal statistics have been proven for invariant β-ensembles [12,15,17,18,29,30,52,61,62,64,65,73] and even for their discrete analogues [13,16,41,48], but often with very different methods.
A precondition for the Tracy–Widom distribution in all these generalizations of Wigner's original ensemble is that the density of states vanishes as a square root near the spectral edges. The recent classification of the singularities of the solution to the underlying Dyson equation indeed revealed that at the edges only square root singularities appear [6,10]. The density of states may also form a cusp-like singularity in the interior of the asymptotic spectrum, i.e. a single point of vanishing density with cubic root growth on either side. Under very general conditions, no other type of singularity may occur. At the cusp a new local eigenvalue process emerges: the correlation functions are still determinantal, but the Pearcey kernel replaces the sine or the Airy kernel.
The Pearcey process was first established by Brézin and Hikami for the eigenvalues close to a cusp singularity of a deformed complex Gaussian Wigner (GUE) matrix. They considered the model of a GUE matrix plus a deterministic matrix ("external source") having eigenvalues ±1 with equal multiplicity [21,22]. The name Pearcey kernel and the corresponding Pearcey process were coined in [72], in reference to related functions introduced by Pearcey in the context of electromagnetic fields [63]. Similarly to the universal sine and Airy processes, it has later been observed that the Pearcey process universality also extends beyond the realm of random matrices. Pearcey statistics have been established for non-intersecting Brownian bridges [3] and in skew plane partitions [60], always at criticality. We remark, however, that a critical cusp-like singularity does not always induce a Pearcey kernel; see e.g. [31].
In random matrix theory there are still only a handful of rather specific models for which the emergence of the Pearcey process has been proven. This has been achieved for deformed GUE matrices [2,4,23] and for Gaussian sample covariance matrices [42–44] by a contour integration method based upon the Brézin–Hikami formula. Beyond linear deformations, the Riemann–Hilbert method has been used to prove Pearcey statistics for a certain two-matrix model with a special quartic potential with appropriately tuned coefficients [40]. All these previous results concern only specific ensembles with a matrix integral representation. In particular, Wigner-type matrices are out of the scope of this approach.
The main result of the current paper is the proof of the Pearcey universality at the cusps for complex Hermitian Wigner-type matrices under very general conditions. Since the classification theorem excludes any other singularity, this is the third and last universal statistics that emerges from natural generalizations of Wigner's ensemble.
This third universality class has received somewhat less attention than the other two, presumably because cusps are not present in the classical Wigner ensemble. We also note that the most common invariant β-ensembles do not exhibit the Pearcey statistics, as their densities do not feature cubic root cusps but are instead 1/2-Hölder continuous for somewhat regular potentials [28]. Their density vanishes either as a 2k-th or a (2k + 1/2)-th power, each with its own local statistics (see [26] also for the persistence of these statistics under small additive GUE perturbations before the critical time). Cusp singularities, hence Pearcey statistics, however, naturally arise within any one-parameter family of Wigner-type ensembles whenever two spectral bands merge as the parameter varies. The classification theorem implies that cusp formation is the only possible way for bands to merge, so in that sense Pearcey universality is ubiquitous as well.
The bulk and edge universality is characterized by the symmetry type alone: up to a natural shift and rescaling there is only one bulk and one edge statistic. In contrast, the cusp universality has a much richer structure: it is naturally embedded in a one-parameter family of universal statistics within each symmetry class. In the complex Hermitian case these are given by the one-parameter family of (extended) Pearcey kernels, see (2.5) later. Thinking in terms of fine-tuning a single parameter in the space of Wigner-type ensembles, the density of states already exhibits a universal local shape right before and right after the cusp formation: it features a tiny gap or a tiny nonzero local minimum, respectively [5,10]. When the local length scale ℓ of these almost cusp shapes is comparable with the local eigenvalue spacing δ, the general Pearcey statistics is expected to emerge, with parameter determined by the ratio ℓ/δ. Thus the full Pearcey universality typically appears in a double scaling limit.
Our proof follows the three step strategy that is the backbone of the recent approach to the WDM universality; see [38] for a pedagogical exposé and a detailed history of the method. The first step in this strategy is a local law that identifies, with very high probability, the empirical eigenvalue distribution on a scale slightly above the typical eigenvalue spacing. The second step is to prove universality for ensembles with a tiny Gaussian component. Finally, in the third step this Gaussian component is removed by perturbation theory. The local law is used for precise a priori bounds in the second and third steps.
The main novelty of the current paper is the proof of the local law at optimal scale near the cusp. To put the precision in proper context, we normalize the N × N real symmetric or complex Hermitian Wigner-type matrix H to have norm of order one. As customary, the local law is formulated in terms of the Green function G(z) := (H − z)^{-1} with spectral parameter z in the upper half-plane. The local law then asserts that G(z) becomes deterministic in the large N limit as long as η := ℑz is much larger than the local eigenvalue spacing around z. The deterministic approximant M(z) can be computed as the unique solution of the corresponding Dyson equation (see (2.2) and (3.1) later). Near the cusp the typical eigenvalue spacing is of order N^{-3/4}; compare this with the N^{-1} spacing in the bulk and the N^{-2/3} spacing near the edges. We remark that a local law at the cusp on the non-optimal scale N^{-3/5} has already been proven in [8]. In the current paper we improve this result to the optimal scale N^{-3/4}, and this is essential for our universality proof at the cusp.
The main ingredient behind this improvement is an optimal estimate of the error term D (see (3.4) later) in the approximate Dyson equation that G(z) satisfies. The difference M − G is then roughly estimated by B^{-1}(MD), where B is the linear stability operator of the Dyson equation. Previous estimates on D (in averaged sense) were of order ρ/(Nη), where ρ is the local density; roughly speaking, ρ ∼ 1 in the bulk, ρ ∼ N^{-1/3} at the edge and ρ ∼ N^{-1/4} near the cusp. While this estimate cannot be improved in general, our main observation is that, to leading order, we need only the projection of MD onto the single unstable direction of B. We found that this projection carries an extra hidden cancellation due to a special local symmetry at the cusp, and thus the estimate on D effectively improves to ρ²/(Nη). Customary power counting is not sufficient; we need to compute this error term explicitly, at least to leading order. We call this subtle mechanism cusp fluctuation averaging since it combines the well-established fluctuation averaging procedure with the additional cancellation at the cusp. Similar estimates extend to the vicinity of the exact cusps. We identify a key quantity, denoted by σ(z) (in (3.5b) later), that measures the distance from the cusp in a canonical way: σ(z) = 0 characterizes an exact cusp, while |σ(z)| ≪ 1 indicates that z is near an almost cusp. Our final estimate on D is of order (ρ + |σ|)ρ/(Nη). Since the error term D is random and we need to control it in a high moment sense, we need to lift this idea to a high moment calculation, meticulously extracting the improvement from every single term. This is performed in the technically most involved Sect. 4, where we use a Feynman diagrammatic formalism to bookkeep the contributions of all terms. Originally we developed this language in [34] to handle random matrices with slow correlation decay, based on the revival of the cumulant expansion technique in [45] after [50].
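To put these exponents side by side, here is a back-of-the-envelope count at an exact cusp (our heuristic summary of the scalings just quoted, using the cubic root behaviour ρ(τ + iη) ∼ η^{1/3} at a cusp):

```latex
\eta \sim N^{-3/4}
\;\Longrightarrow\;
\rho \sim \eta^{1/3} \sim N^{-1/4},
\qquad
\underbrace{\frac{\rho}{N\eta} \sim N^{-1/2}}_{\text{previous averaged bound on } D}
\;\longrightarrow\;
\underbrace{\frac{(\rho + |\sigma|)\,\rho}{N\eta} \sim N^{-3/4}}_{\text{cusp fluctuation averaging},\ \sigma = 0}
```

So the hidden cancellation gains precisely a factor ρ ∼ N^{-1/4} in the averaged bound, which is what allows the local law to reach the optimal scale η ∼ N^{-3/4}.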
In the current paper we incorporate the cusp into this analysis. We identify a finite set of Feynman subdiagrams, called σ-cells (Definition 4.10), with value σ, that embody the cancellation effect at the cusp. To exploit the full strength of the cusp fluctuation averaging mechanism, we need to trace the fate of the σ-cells along the high moment expansion. The key point is that σ-cells are local objects in the Feynman graphs, thus their cancellation effects act simultaneously and the corresponding gains are multiplicative.
Formulated in the jargon of diagrammatic field theory, extracting the deterministic Dyson equation for M from the resolvent equation (H − z)G(z) = 1 corresponds to a consistent self-energy renormalization of G. One way or another, such a procedure is behind every proof of the optimal local law with high probability. Our σ-cells conceptually correspond to a next order resummation of certain Feynman diagrams carrying a special cancellation.
We remark that we prove the optimal local law only for Wigner-type matrices and not yet for general correlated matrices as in [11,34]. In fact, we use the simpler setup only for the estimate on D (Theorem 3.7); the rest of the proof is already formulated for the general case. This simpler setup allows us to present the cusp fluctuation averaging mechanism with the least amount of technicalities. The extension to the correlated case is based on the same mechanism, but it requires considerably more involved diagrammatic manipulations, which we prefer to develop in a separate work to contain the length of this paper.
Our cusp fluctuation averaging mechanism has further applications. It is used in [9] to prove an optimal cusp local law for the Hermitization of non-Hermitian random matrices with a variance profile, demonstrating that the technique is also applicable in settings where the flatness assumption is violated. The cusp of the Hermitization corresponds to the edge of the non-Hermitian model via Girko's formula, thus the optimal cusp local law leads to an optimal bound on the spectral radius [9] and ultimately also to edge universality [25] for non-Hermitian random matrices.
Armed with the optimal local law, we then perform the other two steps of the three step analysis. The third step, relying on the Green function comparison theorem, is fairly standard, and previous proofs used in the bulk and at the edge need only minor adjustments. The second step, extracting universality from an ensemble with a tiny Gaussian component, can be done in two ways: (i) the Brézin–Hikami formula with contour integration or (ii) Dyson Brownian Motion (DBM). Both methods require the local law as an input. In the current work we follow (i), mainly because this approach directly yields the Pearcey kernel, at least for the complex Hermitian symmetry class. In the companion work [24] we perform the DBM analysis, adapting methods of [37,53,54] to the cusp. The main novelty in the current work and in [24] is the rigidity at the cusp on the optimal scale provided below. Once this key input is given, the proof of the edge universality from [53] is modified in [24] to the cusp setting, proving universality for the real symmetric case as well. We remark, however, that, to the best of our knowledge, the analogue of the Pearcey kernel for the real symmetric case has not yet been explicitly identified.
We now explain some of the novelty in the contour integration method. We first note that a similar approach was initiated in the fundamental work of Johansson on the bulk universality for Wigner matrices with a large Gaussian component [49]. This method was later generalised to Wigner matrices with a small Gaussian component [35], and it also inspired the proof of bulk universality via the moment matching idea [68] once the necessary local law became available. The double scaling regime has also been studied, where the density is very small but the Gaussian component compensates for it [27]. More recently, the same approach was extended to the cusp for deformed GUE matrices [23, Theorem 1.3] and for sample covariance matrices, but only with a large Gaussian component [42–44]. For our cusp universality we need to perform a similar analysis, but with a small Gaussian component. We represent our matrix H as H = H̃ + √t U, where U is a GUE matrix and H̃ is an independent Wigner-type matrix. The contour integration analysis (Sect. 5.1) requires a Gaussian component of size t ≫ N^{-1/2}. The input of the analysis in Sect. 5.1 for the correlation kernel of H is a very precise description of the eigenvalues of H̃ on scales just above N^{-3/4}, the scale of the typical spacing between eigenvalues; this information is provided by our optimal local law. While in the bulk and in the regime of the regular edge finding an appropriate H̃ is a relatively simple matter, in the vicinity of a cusp point the issue is very delicate. The main reason is that the cusp, unlike the bulk or the regular edge, is unstable under small perturbations; in fact it typically disappears and turns into a small positive local minimum if a small GUE component is added. Conversely, a cusp emerges if a small GUE component is added to an ensemble whose density has a small gap.
In particular, even if the density function ρ(τ) of H exhibits an exact cusp, the density ρ̃(τ) of H̃ will have a small gap: in fact ρ is given by the evolution of the semicircular flow up to time t with initial data ρ̃. Unlike in the bulk and edge cases, here one cannot match the densities of H and H̃ by a simple shift and rescaling. Curiously, the contour integral analysis for the local statistics of H at the cusp relies on an optimal local law for H̃, whose density has a small gap, far away from the cusp.
Thus we need an additional ingredient: the precise analysis of the semicircular flow ρ_s := ρ̃ ⊞ ρ_sc^(s) near the cusp up to relatively long times s of order N^{-1/2+ε}; note that ρ_t = ρ is the original density with the cusp. Here ρ_sc^(s) is the semicircular density with variance s and ⊞ denotes the free convolution. In Sects. 5.2–5.3 we will see that the edges of the support of the density ρ_s typically move linearly in the time s, while the gap closes at a much slower rate. Already s ≫ N^{-3/4} is beyond the simple perturbative regime of the cusp, whose natural length scale is N^{-3/4}. Thus we need a very careful tuning of the parameters: the analysis of a cusp for H requires constructing a matrix H̃ that is far from having a cusp but that after a relatively long time t = N^{-1/2+ε} will develop a cusp exactly at the right location. In the estimates we heavily rely on various properties of the solution to the Dyson equation established in the recent paper [10]. These results go well beyond the precision of the previous work [5] and they apply to a very general class of Dyson equations, including a non-commutative von Neumann algebraic setup.
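The gap-closing mechanism of the semicircular flow can be illustrated on the simplest example from the introduction, the ±1 external-source GUE model of Brézin and Hikami. The sketch below is our own illustration (the model choice, the sign convention m(z) = ∫ dμ(x)/(x − z) and the resulting cubic are ours, derived from the standard subordination relation for free convolution with a semicircle): the gap at the band centre closes at the critical time, where the density grows like a cubic root.

```python
import numpy as np

def m_free(z, t):
    """Stieltjes transform m_t(z) (convention m(z) = int dmu(x)/(x - z)) of the
    free convolution of mu = (delta_{-1} + delta_{+1})/2 with a semicircular
    density of variance t.  The subordination relation m = m_0(z + t m), with
    m_0(w) = w / (1 - w^2), is equivalent to the cubic
        -t^2 m^3 - 2 t z m^2 + (1 - t - z^2) m - z = 0,
    whose physical root is tracked by continuity from large Im z."""
    m = None
    for eta in np.geomspace(4.0, z.imag, 40):
        zz = complex(z.real, eta)
        roots = np.roots([-t**2, -2 * t * zz, 1 - t - zz**2, -zz])
        ref = -1 / zz if m is None else m   # asymptotics m ~ -1/z at large eta
        m = roots[np.argmin(np.abs(roots - ref))]
    return m

# Density at the band centre: a gap for t < 1, a cusp (rho ~ eta^{1/3}) at the
# critical time t = 1, and a strictly positive density for t > 1.
for t in (0.5, 1.0, 1.5):
    print(t, m_free(1e-4j, t).imag / np.pi)
```

The design mirrors the discussion above: the edges of the two bands move on scale t, while the gap survives until t = 1, so matching a prescribed cusp requires tuning the initial density rather than a simple shift.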

The Dyson equation.
Let W = W* ∈ C^{N×N} be a self-adjoint random matrix and A = diag(a) be a deterministic diagonal matrix with entries a = (a_i)_{i=1}^N ∈ R^N. We say that W is of Wigner-type [8] if its entries w_ij for i ≤ j are centred, E w_ij = 0, independent random variables. We define the variance matrix or self-energy matrix S = (s_ij)_{i,j=1}^N by s_ij := E |w_ij|². This matrix is symmetric with non-negative entries. In [8] it was shown that, as N tends to infinity, the resolvent G(z) := (H − z)^{-1} of the deformed Wigner-type matrix H = A + W entrywise approaches the diagonal matrix M(z) = diag(m(z)), where m(z) = (m_i(z))_{i=1}^N is the unique solution of the Dyson equation

−1/m_i(z) = z − a_i + Σ_j s_ij m_j(z),   z ∈ ℍ,   (2.2)

with ℑ m_i(z) > 0. We call M or m the self-consistent Green's function. The normalised trace of M is the Stieltjes transform of a unique probability measure on ℝ that approximates the empirical eigenvalue distribution of A + W increasingly well as N → ∞, motivating the following definition.
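For intuition, the Dyson equation (2.2) can be solved numerically by a damped fixed-point iteration. The sketch below is our illustration (the flat profile s_ij = 1/N with a = 0 is our choice; for it, ⟨m⟩ is the Stieltjes transform of the semicircle law):

```python
import numpy as np

def solve_dyson(a, S, z, tol=1e-12, max_iter=10_000):
    """Damped fixed-point iteration for the vector Dyson equation
    -1/m_i(z) = z - a_i + (S m(z))_i  with  Im m_i(z) > 0."""
    m = np.full(len(a), 1j, dtype=complex)   # initial guess in upper half-plane
    for _ in range(max_iter):
        m_new = 1.0 / (a - z - S @ m)
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = 0.5 * m + 0.5 * m_new            # damping stabilises the iteration
    return m

# Flat Wigner case: s_ij = 1/N, a = 0; then <m> solves m = 1/(-z - m),
# the self-consistent equation of the semicircle law.
N = 200
S = np.full((N, N), 1.0 / N)
a = np.zeros(N)
z = 0.5 + 1e-6j
m = solve_dyson(a, S, z)
rho = np.mean(m.imag) / np.pi                # scDOS at tau = 0.5
semicircle = np.sqrt(4 - 0.5**2) / (2 * np.pi)
print(rho, semicircle)                       # the two values agree
```

The damping factor 1/2 is needed because the undamped map is only a weak contraction for small η; with it the iteration converges in a few dozen steps even very close to the real axis.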
Definition 2.1 (Self-consistent density of states). The unique probability measure ρ on ℝ, defined through

ρ(dτ) := π^{-1} lim_{η↘0} ℑ⟨m(τ + iη)⟩ dτ,   ⟨m⟩ := N^{-1} Σ_i m_i,

is called the self-consistent density of states (scDOS). Accordingly, its support supp ρ is called the self-consistent spectrum.

Cusp universality.
We make the following assumptions:

Assumption (A) (Bounded moments). The entries of the Wigner-type matrix √N W have bounded moments and the expectation A is bounded, i.e. there are positive constants C_k such that

E |√N w_ij|^k ≤ C_k for all k ∈ ℕ,   max_i |a_i| ≤ C₀.

Assumption (B) (Fullness).
If the matrix W = W* ∈ C^{N×N} belongs to the complex Hermitian symmetry class, then we assume that the covariance matrix of (ℜ w_ij, ℑ w_ij) is bounded from below by c/N as quadratic forms, for some positive constant c > 0. If W = Wᵀ ∈ R^{N×N} belongs to the real symmetric symmetry class, then we assume E w²_ij ≥ c/N.

Assumption (C) (Bounded self-consistent Green's function). In a neighbourhood of some fixed spectral parameter τ ∈ ℝ the self-consistent Green's function is bounded, i.e. for positive constants C, κ we have

|m_i(z)| ≤ C for all i ∈ [N] and all z with ℜz ∈ (τ − κ, τ + κ).

We call the constants appearing in Assumptions (A)–(C) model parameters. All generic constants C in this paper may implicitly depend on these model parameters. Dependence on further parameters, however, will be indicated.
Remark 2.2. The boundedness of m in Assumption (C) can be ensured by assuming some regularity of the variance matrix S. For more details we refer to [5,Chapter 6].
From the extensive analysis in [10] we know that the self-consistent density ρ is described by explicit shape functions in the vicinity of local minima with small value of ρ and around small gaps in the support of ρ. The density in such almost cusp regimes is given by precisely one of the following three asymptotics:

(i) Exact cusp. There is a cusp point c ∈ ℝ in the sense that ρ(c) = 0 and ρ(c ± δ) > 0 for 0 < δ ≪ 1. In this case the self-consistent density is locally around c given by the explicit cubic root asymptotics (2.4a) for some γ > 0.

(ii) Small gap. There is a maximal interval [e₋, e₊] of size 0 < Δ := e₊ − e₋ ≪ 1 such that ρ|_{[e₋,e₊]} ≡ 0. In this case the density around e± is, for some γ > 0, locally given by an explicit formula involving the shape function around the edge (see (2.4)).

(iii) Small nonzero local minimum. There is a local minimum at m ∈ ℝ of ρ such that 0 < ρ(m) ≪ 1. In this case there exists some γ > 0 such that the density is locally given by an explicit formula involving the shape function around the local minimum (see (2.4)).

We note that the parameter γ in (2.4a) is chosen in a way which is convenient for the universality statement. We also note that the choices for γ in (2.4b)–(2.4d) are consistent with (2.4a) in the sense that in the regimes Δ ≪ x ≪ 1 and ρ(m)³ ≪ |x| ≪ 1 the respective formulae asymptotically agree. Depending on the three cases (i)–(iii), we define the almost cusp point b as the cusp c in case (i), the midpoint (e₋ + e₊)/2 in case (ii), and the minimum m in case (iii). When the local length scale of the almost cusp shape becomes comparable with the eigenvalue spacing, i.e. if Δ ≲ N^{-3/4} or ρ(m) ≲ N^{-1/4}, then we call the local shape a physical cusp. This terminology reflects the fact that the shape becomes indistinguishable from the exact cusp with ρ(c) = 0 when resolved with a precision above the eigenvalue spacing. In this case we call b a physical cusp point.
The extended Pearcey kernel with a real parameter α (often denoted by τ in the literature) is given by the double contour integral formula (2.5), where one contour consists of rays from ±∞e^{iπ/4} to 0 and rays from 0 to ±∞e^{−iπ/4}, and the other is the ray from −i∞ to i∞. The simple Pearcey kernel with parameter α = 0 was first observed in the context of random matrix theory in [21,22]. We note that (2.5) is a special case of a more general extended Pearcey kernel defined in [72, Eq. (1.1)].
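For orientation, recall Pearcey's original function from [63], which in one common normalisation (conventions vary) reads P(x, y) = ∫_ℝ exp(i(t⁴ + x t² + y t)) dt. The contours in (2.5) serve the same purpose as the contour rotation in the sketch below (our own illustration, not taken from the paper): they turn a quartic oscillatory phase into an absolutely convergent integral.

```python
import numpy as np
from math import gamma, pi

def pearcey(x, y, s_max=6.0, n=200_001):
    """Pearcey-type integral  P(x, y) = int_R exp(i(t^4 + x t^2 + y t)) dt.
    Rotating the contour to t = e^{i pi/8} s makes the integrand decay like
    exp(-s^4), so plain quadrature converges."""
    w = np.exp(1j * pi / 8)                          # contour rotation
    s = np.linspace(-s_max, s_max, n)
    t = w * s
    f = np.exp(1j * (t**4 + x * t**2 + y * t)) * w   # factor w is dt/ds
    return np.sum(f) * (s[1] - s[0])                 # endpoints are ~ e^{-s_max^4}

# Check against the closed form P(0, 0) = 2 * Gamma(5/4) * e^{i pi/8},
# obtained by rotating the half-line integral onto exp(-u^4).
exact = 2 * gamma(1.25) * np.exp(1j * pi / 8)
print(pearcey(0.0, 0.0), exact)
```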
It is natural to express universality in terms of the rescaled k-point correlation functions p_k of the eigenvalues; the universality statement (Theorem 2.3) asserts their convergence, after rescaling by the physical cusp scale, to the correlation functions of the Pearcey process with kernel (2.5), where c(k) > 0 is a small constant depending only on k.

Local law.
We emphasise that the proof of Theorem 2.3 requires a very precise a priori control on the fluctuation of the eigenvalues even at singular points of the scDOS. This control is expressed in the form of a local law with an optimal convergence rate down to the typical eigenvalue spacing. We now define the scale on which the eigenvalues are predicted to fluctuate around the spectral parameter τ .

Definition 2.4 (Fluctuation scale).
We define the self-consistent fluctuation scale η_f = η_f(τ) through (2.7); inside the self-consistent spectrum it is determined by the requirement that a spectral window of size η_f around τ carries about 1/N of the total mass of ρ. We will see later (cf. (A.8b)) that (2.7) also gives the fluctuation scale of the edge eigenvalue adjacent to a spectral gap of length Δ, as predicted by the local behaviour of the scDOS. The control on the fluctuation of eigenvalues is expressed in terms of the following local law.

Theorem 2.5 (Local law).
Let H be a deformed Wigner-type matrix of the real symmetric or complex Hermitian symmetry class. Fix any τ ∈ ℝ and assume (A)–(C). Then for any ε, ζ > 0 and ν ∈ ℕ the local law holds uniformly for all admissible spectral parameters z, both in its isotropic form, tested against arbitrary deterministic vectors u, v ∈ C^N, and in its averaged form. We remark that later we will prove the local law also in a form which is uniform in τ ∈ [−N^{100}, N^{100}] and η ∈ [N^{−1+ζ}, N^{100}], albeit with a more complicated error term; see Proposition 3.11. The local law in Theorem 2.5 implies a large deviation result for the fluctuation of eigenvalues on the optimal scale, uniformly for all singularity types. Corollary 2.6 (Uniform rigidity). Let H be a deformed Wigner-type matrix of the real symmetric or complex Hermitian symmetry class satisfying Assumptions (A)–(C) for τ ∈ int(supp ρ). Then the rigidity estimate holds for any ε > 0 and ν ∈ ℕ and some C = C(ε, ν), where we defined the (self-consistent) eigenvalue index k(τ) := ⌈Nρ((−∞, τ))⌉, with ⌈x⌉ = min{k ∈ ℤ : k ≥ x}.
In particular, the fluctuation of the eigenvalue whose expected position is closest to the cusp location does not exceed N^{-3/4+ε} for any ε > 0 with very high probability. The following corollary specialises Corollary 2.6 to the neighbourhood of a cusp.
We remark that a variant of Corollary 2.7 holds more generally for almost cusp points. It is another consequence of Corollary 2.6 that with high probability there are no eigenvalues much further than the fluctuation scale η_f away from the spectrum. We note that the following corollary generalises [11, Corollary 2.3] by also covering internal gaps of size ≪ 1.
Corollary 2.8 (No eigenvalues outside the support of the self-consistent density). Let τ ∉ supp ρ. Under the assumptions of Theorem 2.5, the corresponding spectral region contains no eigenvalues with very high probability for any ε, ν > 0, where c and C are positive constants depending on the model parameters; the latter also depends on ε and ν. We also remark that the fullness Assumption (B) can be weakened in two respects: first, only the part of the covariance that involves E w²_ij is needed; second, uniform primitivity allows certain matrix elements of W to vanish. The proof under these more general assumptions follows the same strategy but requires minor modifications within the stability analysis.¹
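The scales N^{-1}, N^{-2/3} and N^{-3/4} appearing above can all be recovered from a simple ansatz (our simplification of the fluctuation scale (2.7), valid inside the spectrum): η_f(τ) is the half-width of the window around τ carrying mass 1/N. For a model density ρ(x) = |x|^p this gives η_f ∼ N^{-1/(p+1)}:

```python
import numpy as np

def eta_f(N, p):
    """Half-width eta of the window around a point where the density behaves
    like rho(x) = |x|^p, chosen so the window carries mass 1/N:
        int_{-eta}^{eta} |x|^p dx = 2 eta^{p+1} / (p + 1) = 1 / N."""
    return ((p + 1) / (2 * N)) ** (1 / (p + 1))

Ns = np.array([1e4, 1e6, 1e8, 1e10])
for p, name in [(0.0, "bulk"), (0.5, "edge"), (1 / 3, "cusp")]:
    # slope of log eta_f against log N gives the exponent -1/(p+1)
    slope = np.polyfit(np.log(Ns), np.log(eta_f(Ns, p)), 1)[0]
    print(f"{name}: eta_f ~ N^{slope:.3f}")
```

This recovers the bulk spacing N^{-1} (p = 0), the edge scale N^{-2/3} (p = 1/2) and the cusp scale N^{-3/4} (p = 1/3).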

Local Law
In order to directly appeal to recent results on the shape of the solution to the Matrix Dyson Equation (MDE) from [10] and the flexible diagrammatic cumulant expansion from [34], we first reformulate the Dyson equation (2.2) for N-vectors m into a matrix equation that will approximately be satisfied by the resolvent G. This viewpoint also allows us to treat diagonal and off-diagonal elements of G on the same footing. In fact, (2.2) is a special case of the MDE

1 + (z − A + S[M(z)]) M(z) = 0,   ℑM(z) > 0 for z ∈ ℍ.   (3.1)

In the special case of Wigner-type matrices the self-energy operator is given by

S[R] = diag(S r) + T ⊙ Rᵗ,   r := (R_ii)_{i=1}^N,   t_ij := E w²_ij 1(i ≠ j),   (3.2)

where ⊙ denotes the entrywise Hadamard product. The solution to (3.1) is then given by M = diag(m), where m solves (2.2). Note that the action of S on diagonal matrices is independent of T; hence the Dyson equation (2.2) for Wigner-type matrices is solely determined by the matrix S, and the matrix T plays no role. However, T does play a role in analyzing the error matrix D; see (3.4) below.
The proof of the local law consists of three largely separate arguments. The first part concerns the analysis of the stability operator and the shape analysis of the solution M to (3.1). The second part is proving that the resolvent G is indeed an approximate solution to (3.1), in the sense that the error matrix D is small. In previous works [8,11,34] it was sufficient to establish smallness of D in an isotropic form ⟨x, Dy⟩ and in an averaged form ⟨BD⟩ with general bounded vectors/matrices x, y, B. In the vicinity of a cusp, however, it becomes necessary to establish an additional cancellation when D is averaged against the unstable direction of the stability operator B. We call this new effect cusp fluctuation averaging. Finally, the third part of the proof consists of a bootstrap argument starting far away from the real axis and iteratively lowering the imaginary part η = ℑz of the spectral parameter while maintaining the desired bound on G − M.
Remark 3.1. We remark that the proofs of Theorem 2.5 and Corollaries 2.6 and 2.8 use the independence assumption on the entries of W only very locally. In fact, only the proof of a specific bound on D (see (3.15) later), which follows directly from the main result of the diagrammatic cumulant expansion, Theorem 3.7, uses the vector structure and the specific form of S in (3.2) at all. Therefore, assuming (3.15) as an input, our proof of Theorem 2.5 remains valid also in the correlated setting of [11,34], as long as S is flat (see (3.6) below) and Assumption (C) is replaced by the corresponding assumption on the boundedness of ‖M‖.
For brevity we will carry out the proof of Theorem 2.5 only in the vicinity of almost cusps, as the local law in all other regimes was already proven to optimality in [8,11]. Therefore, within this section we will always assume that z = τ + iη = τ₀ + ω + iη ∈ ℍ lies inside a small neighbourhood of the location τ₀ of a local minimum of the scDOS within the self-consistent spectrum supp ρ, i.e. |ω| ≤ c. Here c is a sufficiently small constant depending only on the model parameters. We will further assume that either (i) ρ(τ₀) ≥ 0 is sufficiently small and τ₀ is the location of a cusp or internal minimum, or (ii) ρ(τ₀) = 0 and τ₀ is an edge adjacent to a sufficiently small gap of length Δ > 0. The results from [10] guarantee that these are the only possibilities for the shape of ρ, see (2.4). In other words, we assume that τ₀ ∈ supp ρ is a local minimum of ρ with a shape close to a cusp (cf. (2.4)). For concreteness we will also assume that if τ₀ is an edge, then it is a right edge (with a gap of length Δ > 0 to the right) and ω ∈ (−c, 2Δ]. The case when τ₀ is a left edge has the same proof.
We now introduce a quantity that will play an important role in the cusp fluctuation averaging mechanism. We define σ(z) as in (3.5a), where ℜM := (M + M*)/2 is the real part of M = M(z). It was proven in [10, Lemma 5.5] that σ(z) extends to the real line as a 1/3-Hölder continuous function wherever the scDOS ρ is smaller than some threshold c ∼ 1, i.e. ρ ≤ c. In the specific case of S as in (3.2) the definition simplifies to (3.5b). The analogues of the quantities f, p and σ in (3.5b) are denoted by f_u, s and σ in [10], respectively. The significance of σ for the classification of singularity types in Wigner-type ensembles was first realised in [5]. Although in this paper we will use only [10] and will not rely on [5], we remark that the definition of σ in [5, Eq. (8.11)] differs slightly from the definition (3.5b). However, both definitions equally fulfil the purpose of classifying singularity types, since the ensuing scalar quantities σ are comparable inside the self-consistent spectrum. For the interested reader, we briefly relate our notations to the respective conventions in [10] and [5]. The quantity denoted by f in both [10] and [5] is the normalized eigendirection of the saturated self-energy operator F in the respective settings and is related to f from (3.5b) by normalisation, up to an error O(η/ρ). Moreover, σ in [5] is defined as an average of f³ sgn m, justifying the comparability to σ from (3.5b). Linearising the Dyson equation around M leads to the stability operator B (see (3.3)), which governs the equation for the difference G − M. In order to apply the results of [10] to the stability operator B, we first have to check that the flatness condition [10, Eq. (3.10)] is satisfied for the self-energy operator S. We claim that S is flat, i.e.

c ⟨R⟩ ≤ S[R] ≤ C ⟨R⟩,   ⟨R⟩ := N^{-1} Tr R,   (3.6)

as quadratic forms, for any positive semidefinite R ∈ C^{N×N}. We remark that in the earlier paper [8], in the Wigner-type case, only the upper bound s_ij ≤ C/N defined the concept of flatness. Here with the definition (3.6) we follow the convention of the more recent works [10,11,34], which is more conceptual. We also warn the reader that in the complex Hermitian Wigner-type case the condition c/N ≤ s_ij ≤ C/N implies (3.6) only if t_ij is bounded away from −s_ij. However, the flatness (3.6) is an immediate consequence of the fullness Assumption (B). Indeed, (B) is equivalent to the condition that the covariance operator K of all entries above and on the diagonal, defined as K_{ab,cd} := E w_ab w_cd, is uniformly strictly positive definite. This implies that K ≥ c K_G for some constant c ∼ 1, where K_G is the covariance operator of a GUE or GOE matrix, depending on the symmetry class we consider. This means that S can be split as S = S₀ + cS_G, where S_G and S₀ are the self-energy operators corresponding to K_G and K − cK_G, respectively. It is now an easy exercise to check that S_G, and thus S, is flat.
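The "easy exercise" can also be checked numerically: for a GUE matrix with the normalisation E|w_ij|² = 1/N (our choice) the self-energy operator is S_G[R] = E[W R W] = ⟨R⟩·1, which satisfies (3.6) with c = C = 1. A Monte Carlo sanity check of this identity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 20, 20_000

def gue(N):
    """GUE matrix normalised so that E|w_ij|^2 = 1/N."""
    X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return (X + X.conj().T) / (2 * np.sqrt(N))

# A fixed positive semidefinite test matrix R
B = rng.standard_normal((N, N))
R = B @ B.T / N

# Monte Carlo estimate of the self-energy S_G[R] = E[W R W]
est = np.zeros((N, N), dtype=complex)
for _ in range(samples):
    W = gue(N)
    est += W @ R @ W
est /= samples

target = (np.trace(R) / N) * np.eye(N)   # <R> times the identity
print(np.max(np.abs(est - target)))      # small Monte Carlo error
```

Since S_G[R] is a multiple of the identity with factor ⟨R⟩, the two-sided bound c⟨R⟩ ≤ S_G[R] ≤ C⟨R⟩ holds trivially, which is the content of the flatness claim for the Gaussian part.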
In particular, [10, Proposition 3.5 and Lemma 4.8] are applicable, implying that [10, Assumption 4.5] is satisfied. Thus, according to [10, Lemma 5.1], for spectral parameters z in a neighbourhood of τ_0 the operator B has a unique isolated eigenvalue β of smallest modulus, with associated right and left eigendirections B[V_r] = βV_r and B*[V_l] = βV_l, normalised such that ‖V_r‖_hs = ⟨V_l, V_r⟩ = 1. We denote the spectral projections onto V_r and onto its complement by P := ⟨V_l, ·⟩V_r and Q := 1 − P. For the convenience of the reader we now collect some important quantitative information about the stability operator and its unstable direction from [10].
Proof. We first explain how to translate the notations from the present paper to those in [10]: the operators S, B, Q are denoted by the same symbols in [10]; the matrices V_l, V_r here are denoted by l/⟨l, b⟩ and b there. The bound on ‖B^{−1}Q‖ in (3.7a) follows directly from [10, Eq. (5.15)]. The bounds on V_l, V_r in (3.7a) follow from the definition of the stability operator (3.3) together with the fact that ‖M‖ ≲ 1 (by Assumption (C)) and ‖S‖_{hs→‖·‖} ≲ 1, the latter following from the upper bound in the flatness condition (3.6). The asymptotic expansion of ρ in (3.7b) follows from [10, Remark 7.3] and [5, Corollary A.1]. The claims in (iii) follow directly from [10, Proposition 6.1]. Finally, the claims in (iv) follow directly from [10, Remark 10.4].
The following lemma establishes simplified lower bounds on ξ_1, ξ_2 whenever η is much larger than the fluctuation scale η_f. We defer the proof of this technical lemma, which distinguishes various regimes, to the Appendix.
We now define an appropriate matrix norm in which we will measure the distance between G and M. The ‖·‖_*-norm is defined exactly as in [11] and is similar to the one first introduced in [34]. It is a norm comparing matrix elements on a large but finite set of vectors with a hierarchical structure. To define this set we introduce some notations. For second order cumulants of matrix elements, κ(w_ab, w_cd) := E w_ab w_cd, we use the short-hand notation κ(ab, cd). We also use the short-hand notation κ(xb, cd) for the x = (x_a)_{a∈[N]}-weighted linear combination Σ_a x_a κ(ab, cd) of such cumulants. We use the convention that replacing an index in a scalar quantity by a dot (·) refers to the corresponding vector, e.g. A_{a·} is a short-hand notation for the vector (A_ab)_{b∈[N]}. Matrices R_xy with vector subscripts x, y are understood as short-hand notations for ⟨x, Ry⟩, and matrices R_xa with mixed vector and index subscripts are understood as ⟨x, Re_a⟩, with e_a being the a-th normalized, ‖e_a‖ = 1, standard basis vector. We fix two vectors x, y and some large integer K and define the corresponding hierarchical sets of vectors I_k. Here the cross and the direct part κ_c, κ_d of the 2-cumulants κ(·, ·) refer to the natural splitting dictated by the Hermitian symmetry. In the specific case of (3.2) we simply have κ_c(ab, cd) = δ_ad δ_bc s_ab and κ_d(ab, cd) = δ_ac δ_bd t_ab. The ‖·‖_*-norm is then given in terms of these sets. We remark that the sets I_k, and hence also ‖·‖_*, depend on z via M = M(z). We omit this dependence from the notation as it plays no role in the estimates.
In terms of this norm we obtain the following estimate on G − M in terms of its projection Θ := ⟨V_l, G − M⟩ onto the unstable direction of the stability operator B. It is a direct consequence of a general expansion of approximate quadratic matrix equations whose linear stability operators have a single eigenvalue close to 0, as given in Lemma A.1.

Proposition 3.4 (Cubic equation for Θ).
Fix K ∈ N, x, y ∈ C^N and use ‖·‖_* = ‖·‖_*^{K,x,y}. For fixed z ∈ D^cusp, and on the event that ‖G − M‖_* + ‖D‖_* ≤ N^{−10/K}, the difference G − M admits the expansion (3.8a) with an error matrix E and the scalar Θ := ⟨V_l, G − M⟩, which satisfies the approximate cubic equation (3.8b). Here the error term of the cubic equation satisfies the upper bound (3.8c), where R is a deterministic matrix with ‖R‖ ≲ 1, and the coefficients of the cubic equation satisfy the comparison relations (3.8d).

Proof. We first establish some important bounds involving the ‖·‖_*-norm. We claim that the bound (3.9) holds for any matrix R. The proof of (3.9) follows verbatim as in [11, Lemma 3.4] with (3.7a) as an input. Moreover, the bound on ⟨V_l, ·⟩ follows directly from the bound on Q. Obviously, we also have ‖·‖_* ≤ 2‖·‖. Next, we apply Lemma A.1 from the Appendix, with the operator B in Lemma A.1 chosen as the stability operator (3.3). Then (A.1) is satisfied with λ := N^{1/2K} according to (3.9) and (3.7a). With δ := N^{−25/4K} we verify (3.8a) directly from (A.5), where Θ = ⟨V_l, G − M⟩ satisfies (3.10). Here we used |Θ| ≤ ‖G − M‖_* ≤ N^{−10/K} and ‖MD‖_* ≲ N^{1/2K}‖D‖_*. The coefficients μ_0, μ_2, μ_3 and the matrix R are defined through (A.4). Now we bound |Θ||⟨R, D⟩| ≤ N^{−1/4K}|Θ|³ + N^{1/8K}|⟨R, D⟩|^{3/2} by Young's inequality, absorb the error terms bounded by N^{−1/4K}|Θ|³ into the cubic term μ_3Θ³ by introducing a modified coefficient μ̃_3, and use that |μ_3| ∼ |μ̃_3| ∼ 1 for any z ∈ D^cusp. Finally, we can safely divide (3.10) by μ̃_3 to verify (3.8b) with ξ_1 := −β/μ̃_3 and ξ_2 := μ_2/μ̃_3. For the fact |μ_3| ∼ 1 on D^cusp and the comparison relations (3.8d) we refer to (3.7c)–(3.7d).

Probabilistic bound.
We now collect bounds on the error matrix D from [34, Theorem 4.1] and Sect. 4. We first introduce the notion of stochastic domination.

Definition 3.5 (Stochastic domination). Let X = (X^(N))_N and Y = (Y^(N))_N be sequences of nonnegative random variables. We say that X is stochastically dominated by Y, and use the notation X ≺ Y, if

P[X^(N) > N^ε Y^(N)] ≤ C(ε, ν)N^{−ν}

for any ε > 0, ν ∈ N and some family of positive constants C(ε, ν) that is uniform in N and other underlying parameters (e.g. the spectral parameter z in the domain under consideration).
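Definition 3.5 can be made concrete with a toy Monte Carlo experiment of our own (the sign-sum example is not from the paper): the random variable X_N = |Σ of N independent signs| is stochastically dominated by Y_N = √N, since for any fixed ε > 0 the probability of exceeding N^ε √N decays rapidly in N.

```python
import numpy as np

rng = np.random.default_rng(2)

def exceed_freq(N, eps, trials=2000):
    """Empirical frequency of |sum of N random signs| > N^eps * sqrt(N)."""
    signs = rng.choice([-1.0, 1.0], size=(trials, N))
    return float(np.mean(np.abs(signs.sum(axis=1)) > N**eps * np.sqrt(N)))

# X_N = |sum of N independent signs| is stochastically dominated by sqrt(N):
# the exceedance frequency dies out as N grows, for any fixed eps > 0.
freqs = [exceed_freq(N, eps=0.25) for N in (100, 1000, 10000)]
assert freqs[-1] == 0.0   # at N = 10000 the threshold is a 10-sigma event
```

The uniformity in "other underlying parameters" in the definition corresponds here to the fact that the constants C(ε, ν) do not depend on which sign sequence is used.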
It can be checked (see [33, Lemma 4.4]) that ≺ satisfies the usual arithmetic properties; for example, X_1 ≺ Y_1 and X_2 ≺ Y_2 imply X_1 + X_2 ≺ Y_1 + Y_2 and X_1X_2 ≺ Y_1Y_2. Furthermore, to formulate bounds on a random matrix R compactly, we introduce the notations |R| ≺ Λ and |R|_av ≺ Λ for random matrices R and a deterministic control parameter Λ = Λ(z). We also introduce high moment norms ‖·‖_p, for p ≥ 1, for scalar valued random variables X and random matrices R. To translate high moment bounds into high probability bounds and vice versa we have the following easy lemma [11, Lemma 3.7].
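The moment-to-probability direction of such translations rests on Markov's inequality, P(X > tψ) ≤ (‖X‖_p/(tψ))^p = t^{−p} whenever ‖X‖_p ≤ ψ. The following sanity check uses an arbitrary nonnegative random variable and an arbitrary exponent p; both choices are ours, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
p, trials = 8, 200000
X = np.abs(rng.standard_normal(trials))   # arbitrary nonnegative random variable
psi = np.mean(X**p) ** (1.0 / p)          # empirical p-norm ||X||_p

# Markov's inequality: P(X > t*psi) <= (||X||_p / (t*psi))^p = t^(-p)
for t in (1.5, 2.0, 3.0):
    emp = np.mean(X > t * psi)
    assert emp <= t ** (-p)
```

Taking t = N^ε and p large (depending on ν) is exactly how a uniform-in-p moment bound yields the N^{−ν} probability bound in Definition 3.5.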
Lemma 3.6. Let R be a random matrix and Λ a deterministic control parameter such that Λ ≥ N^{−C} and ‖R‖ ≤ N^C for some C > 0, and let K ∈ N be a fixed integer. Then high moment bounds in the ‖·‖_p-norms and high probability bounds in the sense of ≺ are equivalent for R.

Expressed in terms of the ‖·‖_p-norm, we have the following high-moment bounds on the error matrix D. The bounds (3.11a)–(3.11b) have already been established in [34, Theorem 4.1]; we list them here for completeness. The bounds (3.11c)–(3.11d), however, are new; they capture the additional cancellation at the cusp and are the core novelty of the present paper. The additional smallness comes from averaging against the specific weights p, f from (3.5b).
Moreover, for the specific weight matrix B = diag(pf) we have the improved bound (3.11c), as well as the improved bound (3.11d) on the off-diagonal component, where the relevant z-dependent quantities are defined alongside. Theorem 3.7 will be proved in Sect. 4. We now translate the high moment bounds of Theorem 3.7 into high probability bounds via Lemma 3.6 and use those to establish bounds on G − M and on the error in the cubic equation for Θ. To simplify the expressions we formulate the bounds on the domain D^ζ defined in (3.12).

Lemma 3.8 (High probability error bounds). Fix ζ, c > 0 sufficiently small and suppose that |G − M| ≺ Λ, |ℑ(G − M)| ≺ Ξ and |Θ| ≺ θ hold at fixed z ∈ D^ζ, and assume that the deterministic control parameters Λ, Ξ, θ satisfy Λ + Ξ + θ ≲ N^{−c}. Then for any sufficiently small ε > 0 the bound (3.13a) holds, as well as (3.13b), where the coefficients ξ_1, ξ_2 are those from Proposition 3.4.

Proof. We translate the high moment bounds (3.11a)–(3.11b) into high probability bounds using Lemma 3.6 and |G| ≺ ‖M‖ + 1 to find (3.14). In particular, these bounds together with the assumed bounds on G − M guarantee the applicability of Proposition 3.4. Now we use (3.14) and (3.9) in (3.8a) to get (3.13b).
Here we used (3.9), translated ‖·‖_p-bounds into ≺-bounds on ‖·‖_* and vice versa via Lemma 3.6, and absorbed the N^{1/K} factors into ≺ by using that K can be chosen arbitrarily large. It remains to verify (3.13a). In order to do so, we first claim that the bound (3.15) holds for any sufficiently small ε > 0.
Proof of (3.15). We first collect two additional ingredients from [10] specific to the vector case.
for some constants c, c′ ∼ 1. We now turn to the proof of (3.15). First note that, according to (a) and (b), with errors measured in the ‖·‖-norm and for some constant c ∼ 1, the first term in (3.15) can be rewritten, where w_1 ∈ C^N is a deterministic vector with uniformly bounded entries. Since |⟨diag(w_1)D⟩| ≺ (ρ + Ξ)/(Nη) by (3.14), the bound on the first term in (3.15) follows together with (3.11c) via Lemma 3.6. Now we consider the second term in (3.15). We split D = D_d + D_o into its diagonal and off-diagonal components. Since B and S preserve the space of diagonal and the space of off-diagonal matrices, we find a corresponding splitting with an appropriate deterministic matrix u_ij having bounded entries. In particular, the cross terms vanish and the first term is bounded according to (3.14). By taking the off-diagonal part of (3.8a), and using the fact that M and V_r are diagonal, we obtain a bound on (G − M)_o and therefore also on the corresponding term, for any ε such that θ ≤ N^{−ε}, by Young's inequality in the last step. Together with (3.17), (3.14) and the assumption that Λ + Ξ + θ ≲ N^{−c}, the bound on the second term on the lhs. of (3.15) thus follows together with (3.18). This completes the proof of (3.15).
With (3.14) and (3.15), the upper bound (3.8c) on the error of the cubic equation (3.8b) takes the same form as the rhs. of (3.15), provided K is sufficiently large depending on ε. By the first estimate in (3.13b) we can redefine the control parameter on |G − M| as Λ := θ + ((ρ + Ξ)/(Nη))^{1/2}, and the claim (3.13a) follows directly with (3.15), thus completing the proof of Lemma 3.8.

3.3. Bootstrapping. Now we will show that the difference G − M converges to zero uniformly for all spectral parameters z ∈ D^ζ as defined in (3.12). For convenience we refer to existing bounds on G − M far away from the real line to establish a rough bound on G − M in, say, D^1. We then iteratively lower the threshold on η by appealing to Proposition 3.4 and Lemma 3.8 until we establish the rough bound in all of D^ζ. As a second step we then improve the rough bound iteratively until we obtain Theorem 2.5.

Proof. The rough bound (3.20) in a neighbourhood of a cusp was first established for Wigner-type random matrices in [8]. For the convenience of the reader we present a streamlined proof that is adapted to the current setting. The lemma is an immediate consequence of the following statement. Let ζ_s > 0 be a sufficiently small step size, depending on ζ. Then for any k ∈ N_0 with k ≤ 1/ζ_s, the bound (3.21) holds on the domain D^{max{1−kζ_s, ζ}}. We prove (3.21) by induction over k. For sufficiently small ζ the induction start k = 0 holds due to the local law away from the self-consistent spectrum, e.g. [34, Theorem 2.1]. Now, as induction hypothesis, suppose that (3.21) holds on D_k := D^{max{1−kζ_s, ζ}}; in particular, |G| ≺ 1 and ‖G‖_p ≲_{ε,p} N^ε for any ε, p according to Lemma 3.6. The monotonicity of the function η ↦ η‖G(τ + iη)‖_p (see e.g. [34, proof of Prop. 5.5]) implies ‖G‖_p ≲_{ε,p} N^{ε+ζ_s} ≤ N^{2ζ_s} on D_{k+1} and therefore, according to Lemma 3.6, that |G| ≺ N^{2ζ_s} on D_{k+1}. This, in turn, implies ‖D‖_* ≺ N^{−ζ/3} on D_{k+1} by (3.11a) and Lemma 3.6, provided ζ_s is chosen small enough. We now fix x, y and a large integer K as the parameters of ‖·‖_* = ‖·‖_*^{x,y,K} for the rest of the proof and omit them from the notation, but we stress that all estimates will be uniform in x, y. We find sup_{z∈D_{k+1}} ‖D(z)‖_* ≺ N^{−ζ/3} by using a simple union bound and ‖∂_z D‖ ≤ N^C for some C > 0.
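The monotonicity of η ↦ η‖G(τ + iη)‖_p used in the induction step has a simple scalar analogue: for the empirical Stieltjes transform m(z) = N^{−1}Σ_i (λ_i − z)^{−1} of any point set, η·ℑm(τ + iη) = N^{−1}Σ_i η²/((λ_i − τ)² + η²) is increasing in η. The sketch below uses arbitrary random points as a stand-in spectrum (an illustration of ours, not the matrix G itself).

```python
import numpy as np

rng = np.random.default_rng(4)
lambdas = rng.standard_normal(500)   # arbitrary stand-in "eigenvalues"
tau = 0.3

def eta_im_m(eta):
    """eta * Im m(tau + i*eta) for the empirical Stieltjes transform m."""
    return float(np.mean(eta**2 / ((lambdas - tau) ** 2 + eta**2)))

etas = np.linspace(0.01, 2.0, 50)
vals = np.array([eta_im_m(e) for e in etas])
assert np.all(np.diff(vals) > 0)     # strictly increasing in eta
```

This monotonicity is what allows the bound at scale η to be propagated one step ζ_s lower, at the price of an N^{ζ_s} factor.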
Thus, for K large enough, we can use (3.8a), (3.8b), (3.8c) and (3.9) to infer the bound (3.22). The iteration in η is driven by the following deterministic statement.

Lemma 3.10. Let Θ: [η_*, η^*] → C be a complex valued function and let ξ_1, ξ_2, d: [η_*, η^*] → R_+ be continuous functions such that at least one of the conditions in (3.23) holds true. With direct arithmetics we can now verify that the coefficients ξ_1, ξ_2 in (3.8b) and the auxiliary coefficients ξ̃_1, ξ̃_2 defined in (3.7e) satisfy the assumptions in Lemma 3.10 with the choice of the constant function d = N^{−4^{−k}ζ+δ} for any δ > 0, by using only the information on ξ_1, ξ_2 given by the comparison relations (3.8d). As an example, in the regime where τ_0 is a right edge and ω ∼ Δ, we have ξ_1 ∼ (η + Δ)^{2/3} and ξ_2 ∼ (η + Δ)^{1/3}, and both functions are monotonically increasing in η. Then Assumption (ii) of Lemma 3.10 is satisfied. All other regimes are handled similarly.
Proof of Theorem 2.5. The bounds within the proof hold uniformly for z ∈ D^ζ unless explicitly specified otherwise; we therefore suppress this qualifier in the following statements. First we apply Lemma 3.8 with the choice Ξ = Λ, i.e. we do not treat the imaginary part of the resolvent separately. With this choice the first inequality in (3.13b) becomes self-improving and after iteration shows (3.24); in other words, (3.13a) holds with Λ = θ + (ρ/Nη)^{1/2} + 1/Nη. This implies that if |Θ| ≺ θ ≤ N^{−c} for some arbitrarily small c > 0, then (3.25) holds for all sufficiently small ε with overwhelming probability. For this conclusion we used the comparison relations (3.8d), Proposition 3.2(iv) as well as (3.7b), and the bound (3.26). The bound (3.25) is a self-improving estimate on |Θ| in the following sense. For k ∈ N and l ∈ N ∪ {*} we define suitable thresholds θ_k and d_l. Here we needed to check the condition in (3.23), which at η^* ∼ 1 is immediate. After a k-step iteration, carried out until N^{−kε} becomes smaller than N^{6ε}d_*, we find |Θ| ≺ θ_*, where we used that ε can be chosen arbitrarily small. We are now ready to prove the following bound which we, for convenience, record as a proposition.

Proof. Using |Θ| ≺ θ_* proven above, we apply (3.24) with θ = θ_* to conclude the first inequality in (3.27). For the second inequality in (3.27) we use the estimate on |G − M|_av from (3.13b) with θ = θ_* and Λ = (ρ/Nη)^{1/2} + 1/Nη.

Proposition 3.11. For any ζ > 0 we have the bounds (3.27).
The bound on |G − M| from (3.27) implies complete delocalisation of eigenvectors, uniformly even at singularities of the scDOS. The following corollary was established already in [8, Corollary 1.14] and, given (3.27), the proof follows the same line of reasoning.

Corollary 3.12 (Eigenvector delocalisation).
Let u ∈ C^N be an eigenvector of H corresponding to an eigenvalue λ ∈ τ_0 + (−c, c) for some sufficiently small positive constant c ∼ 1. Then for any deterministic x ∈ C^N we have |⟨u, x⟩| ≺ ‖x‖/√N.

The bounds (3.27) simplify in the regime η ≥ N^ζ η_f above the typical eigenvalue spacing to (3.28), and further if ω is well inside a gap of size Δ ≥ N^{δ+ζ}η_f. In the latter case we find Δ > N^{−3/4} by the definition η_f = Δ^{1/9}/N^{2/3} in (2.7), and we use Lemma 3.3 and (3.7b), (3.7e) to conclude (3.30). In the last bound we used 1/(Nω) ≤ N^{−δ}/(Nη). The bounds on |G − M|_av from (3.28) and (3.30), inside and outside the self-consistent spectrum, allow us to show the uniform rigidity, Corollary 2.6. We postpone these arguments until after we finish the proof of Theorem 2.5. The uniform rigidity implies that for dist(z, supp ρ) ≥ N^ζ η_f we can estimate the imaginary part of the resolvent via (3.31) for any normalised x ∈ C^N, where u_λ denotes the normalised eigenvector corresponding to λ. For the first inequality in (3.31) we used Corollary 3.12, and for the second we applied Corollary 2.6, which allows us to replace the Riemann sum with an integral. Using this with (3.31), we apply Lemma 3.8, repeating the strategy from the beginning of the proof, but this time with the control parameter Ξ = ρ. In this way we find (3.32), with the quantity d_# defined there. Note that the estimates in (3.32) are simpler than those in (3.27). The reason is that the additional terms 1/Nη, 1/(Nη)² and 1/(Nη)³ in (3.27) are a consequence of the presence of Λ in (3.13a), (3.13b); with Ξ = ρ these are immediately absorbed into ρ and are no longer present. The second term in the definition of d_# can be dropped since we still have the corresponding bound. Now we turn to the proof of (2.8b). Given the second bound in (3.28), it is sufficient to consider the case when τ = e_− + ω and η ≤ ω ≤ Δ/2 with ω ≥ N^ζ η_f. In this case Proposition 3.2 yields ξ_2ρ/(ξ_1 + ρ) ≲ η/ω ∼ η/dist(z, supp ρ), and therefore the second bound in (3.32) implies (2.8b). This completes the proof of Theorem 2.5.
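Eigenvector delocalisation of the kind asserted in Corollary 3.12 is easy to observe numerically. The sketch below builds a GUE-like Hermitian matrix as a hypothetical stand-in for a Wigner-type H (the size N and the exponent used in the check are illustrative choices of ours) and verifies that every coordinate overlap |⟨u, e_i⟩|² stays far below N^ε/N.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 400
# GUE-like Hermitian matrix (hypothetical stand-in for a Wigner-type H)
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / (2 * np.sqrt(N))

_, U = np.linalg.eigh(H)             # columns are orthonormal eigenvectors
# delocalisation: each coordinate overlap |<u, e_i>|^2 stays below N^eps / N
eps = 0.6
assert np.max(np.abs(U) ** 2) < N**eps / N
```

For genuinely delocalised eigenvectors the maximal overlap is of order (log N)/N, so the N^ε/N threshold of the corollary is comfortably met.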

Rigidity and absence of eigenvalues.
The proofs of Corollaries 2.6 and 2.8 rely on the bounds on |G − M|_av from (3.28) and (3.30). As before, we may restrict ourselves to the neighbourhood of a local minimum τ_0 ∈ supp ρ of the scDOS which is either an internal minimum with a small value of ρ(τ_0) > 0, a cusp location, or a right edge adjacent to a small gap of length Δ > 0. All other cases, namely the bulk regime and regular edges adjacent to large gaps, have been treated prior to this work [8,11].
showing that with overwhelming probability the interval [τ_1, τ_2] does not contain any eigenvalues. A simple union bound then finishes the proof of Corollary 2.8.
Proof of Corollary 2.6. We now establish Corollary 2.6 around a local minimum τ_0 ∈ supp ρ of the scDOS. The proof has two ingredients. First we follow the strategy of the proof of [8, Corollary 1.10] to obtain (3.33), where C > 0 is chosen sufficiently large such that τ_1 lies far to the left of the self-consistent spectrum. The control (3.33) suffices to prove Corollary 2.6 for all τ = τ_0 + ω except for the case when τ_0 = e_− is an edge at a gap of length Δ ≥ N^ζ η_f and ω ∈ [−N^ζ η_f, 0] for some fixed ζ > 0 and η_f = η_f(e_−) ∼ Δ^{1/9}/N^{2/3}, i.e. except for some N^ζ eigenvalues close to the edge, with arbitrarily small ζ > 0. In all other cases the proof follows the same argument as the proof of [8, Corollary 1.11], using the uniform 1/N-bound from (3.33), and we omit the details here.
The reason for having to treat the eigenvalues very close to the edge e_− separately is that (3.33) does not give information on which side of the gap these N^ζ eigenvalues are found. To get this information requires the second ingredient, the band rigidity (3.34), valid for any ν ∈ N, Δ ≥ ω ≥ N^ζ η_f and large enough N. The combination of (3.34) and (3.33) finishes the proof of Corollary 2.6.
Band rigidity has been shown in [11], as part of the proof of Corollary 2.5 there, in the case where Δ is bounded from below. We will now adapt this proof to the case of small gap sizes Δ ≥ N^{ζ−3/4}. Since by Corollary 2.8 with overwhelming probability there are no eigenvalues in e_− + [N^ζ η_f, Δ/2], it suffices to show (3.34) for ω = Δ/2. As in the proof of [11, Corollary 2.5] we consider the interpolation H_t between the original random matrix H = H_0 and a deterministic matrix H_1. Let ρ_t denote the scDOS of H_t. Exactly as in the proof from [11] it suffices to show that no eigenvalue crosses the gap along the interpolation with overwhelming probability, i.e. that (3.35) holds for any ν ∈ N. Here t ↦ a_t ∈ R \ supp ρ_t is some spectral parameter inside the gap, continuously depending on t, such that a_0 = τ. In [11] a_t was chosen independent of t, but the argument remains valid with any other choice of a_t. We call I_t the connected component of R \ supp ρ_t that contains a_t and denote by Δ_t = |I_t| the gap length. In particular, Δ_0 = Δ and τ ∈ I_t for all t ∈ [0, 1]. Since ‖W‖ ≲ 1 with overwhelming probability, in the regime t ≥ 1 − ε for some small constant ε > 0 the matrix H_t is a small perturbation of the deterministic matrix H_1, whose resolvent (H_1 − τ)^{−1} = M(τ) at spectral parameter τ is bounded by Assumption (C); in particular Δ_1 ∼ 1. By 1/3-Hölder continuity hence Δ_t ∼ 1, and there are no eigenvalues of H_t in a neighbourhood of a_t, proving (3.35) for t ≥ 1 − ε. For t ∈ [ε, 1 − ε] we will now show that Δ_t ∼ 1 for any ε > 0. In fact, we have dist(τ, supp ρ_t) ≳ 1. This is a consequence of [10, Lemma D.1]. More precisely, we use the equivalence of (iii) and (v) of that lemma: we check (iii) and conclude the uniform distance to the self-consistent spectrum by (v). Since M_t(τ) = M(τ) and ‖M(τ)‖ ≲ 1, we only need to check that the stability operator B_t has a bounded inverse. For this we use the corresponding identity (cf. [7, Eq. (4.24)]) and ‖C‖_{hs→hs} = 1 to first show the uniform bound ‖B_t^{−1}‖_{hs→hs} ≲ 1/t, and then improve this to ‖B_t^{−1}‖ ≲ 1/t using the trick of expanding in a geometric series from [7, Eqs. (4.60)–(4.63)]. This completes the argument that Δ_t ∼ 1. Now we apply [34, Corollary 2.3] to see that there are no eigenvalues of H_t around a_t as long as t is bounded away from zero and one, proving (3.35) for this regime.
Finally, we are left with the regime t ∈ [0, ε] for some sufficiently small ε > 0. By [10, Proposition 10.1(a)] the self-consistent Green's function M_t corresponding to H_t is bounded even in a neighbourhood of τ whose size only depends on the model parameters. In particular, Assumptions (A)–(C) are satisfied for H_t, and Corollary 2.8, which was already proved above, is applicable. Thus it suffices to show that the size Δ_t of the gap in supp ρ_t containing τ is bounded from below by Δ_t ≥ N^{ζ−3/4} for some ζ > 0. The size of the gap can be read off from the following relationship between the norm of the saturated self-energy operator F and the size of the gap. Let H be a random matrix satisfying (A)–(C) and let τ be well inside the interior of a gap of length Δ ∈ [0, c] in the self-consistent spectrum for a sufficiently small c ∼ 1. Then (3.36) holds, where in the first step we used [7, Eqs. (4.23)–(4.25)], in the second step (3.7b), and in the last step that dist(τ, supp ρ) ∼ Δ. Applying the analogue of (3.36) for H_t with F_t(τ), and using that dist(τ, supp ρ_t) ≳ Δ_t, we obtain 1 − ‖F_t(τ)‖_{hs→hs} ≳ Δ_t^{2/3}. Combining this inequality with (3.36), we conclude that the gap size Δ_t never drops below cN^{ζ−3/4}. This completes the proof of the last regime in (3.35).

Cusp Fluctuation Averaging and Proof of Theorem 3.7
We will use the graphical multivariate cumulant expansion from [34], which automatically exploits the self-energy renormalization of D to highest order. Since the final formal statement requires some custom notations, we first give a simple motivating example to illustrate the type of expansion and its graphical representation. If W is Gaussian, then integration by parts shows (4.1), where we recall that κ(α, β) := κ(w_α, w_β) is the second cumulant of the matrix entries w_α, w_β indexed by double indices α = (a, b), β = (a′, b′), and Δ^{(a,b)} denotes the matrix of all zeros except for a 1 in the (a, b) entry. Since for non-Gaussian W or higher powers of D the expansion analogous to (4.1) consists of much more complicated polynomials in resolvent entries, we represent them concisely as the values of certain graphs; the rhs. of (4.1), for example, is represented by a simple graph. The graphs retain only the relevant information of the complicated expansion terms, and chains of estimates can be transcribed into simple graph surgeries. Graphs also help identify critical terms that have to be estimated more precisely in order to obtain the improved high moment bound on D. For example, the key cancellation mechanism behind the cusp fluctuation averaging is encoded in a small distinguished part of the expansion that can conveniently be identified with certain subgraphs, called the σ-cells; see Definition 4.10 later. It is easy to count, estimate and manipulate σ-cells as part of a large graph, while following the same operations on the level of formulas would be almost intractable.
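The scalar analogue of the Gaussian integration by parts behind (4.1) is the identity E[x g(x)] = κ(x, x) E[g′(x)] for x ∼ N(0, σ²), where κ(x, x) = σ² is the second cumulant. The following Monte Carlo check is a toy illustration of ours; the test function sin is an arbitrary smooth choice.

```python
import numpy as np

rng = np.random.default_rng(6)
sigma = 0.7
x = sigma * rng.standard_normal(400000)

g, gp = np.sin, np.cos           # arbitrary smooth test function and its derivative

lhs = np.mean(x * g(x))          # E[x g(x)]
rhs = sigma**2 * np.mean(gp(x))  # kappa(x, x) * E[g'(x)]
assert abs(lhs - rhs) < 5e-3
```

In the matrix expansion each application of this identity trades one factor of W for a cumulant κ(α, β) and a derivative of the resolvent, which is exactly how the complicated polynomials behind (4.3) arise.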
First we review some of the basic nomenclature from [34]. We consider random matrices H = A + W with diagonal expectation A and complex Hermitian or real symmetric zero mean random component W, indexed by some abstract set J of size |J| = N. We recall that Greek letters α, β, ... stand for labels, i.e. double indices α = (a, b) ∈ J × J, and we write α^t := (b, a) for the transpose of α. Underlined Greek letters stand for multisets of labels, whereas bold-faced Greek letters stand for tuples of labels, the counting combinatorics being their, for our purposes, only relevant difference.
According to [34, Proposition 4.4] with N(α) = {α, α^t}, it follows from the assumed independence that (4.3) holds for general (conjugate) linear functionals Λ^(k) of bounded norm, with the conventions recalled in (4.3c). Some notations in (4.3) require further explanation. The qualifier "if α_k" is satisfied for those terms in which α_k is a summation variable when the brackets in the product Π_j(1 + ·) are opened. The notation ∪ indicates the union of multisets. For even p we apply (4.3) with Λ^(k)(D) := ⟨diag(fp)D⟩ for k ≤ p/2 and with the complex conjugate of this functional for k > p/2. This is obviously a special case of Λ^(k)(D) = ⟨BD⟩, which was considered in the so-called averaged case of [34] with arbitrary B of bounded operator norm, since ‖diag(fp)‖ = ‖fp‖_∞ ≤ C. The bound proved in [34] for this averaged quantity is, however, not good enough at the cusp. We can nevertheless use the graphical language developed in [34] to estimate the complicated right hand side of (4.3). The α-labels in (4.3) are summed freely, while the β-labels are strongly restricted; in the independent case they can only be of the type α or α^t. These graphs will be called "double indexed" graphs since the vertices are naturally equipped with labels (double indices). Here we introduced the terminology "double indexed" for the graphs in [34] to distinguish them from the "single indexed" graphs to be introduced later in this paper.

Graphical representation via double indexed graphs.
To be more precise, the graphs in [34] were vertex-coloured graphs. The colours encoded a resummation of the terms in (4.3): vertices whose labels (or their transposes) appeared in one of the cumulants in (4.3) received the same colour. We then first summed up the colours and only afterwards summed up all labels compatible with the given colouring. According to [34, Proposition 4.4] and the expansion of the main term [34, Eq. (49)], for every even p the identity (4.4a) holds, where G^{av(p,6p)} is a certain finite collection of vertex coloured directed graphs with p connected components, and Val(Γ), the value of the graph Γ, will be recalled below. According to [34], each graph Γ ∈ G^{av(p,6p)} fulfils the following properties: if V is the vertex set of Γ and, for each colour c ∈ C, V_c denotes the set of c-coloured vertices, then the value is given by (4.4b), where the ultimate product is the product over all p cycles in the graph. By the notation Cyc(v_1, ..., v_k) we indicate a directed cycle with vertices v_1, ..., v_k. Depending upon whether a given cycle is a G-cycle or G*-cycle, it then contributes one of the factors indicated after the last curly bracket in (4.4b), with the vertex order chosen in such a way that the designated edge represents the G diag(fp) or diag(fp)G* matrix.
As an example illustrating (4.4b), consider a graph with two cycles. Actually, in [34] the graphical representation of the graph is simplified; it does not contain all the information encoded in the graph. First, the directions of the edges are not indicated; in the picture both cycles should be oriented clockwise. Secondly, the types of the edges are not indicated, apart from the wiggled line. In fact, the edges in the second subgraph stand for G*, while those in the first subgraph stand for G. To translate the pictorial representation directly, let the striped vertices in the first and second cycle be associated with α_1, β_1 and the dotted vertices with α_2, β_2. Accordingly, the wiggled edge in the first cycle stands for G diag(fp), while the wiggled edge in the second cycle stands for diag(fp)G*. The reason why these details were omitted in the graphical representation of a double index graph is that they do not influence the basic power counting estimate of its value used in [34].

Single index graphs.
In [34] we operated with double index graphs, which are structurally simple and appropriate for bookkeeping complicated correlation structures, but which are not suitable for detecting the additional smallness we need at the cusp. The contribution of the graphs in [34] was estimated by a relatively simple power counting argument where only the number of (typically off-diagonal) resolvent elements was recorded. In fact, for many subleading graphs this procedure already gave a very good bound that is sufficient at the cusps as well. The graphs carrying the leading contribution, however, now have to be computed to a higher accuracy, and this leads to the concept of "single index graphs". These are obtained by a certain refinement and reorganization of the double index graphs via a procedure we will call graph resolution, to be defined later. The main idea is to restructure the double index graph in such a way that instead of labels (double indices) α = (a, b), its vertices naturally represent single indices a and b. Every double indexed graph will give rise to a finite number of resolved single index graphs. The double index graphs that require a more precise analysis compared with [34] will be resolved to single index graphs. After we explain the structure of the single index graphs and the graph resolution procedure, double index graphs will not be used in this paper any more. Thus, unless explicitly stated otherwise, by graph we will mean a single index graph in the rest of this paper. We now define the set G of single index graphs we will use in this paper. They are directed graphs, where parallel edges and loops are allowed. Let the graph be denoted by Γ, with vertex set V(Γ) and edge set E(Γ). We will assign a value to each Γ which comprises weights assigned to the vertices and specific values assigned to the edges. Since an edge may represent different objects, we will introduce different types of edges that will be graphically distinguished by different line styles.
We now describe these ingredients precisely.
Vertices. Each vertex v ∈ V(Γ) is equipped with an associated index a_v ∈ J. Graphically the vertices are represented by small unlabelled bullets, i.e. in the graphical representation the actual index is not indicated. It is understood that all indices will be independently summed up over the entire index set J when we compute the value of the graph.
Vertex weights. Each vertex v ∈ V(Γ) carries some weight vector w^(v) ∈ C^J which is evaluated as w^(v)_{a_v} at the index a_v associated with the vertex. We generally assume these weights to be uniformly bounded in N, i.e. sup_N ‖w^(v)‖_∞ < ∞. Visually we indicate vertex weights by incoming arrows. Vertices without explicitly indicated weight may carry an arbitrary bounded weight vector. We also use the notation 1 to indicate the constant 1 vector as the weight; this corresponds to summing up the corresponding index unweighted.

G-edges. The set GE(Γ) of G-edges represents resolvent entries; all these edges can also be loops. The convention is that continuous lines represent G, dashed lines correspond to G*, while a diamond on either type of edge indicates the subtraction of M or M*. An edge e ∈ GE(Γ) carries its type as its attribute, so as a short-hand notation we can simply write G_e for G_{a_{i(e)}a_{t(e)}}, G*_{a_{i(e)}a_{t(e)}}, (G − M)_{a_{i(e)}a_{t(e)}} or (G* − M*)_{a_{i(e)}a_{t(e)}}, depending on which type of G-edge e represents. Due to their special role in the later estimates, we will separately bookkeep those G − M or G* − M* edges that appear looped. We thus define the subset GE_{g−m}(Γ) ⊂ GE(Γ) as the set of G-edges e ∈ GE(Γ) of type G − M or G* − M* such that i(e) = t(e). We write g − m to refer to the fact that looped edges are evaluated on the diagonal.

(G-)edge degree. For any vertex v we define its in-degree deg^−(v) and out-degree deg^+(v) as the number of incoming and outgoing G-edges. Looped edges (v, v) are counted for both in- and out-degree. We denote the total degree by deg(v) := deg^−(v) + deg^+(v).

Interaction edges. Besides the G-edges we also have interaction edges, IE(Γ), representing the cumulants κ. A directed interaction edge e = (u, v) represents a matrix R^(e) = (R^(e)_{ab})_{a,b∈J} given by the corresponding cumulants.
Visually we will represent interaction edges by a separate line style. Although the interaction matrix R^(e) is completely determined by the in- and out-degrees of the adjacent vertices i(e), t(e), we still write out the specific S and T names because these will play a special role in the later part of the proof. As a short-hand notation we shall frequently use R_e := R^(e)_{a_{i(e)}a_{t(e)}} to denote the matrix element selected by the indices a_{i(e)}, a_{t(e)} associated with the initial and terminal vertex of e. We also note that we do not indicate the direction of edges associated with S, as the matrix S is symmetric.
Generic weighted edges. Besides the specific G-edges and interaction edges, we additionally allow for generic edges, reminiscent of the generic vertex weights introduced above. They will be called generic weighted edges, or weighted edges for short. To every weighted edge e we assign a weight matrix K^{(e)} = (k^{(e)}_{ab})_{a,b∈J} which is evaluated as k^{(e)}_{a_{i(e)},a_{t(e)}} when we compute the value of the graph by summing up all indices. To simplify the presentation we will not indicate the precise form of the weight matrix K^{(e)} but only its entry-wise scaling as a function of N. A weighted edge labelled N^{−l} represents an arbitrary weight matrix K^{(e)} whose entries scale like |k^{(e)}_{ab}| ≤ cN^{−l}. We denote the set of weighted edges by WE(Γ). For a given weighted edge e ∈ WE we record the entry-wise scaling of K^{(e)} in an exponent l(e) ≥ 0 in such a way that we always have |k^{(e)}_{ab}| ≤ cN^{−l(e)}.
Graph value. For graphs Γ ∈ G we define their value, which differs slightly from that in (4.4b) because it applies to a different class of graphs.

Single index resolution.
There is a natural mapping from double indexed graphs to a collection of single indexed graphs that encodes the rearranging of the terms in (4.4b) when the summation over labels α_v is reorganized into a summation over single indices. We now describe this procedure. In this process we are free to consider the mapping of x_1 (or any other vertex, for that matter) as fixed, by the symmetry of u ↔ v. (iv) If a wiggled G-edge is mapped to an edge from u to v, then v is equipped with a weight of pf. If a wiggled G^*-edge is mapped to an edge from u to v, then u is equipped with a weight of pf. All vertices with no weight specified in this process are equipped with the constant weight 1.
We define the set G(p) ⊂ G as the set of all graphs obtained from the double index graphs G^{av}(p, 6p) via the single index resolution procedure. (ii) We also remark that loops in double index graphs are never mapped into loops in single index graphs along the single index resolution. Indeed, double index loops are always mapped to edges parallel to the interaction edge of the corresponding vertex.
A few simple facts immediately follow from the single index construction in Definition 4.2. From (i) it is clear that the number of vertices in the single index graph is twice the number of colours of the double index graph. From (ii) it follows that the number of interaction edges in the single index graph equals the number of colours of the double index graph. Finally, from (iii) it is obvious that if for some colour c there are k = k(c) vertices in the double index graph with colour c, then the resolution of this colour gives rise to 2^{k(c)−1} single indexed graphs. Since these resolutions are done independently for each colour, the number of single index graphs originating from one double index graph is ∏_c 2^{k(c)−1}. Since the number of double index graphs in G^{av}(p, 6p) is finite, so is the number of graphs in G(p).
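The counting step above can be sketched in a few lines; the helper name below is purely illustrative (not from the paper), and a double index graph is summarized only by the sizes k(c) of its colour classes:

```python
# Sketch: count the single index graphs produced by the resolution
# procedure, given the number k(c) of double-index vertices per colour c.
from math import prod

def num_single_index_graphs(colour_sizes):
    """Each colour c with k(c) vertices resolves independently into
    2**(k(c)-1) single index graphs, so the total is the product."""
    return prod(2 ** (k - 1) for k in colour_sizes)

# Example: two colours with two vertices each (as in the 2-cycle example)
# give 2 * 2 = 4 single index graphs, matching the four graphs of (4.9).
print(num_single_index_graphs([2, 2]))
```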
Let us present an example of single index resolution applied to the graph from (4.5), where, for the sake of transparency, we label all vertices and edges. The graph consists of one 2-cycle on the vertices x_1, y_2 and one 2-cycle on the vertices x_2, y_1, with x_1, y_1 and x_2, y_2 being of equal colour (i.e. being associated to labels connected through cumulants). In order to explain steps (i)–(iii) of the construction we first neglect that some edges may be wiggled, but we restore the orientation of the edges in the picture. We then fix the mapping of x_i to pairs of vertices (u_i, v_i) for i = 1, 2 in such a way that the incoming edges of x_i are incoming at u_i and the outgoing edges from x_i are outgoing from v_i. It remains to map y_i to (u_i, v_i), and for each i there are two choices of doing so, so that we obtain the four possibilities (4.9) in the language of single index graphs, where the S, T assignment agrees with (4.6).
Finally we want to visualize step (iv) of the single index resolution in our example. Suppose that in (4.8) the edges e_1 and e_2 are G-edges while e_3 and e_4 are G^*-edges, with e_2 and e_4 being wiggled (in agreement with (4.5)). According to (iv), the terminal vertex of e_2 and the initial vertex of e_4 are equipped with a weight of pf, while the remaining vertices are equipped with a weight of 1. The first graph in (4.9) would thus be equipped with these weights. We note that in contrast to the value definition for double index graphs (4.4), where each average in (4.4b) contains a 1/N prefactor, the single index graph value (4.7) does not include the N^{−p} prefactor. We chose this convention mainly because the exponent p in the prefactor N^{−p} cannot easily be read off from the single index graph itself, whereas in the double index graph p is simply the number of connected components. We now collect some simple facts about the structure of the graphs in G(p) which directly follow from the corresponding properties of the double index graphs listed in Proposition 4.1.

Examples of graphs.
We now turn to some examples explaining the relation between the double index graphs from [34] and single index graphs. We note that the single index graphs actually contain more information because they specify edge directions, specify weights explicitly and differentiate between G- and G^*-edges. This information was not necessary for the power counting arguments used in [34], but for the improved estimates it will be crucial.
We start with the graphs representing the following simple equality, which follows from κ(α, β) = E w_α w_β. We now turn to the complete graphical representation for the second moment in the case of Gaussian entries, where we again stress that the double index graphs hide the specific weights and the fact that one of the connected components actually contains G^*-edges. In terms of single index graphs, the rhs. of (4.11) can be represented as the sum over the values of six graphs. The first two graphs were already explained above. The additional four graphs come from the second term in the rhs. of (4.11). Since κ(α_1, β_1) is non-zero only if α_1 = β_1 or α_1 = β_1^t, there are four possible choices of relations among the α and β labels in the two kappa factors. For example, the first graph in the second line of (4.12) corresponds to the choice α_1^t = β_1, α_2^t = β_2. Written out explicitly with summation over single indices, in the picture the left index corresponds to a_1, the top index to b_2, the right one to a_2 and the bottom one to b_1. We conclude this section by providing an example of a graph with some degree higher than two, which only occurs in the non-Gaussian situation and might contain looped edges. For example, in the expansion of N^2 E|⟨diag(fp)D⟩|^2 in the non-Gaussian setup there is a term with r_{ab} = κ(ab, ba, ba)/2 and s_{ab} = κ(ab, ba), in accordance with (4.6). Here we used that the interaction edges form a perfect matching and that deg(e) ≥ 2, |IE(Γ)| ≤ p. The somewhat informal notation in (4.13) hides a technical subtlety. The resolvent entries G_{ab} are indeed bounded by an O(1) constant in the sense of very high moments, but not almost surely. We will make bounds like the one in (4.13) rigorous in a high-moment sense in Lemma 4.8.

Simple estimates on Val(Γ).
The estimate (4.13) ignores the fact that typically only the diagonal resolvent matrix elements of G are of order O(1); the off-diagonal matrix elements are much smaller. This is manifested in the Ward identity ∑_b |G_{ab}|^2 = Im G_{aa}/η. Thus the sum of off-diagonal resolvent elements G_{ab} is usually smaller than its naive size of order N, at least in the regime η ≫ N^{−1}. This is quantified by the so-called Ward estimates. Similarly to (4.13), the inequalities in (4.14b) are meant in a power counting sense, ignoring that the entries of G might not be bounded by ρ almost surely but only in some high-moment sense. As a consequence of (4.14b) we can gain a factor of ψ for each off-diagonal (that is, connecting two separate vertices) G-factor, but clearly only for at most two G-edges per adjacent vertex. Moreover, this gain can obviously only be used once for each edge and not twice, separately, when summing up the indices at both adjacent vertices. As a consequence a careful counting of the total number of ψ-gains is necessary, see [34, Section 4.3] for details.
Ward bounds for the example graphs from Sect. 4.4. From the single index graphs drawn in (4.12) we can easily obtain the known bound E|⟨diag(fp)D⟩|^2 ≲ ψ^4. Indeed, the last four graphs contribute a combinatorial factor of N^4 from the summations over four single indices and a scaling factor of N^{−2} from the size of S, T. Furthermore, we can gain a factor of ψ for each G-edge through Ward estimates, and the bound follows. Similarly, the first two graphs contribute a factor of N = N^{2−1} from summation and S/T, and a factor of ψ^2 from the Ward estimates, which overall gives N^{−1}ψ^2 ≤ ψ^4. As this example shows, the bookkeeping of available Ward estimates is important and we will do so systematically in the following sections.
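The Ward identity itself is an exact algebraic fact about resolvents of Hermitian matrices and is easy to verify numerically; the following sketch (independent of the paper's specific model) checks it for a random Hermitian matrix:

```python
# Numerical sanity check of the Ward identity
#   sum_b |G_ab|^2 = Im G_aa / eta
# for the resolvent G = (H - z)^{-1} of a Hermitian H at z = E + i*eta.
import numpy as np

rng = np.random.default_rng(0)
N = 60
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / np.sqrt(2 * N)        # Hermitian Wigner-type matrix

E, eta = 0.1, 0.05
G = np.linalg.inv(H - (E + 1j * eta) * np.eye(N))

lhs = np.sum(np.abs(G) ** 2, axis=1)         # sum_b |G_ab|^2, one value per a
rhs = np.imag(np.diag(G)) / eta              # Im G_aa / eta
assert np.allclose(lhs, rhs)
```

The identity follows from G − G^* = 2iη G G^*, which is why each fully summed G-edge index effectively trades a factor N for a factor 1/η.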

Improved estimates on Val(Γ): Wardable edges.
For the sake of transparency we briefly recall the combinatorial argument used in [34], which also provides the starting point for the refined estimate in the present paper. Compared to [34], however, we phrase the counting argument directly in the language of the single index graphs. We only aim to gain from the G-edges adjacent to vertices of degree two or three; for vertices of higher degree the most naive estimate |G_{ab}| ≲ 1 is already sufficient, as demonstrated in [34]. We collect the vertices of degree two and three in the set V_{2,3} and collect the G-edges adjacent to V_{2,3} in the set E_{2,3}. In [34, Section 4.3] a specific marking procedure on the G-edges of the graph is introduced that has the following properties. For each v ∈ V_{2,3} we put a mark on at most two adjacent G-edges in such a way that those edges can be estimated via (4.14b) while performing the a_v summation. In this case we say that the mark comes from the v-perspective. An edge may have two marks, coming from the perspective of each of its adjacent vertices. Later, marked edges will be estimated via (4.14b) while summing up a_v. After doing this for all of V_{2,3}, we call an edge in E_{2,3} marked effectively if it either (i) has two marks, or (ii) has one mark and is adjacent to only one vertex from V_{2,3}. While subsequently using (4.14b) in the summation of a_v for v ∈ V_{2,3} (in no particular order) on the marked edges (and estimating the remaining edges adjacent to v trivially), we can gain at least as many factors of ψ as there are effectively marked edges. Indeed, this follows simply from the fact that effectively marked edges are never estimated trivially during the procedure just described, no matter the order of vertex summation.

Fact 3. For each Γ ∈ G(p) there is a marking of edges adjacent to vertices of degree at most 3 such that there are at least ∑_{e∈IE(Γ)} (4 − deg(e))_+ effectively marked edges.
Proof. On the one hand we use Fact 1, more specifically the equality deg(e) = deg(u) = deg(v) for (u, v) = e ∈ IE(Γ). In [34] it was sufficient to estimate the value of each graph in G(p) by subsequently estimating all effectively marked edges using (4.14b). For the purpose of improving the local law at the cusp, however, we need to introduce certain operations on the graphs of G(p) which allow us to estimate the graph value to a higher accuracy. It is essential that during those operations we keep track of the number of edges we estimate using (4.14b). Therefore we now introduce a more flexible way of recording these edges. We first recall a basic definition [58] from graph theory.

Definition 4.4. For k ≥ 1 a graph Γ = (V, E) is called k-degenerate if any induced subgraph has minimal degree at most k.
It is well known that being k-degenerate is equivalent to the following sequential property: there exists an ordering v_1, …, v_n of the vertices such that for each m the degree of v_m in the subgraph induced by {v_1, …, v_m} is at most k. We provide a short proof for convenience. Proof. Suppose the graph is k-degenerate and let n := |V|. Then there exists some vertex v_n ∈ V such that deg(v_n) ≤ k, by definition. We now consider the subgraph induced by V' := V \ {v_n} and, by definition, again find some vertex v_{n−1} ∈ V' of degree deg_{[V']}(v_{n−1}) ≤ k. Continuing inductively we find a vertex ordering with the desired property. Conversely, assume there exists a vertex ordering such that (4.17) holds for each m. Let V' ⊂ V be an arbitrary subset and let m := max{l ∈ [n] | v_l ∈ V'}. Then the degree of v_m within V' is at most its degree within {v_1, …, v_m}, which is at most k, and the proof is complete.
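The sequential characterization suggests an algorithmic test: repeatedly remove a vertex of minimum degree and check that its degree never exceeds k at the time of removal. A minimal sketch (the function name is ours, not the paper's; simple undirected graphs without loops, which suffices for the illustration):

```python
# Sketch of the sequential characterization in Lemma 4.5: a graph is
# k-degenerate iff vertices can be removed one by one so that each
# removed vertex has degree <= k at the time of removal.
def is_k_degenerate(vertices, edges, k):
    """vertices: iterable of hashables; edges: iterable of 2-element
    frozensets {u, v}.  Greedy minimum-degree removal decides
    k-degeneracy for simple graphs (no loops, no multi-edges)."""
    verts = set(vertices)
    adj = {v: set() for v in verts}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    while verts:
        v = min(verts, key=lambda x: len(adj[x]))
        if len(adj[v]) > k:
            return False      # every remaining vertex has degree > k
        verts.remove(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return True

# A 4-cycle is 2-degenerate but not 1-degenerate.
C4 = [frozenset({i, (i + 1) % 4}) for i in range(4)]
print(is_k_degenerate(range(4), C4, 2), is_k_degenerate(range(4), C4, 1))
```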
The reason for introducing this graph theoretical notion is that it is equivalent to the possibility of estimating edges effectively using (4.14b). A subset GE' of G-edges in Γ ∈ G can be fully estimated using (4.14b) if and only if there exists a vertex ordering such that we can subsequently remove vertices in such a way that in each step at most two edges from GE' are removed. Due to Lemma 4.5 this is the case if and only if Γ' = (V, GE') is 2-degenerate. For example, the graph Γ_eff = (V, GE_eff) induced by the effectively marked G-edges GE_eff is a 2-degenerate graph. Indeed, each effectively marked edge is adjacent to at least one vertex which has degree at most 2 in Γ_eff: vertices of degree 2 in (V, GE) are trivially at most of degree 2 in Γ_eff, and vertices of degree 3 in (V, GE) are also at most of degree 2 in Γ_eff, as they can only be adjacent to 2 effectively marked edges. Consequently any induced subgraph of Γ_eff has to contain some vertex of degree at most 2 and thereby Γ_eff is 2-degenerate. Proof. This follows immediately from Fact 3, the observation that (V, GE_eff) is 2-degenerate and the fact that subgraphs of 2-degenerate graphs remain 2-degenerate.
For each Γ ∈ G(p) we choose a Wardable subset GE^W(Γ) ⊂ GE(Γ) satisfying (4.18). At least one such set is guaranteed to exist by the lemma. For graphs with several possible such sets, we arbitrarily choose one and consider it permanently assigned to Γ. Later we will introduce certain operations on graphs Γ ∈ G(p) which produce families of derived graphs Γ' ∈ G' ⊃ G(p). During those operations the chosen Wardable subset GE^W(Γ) will be modified in order to produce eligible sets of Wardable edges GE^W(Γ'), and we will select one among those to define the Wardable subset of Γ'. We stress that the relation (4.18) on the Wardable set is required only for Γ ∈ G(p) but not for the derived graphs Γ'.
We now give a precise meaning to the vague bounds of (4.13), (4.14b). We define the N-exponent, n(Γ), of a graph Γ = (V, GE ∪ IE ∪ WE) as the effective N-exponent in its value-definition. We defer the proof of the following technical lemma to the Appendix.

Lemma 4.8.
For any c > 0 there exists some C > 0 such that the following holds. Let Γ = (V, GE ∪ IE ∪ WE) ∈ G be a graph with Wardable edge set GE^W ⊂ GE, at most |V| ≤ cp vertices and at most |GE| ≤ cp^2 G-edges. Then for each 0 < ε < 1 the bound (4.19) holds. Regarding the W-Est factor in (4.19), the reader should think of (4.19) as a heuristic inequality. Using Lemma 4.7, N^{−1/2} ≤ ψ ≤ 1, |V| = 2|IE| ≤ 2p and deg(e) ≥ 2 (from Fact 1), we thus find the a priori bound (4.20). For Γ ∈ G we denote the number of σ-cells in Γ by σ(Γ).
Next, we state a simple lemma estimating W-Est(Γ) for graphs in the restricted class Γ ∈ G(p). Proof. We introduce the short-hand notations IE_k := {e ∈ IE | deg(e) = k} and IE_{≥k} := ∪_{l≥k} IE_l. Starting from (4.19b) and Lemma 4.7 we find the bound (4.21). It remains to relate (4.21) to the number σ(Γ) of σ-cells in Γ. Since each interaction edge of degree two which is not a σ-cell has an additional weight pf attached to it, it follows from Fact 2 that |IE_2| − σ(Γ) ≤ p − |IE|. Therefore, from (4.21) and η/ρ ≤ C we obtain the claimed estimate.
Using Lemma 4.8 and √(η/ρ) ≤ σ_q, the estimate in Lemma 4.11 improves the previous bound (4.20) by a factor σ_q^{p−σ(Γ)} (ignoring the irrelevant factors). In order to prove (3.11c), we thus need to remove the −σ(Γ) from this exponent; in other words, we need to show that from each σ-cell we can multiplicatively gain a factor of σ_q. This is the content of the following proposition.
and all graphs Γ_σ and Γ' ∈ G' have exactly one σ-cell less than Γ.
Using Lemmas 4.8 and 4.11 together with the repeated application of Proposition 4.12 we are ready to present the proof of Theorem 3.7.
Proof of Theorem 3.7. We remark that the isotropic local law (3.11a) and the averaged local law (3.11b) are proven verbatim as in [34, Theorem 4.1]. We therefore only prove the improved bounds (3.11c)–(3.11d) in the remainder of the section. We recall (4.10) and partition the set of graphs G(p) = G_0(p) ∪ G_{≥1}(p) into those graphs G_0(p) with no σ-cells and those graphs G_{≥1}(p) with at least one σ-cell. For the latter group we then use Proposition 4.12 for some σ-cell to find the expansion (4.23), where the number of σ-cells is reduced by 1 for Γ_σ and each Γ' ∈ G' as compared to Γ. We note that the Ward estimate W-Est(Γ) from Lemma 4.11 together with Lemma 4.8 is already sufficient for the graphs in G_0(p). For those graphs G_1(p) with exactly one σ-cell the expansion in (4.23) is sufficient because σ ≤ σ_q and, according to (4.22), each Γ' ∈ G' has a Ward estimate which is already improved by σ_q. For the other graphs we iterate the expansion from Proposition 4.12 until no σ-cells are left.
It only remains to count the number of G-edges and vertices in the successively derived graphs to make sure that Lemma 4.8 and Proposition 4.12 are applicable and that the last two factors in (3.11c) come out as claimed. Since each of the σ(Γ) ≤ p applications of Proposition 4.12 creates at most 6p additional G-edges and one additional vertex, it follows that |GE(Γ')| ≤ Cp^2 and |V'| ≤ Cp in any successively derived graph Γ'. Finally, it follows from the last factor in Lemma 4.11 that for each e ∈ IE with deg(e) ≥ 5 we gain additional factors of N^{−1/2}. Since |IE| ≤ p, we easily conclude that if there are more than 4p G-edges, then each of them comes with an additional gain of N^{−1/2}. Now (3.11c) follows immediately after taking the pth root.
We turn to the proof of (3.11d). We first write out (pf)_a t_{ab} G_{ba} G_{ba} and can therefore, for even p, write the pth moment as the value of the graph Γ_0 = (V, GE ∪ IE) ∈ G which is given by p disjoint 2-cycles, where there are p/2 cycles of G-edges and p/2 cycles of G^*-edges. It is clear that (V, GE) is 2-degenerate, and since |GE| = 2p the Ward estimate follows. On the other hand, each of the p interaction edges in Γ_0 is a σ-cell and we can use Proposition 4.12 p times to obtain (3.11d), just as in the proof of (3.11c).

Proof of Proposition 4.12. It follows from the MDE that
which we use to further expand locally a term of the form G_{xa}G^*_{ay} for fixed a, x, y. To make the computation local we allow for an arbitrary random function f = f(W), which in practice encodes the remaining G-edges in the graph. A simple cumulant expansion shows (4.24), where ∂_α := ∂_{w_α} and we introduced the stability operator B := 1 − diag(|m|^2)S. The stability operator B appears from rearranging the equation obtained from the cumulant expansion to express the quantity E G_{xb}G^*_{by} f. In our graphical representation, the stability operator is a special edge between x and y, see (4.25). An equality like (4.25) is meant locally, in the sense that the pictures only represent subgraphs of the whole graph, with the empty, labelled vertices symbolizing those vertices which connect the subgraph to its complement. Thus (4.25) holds true for every fixed graph extending x, y consistently in all three graphs. The doubly drawn edge in (4.25) means that the external vertices x, y are identified with each other and the associated indices are set equal via a δ_{a_x,a_y} factor. Thus (4.25) should be understood as the equality in which the outside edges incident at the merged vertices x, y are reconnected to one common vertex in the middle graph. For example, in the picture (4.26) the vertex x is connected to the rest of the graph by two edges, and the vertex y by one. In order to represent (4.24) in terms of graphs we have to define a notion of differential edge. First, we define a targeted differential edge, represented by an interaction edge with a red ∂-sign written on top and a red-coloured target G-edge, to denote the collection of graphs in (4.27). The second picture in (4.27) shows that the target G-edge may be a loop; the definition remains the same. This definition extends naturally to G^*-edges and is exactly the same for G − M edges (note that this is compatible with the usual notion of derivative, as M does not depend on W).
Graphs with the differential signs should be viewed only as an intermediate simplifying picture; they really mean the collection of graphs indicated on the right hand side of (4.27). In other words, we introduced these graphs only to temporarily encode expressions with derivatives (e.g. the second term in the rhs. of (4.24)) before the differentiation is actually performed. We can then further define the action of an untargeted differential edge, according to the Leibniz rule, as the collection of graphs with the differential edge being targeted on all G-edges of the graph one by one (in particular, not only those in the displayed subgraph); see the example (4.28). Here the union is a union in the sense of multisets, i.e. it allows for repetitions in the resulting set (note that this is also compatible with the usual action of derivative operations).
The ⋯ symbol on the rhs. of (4.28) indicates that the targeted edge cycles through all G-edges in the graph, not only the ones in the subgraph. For example, if there are k G-edges in the graph, then the picture (4.28) represents a collection of 2k graphs arising from performing the differentiation, where f = f(W) represents the value of the G-edges outside the displayed subgraph. Finally, we introduce a differential edge which is targeted on all G-edges except for those in the displayed subgraph. This differential edge targeted on the outside will be denoted by ∂.
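The identity underlying all these differential edges is the resolvent derivative formula: perturbing H in a direction Δ gives dG = −G Δ G. A quick numerical sketch (a generic Hermitian direction Δ stands in for the single-entry perturbation of w_α; this is our illustration, not the paper's code):

```python
# Sketch of the identity behind the differential edges: for the resolvent
# G(H) = (H - z)^{-1}, the directional derivative along a perturbation
# Delta is  d/dt G(H + t*Delta)|_{t=0} = -G Delta G.
import numpy as np

rng = np.random.default_rng(1)
N = 40
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / np.sqrt(2 * N)
z = 0.3 + 0.5j

def G(M):
    return np.linalg.inv(M - z * np.eye(N))

Bdir = rng.standard_normal((N, N))
Delta = (Bdir + Bdir.T) / (2 * np.sqrt(N))   # normalized Hermitian direction

t = 1e-6
fd = (G(H + t * Delta) - G(H)) / t           # finite difference
exact = -G(H) @ Delta @ G(H)                 # -G Delta G
assert np.allclose(fd, exact, atol=1e-4)
```

Applying this with Δ the elementary matrix of a single entry w_α recovers the one-by-one targeting of G-edges encoded by (4.27) and (4.28).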
Regarding the value of the graph, we define the value of a collection of graphs as the sum of their values. We note that, for the collections of graphs encoded by the differential edges, this definition is also consistent with the usual differentiation.
Written in graphical form, (4.24) reads (4.29), where the last graph encodes the last terms in the final two lines of (4.24).
We worked out the example for the resolution of the quantity E G_{xa}G^*_{ay} f, but very similar formulas hold if the order of the fixed indices (x, y) and the summation index a changes in the resolvents, as well as for other combinations of the complex conjugates. In graphical language this corresponds to changing the arrows of the two G-edges adjacent to a, as well as their types. In other words, equalities like the one in (4.29) hold true for any other degree two vertex, but the stability operator changes slightly: in total there are 16 possibilities, four for whether the two edges are incoming or outgoing at a and another four for whether the edges are of type G or of type G^*. The general form of the stability operator is B = 1 − diag(m^{#_1} m^{#_2})R, where R = S if there is one incoming and one outgoing edge, R = T if there are two outgoing edges and R = T^t otherwise, and where #_1, #_2 represent complex conjugations if the corresponding edges are of G^* type. Thus, for example, the stability operator in a for G^*_{xa}G^*_{ya} is 1 − diag(m̄^2)T^t. Note that the stability operator at a vertex of degree two is exclusively determined by the type and orientation of the two G-edges adjacent to a. In the sequel the letter B will refer to the appropriate stability operator; we will not distinguish the 9 possibilities (R = S, T, T^t and m^{#_1}m^{#_2} = |m|^2, m^2, m̄^2) in the notation.
Moreover, all σ-cells in Γ, except possibly a σ-cell adjacent to a, remain σ-cells also in each Γ'.
Proof. As the proofs for all of the 9 cases of B-operators are almost identical, we prove the lemma for the case (4.29) for definiteness. We compare the value of the graph Γ with the graph on the lhs. of (4.29), i.e. the one in which the stability operator B is attached to the vertex a. We remind the reader that the displayed graphs only show a certain subgraph of the whole graph. The goal is to show that W-Est(Γ') ≤ (ρ + ψ + η/ρ + ψ_q)W-Est(Γ) for each graph Γ' occurring on the rhs. of (4.29). The forthcoming reasoning is based on comparing the quantities |V|, |GE^W|, |GE_{g−m}| and ∑_{e∈IE} deg(e)/2, which define the Ward estimate W-Est from (4.19b), between the graph Γ and the various graphs occurring on the rhs. of (4.29).
(a) We begin with the first graph and claim the corresponding bound on its Ward estimate. (b) Next, we consider the third and fourth graphs and claim the analogous bound. Here there is one more vertex (corresponding to an additional summation index), |V(Γ')| = |V(Γ)| + 1, whose effect in (4.19b) is compensated by one more interaction edge of degree 2, whence the N-exponent remains unchanged. The number of Wardable edges can be increased by one by setting GE^W(Γ') to be a suitable subset of GE^W(Γ)\{(x, a), (a, y)} ∪ {(x, b), (a, b), (a, y)}, which is 2-degenerate as the subset of a 2-degenerate graph together with two vertices of degree 2. The number of (g−m)-loops remains unchanged. (d) For the last graph in (4.29), i.e. the one where the derivative targets an outside edge, we claim the analogous bound. Here the argument, Γ', stands for a whole collection of graphs, but we essentially only have to consider two types: the derivative edge either hits a G-edge or a (g−m)-loop (cf. (4.27)). In both cases the N-size of W-Est remains constant, since the additional vertex is balanced by the additional degree two interaction edge. In both cases all four displayed edges can be included in GE^W(Γ'). So |GE^W| can be increased by 1 in the first case and by 2 in the second case, while the number of (g−m)-loops remains constant in the first case and is decreased by 1 in the second case. The claim follows directly in the first case, and from the compensating gain in the second case. (e) It remains to consider the second graph on the rhs. of (4.29) with the higher derivative edge. We claim that the analogous bound holds for each k ≥ 2. We prove the claim by induction on k, starting from k = 2. For any k ≥ 2 we write ∂^k = ∂^{k−1}∂. For the action of the last derivative we distinguish three cases: (i) action on an edge adjacent to the derivative edge, (ii) action on a non-adjacent G-edge and (iii) action on a non-adjacent (g−m)-loop.
Graphically this means the expansion (4.31). We ignored the case where the derivative acts on (a, y), since it is estimated identically to the first graph. We also neglected the possibility that the derivative acts on a g-loop, as this is estimated exactly as the last graph and the result is even better since no (g−m)-loop is destroyed. After performing the last derivative in (4.31) we obtain the graphs in (4.32), where we neglected the transposition of the third graph with u, v exchanged because this is equivalent with regard to the counting argument. First, we handle the second, third and fourth graphs in (4.32). In all these cases the set GE^W(Γ') is defined simply by adding all edges drawn in (4.32) to the set GE^W(Γ)\{(x, a), (a, y)}. The new set remains 2-degenerate since all these new edges are adjacent to vertices of degree 2. Compared to the original graph Γ, we have thus increased |GE^W| + |GE_{g−m}| by at least 1. We now continue with the first graph in (4.32), where we explicitly expand the action of another derivative (notice that this is the only graph where k ≥ 2 is essentially used). We distinguish four cases, depending on whether the derivative acts on (i) the b-loop, (ii) an adjacent edge, (iii) a non-adjacent edge or (iv) a non-adjacent (g−m)-loop; graphically we have (4.33). After performing the indicated derivative, the encoded graphs are those in (4.34), where we again neglected the version of the third graph with u, v exchanged. We note that both the first and the second graph in (4.33) produce the first graph in (4.34). Now we define how to obtain the set GE^W(Γ') from GE^W(Γ)\{(x, a), (a, y)} in each case. In the first graph of (4.34) we add all three non-loop edges to GE^W(Γ'), in the second graph we add both non-loop edges, and in the third and fourth graphs we add the non-looped edge adjacent to b as well as any two non-looped edges adjacent to a. Thus, compared to the original graph, the number |GE^W| + |GE_{g−m}| is at least preserved.
On the other hand the N -power counting is improved by N −1/2 . Indeed, there is one additional vertex b, yielding a factor N , which is compensated by the scaling factor N −3/2 from the interaction edge of degree 3.
To conclude the inductive step we note that additional derivatives (i.e. the action of ∂^{k−2}) can only decrease the Ward-value of a graph. Indeed, any single derivative can decrease the number |GE^W(Γ)| + |GE_{g−m}| by at most 1, by either differentiating a (g−m)-loop or differentiating an edge from GE^W. Thus the number |GE^W| + |GE_{g−m}| is decreased by at most k − 2, while the number |GE_{g−m}| is not increased. In particular, by choosing a suitable subset of Wardable edges, we can define GE^W(Γ') in such a way that |GE^W| + |GE_{g−m}| is decreased by exactly k − 2. But at the same time each derivative provides a gain of cN^{−1/2} ≤ ψ ≤ ψ + ψ_q, since the degree of the interaction edge is increased by one. Thus we obtain the bound just as claimed.
Lemma 4.13 shows that the insertion of the B-operator reduces the Ward estimate by at least ρ. However, this insertion does not come for free, since B^{−1} is generally not a uniformly bounded operator. For example, it follows from (2.2) that Im m = η|m|^2 + |m|^2 S Im m, and therefore (1 − diag(|m|^2)S)^{−1} is singular for small η, with Im m being the unstable direction. It turns out, however, that B is invertible on the subspace complementary to some bad direction b^{(B)}. At this point we distinguish two cases. If B has a uniformly bounded inverse, i.e. if ‖B^{−1}‖_{∞→∞} ≤ C for some constant C > 0, then we set P_B := 0. Otherwise we define P_B as the spectral projection operator onto the eigenvector b^{(B)} of B corresponding to the eigenvalue β with smallest modulus, where ⟨v, w⟩ := N^{−1} ∑_a v̄_a w_a denotes the normalized inner product and l^{(B)} is the corresponding left eigenvector, (B^* − β)l^{(B)} = 0; we set Q_B := 1 − P_B. Lemma 4.14. For all 9 possible B-operators in (4.30) it holds that ‖B^{−1}Q_B‖_{∞→∞} ≤ C for some constant C > 0, depending only on model parameters.
Proof. First we remark that it is sufficient to prove the bound (4.36) on B^{−1}Q_B as an operator on C^N with the Euclidean norm, i.e. ‖B^{−1}Q_B‖ ≤ C. For this insight we refer to [5, Proof of (5.28) and (5.40a)]. Recall that R = S, R = T or R = T^t, depending on which stability operator we consider (cf. (4.30)). We begin by considering the complex hermitian symmetry class and the cases R = T and R = T^t. We will now see that in this case B has a bounded inverse and thus Q_B = 1. Here we used ‖F(S)‖ ≤ 1, a general property of the saturated self-energy matrix F(S) that was first established in [6, Lemma 4.3] (see also [7, Eq. (4.24)] and [10, Eq. (4.5)]). Now we turn to the case R = S for both the real symmetric and complex hermitian symmetry classes. In this case B is the restriction to diagonal matrices of an operator T on matrices. All of these operators were covered in [10, Lemma 5.1] and thus (4.36) is a consequence of that lemma. Recall that the flatness (3.6) of S ensured the applicability of the lemma.
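The mechanism of Lemma 4.14 can be illustrated on a toy matrix (this is our sketch, not the operator from the paper): plant one eigenvalue of tiny modulus, so the full inverse blows up, while the inverse restricted to the complement of the spectral projection stays bounded.

```python
# Toy illustration of Lemma 4.14's mechanism: B has one eigenvalue beta
# of tiny modulus, so ||B^{-1}|| blows up, yet B^{-1}Q_B stays bounded on
# the complement of the spectral projection P_B onto the bad direction.
import numpy as np

rng = np.random.default_rng(2)
N = 30
eps = 1e-6                                   # the small eigenvalue beta
d = np.concatenate(([eps], rng.uniform(1.0, 2.0, N - 1)))
V = np.eye(N) + 0.05 * rng.standard_normal((N, N))  # well-conditioned basis
B = V @ np.diag(d) @ np.linalg.inv(V)

b = V[:, 0]                                  # right eigenvector: B b = eps*b
l = np.linalg.inv(V).T[:, 0]                 # left eigenvector: l^T B = eps*l^T
P = np.outer(b, l) / (l @ b)                 # spectral projection P_B
Q = np.eye(N) - P                            # complementary projection Q_B

assert np.linalg.norm(np.linalg.inv(B)) > 1e4      # full inverse is huge
assert np.linalg.norm(np.linalg.inv(B) @ Q) < 1e2  # bounded off b^{(B)}
```

This is exactly why the proof inserts 1 = P_B + B B^{−1}Q_B: the unbounded part of B^{−1} is confined to the rank-one direction b^{(B)}, which is then treated by an explicit calculation.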
We will insert the identity 1 = P_B + B B^{−1}Q_B, performing an explicit calculation for the P_B component while using the boundedness of B^{−1}Q_B in the other component. We are thus left with studying the effect of inserting B-operators and suitable projections into a σ-cell. To include all possible cases with regard to edge-direction and edge-type (i.e. G or G^*), in the pictures below we neither indicate directions of the G-edges nor their type, but implicitly allow all possible assignments. We recall that both the R-interaction edge as well as the relevant B-operators (cf. (4.30)) are completely determined by the types of the four G-edges as well as their directions. To record the type of the inserted B, P_B, Q_B operators, we call those inserted on the rhs. of the R-edge B', P'_B and Q'_B in the following graphical representations. Pictorially we first decompose the σ-cell subgraph of some graph as in (4.37), where we allow the vertices x, y to agree with z or w. In formulas, the insertion in (4.37) means the identity ∑_{ab} (pf)_a G_{ya}G_{xa} R_{ab} G_{bw}G_{bz} = ∑_{abc} (pf)_a G_{ya}G_{xa} (P_{ac} + Q_{ac}) R_{cb} G_{bw}G_{bz}, since P_{ac} + Q_{ac} = δ_{ac}. We first deal with the second graph in (4.37), whose treatment is independent of the specific weights, so we have already removed the weight information.

We insert the B′ operator as depicted and notice that, due to Lemma 4.14, the matrix K = (B^{-1})^t Q^t_B R assigned to the weighted edge in the last graph satisfies the entrywise bound |k_{ab}| ≤ cN^{-1} (the transpositions compensate for the opposite orientation of the participating edges). It follows from Lemma 4.13 that there is a finite collection of graphs G such that all Γ′ ∈ G satisfy W-Est(Γ′) ≤ W-Est(Γ), and all σ-cells in Γ except for the currently expanded one remain σ-cells in Γ′. We note that it is legitimate to compare the Ward estimate of Γ′ with that of Γ because, with respect to the Ward estimate, there is no difference between Γ and the modification of Γ in which the R-edge is replaced by a generic N^{-1}-weighted edge.
We now consider the first graph in (4.37) and repeat the process of inserting the projections P_B + Q_B on the other side of the R-edge to find (4.39), where we already neglected those weights which are of no importance for the bound. The argument for the second graph in (4.39) is identical to the one we used in (4.38), and we find another finite collection of graphs G such that (4.40) holds, where the weighted edge carries the weight matrix K = P^t_{B′} R Q_B B^{-1}, which according to Lemma 4.14 indeed scales like |k_{ab}| ≤ cN^{-1}. The graphs Γ′ ∈ G again satisfy W-Est(Γ′) ≤ W-Est(Γ), and all σ-cells in Γ except for the currently expanded one remain σ-cells in Γ′.
It remains to consider the first graph in (4.39) in the situation where B does not have a bounded inverse. We compute the weight matrix of the P^t_B R P_{B′} interaction edge as in (4.41), which we separate into a scalar factor and a remaining matrix, cf. (4.42). Note that the B and B′ operators are not completely independent. According to Fact 1, for an interaction edge e = (u, v) associated with the matrix R, the number of incoming G-edges at u is the same as the number of outgoing G-edges at v, and vice versa. Thus, according to (4.30), the B-operator at u comes with an S if and only if the B′-operator at v also comes with an S. Furthermore, if the B-operator comes with a T, then the B′-operator comes with a T^t, and vice versa. The distribution of the conjugation operators to B, B′ in (4.30), however, can be arbitrary. We now use the fact that the scalar factor in (4.42) can be estimated by |σ| + ρ + η/ρ (cf. Lemma A.2). Summarising the arguments (4.37)-(4.42) above, the proof of Proposition 4.12 is complete.

Cusp Universality
The goal of this section is the proof of cusp universality in the sense of Theorem 2.3. We consider the Ornstein-Uhlenbeck (OU) flow (5.1), which preserves expectation and variance. In our setting of deformed Wigner-type matrices the covariance operator S: C^{N×N} → C^{N×N} is given by S[R] = E[(H − A)R(H − A)]. The OU process effectively adds a small Gaussian component to H_t along the flow in the sense that H_t = A + e^{-t/2}(H − A) + U_t in distribution, with U_t an independent centred Gaussian matrix with covariance Cov(U_t) = (1 − e^{-t})S. Due to the fullness Assumption (B) there exist small constants c, t_* such that U_t can be decomposed as U_t = √(ct) U + Ũ_t with U ∼ GUE and Ũ_t Gaussian and independent of U for t ≤ t_*. Thus there exists a Wigner-type matrix H̃_t such that (5.2) holds, i.e. H_t = H̃_t + √(ct) U in distribution with U independent of H̃_t. Note that we do not define H̃_t as a stochastic process, and we will use the representation (5.2) only for one carefully chosen t = N^{-1/2+ε}. We note that H_t satisfies the assumptions of our local law from Theorem 2.5, which thus applies to its resolvent. In particular, by setting t = 0, M_0 well approximates the resolvent of the original matrix H and ρ_0 = ρ is its self-consistent density. Note that the Dyson equation of H_t, and hence its solution as well, is independent of t, since it is entirely determined by the first and second moments of H_t, which are the same A and S for any t. Thus the resolvent of H_t is well approximated by the same M_0 and the self-consistent density of H_t is given by ρ_0 = ρ for any t. While H and H_t have identical self-consistent data, structurally they differ in a key point: by (5.2), H_t has a small Gaussian component. Thus the correlation kernel of the local eigenvalue statistics has a contour integral representation using a version of the Brézin-Hikami formulas, see Sect. 5.2. The contour integration analysis requires a Gaussian component of size ct ≫ N^{-1/2} and a very precise description of the eigenvalues of H̃_t just above the scale of the eigenvalue spacing.
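Entrywise, the variance-preserving property of the OU regularization and the extraction of a GUE component reduce to scalar identities. The following sketch (our own illustration; function names are not from the paper) checks the two facts used above: the step a + e^{-t/2}(x − a) + u_t with Var(u_t) = (1 − e^{-t})·Var(x) keeps mean and variance fixed, and a Gaussian piece of variance ct can be split off whenever ct ≤ 1 − e^{-t}.

```python
import math

def ou_step_moments(a, var, t):
    # H_t = A + exp(-t/2)(H - A) + U_t with Cov(U_t) = (1 - exp(-t)) S
    # reduces, entrywise, to this scalar mean/variance computation.
    decay = math.exp(-t)
    mean_t = a                                   # centred part keeps the mean A
    var_t = decay * var + (1.0 - decay) * var    # OU noise restores the variance
    return mean_t, var_t

for t in [0.01, 0.1, 1.0]:
    m, v = ou_step_moments(0.7, 1.3, t)
    assert abs(m - 0.7) < 1e-12 and abs(v - 1.3) < 1e-12

# A GUE component of variance c*t can be split off from U_t whenever
# c*t <= 1 - exp(-t); e.g. c = 1/2 works for all t <= 1.
for t in [0.01, 0.1, 1.0]:
    assert 0.5 * t <= 1.0 - math.exp(-t)
```

The decomposition bound holds since 1 − e^{-t} ≥ t − t²/2 ≥ t/2 on [0, 1].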
This information will come from the optimal rigidity, Corollary 2.6, and the precise shape of the self-consistent density of states of H̃_t. The latter will be analysed in Sect. 5.1, where we describe the evolution of the density near the cusp under an additive GUE perturbation √s U. We need to construct H̃_t with a small gap carefully, so that after a relatively long time s = ct the matrix H̃_t + √(ct) U develops a cusp exactly at the right location. In fact, the process has two scales in the shifted variable ν = s − ct, which indicates the time relative to the cusp formation. It turns out that the locations of the edges typically move linearly in ν, while the length of the gap itself scales like (−ν)_+^{3/2}, i.e. it varies much more slowly, and we need to fine-tune the evolution of both. To understand this tuning process, we fix t = N^{-1/2+ε} and consider the matrix flow s → H̃_t(s) := H̃_t + √s U for any s ≥ 0, and not just for s = ct. It is well known that the corresponding self-consistent densities are given by the semicircular flow. Equivalently, these densities can be described by the free convolution of ρ_t with a scaled semicircular distribution. In short, the self-consistent density of H̃_t(s) is given by ρ^fc_s := ρ_t ⊞ √s ρ_sc, where we omit t from the notation ρ^fc_s since we consider t fixed. In particular we have ρ^fc_0 = ρ_t, the density of H̃_t, and ρ^fc_{ct} = ρ, the density of H̃_t + √(ct) U as well as that of H. Hence, as a preparation for the contour integration, in Sect. 5.1 we need to describe the cusp formation along the semicircular flow. Before going into details, we describe the strategy.
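The semicircular flow appearing here is characterized by the subordination relation m^fc_s(z) = m_*(z + s m^fc_s(z)) for the Stieltjes transforms (cf. (5.3a) below), which can be solved numerically by fixed-point iteration. As a sanity-check sketch (ours, not part of the paper's argument): starting from the semicircle law itself, the free convolution with √s ρ_sc must again be a semicircle with variance increased by s, which the code verifies at one spectral parameter.

```python
import cmath

def m_sc(z, v=1.0):
    """Stieltjes transform of the semicircle law of variance v:
    the root of v*m^2 + z*m + 1 = 0 with positive imaginary part."""
    r = cmath.sqrt(z * z - 4 * v)
    m = (-z + r) / (2 * v)
    return m if m.imag > 0 else (-z - r) / (2 * v)

def m_fc(z, s, m_star, iters=200):
    """Fixed-point iteration for the subordination relation
    m_s(z) = m_*(z + s*m_s(z)); contractive for Im z of order one."""
    m = m_star(z)
    for _ in range(iters):
        m = m_star(z + s * m)
    return m

# semicircle(variance 1) free-convolved with sqrt(s)*rho_sc
# equals semicircle(variance 1 + s):
z, s = 0.3 + 1.0j, 0.7
assert abs(m_fc(z, s, m_sc) - m_sc(z, 1.0 + s)) < 1e-10
```

The closed-form comparison works because m = m_*(z + s m) with m_* solving m_* = −1/(w + m_*) forces m = −1/(z + (1 + s)m), the self-consistent equation of the variance-(1 + s) semicircle.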
Since in the sequel the densities ρ^fc_s and their local minima and gaps will play an important role, we introduce the convention that, for the remainder of Sect. 5, properties of the original density ρ always carry ρ as a superscript. In particular, the points c, e_±, m and the gap size Δ from (2.4) and Theorem 2.3 will from now on be denoted by c^ρ, e^ρ_±, m^ρ and Δ^ρ. A superscript ρ never denotes a power.
Proof strategy. First we consider case (i), when ρ, the self-consistent density associated with H, has an exact cusp at the point c^ρ ∈ R. Note that c^ρ is also a cusp point of the self-consistent density of H_t for any t.
We set t := N^{-1/2+ε} and define the functions ν ↦ Δ(ν) and ν ↦ ρ_min(ν) for any ν ≥ 0. For s < ct denote the gap in the support of ρ^fc_s close to c^ρ by [e^-_s, e^+_s] and its length by Δ_s := e^+_s − e^-_s. In Sect. 5.1 we will prove that if ρ has an exact cusp at c^ρ as in (2.4a), then ρ^fc_s has a gap of size Δ_s ≈ Δ(ct − s) and, in particular, ρ_t = ρ^fc_0 has a gap of size Δ_0 ≈ Δ(ct) ∼ t^{3/2}, depending only on c, t and γ. The distance of c^ρ from the gap is ≈ const·t. This overall shift will be relatively easy to handle, but it must be tracked very precisely, since the gap changes much more slowly than its location. For s > ct with s − ct = O(1) we will similarly prove that ρ^fc_s no longer has a gap close to c^ρ but a unique local minimum at m_s of size ρ^fc_s(m_s) ≈ ρ_min(s − ct). Now we consider the case where ρ has no exact cusp but a small gap of size Δ^ρ > 0. We parametrize this gap length via a parameter t^ρ > 0 defined by Δ^ρ = Δ(t^ρ). It follows from the associativity (5.3b) of the free convolution that ρ_t has a gap of size Δ_0 ≈ Δ(ct + t^ρ). Finally, the third case is where ρ has a small local minimum of size ρ(m^ρ) > 0. We parametrize it as ρ(m^ρ) = ρ_min(t^ρ) with 0 < t^ρ < ct; then it follows that ρ_t has a gap of size Δ_0 ≈ Δ(ct − t^ρ). Note that these conclusions follow purely from the considerations in Sect. 5.1 for exact cusps and the associativity of the free convolution. In both almost-cusp cases t^ρ should be interpreted as a time (or reverse time) to the cusp formation.
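The bookkeeping of scales in this strategy can be recorded as exponent arithmetic in N. The following sketch (our own illustration) checks that t = N^{-1/2+ε} produces a gap Δ_0 ∼ t^{3/2} = N^{-3/4+3ε/2} sitting just above the cusp eigenvalue spacing N^{-3/4}, and that the rescaling z → ct γ z/(Δ_0 N^{1/4}) used later in Sect. 5.3 (with γ of order one) maps the contour variable to scale N^{-ε/2}.

```python
from fractions import Fraction as F

eps = F(1, 100)                 # the small exponent eps in t = N^(-1/2+eps)
t = F(-1, 2) + eps              # exponent of t
gap = F(3, 2) * t               # Delta_0 ~ t^(3/2)  ->  N^(-3/4+3*eps/2)
assert gap == F(-3, 4) + F(3, 2) * eps

# The gap exceeds the cusp eigenvalue spacing N^(-3/4):
spacing = F(-3, 4)
assert gap > spacing

# Exponent of the rescaling factor t/(Delta_0 * N^(1/4)):
scale = t - gap - F(1, 4)
assert scale == -eps / 2
```

Exact rational arithmetic avoids any floating-point ambiguity in the exponent cancellations.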
In the final part of the proof, in Sects. 5.2-5.3, we will write the correlation kernel of H̃_t + √(ct) U as a contour integral purely in terms of the mesoscopic shape parameter γ and the gap size Δ_0 of the density ρ_t associated with H̃_t. If Δ_0 ≈ Δ(ct), then the gap closes after time s ≈ ct and we obtain a Pearcey kernel with parameter α = 0. If Δ_0 ≈ Δ(ct + t^ρ) and t^ρ ∼ N^{-1/2}, then the gap does not quite close at time s = ct and we obtain a Pearcey kernel with α > 0, while for Δ_0 ≈ Δ(ct − t^ρ) with t^ρ ∼ N^{-1/2} the gap at time s = ct has been transformed into a tiny local minimum and we obtain a Pearcey kernel with α < 0. The precise value of α in terms of Δ^ρ and ρ(m^ρ) is given in (2.6). Note that, as an input to the contour integral analysis, in all three cases we use the local law only for H̃_t, i.e. in a situation where there is a small gap in the support of ρ_t, given by Δ_0 defined as above in each case.

Free convolution near the cusp.
In this section we quantitatively investigate the free semicircular flow before and after the formation of a cusp. We first establish the exact rate at which a gap closes to form a cusp, and the rate at which the cusp is transformed into a non-zero local minimum. Fig. 1. (a) illustrates the evolution of ρ^fc_s along the semicircular flow at two times 0 < s < t_* < s′ before and after the cusp; we recall that ρ_* = ρ^fc_0 and ρ = ρ^fc_{t_*}. (b) shows the points ξ_s(e^±_s) as well as their distances to the edges e^±_0. We now suppose that ρ_* is a general density with a small spectral gap [e^*_-, e^*_+] whose Stieltjes transform m_* can be obtained from solving a Dyson equation. Let ρ_sc(x) := √((4 − x²)_+)/(2π) be the density of the semicircular distribution and let s ≥ 0 be a time parameter. The free semicircular convolution ρ^fc_s of ρ_* with √s ρ_sc is then defined implicitly via its Stieltjes transform, m^fc_s(z) = m_*(z + s m^fc_s(z)). (5.3a) It follows directly from the definition that the flow s ↦ m^fc_s is associative in the sense that m^fc_{s+s′}(z) = m^fc_s(z + s′ m^fc_{s+s′}(z)), s, s′ ≥ 0. (5.3b) Figure 1a illustrates the quantities in the following lemma. We state the lemma for scDOSs from arbitrary data pairs (A_*, S_*) satisfying the conditions in [10], i.e.
for any self-adjoint R = R * and some constants c, C > 0.
Furthermore, m_s can approximately be found by solving a simple equation; in particular, we obtain the size of the initial gap, cf. (5.5c). Proof. Within the proof of the lemma we rely on the extensive shape analysis from [10]. We do so not only for the density ρ_* = ρ^fc_0 and its Stieltjes transform, but also for ρ^fc_s and its Stieltjes transform m^fc_s for 0 ≤ s ≤ 2t_*. The results from [10] apply here as well, since m^fc_s(z) = m_*(ξ_s(z)) can also be realized as the solution to the Dyson equation with the perturbed self-energy S_* + s S_GUE. Since t_* ≪ 1, the shape analysis from [10] thus also applies to ρ^fc_s for any s ∈ [0, 2t_*]. We begin with part (i). Set ν := s − t_*; then for 0 ≤ ν ≤ t_* we want to find x_ν such that ρ^fc_s has a local minimum at m_s := c_* + x_ν near c_*, i.e.
We now turn to the proof of (5.5). It follows from the analysis in [10] that ρ^fc_s exhibits either a small gap, a cusp or a small local minimum close to c_*. It follows from (i) that a cusp is transformed into a local minimum, and that a local minimum cannot be transformed back into a cusp along the semicircular flow. Therefore the support of ρ^fc_s has a gap of size Δ_s := e^+_s − e^-_s between the edges e^±_s for s < t_*. Evidently e^-_{t_*} = e^+_{t_*} = c_*, e^+_0 − e^-_0 = Δ_0 and e^±_0 = e^*_±. For s > 0 we differentiate (5.3a) to obtain (m^fc_s)′(z)/(1 + s (m^fc_s)′(z)) = m_*′(z + s m^fc_s(z)) and conclude m_*′(ξ_s(e^±_s)) = 1/s (5.8) by considering the limit z → e^±_s and the fact that ρ^fc_s has a square root edge (for s < t_*), hence (m^fc_s)′ blows up at this point. Denoting the d/ds derivative by a dot, we can thus conclude that ė^±_s = −m^fc_s(e^±_s). This implies that the gap as a whole moves with linear speed (for non-zero m^fc_s(e^±_s)) and, in particular, that the distance of the gap of ρ_* to c_* is an order of magnitude larger than the size of the gap. It follows that the size of the gap shrinks as Δ̇_s = ė^+_s − ė^-_s = −[m^fc_s(e^+_s) − m^fc_s(e^-_s)]. We now use the precise shape of ρ^fc_s close to e^±_s according to (2.4b), which is given by (5.9). Using (5.9), we compute Δ̇_s as in (5.10), where the (1 + O((t_* − s)^{1/3})) factor in (5.9) encapsulates two error terms; both are due to the fact that the shape factor γ_s of ρ^fc_s from (2.4b) is not exactly the same as γ, i.e. the one for s = t_*. To track this error in γ we go back to [10]. First, |σ| in [10, Eq. (7.5a)] is of size (t_* − s)^{1/3} because σ vanishes at s = t_* and is 1/3-Hölder continuous according to [10, Lemma 10.5]. Secondly, according to [10, Lemma 10.5], the shape factor (which is directly related to γ in the present context) is also 1/3-Hölder continuous, and therefore the shape factors of ρ_* at e^±_0 are at most multiplicatively perturbed by a factor of (1 + O((t_* − s)^{1/3})).
By solving the differential equation (5.10) with the condition Δ_{t_*} = 0, the claim (5.5c) follows.
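The t_*^{3/2} gap asymptotics can be illustrated in a toy model: if the gap length obeys Δ̇_s = −Δ_s^{1/3} (a caricature of (5.10) in which the cube root reflects the cusp-shape of the density and all constants and lower-order corrections are dropped; this normalization is ours), then Δ_s = ((2/3)(t_* − s))^{3/2}. The following sketch integrates the model ODE by forward Euler and compares with the closed form.

```python
def exact_gap(s, t_star):
    # closed-form solution of d/ds Delta = -Delta^(1/3), Delta(t_star) = 0
    return (2.0 * (t_star - s) / 3.0) ** 1.5

t_star, h = 1.0, 1e-5
delta = exact_gap(0.0, t_star)        # initial gap Delta_0 ~ t_star^(3/2)
s = 0.0
while s < 0.5 * t_star - h / 2:       # forward Euler up to s = t_star/2
    delta -= h * delta ** (1.0 / 3.0)
    s += h
assert abs(delta - exact_gap(0.5 * t_star, t_star)) < 1e-4
```

Separation of variables gives (3/2)Δ^{2/3} = t_* − s, i.e. the gap length varies like (t_* − s)^{3/2} while its endpoints move linearly, matching the two-scale picture described above.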
Besides the asymptotic expansions for the gap size and the local minimum, we also require some quantitative control on the location of ξ_{t_*}(c_*), as defined in (5.3a), and on slight perturbations thereof within the spectral gap [e^*_-, e^*_+] of ρ_*. We remark that the point ξ_* := ξ_{t_*}(c_*) plays a critical role in the contour integration in Sect. 5.2, since it will be the critical point of the phase function. From (5.5c) we recall that the gap size scales as t_*^{3/2}, which makes it natural to compare distances on that scale. In the regime t ≪ t_*, all of the following estimates thus identify points very close to the centre of the initial gap.
Lemma 5.2. Suppose that we are in the setting of Lemma 5.1. Then ξ_{t_*}(c_*) is very close to the centre of [e^*_-, e^*_+] in the sense of (5.11). Proof. We begin with proving (5.11a). For s < t_* we denote the distance of ξ_s(e^±_s) to the edges e^±_0 by D^±_s := ±(e^±_0 − ξ_s(e^±_s)), cf. Fig. 1b. By differentiating m_*′(ξ_s(e^±_s)) = 1/s from (5.8) we obtain an equation for Ḋ^±_s, and a complementary relation by differentiating (5.3a). We then consider z = e^±_s + iη with η → 0 and compute from (5.9), for any s < t_*, the resulting asymptotics.
Due to the determinantal structure we can freely conjugate the kernel and thereby redefine the correlation kernel. This redefinition K̃^t_N does not agree pointwise with the previous definition K^t_N, but it gives rise to the same determinant, and in particular to the same k-point correlation function. Here b is the base point chosen in Theorem 2.3. The central result concerning the correlation kernel is the following proposition.

Proposition 5.3. Under the assumptions of Theorem 2.3, the rescaled correlation kernel
around the base point b chosen in (2.6) converges uniformly to the Pearcey kernel from (2.5). Here R is an arbitrarily large threshold, c > 0 is a universal constant, C > 0 is a constant depending only on the model parameters and R, and α is chosen according to (2.6).
Proof. We first split the contour ϒ into two parts, one encircling all eigenvalues λ_i to the left of ξ = b + ct M(b), and one encircling all eigenvalues λ_i to the right of ξ; this does not change the value of K̃^t_N. We then move the vertical contour so that it crosses the real axis at ξ. This again does not change the value of K̃^t_N, as the only pole is the one in z, whose residue can be computed explicitly. We now perform the linear change of variables z → ξ + Δ_0 z, w → ξ + Δ_0 w in (5.17) to transform the contours accordingly. Here Δ_0 := e^+_0 − e^-_0 denotes the length of the gap [e^-_0, e^+_0] in the support of ρ_t. From Lemma 5.1, applied with ρ_* = ρ_t and t_* = ct, we infer Δ_0 ∼ t^{3/2} ∼ N^{-3/4+3ε/2}; this relation is also used to obtain (5.19). We begin by analysing the deterministic variant f̃ of the phase function f. We separately analyse the large- and small-scale behaviour of f̃(z). On the one hand, using the 1/3-Hölder continuity of u ↦ M_t(u) and eq. (5.5c), we conclude the large-scale asymptotics of f̃. We now turn to the small-scale asymptotics for |z| ≪ 1. We first specialize Lemmas 5.1 and 5.2 to ρ_* = ρ_t and collect the necessary conclusions in the following lemma.
Furthermore, in all three cases ξ is very close to the centre of the gap in the support of ρ_t, in the sense obtained by using the precise shape (5.9) (with s = 0) of ρ_t close to the edges e^±_0. Proof. Just as in (5.24) we have the expansion of g, and it thus follows that, for some small δ > 0, the small-scale cone sections are as in Fig. 2c. For large z it also follows from (5.20), together with (5.25) and (5.23), that for some large R we have Ω^>_0, Ω^>_4 ⊂ Ω_+ and Ω^>_{±2} ⊂ Ω_-, in agreement with Fig. 2a. We denote the connected component of Ω_± containing a set A by cc(A).
The claimed bounds on g now follow from Claims 4-5 and compactness. The claimed small-scale shape (5.26) follows by construction of the sets Ω^<_k.
From Lemmas 5.5 and 2.8 it follows that K^t_N, and thereby also K̃^t_N, remains invariant under the chosen contour deformation with overwhelming probability. Indeed, K^t_N only has poles where z = w or z = λ_i for some i. Due to self-adjointness and Lemma 5.5, z = λ_i can only occur if λ_i = ξ or dist(λ_i, supp ρ_t) ≳ 1. Both events have exponentially small probability as a consequence of Lemma 2.8, since for the former we have η_f(ξ) ∼ N^{-3/4+ε/6} according to (2.7), while dist(ξ, supp ρ_t) ∼ N^{-3/4+3ε/2}. Fig. 2c furthermore shows the cone sections Ω^>_k and Ω^<_k, where for clarity we do not indicate the precise area thresholds given by δ and R; we also do not specifically indicate Ω^<_k for k = ±1, ±2, ±3, as then cc(Ω^<_k) = cc(Ω^>_k), cf. Claims 4-5 in the proof of Lemma 5.5. For z on the deformed contours it follows from (5.26) that we can estimate the integrand as in (5.28). Indeed, for (5.28) we used (5.26) to obtain dist(ξ + Δ_0 u, supp ρ_t) ≳ t^{3/2}, so that |⟨G_t(u) − M_t(u)⟩| ≺ 1/(N t^{3/2}) follows from the local law (2.8b).
We now distinguish three regimes: |z| ≲ N^{-ε/2}, N^{-ε/2} ≲ |z| ≲ 1 and |z| ≳ 1, which we call microscopic, mesoscopic and macroscopic, respectively. We first consider the latter two regimes, as they only contribute small error terms.
Microscopic regime. We can now concentrate on the important regime where |z| + |w| ≲ N^{-ε/2}, and to do so we perform another change of variables z → ct γ z/(Δ_0 N^{1/4}) ∼ N^{-ε/2} z, w → ct γ w/(Δ_0 N^{1/4}) ∼ N^{-ε/2} w, which gives rise to two new contours
as depicted in Fig. 2b, and to the kernel (5.29). We only have to consider w, z with |w| + |z| ≲ 1 in (5.29), since t/(Δ_0 N^{1/4}) ∼ N^{-ε/2} and the other regime has already been covered in the previous paragraph, before the change of variables. We now separately estimate the errors stemming from replacing f(z) first by f̃(z), then by g(z) and finally by ±t^ρ z²/(2ct) − 4z⁴/27. We recall that Δ_0 ∼ t^{3/2} = N^{-3/4+3ε/2} from (5.21a), that t^ρ ≲ N^{-1/2} from the definition of t^ρ in (5.21a), and that t = N^{-1/2+ε}; these relations will be used repeatedly in the following estimates. According to (5.28), we obtain (5.31), which we recognize as the extended Pearcey kernel from (2.5).
It is easy to see that all error terms along the contour integration are uniform in x, y running over any fixed compact set. This proves that K̃^t_N(x, y) converges to K_α(x, y) uniformly for x, y in a compact set, which completes the proof of Proposition 5.3.

Green function comparison.
We will now complete the proof of Theorem 2.3 by demonstrating that the local k-point correlation function at the common physical cusp location τ_0 of the matrices H_t does not change along the flow (5.1). Together with Proposition 5.3 this completes the proof of Theorem 2.3. A version of this continuity of the matrix Ornstein-Uhlenbeck process with respect to the local correlation functions, valid in the bulk and at regular edges, constitutes the third step of the well-known three-step approach to universality [38]. We present this argument in the more general setup of correlated random matrices, i.e. in the setting of [34]. In particular, we assume that the cumulants of the matrix elements w_ab satisfy the decay conditions [34, Assumptions (C, D)], an assumption that is obviously fulfilled for deformed Wigner-type matrices.
We claim that the k-point correlation function p^(N)_k of H = H_0 and the corresponding k-point correlation function p^(N)_{k,t} of H_t stay close along the OU flow in the sense of (5.32), for ε > 0, t ≤ N^{-1/4-ε}, smooth functions F and some constant c = c(k, ε), where b is the physical cusp point. The proof of (5.32) follows the standard argument of computing t-derivatives of products of traces of resolvents G^(t) = (H_t − z)^{-1} at spectral parameters z with imaginary part just below the fluctuation scale of the eigenvalues, i.e. with Im z ≥ N^{-ζ} η_f(Re z). Since the procedure detailed e.g. in [38, Chapter 15] is well established and not specific to the cusp scaling, we keep our explanations brief. The only cusp-specific part of the argument is estimating products of such resolvent traces, and we claim the corresponding bound as long as t ≤ N^{-1/4-ε}, for some c = c(k, ε, ζ). For simplicity we first consider k = 1 and find from Itô's lemma that E dX_t/dt = E[ −(1/2) Σ_α w_α ∂_α X_t + (1/2) Σ_{α,β} κ(α, β) ∂_α ∂_β X_t ], (5.34) which we further compute using a standard cumulant expansion, as already done in the bulk regime in [34, Proof of Corollary 2.6] and in the edge regime in [11, Section 4.2]. We recall that κ(α, β) and, more generally, κ(α, β_1, …, β_k) denote the joint cumulants of the random variables w_α, w_β and w_α, w_{β_1}, …, w_{β_k}, respectively, which accordingly scale like N^{-1} and N^{-(k+1)/2}. Here Greek letters α, β ∈ [N]² are double indices. After the cumulant expansion, the leading term in (5.34) cancels, and the next-order contribution carries a factor N^{-3/2}, the size of the cumulant κ(α, β_1, β_2). With α = (a, b) and β_i = (a_i, b_i) we then estimate this contribution using the Ward identity and the bound max_α Σ_{β_1, β_2} |κ(α, β_1, β_2)| ≲ N^{-3/2}. We now use that, according to [34, Proof of Prop. 5.5], η ↦ η⟨G^(t)⟩_p and similar quantities are monotonically increasing in η; applying this with η = N^{-3/4+ζ}, the local law from Theorem 2.5 and the scaling of ρ at this η yield bounds of order N^{3ζ-1/4} and N^{3ζ}, respectively.
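The cumulant scaling κ(α, β_1, …, β_k) ∼ N^{-(k+1)/2} invoked above is simply homogeneity: cumulants of degree k + 1 of entries of size N^{-1/2} are of size N^{-(k+1)/2}. A toy check with an asymmetric centred Bernoulli entry (our illustration, for k = 2, i.e. the third cumulant):

```python
def third_cumulant_scaled(p, n):
    """Third cumulant (= third central moment) of a centred Bernoulli
    entry scaled by n^(-1/2), computed exactly by enumeration."""
    scale = n ** -0.5
    vals = [(scale * (1 - p), p), (scale * (-p), 1 - p)]
    return sum(v ** 3 * prob for v, prob in vals)

p = 0.3
kappa3 = p * (1 - p) * (1 - 2 * p)    # third cumulant of the unscaled entry
for n in [100, 10000]:
    # homogeneity: kappa3(n^(-1/2) b) = n^(-3/2) kappa3(b), the
    # N^(-(k+1)/2) scaling used in the cumulant expansion with k = 2
    assert abs(third_cumulant_scaled(p, n) - kappa3 * n ** -1.5) < 1e-12
```

For the centred Bernoulli variable taking the value 1 − p with probability p and −p with probability 1 − p, the third central moment is p(1 − p)(1 − 2p), which the enumeration reproduces after scaling.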
Since all other error terms can be handled similarly and give an even smaller contribution, it follows that |E dX_t/dt| ≲ N^{1/4+Cζ}, and a similar, more general bound holds for the time derivative of products of such factors in the case k > 1.
where V_r is the eigenmatrix corresponding to β and ⟨V_l, ·⟩ is a linear functional. Assume that for some constant λ > 1 the stated bounds are satisfied, where we denote the induced norms on linear operators, linear functionals and bilinear forms on C^{N×N} by the same symbol ‖·‖. Then there exists a universal constant c > 0 such that for any δ ∈ (0, 1) and any Y, X ∈ C^{N×N} with ‖Y‖ + ‖X‖ ≤ cλ^{-4} satisfying the quadratic equation (A.2), the conclusions (A.3)-(A.5) hold. Here, the constants implicit in the O-notation depend on c only.
Proof. We decompose Y as Y = Y_1 + Y_2. Then (A.2) takes a form to which the projections can be applied. We project both sides with Q, invert B and take the norm to conclude (A.6). Then we use the smallness of ‖Y_2‖, by properly choosing δ, and the definition of Y_1 to infer ‖Y_2‖ = λ^4 O(Θ²), where we introduced the scalar quantity Θ. Inserting this information back into (A.6) and using |Θ| + ‖X‖ = O(λ^{-3}) reveals (A.7). In particular, (A.5) follows. Plugging (A.7) into (A.6) and applying the projection P yields an identity for Θ. For a linear operator K_1 and a bilinear form K_2 with ‖K_1‖ + ‖K_2‖ ≤ 1 we use general bounds, valid for any R ∈ C^{N×N} and δ > 0, to find an estimate which proves (A.3).
Proof of Lemma 3.3. Due to the asymptotics Ψ_edge(λ) ∼ min{λ^{1/2}, λ^{1/3}} and Ψ_min(λ) ∼ min{λ², |λ|^{1/3}} for the shape functions, and the classification of singularities in (2.4), we can infer the following behaviour of the self-consistent fluctuation scale from Definition 2.4. There exists a constant c > 0, depending only on the model parameters, such that the following asymptotics hold. First, in the spectral bulk we trivially have η_f(τ) ∼ N^{-1} as long as τ is at least a distance of c > 0 away from the local minima of ρ. In the remaining cases we use the explicit shape formulae from (2.4) to compute η_f directly from Definition 2.4.
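At an exact cusp, where ρ(x) ≈ c|x|^{1/3}, the fluctuation scale comes out as η_f ∼ N^{-3/4}. Assuming the window-mass reading of Definition 2.4 (the window around the cusp carrying mass 1/N; this normalization and the constant c = 1 are illustrative, not the paper's), one can solve for η_f numerically:

```python
from math import isclose

def eta_f(n, c=1.0):
    """Fluctuation scale at an exact cusp for rho(x) = c*|x|^(1/3):
    solve the window-mass equation (integral over |x| < eta) = 1/n
    by bisection; the mass is (3c/2) * eta^(4/3)."""
    mass = lambda eta: 2.0 * c * 0.75 * eta ** (4.0 / 3.0)
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mass(mid) < 1.0 / n:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# closed form: eta_f = (2/(3n))^(3/4), i.e. eta_f ~ N^(-3/4) at the cusp
n = 10 ** 6
assert isclose(eta_f(n), (2.0 / (3.0 * n)) ** 0.75, rel_tol=1e-6)
```

The exponent 3/4 arises since the cube-root profile integrates to η^{4/3}, consistent with the η_f(ξ) ∼ N^{-3/4+ε/6} asymptotics used in Sect. 5.2.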
The claimed bounds in Lemma 3.3 now follow directly from (3.7e) and (A.8) by distinguishing the respective regimes.
Proof of Lemma 4.8. We start from (4.7) and estimate all vertex weights w^(v), interaction matrices R^(e) and weight matrices K^(e) trivially, where we choose 1/q = 1/q_1 + 1/q_2 in such a way that q_2 ≥ p/c. Since |E_3| ≤ 2 we can use (4.14a) to estimate the remaining factors, and the claimed bound thus follows for q ≥ 2q_2|GE|. By using (A.9) inductively m = |V| ≤ cp times, the estimate follows for some |c| ∼ 1, provided ‖B^{-1}‖_{∞→∞} ≥ C for some large enough constant C > 0.
Proof. Recall from the explanation after (4.42) that R′ = S, T, T^t if R = S, T^t, T, respectively. As we saw in the proof of Lemma 4.14, in the cases R = T, T^t in the complex Hermitian symmetry class the operators B and B′ both have bounded inverses. Since we assume that ‖B^{-1}‖_{∞→∞} is large, we thus have R = R′ = S, which also includes the real symmetric symmetry class. In particular, we also have ‖(B′)^{-1}‖_{∞→∞} ≥ C, and all subsequent statements hold simultaneously for B and B′. We denote by f^{(S)} the normalized eigenvector of F^{(S)} corresponding to its largest eigenvalue, and by P_{f^{(S)}} the orthogonal projection onto the f^{(S)} direction; the error terms are measured in the ‖·‖_∞-norm. For the expansions (A.11) we used that F^{(S)} has a spectral gap, for some constant c > 0 depending only on the model parameters. By using (A.11) we see that the lhs. of (A.10) becomes an explicit expression in (f^{(S)})², pf, F^{(S)} and |m|^{-2}, up to an error O(ρ). To complete the proof of the lemma we note that f^{(S)} = f/‖f‖ + O(η/ρ) according to [10, Eq. (5.10)].