Supersymmetric Cluster Expansions and Applications to Random Schr\"odinger Operators

We study discrete random Schr\"odinger operators via the supersymmetric formalism. We develop a cluster expansion that converges at both strong and weak disorder. We prove the exponential decay of the disorder-averaged Green's function and the smoothness of the local density of states, either at weak disorder and energies in proximity of the unperturbed spectrum, or at strong disorder and any energy. As an application, we establish Lifshitz-tail-type estimates for the local density of states, and thus localization at weak disorder.


Introduction
In this paper we consider discrete random Schrödinger operators on ℓ²(Z^D)⊗C^S of the form H_ω = H + γ V_ω, where H is a Hermitian translation-invariant hopping operator with fast-decaying matrix elements, γ is the disorder strength, and V_ω is a local random potential, i.e., V_ω u_x = ω_x u_x, {ω_x}_{x∈Z^D} being i.i.d. random variables with probability distribution ν(dω_x). The set S ⊂ N is a finite set of indices, which can represent, e.g., spin or sub-lattice "colour". Our work focuses on the study of the disorder-averaged Green's function via the supersymmetric (SUSY) formalism. The SUSY approach to random systems was pioneered in the physics literature by Parisi and Sourlas [42,43] and by Efetov [27], building on the seminal work of Schäfer and Wegner [56,45]. In the mathematics literature, the SUSY formalism has been rigorously applied to the study of random Schrödinger operators and random matrices, see [37,18,36,19,15,14,52,55,25,24,23,21,47,50,48,49].
The analysis carried out in this paper is inspired by [6], where SUSY and the renormalization group have been used to study a massless hierarchical model for disordered three-dimensional semimetals. One of the main obstacles in the control of SUSY integrals for disordered systems is the presence of complex reference Gaussian "measures". The extension to the non-hierarchical case requires a cluster expansion that exploits the strongly oscillatory nature of the SUSY integrals. The scope of this work is thus methodological. We develop a SUSY cluster expansion that is applicable in the regimes of both strong disorder and weak disorder with massive covariance, and we review some known results in the context of random Schrödinger operators.
Let us provide some preliminary definitions. If Λ ⊂ Z^D is a finite subset, we denote by H_{ω,Λ} the restriction of H_ω to ℓ²(Λ)⊗C^S with zero boundary conditions outside of Λ. The disorder-averaged Green's function at finite volume is the following C^{S×S}-valued function: G_Λ(x,y;z) := E_ω[(H_{ω,Λ} − z)^{-1}(x,y)], Im z ≠ 0, where E_ω denotes the expectation with respect to the product measure ⊗_{x∈Λ} ν(dω_x). The local density of states (LDOS) ρ(E) at energy E ∈ R can be defined as: ρ(E) := lim_{ǫ→0⁺} lim_{Λ↗Z^D} π^{-1} Im Tr_S G_Λ(0, 0; E + iǫ).
This definition of the LDOS can be suitably extended to a function of E ∈ C. The existence of the limit is a consequence of Birkhoff's ergodic theorem and of the properties of the Stieltjes transform, see [5] and references therein for more details.
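The definitions above can be made concrete in a few lines of code. The following is an illustrative sketch (not part of the paper; lattice size, disorder strength and sample count are arbitrary choices): a 1D Anderson model H_ω = H + γV_ω with trivial internal degrees of freedom (|S| = 1) and Gaussian i.i.d. potential, with a Monte Carlo estimate of the disorder-averaged Green's function and of the ǫ-smoothed LDOS.

```python
import numpy as np

# Illustrative sketch of the finite-volume definitions (not from the paper):
# a 1D Anderson model H_omega = H + gamma*V_omega with |S| = 1 and Gaussian
# i.i.d. potential; Monte Carlo estimate of E_omega[(H_omega - z)^{-1}] and
# of the eps-smoothed local density of states.
rng = np.random.default_rng(0)
L, gamma, E, eps = 40, 1.0, 0.5, 0.1
H = -2.0 * np.eye(L) + np.eye(L, k=1) + np.eye(L, k=-1)   # hopping part

def green(omega, z):
    """G_omega(z) = (H_{omega,Lambda} - z)^{-1}, zero b.c. outside the box."""
    return np.linalg.inv(H + gamma * np.diag(omega) - z * np.eye(L))

samples = [green(rng.standard_normal(L), E + 1j * eps) for _ in range(400)]
G_avg = np.mean(samples, axis=0)     # disorder average E_omega[...]
x0 = L // 2
ldos = G_avg[x0, x0].imag / np.pi    # eps-smoothed LDOS at the central site

# The averaged off-diagonal entries decay in |x - y| (cf. Theorems 3.2, 4.2).
print(ldos, abs(G_avg[x0, x0]), abs(G_avg[x0, x0 + 10]))
```

The exponential off-diagonal decay visible here is exactly the kind of statement that Theorems 3.2 and 4.2 make uniform in Λ and ǫ.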
We make the following assumptions on the disorder distribution: (H1) The measure ν is absolutely continuous with respect to the Lebesgue measure, is even and satisfies the finite-moment condition ∫|ω|^{|S|+1} ν(ω)dω < ∞; (H2) The Fourier transform ν̂ of the density satisfies suitable smoothness and decay properties that will be made precise later, in Definition 3.1 as (H2-I) and in Definition 4.1 as (H2-II).
As will be clear below, these assumptions are quite restrictive, yet they apply to a large class of disorder distributions. This class includes measures with unbounded support, like the Gaussian distribution and perturbations thereof, but also measures whose density is smooth and compactly supported. Since ν is even, w.l.o.g. we shall henceforth restrict to γ > 0. We prove the following results: (i) At strong disorder (γ ≫ 1) and at any energy E, G_Λ(x,y;E+iǫ) decays exponentially in |x−y|, uniformly in Λ, ǫ. This result is presented in Theorem 3.2 and is relevant for bounded energies, in particular for E ∈ σ(H). The proven bound also implies Wegner estimates [57], which in turn imply localization via finite-volume criteria [4].
(ii) At strong disorder (γ ≫ 1) and at any energy in a suitable complex strip around the real axis, ρ(E) is analytic. The result is presented in Corollary 3.4.
(iii) At weak disorder (γ ≪ 1) and at energies E away from the unperturbed spectrum, with δ := dist(E, σ(H)), G_Λ(x,y;E+iǫ) decays exponentially in |x−y|, uniformly in Λ, ǫ, provided γδ^{-1} is small enough. The result is presented in Theorem 4.2. (iv) The LDOS is analytic at energies such that γδ^{-1} ≤ O(1) and satisfies Lifshitz-tail-type estimates in the proximity of the unperturbed spectrum, that is, it is exponentially small in γ^{-1}δ. These types of bounds are well known [38] to imply localization via finite-volume criteria, see Remark 4.7 for details. The results are presented respectively in Corollary 4.4 and Theorem 4.5.
Comparison with previous results. (i) Similar decay estimates are already implied, e.g., by the probability estimates in [31] or by fractional moments [2], but the resulting bounds are not uniform down to ǫ = 0. In [51,52] a SUSY representation is used to obtain exponential decay, uniformly in Λ and ǫ; the technique is based on a complex deformation of the "oscillatory measure" and works for the Cauchy distribution of the disorder (or perturbations of it). (ii) Regularity properties of the LDOS at strong disorder have been extensively studied, see, e.g., [26,20,15]. In [15] the application of cluster expansion techniques to the SUSY representation of the LDOS was pioneered. The authors consider the Laplacian on Z^D in the presence of a random potential with uniform distribution. Our analysis relies on a different expansion and applies to a larger class of hopping Hamiltonians: due to the presence of internal degrees of freedom, a non-trivial quartic fermionic interaction appears in the SUSY representation of G_Λ, which is absent in [15] by the Pauli principle. (iii) The exponential decay of the disorder-averaged Green's function at weak disorder and at energies close to the spectrum was expected to hold true [54], but a proof was lacking to the best of our knowledge. (iv) The LDOS is expected to be analytic at weak disorder and away from the unperturbed spectrum [20]. In [14] Bovier studied the analyticity of the LDOS in a hierarchical model at weak Gaussian-distributed disorder and at energies in proximity of the "band edge". The work is based on SUSY and on the renormalization group analysis of the hierarchical Laplacian. Our result applies to any hopping Hamiltonian that has quadratic spectrum at the band edge, e.g., the discrete Laplacian ∆_{Z^D}. Localization in the Lifshitz-tail regime has already been established in [2,55,38,29]. In [2], Aizenman establishes localization up to δ ≳ γ^{1/(1+D)+ǫ}, ǫ > 0.
The result was improved by Wang [55] to δ ≳ γ, and later boosted by Klopp [38] up to δ ≳ γ^{1+D/(4D+4)}. Finally, in [29] Elgart proved localization up to δ ≥ Cγ² + γ^{4−ǫ}, with ǫ > 0 and optimal C > 0. The proof is based on the systematic resummation of the "tadpole graph" in the perturbative expansion of the Green's function, as analysed by Spencer in [53]. Notice that in [2] a very general class of disorder distributions is considered and localization is established at energies close to the unperturbed spectrum. On the other hand, in [55,38] and in [29] disorder distributions with semi-bounded and bounded support, respectively, are considered; furthermore, the result is established at energies close to the spectrum of the random Hamiltonian (in these cases δ ≡ dist(E, σ(H_ω))). Our Lifshitz-tail-type estimate in Theorem 4.5 applies to disorder measures with unbounded support, e.g., the Gaussian distribution, and allows one to prove localization in the proximity of the unperturbed spectrum, at energies up to δ ≥ γ|ln γ|^α, for some α sufficiently large. We believe that this is the best achievable result with a single-step SUSY cluster expansion.
The paper is organized as follows. In Section 2 we describe the machinery of the "superformalism" and we provide two SUSY representations of the disorder-averaged Green's function. In Section 3 we formalise the assumptions on ν̂ as (H2-I), and we prove the results (i) and (ii) above. In Section 4 we introduce the assumption (H2-II) on ν̂ and we prove the results (iii) and (iv). In the appendix we discuss in more detail examples of disorder distributions that satisfy (H2-I) and (H2-II).

SUSY formalism
After a brief introduction to normed Grassmann algebras and superfunctions, we state three main propositions that are crucial in our analysis. We conclude the section with Proposition 2.7: we provide two SUSY representations of G_Λ that will be used respectively at strong and weak disorder. The use of the super Fourier transform and the estimation in norm of the SUSY integrals are the novel features of our method.

Normed Grassmann Algebras
Grassmann algebras formalise the algebraic structure of anti-commuting variables. They are widely used in statistical mechanics and field theory [41]. It is useful to equip these algebras with a suitable norm: this will make the estimates in Sections 3 and 4 rather simple and intuitive. Previous examples of the use of norms in the context of Grassmann integration can be found in [30,8].
Definition 2.1. A normed Grassmann algebra G is a complex unital algebra that has anti-commuting generators and that is equipped with a norm ‖·‖ satisfying ‖fg‖ ≤ ‖f‖ ‖g‖ and ‖1‖ = 1, with 1 denoting the multiplicative identity in G. Notice that a normed Grassmann algebra is a Banach algebra.
Let X, Y be subsets of Λ; we use the boldface font to denote the Cartesian product with S, that is, we set X = X × S, Y = Y × S and so on. We introduce the Grassmann algebras G := C^{S×{±}} and, for any X ⊂ Λ, G_X := C^{X×{±}}. Since G ≅ G_{{x}} for any x ∈ Λ, the following discussion applies to G as well.
Let {ψ^ε_{x,σ}}_{ε=±, (x,σ)∈X} be the set of generators of G_X and let the set Λ × {±} be provided with a total order. We can write any element of G_X as f = Σ_{X⊆X×{±}} f_X Π′_{(x,σ,ε)∈X} ψ^ε_{x,σ}, the prime meaning that the product is ordered and 1_X denoting the indicator function of X. One can check that G_X equipped with the norm ‖f‖ := Σ_{X⊆X×{±}} |f_X| is a normed Grassmann algebra. It is useful to think of G_X as a set of functions of anti-commuting variables; if f ∈ G_X and if {ψ^ε_{x,σ}}_{ε=±, (x,σ)∈X} is the set of generators, we accordingly write f = f(ψ). Grassmann integration is defined by setting ∫dψ^ε_{x,σ} := ∂/∂ψ^ε_{x,σ}, where Grassmann derivation is the linear operation defined on monomials in the usual way, and where ψ^ε_{x′,σ′} really stands for the operation of left-multiplication by the corresponding Grassmann variable. For the sake of notation, we also set dψ_X := Π′_{(x,σ,ε)∈X} dψ^ε_{x,σ}.
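The abstract definitions above can be realized concretely. The following is an illustrative sketch (not the paper's code; the dictionary representation and function names are our own) of a normed Grassmann algebra with Berezin integration, for finitely many generators indexed by integers; the ℓ¹ norm of the coefficients is sub-multiplicative, so the algebra is indeed a Banach algebra.

```python
# Toy normed Grassmann algebra (illustrative sketch, not the paper's code).
# An element is a dict mapping a sorted tuple of generator indices to a
# complex coefficient; the norm is the l1 norm of the coefficients.

def gmul(f, g):
    """Grassmann product; the sign comes from reordering the generators."""
    h = {}
    for mf, cf in f.items():
        for mg, cg in g.items():
            if set(mf) & set(mg):
                continue                      # nilpotency: psi^2 = 0
            m = list(mf + mg)
            sign = 1
            for i in range(len(m)):           # sign of the sorting permutation
                for j in range(i + 1, len(m)):
                    if m[i] > m[j]:
                        sign = -sign
            key = tuple(sorted(m))
            h[key] = h.get(key, 0) + sign * cf * cg
    return {k: c for k, c in h.items() if c != 0}

def berezin(f, i):
    """Berezin integral int dpsi_i = d/dpsi_i (one of the two sign
    conventions; the opposite ordering convention flips signs)."""
    h = {}
    for m, c in f.items():
        if i in m:
            pos = m.index(i)
            key = m[:pos] + m[pos + 1:]
            h[key] = h.get(key, 0) + ((-1) ** pos) * c
    return h

def norm(f):
    return sum(abs(c) for c in f.values())

# Anticommutation: psi0*psi1 = -psi1*psi0.
psi0, psi1 = {(0,): 1.0}, {(1,): 1.0}
print(gmul(psi0, psi1), gmul(psi1, psi0))
# Fermionic Gaussian: exp(-a psi0 psi1) = 1 - a psi0 psi1 by nilpotency;
# the iterated Berezin integral extracts the top coefficient (up to sign).
a = 3.0
f = {(): 1.0, (0, 1): -a}
val = berezin(berezin(f, 0), 1)
print(val)
```

Nilpotency truncating the exponential to finitely many terms is the mechanism that makes all the Grassmann Gaussian integrals in this paper purely algebraic.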
Superfields are maps from subsets of Λ to supervectors, that is, Φ : X → S with Φ_x = (φ_x, ψ_x). Quadratic forms of the type (Φ⁺, AΦ⁻), where A_{x,y} ∈ C^{S×S}, will be widely used in the rest of the paper.
Superfunctions are maps f : S^X → G_X. It is clear that a superfunction can be decomposed following Eq. (5): where the functions f_X : R^{2X} → C will be called the coefficients of f. With abuse of notation we will write f((φ,0)) = f(Φ)|_{ψ=0} ≡ f_∅(φ). We also introduce some useful terminology. We say that a superfunction f : S^X → G_X belongs to the Banach space B(S^X, G_X) if all its coefficients belong to B(R^{2X}, C). The norm in B(S^X, G_X) is inherited from the norm in B(R^{2X}, C): The same terminology is adopted for the Fréchet space of Schwartz superfunctions S(S^X, G_X). Superintegration is denoted by: having set the measures dφ_X := ×_{x∈X} dφ_x with dφ_x := ×_{σ∈S} dφ⁺_{x,σ} dφ⁻_{x,σ} ≡ ×_{σ∈S} π^{-1} dφ_{x,σ,1} dφ_{x,σ,2}. Notice the bounds: which will be repeatedly used in the rest of the paper.

Three main propositions
The first identity that we present is the so-called supersymmetric replica trick, which is a way to write the entries of a matrix via super Gaussian integrals. This trick was first introduced by Efetov [27].
Proposition 2.2. Let A ∈ C^{X×X} be positive definite. The following representation holds true: where (A^{-1})_{x,y} ∈ C^{S×S} and ψ⁻_x ψ⁺_y is an S×S matrix of Grassmann variables. See, e.g., [28] for a proof. It is important to notice that the positive definiteness requirement is necessary in order for the integral to be well-defined.
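In this representation the fermionic integral supplies exactly the determinant that normalizes the bosonic Gaussian, which is why no determinant appears on the left-hand side. The bosonic half is ordinary Wick calculus and can be checked numerically; the following is our own sanity check (matrix size and sample count are arbitrary) that a complex Gaussian with weight e^{−φ†Aφ}, normalized by det(A)/π^n, has second moments (A^{-1})_{x,y}.

```python
import numpy as np

# Sanity check of the bosonic half of the replica trick (sketch, not from
# the paper): for A Hermitian positive definite, the normalized complex
# Gaussian with density det(A)/pi^n * exp(-phi^dag A phi) has second
# moments E[phi_x conj(phi_y)] = (A^{-1})_{x,y}.
rng = np.random.default_rng(1)
n = 3
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B @ B.conj().T + n * np.eye(n)        # Hermitian, positive definite
cov = np.linalg.inv(A)                    # target covariance A^{-1}
Lch = np.linalg.cholesky(cov)             # L with L L^dag = A^{-1}
N = 200_000
w = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
phi = Lch @ w                             # samples with E[phi phi^dag] = A^{-1}
emp = phi @ phi.conj().T / N
print(np.max(np.abs(emp - cov)))          # small Monte Carlo error
```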
In the second proposition we state the super Plancherel identity. This identity is the cornerstone of the weak-disorder SUSY cluster expansion that we present in Section 4. It is based on the theory of the super Fourier transform, which we shall briefly cover. We point out that Berezin had already considered the Fourier transform on Grassmann algebras in his pioneering work [9], see also [11,10]. Let f ∈ L¹(S^X, G_X). The super Fourier transform of f, denoted by f̂, is the function f̂ : S^X → G_X, where ξ = (κ, χ) ∈ S^X is another superfield. Some important properties of the Fourier transform on L¹(R^{2X}, C) and S(R^{2X}, C) carry over to L¹(S^X, G_X) and S(S^X, G_X). In particular, we remark that the super Fourier transform is invertible on the latter space, the inverse being the super Fourier transform with flipped sign. Proposition 2.3 (Super Plancherel identity). Let f ∈ S(S^X, G_X) and g ∈ L¹(S^X, G_X); then Proof. The proof is obvious once the inversion theorem for the super Fourier transform is established. To this end, it suffices to check the case with anti-commuting variables only. Since no convergence problem arises, we accomplish the goal by proving the following two identities: where χ, ψ, ψ′ are different (independent) Grassmann variables. To prove the first identity, we notice that by nilpotency the integrand can be expanded into a finite sum over the generators ψ^ε_{x,σ}, and the identity follows. To prove the second identity, it suffices to check it for f(ψ) = ψ_X. Above, we swapped the superintegrals because ‖f‖_{L¹(S^X,G_X)} and ‖g‖_{L¹(S^X,G_X)} are finite.
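On the commuting (bosonic) sector the super Plancherel identity reduces to the classical multiplication formula ∫ f ĝ = ∫ f̂ g, and inversion is the transform with flipped sign. A quick numerical check of these two classical facts (our own illustration; the grid, the Gaussian test functions and the convention f̂(k) = ∫ f(x) e^{−ikx} dx are arbitrary choices):

```python
import numpy as np

# On the commuting sector the super Fourier transform reduces to the usual
# one.  This sketch checks (i) the multiplication formula
# int f(x) ghat(x) dx = int fhat(k) g(k) dk, and (ii) that the inverse
# transform is the transform with flipped sign (up to 1/(2 pi)).
x = np.linspace(-15.0, 15.0, 2001)
dx = x[1] - x[0]

def ft(h, sign=-1):
    # discretized Fourier transform, evaluated on the same grid
    return np.array([np.sum(h * np.exp(sign * 1j * k * x)) * dx for k in x])

f = np.exp(-x**2 / 2)
g = np.exp(-((x - 1.0) ** 2))
lhs = np.sum(f * ft(g)) * dx             # int f(x) ghat(x) dx
rhs = np.sum(ft(f) * g) * dx             # int fhat(k) g(k) dk
finv = ft(ft(f), sign=+1) / (2 * np.pi)  # inversion with flipped sign
print(abs(lhs - rhs), np.max(np.abs(finv - f)))
```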
Last but not least, supersymmetry is a crucial property in the analysis of superintegrals. The last proposition we present is an instance of the well-known localization formula for supersymmetric functions [42].
Definition 2.4 (SUSY). Introduce the differential operator: We say that f is supersymmetric if it is Q−closed, that is, it is differentiable and satisfies For the sake of generality, we shall state the SUSY localization formula under weak decay assumptions.
See [25,8] for a proof. A more geometrical perspective on this statement can be found in [12,46,13].
The following lemma will be useful for the application of the SUSY localization formula.

Disorder-averaged Green's function
In the proposition below, two SUSY representations of the disorder-averaged Green's function are finally discussed. The first representation is well-known and has already been applied to the study of the Anderson model at strong disorder [27]. The second representation is new, to the best of our knowledge. It is particularly useful at weak disorder and at energies close to the spectrum of H. We call these two SUSY representations the "direct SUSY integral" and the "Fourier SUSY integral", respectively. We provide a preliminary definition. By assumption on the disorder distribution, see (H1), we have that ∫|ω|^{|S|+1} ν(ω)dω < ∞ and thus we can define the following function of a supervector: where z ∈ C and where ν̂ is the Fourier transform of the density. The condition on the moments of ν is necessary in order to have ν̂^{(n)} well-defined for n = 0, …, |S|. Notice in passing that F_z is supersymmetric. We can finally state the proposition.
Proof. We apply Proposition 2.2 to ±i(H_ω − E ∓ iǫ) ∈ C^{Λ×Λ}, which is positive definite for any ω (H_ω is Hermitian) provided that ǫ > 0. After rescaling Φ → γ^{1/2}Φ (which preserves dΦ_Λ) we can write: where ψ⁻_x ψ⁺_y is an S×S matrix of Grassmann variables and where dµ^±_Λ(Φ) are as in the statement. We shall swap the disorder average with the superintegration by the Fubini–Tonelli theorem. To this end, we need to prove that If we denote by Tay_n the n-th order Taylor expansion at zero, and for translation-invariant matrices we define the following bound proves (33), where we used that {ω_x}_{x∈Λ} are i.i.d. and ν is even.
By a standard application of dominated convergence and Morera's theorem, G_Λ(x, y; E ± iǫ) are analytic in ± Im E > 0, respectively.
To prove claim (ii), we apply the super Plancherel identity (see Proposition 2.3) to the r.h.s. of (28), and use the fact that
Remark 2.8. Our proof of the SUSY representation relies on ν having finite moments (H1). This hypothesis can be weakened by use of, e.g., supersymmetric polar coordinates [22]. In such a case, the SUSY representation could possibly involve a more complicated expression than ν̂(Φ⁺Φ⁻).
With stronger assumptions on ν, we can actually analytically extend the SUSY integral, and hence G_Λ(x, y; E ± iǫ), in the variable E.
Lemma 2.9 (Analytic continuation). Let Λ ⊂ Z^D be finite and ǫ > 0. If F_β ∈ L¹(S, G) for some β > 0, then the functions G_Λ(x, y; E ± iǫ) can be analytically continued to ± Im E > −β, respectively.
Proof. We bound in norm the direct SUSY integral as: uniformly in ± Im E > −β and ǫ ≥ 0 at finite Λ. The claim then follows again by application of dominated convergence and Morera's theorem.
We shall henceforth assume that G Λ (x, y; E ±iǫ) is the analytic extension if needed.
One of the technical difficulties in the analysis of the SUSY integrals in Eq. (28) and Eq. (30) is to obtain estimates that are uniform in the volume. We will achieve this goal by means of SUSY cluster expansions.

SUSY cluster expansion at strong disorder
In this section we prove the exponential decay of the disorder-averaged Green's function (Theorem 3.2) and we establish the analyticity of the LDOS (Corollary 3.4) at strong disorder and arbitrary energies. The analysis of the direct SUSY integral is based on the SUSY cluster expansion presented in Proposition 3.8. The proof of the theorem is then completed by means of tree estimates together with some suitable bounds on the norm of the superfunctions to be integrated. The latter bounds can be achieved under some reasonable assumptions on the disorder distribution, as anticipated in the Introduction, hypothesis (H2). This is made precise with the following definition.
The connection with the disorder distribution can now be established: We believe that our analysis could be extended to the case of weakly positively correlated disorder if we make assumptions on the decay of ν that are stronger than (H2-I), but still applicable to Gaussian disorder.
We can now state the main results of this section.
Assume that F_β satisfies IMB for some K, M, p and that the Hamiltonian decays as In particular, the theorem holds for E ∈ σ(H). Proof. The idea of the proof is already provided in [15]. We shall rephrase it in our framework. For the sake of notation, set G_Λ(z) ≡ G_Λ(0, 0; z) ∈ C^{S×S}. The objective is to prove the analyticity of ρ(z) for z ∈ B×(−β, β) ⊂ C. Since F_β satisfies IMB, the hypothesis of Lemma 2.9 is satisfied and hence Tr_S G_Λ(z ± iǫ) are analytic in z ∈ B×(−β, β) and in ǫ > 0, and continuous at ǫ = 0. The hypotheses of Theorem 3.2 are also satisfied, e.g., setting θ = 0. Define C_{K,M,p} := C_{K,M,p,θ=0}. Thus, uniformly in z ∈ B×(−β, β), ǫ ≥ 0 and Λ ⊂ Z^D, Tr_S G_Λ(z ± iǫ) are bounded. Since lim_{Λ↗Z^D} Tr_S G_Λ(z ± iǫ) = Tr_S G_{Z^D}(z ± iǫ) exist for any ǫ > 0 [5], we can apply Vitali's theorem and obtain that the convergence is uniform, Tr_S G_{Z^D}(z ± iǫ) being analytic in z ∈ B×(−β, β) and ǫ > 0, and continuous as ǫ → 0⁺. Thus, Tr_S G_{Z^D}(z ± i0⁺) exist and are analytic in z ∈ B×(−β, β), and the claim follows.
Remark 3.5. In [51] the authors also prove that the limit exists provided that either γ or E is large. We point out that the expansion presented in Proposition 3.8 and the estimates in the proof of Theorem 3.2 allow one to prove that, uniformly in ǫ ≥ 0, G_Λ(x, y; E ± iǫ)_{σ,σ′} form a Cauchy sequence in Λ, at any fixed x, y ∈ Z^D, σ, σ′ ∈ S and E ∈ R.
The SUSY cluster expansion presented in Proposition 3.8 is based on the following important result in statistical mechanics.
Theorem 3.6 (Battle–Brydges–Federbush representation). For any X ⊂ Λ let V_X := ½ Σ_{x,y∈X} v_{x,y}, where v_{x,y} = v_{y,x} is an even element of a Grassmann algebra. The following representation holds true: where for s ∈ [0, 1]^{P(Y)}, P(Y) being the set of unordered pairs in Y, we have defined: and where dp_T is a probability measure supported on s Notice that the first sum is over all partitions of X, while the second one is over all tree graphs whose vertices are the elements of Y.
The idea of the proof is to perform an iterative interpolation in which clusters of increasing size are decoupled from the whole X [7,16,17,1]. This naturally produces an expansion over trees with V Y (s) interpolated as a convex decoupling.
Remark 3.7. We say that V_X is stable if the real part of (V_X)_∅ is non-positive. If V_X is stable, so are its convex decouplings: this property is crucial in order to have well-defined integrals. All the cases we consider below enjoy this stability provided that ǫ ≥ 0. For example,
We apply the BBF representation to the super Gibbs weights µ^±_Λ(Φ). By means of the SUSY localization formula, we are then able to expand G_Λ into "connected clusters" that join the end-points x, y. The expansion is well-suited for our purposes because it exploits the smallness of γ^{-1}: it formalises the fact that µ^±_Λ(Φ) is somewhat close to one. The expansion can also be considered as an improvement on the simple Taylor expansion of µ^±_Λ(Φ).
Proposition 3.8. Let E ∈ R, β ≥ 0 and set z_± := E ∓ iβ. If F_β ∈ L¹(S, G), then the following representation holds true: where we have set dµ^±_Y(Φ, s) := dΦ_Λ µ^±_Y(Φ, s) Above, s = (s_{x,y}) ∈ [0, 1]^{P(Y)}, while dp_T(s) is a probability measure with support on s such that the exponent in the super Gibbs weights µ^±_Y(Φ, s) satisfies respectively Re(±i Σ_{x,y∈Y} s_{x,y} v_{x,y}((φ, 0))) ≤ 0. Furthermore, we have set
Remark 3.9. It is important to notice that the expansion of G_Λ is in terms of connected objects, in our case tree graphs. Thus, the supersymmetric formalism takes care of this property "automatically", without the need for logarithms.
As already pointed out in the discussion before Proposition 2.7, F_z(Φ) is supersymmetric, even and invariant under U(1)^{×S} fermionic transformations. Therefore, by Lemma 2.6 we have that and On the other hand, the super Gibbs measures µ^±_x are supersymmetric and even by inspection, and uniformly bounded in norm by e^{γ^{-1}‖H‖_{∞,1}|Λ|}, see, e.g., (35). Thus, the products µ^±_x F^x_z are supersymmetric, even and satisfy the hypotheses of the SUSY localization formula. These facts help us simplify the expansion in (48) as follows. By parity, we have that K(Y) = 0 unless Y ∩ {x, y} is either {x, y} or ∅. By the SUSY localization formula, if Y ∩ {x, y} = ∅ then K(Y) = 1 for |Y| = 1 and K(Y) = 0 otherwise. Thus the sum in Eq. (48) reduces to The 1/N! factor is needed because each set Y is counted that many times in the sum over the distinct points x_1, …, x_N. We then set x_{N+1} = x, x_{N+2} = y and swap the sum over distinct points with the sum over trees on Y, the latter becoming a sum over trees on {1, …, N+2−δ_{x,y}}. Notice that if x = y, then the set Y contains only N+1 points.
In interacting fermionic systems determinant bounds provide a useful tool to control the convergence of perturbative expansions [32,40,33,44,41]. In the proof of Theorem 3.2 we follow a different approach, based on a combination of the BBF formula (through Proposition 3.8) and Grassmann norms.
Proof of Theorem 3.2. Since F β satisfies IMB, F β ∈ L 1 (S, G ) and thus we can use Proposition 3.8. We shall prove that if E ∈ B, then for some constant C K,M,p,θ depending on B the following bounds hold true: Plugging these bounds into the expansion of Proposition 3.8 proves the claim.
Let us consider a polymer Y ∋ x, y and a directed tree T_℘ on Y, ℘ denoting the choice of directions on the links of the tree. We denote by ℓ⁺ (ℓ⁻) the starting (ending) vertex of the directed link ℓ. Links have to be directed in order to select one of the two terms in v_ℓ(Φ), see Eq. (47). Furthermore, let us introduce the sequences σ = (σ^ε_ℓ ∈ S)_{ℓ∈T, ε=±} and ♯ = (♯_ℓ ∈ {B, F})_{ℓ∈T}. Also, we set Φ^ε_{B,x,σ} = φ^ε_{x,σ} and Φ^ε_{F,x,σ} = ψ^ε_{x,σ}, and we shall henceforth write |Y| instead of N+2−δ_{x,y} ≡ |Y|. We define: Accordingly, we can rewrite Eq. (47) as which can be bounded as follows: We shall first obtain a useful bound for the superintegral by using the properties of the Grassmann norms: The super Gibbs weight in the superintegral can be simply bounded according to sup where we recall that ‖H‖_{∞,1} = sup_σ Σ_{x∈Λ} Σ_{σ′} |(H_{0,x})_{σ,σ′}|. On the other hand, it is clear that F^y_{t_℘,♯,σ}(Φ) is a local function, that is, it factorizes, having set: that is, given T_℘, σ and ♯, it counts how many times (ℓ^ε, ♯_ℓ, σ^ε_ℓ) = (i, ♯, σ). Notice that the sign in front in Eq. (86) is unimportant and thus left unspecified. For the sake of notation, we also define d_{i,♯}: Using that F_β satisfies IMB, we finally obtain To estimate the second line in (55), we shall exploit the exponential decay of the hopping Hamiltonian. If we define (H_θ)_{x,y} := e^{(1+θ)α|x−y|/2} H_{x,y}, it follows that where clearly C_θ > C. Standard tree-stripping estimates (see Fig. 1) based on the exponential decay of H give [16]:
Figure 1: For simplicity, we did not represent the internal degrees of freedom. We fix a root, say x, and start stripping the tree from its outermost branches, those with incidence number equal to one. The estimate is then carried out iteratively.
where C_{q,θ} > 1 is a constant that depends also on D, the dimension of the lattice. The estimate is carried out by progressively stripping the outer branches, as shown in Fig. 1. The branches that have been removed are then bounded as follows: In the second inequality in (65) we extracted an exponential weight from H and pulled it out of the summation by taking the sup over all distinct points x_1, …, x_{d−1} ≠ x̄. The latter can be computed by noticing that
While stripping the tree, if one of the outer vertices is y, there is no summation and this simply produces ‖H_θ‖_{∞,∞} instead of ‖H_θ‖_{∞,1}, see Fig. 1. We shall remark that the factorial (d_i!)^{−q} is gained only because we are summing over distinct points and because the Hamiltonian decays exponentially.
Plugging the bounds (92) and (95) with q = p + 1 into (85) and using Cayley's theorem on the number of trees with fixed coordination numbers {d i } i , see [16], we obtain for E ∈ B:
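The combinatorial ingredient invoked here, Cayley's theorem, states that the number of labelled trees on n vertices with prescribed coordination numbers d_1, …, d_n is (n−2)!/∏_i(d_i−1)!. A brute-force check via Prüfer sequences (our own illustration; the size n = 6 is an arbitrary choice):

```python
import itertools, math
from collections import Counter

# Pruefer sequences biject labelled trees on n vertices with sequences in
# {0,...,n-1}^(n-2); vertex i has degree d_i = 1 + (number of occurrences).
# Counting sequences therefore counts trees with fixed coordination numbers.
n = 6
counts = Counter()
for seq in itertools.product(range(n), repeat=n - 2):
    counts[tuple(1 + seq.count(v) for v in range(n))] += 1

def cayley(degs):
    """(n-2)! / prod_i (d_i - 1)! -- Cayley's formula with fixed degrees."""
    out = math.factorial(n - 2)
    for d in degs:
        out //= math.factorial(d - 1)
    return out

total = sum(counts.values())
print(total)   # n^(n-2) labelled trees in total
```

The factor (n−2)! is what the (d_i!)^{−q} gains from the tree-stripping estimate must beat for the sum over trees to converge.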

SUSY cluster expansion at weak disorder
In this section we prove the exponential decay of the disorder-averaged Green's function at weak disorder and away from the spectrum (Theorem 4.2), and we establish Lifshitz-tail-type estimates for the LDOS (Theorem 4.5). The analysis of the Fourier SUSY integral is based on the SUSY cluster expansion presented in Proposition 4.8. The proof of the theorems is again completed by means of tree estimates together with some suitable bounds on the norm of the superfunctions to be integrated. Like in Section 3, we shall make reasonable assumptions on the disorder distribution. This is made precise in the following definition, where Φ = (φ, ψ). We say that f satisfies integrable derivative bounds (IDB) if for some K ≥ 0, M ≥ 1 and p ≥ 0 the following holds true for all {n^ε_σ} ∈ N^{S×{±}}, having set n := Σ_{ε,σ} n^ε_σ. The connection with the disorder distribution can now be established: Notice that in the case of Gaussian disorder, F_z with Re z ≳ γ satisfies IDB for some K, M and p independent of γ. Furthermore, we believe that our analysis could be extended to the case of weakly positively correlated disorder if we make assumptions on the regularity of ν that are stronger than (H2-II), but still applicable to Gaussian disorder.
We can finally state the main theorems of this section.

Remark 4.3.
Notice that (68) is satisfied when H = ∆_{Z^D} and, more generally, when H exhibits quadratic dispersion at the band edges. Notice that at x = y the limit γ → 0 of the r.h.s. of (69) is divergent; this is due to the fact that the bound is uniform in ǫ ≥ 0. The meaningful way to compute the γ → 0 limit is, however, at finite ǫ. A bound which is uniform in ǫ ≥ ǫ_0 for some ǫ_0 > 0, and which does not diverge as γ is sent to 0, can also be obtained with our methods, but it is beyond our scope.
The case β = 0, ǫ → 0⁺ gives the exponential decay of the disorder-averaged Green's function for energies up to the edges of the spectrum. On the other hand, at β > 0 and x = y, the result implies analyticity of the LDOS in a suitable region of the complex plane.
Corollary 4.4 (Analyticity of LDOS). Let β > 0. Assume F_β satisfies IDB for some K, M, p and assume that the covariance satisfies (68). Then, there exists a constant C_{K,M,p} > 0 such that ρ(E) can be extended to an analytic function on The proof of this corollary is identical to that of Corollary 3.4: Lemma 2.9 gives the analyticity of G_Λ(0, 0; E ± iǫ), while Theorem 4.2 provides the uniform bounds that allow one to extend the analyticity to G_{Z^D}(0, 0; E ± i0⁺). The other main theorem is as follows.
Theorem 4.5 (Lifshitz-tail-type estimate). Let E ∈ R and set δ := dist(E, σ(H)). Assume F 0 satisfies IDB for some K, M and p and assume that the covariance satisfies (68) and C E x,y ∈ R. There exists a constant C K,M,p such that if δ ≥ 2γC K,M,p then a Lifshitz-tail-type estimate is satisfied: uniformly in Λ ⊂ Z D and ǫ ≥ 0.
Remark 4.6. Notice that the condition C^E_{x,y} ∈ R is satisfied in the case H = ∆_{Z^D} and, more generally, in systems for which the Bloch Hamiltonian is such that Ĥ(−k) = Ĥ(k). This property holds in many condensed-matter systems, an example of which is graphene [34].
Remark 4.7. The integrated density of states (IDOS) has a close connection with the spectrum of H_{ω,Λ}. In fact, it controls the probability of having an eigenvalue of H_{ω,Λ} below E [38,39]: A sufficiently small IDOS, e.g., as γ → 0, allows one to prove localization via finite-volume criteria [4], see [38] for details. Integration of the Lifshitz-tail-type estimate of Theorem 4.5, up to energies below the bottom of the spectrum such that (γ^{-1}δ)^{1/2p} ≳ |ln γ|, provides a sufficiently small upper bound on the IDOS.
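A small numerical illustration of this mechanism (our own sketch; the model, box size, disorder strength and thresholds are arbitrary choices): for the 1D lattice Laplacian with a nonnegative i.i.d. potential, the empirical IDOS just above the bottom of the spectrum is dramatically smaller than in the bulk, since a low-lying eigenvalue requires an atypically long stretch of small potential values.

```python
import numpy as np

# Empirical IDOS N(E) = E_omega[#{eigenvalues of H_{omega,Lambda} <= E}] / |Lambda|
# for H = -Delta on a 1D box (spectrum [0, 4]) plus gamma * Uniform[0, 1]
# i.i.d. potential -- the Lifshitz-tail mechanism in action.
rng = np.random.default_rng(7)
L, gamma, samples = 200, 1.0, 100
H0 = 2 * np.eye(L) - np.eye(L, k=1) - np.eye(L, k=-1)
evs = np.concatenate([
    np.linalg.eigvalsh(H0 + gamma * np.diag(rng.uniform(0.0, 1.0, L)))
    for _ in range(samples)
])

def idos(E):
    return np.mean(evs <= E)

print(idos(0.05), idos(1.0))   # near-edge vs bulk
```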
The proof of the theorems above is based on a SUSY cluster expansion. The approach is the same as in Proposition 3.8. This time we shall expand the super Gibbs weight µ^±_Λ(ξ) and obtain polymers connected by the covariance. The expansion is well-suited for our purposes because it exploits the smallness of γδ^{-1}.
In order to prove this SUSY cluster expansion, we need an auxiliary result that allows us to apply the SUSY localization formula in this context as well.
Lemma 4.9. Let f ∈ L¹(S, G) be even, supersymmetric and invariant under U(1)^{×S} fermionic transformations, see Lemma 2.6. Assume that f̂ ∈ L¹(S, G). Then f̂ is even, satisfies identity (78), and is therefore supersymmetric.
Proof. Integration by parts gives provided that the integrands are in L¹(S, G). In the first line this is the case by assumption. In the second line, by Lemma 2.6 we have that and thus the integrand is in L¹(S, G) as well. Taking the difference of the two equations at fixed ε and σ, and using Eq. (80), gives identity (78). Parity follows by using that dΦ is invariant under Φ → −Φ.
Proof of Proposition 4.8. Since F_β satisfies IDB, F̂_β(ξ) decays in norm faster than any power of κ. Thus, F̂_β ∈ L¹(S, G) and we can make sense of the SUSY integral Eq. (28) when E → E ∓ iβ, ǫ ≥ 0. We set µ^±_X =: e^{V^±_X}, apply the BBF formula and accordingly obtain a polymer expansion like the one in the proof of Proposition 3.8. The stability conditions Re(V^±_X)_∅ = Re(±iγ Σ_{x,y∈X} κ⁺_x C^{E±iǫ}_{x,y} κ⁻_y) ≤ 0 are satisfied, since C^{E±iǫ} = (H − E ± iǫ)/((H − E)² + ǫ²) with H Hermitian and ǫ ≥ 0. Therefore, dp_T(s) is supported on s such that Re(±i Σ_{x,y∈Y} s_{x,y} v̂^±_{x,y}((κ, 0))) ≤ 0, respectively. The rest of the proof is identical to the one of Proposition 3.8, provided that the products µ^±_x F̂^x_β are proven to be even and to satisfy the hypotheses of the SUSY localization formula (Proposition 2.5). We have already pointed out that F_β is even, supersymmetric and invariant under U(1)^{×S} fermionic transformations, see the discussion before Proposition 2.7. Since by assumption it satisfies IDB, by Lemma 4.9 we have that F̂_β is even, supersymmetric and satisfies identity (78), or equivalently: On the other hand, µ^±_x are even and supersymmetric by inspection, and uniformly bounded in norm by e^{γ‖C^{E±iǫ}‖|Λ|}. Therefore, the products µ^±_x F̂^x_β are even and satisfy the hypotheses of Proposition 2.5.
Proof of Theorem 4.2. Since F^β satisfies IDB, F^β ∈ L¹(S, G) and thus we can use Proposition 4.8. We shall prove that, if γδ⁻¹ is small enough, then for some constant C_{K,M,p,θ} the following bounds hold true: Plugging these bounds into the expansion of Proposition 4.8 proves the claim. The proof of the bound (82) follows closely the strategy of the proof of Theorem 3.2, with some small differences which we shall stress. The setting is as in the proof of Theorem 3.2, but we recall it for the sake of clarity. Let us consider a polymer Y ∋ x, y and a directed tree T_℘ on Y, ℘ denoting the choice of directions on the links of the tree. We denote by ℓ⁺ (ℓ⁻) the starting (ending) vertex of the directed link ℓ. Links have to be directed in order to select one of the two elements in v^±_ℓ(ξ), see Eq. (76). Furthermore, let us introduce the sequences σ = {σ^ε_ℓ ∈ S}_{ℓ∈T, ε=±} and ♯ = {♯_ℓ ∈ {B, F}}_{ℓ∈T}. We set ξ^ε_{B,x,σ} = κ^ε_{x,σ} and ξ^ε_{F,x,σ} = χ^ε_{x,σ}, and we shall henceforth write |Y| for N + 2 − δ_{x,y}. Let us define: We can rewrite Eq. (75) as which can be bounded as follows: We shall first obtain a useful bound for the superintegral. The analysis differs slightly from the one of the proof of Theorem 3.2 because we need to prove suitable decay bounds for (F^y_{t_℘,♯,σ}(ξ))_{σ,σ′}. We again see by inspection that F^y_{t_℘,♯,σ}(ξ) is a local function: having set: we can estimate (ξ_{x_i})_{σ,σ′} by means of the IDB on F^β. In fact, for any m ∈ ℕ: thus, by the IDB on F^β we obtain: On the other hand, the non-local part of the superintegral is bounded as: All in all, applying the bound in Eq. (90) with m = |S| + 1 and using |∫dξ_y f(ξ)| ≤ ‖f‖_{L¹(S^Y, G^Y)}, we finally obtain: for some K′ > K. To estimate the second line in (85), the strategy is the same as in the proof of Theorem 3.2. We exploit the exponential decay of the covariance (68): where for the sake of notation we have dropped the dependence of C on D, the dimension of the lattice.
If we define (C^{E±iǫ}_θ)_{x,y} := e^{(1+θ)√δ|x−y|/2} C^{E±iǫ}_{x,y}, it follows that for some new C_θ > C. Standard tree-stripping estimates (see Fig. 1) based on the exponential decay of C^{E±iǫ} give [16]: The details of the tree-stripping procedure are explained around Eq. (65) and in Fig. 1. Plugging the bounds (92) and (95) with q = p + 1 into (85) and using Cayley's theorem on the number of trees with fixed coordination numbers {d_i}_i, see [16], we obtain: with C_{K,M,p,θ} given by: To prove Theorem 4.5, we need to further expand the super Gibbs weight. We will need the following lemma.
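For reference, the counting result invoked here is the generalized Cayley formula (a standard statement, cf. [16]): the number of labeled trees on n vertices with prescribed coordination numbers d_1, ..., d_n, with Σ_i d_i = 2(n − 1), is

```latex
\#\bigl\{\, T \ \text{tree on}\ \{1,\dots,n\} \;:\; \deg_T(i) = d_i \ \ \forall i \,\bigr\}
\;=\; \frac{(n-2)!}{\prod_{i=1}^{n} (d_i - 1)!}\,.
```

Summing over all admissible {d_i} recovers Cayley's n^{n−2} total count via the multinomial theorem, which is what makes the factorials of the coordination numbers appear in the final constant.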
Proof. Since ‖f(Φ)‖ is finite, the function t ↦ e^{tf(Φ)} ≡ Σ_{n≥0} (tf(Φ))ⁿ/n! ∈ G_X is analytic. As usual, we define the integral Lagrange remainder, which we estimate in norm; hence the claim.
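Though the displays were lost, the integral Lagrange remainder referred to is presumably the standard one, valid in any Banach algebra and hence for the Grassmann norm on G_X:

```latex
e^{f} \;=\; \sum_{n=0}^{N-1} \frac{f^{\,n}}{n!} \;+\; \mathrm{Rem}_N(f),
\qquad
\mathrm{Rem}_N(f) \;:=\; \int_0^1 dt\; \frac{(1-t)^{N-1}}{(N-1)!}\, f^{\,N}\, e^{t f},
\qquad
\bigl\| \mathrm{Rem}_N(f) \bigr\| \;\le\; \frac{\|f\|^{N}}{N!}\, e^{\|f\|}\,,
```

the norm bound following from submultiplicativity of the norm and ∫₀¹ (1−t)^{N−1}/(N−1)! dt = 1/N!.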
Proof of Theorem 4.5. For the sake of brevity, we shall write G_N(E + iǫ) ≡ G_N(0, 0; E + iǫ). By Proposition 4.8 we have that for E ∈ ℝ where, by the proof of Theorem 4.2, the following bounds hold for C_{K,M,p} ≡ C_{K,M,p,θ=0}: Let us now fix N̄ ∈ ℕ sufficiently large, to be optimized later. To prove the claim, it suffices to show that, for N < N̄, (101) for some constants C̃_{M,p} and C̃_{K,M,p}. Indeed, if bounds (100) and (101) hold true, then for any N̄: for some other constant C′_{K,M,p}. Therefore, by taking the inf over N̄ ∈ ℕ we obtain the statement for a suitable constant C̃_{K,M,p}: Let us now prove (101); we Taylor expand the super Gibbs weight µ^+_y(ξ, s): and accordingly introduce the splitting, Since ν is even by assumption, ν̂^{(2n+1)}(0) = E ω^{2n+1} = 0 and thus ψ^−_0 ψ^+_0 F^y_0(Φ) has non-vanishing derivatives at Φ = 0 only of order 2(2n + 1), n ∈ ℕ. As a consequence, only the terms such that l + N̄ is odd survive in the sum above. Finally, C^{E+i0⁺}_{x,y} ∈ ℝ for any x, y, and ν̂^{(2n)}(0) ∈ ℝ imply that G_{N̄,N}(E + iǫ) does not contribute to ρ_Λ(E), that is We are left with bounding (R_{N̄,N}(E + iǫ))_{σ,σ}: (108) where the notation is again borrowed from the previous proof, see (83) and the text above. For the sake of computations, introduce x̄ = ( For any x̄, ♯̄ and σ̄, we define a sequence d̄_i ≡ {(d̄_i)^ε_{♯,σ}}_{ε=±, ♯=B,F} at any vertex i ∈ {1, ..., N + 1} of the tree, where (d̄_i)^ε_{♯,σ} := Σ_{j=1}^{N̄−N} δ_{x_i, x^ε_j} δ_{♯,♯_j} δ_{σ,σ^ε_j}. Furthermore, we define the following local function: where again the sign is unimportant for our purposes. Finally, by Lemma 4.10, we estimate Rem_{N̄−N} in Grassmann norm as a super Lagrange remainder: To bound the integral we follow the strategy used in the previous proof. Using Eq. (109) we notice that sup_{℘,♯,σ} (F^y_{t_℘,♯,σ})_{x̄,♯̄,σ̄} is bounded for some constant K′_M depending on |S| as well. Finally, by noticing that we obtain the following bound for some constant C̃_{M,p},
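The parity step can be spelled out. Assuming the convention ν̂(t) = E e^{−iωt} (the sign convention is not fixed by this excerpt, and is immaterial for the conclusion):

```latex
\hat\nu^{(k)}(0) \;=\; (-i)^k\, \mathbb{E}\,\omega^{k},
\qquad
\nu \ \text{even}
\;\Longrightarrow\;
\mathbb{E}\,\omega^{2n+1} \;=\; \int \omega^{2n+1}\,\nu(d\omega) \;=\; 0
\;\Longrightarrow\;
\hat\nu^{(2n+1)}(0) \;=\; 0\,,
```

since the integrand ω^{2n+1} ν(dω) is odd; this is exactly the vanishing of the odd derivatives used above.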

A Disorder Distribution
We discuss two examples that connect the density ν with the IDB of Definition 3.1.
Example I: We can state the following lemma.
Lemma A.2. Assume that, for any α ∈ ℝ, e^{α|t|} ν(t) is bounded and that ‖F^z‖_{1,W} is finite for some 0 < W ≤ 1. Then F^z satisfies IDB with K = ‖F^z‖_{1,W}, M = W⁻¹ and p = 1.
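As a concrete instance (our illustration; the choice of density is an assumption, not from the text), a standard Gaussian satisfies the first hypothesis: for any α ∈ ℝ,

```latex
\nu(t) \;=\; \frac{e^{-t^2/2}}{\sqrt{2\pi}}
\quad\Longrightarrow\quad
e^{\alpha|t|}\,\nu(t)
\;=\; \frac{e^{\alpha|t| - t^2/2}}{\sqrt{2\pi}}
\;\le\; \frac{e^{\alpha^2/2}}{\sqrt{2\pi}}\,,
```

since α|t| − t²/2 = −(|t| − α)²/2 + α²/2 ≤ α²/2.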
Example II: Introduce the following norm on the functions of a supervector: |||f|||_{1,W} := sup where D(0, W) ⊂ ℂ was defined above. We can state the following lemma.
Proof. We notice that ν̂(t) is holomorphic on the strip |Im t| ≤ W, which we shall use to estimate its derivatives. We think of F^z(Φ) as the composite function f(Φ⁺Φ⁻) and compute its derivatives accordingly. For simplicity, we carry out the computation in the case |S| = 1 and we take, e.g., n⁺_σ ≥ n⁻_σ and n_σ = n⁺_σ + n⁻_σ. We bound the norm of the derivative as follows: We apply the Cauchy integral representation to estimate the derivatives of f and we use the bound (φ⁺_σ φ⁻_σ)^{n_σ/2} ≤ n_σ! e^{(φ⁺_σ φ⁻_σ)^{1/2}}. Integrating in dφ_σ and taking the sup over the contour variable gives the claim.
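The Cauchy step is the standard derivative estimate: for f holomorphic on a neighbourhood of the closed disc D(0, W),

```latex
f^{(n)}(0) \;=\; \frac{n!}{2\pi i}\oint_{|z|=W} \frac{f(z)}{z^{\,n+1}}\,dz
\quad\Longrightarrow\quad
\bigl|f^{(n)}(0)\bigr| \;\le\; \frac{n!}{W^{\,n}}\,\sup_{|z|=W}|f(z)|\,,
```

which is the source of the factors W⁻ⁿ, and hence of the value M = W⁻¹ appearing in the IDB.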