Principal Eigenvalue and Landscape Function of the Anderson Model on a Large Box

We state a precise formulation of a conjecture concerning the product of the principal eigenvalue and the sup-norm of the landscape function of the discrete Anderson model restricted to a large box. We first provide the asymptotics of the principal eigenvalue as the size of the box grows, and then use it to give a partial proof of the conjecture. In the one-dimensional case, we give a complete proof by means of Green function bounds.


Introduction and Results
The landscape function, introduced by Filoche and Mayboroda in [FM12], has been conjectured to capture the low eigenvalues of the Anderson model operator, discrete or continuous, restricted to a large finite box. This conjecture is loosely stated in [DFM21, Equation 1.4] as: if 0 is the minimum of the support of the potential distribution, then

λ_i L_i ≈ 1 + d/4,

where {λ_i}_i are the eigenvalues ordered increasingly, {L_i}_i are the local maxima of the landscape function ordered decreasingly, d is the dimension, and n is the linear size of the box. Numerical experiments with Bernoulli and Uniform potential distributions support the conjecture (see [ADF+19], [ADJ+16]), but to this moment there is no mathematical proof. In this article we give a precise formulation of the conjecture in the discrete setting for the case i = 1, that is, for the product of the principal (smallest) eigenvalue and the sup-norm of the landscape function on a large box. We claim that this product converges almost surely to an explicit dimensional constant, different from 1 + d/4, as the size of the box goes to infinity, and give the proof of the lim inf. For a special case in d = 1, we also give the proof of the lim sup.
We start with some definitions and notation. Given a non-empty and finite A ⊆ Z^d and a non-negative potential W : A → [0, ∞) we consider the Schrödinger operator

H_{A,W} := −∆_A + W,

where −∆_A has Dirichlet boundary conditions. From it, we define its principal eigenvalue and landscape function

λ_{A,W} := min spec(H_{A,W}),    L_{A,W} := H_{A,W}^{−1} 1_A,

that is, L_{A,W} is the unique solution of (−∆_A + W)L = 1 on A. Notice that λ_{A,W} > 0 and L_{A,W} is always well defined on A since −∆_A > 0 and W ≥ 0.
Let V = {V(x)}_{x∈Z^d} be an i.i.d. random non-negative potential whose probability measure and expectation we denote by P and E, and define for n ∈ N the box Λ_n := [−n, n]^d ∩ Z^d. Our main objectives are the asymptotics of λ_{Λ_n,V} and ‖L_{Λ_n,V}‖_∞ as n → ∞, where, as customary, the restriction of V to Λ_n is implicit.
In addition to V being non-negative (i.e., P[V(0) ∈ (−∞, 0)] = 0) we will always assume the distribution function F(t) = P[V(0) ≤ t] satisfies one of the following mutually exclusive conditions:

(C1) 0 < F(0) < 1 (example: Bernoulli(p) distribution);

(C2) F(t) = c t^η (1 + o(1)) as t ↓ 0 for some c, η > 0 (example: Uniform(0, 1) distribution).

We write n instead of Λ_n whenever convenient (e.g. −∆_n = −∆_{Λ_n}, λ_{n,V} = λ_{Λ_n,V}). We denote by ω_d and μ_d, respectively, the volume of the unit ball in R^d and the principal eigenvalue of the continuous Laplacian (−Σ_{i=1}^d ∂²/∂x_i²) on such ball with Dirichlet boundary conditions. We now state our conjecture and results. We are always assuming that V is non-negative and satisfies (C1) or (C2). We claim that:

Conjecture 0. lim_{n→∞} λ_{n,V} ‖L_{n,V}‖_∞ = μ_d/(2d)  P-a.s.

The heuristic argument behind this conjecture is that both λ_{n,V} and ‖L_{n,V}‖_∞ are controlled by the largest ball inside of Λ_n with zero or very low potential. If the radius of such a ball is r then, roughly, λ_{n,V} is proportional to r^{−2} and ‖L_{n,V}‖_∞ is proportional to r², making the product of order one in r. The appearance of the continuous constant μ_d/(2d) is another instance of the solution of a discrete problem converging to the solution of the corresponding continuous one. The disagreement between the dimensional constants μ_d/(2d) and 1 + d/4 is simply explained by the fact that 1 + d/4 was "guessed" from the numerical experiments, and the two constants are close to each other. For example, for d = 1 we have 1 + 1/4 = 1.25 and μ_1/2 = π²/8 ≈ 1.23. Using the Min-Max Principle and our hypothesis on V it is straightforward to show that λ_{n,V} is decreasing in n and converges to 0.
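As a quick numerical illustration of the conjectured limit μ_1/2 = π²/8 ≈ 1.2337 in d = 1, one can sample λ_{n,V}‖L_{n,V}‖_∞ directly. This is our own sketch: the Bernoulli(0.3) choice mirrors the experiments behind Figure 2, and the box size is arbitrary.

```python
import numpy as np

def product_statistic(n, p=0.3, rng=None):
    """Sample lambda_{n,V} * ||L_{n,V}||_inf on Lambda_n = {-n,...,n}
    for d = 1 with V(0) ~ Bernoulli(p), so F(0) = 1 - p > 0 (case (C1))."""
    if rng is None:
        rng = np.random.default_rng(0)
    m = 2 * n + 1                       # number of sites in the box
    V = rng.binomial(1, p, size=m).astype(float)
    H = np.diag(2.0 + V)
    H -= np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    lam = np.linalg.eigvalsh(H)[0]      # principal eigenvalue
    L = np.linalg.solve(H, np.ones(m))  # landscape function
    return float(lam * L.max())
```

Convergence is only logarithmic in n (the largest zero-potential interval grows like ln n), so even large boxes sit visibly away from the limit, consistent with the spread seen in Figure 2.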
Our first result is on the speed of this convergence, depending on whether V satisfies (C1) or (C2). The proof of Theorem 1 is given in Section 2, and it is divided into the upper and lower bounds of λ_{n,V}. The upper bound follows from the Min-Max Principle and the previously mentioned heuristic of the largest ball with zero or very low potential. The lower bound is a bit more involved; it uses a Lifshitz tails result from [BK01] and the connection between the integrated density of states of the (infinite) Anderson model and the distribution function of λ_{n,V}.
We illustrate Theorem 1 in the case d = 1 for boxes of size n = 10^2, 10^3, 10^4, 10^5. The results are given in Figure 1, from which we can see that the empirical mean, variance, and distribution of λ_{n,V} concentrate towards 0 as n increases.
Our second result is a partial proof of Conjecture 0, and a complete proof when d = 1.
Remark. The preprint [CWZ21] has a proof of ii) in the continuous setting for the (C1) case. Both proofs follow the heuristic of the largest ball with zero or very low potential, but differ on how to obtain the lower bound of λ_{n,V} and the upper bound of ‖L_{n,V}‖_∞.
We prove Theorem 2 in Section 3 after deriving some general properties of landscape functions. Most notable among these properties is Proposition 9, which states that λ_{A,W} ‖L_{A,W}‖_∞ is bounded from above and below by two dimensional constants uniformly in A and W. This is a consequence of an upper bound on the ∞ → ∞ norm of the semigroup generated by the Schrödinger operator, which we adapted from the book [Szn98] to the discrete setting. Statement i) of Theorem 2 follows from domain monotonicity of the landscape function and the asymptotics of λ_{n,V} given in Theorem 1, while ii) is based on the geometric resolvent identity and the restrictions of one-dimensional geometry. In Figure 2 we illustrate ii) by showing the convergence of the empirical distribution of λ_{n,V} ‖L_{n,V}‖_∞ − μ_1/2 towards 0.
In the proofs that follow, C(d) is a finite positive constant that may only depend on the dimension and can change from line to line. By a_t ∼ b_t we mean lim_{t→∞} a_t/b_t = 1.

Principal Eigenvalue (Proof of Theorem 1)

Upper Bound of λ_{n,V}
We introduce the sequences ε_n, x_n and y_n, so that we can write the goal of this subsection as

lim sup_{n→∞} y_n² λ_{n,V} ≤ μ_d  P-a.s.  (1)

As usual, getting a sharp upper bound on λ_{n,V} is much easier than a sharp lower bound: it just requires choosing a good test function and applying the Min-Max Principle. Let Y_n be the radius of the largest open euclidean ball contained in Λ_n in which V is uniformly bounded by ε_n, and let x_n be the center of such a ball (it may not be unique). The asymptotic growth of Y_n is given in the next proposition, whose proof we delay for a moment.
where we have used Proposition 3, lim_{r→∞} r² λ_{B(0,r)∩Z^d,0} = μ_d, and translation invariance. This last limit is a consequence of the discrete Laplacian converging to the continuous one, or of the random walk converging to Brownian motion. A proof following the latter approach can be found in [LL10, Proposition 8.4.2], where an extra factor d appears as a result of the probabilistic normalization of the Laplacian.
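The limit lim_{r→∞} r² λ_{B(0,r)∩Z^d,0} = μ_d can be checked numerically in d = 1, where μ_1 = π²/4 and the discrete eigenvalue is known in closed form. A sketch with our own helper names:

```python
import numpy as np

def lam_ball_1d(r):
    """Principal Dirichlet eigenvalue of -Delta on B(0,r) ∩ Z = {-(r-1),...,r-1}.
    For the path graph on m sites this equals 2(1 - cos(pi/(m+1)))."""
    m = 2 * r - 1
    H = np.diag(2.0 * np.ones(m))
    H -= np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    return float(np.linalg.eigvalsh(H)[0])

# principal eigenvalue of -d^2/dx^2 on (-1, 1) with Dirichlet conditions
mu_1 = np.pi**2 / 4
```

Already for moderate r the product r² λ_{B(0,r)∩Z,0} agrees with μ_1 to several digits.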
Approximating the number of points in such balls by #(B(0, r) ∩ Z^d) ∼ Vol(B(0, r)) = ω_d r^d as r → ∞, we obtain for large n a bound which is summable. Therefore, the Borel-Cantelli Lemma and sending δ → 0 give the lim inf bound. We show the lim sup bound first on an exponential subsequence and then we extend it to the whole sequence. The extending argument requires a monotone sequence of random variables, which Y_n may fail to be if (C2) holds. For this reason we introduce Y_{n,n'}, the radius of the largest open euclidean ball contained in Λ_n in which V is uniformly bounded by ε_{n'}; it is increasing in n, decreasing in n', and satisfies Y_{n,n} = Y_n. Since for δ > 0 and large m the relevant probability is summable, the Borel-Cantelli Lemma and the limit δ → 0 give lim sup_{m→∞} y_{e^m}^{−1} Y_{e^{m+1}, e^m} ≤ 1 P-a.s.
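In d = 1 under (C1), the heuristic behind Proposition 3 reduces to the classical longest-run asymptotic: the longest interval of zero potential in Λ_n has length ≈ ln(#Λ_n)/ln(1/F(0)). A small simulation (our own sketch, using the Bernoulli example from (C1)):

```python
import numpy as np

def longest_zero_run(V):
    """Length of the longest run of consecutive zeros in the array V."""
    best = cur = 0
    for v in V:
        cur = cur + 1 if v == 0 else 0
        best = max(best, cur)
    return best

def simulate(n, p=0.3, seed=0):
    """Longest zero run in Lambda_n for V(0) ~ Bernoulli(p), d = 1,
    compared with the heuristic prediction ln(2n+1)/ln(1/F(0)), F(0) = 1 - p."""
    rng = np.random.default_rng(seed)
    V = rng.binomial(1, p, size=2 * n + 1)
    predicted = np.log(2 * n + 1) / np.log(1.0 / (1.0 - p))
    return longest_zero_run(V), predicted
```

The observed run length fluctuates around the prediction with O(1) fluctuations, which is the source of the slow, logarithmic convergence mentioned above.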

Lower Bound of λ_{n,V}
In this subsection we show that, P-a.s., the corresponding lower bounds hold for (C1) and (C2) respectively. The main input for this is a Lifshitz tail result on the integrated density of states from [BK01]. We recall that the integrated density of states of the Anderson model is the deterministic distribution function given by the P-a.s. limit

I(λ) := lim_{n→∞} (#Λ_n)^{−1} #{eigenvalues of −∆_n + V less than or equal to λ},

where the eigenvalues are counted with multiplicities. The central hypothesis of [BK01] is a scaling assumption on the cumulant-generating function H(t) := ln E[e^{−tV(0)}] of V(0), which we verify in the following proposition. To state it, we first need to define the scaling functions α_d and α_{d+2}.

Proposition 4. For any compact K ⊆ (0, ∞) we have

Proof. First assume (C1). In this case α_{d+2}(t)/t = 1 and t/α_d(t) = t^{2/(d+2)}. Since for t > 0 we have H(t) ≥ ln F(0), we introduce a parameter 0 < δ < 1 and observe. For the lim_{t→∞} inf_{y∈K} we use

Having checked the scaling assumption on H, we now have the Lifshitz tail result:

Remark. The function t ↦ α(t) is eventually increasing, so α^{−1}(t) is well defined for large t. The original statement from [BK01] is far more general; our conditions on V make H fall into what is there called the (γ = 0)-class.
The constant χ can be explicitly computed by means of the Faber-Krahn inequality.
Proof. Starting from χ = inf_{g ∈ H¹(R^d), ‖g‖₂=1} { ‖∇g‖₂² + D Vol(supp g) }, we see that we only need to consider the finite volume case. This finishes the proof.
We now exploit the connection between I and the distribution of λ_{n,V}. This is a classic argument that can be found, for instance, in [AW15, Equation 4.46]. We present here a slightly modified version. Let n ∈ N and define a new potential Ṽ. Clearly V ≤ Ṽ, so for any k ∈ N and t ∈ R we have the corresponding eigenvalue comparison, where we use implicitly the convention of Dirichlet boundary conditions wherever Ṽ is infinite. Taking k → ∞ and noting that the infinities of Ṽ decompose −∆_{(2n+2)k} + Ṽ into a direct sum of (2k)^d independent terms equal in distribution to −∆_n + V, we obtain the bound (3). From the previous inequality, Theorem 5 and Proposition 6 we have an estimate in which we have introduced the function f. To finish the proof we need the asymptotic of f^{−1}(t) as t → ∞, with all the constants collected in k = . Since α is eventually increasing and has infinite limit, the same is true for f; in particular f^{−1}(t) exists for large t. By solving for the α^{−1} term in the first equality above, applying α and simplifying some exponents we arrive at the desired asymptotic. Going back to (3) with n = e^m and t = 1/f^{−1}((1 + δ)dm) for some m ∈ N and δ > 0, we see that the resulting probability is summable over m ∈ N. Therefore, by the Borel-Cantelli Lemma we have the bound along the subsequence. As in the proof of Proposition 3, we define m(n) ∈ N by e^{m(n)} ≤ n < e^{m(n)+1}, so that ln n ∼ m(n) + 1 as n → ∞. Since n ↦ λ_{n,V} is monotone decreasing we obtain the bound along the whole sequence. By sending δ → 0 and replacing the f^{−1} term by its asymptotic given in Proposition 7 we obtain the desired result of this subsection.

Landscape Function
We start this section by deriving some general properties of landscape functions. For a finite A ⊆ Z^d and W : A → [0, ∞) we introduce the Green function (with 0 as spectral parameter)

G_{A,W}(x, y) := ⟨δ_x, (−∆_A + W)^{−1} δ_y⟩.

This function is known to be symmetric, non-negative, decreasing in the potential W, and to satisfy the geometric resolvent identity (see [Kir08, Section 5]), where ∂A denotes the boundary of A. By extending the definition of L_{A,W} to L_{A,W}(x) := Σ_{y∈Z^d} G_{A,W}(x, y) for all x ∈ Z^d, the previously stated properties of G_{A,W} translate into non-negativity, potential monotonicity and domain monotonicity of landscape functions (4). Our last general property is that λ_{A,W} ‖L_{A,W}‖_∞ is bounded from above and below by two positive constants uniformly in A and W. This is based on the following upper bound on the ∞ → ∞ norm of the semigroup, which can be found, for the continuous setting, in [Szn98, Chapter 3, Theorem 1.2]. We could not find a proof in the literature for the discrete case, so we provide one in Appendix A.
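The listed properties of G_{A,W} (symmetry, non-negativity, monotonicity in the potential, and the identity L_{A,W}(x) = Σ_y G_{A,W}(x, y)) are easy to verify numerically in d = 1. A sketch with our own helper names:

```python
import numpy as np

def green(W):
    """G_{A,W} = (-Delta_A + W)^{-1} on A = {1,...,n}, d = 1, Dirichlet."""
    n = len(W)
    H = np.diag(2.0 + np.asarray(W, dtype=float))
    H -= np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.linalg.inv(H)
```

Row sums of G recover the landscape function, and raising the potential pointwise can only decrease G entrywise.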

Theorem 8. For any finite
As an immediate consequence we obtain:

Proposition 9. For any finite A ⊆ Z^d and W : A → [0, ∞) we have 1 ≤ λ_{A,W} ‖L_{A,W}‖_∞ ≤ C(d).

Remark. The lower bound is sharp. It is attained when A is a single point of Z^d.
Proof. For the upper bound we use Theorem 8 and the substitution u = λ_{A,W} t. For the lower bound we just need to notice that the positivity of G_{A,W} implies sup_{‖f‖_∞ ≤ 1} ‖G_{A,W} f‖_∞ = ‖L_{A,W}‖_∞. By plugging in the eigenvector associated to λ_{A,W}, normalized in the sup-norm, we obtain ‖L_{A,W}‖_∞ ≥ 1/λ_{A,W}.
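Both the two-sided bound of Proposition 9 and the sharpness remark can be sanity-checked numerically in d = 1 (a sketch with our own helper names; it does not exhibit the optimal constant C(d)):

```python
import numpy as np

def product(W):
    """lambda_{A,W} * ||L_{A,W}||_inf on A = {1,...,n}, d = 1, Dirichlet."""
    n = len(W)
    H = np.diag(2.0 + np.asarray(W, dtype=float))
    H -= np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    lam = np.linalg.eigvalsh(H)[0]
    L = np.linalg.solve(H, np.ones(n))
    return float(lam * L.max())

# Single site A = {x}: H = (2d + W(x)), lambda = 2d + W(x),
# L = 1/(2d + W(x)), so the product equals 1 exactly (sharpness).
```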

Proof of Theorem 2 i)
We start with the asymptotic of the sup-norm of the landscape function on balls with 0 potential.
Proof. Let r > 0 and consider the function φ_r(x) := (r² − |x|²)/(2d) defined on Z^d. Clearly −∆φ_r(x) = 1 for all x ∈ Z^d and therefore L_{B(0,r)∩Z^d,0} − φ_r is harmonic in B(0, r) ∩ Z^d. By the Maximum Principle we obtain the corresponding two-sided comparison. Dividing by r²/(2d) and taking the limit r → ∞ gives the proposition.

Recall the definitions of ε_n, Y_n, x_n and y_n from Subsection 2.1 and notice that Theorem 1 can be restated as λ_{n,V} ∼ μ_d/y_n² as n → ∞, P-a.s. From domain monotonicity of landscape functions we obtain the lower bound. For (C1), V is identically 0 in B(x_n, Y_n) ∩ Z^d, so Theorem 1, Proposition 10 and translation invariance give the claim. For (C2), we use the second resolvent identity, domain monotonicity of the eigenvalue, and Propositions 9, 10 and 3 to obtain the claim. This concludes the proof of Theorem 2 i).
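The identity −∆φ_r = 1 for φ_r(x) = (r² − |x|²)/(2d) is a one-line computation (the second difference of x_i² is 2 in each coordinate); a quick check in d = 2 (our own sketch):

```python
import numpy as np

d = 2

def phi(r, x):
    """phi_r(x) = (r^2 - |x|^2) / (2d) on Z^d."""
    return (r**2 - float(np.dot(x, x))) / (2 * d)

def minus_laplacian_phi(r, x):
    """(-Delta phi_r)(x) = sum_i [2 phi(x) - phi(x + e_i) - phi(x - e_i)]."""
    e = np.eye(d, dtype=int)
    return sum(2 * phi(r, x) - phi(r, x + e[i]) - phi(r, x - e[i])
               for i in range(d))
```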

Proof of Theorem 2 ii)
We assume from this point on that d = 1. We set ⟦a, b⟧ := [a, b] ∩ Z for any two a, b ∈ Z. This proof is based on the following deterministic bound of the Green function in terms of the values of the potential.
Proposition 11. Let n ∈ N and W : ⟦1, n⟧ → [0, ∞). For any y ∈ ⟦1, n⟧ we have

Proof. We only prove the first inequality; the second one follows from reflecting W across the midpoint of ⟦1, n⟧ and the symmetry of the Green function. Fix some y ∈ ⟦1, n⟧. By potential monotonicity we have G_{⟦1,n⟧,W} ≤ G_{⟦1,n⟧,W 1_{⟦1,y⟧}}. Cramer's rule lets us write

G_{⟦1,n⟧,W 1_{⟦1,y⟧}}(1, y) = det([−∆_{⟦1,n⟧} + W 1_{⟦1,y⟧}]_{1→δ_y}) / det(−∆_{⟦1,n⟧} + W 1_{⟦1,y⟧}),

where [−∆_{⟦1,n⟧} + W 1_{⟦1,y⟧}]_{1→δ_y} is the matrix obtained by replacing the first column (in the canonical δ_j basis) of −∆_{⟦1,n⟧} + W 1_{⟦1,y⟧} by δ_y. By computing the determinant from such first column we see that it equals (−1)^{1+y} det(T) det(−∆_{⟦y+1,n⟧}) = n + 1 − y, since T is a lower triangular square matrix of size y − 1 with (−1) on all the diagonal, and det(−∆_{⟦1,k⟧}) = k + 1 for all k ∈ N (we use the convention det(−∆_∅) = 1).
Consider det(−∆_{⟦1,n⟧} + W 1_{⟦1,y⟧}) as a polynomial in (W(j))_{j=1}^y. It is clear that it does not contain squares, or greater powers, of any W(j). Moreover, the coefficient of each monomial is positive. The remaining constant coefficient is det(−∆_{⟦1,n⟧}) = n + 1, which means all coefficients of det(−∆_{⟦1,n⟧} + W 1_{⟦1,y⟧}) are positive and therefore the claimed bound follows.
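The determinant facts used above, det(−∆_{⟦1,k⟧}) = k + 1 and the positivity of all coefficients of det(−∆ + W 1_{⟦1,y⟧}) as a polynomial in W (so the determinant is monotone in the potential and at least n + 1), can be checked directly; `minus_laplacian` and `det_with_potential` are our own helpers:

```python
import numpy as np

def minus_laplacian(n):
    """-Delta on {1,...,n} with Dirichlet boundary conditions (d = 1)."""
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1))

def det_with_potential(n, W):
    """det(-Delta_{[1,n]} + diag(W)) for a potential W >= 0."""
    return float(np.linalg.det(minus_laplacian(n) + np.diag(W)))
```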
With the previous proposition in mind we define, for δ > 0, the random variables Z_δ^+(x), Z_δ^−(x) and the interval A_δ(x). Notice that V(x) is not included in the definition of Z_δ^±(x) and therefore Z_δ^+(x) and Z_δ^−(x) are independent for all x ∈ Z.
It follows from (4), the definitions above, potential monotonicity, and Propositions 9 and 11 that the corresponding upper bound holds. By domain monotonicity and translation invariance, the last maximum above is attained at the x ∈ Λ_n that also maximises #A_δ(x) = Z_δ^+(x) + Z_δ^−(x) + 1. Moreover, V being i.i.d. implies lim_{n→∞} max_{x∈Λ_n} [Z_δ^+(x) + Z_δ^−(x)] = ∞ P-a.s., and therefore Proposition 10 and Theorem 1 give the corresponding limit. The proof of Theorem 2 ii) is finished with the next proposition followed by the limit δ → 0.
Proposition 12. For all δ > 0,

Proof. We will prove this over an exponential subsequence; the extension is done as in the proof of Proposition 3, using the monotonicity of n ↦ max_{x∈Λ_n} [Z_δ^+(x) + Z_δ^−(x)]. In addition to Z_δ^+(x) being independent of Z_δ^−(x) for all x ∈ Z, we also have that all Z_δ^±(x) are equal in distribution to Z_δ^+(0).

Assume (C1). For all t > 0 we bound the Laplace transform of V(0); with this, the exponential Markov inequality and independence give a tail bound, and we then proceed with the distribution of Z_δ^+(0) + Z_δ^−(0). For any ε > 0 define t(ε) by ln(F(1/t(ε)) + e^{−√t(ε)}) ≤ ln F(0)/(1 + ε), so that the resulting probability is summable over the exponential subsequence n = e^m, m ∈ N.

Assume (C2). We follow the same steps as for (C1) above. To bound the Laplace transform of V(0) we consider the function f(t) := a[F(t)]^{1/η} for some a > 0. From (C2) it follows that there exists t_0 ∈ (0, ∞) such that F(t) ≤ 2c t^η for all t ∈ [0, t_0]; we then choose a accordingly. The exponential Markov inequality at t = nηδ, independence, and the Stirling bound (n/e)^n ≤ n! lead us to an estimate from which the tail bound follows. The function ⟦2, n − 1⟧ ∋ j ↦ (j − 1)^{−(j−1)} (n − j)^{−(n−j)} attains its unique maximum at j = (n + 1)/2, and therefore the maximum can be bounded explicitly. Finally, for ε > 0 we obtain a bound which is summable over the exponential subsequence n = e^m, m ∈ N.

A Proof of Theorem 8
Let (X_t)_{t≥0} be a continuous time simple symmetric random walk on Z^d with jump intensity 1, and let P_x, E_x be the associated probability measure and expectation conditioned on X_0 = x. We remark that (X_t)_{t≥0} is the Markov process associated to −∆/(2d) on ℓ²(Z^d).
For a finite A ⊆ Z^d and W : A → [0, ∞), the Feynman-Kac formula lets us write the kernel of the semigroup generated by −∆_A + W in terms of the walk. Depending on λ we distinguish two cases.
The term P_0[X_{2/λ} = 0] can be estimated using the characteristic function of X_s, where S_n is a discrete time simple symmetric random walk on Z starting at 0, and P is its probability measure. Recalling that the first component of X_t, which we denote X_t^1, is a continuous time simple symmetric random walk on Z with jump intensity 1/d, we have P_0[X_{t/λ} ∉ B_∞(0, r)] ≤ 2d P_0[X_{t/λ}^1 > r]. We split the series at n = 2et/(λd) and bound the two terms separately. With this we have shown ‖K_{t/λ} 1_A‖_∞ ≤ C(d)(1 + t^{d/2}) e^{−t} for t ≥ 1. Since K_{t/λ} 1(x) is always bounded by 1 we can add [inf_{0≤t≤1} (1 + t^{d/2}) e^{−t}]^{−1} to C(d), if necessary, to have the bound for all t ≥ 0. Replacing t by 2dλt gives the desired bound.

Figure 2: Empirical distribution of λ_{n,V} ‖L_{n,V}‖_∞ − μ_1/2 for d = 1 and V(0) distributed as Bernoulli(0.3), computed from 10^5 samples. The empirical mean (m) and empirical standard deviation (s) are shown in red and blue respectively.
Let φ be the normalized eigenvector of −∆_{B(x_n,Y_n)∩Z^d} associated to λ_{B(x_n,Y_n)∩Z^d,0}, extended by 0 to Λ_n. Then, by the Min-Max Principle, we obtain the upper bound, where μ(A) is the principal eigenvalue of the continuous Laplacian (−Σ_{i=1}^d ∂²/∂x_i²) defined on A with Dirichlet boundary conditions. The Faber-Krahn inequality states that over all domains of a given volume the one with the lowest principal eigenvalue is the ball; therefore, using μ(B(0, r)) = μ_d/r² and Vol(B(0, r)) = ω_d r^d we obtain χ = inf_{0<r<∞} [μ_d/r² + H ω_d r^d]. Evaluating at the only critical point r = (2μ_d/(H ω_d d))^{1/(d+2)}
