Abstract
We state a precise formulation of a conjecture concerning the product of the principal eigenvalue and the sup-norm of the landscape function of the discrete Anderson model restricted to a large box. We first provide the asymptotic of the principal eigenvalue as the size of the box grows, and then use it to give a partial proof of the conjecture. For the one dimensional case, we give a complete proof by means of Green function bounds.
1 Introduction and Results
The landscape function, introduced by Filoche and Mayboroda in [1], has been conjectured to capture the low eigenvalues of the Anderson model operator, discrete or continuous, restricted to a finite large box. We can find this conjecture loosely stated in [2, Equation 1.4] as: If 0 is the minimum of the support of the potential distribution then
where \(\{\lambda _i\}_{i}\) are the eigenvalues ordered increasingly, \(\{L_i\}_{i}\) are the local maxima of the landscape function ordered decreasingly, d is the dimension, and n is the linear size of the box. Numerical experiments with Bernoulli and uniform potential distributions support the conjecture (see [3, 4]), but to date there is no mathematical proof. In this article we give a precise formulation of the conjecture in the discrete setting for the case \(i=1\), that is, for the product of the principal (smallest) eigenvalue and the sup-norm of the landscape function on a large box. We claim that this product converges almost surely to an explicit dimensional constant, different from \(1+\frac{d}{4}\), as the size of the box goes to infinity, and we prove the \(\liminf \) bound. For the special case \(d=1\), we also prove the \(\limsup \) bound.
We start with some definitions and notation. Given a finite set \(A\subseteq {\mathbb {Z}}^d\) and a positive potential \(W:A\rightarrow [0,\infty )\) we consider the Schrödinger operator
$$\begin{aligned} -\Delta _{A}+W\quad \text {on }\ell ^2(A), \end{aligned}$$
where \(-\Delta _{A}\) has Dirichlet boundary conditions. From it, we define its principal eigenvalue and landscape function
$$\begin{aligned} \lambda _{A,W}{:}{=}\min \sigma (-\Delta _{A}+W),\qquad L_{A,W}{:}{=}(-\Delta _{A}+W)^{-1}\mathbbm {1}_A. \end{aligned}$$
Notice that \(\lambda _{A,W}>0\) and that \(L_{A,W}\) is well defined on A, since \(-\Delta _{A}>0\) and \(W\ge 0\).
Let \(V=\{V(x)\}_{x\in {\mathbb {Z}}^d}\) be an independent and identically distributed (i.i.d.) random non-negative potential whose probability measure and expectation we denote \({\mathbb {P}}\) and \({\mathbb {E}}\), and define for \(n\in {\mathbb {N}}\) the box \(\Lambda _n{:}{=}[-n,n]^d\cap {\mathbb {Z}}^d\). Our main objectives are the asymptotics of \( \lambda _{\Lambda _n,V}\) and \(\left\| L_{\Lambda _n,V}\right\| _\infty \) as \(n\rightarrow \infty \), where, as customary, the restriction of V to \(\Lambda _n\) is implicit.
In addition to V being non-negative (i.e., \(\mathbb {P}\left[ V(0)\in (-\infty ,0)\right] =0\)) we will always assume the distribution function \(F(t)=\mathbb {P}\left[ V(0)\le t\right] \) satisfies one of the following mutually exclusive conditions:
- (C1): \(0<F(0)<1\) (e.g., Bernoulli(p)),
- (C2): \(F(t)=c\,t^\eta (1+o(1))\) as \(t\downarrow 0\) for some \(c,\,\eta >0\) (e.g., Uniform(0, 1)).
We write n instead of \(\Lambda _n\) whenever convenient, for instance \(-\Delta _n=-\Delta _{\Lambda _n}\) and \(\lambda _{n,V}=\lambda _{\Lambda _n,V}\). We denote by \(\omega _d\) and \(\mu _d\) respectively, the volume of the unit ball in \({\mathbb {R}}^d\) and the principal eigenvalue of the continuous Laplacian (\(-\sum _{i=1}^d\partial ^2/\partial x_i^2\)) on such ball with Dirichlet boundary conditions.
We now state our conjecture and results. We are always assuming that V is non-negative and satisfies (C1) or (C2). We claim that:
Conjecture 0
\(\displaystyle \lim _{n\rightarrow \infty }\lambda _{n,V}\left\| L_{n,V}\right\| _\infty =\frac{\mu _d}{2d}\) \({\mathbb {P}}\)-a.s.
The heuristic argument behind this conjecture is that both \(\lambda _{n,V}\) and \(\left\| L_{n,V}\right\| _\infty \) are controlled by the largest ball inside \(\Lambda _n\) with zero or very low potential. If the radius of such a ball is r then, roughly, \(\lambda _{n,V}\) is proportional to \(r^{-2}\) and \(\left\| L_{n,V}\right\| _\infty \) is proportional to \(r^{2}\), making the product of “order one” in r. The appearance of the continuous constant \(\frac{\mu _d}{2d}\) is another instance of the solution of a discrete problem converging to that of the corresponding continuous one. The disagreement between the dimensional constants \(\frac{\mu _d}{2d}\) and \(1+\frac{d}{4}\) is simply explained by the fact that \(1+\frac{d}{4}\) was “guessed” from the numerical experiments, and the two constants are close to each other. For example, for \(d=1\) we have \(1+\frac{1}{4}=1.25\) and \(\frac{\mu _1}{2}=\frac{\pi ^2}{8}\approx 1.23\).
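To quantify “close to each other” beyond \(d=1\): using the classical identification \(\mu _d=j_{d/2-1,1}^2\), the square of the first positive zero of the Bessel function \(J_{d/2-1}\), the two constants can be compared numerically (a small illustrative script of ours, not from the article):

```python
import math

# mu_d = (first positive zero of J_{d/2-1})^2, the principal Dirichlet
# eigenvalue of the unit ball in R^d.
mu = {
    1: (math.pi / 2) ** 2,      # J_{-1/2}(x) ∝ cos(x)/sqrt(x): first zero pi/2
    2: 2.404825557695773 ** 2,  # first positive zero of J_0
    3: math.pi ** 2,            # J_{1/2}(x) ∝ sin(x)/sqrt(x): first zero pi
}
for d in (1, 2, 3):
    # conjectured constant mu_d/(2d) vs. the guessed constant 1 + d/4
    print(d, round(mu[d] / (2 * d), 4), 1 + d / 4)
```

For \(d=1,2,3\) the conjectured constant \(\frac{\mu _d}{2d}\) evaluates to roughly 1.23, 1.45, 1.64, against the guessed 1.25, 1.5, 1.75, so the two stay close in low dimensions.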
Using the Min–Max Principle and our hypothesis on V it is straightforward to show that \(\lambda _{n,V}\) is decreasing in n and converges to 0. Our first result is on the speed of this convergence; to state it we first need to define the deterministic sequences:
Theorem 1
\(\displaystyle \lim _{n\rightarrow \infty }y_n^2\lambda _{n,V} =\mu _d\) \({\mathbb {P}}\)-a.s.
The proof of Theorem 1 is given in Sect. 2, and it is divided into the \(\limsup \) and \(\liminf \) bounds. The \(\limsup \) bound follows from the Min–Max Principle and the previously mentioned heuristic of the largest ball with zero or very low potential. The \(\liminf \) bound is more involved; it uses a Lifshitz tails result from [5] and the connection between the integrated density of states of the (infinite) Anderson model and the cumulative distribution function of \(\lambda _{n,V}\).
Our second result is a partial proof of Conjecture 0, and a complete proof when \(d=1\).
Theorem 2
- (i) \(\displaystyle \varliminf _{n\rightarrow \infty }\lambda _{n,V} \left\| L_{n,V}\right\| _\infty \ge \frac{\mu _d}{2d}\) \({\mathbb {P}}\)-a.s.
- (ii) If \(d=1\) then \(\displaystyle \lim _{n\rightarrow \infty }\lambda _{n,V}\left\| L_{n,V}\right\| _\infty =\frac{\mu _1}{2}\) \({\mathbb {P}}\)-a.s.
Remark
The article [6] has a proof of (ii) in the continuous setting for the (C1) case. Both proofs follow the heuristic of the largest ball with zero potential, but differ in how they obtain a lower bound on \(\lambda _{n,V}\) and an upper bound on \(L_{n,V}\).
We prove Theorem 2 in Sect. 3 after deriving some general properties of landscape functions. Most notable among these properties is Proposition 9, which states that \(\lambda _{A,W}\left\| L_{A,W}\right\| _\infty \) is bounded from above and below by two dimensional constants uniformly on A and W. This is a consequence of an upper bound of the \(\ell ^\infty \rightarrow \ell ^\infty \) norm of the semigroup generated by the Schrödinger operator, which we adapted from the book [7] to the discrete setting. The statement (i) of Theorem 2 follows from domain monotonicity of the landscape function and the asymptotic of \(\lambda _{n,V}\) given in Theorem 1, while (ii) is based on the geometric resolvent identity and the restrictions of one dimensional geometry.
We tried to illustrate Theorem 2(ii) when \(V(0)\overset{\text {d}}{=}\) Bernoulli(p) by plotting \(\lambda _{n,V}\left\| L_{n,V}\right\| _\infty \) versus n for a single realization of the potential. However, the plot does not show any accumulation up to \(n=10^5\), suggesting that the convergence is very slow. Instead, we draw the empirical distribution of \(\lambda _{n,V}\left\| L_{n,V}\right\| _\infty -\frac{\mu _1}{2}\) from \(10^5\) realizations for \(n=10^2,10^3,10^4,10^5\). These are given in Fig. 1, from which we can see that the empirical distribution concentrates towards 0 as n increases.
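The one-dimensional statistic behind this experiment is cheap to reproduce. The following is a minimal sketch (the function name `product_stat`, the seed, and the sizes are our choices, not from the article): it builds \(-\Delta _n+V\) as a tridiagonal matrix, takes the smallest eigenvalue, and solves for the landscape function.

```python
import numpy as np

rng = np.random.default_rng(0)

def product_stat(n, p=0.5):
    """lambda_{n,V} * ||L_{n,V}||_inf for one Bernoulli(p) potential on [-n,n] ∩ Z."""
    N = 2 * n + 1
    V = rng.binomial(1, p, size=N).astype(float)
    # -Δ with Dirichlet boundary conditions is tridiag(-1, 2, -1); add the potential.
    H = np.diag(2.0 + V) - np.eye(N, k=1) - np.eye(N, k=-1)
    lam = np.linalg.eigvalsh(H)[0]        # principal eigenvalue
    L = np.linalg.solve(H, np.ones(N))    # landscape function L = H^{-1} 1
    return lam * L.max()

print(product_stat(1000))   # slowly approaches pi^2/8 ≈ 1.2337 as n grows
```

Consistently with Proposition 9 below, the returned value always lies between 1 and a dimensional constant, even for moderate n.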
In the proofs that follow, C or C(d) is a finite positive constant that may only depend on the dimension and can change from line to line. By \(a_t\sim _t b_t\) we mean \(\lim _{t\rightarrow \infty }\frac{a_t}{b_t}=1\).
2 Principal Eigenvalue (Proof of Theorem 1)
2.1 The Lim Sup Bound
The goal of this subsection is to show
$$\begin{aligned} \varlimsup _{n\rightarrow \infty }y_n^2\lambda _{n,V}\le \mu _d\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$(1)
As usual, getting a good upper bound on \(\lambda _{n,V}\) just requires choosing a good test function and applying the Min–Max Principle.
Define \(Y_n\) to be the radius of the largest open Euclidean ball contained in \(\Lambda _n\) in which V is uniformly bounded by \(\epsilon _n\), that is,
where \(B(x,r)=\{x'\in {\mathbb {R}}^d\,\vert \,\left| x-x'\right| <r\}\). Also, define \(x_n\in \Lambda _n\) to be the center of a ball at which the maximum is attained (it may not be unique); and let \(\phi \in \ell ^2(B(x_n,Y_n)\cap {\mathbb {Z}}^d)\) be the normalized eigenvector of \(-\Delta _{B(x_n,Y_n)}\) associated to \(\lambda _{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0}\), extended to \(\Lambda _n\) by 0. Then, by the Min–Max Principle, we have
To conclude the proof of (1) we use that the event \(\{\lim _{n\rightarrow \infty }Y_n/y_n=1\}\) has probability one (this will be proven in Proposition 3), together with translation invariance and the limit \(\displaystyle \lim _{r\rightarrow \infty }r^2\lambda _{B(0,r)\cap {\mathbb {Z}}^d,0}=\mu _d\), to obtain
The limit \(\displaystyle \lim _{r\rightarrow \infty }r^2\lambda _{B(0,r)\cap {\mathbb {Z}}^d,0}=\mu _d\) is a consequence of the discrete Laplacian converging to the continuous one, or random walk converging to Brownian motion. A proof following the latter approach can be found in [8, Proposition 8.4.2], where an extra factor d appears as a result of the probabilistic normalization of the Laplacian.
Proposition 3
\(\displaystyle Y_n\sim _n y_n\) \({\mathbb {P}}\)-a.s.
Proof
If \(Y_n< y_n(1-\delta )^{1/d}\) for some \(0<\delta <1\), then the inscribed ball of each of the \(\left( \frac{2n}{2 y_n(1-\delta )^{1/d}}\right) ^d(1+o(1))\) disjoint cubes of side length \( \big \lceil 2 y_n(1-\delta )^{1/d} \big \rceil \) that make up \(\Lambda _n\) contains a point x with \(V(x)>\epsilon _n\). Approximating the number of points in such balls by \(\#(B(0,r)\cap {\mathbb {Z}}^d)\sim _r{\text {Vol}}(B(0,r))=\omega _d r^d\), we obtain for large n
which is summable. Therefore, the Borel–Cantelli Lemma and sending \(\delta \rightarrow 0\) give
We show the \(\limsup \) bound, first on an exponential subsequence, and then we extend it to the whole sequence. The extension argument requires a monotone sequence of random variables, which \(Y_n\) may fail to be if (C2) holds. For this reason we introduce
which is increasing in n, decreasing in \(n'\), and satisfies \(Y_{n,n}=Y_n\). Since for \(\delta >0\) and large m we have
the Borel–Cantelli Lemma and the limit \(\delta \rightarrow 0\) give
For \(n\in {\mathbb {N}}\) define \(m(n)\in {\mathbb {N}}\) by \(\left\lfloor e^{m(n)} \right\rfloor \le n<\left\lfloor e^{m(n)+1} \right\rfloor \). Since \(y_n\sim _n y_{\left\lfloor e^{m(n)} \right\rfloor }\) and \(Y_n\le Y_{\left\lfloor e^{m(n)+1} \right\rfloor ,\left\lfloor e^{m(n)} \right\rfloor }\), we conclude
\(\square \)
2.2 The Lim Inf Bound
In this subsection we show that
$$\begin{aligned} \varliminf _{n\rightarrow \infty }y_n^2\lambda _{n,V}\ge \mu _d\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$(2)
The main input for this is a result from [5] on the Lifshitz tail of the integrated density of states. We recall the integrated density of states of the Anderson model is a deterministic distribution function given by the \({\mathbb {P}}\)-a.s. limit
where the eigenvalues are counted with multiplicities. The central hypothesis of [5] is a scaling assumption on the cumulant-generating function of V(0), \(H(t){:}{=}\ln \mathbb {E}\left[ e^{-t V(0)}\right] \), which we will check in the following proposition. To state it, we first need to define
Proposition 4
(Scaling assumption of [5]) For any compact \(K\subseteq (0,\infty )\) we have
uniformly on \(y\in K\).
Proof
First assume (C1). In this case \(\frac{\alpha ^{d+2}(t)}{t}=1\) and \(\frac{t}{\alpha ^d(t)}=t^{2/(d+2)}\). Since for \(t>0\) we have
we conclude that
Now assume (C2). In this case \(\frac{\alpha ^{d+2}(t)}{t}=\frac{1}{\ln t}\) and \(\frac{t}{\alpha ^d(t)}=t^{2/(d+2)}(\ln t)^{d/(d+2)}\). We introduce a parameter \(0<\delta <1\) and observe that
which implies
For the \(\displaystyle \varliminf _{t\rightarrow \infty }\inf _{y\in K}\) we use
to obtain
\(\square \)
Having checked the scaling assumption on H, we now have the Lifshitz tail result:
Theorem 5
[5, Theorem 1.3] Define the constant
then
Remark
The function \(t\mapsto \alpha (t)\) is eventually increasing so \(\alpha ^{-1}(t)\) is well defined for large t. The original statement from [5] is far more general; our conditions on V make H fall into what is there called the \((\gamma =0)\)-class.
The constant \(\chi \) can be explicitly computed by means of the Faber–Krahn inequality:
Proposition 6
\(\displaystyle \chi =(d+2)\left( \frac{{\widetilde{H}}\omega _d}{2}\right) ^{2/(d+2)}\left( \frac{\mu _d}{d}\right) ^{d/(d+2)}\).
Proof
Starting from \(\chi =\inf _{g\in H^1({\mathbb {R}}^d),\,\left\| g\right\| _2=1}\left( \left\| \nabla g\right\| _2^2+D{\text {Vol}}({\text {supp}}g)\right) \) we see that we only need to consider the finite volume case. Hence
where \(\mu (A)\) is the principal eigenvalue of the continuous Laplacian (\(-\sum _{i=1}^d\partial ^2/\partial x_i^2\)) defined on A with Dirichlet boundary conditions. The Faber–Krahn inequality states that, among all domains of a given volume, the ball has the lowest principal eigenvalue; therefore, using \(\mu (B(0,r))=\mu _d/r^2\) and \({\text {Vol}}(B(0,r))=\omega _d r^d\) we obtain
Evaluating at the only critical point \(\displaystyle r=\left( \frac{2\mu _d}{{\widetilde{H}}\omega _d d}\right) ^{1/(d+2)}\) finishes the proof. \(\square \)
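For the reader's convenience, the omitted one-variable minimization can be carried out explicitly. Writing the infimum over balls of radius \(r\) with volume-term coefficient \(\widetilde H\) (our reconstruction, forced by the critical point stated above):

```latex
\[
\frac{d}{dr}\left(\frac{\mu _d}{r^2}+\widetilde{H}\,\omega _d\, r^d\right)
=-\frac{2\mu _d}{r^3}+\widetilde{H}\,\omega _d\, d\, r^{d-1}=0
\quad\Longleftrightarrow\quad
r^{d+2}=\frac{2\mu _d}{\widetilde{H}\,\omega _d\, d},
\]
and at this critical radius
\[
\frac{\mu _d}{r^2}+\widetilde{H}\,\omega _d\, r^d
=\frac{\mu _d}{r^2}\left(1+\frac{2}{d}\right)
=\frac{d+2}{d}\,\mu _d\left(\frac{\widetilde{H}\,\omega _d\, d}{2\mu _d}\right)^{2/(d+2)}
=(d+2)\left(\frac{\widetilde{H}\,\omega _d}{2}\right)^{2/(d+2)}
\left(\frac{\mu _d}{d}\right)^{d/(d+2)}.
\]
```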
We now exploit the connection between I and the distribution of \(\lambda _{n,V}\). This is a classic argument that can be found, for instance, in [9, Equation 4.46]. We present here a slightly modified version. Let \(n\in {\mathbb {N}}\) and define a new potential
Clearly \(V\le V'\) so for any \(k\in {\mathbb {N}}\) and \(t\in {\mathbb {R}}\) we have
where \(-\Delta _{(2n+2)k}+V'\) has (by definition) Dirichlet boundary conditions at \(\Gamma \). These Dirichlet boundary conditions at \(\Gamma \) imply that \(-\Delta _{(2n+2)k}+V'\) is a direct sum of \((2k)^d\) independent terms, all equal in distribution to \(-\Delta _{n}+V\). Therefore, by taking the limit \(k\rightarrow \infty \) on the above inequality and applying the Law of Large Numbers, we obtain
From the previous inequality, Theorem 5 and Proposition 6 we have
where we have introduced \(\displaystyle f(t){:}{=}\frac{{\widetilde{H}}\omega _d\mu _d^{d/2}\alpha ^{-1}(t^{1/2})}{t}\). To finish the proof we need the asymptotic of \(f^{-1}(t)\) as \(t\rightarrow \infty \):
Proposition 7
-
(i)
For (C1), \(\displaystyle f^{-1}(t)=\frac{1}{\mu _d}\left( \frac{t}{\omega _d\left| \ln F(0)\right| }\right) ^{2/d}\).
-
(ii)
For (C2), \(\displaystyle f^{-1}(t)\sim _t\frac{1}{\mu _d}\left( \frac{d\,t}{2\eta \omega _d\ln t}\right) ^{2/d}\).
Proof
For (C1) there is nothing to prove since \(f(t)=\omega _d\left| \ln F(0)\right| \mu _d^{d/2}t^{d/2}\).
For (C2) we have
with all the constants collected in \(k=\frac{2\eta \omega _d\mu _d^{d/2}}{d+2}\). Since \(\alpha \) is eventually increasing and has infinite limit, the same is true for f, in particular \(f^{-1}(t)\) exists for large t. By solving for the \(\alpha ^{-1}\) term in the first equality above, applying \(\alpha \) and simplifying some exponents we arrive at \(t=\left( \frac{f(t)}{k\ln \left[ t f(t)/k\right] }\right) ^{2/d}\). Replacing t by \(f^{-1}(t)\) we have
In order to deal with the \(\ln [t f^{-1}(t)]\) term above, we multiply the equality by t and then take the logarithm to obtain
Since \(t f^{-1}(t)\xrightarrow [t\rightarrow \infty ]{}\infty \), the last equality implies \(\ln [t f^{-1}(t)]\sim _t \frac{d+2}{d}\ln t\) and therefore
\(\square \)
Going back to (3) with \(n=\left\lfloor e^m \right\rfloor \) and \(t=1/f^{-1}((1+\delta )d m)\) for some \(m\in {\mathbb {N}}\) and \(\delta >0\), we see that
which is summable over \(m\in {\mathbb {N}}\). Therefore, by the Borel–Cantelli Lemma we have
As in the proof of Proposition 3, we define \(m(n)\in {\mathbb {N}}\) by \(\left\lfloor e^{m(n)} \right\rfloor \le n<\left\lfloor e^{m(n)+1} \right\rfloor \), so that \(\ln n\sim _n (m(n)+1)\). Since \(n\mapsto \lambda _{n,V}\) is monotone decreasing we have
The proof of (2) is finished by sending \(\delta \rightarrow 0\) and noticing that Proposition 7 implies \(f^{-1}(d \ln n)\sim _n\frac{y_n^2}{\mu _d}\) for both (C1) and (C2).
3 Landscape Function
We start this section by deriving some general properties of landscape functions.
For a finite set \(A\subseteq {\mathbb {Z}}^d\) and a positive potential \(W:A\rightarrow [0,\infty )\) we introduce the Green function with 0 as spectral parameter
This function is known to be symmetric, non-negative, and decreasing in the potential W. It is also known to satisfy the geometric resolvent identity:
where \(\partial A'{:}{=}\{(i,j)\in A'\times ({\mathbb {Z}}^d{\setminus } A')\,\vert \,\left| i-j\right| =1\}\) is the boundary of \(A'\) (see [10, Section 5.3]). By extending the definition of \(L_{A,W}\) to \(L_{A,W}(x){:}{=}\sum _{y\in {\mathbb {Z}}^d}G_{A,W}(x,y)\) for all \(x\in {\mathbb {Z}}^d\), the previously stated properties of \(G_{A,W}\) translate into non-negativity, potential monotonicity and domain monotonicity of landscape functions:
- \(L_{A,W}\ge 0\) and \(L_{A,W}(x)=0\) if \(x\in {\mathbb {Z}}^d\setminus A\).
- If \(0\le W'\le W\) then \(L_{A,W}\le L_{A,W'}\).
- If \(A'\subseteq A\) then
$$\begin{aligned} L_{A,W}(x)=L_{A',W}(x)+\sum _{(i,j)\in \partial A'}G_{A',W}(x,i)L_{A,W}(j)\ge L_{A',W}(x). \end{aligned}$$(4)
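In \(d=1\) these properties, and identity (4) in particular, can be verified numerically for a small box (a sanity-check sketch; the sizes and the potential are arbitrary choices of ours). For \(A'=\llbracket 1,M\rrbracket \subseteq A=\llbracket 1,N\rrbracket \) the boundary \(\partial A'\) consists of the pairs (1, 0) and \((M,M+1)\), and \(L_{A,W}\) vanishes at 0, so only the right boundary pair contributes.

```python
import numpy as np

rng = np.random.default_rng(1)

def laplacian(N):
    """-Δ on ⟦1,N⟧ with Dirichlet boundary conditions (d = 1)."""
    return 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

N, M = 12, 7                        # A = ⟦1,12⟧ and A' = ⟦1,7⟧ ⊆ A
W = rng.uniform(size=N)             # a positive potential on A
HA  = laplacian(N) + np.diag(W)
HAp = laplacian(M) + np.diag(W[:M])

LA  = np.linalg.solve(HA,  np.ones(N))   # landscape function on A
LAp = np.linalg.solve(HAp, np.ones(M))   # landscape function on A'
GAp = np.linalg.inv(HAp)                 # Green function on A'

# Identity (4): only the pair (M, M+1) contributes, since L_A(0) = 0.
rhs = LAp + GAp[:, M - 1] * LA[M]        # LA[M] is the value at site M+1
assert np.allclose(LA[:M], rhs)          # identity (4)
assert np.all(LA[:M] >= LAp - 1e-12)     # domain monotonicity
```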
Our last general property is that \(\lambda _{A,W}\left\| L_{A,W}\right\| _\infty \) is bounded from above and below by two positive constants uniformly on A and W. This is based on the following upper bound of the \(\ell ^\infty \rightarrow \ell ^\infty \) norm of the semigroup, which can be found, for the continuous setting, in [7, Chapter 3, Theorem 1.2]. We could not find a proof in the literature for the discrete case, so we provide one in Appendix A.
Theorem 8
For a finite \(A\subseteq {\mathbb {Z}}^d\) and \(W:A\rightarrow [0,\infty )\) we have
As an immediate corollary we obtain:
Proposition 9
For a finite \(A\subseteq {\mathbb {Z}}^d\) and \(W:A\rightarrow [0,\infty )\) we have
Remark
The bound 1 is sharp. It is attained when A is a single point of \({\mathbb {Z}}^d\).
Proof
For the upper bound we use Theorem 8 and the substitution \(u=\lambda _{A,W}t\):
For the lower bound we just need to notice that the positivity of \(G_{A,W}\) implies
and therefore, denoting \(\phi \in \ell ^2(A)\) an eigenvector of \(-\Delta _A+W\) associated to \(\lambda _{A,W}\), we obtain
\(\square \)
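One way to spell out the final computation in the proof above (our reconstruction): positivity of \(G_{A,W}\) lets one choose the principal eigenvector \(\phi \) entrywise non-negative (Perron–Frobenius), and then, since \((-\Delta _A+W)L_{A,W}=\mathbbm {1}_A\),

```latex
\[
\langle \phi ,\mathbbm{1}_A\rangle
=\langle \phi ,(-\Delta _A+W)L_{A,W}\rangle
=\lambda _{A,W}\langle \phi ,L_{A,W}\rangle
\le \lambda _{A,W}\left\| L_{A,W}\right\| _\infty \langle \phi ,\mathbbm{1}_A\rangle ,
\]
so that $\lambda _{A,W}\left\| L_{A,W}\right\| _\infty \ge 1$.
```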
3.1 Proof of Theorem 2(i)
We start with the asymptotic of the sup-norm of the landscape function on balls with 0 potential.
Proposition 10
\(\displaystyle \left\| L_{B(0,r)\cap {\mathbb {Z}}^d,0}\right\| _\infty \sim _r\frac{r^2}{2d}\).
Proof
Let \(r>0\) and consider the function \(\phi _r(x){:}{=}\frac{r^2-\left| x\right| ^2}{2d}\) defined on \({\mathbb {Z}}^d\). Clearly \(-\Delta \phi _r(x)=1\) for all \(x\in {\mathbb {Z}}^d\) and therefore \(L_{B(0,r)\cap {\mathbb {Z}}^d,0}-\phi _r\) is harmonic in \(B(0,r)\cap {\mathbb {Z}}^d\). By the Maximum Principle (see [8, Theorem 6.2.1]) we have
where \(\partial ^+ A{:}{=}\left\{ x\in {\mathbb {Z}}^d{\setminus } A\,\Big \vert \,\exists \,y\in A\text { such that }\left| x-y\right| =1\right\} \) is the outer boundary of \(A\subseteq {\mathbb {Z}}^d\). Dividing by \(\frac{r^2}{2d}\) and taking the limit \(r\rightarrow \infty \) give the proposition. \(\square \)
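The claim \(-\Delta \phi _r=1\) used in the proof can be checked in one line (our verification):

```latex
\[
-\Delta \phi _r(x)
=\sum _{|e|=1}\big(\phi _r(x)-\phi _r(x+e)\big)
=\frac{1}{2d}\sum _{|e|=1}\big(|x+e|^2-|x|^2\big)
=\frac{1}{2d}\sum _{|e|=1}\big(2\,x\cdot e+1\big)=1,
\]
since the $2d$ unit vectors $e$ come in opposite pairs, so $\sum _{|e|=1}x\cdot e=0$.
```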
Recall the definitions of \(Y_n\) and \(x_n\) from Sect. 2.1. From domain monotonicity of landscape functions we have
For (C1), V is identically 0 in \(B(x_n,Y_n)\cap {\mathbb {Z}}^d\) so Theorem 1, translation invariance and Propositions 3, 10 give
For (C2), we use the second resolvent identity, domain monotonicity of the eigenvalue, and Propositions 3, 9, 10 to obtain
which implies
This concludes the proof of Theorem 2(i).
3.2 Proof of Theorem 2(ii)
We assume from this point on that \(d=1\). We set \(\llbracket a , b\rrbracket {:}{=}[a,b]\cap {\mathbb {Z}}\) for any \(a,b\in {\mathbb {Z}}\), \(a\le b\). This proof is based on the following deterministic bound of the Green function in terms of the values of the potential.
Proposition 11
Let \(a,b\in {\mathbb {Z}}\), \(a\le b\) and \(W:\llbracket a , b\rrbracket \rightarrow [0,\infty )\). For any \(x\in \llbracket a , b\rrbracket \) we have
Proof
By translation invariance and isotropy of the Laplacian, it is enough to show the first inequality assuming that \(a=1\), \(b\ge 1\), \(W:\llbracket 1 , b\rrbracket \rightarrow [0,\infty )\).
Fix some \(x\in \llbracket 1 , b\rrbracket \). By potential monotonicity and Cramer’s rule we have
where \([-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket }]_{1\rightarrow \delta _x}\) is the matrix obtained by replacing the first column of \(-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket }\) by \(\delta _x\). Computing the determinant on the numerator from its first column we see that
since T is a lower triangular square matrix of size \(x-1\) with \((-1)\) on all the diagonal, and \(\det (-\Delta _{\llbracket 1 , k\rrbracket })=k+1\) for all \(k\in {\mathbb {N}}\). Here we have used that the determinant of an empty matrix is 1.
Consider \(\det (-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket })\) as a polynomial in \((W(j))_{j=1}^x\). It is clear that it does not contain squares, or greater powers, of any W(j). Moreover, a straightforward computation shows that the coefficient of \(W(j_1)W(j_2)\cdots W(j_{k-1})W(j_k)\), with \(1\le j_1<j_2<\cdots<j_{k-1}<j_k\le x\) and \(1\le k\le x\), is
The remaining coefficient (the constant one) is \(\det (-\Delta _{\llbracket 1 , b\rrbracket })=b+1\), which means all coefficients of \(\det (-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket })\) are positive. Therefore, by keeping only the linear terms, we obtain
\(\square \)
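The determinant identity \(\det (-\Delta _{\llbracket 1 , k\rrbracket })=k+1\) used in the proof is easy to confirm numerically (a small sanity check, not part of the proof):

```python
import numpy as np

for k in range(1, 9):
    # -Δ on ⟦1,k⟧ with Dirichlet boundary conditions is tridiag(-1, 2, -1).
    H = 2.0 * np.eye(k) - np.eye(k, k=1) - np.eye(k, k=-1)
    assert round(np.linalg.det(H)) == k + 1
```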
With the previous proposition in mind we define for \(\delta >0\) and \(x\in {\mathbb {Z}}\)
Notice that V(x) is not included in the definition of \(Z^\pm _\delta (x)\) and therefore \(Z^+_\delta (x)\) and \(Z^-_\delta (x)\) are independent for all \(x\in {\mathbb {Z}}\). Moreover, \(Z^\pm _\delta (x)\) is equal in distribution to \(Z^+_\delta (0)\) for all \(x\in {\mathbb {Z}}\).
It follows from (4), the definitions above, potential monotonicity, and Propositions 9, 11 that
By domain monotonicity and translation invariance, the last maximum above is attained at the \(x\in \Lambda _n\) that also maximises \(\#A_\delta (x)=Z^+_\delta (x)+Z^-_\delta (x)+1\). Moreover, V being i.i.d. implies
and therefore Theorem 1 and Proposition 10 give
The proof of Theorem 2(ii) is finished with the next proposition followed by the limit \(\delta \rightarrow 0\).
Proposition 12
For all \(\delta >0\), \(\displaystyle \varlimsup _{n\rightarrow \infty }\frac{1}{2y_n}\max _{x\in \Lambda _n}[Z^+_\delta (x)+Z^-_\delta (x)]\le 1\quad {\mathbb {P}}\)-a.s.
Proof
We will prove this over an exponential subsequence by means of the Borel–Cantelli Lemma; the extension to the whole sequence is done as in the proof of Proposition 3 using the monotonicity of \(n\mapsto \max _{x\in \Lambda _n}[Z^+_\delta (x)+Z^-_\delta (x)]\).
Assume (C1). For all \(t>0\) we have
With this, we use the exponential Markov inequality and independence to obtain
Now we proceed with the distribution of \(Z^+_\delta (0)+Z^-_\delta (0)\) as
For any \(\epsilon >0\) define \(t(\epsilon )\) by \(\ln \left( F\left( 1/\sqrt{t(\epsilon )}\right) +e^{-\sqrt{t(\epsilon )}}\right) \le \frac{\ln F(0)}{1+\epsilon }\) so that
which is summable over the exponential subsequence \(n=\left\lfloor e^m \right\rfloor \), \(m\in {\mathbb {N}}\).
Assume (C2). We follow the same steps as for (C1) above. To bound the Laplace transform of V(0) we consider the function \(f(t){:}{=}a[F(t)]^{1/\eta }\) for some \(a>0\). From (C2) it follows that there exists \(t_0\in (0,\infty )\) such that \(F(t)\le 2\,c\,t^\eta \) for all \(t\in [0,t_0]\). Therefore, by choosing \(a{:}{=}(t_0^{-1}+(2c)^{1/\eta })^{-1}\) we obtain
Moreover, since \(\mathbb {P}\left[ f(V(0))\le t\right] =\left( \frac{t}{a}\right) ^\eta \) for \(t\in [0,a]\), we have
The exponential Markov inequality at \(t=n\eta \delta \), independence, and the Stirling bound \((n/e)^n\le n!\) lead us to
from which follows
The function \([2,n-1]\ni j\mapsto \left( j-1\right) ^{-(j-1)}(n-j)^{-(n-j)}\) attains its unique maximum at \(j=(n+1)/2\), therefore
Finally, for any \(\epsilon >0\) we have
which is summable over the exponential subsequence \(n=\left\lfloor e^m \right\rfloor \), \(m\in {\mathbb {N}}\). \(\square \)
Data Availability
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
References
Filoche, M., Mayboroda, S.: Universal mechanism for Anderson and weak localization. Proc. Natl. Acad. Sci. 109(37), 14761–14766 (2012). https://doi.org/10.1073/pnas.1120432109
David, G., Filoche, M., Mayboroda, S.: The landscape law for the integrated density of states. Adv. Math. 390, 107946 (2021). https://doi.org/10.1016/j.aim.2021.107946
Arnold, D.N., David, G., Filoche, M., Jerison, D., Mayboroda, S.: Computing spectra without solving eigenvalue problems. SIAM J. Sci. Comput. 41(1), 69–92 (2019). https://doi.org/10.1137/17M1156721
Arnold, D.N., David, G., Jerison, D., Mayboroda, S., Filoche, M.: Effective confining potential of quantum states in disordered media. Phys. Rev. Lett. 116, 056602 (2016). https://doi.org/10.1103/PhysRevLett.116.056602
Biskup, M., König, W.: Long-time tails in the parabolic Anderson model with bounded potential. Ann. Probab. 29(2), 636–682 (2001). https://doi.org/10.1214/aop/1008956688
Chenn, I., Wang, W., Zhang, S.: Approximating the ground state eigenvalue via the effective potential. Nonlinearity 35(6), 3004–3035 (2022). https://doi.org/10.1088/1361-6544/ac692a
Sznitman, A.-S.: Brownian Motion, Obstacles and Random Media. Springer Monographs in Mathematics, p. 353. Springer, Berlin (1998). https://doi.org/10.1007/978-3-662-11281-6
Lawler, G.F., Limic, V.: Random Walk: A Modern Introduction. Cambridge Studies in Advanced Mathematics, vol. 123, p. 364. Cambridge University Press, Cambridge (2010). https://doi.org/10.1017/CBO9780511750854
Aizenman, M., Warzel, S.: Random Operators: Disorder Effects on Quantum Spectra and Dynamics. Graduate Studies in Mathematics, vol. 168, p. 326. American Mathematical Society, Providence (2015). https://doi.org/10.1090/gsm/168
Kirsch, W.: An invitation to random Schrödinger operators. In: Random Schrödinger Operators. Panoramas et Synthèses, vol. 25, pp. 1–119. Société Mathématique de France, Paris (2008). https://smf.emath.fr/publications/operateurs-de-schrodinger-aleatoires
Barlow, M.T.: Random Walks and Heat Kernels on Graphs. London Mathematical Society Lecture Note Series, vol. 438, p. 226. Cambridge University Press, Cambridge (2017). https://doi.org/10.1017/9781107415690
Acknowledgements
This work has benefited from support provided by the University of Strasbourg Institute for Advanced Study (USIAS), within the French national program “Investment for the future” (IdEx-Unistra). This research has been supported by Grant PID2021-124195NB-C31 of Agencia Estatal de Investigación (Spain).
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Additional information
Communicated by Simone Warzel.
Appendix A Proof of Theorem 8
Let \((X_t)_{t\ge 0}\) be a continuous time simple symmetric random walk on \({\mathbb {Z}}^d\) with jump intensity 1, and let \(\textrm{P}_x\), \(\textrm{E}_x\) be the associated probability measure and expectation conditioned on \(X_0=x\). We remark that \((X_t)_{t\ge 0}\) is the Markov process with generator \(-\Delta /(2d)\) on \(\ell ^2({\mathbb {Z}}^d)\).
For a finite \(A\subseteq {\mathbb {Z}}^d\) and \(W:A\rightarrow [0,\infty )\), the Feynman–Kac formula lets us write the action of the semigroup generated by \(-\Delta _{A}+W\) as
where \(\tau _A{:}{=}\inf \{t\ge 0\,\vert \, X_t\notin A\}\) is the exit time of A. To simplify notation we set \(\lambda =\frac{\lambda _{A,W}}{2d}\) and \(K_t=\exp \left( -\frac{t}{2d}[-\Delta _{A}+W]\right) \). Depending on \(\lambda \) we distinguish two cases.
Case 1: \(\lambda \le \frac{1}{d}\).
Let \(\overline{B_\infty }(x,r){:}{=}\{y\in {\mathbb {R}}^d\,\vert \,\left\| x-y\right\| _\infty \le r\}\). For \(t\ge 1\), \(x\in A\) and \(r=2t\sqrt{\frac{e}{\lambda d}}\) we decompose \(K_{t/\lambda }\mathbbm {1}_A(x)\) as
and bound each term as follows.
For the first term we use that \(t\mapsto K_t\) is a semigroup to obtain
We can estimate \(\textrm{P}_{0}\left[ X_{2/\lambda }=0\right] \) using the characteristic function of \(X_s\), which is \(\phi _s(\theta )=\exp \left( -s+\frac{s}{d}\sum _{i=1}^d\cos (\theta _i)\right) \), by means of
Laplace’s method applied to the integral inside the square brackets yields \(\textrm{P}_{0}\left[ X_{s}=0\right] s^{d/2}\xrightarrow [s\rightarrow \infty ]{}\left( \frac{d}{2\pi }\right) ^{d/2}\), and therefore \(\textrm{P}_{0}\left[ X_{2/\lambda }=0\right] \le C(d)\lambda ^{d/2}\).
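For \(d=1\) this return-probability asymptotic can be checked directly from the characteristic function, since \(\textrm{P}_{0}[X_s=0]=\frac{1}{2\pi }\int _{-\pi }^{\pi }e^{-s(1-\cos \theta )}\,d\theta \) (a numerical sketch; the grid size is our choice):

```python
import math
import numpy as np

def return_prob(s, num=20000):
    """P_0[X_s = 0] for the rate-1 walk on Z, via its characteristic function."""
    # Mean over a uniform grid = (2π)^{-1} ∫ over the period, spectrally accurate.
    theta = np.linspace(-math.pi, math.pi, num, endpoint=False)
    return float(np.mean(np.exp(-s * (1.0 - np.cos(theta)))))

s = 200.0
print(return_prob(s) * math.sqrt(s))   # → (1/(2π))^{1/2} ≈ 0.399 as s grows
```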
For the second term we use [11, Lemma 4.6] which states that
where \(S_n\) is a discrete time simple symmetric random walk on \({\mathbb {Z}}\) starting at 0, and P is its probability measure. Recalling that the first component of \(X_{t}\), which we denote \(X_t^1\), is a continuous time simple symmetric random walk on \({\mathbb {Z}}\) with jump intensity 1/d we have
We split the series at \(n=\frac{2e t}{\lambda d}\) and bound the two terms separately:
With this we have shown \(\left\| K_{t/\lambda }\mathbbm {1}_A\right\| _\infty \le C(d)\left( 1+t^{d/2}\right) e^{-t}\) for \(t\ge 1\). Since \(K_{t/\lambda }\mathbbm {1}(x)\) is always bounded by 1 we can add \(\left( \inf _{0\le t\le 1}\left( 1+t^{d/2}\right) e^{-t}\right) ^{-1}\) to C(d), if necessary, to have
Replacing t by \(2d\lambda t\) gives the desired bound.
Case 2: \(\lambda \ge \frac{1}{d}\).
This case follows from the heat kernel bound
taken from [11, Theorem 5.17]. We proceed as before but now we use \(\overline{B_1}(x,r){:}{=}\{y\in {\mathbb {R}}^d\,\vert \,\left\| x-y\right\| _1\le r\}\). For \(t\ge 0\) and \(r=\lambda t d e^2\) we have
Clearly \(r\ge e t\), so we can apply the heat kernel bound to obtain
and therefore \(\left\| K_{t}\mathbbm {1}_A\right\| _\infty \le C(d)\left( 1+[\lambda t]^{d/2}\right) e^{-\lambda t}\). Replacing t by \(2dt\) gives the desired bound.
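As a closing sanity check, the Feynman–Kac representation used at the start of this appendix can be tested numerically in \(d=1\) on a small box (the box, the potential, and the sample size are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(2)

# A = {-3,...,3} ⊂ Z, indexed 0..6, with an arbitrary non-negative potential
N, d, t = 7, 1, 1.0
W = np.array([0.0, 1.0, 0.5, 0.0, 2.0, 0.3, 0.0])

# Spectral computation of K_t 1_A = exp(-(t/2d)(-Δ_A + W)) 1_A
H = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1) + np.diag(W)
w, U = np.linalg.eigh(H)
Kt1 = U @ (np.exp(-t / (2 * d) * w) * (U.T @ np.ones(N)))

def fk_sample(i0):
    """One Feynman-Kac sample started at index i0: rate-1 jumps, killed outside A."""
    i, s, integral = i0, 0.0, 0.0
    while True:
        hold = rng.exponential(1.0)              # holding time at the current site
        if s + hold >= t:
            integral += (t - s) * W[i] / (2 * d)
            return np.exp(-integral)             # stayed in A up to time t
        integral += hold * W[i] / (2 * d)
        s += hold
        i += rng.choice((-1, 1))                 # simple symmetric jump
        if i < 0 or i >= N:                      # exited A, i.e. tau_A < t
            return 0.0

est = np.mean([fk_sample(3) for _ in range(40000)])
print(est, Kt1[3])   # the two values should agree up to Monte Carlo error
```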
Sánchez-Mendoza, D. Principal Eigenvalue and Landscape Function of the Anderson Model on a Large Box. J Stat Phys 190, 122 (2023). https://doi.org/10.1007/s10955-023-03130-6