1 Introduction and Results

The landscape function, introduced by Filoche and Mayboroda in [1], has been conjectured to capture the low eigenvalues of the Anderson model operator, discrete or continuous, restricted to a large finite box. This conjecture is loosely stated in [2, Equation 1.4] as follows: if 0 is the minimum of the support of the potential distribution, then

$$\begin{aligned} \lambda _i L_i\approx 1+\frac{d}{4}, \qquad 1\le i\ll n^d, \end{aligned}$$

where \(\{\lambda _i\}_{i}\) are the eigenvalues ordered increasingly, \(\{L_i\}_{i}\) are the local maxima of the landscape function ordered decreasingly, d is the dimension, and n is the linear size of the box. Numerical experiments with Bernoulli and Uniform potential distributions support the conjecture (see [3, 4]), but to this moment there is no mathematical proof. In this article we give a precise formulation of the conjecture in the discrete setting for the case \(i=1\), that is, for the product of the principal (smallest) eigenvalue and the sup-norm of the landscape function on a large box. We claim that this product converges almost surely, as the size of the box goes to infinity, to an explicit dimensional constant different from \(1+\frac{d}{4}\), and we prove the \(\liminf \) bound. For the special case \(d=1\), we also prove the \(\limsup \) bound.

We start with some definitions and notation. Given a finite set \(A\subseteq {\mathbb {Z}}^d\) and a non-negative potential \(W:A\rightarrow [0,\infty )\) we consider the Schrödinger operator

$$\begin{aligned} -\Delta _{A}+W&:\ell ^2(A)\longrightarrow \ell ^2(A),\\ (-\Delta _{A}+W)\phi (x)&{:}{=}\sum _{ \left| y-x\right| =1}\left[ \phi (x)-\phi (y)\right] +W(x)\phi (x), \end{aligned}$$

where \(-\Delta _{A}\) has Dirichlet boundary conditions. From it, we define its principal eigenvalue and landscape function

$$\begin{aligned} \lambda _{A,W}{:}{=}\inf \sigma (-\Delta _{A}+W),\qquad L_{A,W}{:}{=}(-\Delta _{A}+W)^{-1}\mathbbm {1}_{A}. \end{aligned}$$

Notice that \(\lambda _{A,W}>0\) and that \(L_{A,W}\) is well defined on A, since \(-\Delta _{A}>0\) and \(W\ge 0\).
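These objects are easy to compute numerically. The following Python sketch, for \(d=1\) and an arbitrary small box (the box size and the zero test potential are illustrative choices, not part of the mathematical development), builds the operator and evaluates \(\lambda _{A,W}\) and \(L_{A,W}\):

```python
import numpy as np

def hamiltonian(W):
    """Matrix of -Delta_A + W on A = {0, ..., N-1} with Dirichlet boundary."""
    N = len(W)
    return 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1) + np.diag(W)

def principal_eigenvalue(W):
    """lambda_{A,W} = inf of the spectrum of -Delta_A + W."""
    return np.linalg.eigvalsh(hamiltonian(W))[0]

def landscape(W):
    """L_{A,W} = (-Delta_A + W)^{-1} 1_A."""
    return np.linalg.solve(hamiltonian(W), np.ones(len(W)))

# Sanity check with W = 0: the smallest eigenvalue of the N-point discrete
# Dirichlet Laplacian is 4 sin^2(pi / (2(N+1))), and the landscape function
# is the exact discrete parabola (x+1)(N-x)/2.
N = 50
lam0 = principal_eigenvalue(np.zeros(N))
L0 = landscape(np.zeros(N))
```

The same construction works in any dimension by replacing the tridiagonal matrix with the \(2d\)-neighbor graph Laplacian of the box.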

Let \(V=\{V(x)\}_{x\in {\mathbb {Z}}^d}\) be an independent and identically distributed (i.i.d.) random non-negative potential, whose probability measure and expectation we denote by \({\mathbb {P}}\) and \({\mathbb {E}}\), and define for \(n\in {\mathbb {N}}\) the box \(\Lambda _n{:}{=}[-n,n]^d\cap {\mathbb {Z}}^d\). Our main objectives are the asymptotics of \( \lambda _{\Lambda _n,V}\) and \(\left\| L_{\Lambda _n,V}\right\| _\infty \) as \(n\rightarrow \infty \), where, as is customary, the restriction of V to \(\Lambda _n\) is implicit.

In addition to V being non-negative (i.e., \(\mathbb {P}\left[ V(0)\in (-\infty ,0)\right] =0\)) we will always assume the distribution function \(F(t)=\mathbb {P}\left[ V(0)\le t\right] \) satisfies one of the following mutually exclusive conditions:

(C1):

\(0<F(0)<1\),   (e.g., Bernoulli(p))

(C2):

\(F(t)=c\,t^\eta (1+o(1))\) as \(t\downarrow 0\) for some \(c,\,\eta >0\).   (e.g., Uniform(0, 1))

We write n instead of \(\Lambda _n\) whenever convenient, for instance \(-\Delta _n=-\Delta _{\Lambda _n}\) and \(\lambda _{n,V}=\lambda _{\Lambda _n,V}\). We denote by \(\omega _d\) and \(\mu _d\), respectively, the volume of the unit ball in \({\mathbb {R}}^d\) and the principal eigenvalue of the continuous Laplacian (\(-\sum _{i=1}^d\partial ^2/\partial x_i^2\)) on that ball with Dirichlet boundary conditions.

We now state our conjecture and results. We are always assuming that V is non-negative and satisfies (C1) or (C2). We claim that:

Conjecture 0

\(\displaystyle \lim _{n\rightarrow \infty }\lambda _{n,V}\left\| L_{n,V}\right\| _\infty =\frac{\mu _d}{2d}\)   \({\mathbb {P}}\)-a.s.

The heuristic argument behind this conjecture is that both \(\lambda _{n,V}\) and \(\left\| L_{n,V}\right\| _\infty \) are controlled by the largest ball inside \(\Lambda _n\) with zero or very low potential. If the radius of such a ball is r then, roughly, \(\lambda _{n,V}\) is proportional to \(r^{-2}\) and \(\left\| L_{n,V}\right\| _\infty \) is proportional to \(r^{2}\), making the product of order one in r. The appearance of the continuous constant \(\frac{\mu _d}{2d}\) is another instance of the solution of a discrete problem converging to the solution of the corresponding continuous one. The disagreement between the dimensional constants \(\frac{\mu _d}{2d}\) and \(1+\frac{d}{4}\) is simply explained by the fact that \(1+\frac{d}{4}\) was “guessed” from the numerical experiments, and the two constants are close to each other. For example, for \(d=1\) we have \(1+\frac{1}{4}=1.25\) and \(\frac{\mu _1}{2}=\frac{\pi ^2}{8}\approx 1.23\).
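As a quick sanity check of this heuristic (not a proof), one can take \(d=1\) and \(V\equiv 0\), so that the whole box plays the role of the low-potential ball; a minimal numerical sketch, with an arbitrary box size:

```python
import numpy as np

# d = 1, zero potential: lambda_n ~ mu_1 / n^2 and ||L_n||_inf ~ n^2 / 2,
# so the product should be close to mu_1 / 2 = pi^2 / 8 ~ 1.2337.
N = 2000                                        # number of sites in the box
H = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
lam = 4 * np.sin(np.pi / (2 * (N + 1))) ** 2    # exact smallest eigenvalue
L = np.linalg.solve(H, np.ones(N))              # landscape function
product = lam * L.max()
print(product, np.pi**2 / 8)
```

The two printed values agree to several decimal places already at this box size.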

Using the Min–Max Principle and our hypothesis on V it is straightforward to show that \(\lambda _{n,V}\) is decreasing in n and converges to 0. Our first result is on the speed of this convergence; to state it we first need to define the deterministic sequences:

$$\begin{aligned} \epsilon _n{:}{=}{\left\{ \begin{array}{ll} 0,&{}{\textbf {(C1)}},\\ (\ln n)^{-2/d},&{}{\textbf {(C2)}}, \end{array}\right. } \qquad y_n{:}{=}\left( \frac{d\ln n}{\omega _d\left| \ln F(\epsilon _n)\right| }\right) ^{1/d}. \end{aligned}$$
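These sequences are explicit for the two running examples. The sketch below (with illustrative values of n and p) evaluates \(y_n\) in \(d=1\), where \(\omega _1=2\), for Bernoulli(p) under (C1) and Uniform(0, 1) under (C2):

```python
import numpy as np

def y_n(n, d, omega_d, ln_F_eps):
    """y_n = (d ln n / (omega_d |ln F(eps_n)|))^{1/d}."""
    return (d * np.log(n) / (omega_d * abs(ln_F_eps))) ** (1.0 / d)

n = 10**5
# (C1), Bernoulli(p): eps_n = 0 and F(0) = 1 - p.
p = 0.3
y_bern = y_n(n, d=1, omega_d=2.0, ln_F_eps=np.log(1 - p))
# (C2), Uniform(0,1): F(t) = t and eps_n = (ln n)^{-2} in d = 1,
# so |ln F(eps_n)| = 2 ln ln n.
y_unif = y_n(n, d=1, omega_d=2.0, ln_F_eps=-2.0 * np.log(np.log(n)))
print(y_bern, y_unif)
```

Note how slowly both sequences grow, which is consistent with the slow convergence observed in the experiments of Fig. 1.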

Theorem 1

\(\displaystyle \lim _{n\rightarrow \infty }y_n^2\lambda _{n,V} =\mu _d\)   \({\mathbb {P}}\)-a.s.

The proof of Theorem 1 is given in Sect. 2, and it is divided into the \(\limsup \) and \(\liminf \) bounds. The \(\limsup \) bound follows from the Min–Max Principle and the previously mentioned heuristic of the largest ball with zero or very low potential. The \(\liminf \) bound is more involved; it uses a Lifshitz tails result from [5] and the connection between the integrated density of states of the (infinite) Anderson model and the cumulative distribution function of \(\lambda _{n,V}\).

Our second result is a partial proof of Conjecture 0, and a complete proof when \(d=1\).

Theorem 2

  1. (i)

    \(\displaystyle \varliminf _{n\rightarrow \infty }\lambda _{n,V} \left\| L_{n,V}\right\| _\infty \ge \frac{\mu _d}{2d}\)   \({\mathbb {P}}\)-a.s.

  2. (ii)

    If \(d=1\) then \(\displaystyle \lim _{n\rightarrow \infty }\lambda _{n,V}\left\| L_{n,V}\right\| _\infty =\frac{\mu _1}{2}\)   \({\mathbb {P}}\)-a.s.

Remark

The article [6] has a proof of (ii) in the continuous setting for the (C1) case. Both proofs follow the heuristic of the largest ball with zero potential, but differ in how to obtain a lower bound on \(\lambda _{n,V}\) and an upper bound on \(L_{n,V}\).

We prove Theorem 2 in Sect. 3 after deriving some general properties of landscape functions. Most notable among these properties is Proposition 9, which states that \(\lambda _{A,W}\left\| L_{A,W}\right\| _\infty \) is bounded from above and below by dimensional constants, uniformly in A and W. This is a consequence of an upper bound on the \(\ell ^\infty \rightarrow \ell ^\infty \) norm of the semigroup generated by the Schrödinger operator, which we adapted from the book [7] to the discrete setting. Statement (i) of Theorem 2 follows from domain monotonicity of the landscape function and the asymptotic of \(\lambda _{n,V}\) given in Theorem 1, while (ii) is based on the geometric resolvent identity and the restrictions of one-dimensional geometry.

We tried to illustrate Theorem 2(ii) when \(V(0)\overset{\text {d}}{=}\) Bernoulli(p) by plotting \(\lambda _{n,V}\left\| L_{n,V}\right\| _\infty \) versus n for a single realization of the potential. However, the plot does not show any kind of accumulation up to \(n=10^5\), suggesting the convergence is very slow. Instead, we plot the empirical distribution of \(\lambda _{n,V}\left\| L_{n,V}\right\| _\infty -\frac{\mu _1}{2}\) from \(10^5\) realizations for \(n=10^2,10^3,10^4,10^5\). These are given in Fig. 1, from which we can see that the empirical distribution concentrates towards 0 as n increases.
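By Proposition 9 below, every sample of this product lies between 1 and a dimensional constant. A reduced-scale version of the experiment (a much smaller box and far fewer samples than in Fig. 1, purely illustrative) can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def product_stat(n, p):
    """Sample lambda_{n,V} * ||L_{n,V}||_inf with V ~ Bernoulli(p), d = 1."""
    N = 2 * n + 1                                # number of sites in Lambda_n
    V = rng.binomial(1, p, size=N).astype(float)
    H = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1) + np.diag(V)
    lam = np.linalg.eigvalsh(H)[0]               # principal eigenvalue
    L = np.linalg.solve(H, np.ones(N))           # landscape function
    return lam * L.max()

samples = [product_stat(n=200, p=0.3) for _ in range(20)]
print(np.mean(samples) - np.pi**2 / 8)           # empirical deviation
```

For serious box sizes one would switch to sparse matrices and `scipy.sparse.linalg.eigsh`; the dense version above is only meant to make the experiment concrete.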

In the proofs that follow, C or C(d) is a finite positive constant that may only depend on the dimension and can change from line to line. By \(a_t\sim _t b_t\) we mean \(\lim _{t\rightarrow \infty }\frac{a_t}{b_t}=1\).

Fig. 1

Empirical distribution of \(\lambda _{n,V}\left\| L_{n,V}\right\| _\infty -\frac{\mu _1}{2}\) for \(d=1\) and \(V(0)\overset{\text {d}}{=}\) Bernoulli(0.3), computed from \(10^5\) samples. The empirical mean (m) and empirical standard deviation (s) are shown in red and blue, respectively

2 Principal Eigenvalue (Proof of Theorem 1)

2.1 The Lim Sup Bound

The goal of this subsection is to show

$$\begin{aligned} \varlimsup _{n\rightarrow \infty }y_n^2\lambda _{n,V}\le \mu _d\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$
(1)

As usual, getting a good upper bound on \(\lambda _{n,V}\) just requires choosing a good test function and applying the Min–Max Principle.

Define \(Y_n\) to be the radius of the largest open Euclidean ball contained in \(\Lambda _n\) in which V is uniformly bounded by \(\epsilon _n\), that is,

$$\begin{aligned} Y_{n}{:}{=}\max \left\{ r\in {\mathbb {N}}\,\Big \vert \,\exists x\in \Lambda _n\,\text { such that }\,B(x,r)\cap {\mathbb {Z}}^d\subseteq \Lambda _n\cap V^{-1}\left( [0,\epsilon _n]\right) \right\} , \end{aligned}$$

where \(B(x,r)=\{x'\in {\mathbb {R}}^d\,\vert \,\left| x-x'\right| <r\}\). Also, define \(x_n\in \Lambda _n\) to be the center of a ball at which the maximum is attained (it may not be unique); and let \(\phi \in \ell ^2(B(x_n,Y_n)\cap {\mathbb {Z}}^d)\) be the normalized eigenvector of \(-\Delta _{B(x_n,Y_n)\cap {\mathbb {Z}}^d}\) associated to \(\lambda _{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0}\), extended to \(\Lambda _n\) by 0. Then, by the Min–Max Principle, we have

$$\begin{aligned} \lambda _{n,V}&\le \left\langle \phi ,\left( -\Delta _{n}+V\right) \phi \right\rangle _{\ell ^2(\Lambda _n)}\\&=\left\langle \phi ,\left( -\Delta _{B(x_n,Y_n)\cap {\mathbb {Z}}^d}+V\right) \phi \right\rangle _{\ell ^2(B(x_n,Y_n)\cap {\mathbb {Z}}^d)}\\&\le \lambda _{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0}+\epsilon _n. \end{aligned}$$

To conclude the proof of (1) we use that the event \(\{\lim _{n\rightarrow \infty }Y_n/y_n=1\}\) has probability one (this will be proven in Proposition 3), together with translation invariance and the limit \(\displaystyle \lim _{r\rightarrow \infty }r^2\lambda _{B(0,r)\cap {\mathbb {Z}}^d,0}=\mu _d\), to obtain

$$\begin{aligned} \varlimsup _{n\rightarrow \infty }y_n^2\lambda _{n,V}&\le \lim _{n\rightarrow \infty }y_n^2\left( \lambda _{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0}+\epsilon _n\right) \\&=\lim _{n\rightarrow \infty }\frac{y_n^2}{Y_n^2}Y_n^2\lambda _{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0}=\mu _d\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$

The limit \(\displaystyle \lim _{r\rightarrow \infty }r^2\lambda _{B(0,r)\cap {\mathbb {Z}}^d,0}=\mu _d\) is a consequence of the discrete Laplacian converging to the continuous one, or random walk converging to Brownian motion. A proof following the latter approach can be found in [8, Proposition 8.4.2], where an extra factor d appears as a result of the probabilistic normalization of the Laplacian.
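In \(d=1\), where \(\mu _1=\pi ^2/4\), this limit is easy to observe with the exact formula for the principal Dirichlet eigenvalue of the discrete Laplacian on an interval; a short numerical check:

```python
import numpy as np

# In d = 1, B(0,r) cap Z = {-(r-1), ..., r-1} has N = 2r - 1 points for
# integer r, and the principal Dirichlet eigenvalue of -Delta on N points
# is 4 sin^2(pi / (2(N+1))).
mu_1 = np.pi**2 / 4
for r in (10, 100, 1000):
    N = 2 * r - 1
    lam = 4 * np.sin(np.pi / (2 * (N + 1))) ** 2
    print(r, r**2 * lam)     # approaches mu_1 ~ 2.4674
```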

Proposition 3

\(\displaystyle Y_n\sim _n y_n\)   \({\mathbb {P}}\)-a.s.

Proof

If \(Y_n< y_n(1-\delta )^{1/d}\) for some \(0<\delta <1\), then the inscribed ball of each of the \(\left( \frac{2n}{2 y_n(1-\delta )^{1/d}}\right) ^d(1+o(1))\) disjoint cubes of side length \( \big \lceil 2 y_n(1-\delta )^{1/d} \big \rceil \) that make up \(\Lambda _n\) contains a point x with \(V(x)>\epsilon _n\). Approximating the number of points in such balls by \(\#(B(0,r)\cap {\mathbb {Z}}^d)\sim _r{\text {Vol}}(B(0,r))=\omega _d r^d\), we obtain for large n

$$\begin{aligned} \mathbb {P}\left[ Y_n< y_n(1-\delta )^{1/d}\right]&\le \left( 1-F(\epsilon _n)^{\omega _d y_n^d(1-\delta )(1+o(1))}\right) ^{\frac{n^d}{y_n^d(1-\delta )}(1+o(1))}\\&=\left( 1-\frac{1}{n^{d(1-\delta )(1+o(1))}}\right) ^{\frac{n^d\omega _d\left| \ln F(\epsilon _n)\right| }{d(\ln n)(1-\delta )}(1+o(1))}\\&\le \exp \left( -\frac{n^\delta \omega _d\left| \ln F(\epsilon _n)\right| }{2 d(\ln n)(1-\delta )}\right) , \end{aligned}$$

which is summable. Therefore, the Borel–Cantelli Lemma and sending \(\delta \rightarrow 0\) give

$$\begin{aligned} 1\le \varliminf _{n\rightarrow \infty }y_n^{-1} Y_n\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$

We show the \(\limsup \) bound, first on an exponential sub-sequence, and then we extend it to the whole sequence. The extension argument requires a monotone sequence of random variables, which \(Y_n\) may fail to be if (C2) holds. For this reason we introduce

$$\begin{aligned} Y_{n,n'}{:}{=}\max \left\{ r\in {\mathbb {N}}\,\Big \vert \,\exists x\in \Lambda _n\,\text { such that }\,B(x,r)\cap {\mathbb {Z}}^d\subseteq \Lambda _n\cap V^{-1}\big ([0,\epsilon _{n'}]\big )\right\} , \end{aligned}$$

which is increasing in n, decreasing in \(n'\), and satisfies \(Y_{n,n}=Y_n\). Since for \(\delta >0\) and large m we have

$$\begin{aligned}&{\mathbb {P}}\left[ Y_{\left\lfloor e^{m+1} \right\rfloor ,\left\lfloor e^{m} \right\rfloor }> y_{\left\lfloor e^{m} \right\rfloor }(1+\delta )^{1/d}\right] \\&\qquad \qquad \le \sum _{x\in \Lambda _{\left\lfloor e^{m+1} \right\rfloor }}{\mathbb {P}}\left[ B(x,y_{\left\lfloor e^{m} \right\rfloor }(1+\delta )^{1/d})\cap {\mathbb {Z}}^d\subseteq V^{-1}[0,\epsilon _{\left\lfloor e^m \right\rfloor }]\right] \\&\qquad \qquad =\#\Lambda _{\left\lfloor e^{m+1} \right\rfloor } F(\epsilon _{\left\lfloor e^m \right\rfloor })^{\omega _d y_{\left\lfloor e^{m} \right\rfloor }^d(1+\delta )(1+o(1))}\\&\qquad \qquad = \frac{\#\Lambda _{\left\lfloor e^{m+1} \right\rfloor }}{\left\lfloor e^{m} \right\rfloor ^{d(1+\delta )(1+o(1))}}\\&\qquad \qquad \le C(d) e^{-m d \delta /2}, \end{aligned}$$

the Borel–Cantelli Lemma and the limit \(\delta \rightarrow 0\) give

$$\begin{aligned} \varlimsup _{m\rightarrow \infty }y_{\left\lfloor e^{m} \right\rfloor }^{-1}Y_{\left\lfloor e^{m+1} \right\rfloor ,\left\lfloor e^{m} \right\rfloor }\le 1\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$

For \(n\in {\mathbb {N}}\) define \(m(n)\in {\mathbb {N}}\) by \(\left\lfloor e^{m(n)} \right\rfloor \le n<\left\lfloor e^{m(n)+1} \right\rfloor \). Since \(y_n\sim _n y_{\left\lfloor e^{m(n)} \right\rfloor }\) and \(Y_n\le Y_{\left\lfloor e^{m(n)+1} \right\rfloor ,\left\lfloor e^{m(n)} \right\rfloor }\), we conclude

$$\begin{aligned} \varlimsup _{n\rightarrow \infty }y_n^{-1}Y_n&\le \varlimsup _{n\rightarrow \infty }y_n^{-1} Y_{\left\lfloor e^{m(n)+1} \right\rfloor ,\left\lfloor e^{m(n)} \right\rfloor }\\&=\varlimsup _{n\rightarrow \infty }y_{\left\lfloor e^{m(n)} \right\rfloor }^{-1}Y_{\left\lfloor e^{m(n)+1} \right\rfloor ,\left\lfloor e^{m(n)} \right\rfloor }\le 1\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$

\(\square \)

2.2 The Lim Inf Bound

In this subsection we show that

$$\begin{aligned} \mu _d\le \varliminf _{n\rightarrow \infty }y_n^2\lambda _{n,V}\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$
(2)

The main input for this is a result from [5] on the Lifshitz tail of the integrated density of states. We recall the integrated density of states of the Anderson model is a deterministic distribution function given by the \({\mathbb {P}}\)-a.s. limit

$$\begin{aligned} I(t){:}{=}\lim _{n\rightarrow \infty }\frac{1}{\#\Lambda _n}\#\{\lambda \in \sigma (-\Delta _n+V)\,\vert \,\lambda \le t\},\qquad t\in {\mathbb {R}}, \end{aligned}$$

where the eigenvalues are counted with multiplicities. The central hypothesis of [5] is a scaling assumption on the cumulant-generating function of V(0), \(H(t){:}{=}\ln \mathbb {E}\left[ e^{-t V(0)}\right] \), which we will check in the following proposition. To state it, we first need to define

$$\begin{aligned} (1,\infty )\ni t\longmapsto \alpha (t){:}{=}{\left\{ \begin{array}{ll}t^{1/(d+2)},&{} {\textbf {(C1)}},\\ \left( \frac{t}{\ln t}\right) ^{1/(d+2)},&{}{\textbf {(C2)}}, \end{array}\right. }\qquad {\widetilde{H}}{:}{=}{\left\{ \begin{array}{ll}\left| \ln F(0)\right| ,&{} {\textbf {(C1)}},\\ \frac{2\eta }{d+2},&{} {\textbf {(C2)}}. \end{array}\right. } \end{aligned}$$

Proposition 4

(Scaling assumption of [5]) For any compact \(K\subseteq (0,\infty )\) we have

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{\alpha ^{d+2}(t)}{t}H\left( \frac{t}{\alpha ^d(t)}y\right) =-{\widetilde{H}} \end{aligned}$$

uniformly in \(y\in K\).

Proof

First assume (C1). In this case \(\frac{\alpha ^{d+2}(t)}{t}=1\) and \(\frac{t}{\alpha ^d(t)}=t^{2/(d+2)}\). Since for \(t>0\) we have

$$\begin{aligned} \ln F(0)\le H(t)&=\ln \left( \mathbb {E}\left[ e^{-t V(0)}\mathbbm {1}_{V(0)\le \frac{1}{\sqrt{t}}}\right] +\mathbb {E}\left[ e^{-t V(0)}\mathbbm {1}_{V(0)> \frac{1}{\sqrt{t}}}\right] \right) \\&\le \ln \left( F\left( 1/\sqrt{t}\right) +e^{-\sqrt{t}}\right) , \end{aligned}$$

we conclude that

$$\begin{aligned}&\sup _{y\in K}\left| \frac{\alpha ^{d+2}(t)}{t}H\left( \frac{t}{\alpha ^d(t)}y\right) -\ln F(0)\right| \\&\qquad \qquad \le \sup _{y\in K}\ln \left( F\left( \frac{1}{t^{1/(d + 2)}\sqrt{y}}\right) +e^{-t^{1/(d + 2)}\sqrt{y}}\right) -\ln F(0)\\&\qquad \qquad =\ln \left( F\left( \frac{1}{t^{1/(d + 2)}\sqrt{\min K}}\right) +e^{-t^{1/(d + 2)}\sqrt{\min K}}\right) -\ln F(0)\xrightarrow [t\rightarrow \infty ]{} 0. \end{aligned}$$

Now assume (C2). In this case \(\frac{\alpha ^{d+2}(t)}{t}=\frac{1}{\ln t}\) and \(\frac{t}{\alpha ^d(t)}=t^{2/(d+2)}(\ln t)^{d/(d+2)}\). We introduce a parameter \(0<\delta <1\) and observe that

$$\begin{aligned} H(t)=&\ln \left( \mathbb {E}\left[ e^{-t V(0)}\mathbbm {1}_{V(0)\le t^{-\delta }}\right] +\mathbb {E}\left[ e^{-t V(0)}\mathbbm {1}_{V(0)>t^{-\delta }}\right] \right) \\&\le \ln \left( F\left( t^{-\delta }\right) +e^{-t^{1-\delta }}\right) ,\qquad t>0, \end{aligned}$$

which implies

$$\begin{aligned}&\varlimsup _{t\rightarrow \infty }\sup _{y\in K}\frac{\alpha ^{d+2}(t)}{t}H\left( \frac{t}{\alpha ^d(t)}y\right) \\&\qquad \qquad \le \varlimsup _{t\rightarrow \infty }\sup _{y\in K}\frac{1}{\ln t}\ln \left( F\left( \left[ \frac{t}{\alpha ^d(t)}y\right] ^{-\delta }\right) +\exp \left( -\left[ \frac{t}{\alpha ^d(t)}y\right] ^{1-\delta }\right) \right) \\&\qquad \qquad = \varlimsup _{t\rightarrow \infty }\frac{1}{\ln t}\ln \left( F\left( \left[ \frac{t}{\alpha ^d(t)}\min K\right] ^{-\delta }\right) +\exp \left( -\left[ \frac{t}{\alpha ^d(t)}\min K\right] ^{1-\delta }\right) \right) \\&\qquad \qquad =-\frac{2\delta \eta }{d+2}\xrightarrow [\delta \rightarrow 1]{}-\frac{2\eta }{d+2}. \end{aligned}$$

For the \(\displaystyle \varliminf _{t\rightarrow \infty }\inf _{y\in K}\) we use

$$\begin{aligned} H(t)&=\ln \left( \mathbb {E}\left[ e^{-t V(0)}\mathbbm {1}_{V(0)\le t^{-1}}\right] +\mathbb {E}\left[ e^{-t V(0)}\mathbbm {1}_{V(0)>t^{-1}}\right] \right) \\&\ge \ln \left( e^{-1}F(t^{-1})\right) ,\qquad t>0, \end{aligned}$$

to obtain

$$\begin{aligned} \varliminf _{t\rightarrow \infty }\inf _{y\in K}\frac{\alpha ^{d+2}(t)}{t}H\left( \frac{t}{\alpha ^d(t)}y\right)&\ge \varliminf _{t\rightarrow \infty }\inf _{y\in K}\frac{1}{\ln t}\ln \left( e^{-1}F\left( \left[ \frac{t}{\alpha ^d(t)}y\right] ^{-1}\right) \right) \\&\ge \varliminf _{t\rightarrow \infty }\frac{1}{\ln t}\ln \left( e^{-1}F\left( \left[ \frac{t}{\alpha ^d(t)}\max K\right] ^{-1}\right) \right) \\&=-\frac{2\eta }{d+2}. \end{aligned}$$

\(\square \)

Having checked the scaling assumption on H, we now have the Lifshitz tail result:

Theorem 5

[5, Theorem 1.3] Define the constant

$$\begin{aligned} \chi {:}{=}\inf _{g\in H^1({\mathbb {R}}^d),\,\left\| g\right\| _2=1}\left( \left\| \nabla g\right\| _2^2+{\widetilde{H}}{\text {Vol}}({\text {supp}}g)\right) , \end{aligned}$$

then

$$\begin{aligned} \lim _{t\downarrow 0}\frac{\ln I(t)}{t\alpha ^{-1}\left( t^{-1/2}\right) }=-2\,d^{d/2}\left( \frac{\chi }{d+2}\right) ^{(d+2)/2}. \end{aligned}$$

Remark

The function \(t\mapsto \alpha (t)\) is eventually increasing, so \(\alpha ^{-1}(t)\) is well defined for large t. The original statement from [5] is far more general; our conditions on V make H fall into what is there called the \((\gamma =0)\)-class.

The constant \(\chi \) can be explicitly computed by means of the Faber–Krahn inequality:

Proposition 6

\(\displaystyle \chi =(d+2)\left( \frac{{\widetilde{H}}\omega _d}{2}\right) ^{2/(d+2)}\left( \frac{\mu _d}{d}\right) ^{d/(d+2)}\).

Proof

Starting from \(\chi =\inf _{g\in H^1({\mathbb {R}}^d),\,\left\| g\right\| _2=1}\left( \left\| \nabla g\right\| _2^2+{\widetilde{H}}{\text {Vol}}({\text {supp}}g)\right) \) we see that only functions g whose support has finite volume need to be considered. Hence

$$\begin{aligned} \chi&=\inf _{\begin{array}{c} A\subseteq {\mathbb {R}}^d,\\ {\text {Vol}}(A)<\infty \end{array}}\inf _{\begin{array}{c} g\in H^1({\mathbb {R}}^d),\,\left\| g\right\| _2=1,\\ {\text {supp}}g=A \end{array}}\left( \left\| \nabla g\right\| _2^2+{\widetilde{H}}{\text {Vol}}(A)\right) \\&=\inf _{\begin{array}{c} A\subseteq {\mathbb {R}}^d,\\ {\text {Vol}}(A)<\infty \end{array}}\left( \mu (A)+{\widetilde{H}}{\text {Vol}}(A)\right) , \end{aligned}$$

where \(\mu (A)\) is the principal eigenvalue of the continuous Laplacian (\(-\sum _{i=1}^d\partial ^2/\partial x_i^2\)) defined on A with Dirichlet boundary conditions. The Faber–Krahn inequality states that, among all domains of a given volume, the ball has the lowest principal eigenvalue. Therefore, using \(\mu (B(0,r))=\mu _d/r^2\) and \({\text {Vol}}(B(0,r))=\omega _d r^d\) we obtain

$$\begin{aligned} \chi =\inf _{0<r<\infty }\left( \frac{\mu _d}{r^2}+{\widetilde{H}}\omega _d r^d\right) . \end{aligned}$$

Evaluating at the only critical point \(\displaystyle r=\left( \frac{2\mu _d}{{\widetilde{H}}\omega _d d}\right) ^{1/(d+2)}\) finishes the proof. \(\square \)
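The minimization can be cross-checked numerically; the sketch below takes \(d=1\) (so \(\mu _1=\pi ^2/4\) and \(\omega _1=2\)) and the illustrative value \({\widetilde{H}}=1\):

```python
import numpy as np
from scipy.optimize import minimize_scalar

d, mu_d, omega_d, H_tilde = 1, np.pi**2 / 4, 2.0, 1.0

# Minimize mu_d / r^2 + H_tilde * omega_d * r^d over r > 0 ...
numeric = minimize_scalar(lambda r: mu_d / r**2 + H_tilde * omega_d * r**d,
                          bounds=(1e-3, 10.0), method="bounded").fun
# ... and compare with the closed form of Proposition 6.
chi = ((d + 2) * (H_tilde * omega_d / 2) ** (2 / (d + 2))
       * (mu_d / d) ** (d / (d + 2)))
print(numeric, chi)
```

The two printed values coincide up to the optimizer's tolerance.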

We now exploit the connection between I and the distribution of \(\lambda _{n,V}\). This is a classic argument that can be found, for instance, in [9, Equation 4.46]. We present here a slightly modified version. Let \(n\in {\mathbb {N}}\) and define a new potential

$$\begin{aligned} V'(x)&{:}{=}{\left\{ \begin{array}{ll} \infty ,&{} x\in \Gamma ,\\ V(x),&{} x\in {\mathbb {Z}}^d\setminus \Gamma , \end{array}\right. }\\ \Gamma&{:}{=}\left\{ x=(x_1,\ldots ,x_d)\in {\mathbb {Z}}^d\,\Big \vert \,x_i\in (2n+2){\mathbb {Z}} \text { for some } i=1,\ldots ,d\right\} . \end{aligned}$$

Clearly \(V\le V'\) so for any \(k\in {\mathbb {N}}\) and \(t\in {\mathbb {R}}\) we have

$$\begin{aligned} \frac{\#\{\lambda \in \sigma (-\Delta _{(2n+2)k}+V)\,\vert \,\lambda \le t\}}{\#\Lambda _{(2n+2)k}}\ge \frac{\#\{\lambda \in \sigma (-\Delta _{(2n+2)k}+V')\,\vert \,\lambda \le t\}}{\#\Lambda _{(2n+2)k}}, \end{aligned}$$

where \(-\Delta _{(2n+2)k}+V'\) has (by definition) Dirichlet boundary conditions at \(\Gamma \). These Dirichlet boundary conditions at \(\Gamma \) imply that \(-\Delta _{(2n+2)k}+V'\) is a direct sum of \((2k)^d\) independent terms, all equal in distribution to \(-\Delta _{n}+V\). Therefore, by taking the limit \(k\rightarrow \infty \) on the above inequality and applying the Law of Large Numbers, we obtain

$$\begin{aligned} I(t)&\ge \left( \lim _{k\rightarrow \infty }\frac{(2k)^d}{\#\Lambda _{(2n+2)k}}\right) \mathbb {E}\left[ \#\{\lambda \in \sigma (-\Delta _{n}+V)\,\vert \,\lambda \le t\}\right] \\&\ge \left( \frac{1}{2n+2}\right) ^d\mathbb {P}\left[ \lambda _{n,V}\le t\right] . \end{aligned}$$

From the previous inequality, Theorem 5 and Proposition 6 we have

$$\begin{aligned} \mathbb {P}\left[ \lambda _{n,V}\le t\right] \le C(d) n^{d} I(t)\le C(d)\,n^{d}\exp \left[ -f(1/t)(1+o(1))\right] \quad \text {as } t\downarrow 0, \end{aligned}$$
(3)

where we have introduced \(\displaystyle f(t){:}{=}\frac{{\widetilde{H}}\omega _d\mu _d^{d/2}\alpha ^{-1}(t^{1/2})}{t}\); here \({\widetilde{H}}\omega _d\mu _d^{d/2}\) is precisely the constant \(2\,d^{d/2}\left( \frac{\chi }{d+2}\right) ^{(d+2)/2}\) after substituting the value of \(\chi \) from Proposition 6. To finish the proof we need the asymptotic of \(f^{-1}(t)\) as \(t\rightarrow \infty \):

Proposition 7

  1. (i)

    For (C1), \(\displaystyle f^{-1}(t)=\frac{1}{\mu _d}\left( \frac{t}{\omega _d\left| \ln F(0)\right| }\right) ^{2/d}\).

  2. (ii)

    For (C2), \(\displaystyle f^{-1}(t)\sim _t\frac{1}{\mu _d}\left( \frac{d\,t}{2\eta \omega _d\ln t}\right) ^{2/d}\).

Proof

For (C1) there is nothing to prove since \(f(t)=\omega _d\left| \ln F(0)\right| \mu _d^{d/2}t^{d/2}\).

For (C2) we have

$$\begin{aligned} f(t)=\frac{k\alpha ^{-1}(t^{1/2})}{t}=k t^{d/2}\ln \alpha ^{-1}(t^{1/2}), \end{aligned}$$

with all the constants collected in \(k=\frac{2\eta \omega _d\mu _d^{d/2}}{d+2}\). Since \(\alpha \) is eventually increasing and has infinite limit, the same is true for f, in particular \(f^{-1}(t)\) exists for large t. By solving for the \(\alpha ^{-1}\) term in the first equality above, applying \(\alpha \) and simplifying some exponents we arrive at \(t=\left( \frac{f(t)}{k\ln \left[ t f(t)/k\right] }\right) ^{2/d}\). Replacing t by \(f^{-1}(t)\) we have

$$\begin{aligned} f^{-1}(t)=\left( \frac{t}{k\ln \left[ t f^{-1}(t)/k\right] }\right) ^{2/d}. \end{aligned}$$

In order to deal with the \(\ln [t f^{-1}(t)]\) term above, we multiply the equality by t and then take the logarithm to obtain

$$\begin{aligned} \ln [t f^{-1}(t)]=\frac{d+2}{d}\ln t-\frac{2}{d}\ln \left[ k\ln \left[ t f^{-1}(t)/k\right] \right] . \end{aligned}$$

Since \(t f^{-1}(t)\xrightarrow [t\rightarrow \infty ]{}\infty \), the last equality implies \(\ln [t f^{-1}(t)]\sim _t \frac{d+2}{d}\ln t\) and therefore

$$\begin{aligned} f^{-1}(t)\sim _t\left( \frac{d\,t}{(d+2)k\ln t}\right) ^{2/d}=\frac{1}{\mu _d}\left( \frac{d\,t}{2\eta \omega _d\ln t}\right) ^{2/d}. \end{aligned}$$

\(\square \)

Going back to (3) with \(n=\left\lfloor e^m \right\rfloor \) and \(t=1/f^{-1}((1+\delta )d m)\) for some \(m\in {\mathbb {N}}\) and \(\delta >0\), we see that

$$\begin{aligned} \mathbb {P}\left[ \lambda _{\left\lfloor e^m \right\rfloor ,V}f^{-1}((1+\delta )d m)\le 1\right]&\le C(d) \left( \left\lfloor e^m \right\rfloor \right) ^{d}\exp \left[ -(1+\delta )d m(1+o(1))\right] \\ {}&\le C(d)\,e^{-m d\delta /2}, \end{aligned}$$

which is summable over \(m\in {\mathbb {N}}\). Therefore, by the Borel–Cantelli Lemma we have

$$\begin{aligned} 1\le \varliminf _{m\rightarrow \infty } \lambda _{\left\lfloor e^m \right\rfloor ,V}f^{-1}((1+\delta )d m)=(1+\delta )^{2/d} \varliminf _{m\rightarrow \infty } \lambda _{\left\lfloor e^m \right\rfloor ,V}f^{-1}(d m)\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$

As in the proof of Proposition 3, we define \(m(n)\in {\mathbb {N}}\) by \(\left\lfloor e^{m(n)} \right\rfloor \le n<\left\lfloor e^{m(n)+1} \right\rfloor \), so that \(\ln n\sim _n (m(n)+1)\). Since \(n\mapsto \lambda _{n,V}\) is monotone decreasing we have

$$\begin{aligned} \varliminf _{n\rightarrow \infty } \lambda _{n,V}f^{-1}(d \ln n)&\ge \varliminf _{n\rightarrow \infty } \lambda _{\left\lfloor e^{m(n)+1} \right\rfloor ,V}f^{-1}(d \ln n)\\ {}&=\varliminf _{n\rightarrow \infty } \lambda _{\left\lfloor e^{m(n)+1} \right\rfloor ,V}f^{-1}(d(m(n)+1))\\&\ge (1+\delta )^{-2/d}\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$

The proof of (2) is finished by sending \(\delta \rightarrow 0\) and noticing that Proposition 7 implies \(f^{-1}(d \ln n)\sim _n\frac{y_n^2}{\mu _d}\) for both (C1) and (C2).

3 Landscape Function

We start this section by deriving some general properties of landscape functions.

For a finite set \(A\subseteq {\mathbb {Z}}^d\) and a non-negative potential \(W:A\rightarrow [0,\infty )\) we introduce the Green function with 0 as spectral parameter

$$\begin{aligned} G_{A,W}(x,y){:}{=}{\left\{ \begin{array}{ll} \left\langle \delta _x ,(-\Delta _A+W)^{-1}\delta _y\right\rangle _{\ell ^2(A)},&{}(x,y)\in A\times A,\\ 0,&{}(x,y)\in ({\mathbb {Z}}^d\times {\mathbb {Z}}^d)\setminus (A\times A). \end{array}\right. } \end{aligned}$$

This function is known to be symmetric, non-negative, and decreasing in the potential W. It is also known to satisfy the geometric resolvent identity:

$$\begin{aligned} G_{A,W}(x,y)=G_{A',W}(x,y)+\sum _{(i,j)\in \partial A'}G_{A',W}(x,i)G_{A,W}(j,y),\qquad A'\subseteq A, \end{aligned}$$

where \(\partial A'{:}{=}\{(i,j)\in A'\times ({\mathbb {Z}}^d{\setminus } A')\,\vert \,\left| i-j\right| =1\}\) is the boundary of \(A'\) (see [10, Section 5.3]). By extending the definition of \(L_{A,W}\) to \(L_{A,W}(x){:}{=}\sum _{y\in {\mathbb {Z}}^d}G_{A,W}(x,y)\) for all \(x\in {\mathbb {Z}}^d\), the previously stated properties of \(G_{A,W}\) translate into non-negativity, potential monotonicity and domain monotonicity of landscape functions:

  • \(L_{A,W}\ge 0\) and \(L_{A,W}(x)=0\) if \(x\in {\mathbb {Z}}^d\setminus A\).

  • If \(0\le W'\le W\) then \(L_{A,W}\le L_{A,W'}\).

  • If \(A'\subseteq A\) then

    $$\begin{aligned} L_{A,W}(x)=L_{A',W}(x)+\sum _{(i,j)\in \partial A'}G_{A',W}(x,i)L_{A,W}(j)\ge L_{A',W}(x). \end{aligned}$$
    (4)
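Identity (4) can be verified directly in small cases. In the sketch below (\(d=1\); the sizes, the random potential, and the point x are arbitrary choices) we take \(A=\{0,\ldots ,N-1\}\) and \(A'=\{0,\ldots ,M-1\}\), so \(\partial A'=\{(0,-1),(M-1,M)\}\) and only the second pair contributes, since \(L_{A,W}\) vanishes outside A:

```python
import numpy as np

rng = np.random.default_rng(1)

def green_and_landscape(W):
    """Green function G_{A,W} = (-Delta_A + W)^{-1} and L_{A,W} = G 1_A."""
    N = len(W)
    H = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1) + np.diag(W)
    G = np.linalg.inv(H)
    return G, G @ np.ones(N)

N, M, x = 30, 12, 5
W = rng.uniform(0.0, 1.0, size=N)
G_A, L_A = green_and_landscape(W)
G_Ap, L_Ap = green_and_landscape(W[:M])

# (4): the pair (0, -1) contributes nothing because L_A(-1) = 0.
lhs = L_A[x]
rhs = L_Ap[x] + G_Ap[x, M - 1] * L_A[M]
print(lhs, rhs)
```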

Our last general property is that \(\lambda _{A,W}\left\| L_{A,W}\right\| _\infty \) is bounded from above and below by two positive constants, uniformly in A and W. This is based on the following upper bound on the \(\ell ^\infty \rightarrow \ell ^\infty \) norm of the semigroup, which can be found, for the continuous setting, in [7, Chapter 3, Theorem 1.2]. We could not find a proof in the literature for the discrete case, so we provide one in Appendix A.

Theorem 8

For a finite \(A\subseteq {\mathbb {Z}}^d\) and \(W:A\rightarrow [0,\infty )\) we have

$$\begin{aligned} \left\| \exp \left( -t[-\Delta _{A}+W]\right) \mathbbm {1}_A\right\| _\infty \le C(d)\left( 1+\left[ \lambda _{A,W}t\right] ^{d/2}\right) e^{-\lambda _{A,W}t},\qquad t\ge 0. \end{aligned}$$

As an immediate corollary we obtain:

Proposition 9

For a finite \(A\subseteq {\mathbb {Z}}^d\) and \(W:A\rightarrow [0,\infty )\) we have

$$\begin{aligned} 1\le \lambda _{A,W}\left\| L_{A,W}\right\| _\infty \le C(d). \end{aligned}$$

Remark

The bound 1 is sharp. It is attained when A is a single point of \({\mathbb {Z}}^d\).

Proof

For the upper bound we use Theorem 8 and the substitution \(u=\lambda _{A,W}t\):

$$\begin{aligned} \left\| L_{A,W}\right\| _\infty&= \left\| \int _0^\infty \exp \left( -t[-\Delta _{A}+W]\right) \mathbbm {1}_{A}\,\textrm{d}t\right\| _\infty \\&\le C(d)\int _0^\infty \left( 1+\left[ \lambda _{A,W}t\right] ^{d/2}\right) \exp \left( -\lambda _{A,W}t\right) \,\textrm{d}t\\&=\frac{C(d)}{\lambda _{A,W}}\int _0^\infty \left( 1+u^{d/2}\right) e^{-u}\,\textrm{d}u=\frac{C(d)}{\lambda _{A,W}}. \end{aligned}$$

For the lower bound we just need to notice that the positivity of \(G_{A,W}\) implies

$$\begin{aligned} \left\| (-\Delta _A+W)^{-1}\right\| _{\ell ^\infty (A)\rightarrow \ell ^\infty (A)}= \left\| (-\Delta _A+W)^{-1}\mathbbm {1}_A\right\| _{\infty }=\left\| L_{A,W}\right\| _\infty , \end{aligned}$$

and therefore, denoting \(\phi \in \ell ^2(A)\) an eigenvector of \(-\Delta _A+W\) associated to \(\lambda _{A,W}\), we obtain

$$\begin{aligned} \frac{1}{\lambda _{A,W}}=\frac{\left\| (-\Delta _A+W)^{-1}\phi \right\| _{\infty }}{\left\| \phi \right\| _\infty }\le \left\| (-\Delta _A+W)^{-1}\right\| _{\ell ^\infty (A)\rightarrow \ell ^\infty (A)}=\left\| L_{A,W}\right\| _\infty . \end{aligned}$$

\(\square \)

3.1 Proof of Theorem 2(i)

We start with the asymptotic of the sup-norm of the landscape function on balls with 0 potential.

Proposition 10

\(\displaystyle \left\| L_{B(0,r)\cap {\mathbb {Z}}^d,0}\right\| _\infty \sim _r\frac{r^2}{2d}\).

Proof

Let \(r>0\) and consider the function \(\phi _r(x){:}{=}\frac{r^2-\left| x\right| ^2}{2d}\) defined on \({\mathbb {Z}}^d\). Clearly \(-\Delta \phi _r(x)=1\) for all \(x\in {\mathbb {Z}}^d\) and therefore \(L_{B(0,r)\cap {\mathbb {Z}}^d,0}-\phi _r\) is harmonic in \(B(0,r)\cap {\mathbb {Z}}^d\). By the Maximum Principle (see [8, Theorem 6.2.1]) we have

$$\begin{aligned} \left| \left\| L_{B(0,r)\cap {\mathbb {Z}}^d,0}\right\| _\infty -\frac{r^2}{2d}\right|&=\left| \sup _{x\in B(0,r)\cap {\mathbb {Z}}^d}L_{B(0,r)\cap {\mathbb {Z}}^d,0}(x)-\sup _{x\in B(0,r)\cap {\mathbb {Z}}^d}\phi _r(x)\right| \\&\le \sup _{x\in B(0,r)\cap {\mathbb {Z}}^d}\left| L_{B(0,r)\cap {\mathbb {Z}}^d,0}(x)-\phi _r(x)\right| \\ {}&=\sup _{x\in \partial ^+\left[ B(0,r)\cap {\mathbb {Z}}^d\right] }\left| L_{B(0,r)\cap {\mathbb {Z}}^d,0}(x)-\phi _r(x)\right| \\ {}&=\sup _{x\in \partial ^+\left[ B(0,r)\cap {\mathbb {Z}}^d\right] }\left| \phi _r(x)\right| \le C(d)\,r(1+o(1)) \end{aligned}$$

where \(\partial ^+ A{:}{=}\left\{ x\in {\mathbb {Z}}^d{\setminus } A\,\Big \vert \,\exists \,y\in A\text { such that }\left| x-y\right| =1\right\} \) is the outer boundary of \(A\subseteq {\mathbb {Z}}^d\). Dividing by \(\frac{r^2}{2d}\) and letting \(r\rightarrow \infty \) yields the proposition. \(\square \)
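In \(d=1\) the proposition is easy to verify on the computer: on \(B(0,r)\cap {\mathbb {Z}}=\{-r,\dots ,r\}\) the landscape function with zero potential solves \(-\Delta L=1\) with Dirichlet conditions, and its maximum should be close to \(r^2/2\). A minimal numerical sketch:

```python
import numpy as np

# Landscape function on B(0,r) ∩ Z = {-r, ..., r} with zero potential:
# solve (-Delta) L = 1 with Dirichlet boundary conditions and compare
# max L with r^2 / (2d) for d = 1.
r = 200
N = 2 * r + 1
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L = np.linalg.solve(A, np.ones(N))

ratio = float(np.max(L)) / (r * r / 2.0)   # tends to 1 as r -> infinity
assert abs(ratio - 1.0) < 0.02
```

Here the exact discrete solution is \(L(x)=\frac{(r+1)^2-x^2}{2}\), so the ratio equals \((1+1/r)^2\), consistent with the \(O(r)\) boundary error in the proof.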

Recall the definitions of \(Y_n\) and \(x_n\) from Sect. 2.1. From domain monotonicity of landscape functions we have

$$\begin{aligned} L_{n,V}\ge L_{B(x_n,Y_n)\cap {\mathbb {Z}}^d,V}. \end{aligned}$$

For (C1), \(V\) is identically 0 in \(B(x_n,Y_n)\cap {\mathbb {Z}}^d\), so Theorem 1, translation invariance, and Propositions 3 and 10 give

$$\begin{aligned} \varliminf _{n\rightarrow \infty }\lambda _{n,V}\left\| L_{n,V}\right\| _\infty&\ge \lim _{n\rightarrow \infty }\lambda _{n,V}\left\| L_{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0}\right\| _\infty \\&=\lim _{n\rightarrow \infty }\frac{\mu _d}{y_n^2}\frac{Y_n^2}{2d}=\frac{\mu _d}{2d}\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$

For (C2), we use the second resolvent identity, domain monotonicity of the eigenvalue, and Propositions 3, 9, 10 to obtain

$$\begin{aligned}&\lambda _{n,V}\left\| L_{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0}-L_{B(x_n,Y_n)\cap {\mathbb {Z}}^d,V}\right\| _\infty \\&\qquad \qquad =\lambda _{n,V}\left\| (-\Delta _{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0})^{-1}V L_{B(x_n,Y_n)\cap {\mathbb {Z}}^d,V}\right\| _\infty \\&\qquad \qquad \le \left\| (-\Delta _{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0})^{-1}V \left[ \lambda _{B(x_n,Y_n)\cap {\mathbb {Z}}^d,V}L_{B(x_n,Y_n)\cap {\mathbb {Z}}^d,V}\right] \right\| _\infty \\&\qquad \qquad \le C(d)\epsilon _n\left\| (-\Delta _{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0})^{-1}\mathbbm {1}_{B(x_n,Y_n)\cap {\mathbb {Z}}^d}\right\| _\infty \\&\qquad \qquad = C(d)\epsilon _n \left\| L_{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0}\right\| _\infty \xrightarrow [n\rightarrow \infty ]{{\mathbb {P}}\text {-a.s.}}0, \end{aligned}$$

which implies

$$\begin{aligned} \varliminf _{n\rightarrow \infty }\lambda _{n,V}\left\| L_{n,V}\right\| _\infty&\ge \lim _{n\rightarrow \infty }\lambda _{n,V}\left\| L_{B(x_n,Y_n)\cap {\mathbb {Z}}^d,V}\right\| _\infty \\ {}&=\lim _{n\rightarrow \infty }\lambda _{n,V}\left\| L_{B(x_n,Y_n)\cap {\mathbb {Z}}^d,0}\right\| _\infty =\frac{\mu _d}{2d}\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$

This concludes the proof of Theorem 2(i).

3.2 Proof of Theorem 2(ii)

We assume from this point on that \(d=1\). We set \(\llbracket a , b\rrbracket {:}{=}[a,b]\cap {\mathbb {Z}}\) for any \(a,b\in {\mathbb {Z}}\), \(a\le b\). This proof is based on the following deterministic bound of the Green function in terms of the values of the potential.

Proposition 11

Let \(a,b\in {\mathbb {Z}}\), \(a\le b\) and \(W:\llbracket a , b\rrbracket \rightarrow [0,\infty )\). For any \(x\in \llbracket a , b\rrbracket \) we have

$$\begin{aligned} G_{\llbracket a , b\rrbracket ,W}(a,x)&\le \left( \sum _{j=0}^{x-a} (x-a+1-j) W(x-j)\right) ^{-1},\\ G_{\llbracket a , b\rrbracket ,W}(x,b)&\le \left( \sum _{j=0}^{b-x} (b-x+1-j) W(x+j)\right) ^{-1}. \end{aligned}$$

Proof

By translation invariance and reflection symmetry of the Laplacian, it is enough to show the first inequality assuming that \(a=1\), \(b\ge 1\), \(W:\llbracket 1 , b\rrbracket \rightarrow [0,\infty )\).

Fix some \(x\in \llbracket 1 , b\rrbracket \). By potential monotonicity and Cramer’s rule we have

$$\begin{aligned} G_{\llbracket 1 , b\rrbracket ,W}(1,x)\le G_{\llbracket 1 , b\rrbracket ,W\mathbbm {1}_{\llbracket 1 , x\rrbracket }}(1,x)=\frac{\det \left( [-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket }]_{1\rightarrow \delta _x}\right) }{\det (-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket })}, \end{aligned}$$

where \([-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket }]_{1\rightarrow \delta _x}\) is the matrix obtained by replacing the first column of \(-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket }\) by \(\delta _x\). Expanding the determinant in the numerator along its first column, we see that

$$\begin{aligned} \det \left( [-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket }]_{1\rightarrow \delta _x}\right)&=(-1)^{x+1}\det \left( \begin{array}{c|c} T &{} 0 \\ \hline M &{} -\Delta _{\llbracket 1 , b-x\rrbracket } \end{array}\right) \\&=(-1)^{x+1}\det (T)\det (-\Delta _{\llbracket 1 , b-x\rrbracket })=b-x+1, \end{aligned}$$

since \(T\) is a lower triangular square matrix of size \(x-1\) with all diagonal entries equal to \(-1\), and \(\det (-\Delta _{\llbracket 1 , k\rrbracket })=k+1\) for all \(k\in {\mathbb {N}}\). Here we have used that the determinant of an empty matrix is 1.

Consider \(\det (-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket })\) as a polynomial in \((W(j))_{j=1}^x\). It clearly contains no squares, or higher powers, of any \(W(j)\). Moreover, a straightforward computation shows that the coefficient of \(W(j_1)W(j_2)\cdots W(j_{k-1})W(j_k)\), with \(1\le j_1<j_2<\cdots<j_{k-1}<j_k\le x\) and \(1\le k\le x\), is

$$\begin{aligned} \det (-\Delta _{\llbracket 1 , j_1-1\rrbracket })&\det (-\Delta _{\llbracket j_1+1 , j_2-1\rrbracket })\cdots \det (-\Delta _{\llbracket j_{k-1}+1 , j_k-1\rrbracket })\det (-\Delta _{\llbracket j_k+1 , b\rrbracket })\\&=j_1(j_2-j_1)\cdots (j_k-j_{k-1})(b-j_k+1). \end{aligned}$$

The remaining coefficient (the constant one) is \(\det (-\Delta _{\llbracket 1 , b\rrbracket })=b+1\), which means all coefficients of \(\det (-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket })\) are positive. Therefore, by keeping only the linear terms, we obtain

$$\begin{aligned} G_{\llbracket 1 , b\rrbracket ,W}(1,x)&\le \frac{b-x+1}{\det (-\Delta _{\llbracket 1 , b\rrbracket }+W\mathbbm {1}_{\llbracket 1 , x\rrbracket })}\le \frac{b-x+1}{\sum _{j=1}^x j(b-j+1)W(j)}\\&\le \left( \sum _{j=1}^x j W(j)\right) ^{-1}=\left( \sum _{j=0}^{x-1} (x-j) W(x-j)\right) ^{-1}. \end{aligned}$$

\(\square \)
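The bound of Proposition 11 can also be tested numerically. The sketch below, with \(a=1\), a small \(b\), and an arbitrary deterministic potential chosen only for illustration, checks the first inequality (in the rewritten form \(G(1,x)\le (\sum _{j=1}^x jW(j))^{-1}\) from the end of the proof) at every site:

```python
import numpy as np

# Check G(1,x) <= ( sum_{j=1}^x j W(j) )^{-1} on [[1,b]] for a fixed W.
b = 30
W = np.array([((3 * j) % 4) / 2.0 for j in range(1, b + 1)])  # arbitrary

A = 2.0 * np.eye(b) - np.eye(b, k=1) - np.eye(b, k=-1) + np.diag(W)
G = np.linalg.inv(A)            # Green function of -Delta + W (Dirichlet)

for x in range(1, b + 1):       # sites are 1-based as in the text
    s = sum(j * W[j - 1] for j in range(1, x + 1))
    if s > 0:                   # the bound is vacuous when the sum vanishes
        assert G[0, x - 1] <= 1.0 / s + 1e-12
```

The positivity of \(G\) used earlier in the section can be observed on the same example: every entry of the computed inverse is nonnegative.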

With the previous proposition in mind we define for \(\delta >0\) and \(x\in {\mathbb {Z}}\)

$$\begin{aligned} Z^+_\delta (x)&{:}{=}\min \left\{ n\in {\mathbb {N}}\,\Big \vert \,\sum _{j=1}^n(n+1-j)V(x+j)>\delta ^{-1}\right\} ,\\ Z^-_\delta (x)&{:}{=}\min \left\{ n\in {\mathbb {N}}\,\Big \vert \,\sum _{j=1}^n(n+1-j)V(x-j)>\delta ^{-1}\right\} ,\\ A_\delta (x)&{:}{=}\llbracket x-Z^-_\delta (x) , x+Z^+_\delta (x)\rrbracket . \end{aligned}$$

Notice that V(x) is not included in the definition of \(Z^\pm _\delta (x)\) and therefore \(Z^+_\delta (x)\) and \(Z^-_\delta (x)\) are independent for all \(x\in {\mathbb {Z}}\). Moreover, \(Z^\pm _\delta (x)\) is equal in distribution to \(Z^+_\delta (0)\) for all \(x\in {\mathbb {Z}}\).
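For a concrete feel of these stopping variables, the following snippet computes \(Z^+_\delta (0)\) for a fixed, hypothetical realisation of the potential; the values of \(V\) below are illustrative and not taken from the paper.

```python
# Z_delta^+(0): smallest n with sum_{j=1}^n (n+1-j) V(j) > 1/delta.
def z_plus(V, delta):
    """V[j-1] stores V(j); returns None if the threshold is never exceeded."""
    threshold = 1.0 / delta
    for n in range(1, len(V) + 1):
        if sum((n + 1 - j) * V[j - 1] for j in range(1, n + 1)) > threshold:
            return n
    return None

V = [0.0, 1.0, 0.0, 3.0]      # hypothetical sample of the potential
assert z_plus(V, 0.5) == 4    # weighted sums 0, 1, 2, 6 first exceed 2 at n = 4
```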

It follows from (4), the definitions above, potential monotonicity, and Propositions 9, 11 that

$$\begin{aligned} \lambda _{n,V}\left\| L_{n,V}\right\| _\infty&\le \lambda _{n,V}\max _{x\in \Lambda _n}\left[ L_{A_\delta (x),0}(x)+2\delta \left\| L_{n,V}\right\| _\infty \right] \\&\le \lambda _{n,V}\max _{x\in \Lambda _n}\left\| L_{A_\delta (x),0}\right\| _\infty +2\delta C. \end{aligned}$$

By domain monotonicity and translation invariance, the last maximum above is attained at the \(x\in \Lambda _n\) that also maximises \(\#A_\delta (x)=Z^+_\delta (x)+Z^-_\delta (x)+1\). Moreover, V being i.i.d. implies

$$\begin{aligned} \mathbb {P}\left[ \lim _{n\rightarrow \infty }\max _{x\in \Lambda _n}\left[ Z^+_\delta (x)+Z^-_\delta (x)\right] =\infty \right] =1, \end{aligned}$$

and therefore Theorem 1 and Proposition 10 give

$$\begin{aligned} \varlimsup _{n\rightarrow \infty }\lambda _{n,V}\left\| L_{n,V}\right\| _\infty \le \frac{\mu _1}{2} \varlimsup _{n\rightarrow \infty }\left( \frac{\max _{x\in \Lambda _n}\left[ Z^+_\delta (x)+Z^-_\delta (x)\right] }{2y_n}\right) ^2+2\delta C\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$

The proof of Theorem 2(ii) is completed by the next proposition, followed by taking the limit \(\delta \rightarrow 0\).

Proposition 12

For all \(\delta >0\), \(\displaystyle \varlimsup _{n\rightarrow \infty }\frac{1}{2y_n}\max _{x\in \Lambda _n}[Z^+_\delta (x)+Z^-_\delta (x)]\le 1\quad {\mathbb {P}}\)-a.s.

Proof

We will prove this over an exponential subsequence by means of the Borel–Cantelli Lemma; the extension to the whole sequence is done as in the proof of Proposition 3 using the monotonicity of \(n\mapsto \max _{x\in \Lambda _n}[Z^+_\delta (x)+Z^-_\delta (x)]\).

Assume (C1). For all \(t>0\) we have

$$\begin{aligned} \mathbb {E}\left[ e^{-t V(0)}\right] =\mathbb {E}\left[ e^{-t V(0)}\mathbbm {1}_{V(0)\le \frac{1}{\sqrt{t}}}\right] +\mathbb {E}\left[ e^{-t V(0)}\mathbbm {1}_{V(0)> \frac{1}{\sqrt{t}}}\right] \le F\left( 1/\sqrt{t}\right) +e^{-\sqrt{t}}. \end{aligned}$$

With this, we use the exponential Markov inequality and independence to obtain

$$\begin{aligned} \mathbb {P}\left[ Z_\delta ^+(0)>n\right]&=\mathbb {P}\left[ \sum _{j=1}^n(n+1-j)V(j)\le \delta ^{-1}\right] \\&\le e^{t/\delta }\prod _{j=1}^n\mathbb {E}\left[ \exp (-t j V(0))\right] \\&\le e^{t/\delta }\left( F\left( 1/\sqrt{t}\right) +e^{-\sqrt{t}}\right) ^n,\qquad n \in {\mathbb {N}}. \end{aligned}$$

We now estimate the tail of \(Z^+_\delta (0)+Z^-_\delta (0)\) as

$$\begin{aligned}&\mathbb {P}\left[ Z^+_\delta (0)+Z^-_\delta (0)>n\right] \\&\qquad \qquad =\mathbb {P}\left[ Z^+_\delta (0)>n-1\right] +\sum _{j=1}^{n-1}\mathbb {P}\left[ Z^+_\delta (0)=j\right] \mathbb {P}\left[ Z^-_\delta (0)>n-j\right] \\&\qquad \qquad \le 2\mathbb {P}\left[ Z^+_\delta (0)>n-1\right] +\sum _{j=2}^{n-1}\mathbb {P}\left[ Z^+_\delta (0)>j-1\right] \mathbb {P}\left[ Z^+_\delta (0)>n-j\right] \\&\qquad \qquad \le 2e^{t/\delta }\left( F\left( 1/\sqrt{t}\right) +e^{-\sqrt{t}}\right) ^{n-1}+(n-2)e^{2t/\delta }\left( F\left( 1/\sqrt{t}\right) +e^{-\sqrt{t}}\right) ^{n-1}\\&\qquad \qquad \le n e^{2t/\delta }\left( F\left( 1/\sqrt{t}\right) +e^{-\sqrt{t}}\right) ^{n-1}. \end{aligned}$$

For any \(\epsilon >0\) choose \(t(\epsilon )\) large enough that \(\ln \left( F\left( 1/\sqrt{t(\epsilon )}\right) +e^{-\sqrt{t(\epsilon )}}\right) \le \frac{\ln F(0)}{1+\epsilon }\) (such a choice exists since the left-hand side converges to \(\ln F(0)\) as \(t(\epsilon )\rightarrow \infty \)), so that

$$\begin{aligned}&\mathbb {P}\left[ \max _{x\in \Lambda _n}[Z^+_\delta (x)+Z^-_\delta (x)]>\left\lfloor (1+\epsilon )^2 2y_n \right\rfloor \right] \\&\qquad \qquad \le C n\,\mathbb {P}\left[ Z^+_\delta (0)+Z^-_\delta (0)>\left\lfloor (1+\epsilon )^2 2y_n \right\rfloor \right] \\&\qquad \qquad \le C e^{2t(\epsilon )/\delta } n \exp \left[ (1+\epsilon )^2 2 y_n\frac{\ln F(0)}{1+\epsilon }(1+o(1))\right] \\&\qquad \qquad =Ce^{2t(\epsilon )/\delta }n^{-\epsilon (1+o(1))}, \end{aligned}$$

which is summable over the exponential subsequence \(n=\left\lfloor e^m \right\rfloor \), \(m\in {\mathbb {N}}\).

Assume (C2). We follow the same steps as for (C1) above. To bound the Laplace transform of \(V(0)\) we consider the function \(f(t){:}{=}a[F(t)]^{1/\eta }\) for some \(a>0\). From (C2) it follows that there exists \(t_0\in (0,\infty )\) such that \(F(t)\le 2\,c\,t^\eta \) for all \(t\in [0,t_0]\). Therefore, by choosing \(a{:}{=}(t_0^{-1}+(2c)^{1/\eta })^{-1}\) we obtain

$$\begin{aligned} 0\le f(t)\le {\left\{ \begin{array}{ll} a(2c)^{1/\eta }t\le t,&{}t\in [0,t_0],\\ a\le t_0\le t, &{}t\in (t_0,\infty ). \end{array}\right. } \end{aligned}$$

Moreover, since \(\mathbb {P}\left[ f(V(0))\le t\right] =\left( \frac{t}{a}\right) ^\eta \) for \(t\in [0,a]\), we have

$$\begin{aligned} \mathbb {E}\left[ \exp (-t V(0))\right]&\le \mathbb {E}\left[ \exp (-t f(V(0)))\right] =\frac{\eta }{(a)^{\eta }}\int _0^{a} e^{-t y} y^{\eta -1}\,\textrm{d}y\\ {}&\le \frac{\eta }{(a)^{\eta }}\int _0^{\infty } e^{-t y} y^{\eta -1}\,\textrm{d}y= \frac{\eta \Gamma (\eta )}{(a t)^\eta },\qquad t>0. \end{aligned}$$
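Both the pointwise bound \(f(t)\le t\) and the resulting bound on the Laplace transform can be checked on a concrete distribution satisfying (C2). The sketch below assumes \(F(t)=\min (t^2,1)\), so \(\eta =2\), \(c=1/2\), \(t_0=1\) and \(a=1/2\); these choices are illustrative only.

```python
import numpy as np
from math import gamma

# Hypothetical example: F(t) = min(t^2, 1), i.e. eta = 2, c = 1/2, t0 = 1.
eta, c, t0 = 2.0, 0.5, 1.0
a = 1.0 / (1.0 / t0 + (2.0 * c) ** (1.0 / eta))   # a = 1/2 here

def f(t):
    # f(t) = a * F(t)^(1/eta) = a * min(t, 1) for this F
    return a * min(t, 1.0)

# Pointwise bound f(t) <= t on a grid of positive t
assert all(f(t) <= t + 1e-12 for t in np.linspace(0.01, 5.0, 500))

# Laplace transform bound E[e^{-tV}] <= eta*Gamma(eta)/(a t)^eta; the
# density of V on [0,1] is 2y, so we integrate numerically.
ys = np.linspace(0.0, 1.0, 100001)
dy = ys[1] - ys[0]
for t in (2.0, 5.0, 20.0):
    lap = float(np.sum(np.exp(-t * ys) * 2.0 * ys) * dy)
    assert lap <= eta * gamma(eta) / (a * t) ** eta
```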

The exponential Markov inequality at \(t=n\eta \delta \), independence, and the Stirling bound \((n/e)^n\le n!\) lead us to

$$\begin{aligned} \mathbb {P}\left[ Z_\delta ^+(0)>n\right]&\le e^{t/\delta }\prod _{j=1}^n\mathbb {E}\left[ \exp (-t j V(0))\right] \le \left( \frac{\eta \Gamma (\eta )}{(at)^\eta }\right) ^n\frac{e^{t/\delta }}{(n!)^{\eta }}\\&\le \left( \frac{\eta ^{1-\eta }\Gamma (\eta )e^{2\eta }}{(a\delta )^\eta }\right) ^n n^{-2\eta n}{=}{:}K_\delta ^n \,n^{-2\eta n},\qquad n \in {\mathbb {N}}, \end{aligned}$$

from which it follows that

$$\begin{aligned}&\mathbb {P}\left[ Z^+_\delta (0)+Z^-_\delta (0)>n\right] \\&\qquad \qquad \le 2\mathbb {P}\left[ Z^+_\delta (0)>n-1\right] +\sum _{j=2}^{n-1}\mathbb {P}\left[ Z^+_\delta (0)>j-1\right] \mathbb {P}\left[ Z^+_\delta (0)>n-j\right] \\&\qquad \qquad \le 2 K_\delta ^{n-1}(n-1)^{-2\eta (n-1)}+K_\delta ^{n-1} \sum _{j=2}^{n-1} \left( j-1\right) ^{-2\eta (j-1)}(n-j)^{-2\eta (n-j)}. \end{aligned}$$

The function \([2,n-1]\ni j\mapsto \left( j-1\right) ^{-(j-1)}(n-j)^{-(n-j)}\) attains its unique maximum at \(j=(n+1)/2\); therefore

$$\begin{aligned}&\mathbb {P}\left[ Z^+_\delta (0)+Z^-_\delta (0)>n\right] \\&\qquad \qquad \le 2 K_\delta ^{n-1}(n-1)^{-2\eta (n-1)}+(4^{\eta }K_\delta )^{n-1}(n-2)(n-1)^{-2\eta (n-1)}\\&\qquad \qquad \le (4^{\eta }K_\delta )^{n-1}n(n-1)^{-2\eta (n-1)}. \end{aligned}$$

Finally, for any \(\epsilon >0\) we have

$$\begin{aligned}&\mathbb {P}\left[ \max _{x\in \Lambda _n}[Z^+_\delta (x)+Z^-_\delta (x)]>\left\lfloor (1+\epsilon )2y_n \right\rfloor \right] \\&\qquad \qquad \le C n\,\mathbb {P}\left[ Z^+_\delta (0)+Z^-_\delta (0)>\left\lfloor (1+\epsilon )2y_n \right\rfloor \right] \\&\qquad \qquad = C n \exp \left[ -(1+\epsilon )4\eta y_n(\ln y_n)(1+o(1))\right] \\&\qquad \qquad = C n \exp \left[ -(1+\epsilon )4\eta y_n(\ln \ln n)(1+o(1))\right] \\&\qquad \qquad =C n^{-\epsilon (1+o(1))}, \end{aligned}$$

which is summable over the exponential subsequence \(n=\left\lfloor e^m \right\rfloor \), \(m\in {\mathbb {N}}\). \(\square \)